Access throws error 3625, saying the text file [Import] specification doesn't exist - but it does.

Maybe to help debug, add some printout:
Code:
 On Error GoTo 0
    Exit Sub

Test_Error:
    Dim strMsg As String
    strMsg = "Error " & Err.Number & " (" & Err.Description & ") in procedure Test, line " & Erl & "."
    strMsg = strMsg & vbCrLf & " Topo Path File: " & strTopoPathFile
    strMsg = strMsg & vbCrLf & " Topo Path: " & strTopoPath
    strMsg = strMsg & vbCrLf & " Topo File: " & strTopoFile
    strMsg = strMsg & vbCrLf & " Topo XYZ File: " & strTopoXYZFile
    Debug.Print strMsg
    MsgBox strMsg
 
The reason I asked is because if this fails in a loop, there is always the question of "What happened at the end of the previous loop (that appeared to have worked)?" Followed by "What happened in the beginning of the current loop (that then failed)?" The presence of two apparently completely normal iterations of your loop suggests that something totally anomalous occurred in one of the two places I just mentioned.

Since you are looking at a loop involving a TransferText (which is a "closed" action that you can't single-step into), it SHOULD be possible to run this with breakpoints set so you can see what happens. You can perhaps breakpoint your way around your "stuff it into an array" segment but you need to see what happens in some detail. For instance, you have an "if table exists, delete it first" sequence in your example. If there are other IF-THEN blocks and some of them don't execute in the first two loops, they become candidates for the locus of the error. And the only way to know what actually happens is via debugging with single-step.
Good ideas...
I did initially put a breakpoint on the TransferText command, as you might expect, but I did not yet try omitting some of the other steps - (like reading in the data etc.), which I'll try next. The only variation in what happens, though, is whether I skip doing anything with the input data or not. Apart from that, the same thing happens every time. I loop through the linked table, and pull the terrain elevation data. And I always do the same math with each data piece to keep track of where I am in the file etc. (It turns out they put redundant data on the edges, and I'm ignoring the last column and row for each file for that reason.)
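
For what it's worth, a minimal sketch of what such a loop might look like (not the actual posted code - the table name, field name and tile dimensions are assumptions for illustration):

Code:
' Sketch: read a linked XYZ table into a pre-sized array while skipping the
' redundant last row/column of each tile. Table "XYZ_Imported", field "Elevation"
' and the tile size are assumed names/values.
Public Sub ReadTileIntoArray_Sketch()
    Const lngColsPerTile As Long = 1200   ' assumed points per row in one tile
    Const lngRowsPerTile As Long = 1200   ' assumed rows in one tile
    Dim dblElev() As Double
    Dim rstTopoData As DAO.Recordset
    Dim lngRow As Long, lngCol As Long

    ReDim dblElev(0 To lngRowsPerTile - 2, 0 To lngColsPerTile - 2)  ' last row/col dropped

    Set rstTopoData = CurrentDb.OpenRecordset("XYZ_Imported", dbOpenSnapshot)
    Do Until rstTopoData.EOF
        ' keep the value only if we are not in the duplicated edge row/column
        If lngRow < lngRowsPerTile - 1 And lngCol < lngColsPerTile - 1 Then
            dblElev(lngRow, lngCol) = rstTopoData!Elevation
        End If
        ' positional bookkeeping: walk the grid column by column, row by row
        lngCol = lngCol + 1
        If lngCol = lngColsPerTile Then
            lngCol = 0
            lngRow = lngRow + 1
        End If
        rstTopoData.MoveNext
    Loop
    rstTopoData.Close
    Set rstTopoData = Nothing
End Sub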
 
Does it make a difference if you change the order of the files in the processing queue?
That's a great troubleshooting idea. In my current context, the order is rather controlled (walking an array of files that is organized positionally), so I'd have to throw a monkey wrench into that to test this idea. If/when I can do that, I'll follow up.
 
Maybe to help debug, add some printout:
Code:
 On Error GoTo 0
    Exit Sub

Test_Error:
    Dim strMsg As String
    strMsg = "Error " & Err.Number & " (" & Err.Description & ") in procedure Test, line " & Erl & "."
    strMsg = strMsg & vbCrLf & " Topo Path File: " & strTopoPathFile
    strMsg = strMsg & vbCrLf & " Topo Path: " & strTopoPath
    strMsg = strMsg & vbCrLf & " Topo File: " & strTopoFile
    strMsg = strMsg & vbCrLf & " Topo XYZ File: " & strTopoXYZFile
    Debug.Print strMsg
    MsgBox strMsg
Good idea.
I do think I have a handle on what is happening where and when - so I've not tried to include any feedback like this just yet. (I've found this kind of feedback to be absolutely VITAL when doing recursion - including how many 'layers' deep I am, etc. - that's for sure!!)
 
what happens to the array - is it cleared down after each import, or does it continue to grow? Arrays take up memory...

Why not use a temp table instead of an array?
 
Are you sure it is talking about the spec and not the actual file being transferred? Does that exist as well?
Other Access messages can be a little misleading sometimes.
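
One way to sanity-check both at once (a sketch - MSysIMEXSpecs is the system table that holds import/export specs; the spec and variable names are the ones used elsewhere in this thread):

Code:
' Returns True only if the named import spec exists in MSysIMEXSpecs
' AND the target file exists on disk.
Public Function SpecAndFileExist(strSpecName As String, strFullPath As String) As Boolean
    SpecAndFileExist = (DCount("*", "MSysIMEXSpecs", "SpecName = '" & strSpecName & "'") > 0) _
                       And (Len(Dir(strFullPath)) > 0)
End Function

' e.g.  Debug.Print SpecAndFileExist("MyXYZTextImportSpec", strTopoPath & strTopoXYZFile)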
 
That's a great troubleshooting idea. In my current context, the order is rather controlled (walking an array of files that is organized positionally), so I'd have to throw a monkey wrench into that to test this idea. If/when I can do that, I'll follow up.
Just skip the first two files?
 
The reason I asked is because if this fails in a loop, there is always the question of "What happened at the end of the previous loop (that appeared to have worked)?" Followed by "What happened in the beginning of the current loop (that then failed)?" The presence of two apparently completely normal iterations of your loop suggests that something totally anomalous occurred in one of the two places I just mentioned.

Since you are looking at a loop involving a TransferText (which is a "closed" action that you can't single-step into), it SHOULD be possible to run this with breakpoints set so you can see what happens. You can perhaps breakpoint your way around your "stuff it into an array" segment but you need to see what happens in some detail. For instance, you have an "if table exists, delete it first" sequence in your example. If there are other IF-THEN blocks and some of them don't execute in the first two loops, they become candidates for the locus of the error. And the only way to know what actually happens is via debugging with single-step.
OK - so @The_Doc_Man is spot on... again...

While experimenting with the file order (per @DHookom), I found that if I simply repeated ONLY the TransferText command for different files, it was fine. It progressed beyond the 3rd file etc. But when I included defining my recordset after each TransferText, in order to read the data from the table, that is where things went awry. Apparently, repeatedly using a recordset to connect to a linked table, never closing it but only redefining it over and over, and then trying to link to yet another table is too much for it. (Doesn't that feel like a memory leak of some kind in Access? Agreed, it is bad practice for me not to close my recordset, but I thought the Set command would clear it the next time anyway??) So, it turns out that if I include a Set rstTopoData = Nothing when I'm done with it (or at least before the next TransferText command), THEN the next TransferText will work. Yay!!!
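
Condensed to a sketch (assumed names, not the full routine), the pattern that ended up working looks roughly like this:

Code:
Public Sub ImportTiles_Sketch(strTopoPath As String, astrTopoFiles() As String)
    Dim rstTopoData As DAO.Recordset
    Dim i As Long
    For i = LBound(astrTopoFiles) To UBound(astrTopoFiles)
        DoCmd.TransferText acLinkDelim, "MyXYZTextImportSpec", "XYZ_Imported", _
                           strTopoPath & astrTopoFiles(i), False
        Set rstTopoData = CurrentDb.OpenRecordset("XYZ_Imported")
        ' ... read the elevation data into the big array here ...
        rstTopoData.Close            ' close the underlying object
        Set rstTopoData = Nothing    ' and release it BEFORE the next TransferText
        DoCmd.DeleteObject acTable, "XYZ_Imported"   ' drop the link before re-linking, as in the posted code
    Next i
End Sub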

Perhaps ironically - this method appears to NOT be the fastest. Using FileSystemObject with Do Until strmXYZFile.AtEndOfStream seems to be the fastest method I've tried thus far. (Trying to use the regular Open ... For Input As #intPointer approach, it couldn't even read the 1st line of one of the files - it just ran home to MaMa and never came back...) Not really sure what I did wrong there - or if that method just can't handle 60MB+ file sizes.
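
For reference, a minimal sketch of that FileSystemObject approach (late binding, assumed names; the parsing is elided):

Code:
Public Sub ReadXYZWithFSO_Sketch(strFullPath As String)
    Dim fso As Object, strmXYZFile As Object
    Dim strLine As String
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set strmXYZFile = fso.OpenTextFile(strFullPath, 1)   ' 1 = ForReading
    Do Until strmXYZFile.AtEndOfStream
        strLine = strmXYZFile.ReadLine
        ' ... parse out the third (elevation) column here ...
    Loop
    strmXYZFile.Close
    Set strmXYZFile = Nothing
    Set fso = Nothing
End Sub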

Thanks to everyone for the ideas/insight!!
I love this forum!
 
what happens to the array - is it cleared down after each import, or does it continue to grow? Arrays take up memory...

Why not use a temp table instead of an array?
Well - in this particular instance - I'm having to process a very large amount of information, and every time I try to use tables - it either gets too big for Access to handle, or gets too slow. I generally err on the side of storing data in tables (probably to a fault actually...), but in this case, I'm trying to use a big array to hold all the data. (I pre-determine how big it needs to be before going through this data ingest exercise, so the array is already ReDimmed by the time we get to this code. So, there's no growth of the array during import(s).) I'm not sure how big of an array it can handle before that method starts to break down as well. At some point, MS Access isn't the answer. But I'm trying to find out where that edge is apparently - ha...
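
For context, a rough sketch of pre-sizing that array once, before the ingest loop (tile counts and sizes are illustrative, not the real figures) - memory use is roughly rows x cols x 8 bytes for an array of Doubles, so it is worth estimating up front:

Code:
Public Sub SizeElevationArray_Sketch()
    Dim dblElev() As Double
    Dim lngTilesDown As Long, lngTilesAcross As Long
    Dim lngPointsPerRow As Long, lngPointsPerCol As Long
    lngTilesDown = 5: lngTilesAcross = 7            ' e.g. a 5 x 7 block of files
    lngPointsPerRow = 1200: lngPointsPerCol = 1200  ' assumed points per tile edge
    ' one ReDim up front; the array never grows during the imports
    ReDim dblElev(0 To lngTilesDown * lngPointsPerRow - 1, _
                  0 To lngTilesAcross * lngPointsPerCol - 1)
End Sub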
 
So the files do not hold the same type of data that goes into that table?
 
Apparently, repeatedly using a recordset to connect to a linked table, and never closing it, but only redefining it over-and-over,

In theory if you did this in a single-step-of-the-loop subroutine, this would not happen because of memory cleanup (a.k.a. garbage collection) that occurs at every END SUB. HOWEVER, it appears that you stay within the same routine for each iteration. That means that garbage cleanup does not occur between loop steps.

When you create a Recordset object, you create TWO things. The recordset variable is actually an object pointer to the recordset object that you never actually see. (You can get a hint of its structure using the Object Browser.) Failing to close the prior iteration of the recordset leaves behind a dangling invisible object. In fact, it may leave behind more than one object, since it technically counts as another file channel to be able to independently browse the recordset, which IS part of a file. The action of SET RS = NOTHING goes through the process of clearing out the stuff and closing connections - such as file handles.

In your example you had NESTED loops, one for rows and one for columns. How many rows and columns do you normally have? Because what MIGHT have happened is that you breached the limits of the program HEAP, which then caused some catastrophic failure. Another notorious limit is the file handle limit - which usually runs into the error "RESOURCES EXCEEDED." The lack of a different kind of error message initially confused me, but thinking about it, I believe I understand the mechanism.

You were doing a DoCmd.Transferxxx operation, which is a black box to us since Access is NOT Open Source. This means that you don't know what error ACTUALLY tripped the black box - only that it returned an error that you can't even verify - because as you noted in post #1, when you look in the appropriate system table, the import spec seems to still be where it should be. If you blow out a memory feature (HEAP) or a quota-based feature (recordset or file structure) while inside the black box, you will still only get ONE error message - that the box couldn't work with the named specification. Since you got a file-related message, I'm guessing you blew out the file handle quota.

Interestingly enough, if you had built those loops to set up the circumstances but then called a routine to do the work required for the individual row/column combination, the sub's cleanup implied by the END SUB statement would have correctly handled the unclosed recordset and open file handle. If so, you would not have seen the error at all. (Probably.)
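
To illustrate the structure being described (names are assumed, not the poster's actual code): the outer loop only walks the file list, and every per-file object lives inside the called sub, so it goes out of scope at each End Sub:

Code:
Public Sub ImportAllTiles_Sketch(strTopoPath As String, astrTopoFiles() As String)
    Dim i As Long
    For i = LBound(astrTopoFiles) To UBound(astrTopoFiles)
        ProcessOneTile strTopoPath & astrTopoFiles(i)
    Next i
End Sub

Private Sub ProcessOneTile(strFullPath As String)
    Dim rstTopoData As DAO.Recordset
    DoCmd.TransferText acLinkDelim, "MyXYZTextImportSpec", "XYZ_Imported", strFullPath, False
    Set rstTopoData = CurrentDb.OpenRecordset("XYZ_Imported")
    ' ... read the elevation values into the shared array, drop the link, etc. ...
    rstTopoData.Close
    Set rstTopoData = Nothing   ' explicit cleanup is still good practice, even here
End Sub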
 
In theory if you did this in a single-step-of-the-loop subroutine, this would not happen because of memory cleanup (a.k.a. garbage collection) that occurs at every END SUB. HOWEVER, it appears that you stay within the same routine for each iteration. That means that garbage cleanup does not occur between loop steps.

When you create a Recordset object, you create TWO things. The recordset variable is actually an object pointer to the recordset object that you never actually see. (You can get a hint of its structure using the Object Browser.) Failing to close the prior iteration of the recordset leaves behind a dangling invisible object. In fact, it may leave behind more than one object, since it technically counts as another file channel to be able to independently browse the recordset, which IS part of a file. The action of SET RS = NOTHING goes through the process of clearing out the stuff and closing connections - such as file handles.

In your example you had NESTED loops, one for rows and one for columns. How many rows and columns do you normally have? Because what MIGHT have happened is that you breached the limits of the program HEAP, which then caused some catastrophic failure. Another notorious limit is the file handle limit - which usually runs into the error "RESOURCES EXCEEDED." The lack of a different kind of error message initially confused me, but thinking about it, I believe I understand the mechanism.

You were doing a DoCmd.Transferxxx operation, which is a black box to us since Access is NOT Open Source. This means that you don't know what error ACTUALLY tripped the black box - only that it returned an error that you can't even verify - because as you noted in post #1, when you look in the appropriate system table, the import spec seems to still be where it should be. If you blow out a memory feature (HEAP) or a quota-based feature (recordset or file structure) while inside the black box, you will still only get ONE error message - that the box couldn't work with the named specification. Since you got a file-related message, I'm guessing you blew out the file handle quota.

Interestingly enough, if you had built those loops to set up the circumstances but then called a routine to do the work required for the individual row/column combination, the sub's cleanup implied by the END SUB statement would have correctly handled the unclosed recordset and open file handle. If so, you would not have seen the error at all. (Probably.)
I see - I wasn't really aware of when/where the cleanup occurred for the recordset, and certainly was not aware of the multiple pieces to it internally. There is always more going on under the hood than I'm imagining (especially in THIS world... :) )

As far as the rows/columns thing - in this case, the values are very small - at least now, and I would actually never expect them to get big enough to be an issue (in and of themselves). I expect arrays of 5x7 files or 15x20 files - that kind of thing. So, that array should stay very much under control in that sense. The scaling issue comes in with the fact that each file is over 60MB and has almost 1.5 million lines of data that has to be retrieved/parsed.

I'll probably have to read your 4th paragraph a couple times to fully understand what you have in there.

Interesting point on externalizing the retrieval part. I guess I have so many routines already in this solution, my inclination was to shy away from doing that. Oh well.

Thanks again!!!
 
So the files do not hold the same type of data that goes into that table?
The files contain X-Y-Z data from a GeoTIFF file - space-delimited Longitude, Latitude, Elevation - and I'm only retrieving the elevation data; by placing it into the array in the proper position, I need not store the Lat/Lon.

Parsing that 3rd column out was another little performance test as well. My initial approach was my usual approach - just using the typical textual parsing stuff (Len(), Right(), InStr(), that kind of thing), and it has turned out to be the fastest that I've tried yet. When I tried to use a Split() function, and then grab the value from the resulting array - that was a bit slower.
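
For comparison, sketches of the two parsing approaches (assumed names, and assuming single spaces between the fields):

Code:
' classic InStr/Mid parsing: find the two spaces, take everything after the second
Public Function ElevFromLine_InStr(strLine As String) As Double
    Dim p1 As Long, p2 As Long
    p1 = InStr(1, strLine, " ")           ' end of the longitude field
    p2 = InStr(p1 + 1, strLine, " ")      ' end of the latitude field
    ElevFromLine_InStr = Val(Mid$(strLine, p2 + 1))
End Function

' Split-based alternative - slower in the tests described above
Public Function ElevFromLine_Split(strLine As String) As Double
    Dim varParts As Variant
    varParts = Split(strLine, " ")
    ElevFromLine_Split = Val(varParts(UBound(varParts)))
End Function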

You know - this has me wondering - Is there some sort of OS-level tool/capability to split the entire XYZ file data out? This could leave me with a file that I'd only need to simply read vertically with no parsing. Hmmmmm...
 
You can use regex to split out the data but that requires using a UDF.
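
For example, a sketch of such a UDF using late-bound VBScript.RegExp to return the third whitespace-separated token of a line (the pattern and names are illustrative):

Code:
Public Function ThirdToken(strLine As String) As String
    Static re As Object
    If re Is Nothing Then
        Set re = CreateObject("VBScript.RegExp")
        re.Pattern = "\S+"     ' one run of non-whitespace characters
        re.Global = True
    End If
    Dim matches As Object
    Set matches = re.Execute(strLine)
    If matches.Count >= 3 Then ThirdToken = matches(2).Value   ' matches are 0-based
End Function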
 
You can use regex to split out the data but that requires using a UDF.
Interesting...

I'm experimenting with PowerShell at the moment.
I've barely done anything with it, so my first attempts, although working with small files, are abysmal with the big files - just too slow.
It seems to be very difficult to just pull data into anything other than a single, one-dimensional array. And then I have to parse it - and... down the same slow rabbit hole... Oh well...
 
Provide a realistic example of the file to import - variations in ‘field’ sizes, mix of letters/numbers etc.

Only needs to be 20-30 rows (including the top 2 rows)
 
I created this text file which I'm guessing is something like your xyz file

Code:
11                                                 
                                                    
C       -0.180226841      0.360945118     -1.120304970
C       -0.180226841      1.559292118     -0.407860970
C       -0.180226841      1.503191118      0.986935030
N       -0.180226841      0.360945118      1.29018350
C       -0.180226841     -0.781300882      0.986935030
C       -0.180226841     -0.837401882     -0.407860970
H       -0.180226841      0.360945118     -2.206546970
H       -0.180226841      2.517950118     -0.917077970
H       -0.180226841      2.421289118      1.572099030
H       -0.180226841     -1.699398882      1.572099030
H       -0.180226841     -1.796059882     -0.917077970
I saved this in my downloads folder as a file called data.txt

I then created a schema.ini file, also saved to the downloads folder - it specifies the file name, the format, and the name, datatype and width of each column. The schema.ini file can contain schemas for more files; just follow on with the next one. The only requirement is that it needs to be in the same folder as the target txt file.

[data.txt]
Format=fixedlength
Col1=id text width 8
Col2=C1 text width 18
Col3=C2 text width 18
Col4=C3 text width 18

and then I ran this query
Code:
SELECT
   *
FROM
    [TEXT;DATABASE=c:\users\chris\downloads;HDR=no].data.txt AS txt

which produced this result
(screenshot of the query result showing columns id, C1, C2, C3)

Columns identified (as text in this example, but read up on schema.ini for other data types). Admittedly the first row is missing, but you said you didn't want the first two rows anyway, and the second row is easily excluded with criteria:

Code:
SELECT
    *
FROM
    [TEXT;DATABASE=c:\users\chris\downloads;HDR=yes].data.txt AS txt
WHERE
    txt.id <> ""

So now you have a query - you can select which columns you require, join it to other tables or queries, or turn it into an update, append or upsert query, all in one step. This principle can also be applied using an import spec - select the fixed-width option. So this is really just an alternative that avoids using TransferText.
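
As an illustration of the one-step append idea (untested against the real files; the target table tblXYZ is hypothetical), the text-source query can be run as an INSERT ... SELECT from VBA:

Code:
Public Sub AppendFromTextFile_Sketch()
    Dim strSQL As String
    strSQL = "INSERT INTO tblXYZ (id, C1, C2, C3) " & _
             "SELECT txt.id, txt.C1, txt.C2, txt.C3 " & _
             "FROM [TEXT;DATABASE=c:\users\chris\downloads;HDR=yes].data.txt AS txt " & _
             "WHERE txt.id <> ''"
    CurrentDb.Execute strSQL, dbFailOnError
End Sub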

One interesting thing I've not noticed before - the query is editable - you can add additional rows and change values in existing rows. Something you can't do with a linked file. And not relevant to this thread.
 
@Papa_Bear1

I was going to post this. I had reached the end of Page 1, and didn't realise there was a P2.
Anyway parts of this may still be helpful.

I doubt very much the problem is access.
Secondly, the code you have posted is not complete.

Importantly, where is your error handling for the code block you posted? Where is it set? What does the error handler actually display? How are you handling the error? Maybe there's an unhandled/mishandled error in the previous loop causing the issue with this loop?

Immediately before this line, I would also put a break point, or a msgbox so you can review the contents

DoCmd.TransferText acLinkDelim, "MyXYZTextImportSpec", "XYZ_Imported", strTopoPath & strTopoXYZFile, False

Might the file you are specifying be incorrect? That might throw an error, but maybe not the error you are seeing/displaying.
Maybe the path is wrongly specified for this file.

The block after that line

Set rstTopoData = CurrentDb.OpenRecordset("XYZ_Imported")
rstTopoData.MoveLast
rstTopoData.MoveFirst
lngNbrRecs = rstTopoData.RecordCount


I would step through these as well, and display/inspect the value for lngNbrRecs in case this is the problem.
What data type is lngNbrRecs? It is a Long, isn't it?

If this is temperamental, I would certainly write a log file of each file processed to record the file names and record count.
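
A minimal sketch of such a log (path and names assumed):

Code:
Public Sub LogImport(strFileName As String, lngRecs As Long)
    Dim intFile As Integer
    intFile = FreeFile
    Open CurrentProject.Path & "\import_log.txt" For Append As #intFile
    Print #intFile, Format(Now, "yyyy-mm-dd hh:nn:ss") & vbTab & strFileName & vbTab & lngRecs
    Close #intFile
End Sub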
 
