Importing Wide TXT file into Access Table

GregoryWest

I have a wide, variable-length text file. The record width is between 150 and 450 characters. To add to the complication, the file has multiple record types in each file, in no particular order.


What I would like to do is write a piece of VBA code that will suck all the data into an existing table, so the data can then be massaged into its component parts.


The table the data is going into is extremely simple:
Field1 - Text - 255 char
Field2 - Text - 255 char


There has to be a simple way to do something like this. I have been playing around with macros and the import function; my only problem is that the input file name changes often. Everything else stays the same.
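
Something along these lines is roughly what I'm picturing (a minimal, untested sketch; "tblImport" stands in for the real table name, and the changing file name just comes in as a parameter):

Code:
' Minimal sketch: read the file line by line and append each line
' to the existing two-field table. "tblImport" is a placeholder name.
Public Sub ImportWideFile(ByVal strPath As String)
    Dim rs As DAO.Recordset
    Dim intFile As Integer
    Dim strLine As String

    Set rs = CurrentDb.OpenRecordset("tblImport", dbOpenDynaset)
    intFile = FreeFile
    Open strPath For Input As #intFile

    Do While Not EOF(intFile)
        Line Input #intFile, strLine
        rs.AddNew
        ' Records run up to 450 characters, so split each line
        ' across the two 255-character text fields.
        rs!Field1 = Left$(strLine, 255)
        If Len(strLine) > 255 Then rs!Field2 = Mid$(strLine, 256)
        rs.Update
    Loop

    Close #intFile
    rs.Close
End Sub

Then whatever kicks it off just passes the current file name, e.g. ImportWideFile "C:\data\latest.txt".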
 
I don't really understand your source file. How many fields? Are they delimited?

Multiple record types? Huh? A record is a record.

Perhaps you can post a sample of your source file and then how it ultimately should fit into whatever table you are putting it into.
 
Each record is one line of non-delimited text. There is no way to set up fields to be imported, since the structure of any given line could be one of six different things.


I am not the one who created the export/import text file. It is coming from the government and this is what I am given.
 
So how do you propose splitting the long (or wide) text into 2 separate fields?
Why not just have one memo (long text) field instead and dump the whole lot in that?
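
If you go that route, the dump itself only takes a few lines. A rough, untested sketch, where "tblRaw" and its Long Text field "RawLine" are made-up names:

Code:
' Rough sketch: dump every raw line into a single Long Text field.
' "tblRaw" and "RawLine" are made-up names for illustration.
Public Sub DumpToLongText(ByVal strPath As String)
    Dim rs As DAO.Recordset
    Dim intFile As Integer
    Dim strLine As String

    Set rs = CurrentDb.OpenRecordset("tblRaw", dbOpenDynaset)
    intFile = FreeFile
    Open strPath For Input As #intFile

    Do While Not EOF(intFile)
        Line Input #intFile, strLine
        rs.AddNew
        rs!RawLine = strLine    ' Long Text easily holds the full 450 characters
        rs.Update
    Loop

    Close #intFile
    rs.Close
End Sub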
 
I ran into something like this once, where the text file could have each line be a different record type, but they were identifiable via the first three letters in the line, and each record type had a specific fixed-width format (up to 1600 characters in length).

I set up a definitions table with a field for the identifier, an ID field, and a field indicating which table the record gets placed in. I linked it to a formatting table that was little more than that ID as a foreign key plus fields for field name, field starting location, and field length.

The app would read the file line-by-line, identify the format for each line, parse it out based on the corresponding format, and drop the completed record into the table corresponding to the record type.

I also spent entirely too long trying to convince anyone who would listen that that file should be updated to a more modern format.

Anyway, assuming you have a way to identify a format for each line, you might be able to use the same approach.
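
Stripped way down, the read loop looked something like this. All the table and field names here are made up for illustration, and this leaves out the error handling you would want in practice:

Code:
' Simplified sketch of the definitions-table approach. Assumed layout:
'   tblRecordType: TypeID, Identifier (first 3 chars), TargetTable
'   tblFieldDef:   TypeID, FieldName, StartPos, FieldLen
Public Sub ParseDefinedFile(ByVal strPath As String)
    Dim db As DAO.Database
    Dim rsType As DAO.Recordset, rsDef As DAO.Recordset, rsOut As DAO.Recordset
    Dim intFile As Integer
    Dim strLine As String

    Set db = CurrentDb
    intFile = FreeFile
    Open strPath For Input As #intFile

    Do While Not EOF(intFile)
        Line Input #intFile, strLine

        ' Identify the record type from the first three characters.
        Set rsType = db.OpenRecordset( _
            "SELECT TypeID, TargetTable FROM tblRecordType " & _
            "WHERE Identifier = '" & Left$(strLine, 3) & "'")

        If Not rsType.EOF Then
            ' Pull the fixed-width layout for this record type,
            ' slice the line up with Mid$, and drop the result
            ' into the table the record type points at.
            Set rsDef = db.OpenRecordset( _
                "SELECT FieldName, StartPos, FieldLen FROM tblFieldDef " & _
                "WHERE TypeID = " & rsType!TypeID)
            Set rsOut = db.OpenRecordset(rsType!TargetTable, dbOpenDynaset)
            rsOut.AddNew
            Do While Not rsDef.EOF
                rsOut.Fields(rsDef!FieldName.Value) = _
                    Trim$(Mid$(strLine, rsDef!StartPos, rsDef!FieldLen))
                rsDef.MoveNext
            Loop
            rsOut.Update
            rsOut.Close
            rsDef.Close
        End If
        rsType.Close
    Loop

    Close #intFile
End Sub

The nice part is that a new record type just means new rows in the two definitions tables, not code changes.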

We won't even go into what I had to do to import a file that used multi-row records formatted for a green-bar printer. That one was a mess.
 
I have often used a line-by-line technique, similar to the one Frothingslosh suggests, for reading text reports with variably line-wrapped data interspersed with page headers and various lines of other unwanted junk.

There is virtually no limit to what can be done this way. You just have to define the logic.

Nothing really to be gained by importing into a table for processing.
 
Of course, the same idea applies to any text file, whether it be CSV, XML, JSON, etc. Any such file can be split into component fields provided the structure is known, so that rules can be applied to parse it.

However, in post 3 the OP suggested there was no clear structure.
That is the reason I asked how he proposed it should be split.
 
If there is truly no structure, then the data is incomprehensible. Otherwise the user must be applying some kind of logic to interpret it. That logic should be expressible as code, though it may be very complex.

I can't see how dropping it in a table is really going to help.
 
Might be able to open it as a text file for input, then do a LINE INPUT on it and stuff that into a LONG TEXT field. But if there is no structure in the record, I don't know what you would do with it afterwards.
 
Yeah, if there's truly no structure to the record, then at best it needs human intervention for each line, and at worst it's meaningless gibberish. I took his comments to mean simply that there's no one logic applicable to every line.
 
As plog noted, records do not have a type; data has a type. We need an example file.
 
REMINDER: In post 3, the OP wrote:
Each record is one line of non-delimited text. There is no way to set up fields to be imported, since the structure of any given line could be one of six different things.

On the basis of that comment, I wrote that he had suggested there was no CLEAR structure.

Somehow that has been changed in subsequent comments into there possibly being no structure at all ... which is a different matter entirely.
 
REMINDER: In post 3, the OP wrote:
Each record is one line of non-delimited text. There is no way to set up fields to be imported, since the structure of any given line could be one of six different things.

On the basis of that comment, I wrote that he had suggested there was no CLEAR structure.

"One of six different things" sounds pretty clear to me. Perfectly suited to line by line parsing which is where Frothy and I were taking the conversation until post 7 implied that we were on the wrong track with:

Any such file can be split into component fields providing the structure is known so rules can be applied to parse it.

However in post 3 the OP suggested there was no clear structure

A structure isn't limited to a single line.
 
There must be a structure of some sort. Even error dumps have one.
 
Without SOME kind of sample, we can bump around in the dark all night. You can read long text from a file by opening it in text mode. Without more data, however, such as info on these putative six formats, it will be hard to get more specific.
 
