Adding an AutoNumber field to a linked table

When linking a table to a csv file, is it possible to add an AutoNumber field to be used as a PK while working with the records?
I can't see any option for adding a PK to the linked table in the wizard.

Thanks.
 
A primary key, including its unique index, is a property of a real database table. You cannot create one on a linked table, and certainly not on a text file.

There are two ways:
- A sequential number is generated in a query on the linked table. This gives you an identifying feature.
- You import into a temporary table. In this table you can create additional fields and set whatever indexes you plan to use.
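For the second route, a minimal VBA sketch, assuming hypothetical names (tblTempOrders, C:\Data\orders.csv) and that the csv has a header row:

Code:
' Import the csv into a local temp table, then give it an AutoNumber PK.
Public Sub ImportCsvToTemp()
    ' Drop a previous copy of the temp table, ignoring the error if absent
    On Error Resume Next
    DoCmd.DeleteObject acTable, "tblTempOrders"
    On Error GoTo 0
    ' Import the delimited file; True = first row contains field names
    DoCmd.TransferText acImportDelim, , "tblTempOrders", "C:\Data\orders.csv", True
    ' Add an AutoNumber primary key to the freshly imported table
    CurrentDb.Execute "ALTER TABLE tblTempOrders " & _
        "ADD COLUMN ID COUNTER CONSTRAINT PK_Temp PRIMARY KEY", dbFailOnError
End Sub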
 
A sequential number is generated in a query on the linked table. This gives you an identifying feature.
Brilliant. Never thought of this.
 
I wasn't able to add the sequential number on my own. The csv file doesn't have any column I can use for DCount in the query.
The following is a mockup of the table.

Any further assist is much appreciated.



ParentNo       LineNo  DrawingName  DrawingNo     QuantityPerAssembly
11287BD701A01  10      test         11287BD007    1
11287BD701A01  110     test         11012BD101    1
11287BD701A01  130     test         11092BC104    1
11287BD701A01  150     test         11287BD101    3
11287BD701A01  160     test         AJ3321BC112   2
11287BD701A01  180     test         3412BD211     1
11287BD701A01  210     test         1287BD102     1
11287BD701A01  220     test         AJ226SR105    1
11287BD101     110     test         3266BF2       1
11287BD101     120     test         3266BF202     1
11287BD101     140     test         3266BF203     1
11287BD101     160     test         SF1037TR108   5
11287BD101     170     test         1037CK243     15
11287BD101     210     Jtest        1037DP297     0
11287BD701     10      test         11287BD008    1
11287BD701     110     test         AJ11287BD801  1
11287BD701     120     test         11287BD802    1
11287BD701     150     test         AJ11287BD203  1
 
You would have to add a sequential number column to the CSV file, but then you need a way to retain the sequence number between iterations of the CSV file (I presume), so why is it a CSV file rather than a relational table in the first place?

If not a sequential number, then you need a set of columns that provides a unique key.

E.g. [parent no] + [line no]
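If you want to check whether [parent no] + [line no] actually is unique in the linked file before relying on it, a quick duplicate check (the table name tblCsvLinked is a placeholder for your linked table):

SQL:
SELECT ParentNo, LineNo, Count(*) AS Dupes
FROM tblCsvLinked
GROUP BY ParentNo, LineNo
HAVING Count(*) > 1;

Any rows returned are combinations that occur more than once.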
 
so why is it a CSV file rather than a relational table in the first place?
The file is sent via a web-based order system by some of our customers. We log in to our account, and the only format available for downloading new orders is a csv file.
 
The file is sent via a web-based order system by some of our customers. We log in to our account, and the only format available for downloading new orders is a csv file.
So simply [parentno] + [lineno] seems to give the same, or in fact better, functionality than a mere numeric identifier. (As I edited my post to note.)

You could preprocess any file to append a sequential number to each row. You could start at 1 each time, or continue with an existing sequence. That depends on what exactly you need to do with your data.
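One way to preprocess, sketched in plain VBA file I/O (paths are placeholders): copy the csv line by line, prefixing each data row with a running number and the header row with a new column name.

Code:
Public Sub NumberCsv(srcPath As String, dstPath As String)
    Dim inFile As Integer, outFile As Integer
    Dim lineText As String, n As Long
    inFile = FreeFile: Open srcPath For Input As #inFile
    outFile = FreeFile: Open dstPath For Output As #outFile
    Line Input #inFile, lineText            ' header row
    Print #outFile, "SeqNo," & lineText
    Do While Not EOF(inFile)
        Line Input #inFile, lineText
        n = n + 1
        Print #outFile, n & "," & lineText  ' prefix the running number
    Loop
    Close #inFile: Close #outFile
End Sub

To continue an existing sequence rather than starting at 1, initialise n from wherever the last run's counter was stored.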
 
So simply [parentno] + [lineno] seems to give the same,
ParentNo + LineNo is not unique. There's a good chance of having the same combination in the same csv.

You could preprocess any file to append a sequential number to each row.
Sorry, I can't understand what you mean by this.
Do you mean opening the file in Excel, adding a column, and filling it in manually?


Maybe the shortest way is to use a subreport instead of a subform (the linked table is currently used as a subform).
I hope that's not the only possible solution.
 
SQL:
SELECT
   DCount("*", "TableX", "ParentNo & '|' & LineNo <= '" & T.ParentNo & "|" & T.LineNo & "'") + 1 AS XNo
   [, FieldList]

FROM
   TableX AS T
Concatenate multiple fields to get an expression that is unique.
However, this field combination could also serve as a multi-field key on its own, as individual fields without a delimiter.

One thing has to be said, though: generating the sequential number shown here is very resource-intensive, as correlated subqueries inherently are. Using this number as a key afterwards very quickly becomes performance-critical.

Therefore, for more complex tasks and larger amounts of data, I quickly turn to temporary tables, where you can fix such intermediate states and have complete freedom of design. Such temporary tables belong neither in the frontend nor in the backend, but in an additional temporary local backend file. This way you avoid bloat problems in your important DB files.
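A minimal sketch of creating such a throwaway local backend, e.g. by copying a blank template database shipped alongside the frontend (all file names here are hypothetical):

Code:
Public Function CreateTempBackend() As String
    Dim tmpPath As String
    tmpPath = Environ("TEMP") & "\TempBE.accdb"
    If Dir(tmpPath) <> "" Then Kill tmpPath     ' start fresh each session
    ' Copy a blank template database; it serves purely as scratch space
    FileCopy CurrentProject.Path & "\TEMPLATE_TempBE.accdb", tmpPath
    CreateTempBackend = tmpPath
End Function

Temp tables are then created in (or linked from) this file, so any bloat lands there instead of in the frontend or backend.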
 
Therefore, for more complex tasks and larger amounts of data, I quickly tend to use temporary tables where you can fix such intermediate states and have complete freedom in terms of design.

The main reason I try to stay away from temp tables is that frequent delete-and-append cycles end up causing bloat.
I'll try the given solution tomorrow as soon as I'm back at my desk. If performance turns out to be critical, I'll use a temp table in another database.

Million thanks for the advice and solution.
 
Fear of temporary tables: they also exist in SQL Server, in an additional file, where they are automatically managed and removed again. My little solution is just an attempt to imitate that.
Attached is my standard solution for dealing with temporary tables. The interface is in German, but you can understand the process from the code alone.

Principle:
When you open the frontend, a frmHidden is opened. This form creates the additional backend; it also exists to handle closing and disposing of the backend, because there is no file-close event in Access.
The additional temporary backend has its own 2 GB of space for temporary garbage.
Closing the application will delete the entire file.
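The cleanup step might look like this in the hidden form's Close event (the backend file name and the use of the %TEMP% folder are assumptions; match them to wherever the backend was created):

Code:
Private Sub Form_Close()
    Dim db As DAO.Database, i As Integer
    Set db = CurrentDb
    ' Remove all table links that point at the temporary backend,
    ' iterating backwards because we delete from the collection
    For i = db.TableDefs.Count - 1 To 0 Step -1
        If InStr(db.TableDefs(i).Connect, "TempBE.accdb") > 0 Then
            db.TableDefs.Delete db.TableDefs(i).Name
        End If
    Next i
    On Error Resume Next          ' the file may already be gone
    Kill Environ("TEMP") & "\TempBE.accdb"
End Sub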
 

Rather than using a temp table (presumably populated using TransferText), use SQL to create a query.

Code:
SELECT *
FROM
     (SELECT  ParentNo & '|' & LineNo as XNo, * FROM [TEXT;DATABASE=X:\pathtofile;HDR=Yes].filename.csv)  T

Edit: clearly there is no indexing, so this might be slow for very large datasets, but the above can easily be modified to compare with existing tables, be filtered at the outset, or be converted to append or update (or update/append) queries.
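For instance, the same text source can feed an append query directly (the target table tblOrders and the column names are taken from the mockup earlier in the thread; the path is the same placeholder as above):

SQL:
INSERT INTO tblOrders (ParentNo, LineNo, DrawingName, DrawingNo, QuantityPerAssembly)
SELECT T.ParentNo, T.LineNo, T.DrawingName, T.DrawingNo, T.QuantityPerAssembly
FROM [TEXT;DATABASE=X:\pathtofile;HDR=Yes].filename.csv AS T;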
 
An additional problem exists with linked text files, regardless of the form in which the link is made:
If you only use append queries based on this table, you won't have any problems. However, if you also use update queries, a bug related to an ISAM error often occurs; at least that was the case in the past. Then, from a purely technical perspective, you have to switch from linking to importing, and thus automatically have a temporary table.
 
If you only use append queries based on this table, you won't have any problems. However, if you also use update queries, a bug related to an ISAM error often occurs.
No update is necessary. Only appending.
Thanks for the additional info and your sample file in #11.
 
What is the purpose of adding a unique ID to this table? Wouldn't one be generated when you import the data into your permanent table?
 
Why do you need an autonumber field for a simple append? Such a field is automatically incremented in the target table.
 
Closing the application will delete the entire file.
I use a similar concept except that I don't delete the temp db when the app closes. I overlay the old one at the start of the process. Makes testing easier.
 
What is the purpose of adding a uniqueID to this table?

Why do you need an autonumber field for a simple append?
It's not a simple append. Before appending each record of the csv to tblOrders, there are a lot of calculations that have to be done. The simplest is the amount of necessary materials; then manufacturing, inspection, and shipping time studies; and above all changing the current production-line schedule to include the new orders.
The process that has to run on this linked table may take a long time, in some cases maybe more than a day or two.
So if for some reason the process is interrupted, or the user has to stop in the middle and attend to other jobs, then without a unique ID it's almost impossible to tell how many of the records have been processed and where to continue from.
I also need this unique ID for some specific filtering, for example showing only every 5th record: WHERE UniqueID MOD 5 = 0

There are some other usages that I think are out of the scope of this discussion.

Thanks for your concern.
 
The process that should be done on this linked table, may take a long time
If this process is, or can be represented as, a SQL query process, that would be an argument for importing into a temporary table. A big advantage is that you can specifically index fields or field combinations, which can give performance a significant boost. Secondly: If you go into the temporary tables area, you also have the option of fixing calculation results as a table and indexing their fields in turn. Anything to reduce overall processing time.

may take a long time, in some cases maybe more than a day or two
Such times for pure processing would be out of the question for me and would trigger a need for action.
When I think of processing times, I think of minutes and fractions of them, but never hours or days.
The risk of a break in the middle of processing is also a huge problem in itself. More complex things in particular should be carried out as part of a transaction.
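A DAO transaction wrapper for such a run might be sketched like this, so the whole append either completes or rolls back cleanly (the query inside is only a placeholder):

Code:
Public Sub RunAppendInTransaction()
    Dim ws As DAO.Workspace
    Set ws = DBEngine.Workspaces(0)   ' CurrentDb lives in this workspace
    ws.BeginTrans
    On Error GoTo RollbackOut
    ' ... run the processing / append queries here, e.g.:
    ' CurrentDb.Execute "INSERT INTO tblOrders ( ... ) SELECT ...", dbFailOnError
    ws.CommitTrans
    Exit Sub
RollbackOut:
    ws.Rollback
    MsgBox "Processing failed and was rolled back: " & Err.Description
End Sub

Note that a transaction spanning a day or two of processing would hold locks far too long; in practice you would commit in batches, which is another argument for a resumable unique ID or a temp table.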
 
@KitaYama See if this helps. The sample assumes the "template" database exists, and all it does is copy it from the template folder to the local folder and remove the "TEMPLATE_" prefix from the file name. The FSO copy will overwrite any existing database. This allows the temp database to persist if you need it to, because it never gets explicitly deleted; it gets overwritten the next time you start the process.

I didn't include any of the import process that appends data to the template's tables because that is specific to every use of this concept. I just included the common aspects of the process. There is a table to hold the name of the template db as well as the folder to copy it to. The form that runs the process allows you to override either or both of these values or just use the saved values.
 

