Solved Key Violation Error Trapping During TransferText Operation

CranstonSnord

New member
Local time
Today, 01:27
Joined
Feb 29, 2024
Messages
8
I have reviewed many threads across this site and elsewhere, but didn't see an answer that fit my exact situation.

I have a form with a multi-select listbox that lets me easily import and export table content from my database using an import/export specification. Here is a small snippet of the code I use to perform the import:

Code:
ElseIf Me!ogChoice.Value = 1 Then    ' (Import)
'    If DCount("*", "[" & Me!lstMetadata.ItemData(varItem) & "]") > 0 Then
'        MsgBox "Stop! Make sure these rows" & vbCr & "don't already exist in this table. ", vbCritical, gstrAppTitle
'    Else
        DoCmd.TransferText acImportDelim, Me!lstMetadata.ItemData(varItem) & "_IMEX_Spec", _
            Me!lstMetadata.ItemData(varItem), _
            "C:\Temp\Initial_Dev\" & Me!lstMetadata.ItemData(varItem) & ".txt"
'    End If

If the table being imported to already has content and the file being imported contains a duplicate, Access raises the dreaded "key violations" error. I have tried to trap this error both in the executing sub and in the Form_Error sub in order to make it more user-friendly, but to no avail.

As can be seen above, for the moment (and perhaps going forward) I've added a simple DCount test that checks whether the destination table already has content and blocks the import if it does, but this prevents me from pulling in non-matching content. Can a key violation error generated during a TransferText operation be trapped?
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
Can a key violation error generated during a TransferText operation be trapped?
Very easy with a small variation: the text file is linked directly within a query. This allows you to create an append query that checks first and inserts only new records.
Sample query
Code:
Dim db As DAO.Database
Dim sSQL As String

Set db = CurrentDb
' The Text;DSN=... connect string links the text file for the duration of the query only
sSQL = "INSERT INTO DestinationTable ([FieldList])" & _
    " SELECT T.[FieldList]" & _
    " FROM [Text;DSN=NameSpec;FMT=Delimited;HDR=yes;IMEX=2;CharacterSet=850;DATABASE=C:\Temp\Initial_Dev\].[datei01.txt] AS T" & _
    " LEFT JOIN DestinationTable AS D ON T.Key = D.Key WHERE D.Key Is Null"
' Debug.Print sSQL
db.Execute sSQL, dbFailOnError
You can incorporate the variable parts into the SQL statement using VBA.
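For instance, a sketch that plugs in the variable parts from post #1 (the [FieldList] and the Key column are placeholders, and HDR must match whether your files carry a header row):

Code:
Dim db As DAO.Database
Dim sTable As String, sSpec As String, sSQL As String

Set db = CurrentDb
sTable = Me!lstMetadata.ItemData(varItem)     ' destination table
sSpec = sTable & "_IMEX_Spec"                 ' matching import specification

sSQL = "INSERT INTO [" & sTable & "] ([FieldList])" & _
    " SELECT T.[FieldList]" & _
    " FROM [Text;DSN=" & sSpec & ";FMT=Delimited;HDR=no;IMEX=2;" & _
    "CharacterSet=850;DATABASE=C:\Temp\Initial_Dev\].[" & sTable & ".txt] AS T" & _
    " LEFT JOIN [" & sTable & "] AS D ON T.Key = D.Key WHERE D.Key Is Null"
db.Execute sSQL, dbFailOnError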
 

CranstonSnord

New member
Local time
Today, 01:27
Joined
Feb 29, 2024
Messages
8
Wow, thanks for your quick response!

All the above looks really cool, but there are some moving parts that I don't know how to fit into the puzzle. When I generate the sSQL string, it looks like this (for my test sample table):
INSERT INTO AL_TR_AL_Link (ALBUM_ID, TRACK_ID, ARTIST_ID, LOCN_ID) SELECT T.ALBUM_ID, T.TRACK_ID, T.ARTIST_ID, T.LOCN_ID FROM [Text;DSN=AL_TR_AL_Link_IMEX_Spec;FMT=Delimited;HDR=no;IMEX=2;CharacterSet=850;DATABASE=C:\Temp\].[AL_TR_AR_Link.txt] AS T LEFT JOIN AL_TR_AL_Link AS D ON T.Key = D.Key WHERE D.Key Is Null

I'm familiar with an INSERT query and I love the idea, but I'm not sure how to create the Text;DSN= part; I thought it was just another way of referring to my IMEX specification, but apparently not. I think if I could figure out how that portion of the statement works, this approach would work like a champ.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 01:27
Joined
Feb 19, 2002
Messages
43,275
When you get the "key violations" error, if you just click OK, the non-error records still get added. But I don't like getting errors I can prevent, so I also use a query like the one @ebs17 posted.
 

CranstonSnord

New member
Local time
Today, 01:27
Joined
Feb 29, 2024
Messages
8
Thanks Pat. The only portion of the solution that @ebs17 posted that I'm still unsure of is how to create the DSN NameSpec. Would I have to create a different DSN for each and every different table that I'm importing?
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
...;DSN=NameSpec;...
This corresponds to your Me!lstMetadata.ItemData(varItem) & "_IMEX_Spec". You should already have these specifications; just insert the name at the marked place. The import specification defines how the text file is to be read as a table, so if you have text files with different structures, you also need different specifications.
 

CranstonSnord

New member
Local time
Today, 01:27
Joined
Feb 29, 2024
Messages
8
Got it. I have already created the IMEX spec, so that's not the issue. Here's what sSQL looks like using my data:

INSERT INTO AL_TR_AL_Link (ALBUM_ID, TRACK_ID, ARTIST_ID, LOCN_ID) SELECT T.ALBUM_ID, T.TRACK_ID, T.ARTIST_ID, T.LOCN_ID FROM [Text;DSN=AL_TR_AL_Link_IMEX_Spec;FMT=Delimited;HDR=no;IMEX=2;CharacterSet=850;DATABASE=C:\Temp\].[AL_TR_AR_Link.txt] AS T LEFT JOIN AL_TR_AL_Link AS D ON T.Key = D.Key WHERE D.Key Is Null

And this is the result when the code attempts to execute the query:

[Screenshot: Access error 3625 dialog]
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
The same import specification that you would use with DoCmd.TransferText is required. If the specification caused an error here, your original code above would fail in the same way.

And for reassurance: the name of an import specification is not the same as the name of a saved import, which is created when you save a one-time, manually configured import (including its specification) at the end of the wizard dialog.
 

CranstonSnord

New member
Local time
Today, 01:27
Joined
Feb 29, 2024
Messages
8
The spec absolutely exists, because I have already imported/exported with it many times successfully.
[Screenshot: the saved import/export specifications list]

However, Access only returns error 3625 when the query attempts to use it to link to the text file. I will do some Googling on my end to see if I can better understand how to open a text file as an Access table.

Thank you for all your help!!! I don't want to waste too much of your time - since this isn't a production database, it's not the end of the world if I can't get it to work exactly as I might like. Thank you again for the idea; I will continue to play with the code on my end to see if I can get it to work.
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
DoCmd.TransferText acLinkDelim,...
This would link the text file as a table. If that works, the query must also work; it's the same technique.
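As a quick test, something like this (the link name lnkImportTest is arbitrary, and the path follows your earlier post):

Code:
' Link the text file as a table, using the same import specification
DoCmd.TransferText acLinkDelim, "AL_TR_AL_Link_IMEX_Spec", _
    "lnkImportTest", "C:\Temp\AL_TR_AR_Link.txt"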
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
I would like to point out once again that the suggestion in #2 is not untested air code, but (provided the currently necessary details are observed) a definitely working solution that I often use in exactly this form.

If anything, the problem lies in the specific individual implementation.
If in doubt, you can upload a sample database here with the (empty) table and the import specification, as well as an example text file, so that a third party can look at the overall situation. It is easier to talk to each other with the same objects in front of your eyes.
 

CranstonSnord

New member
Local time
Today, 01:27
Joined
Feb 29, 2024
Messages
8
Thank you once again for all your time and help. Unfortunately I'm enough of a newbie that I have no clue how to upload an empty database with the necessary objects to this site, otherwise I would do so.

I'm starting to wonder if my version of Access (MS Office Professional 2010, Version 14.0.7268.5000) is too old, or if my operating system (Windows 7) is not up to the task. I very much like your solution, and I hope that by playing around with the code I can convince my installation of Access to connect to a text file so I can implement it.

If I'm not successful, no harm done; it's not the end of the world if this particular portion of my application doesn't work exactly as I'd like. Once again, many thanks for your patience and for sharing a great idea. :geek: At this point, I'm willing to mark this question as "Solved" and move on.
 

CranstonSnord

New member
Local time
Today, 01:27
Joined
Feb 29, 2024
Messages
8
After much dinking around, I was able to get Access to link to the text file using my IMEX spec in one statement, and then use the linked table in your INSERT query to proceed. WooHoo! I am very pleased; thanks again for sharing your knowledge.
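For anyone finding this thread later, the working sequence is roughly this (a sketch; the link name is arbitrary, and the join condition must be adjusted to the table's actual unique key):

Code:
' 1. Link the text file as a table via the IMEX spec
DoCmd.TransferText acLinkDelim, "AL_TR_AL_Link_IMEX_Spec", _
    "lnkImport", "C:\Temp\AL_TR_AR_Link.txt"

' 2. Append only rows whose key is not already in the destination table
CurrentDb.Execute _
    "INSERT INTO AL_TR_AL_Link (ALBUM_ID, TRACK_ID, ARTIST_ID, LOCN_ID)" & _
    " SELECT T.ALBUM_ID, T.TRACK_ID, T.ARTIST_ID, T.LOCN_ID" & _
    " FROM lnkImport AS T LEFT JOIN AL_TR_AL_Link AS D" & _
    " ON T.ALBUM_ID = D.ALBUM_ID AND T.TRACK_ID = D.TRACK_ID" & _
    " WHERE D.ALBUM_ID Is Null", dbFailOnError

' 3. Remove the link again
DoCmd.DeleteObject acTable, "lnkImport"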
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 01:27
Joined
Feb 19, 2002
Messages
43,275
Thanks Pat. The only portion of the solution that @ebs17 posted that I'm still unsure of is how to create the DSN NameSpec. Would I have to create a different DSN for each and every different table that I'm importing?
I didn't look closely enough at his suggestion. Just use linked tables. You can join the database table to the linked text table and run an append to the database table.

This is not efficient though. "Efficient" is actually getting the error and ignoring it. When you join heterogeneous tables, Access has to make them both local. So, the table from the linked BE is brought into memory locally and the table from the linked text file is also brought into memory locally; it makes no difference how you reference this table. Specifying the file name the way ebs did is NOT more efficient than simply linking to the text file.

I use linked tables frequently. In many situations, the files have different names each time I link them. I still have never had a reason to use the technique ebs mentioned; I ALWAYS use simple linked tables. As long as you don't change the name of the linked table, none of your code or queries will break. So, my table is linked as BankStatement but the file name changes every month. The change in the file name is irrelevant. What is important is that I always link it as "BankStatement". That keeps my code and queries from breaking.

The nice thing about Access is that it actually allows you to join heterogeneous tables. I should have qualified my answer: in the case where I use the join, the tables are very small and won't ever get large. I wouldn't use the technique for a large (50,000+ row) table; I would simply ignore the "duplicate" errors. If my BE is SQL Server, I might upload the linked text file to a temp table on the server. Then the join is fine, because SQL Server handles it on the server. It is only in the case of the heterogeneous join that Access brings EACH table local. The larger the tables are, the more expensive this technique becomes.
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
"Efficient" is actually getting the error and ignoring it.
This needs special explanation for me.
Have you ever driven a car? Hit the wall and ignored it?
I think it's better to prevent errors from occurring in the first place instead of ignoring them. In some cases it may be very difficult to ignore them.

Specifying the file name the way ebs did is NOT more efficient than simply linking to the text file.
But it is no less efficient either.
There are two reasons why I like to use the variant shown (connection data directly in the query).

1) The text file is linked only for the moment the query runs, so I am spared the explicit linking and unlinking. This makes it easy to process entire batches of import files with a simple loop, as sketched below.
And if the text file is later removed, a leftover link to it wouldn't be a good situation either. Excel tables can be processed in the same way.

2) I don't necessarily have to link both the backend table and the text file in my frontend in order to access both tables at the same time. As is well known, a DAO reference or an ADODB connection can only point at one backend at a time; if I address the second backend directly in the query, I don't need a third file into which both external backends are linked.

Overall, this gives me a more flexible position.
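The batch loop mentioned under 1) could look roughly like this (a sketch; folder, specification, table and key names are the placeholders from post #2):

Code:
Dim db As DAO.Database
Dim sFile As String

Set db = CurrentDb
sFile = Dir("C:\Temp\Initial_Dev\*.txt")    ' first matching text file
Do While Len(sFile) > 0
    ' Each file is opened only for the moment of this query;
    ' no explicit linking or unlinking is required.
    db.Execute "INSERT INTO DestinationTable ([FieldList])" & _
        " SELECT T.[FieldList]" & _
        " FROM [Text;DSN=NameSpec;FMT=Delimited;HDR=yes;IMEX=2;" & _
        "CharacterSet=850;DATABASE=C:\Temp\Initial_Dev\].[" & sFile & "] AS T" & _
        " LEFT JOIN DestinationTable AS D ON T.Key = D.Key" & _
        " WHERE D.Key Is Null", dbFailOnError
    sFile = Dir                             ' next file
Loop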
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 01:27
Joined
Feb 19, 2002
Messages
43,275
This needs special explanation for me.
Then I will explain it for you. I/O is the most expensive operation an application performs; therefore, if I/O can be avoided, it should be. I've never done timings with Access, but I did extensive timings when I was developing CICS transactions with as many as four thousand concurrent users across the country, where most transactions required sub-second response times.

Simple things like not checking for a duplicate before inserting a row actually saves a lot of time once you are in a busy environment. There is no jeopardy of the bad data being saved, because the database engine won't allow it. You just skip the read and instead display the error generated by the insert. The logic being: most inserts will succeed, and very few will fail due to a duplicate value. Therefore, you should cater to the most likely outcome rather than the least likely, and in the process save one I/O operation. It adds up:) Take the 99% path rather than the 1% path. That leads to efficiency over time, even in an Access environment.

It is similar to making efficient choices when coding. If you have a long Select Case, you always (if you know what you are doing) put the most likely choices at the top of the list so they will be selected and Access can stop running the code in the Select Case as soon as possible. Same for If statements. Most common choice first is the motto.
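For example (a trivial, hypothetical sketch of that ordering principle):

Code:
Select Case Me!cboCategory.Value
    Case 1        ' say ~90% of records land here: test it first
        ' ... handle the common case
    Case 2        ' less common
        ' ... handle it
    Case Else     ' rare cases last
        ' ... handle the remainder
End Select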

The Access environment isn't as time-intensive as a CICS environment on a mainframe, where you are competing for resources with thousands of other tasks, so with Access I am more inclined to prevent the error rather than report it. This is generally more user friendly, because if you are not going to allow the user to save the record, you shouldn't let him enter any more data than necessary. So this is actually a situation where I use the control's BeforeUpdate event, so the error can be reported immediately rather than waiting to do all validation in the form's BeforeUpdate event. However, this particular situation is different, because heterogeneous joins are extremely inefficient. It is a miracle that Access can perform them at all, but be prepared to pay the cost.

As I explained, if the BE is SQL Server, where you have the ability to create temp tables, importing the external data to the server and doing the join there is an option. However, performing a heterogeneous join with Access is VERY EXPENSIVE, given that Access will bring all the data from BOTH tables into memory on the local PC to perform the join.

Does that clarify the problem?
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
Your explanation doesn't explain anything to me.

Either we talk about Access, since we are in an Access forum, or we talk about something else. Mixing everything together with a large mixing spoon doesn't make much sense.
Simple things like not checking for a duplicate before inserting a row actually saves a lot of time once you are in a busy environment.
I do not believe that. How can you say that without measuring it?
Not checking tends to save the developer's intellectual effort. Is this the desired efficiency?
An error is an error, including an index error. That's how I see it, and @CranstonSnord sees it that way too.
It may well be that you can often get away with ignoring errors. Anyone who consciously allows something like this can live with the consequences.

if the BE is SQL Server
That would be a different topic, namely the use of T-SQL.

However, the queries above with linked tables are Jet SQL, and since Jet 4.0 (Access 2000) that dialect has allowed external databases to be integrated directly into a query formulation. Jet SQL runs locally, even with linked tables on a SQL server.
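For illustration, both of these Jet SQL forms address an external Access backend directly in a query (the path and table name are hypothetical):

Code:
SELECT * FROM SomeTable IN 'C:\Data\OtherBackend.accdb';

SELECT * FROM [C:\Data\OtherBackend.accdb].SomeTable;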
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 01:27
Joined
Feb 19, 2002
Messages
43,275
Either we talk about Access, since we are in an Access forum, or we talk about something else.
I can explain it to you, but I can't understand it for you. What part of "I/O is the most expensive operation a program performs" do you not understand? That is the only concept you need to grasp to follow my explanation.
I do not believe that. How can you say that without measuring it?
What part do you not understand? The part that says the number of errors in this context is statistically small, or the part that says doing two I/O operations is more expensive than doing one?
It may well be that you can often get away with ignoring errors. Anyone who consciously allows something like this can live with the consequences.
What part of "there is no danger in this because the database engine traps the error" do you not understand? I write EXTENSIVELY about error trapping and how critical it is to a sound application. But if the database engine is going to trap certain errors, it is hubris to think that you must trap them also to "save the day".

One side of the join is a Jet/ACE table. The other side is an external text file. The only way to join these two heterogeneous "tables" is RBAR (row by agonizing row); no indexes or other SQL optimizations can be employed.

So, in your opinion, a RBAR join to do the append is more efficient than an append query that does NO join but simply appends all the rows and traps the errors? OK. Whatever.
 

ebs17

Well-known member
Local time
Today, 07:27
Joined
Feb 7, 2020
Messages
1,946
"Efficient" is actually getting the error and ignoring it.
Allowing and even ignoring errors is an expression of efficiency? Of course the database engine rejects anything that is invalid, and the database doesn't collapse straight away.
But what about data integrity, such as guaranteeing complete imports?
Has the subsequent processing of error tables been included in the efficiency analysis, assuming they even record all the errors?

The focus here was on preventing unwanted duplicates, and thus the index errors caused by violations of a unique index.
Can someone now tell me how to ignore exactly this error while still taking other errors into account (index errors due to RI violations, data type errors, conversion errors, violations of validation rules)?
But if the database engine is going to trap certain errors, it is hubris to think that you must trap them also to "save the day".
How can you be sure that the only errors that occur are ones for which the database engine's own error trapping is sufficient?
Or do we forget about errors altogether so that an import is quick and therefore "efficient", because it is no longer important to have correct and complete data?

As far as I know, dbFailOnError causes a rollback in the event of an error, i.e. the table is reset to its initial state. You identify the error, correct it in a suitable form, and start the query again from the beginning. This is easier than analyzing and cleaning up the table afterwards when an import is aborted and incomplete.
Expected errors, such as those in the analysis just mentioned, can be avoided during the import process, and violations of a unique index are expected errors, simply because a double import can occur (same data in different files, or a repeat import of the same file).
If I force a rollback because of expected problems, progress and efficiency are over.
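For completeness, trapping that rollback in VBA looks roughly like this (a sketch, assuming db and sSQL as in post #2; 3022 is the engine's duplicate-key error number):

Code:
On Error GoTo ErrHandler
db.Execute sSQL, dbFailOnError    ' any error rolls back the entire insert
Exit Sub

ErrHandler:
If Err.Number = 3022 Then         ' duplicate values in a unique index
    MsgBox "Import rejected: the file contains rows that already exist.", vbExclamation
Else
    MsgBox "Import failed: " & Err.Description, vbCritical
End If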

It's a question of style, and striving to be error-free has practical implications. What's not to understand there?
 
