Solved Key Violation Error Trapping During TransferText Operation

In my world, users can't just move the BE
Not in my world either. But they also have no reason to do something like that and disrupt the functionality of the application.

However, I don't see any connection to the thread. Errors and error messages, their avoidance, and efficiency and performance considerations are all the developer's responsibility.
If the user has to help the developer with this, things look bad.
 
Not in my world either.
And yet, that's how your example gets around the problem I was describing. We can be done with this. It is simply going in circles as your only objective is to try to prove me wrong rather than understand the point.
 
I would really like to see a test environment that encapsulates and shows exactly the problem you describe. Reading from the text file and appending content into the table should be included.

If I am told that my test environment is wrong, I can accept it if there is conclusive evidence or if I am shown an alternative, better environment.

How do I get around the problem? Show me how to handle it with purpose. Follow the claims with actions!
 
How do I get around the problem? Show me how to handle it with purpose.
The table you are updating should be SQL Server and the input data some type of text file - .txt, .csv, .xls, .xlsx.
 
The table you are updating should be SQL Server
Is this your specific decision, separate from the questioner's topic, or a rearguard action?

Given that question, you will understand that I will not build an environment just for your satisfaction; you can do that yourself if you are really interested. Doing it yourself is the better option anyway, because of the learning effect.

As a general answer: Of course, the destination table can just as well be a table in an external Access backend or a table on a SQL Server instead of one in the same file. If you link this table into the FE, you use the same code.
The only thing that changes in the append query is the destination table. If I test several variants, and over several runs, I need a separate destination table for each variant and run. The real extra effort lies only in creating the tables in the backend and deleting them again afterwards, or alternatively copying prepared backends and swapping them in.
It takes work, but it's not impossible.
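As an illustration of that claim, here is a minimal sketch of such an append query run from the FE. The destination table tblTarget (local or linked from an Access or SQL Server backend), the key field ID, the import specification "ImportSpec" and the file path are only placeholders I am assuming here:

Code:
' Appends only those rows from the text file whose key does not yet exist
' in the destination table. tblTarget can be a local or a linked table;
' the statement is the same either way.
Public Sub AppendWithoutDuplicates()
    Dim strSql As String

    strSql = _
        "INSERT INTO tblTarget (ID, F2, F3) " & _
        "SELECT T.ID, T.F2, T.F3 " & _
        "FROM [Text;DSN=ImportSpec;FMT=Delimited;HDR=Yes;DATABASE=C:\Data].[Import#csv] AS T " & _
        "LEFT JOIN tblTarget AS D ON T.ID = D.ID " & _
        "WHERE D.ID IS NULL"

    ' dbFailOnError raises a trappable error instead of silently dropping rows
    CurrentDb.Execute strSql, dbFailOnError
End Sub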

Instead of linking external tables to the frontend, you can point your DAO reference or ADODB connection directly to the backend. The queries still remain very much the same.
If I reference SQL Server directly, it makes sense to use T-SQL.

I don't know what numbers will result from measurements under these conditions. You are welcome to present and discuss these numbers here.
Since you talked about statistics above: the variants can also be extended so that, for example, 5 duplicates are thrown against 500,000 records in the destination table. Anything conceivable is possible; you just need more time for building and running the tests. Anyone with a strong interest and a strong desire to persuade will gladly spend that time.

I reject the accusation that I would circumvent the problem.

The question still stands: can you trust a text file coming from outside to the extent that it contains no errors other than possible duplicates, so that you can simply ignore errors, or is it better to guard against them? As I have written several times, I have only one clear answer to that.
 
the input data some type of text file - .txt, .csv, .xls, .xlsx.
I never talked about anything other than text file as a data source.
An Excel worksheet can be integrated into the SQL statement in a similar way, but no import specification is available there, which increases the risk of errors during import (columns in Excel worksheets are not data-type-safe). That is not an argument in favour of ignoring errors.
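For comparison, a minimal sketch with an Excel workbook as the source; the same placeholder destination table and key field as above are assumed, and the workbook path and sheet name are likewise only examples:

Code:
' Same duplicate-avoiding append, but with an Excel sheet as the source.
' No import specification applies here, so the ACE driver guesses the
' column data types from the sheet contents.
Public Sub AppendFromExcel()
    Dim strSql As String

    strSql = _
        "INSERT INTO tblTarget (ID, F2, F3) " & _
        "SELECT X.ID, X.F2, X.F3 " & _
        "FROM [Excel 12.0 Xml;HDR=Yes;DATABASE=C:\Data\Import.xlsx].[Sheet1$] AS X " & _
        "LEFT JOIN tblTarget AS D ON X.ID = D.ID " & _
        "WHERE D.ID IS NULL"

    CurrentDb.Execute strSql, dbFailOnError
End Sub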
 
@Pat Hartman
I don't know whether this will reassure you.
Of course I know and acknowledge that a text file, and therefore the text table read from it, cannot have an index.
Because of the LEFT JOIN, the comparison can only be made row by row, each row's key against each other row's key, which is less efficient than using an index for the check. And skipping an explicit check altogether creates the expectation that additional time will be saved. Up to this point I can understand and accept everything.

But the import does not consist only of checking for and preventing duplicates. In any case, reading the source data and writing data to the destination table are part of it too. Runtime efficiency shows up as performance, so a truly practical assessment can only be based on the runtime of the minimal complete process.
A chain is only as strong as its weakest link; one particularly strong link on its own is of no use.
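For anyone who wants to repeat the measurement, a minimal sketch of how the complete import might be timed, using the hypothetical AppendWithoutDuplicates routine from the sketch above; VBA's Timer resolution of roughly a hundredth of a second is sufficient here:

Code:
' Times the complete import (reading the text file, the duplicate check
' and writing to the destination table) rather than the join alone.
Public Sub MeasureImport()
    Dim sngStart As Single
    sngStart = Timer
    AppendWithoutDuplicates
    Debug.Print "Complete import: " & Format$(Timer - sngStart, "0.00") & " s"
End Sub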

In case @isladogs reads along here:
You are recognized in this forum as an independent party when it comes to speed comparison tests. If you want, you are welcome to use the entire content of this thread for your own tests, even as part of your own series.
The topic of unwanted duplicates, errors and error messages should be of general interest and deserves some attention.
 
You still do not understand the problem. Perhaps someone else can explain to you how a heterogeneous join is performed by Access.
 
You still do not understand the problem.
What is the actual impact of the problem that I do not understand?
Does the query not work, or is it incorrect?
Does it extend run times, in addition to what I measured?
Does it destroy the database?
Does it annoy the network operator?

Without negative effects, it doesn't really matter. I don't really need to know how to make the very best snow igloos unless I go anywhere near the Arctic.
 
It's the part about Access having to bring every single row from the server table into memory on the local pc in order to do the join.
 
Sure, Access, i.e. Jet SQL, works locally and has to load the required data from the backends into its own workspace. That is always the case and not specific to this situation. Since there are no usable indexes for filtering, entire tables do indeed have to be loaded. However, in the first step only the key fields are loaded, not the complete records. In addition, Jet SQL has to communicate with T-SQL; it cannot act directly on the SQL Server database engine.

So where is the real problem? There would be a real problem if the runtimes became unacceptably longer because of all the steps that have to be carried out. In my test this was not the case. However, the record counts used were ones that are typical in my experience.

But what you did not register, at least not noticeably in your feedback: if I send a T-SQL query to the SQL Server via ADO, Jet SQL, and therefore downloading data into local Access, is no longer involved at all. I am convinced that you know that. But then this knowledge should also appear in the discussion.
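To make that concrete, a minimal sketch under the assumption that the text file has already been loaded into a staging table stgImport on the server (for example via BULK INSERT or bcp); the connection string, server, database and all object names are only placeholders:

Code:
' Executes a T-SQL append entirely on the server; no rows are pulled
' into the local Access session.
Public Sub AppendViaTSql()
    Dim cn As Object            ' ADODB.Connection, late bound
    Dim strTSql As String

    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=MSOLEDBSQL;Data Source=MyServer;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI;"

    strTSql = _
        "INSERT INTO dbo.tblTarget (ID, F2, F3) " & _
        "SELECT s.ID, s.F2, s.F3 " & _
        "FROM dbo.stgImport AS s " & _
        "WHERE NOT EXISTS (SELECT 1 FROM dbo.tblTarget AS t WHERE t.ID = s.ID);"

    cn.Execute strTSql, , 129   ' adCmdText (1) + adExecuteNoRecords (128)
    cn.Close
End Sub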
 
Despite being name checked more than once in this lengthy thread by both @ebs17 and @Pat Hartman, I have deliberately kept clear of this 'discussion'.
It seems to me that you are both dogmatically restating the same views endlessly and are both determined to have the last word.
That is something I've certainly been guilty of at various times!
The last thing this thread needs is a third equally stubborn individual (me!) to complicate matters even further.

We're now on post #53 and the OP hasn't been seen since post #14. Is continuing this excessively long thread really serving any purpose?
 
Is continuing this excessively long thread really serving any purpose?
If no third party is interested in the content, then not any longer. Thanks for your feedback.

Another opinion would perhaps help me to clarify where my mistakes may lie or to clarify to me THE PROBLEM (which somehow shifts when I try to approach it). But I won't die from an indecisive or lost discussion. Also: A problem that I don't have doesn't bother me.
 
For the hundredth time: what bothers you about the "inefficiency of heterogeneous joins" when the runtimes for the entire import are comparable to, or sometimes even better than, those of the approach that avoids them?
If you want to hear that they represent an efficiency problem in themselves: sure. I always prefer to use indexes in queries; that attitude of mine should be clear from some of my contributions. But JOINs between tables from the same database that don't use indexes have practically the same problem. Therefore, singling out a query against a text file is way over the top when the concern is practice rather than ideology.
 
We are done with this. Your sample does not replicate the problem. No one is interested in this conversation. So just end it.
 
Your sample does not replicate the problem.
After all, it addresses exactly the question asked by @CranstonSnord, and that is the most important thing in the topic.
If you have your own problem, open your own thread and wait for replies.
I would tentatively ask whether anyone else has understood you. The only one who spoke up has run away in horror.
 
OK YOU WIN. You don't ever need to understand the problem. LET IT GO.
 
