@ebs17 - The question I have to ask is, what did you do with the duplicates when you detected them?  (I don't open someone else's databases on my home machine so I didn't look at your code.)
Looking at this from the outside, I see two people talking past each other. Perhaps I'll just tick you both off, but I hope to do otherwise by adding my two cents' worth.
@Pat Hartman is saying (if I read her correctly) that you have two basic problems here, or two parts to the problem. One part is the heterogeneous JOIN; the other is the expected duplicate entries. The difficulty on which she APPEARS to have focused her attention is the expected duplicates.
I don't want to put words in Pat's mouth, but I have to agree with her on what happens IN ACCESS when you make your application code ignore errors. Access, regardless of what you do, will do the import row by agonizing row (RBAR). The DB engine, regardless of what you do, will detect the (expected) duplication where applicable. The next question is whether you trapped the error or didn't trap the error. JET/ACE will disallow the insertion no matter what you do in your error-trapping routine, because a key violation is still a key violation. So with respect to your test: when a duplicate is detected, that record is rejected, and that is a base-line fact. That is the LEAST you can expect to happen. The question then arises - so what ELSE can you do?
There is an implied code context switch in the innards of the TransferText operation any time it has to respond to an error condition, particularly if it engages the error-trapping dialog box. If you are going to continue after the error AND you are trapping, you have just called an out-of-line, special-handling, no-parameter subroutine in the middle of the transfer process. This CANNOT be faster than, or even equal in speed to, just letting Access drop the bad record and go on its merry way in the loop, because that drop-and-proceed is base-line behavior.
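Here is a rough sketch of that row-by-row path with a trap in place, again using my assumed names; every rejected row has to leave the straight-through path, run the handler, and only then let the loop resume:

Public Sub RbarWithTrap()
    ' Same assumed names as above; this mimics the row-by-agonizing-row path.
    Dim db As DAO.Database
    Dim rsIn As DAO.Recordset, rsOut As DAO.Recordset
    Set db = CurrentDb
    Set rsIn = db.OpenRecordset("tblImport", dbOpenSnapshot)
    Set rsOut = db.OpenRecordset("tblTarget", dbOpenDynaset)

    On Error GoTo RowFailed            ' the out-of-line detour
    Do Until rsIn.EOF
        rsOut.AddNew
        rsOut!ID = rsIn!ID
        rsOut!Amount = rsIn!Amount
        rsOut.Update                   ' a key violation raises error 3022 here
NextRow:
        rsIn.MoveNext
    Loop

    rsIn.Close: rsOut.Close
    Exit Sub

RowFailed:
    ' The engine has already refused the row; all the trap can do is tidy up
    ' and send the loop back on its way - extra work the silent drop never pays.
    rsOut.CancelUpdate
    Debug.Print "Rejected ID " & rsIn!ID & " (error " & Err.Number & ")"
    Resume NextRow
End Sub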
Therefore, on simple logic alone, I have to disagree with your statement:
"I see myself in at least the same or better position by using the test for duplicates."
How do you deal with those duplicates after the import is complete? Different question. You can either pre-scan for duplicates (which adds the time of one query equal in size to the import operation), or you can impose some kind of flag during the import to show that the record came from the import and then post-scan for duplicates against the non-flagged records; that post-scan adds a full-scan query as well. Either way I see a pre-scan or post-scan that doubles the number of records you have to hit, OR I see an error-handler routine that doesn't always take a straight-through path through its code, and those extra trap-based diversions can NEVER make the rest of the loop faster.
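For completeness, here is what the pre-scan usually looks like when it is folded into the append itself, once more with my assumed names; the unmatched join means the engine never has a violation to reject:

Public Sub AppendOnlyNewKeys()
    ' Assumed names again: tblImport is the staging table, tblTarget is keyed on ID.
    Dim db As DAO.Database
    Set db = CurrentDb

    ' The pre-scan folded into the append: an unmatched LEFT JOIN keeps only
    ' keys not already in the target, so the engine never sees a violation.
    db.Execute _
        "INSERT INTO tblTarget (ID, Amount) " & _
        "SELECT i.ID, i.Amount " & _
        "FROM tblImport AS i LEFT JOIN tblTarget AS t ON i.ID = t.ID " & _
        "WHERE t.ID IS NULL", dbFailOnError

    Debug.Print db.RecordsAffected & " new rows appended"
End Sub

Note that the join still has to read the target once for every incoming key, which is exactly the "one query equal in size to the import operation" cost described above.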
Therefore "better position" seems to me to carry some baggage that you have not clarified.  What, specifically, makes it better?  You aren't going to import more records unless you do something to handle the key duplication.  I don't see how that can be faster.  
If there is a language barrier at work here, that might account for something, but it doesn't seem so. I just don't see your argument.