It is not entirely clear, but I surmise you are doing a positive-word/negative-word evaluation over a much longer text sequence - some sort of word-frequency analysis for specific words with positive or negative connotations.
Issues that will determine whether this works reasonably well or not so well:
1. In what kind of field (in your target) do you expect to find the "connotation" words - Short Text, Long Text, something else?
2. How many neutral words will be in the field to be searched - i.e. what percent of the target string is likely to contain the information you want?
3. Are you searching for whole words, or can there be included/overlapping matches? I.e., if you look for "crap", will you also match "crappy" in the same search, or will you do separate searches based on two different entries in your negative-word table?
4. How many words are in the positive and negative tables, and how many paragraphs are in the target table? This governs how many matches you will have to face when looking at your results.
I would guess that there is at least some potential for more than one word of positive or negative connotation in any arbitrary paragraph. Further, you are doing a complex type of string match-up, potentially with LIKE operators. Therefore this will be a compute-intensive query, and depending on how exclusive your word tables are, you might wait a while before you see many hits. I wonder about performance for this case.
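For the record, the kind of query I am picturing would be something like the following (table and field names are invented). Note that it is effectively a Cartesian join - every paragraph gets tested against every word in the list:

    SELECT P.ParagraphID, W.Word, W.Connotation
    FROM tblParagraphs AS P, tblWordList AS W
    WHERE P.ParaText LIKE "*" & W.Word & "*";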
If this information is coming from a Word document and if you are at least a little bit adventuresome, there might be another way to deal with this, called a Scripting Dictionary object. You can use it in Access if you set a reference to the Microsoft Scripting Runtime library. (It is NOT native to Access, but IS usable by Access.)
Instead of having a table of good words and a table of bad words and trying to do a join, you might create a dictionary of ALL of your "connotation" words with an attribute (called the "Item") of "Good" or "Bad" (or any other value you wish for these items). The dictionary words are automatically indexed, so they are easy to find once the dictionary is built.
What you do is create the dictionary by adding all the "connotation" words (perhaps by looping through recordsets for your good/bad tables). Then you can probe the dictionary: first use the
dictionaryobj.Exists(keyword)
method to see whether your target word is one of your connotation words; then, if the word IS in the dictionary, look up the good/bad marker item with
value = dictionaryobj.Item(keyword)
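As a sketch of both steps - the names here (tblGoodWords, tblBadWords, the [Word] field) are made up, so substitute your own:

    Sub BuildAndProbeConnotationDict()
        Dim db As DAO.Database
        Dim rs As DAO.Recordset
        Dim dict As Scripting.Dictionary    ' needs the Microsoft Scripting Runtime reference
        Dim strSomeWord As String
        Dim strMarker As String

        Set dict = New Scripting.Dictionary
        dict.CompareMode = vbTextCompare    ' case-insensitive key lookups; set while still empty

        Set db = CurrentDb

        ' Load the positive words: key = the word, Item = its connotation
        Set rs = db.OpenRecordset("SELECT [Word] FROM tblGoodWords")
        Do Until rs.EOF
            dict(rs![Word].Value) = "Good"
            rs.MoveNext
        Loop
        rs.Close

        ' Load the negative words the same way
        Set rs = db.OpenRecordset("SELECT [Word] FROM tblBadWords")
        Do Until rs.EOF
            dict(rs![Word].Value) = "Bad"
            rs.MoveNext
        Loop
        rs.Close

        ' Probe: Exists says whether the word is flagged at all;
        ' Item returns the Good/Bad marker
        strSomeWord = "crap"
        If dict.Exists(strSomeWord) Then
            strMarker = dict.Item(strSomeWord)
        End If
    End Sub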
See this article in MSDN:
https://msdn.microsoft.com/en-us/library/x4k5wbx4(v=vs.84).aspx
Several links on that page explain the available methods and give simple code examples.
It might sound daunting, but it is actually not too bad. Just as a reminder, if you go this way, you should remember to use the
dictionaryobj.RemoveAll
method when you are done and then set the dictionary object variable to Nothing (Set dictionaryobj = Nothing). That last step might not be strictly necessary if the object variable is declared in a subroutine rather than in the declaration area of a general module, but I advise it as a "belt-and-suspenders" precaution.
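In code, continuing with the dict variable from the sketch above, the teardown is just:

    dict.RemoveAll      ' empty out all of the keys and items
    Set dict = Nothing  ' release the object reference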
The idea would then be that you can step through the individual words in your target document, keeping track of paragraph and chapter, and check each word ONCE using the populated dictionary object to see if any of your flagged words are there.
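A minimal sketch of that scan, assuming each paragraph's text has already been pulled into a string variable and reusing dict from above (real code would also strip punctuation before the lookup):

    Dim strParaText As String       ' filled from your document, paragraph by paragraph
    Dim astrWords() As String
    Dim i As Long
    Dim lngGood As Long, lngBad As Long

    astrWords = Split(strParaText, " ")       ' crude tokenizer - splits on spaces only
    For i = LBound(astrWords) To UBound(astrWords)
        If dict.Exists(astrWords(i)) Then     ' each word is checked exactly ONCE
            If dict(astrWords(i)) = "Good" Then
                lngGood = lngGood + 1
            Else
                lngBad = lngBad + 1
            End If
        End If
    Next i
    ' lngGood and lngBad can then be written out per paragraph,
    ' along with the chapter and paragraph counters you keep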
Why do it this way?
1. Dictionary objects are indexed, so the search for a "connotation" word is fast.
2. It is fast because the dictionary stays resident in memory during your search.
3. The search is memory-based. The only disk I/O is for the target text source (plus whatever you wanted to store in a reference table).
4. Doing it this way is a one-pass algorithm, with only a little overhead to set up the dictionary up front and break it down when done.
I'm not at all saying that a query wouldn't do what you want - but the performance would be hellish for larger lists of "connotation" words and larger targets. Referencing "chapter" and "paragraph" makes me think there is a potential for the target to be a large document.