Several issues to be addressed:
First, you have separate tables for your data. Why? (This is rhetorical.) If you split the data only because it comes from separate sources, it makes sense to merge it; but if you split it for normalization, your method of populating the table(s) shouldn't change. Normalization doesn't depend on where the data comes from.
Second, finding differences between two iterations of the same imported table can be a bear when users have access to the table source. For that reason, I would never allow direct import from the spreadsheet. It will need some kind of massaging. Therefore, suggestions to import to an intermediate table followed by some sort of cleansing are spot-on.
Third, is there any need for historical tracking of changes? That possibility just leaps out of the description you gave us.
I might take the approach of importing the updated spreadsheet to a table that has columns for everything in the spreadsheet that you need to keep, plus one more column, type Yes/No, titled "UPDATEIT." Rather than importing the data to a new table, import it to an existing table that already has the correct field structure. Just delete every record in the table before importing so that it is empty, then import. If necessary, go back and set the UPDATEIT field to NO with an UPDATE query on your temp table, where that query has no WHERE clause.
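As a rough sketch (the table name here is made up), the "empty it, then reset the flag" part is just two action queries, with the actual spreadsheet import happening between them (for example via DoCmd.TransferSpreadsheet):

    DELETE FROM tblImportTemp;

    UPDATE tblImportTemp SET UPDATEIT = False;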
Then what you do is write an update query joining the temp table to the existing table on the field you are using as the primary key. You update the UPDATEIT flag to YES if any of the critical fields in the temp table do not match the corresponding fields in the tables you have from last week.
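In Access SQL, that query might look something like this; the table and field names are only placeholders for whatever you really have:

    UPDATE tblImportTemp INNER JOIN tblMaster
        ON tblImportTemp.CustomerID = tblMaster.CustomerID
    SET tblImportTemp.UPDATEIT = True
    WHERE tblImportTemp.CustomerName <> tblMaster.CustomerName
       OR tblImportTemp.Balance <> tblMaster.Balance;

One thing to watch: a plain <> comparison treats Nulls as "no difference," so if blank cells matter you may need to wrap the fields in Nz().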
If you need to do this step in several update queries, one for each pair of fields to be tested, that's OK. It might run slower, but it would be easier to maintain and understand. If you also need to flag records that don't appear in last week's table but DO appear in this week's table, again set UPDATEIT to YES. You might also do a pre-scan to see whether there are entries in the temp table that would fail a test of relational integrity because they don't correspond to records in the more-or-less fixed table you mentioned.
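Both of those checks can be sketched the same way (again with made-up names). The first flags temp records that have no match in last week's table; the second just lists temp records whose foreign key has no parent in your fixed lookup table:

    UPDATE tblImportTemp LEFT JOIN tblMaster
        ON tblImportTemp.CustomerID = tblMaster.CustomerID
    SET tblImportTemp.UPDATEIT = True
    WHERE tblMaster.CustomerID IS NULL;

    SELECT tblImportTemp.*
    FROM tblImportTemp LEFT JOIN tblRegions
        ON tblImportTemp.RegionID = tblRegions.RegionID
    WHERE tblRegions.RegionID IS NULL;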
When you are done, the UPDATEIT flag will be YES or NO depending on whether any differences were found. Then, working only in the temp table, delete the records where UPDATEIT = NO with a DELETE query. What's left in the temp table is what needs to be updated in the "real" table.
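That cleanup is a one-liner against the same hypothetical temp table:

    DELETE FROM tblImportTemp WHERE UPDATEIT = False;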
You could use a macro to run several update queries in a row, but I'm thinking you get better control (and better error handling) if you do it this way: first write your macro to run those update queries, then convert the macro to VBA and add error handling to the result. It's a quick and dirty shortcut, but then I've been accused of being a quick and dirty guy who CAN type but doesn't always like to. Converting a macro to VBA is one of those shortcuts that makes the initial design easy but still leaves room to add extra error checking afterward.
Then you can trigger the operation in some other way, perhaps from a form with a command button that runs the code you converted. From VBA you can trap errors and look at intermediate data if necessary, and you can make log entries showing what you found or what you rejected as "no change," or however that needs to work.
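Purely as a sketch of what the converted macro could grow into (the query names, the log table, and the button name are all invented here), the button's click event might end up looking something like this:

    Private Sub cmdRunWeeklyCompare_Click()
        ' Runs the saved action queries in order, stopping and logging on any error.
        On Error GoTo ErrHandler

        Dim db As DAO.Database
        Set db = CurrentDb

        db.Execute "qryResetUpdateFlag", dbFailOnError      ' UPDATEIT = No everywhere
        db.Execute "qryFlagChangedRecords", dbFailOnError   ' UPDATEIT = Yes where fields differ
        db.Execute "qryFlagNewRecords", dbFailOnError       ' UPDATEIT = Yes where no match last week
        db.Execute "qryDeleteUnchanged", dbFailOnError      ' drop the UPDATEIT = No rows

        db.Execute "INSERT INTO tblImportLog (LogTime, LogText) " & _
                   "VALUES (Now(), 'Weekly compare finished OK')", dbFailOnError
        Exit Sub

    ErrHandler:
        ' Escape single quotes so the error text can't break the log INSERT.
        db.Execute "INSERT INTO tblImportLog (LogTime, LogText) VALUES (Now(), '" & _
                   "Error " & Err.Number & ": " & Replace(Err.Description, "'", "''") & "')"
        MsgBox "Weekly compare failed: " & Err.Description, vbExclamation
    End Sub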
This might be daunting, but it is DEFINITELY a case where Julius Caesar got it right: Divide and conquer. Break this problem up into little pieces, because each of the pieces is eminently do-able.