If the import file is not linked, then the suggestion in #3 won't work. Another method is to read the first line of the CSV file and compare it to a string of the concatenated field names of the target table. This approach checks the number of fields, the order of the fields, and the names of the fields all at once.
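The idea above can be sketched in Python (for illustration; the thread's context is Access/VBA). The field names here are made up for the example; you would substitute the target table's actual columns:

```python
import csv
import io

def header_matches(csv_text, expected_fields):
    """Compare the first row of a CSV against the expected field names.

    One comparison checks field count, field order, and field names
    at the same time, since a list equality fails on any of the three.
    """
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader, [])
    # Strip stray whitespace around header cells before comparing.
    return [h.strip() for h in header] == list(expected_fields)

# Illustrative usage with hypothetical field names:
expected = ["StudentID", "LastName", "FirstName"]
print(header_matches("StudentID,LastName,FirstName\n1,Doe,Jo\n", expected))  # True
print(header_matches("StudentID,FirstName,LastName\n", expected))            # False (order differs)
```

In a real app you would read only the first line of the file rather than the whole thing, but the comparison is the same.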
It would help to know the exact errors you got in those queries. If you had queries that selected specific columns for the imported table, those queries should have worked unless someone added columns in the middle of the sheet. If you had row headers in the sheet being imported, you would be able to get field names as a possible import option. But we have to guess at what you actually saw.
I will answer the question a different way as well. I had the problem of importing a spreadsheet. It wasn't a CSV case but it DID have the occasional situation of someone deciding to add or remove columns. I required them to use a template sheet with column names in the first row. What I did was open the spreadsheet via an Excel Application object and read through the column headers. That made it easy to know if something was missing or something had been added. Tedious, because we had a lot of columns, but easy, because it was always the same thing:
' Rows(1).Cells.Count returns the sheet's total column count, so find the last used column instead
If worksheet.Cells(1, worksheet.Columns.Count).End(xlToLeft).Column <> expected-number Then GoTo BadSpreadsheet
If worksheet.Rows(1).Cells(1).Value <> "name-1" Then GoTo BadSpreadsheet
' ... one test per column ...
If worksheet.Rows(1).Cells(n).Value <> "name-n" Then GoTo BadSpreadsheet
Didn't matter that I had to repeat the column-name test over 30 times with 30 different "n" values and 30 different specific names. It was always a trivial test. If you WANTED to, you could add "bells and whistles" by making each If/Then report which column was wrong. That becomes more tedious, but still not difficult. Using this method, I was able to call out improper use of the template spreadsheet, and it was easy to process everything after that test, because I knew the spreadsheet was in a reasonable (if not perfect) format.
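The repeated cell-by-cell tests above, including the "bells and whistles" variant that names the offending column, can be sketched as a single loop. This is a Python translation of the technique for illustration (the original is VBA), and the column names are placeholders for the template's real headers:

```python
def check_headers(header_row, expected_names):
    """Validate a header row cell-by-cell, reporting the first problem found.

    Returns None if everything matches, otherwise a message saying
    which column (or the column count) was wrong.
    """
    if len(header_row) != len(expected_names):
        return f"expected {len(expected_names)} columns, found {len(header_row)}"
    for i, (got, want) in enumerate(zip(header_row, expected_names), start=1):
        if got != want:
            return f"column {i}: expected {want!r}, found {got!r}"
    return None

# Illustrative usage with hypothetical headers:
print(check_headers(["ID", "Name"], ["ID", "Name"]))  # None (all good)
print(check_headers(["ID", "Nmae"], ["ID", "Name"]))  # flags column 2
```

Driving the tests from a list of expected names means adding a 31st column to the template is a one-line change rather than a new block of If/Then tests.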
Even if the field count matches (as in post #3), the field names may not match as the user could have added N fields and deleted the same number.
Although you can write lengthy routines as mentioned by Doc or use Cronk's approach, just specifying the field names as @arnelgp suggested is likely to work just as well and with far less effort. That's always been the approach I've used in my own apps when doing an annual archive of 50+ tables of old student data where, over the past year, some fields may have been added to or deleted from several of those tables.
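To make the "just specify the field names" approach concrete, here is a small Python sketch that builds an append query with an explicit field list, so extra or reordered columns in the source can't silently shift data into the wrong fields. The table and field names are illustrative, not from the thread:

```python
def build_append_sql(target_table, source_table, fields):
    """Build an append query that names its fields explicitly.

    Because both the INSERT and SELECT lists name the same fields,
    the append only depends on those fields existing, not on the
    source's column count or column order.
    """
    cols = ", ".join(f"[{f}]" for f in fields)  # bracket names Access-style
    return (f"INSERT INTO [{target_table}] ({cols}) "
            f"SELECT {cols} FROM [{source_table}];")

# Illustrative usage with hypothetical table/field names:
sql = build_append_sql("ArchiveStudents", "ImportStudents", ["StudentID", "LastName"])
print(sql)
```

If a named field has been deleted from the source, the query fails loudly instead of importing misaligned data, which is usually what you want.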
In my case I kept a log of such changes but the principle was the same.