First, Uncle G is absolutely correct. You probably have tremendous amounts of redundancy in that data set. However, whether you attack this with Access or whether you have to do an Access FE and some other flavor of BE, there are some up-front decisions to be made.
The question is whether you actually need to remember every frimpin' little sample of the raw data or whether you are doing a lot of statistical aggregates. If you don't need the raw data to actually reside in the database because you will be doing averages, standard deviations, minima, maxima, and other things that mask the individual items, then leave the files in Excel. (Even if we are talking about yearly or monthly aggregates, this might be viable.)
Be sure that you rename the Excel files (if they are not already named this way) to a specific base name plus a 4-digit year. Be sure that their format is known, predictable, and invariant from file to file. Whether you do this as a macro (unlikely) or as VBA code (recommended), what I would suggest is that you import one spreadsheet to a temporary analysis table; a sketch follows below. Just ONE. Analyze the heck out of it: take your averages, minima, maxima, and other statistical aggregates. Copy the parts you absolutely HAVE to have. At the end of that process, erase the contents of your analysis table. Lather, rinse, repeat!
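Just to make that import step concrete, here is a minimal VBA sketch. The folder, base name, and the tblAnalysis table are hypothetical placeholders; adjust them to your own naming scheme:

```vba
Public Sub ImportOneYear(ByVal lngYear As Long)
    ' Hypothetical folder, base name, and table name; adjust to suit.
    Const BASE_PATH As String = "C:\WellData\"
    Dim strFile As String

    strFile = BASE_PATH & "Production" & Format(lngYear, "0000") & ".xlsx"

    ' Pull the sheet into the scratch table; True = first row holds field names.
    DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12Xml, _
        "tblAnalysis", strFile, True
End Sub
```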
This is a "divide and conquer" method that, if followed carefully - VERY methodically - would allow you to do lots of analysis. It could easily be managed via code. You could also easily store the aggregates in much smaller space.
Part I of your analysis is to use the analysis table to select distinct names for each well, each company, each county or other location designator, so that you have lookup tables you can use for establishing relations with the analysis table. If you have gradations such as crude quality ("sweet" or "sour" crude, based on sulfur content, if I recall that correctly), those could also be linked through your tables. Anything you can do that changes a long descriptive string into an index into a separate table will help you in storing the aggregates as well as the raw data; see the sketch below.
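As one illustration of Part I, something like this would append only the well names you have not already captured. All table and field names (tblWell, tblAnalysis, WellName) are hypothetical, and the same SELECT DISTINCT pattern works for the company, county, and crude-grade tables:

```vba
Public Sub UpdateWellLookup()
    ' Append any well names not already present in the lookup table.
    CurrentDb.Execute _
        "INSERT INTO tblWell (WellName) " & _
        "SELECT DISTINCT A.WellName FROM tblAnalysis AS A " & _
        "WHERE A.WellName NOT IN (SELECT W.WellName FROM tblWell AS W);", _
        dbFailOnError
End Sub
```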
Part II of your analysis: once you have updated the well, company, county, and other tables of identifying information, run the queries that do group-by aggregates. You can of course group by well, company, county, date, or any other level of aggregation needed. You might need a table for each different type of aggregation, but that should be possible. A sample query follows below.
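Here is one possible shape for such an aggregation, again with hypothetical table and field names (tblMonthlyAgg, SampleDate, Volume). It collapses the raw rows to one row per well per month:

```vba
Public Sub StoreMonthlyAggregates()
    ' Collapse the raw rows to one row per well per month and append
    ' them to a (hypothetical) permanent aggregates table.
    CurrentDb.Execute _
        "INSERT INTO tblMonthlyAgg (WellID, AggYear, AggMonth, " & _
        "SampleCount, AvgVol, MinVol, MaxVol) " & _
        "SELECT W.WellID, Year(A.SampleDate), Month(A.SampleDate), " & _
        "Count(*), Avg(A.Volume), Min(A.Volume), Max(A.Volume) " & _
        "FROM tblAnalysis AS A INNER JOIN tblWell AS W " & _
        "ON A.WellName = W.WellName " & _
        "GROUP BY W.WellID, Year(A.SampleDate), Month(A.SampleDate);", _
        dbFailOnError
End Sub
```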
Part III of the analysis is to dispose of the detailed data, which (if you use a temporary analysis table) might be as simple as erasing the table contents, as in the snippet below.
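That cleanup can be a one-liner (table name hypothetical, as before):

```vba
' Empty the scratch table so the next year's import starts clean.
CurrentDb.Execute "DELETE FROM tblAnalysis;", dbFailOnError
```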
Another option is to consider that an Access BE can hold up to 2 GB, and you can map more than one BE at the same time. (You just can't have duplicate table names without doing some things to qualify which BE you are using.) If you transfer your Excel data to a fixed-format BE file, making copies of the EMPTY BE file first, then you can open each BE file dynamically using VBA, store one year's data in that one file, and close it when done; see the sketch below. In the future you could do further analysis by stepping through one BE file at a time, while still keeping open your aggregate BE file and the FE file that contains the queries needed to drive that steaming mess. Yes, it WOULD be a hot mess simply based on the amount of data and the gyrations needed to manage it. But... it could be done. It is a matter of how much up-front design you are willing to do, how much research you are willing to do on dynamically opening and closing database files, and how much time you are willing to let this kind of analysis take.
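The dynamic-open part is less exotic than it sounds. A bare-bones sketch using DAO, with the path and table name as hypothetical stand-ins for your per-year BE files:

```vba
Public Sub ScanOneBE(ByVal strPath As String)
    ' Open one back-end file directly, work it, and close it.
    ' No table links required; strPath points at one per-year BE file.
    Dim db As DAO.Database
    Set db = DBEngine.OpenDatabase(strPath)

    ' ...run your analysis queries against db here, for example:
    Debug.Print strPath, db.TableDefs("tblAnalysis").RecordCount

    db.Close
    Set db = Nothing
End Sub
```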
Here's one last hint to consider: no matter HOW carefully you code this, doing numeric analysis on 75 GB of data ain't gonna be fast, pardner. If you have to do these analyses in Access, DO NOT UNDER ANY CIRCUMSTANCES write anything as a VBA-based recordset operation unless there is no other way to do what you need to do. (Reason: a VBA recordset loop runs interpreted code against one row at a time, while an equivalent SQL statement is optimized and executed wholesale by the database engine; the SQL route is dramatically faster. See the comparison below.)
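To make the contrast concrete, compare the shape of a row-by-row loop with a single SQL statement doing equivalent bulk work (field names hypothetical):

```vba
' The slow way: interpreted VBA touches every row, one at a time.
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("SELECT * FROM tblAnalysis")
Do Until rs.EOF
    ' ...per-row work here...
    rs.MoveNext
Loop
rs.Close

' The fast way: one SQL statement, executed wholesale by the engine.
CurrentDb.Execute "UPDATE tblAnalysis SET Volume = 0 " & _
    "WHERE Volume < 0;", dbFailOnError
```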
Good luck!