To add to the Doc's comments: you can use SQL Server Express, which is free - I believe the size limit is 10 GB rather than 2 GB. You also need to consider indexing - retrieving data based on a non-indexed field will be very slow for such a large dataset.
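If it helps, here's a sketch of adding such an index from VBA with a DDL statement - the table and field names ("Invoices", "CustomerAccount") are made up for illustration, so substitute your own:

    ' Sketch only: index the field you will be searching or joining on.
    ' "Invoices" and "CustomerAccount" are placeholder names.
    Sub AddSearchIndex()
        CurrentDb.Execute "CREATE INDEX idxCustomerAccount ON Invoices (CustomerAccount);", dbFailOnError
    End Sub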
As a rule of thumb, load say 4 weeks' data into Access, index it as required, compact, then note the file size. Multiply by 13 to give an annual estimate.
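To put that rule of thumb into code (just a sketch - the path is a placeholder, point it at your own sample db):

    ' Sketch only: estimate annual size from a compacted 4-week sample file.
    Sub EstimateAnnualSize()
        Dim sampleBytes As Double
        sampleBytes = FileLen("C:\Data\Sales_4weeks.accdb")   ' size in bytes after compacting
        Debug.Print "Estimated annual size (GB):"; Format(sampleBytes * 13 / 1024 ^ 3, "0.00")
    End Sub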
I don't agree with the comment about compacting being limited to a 1 GB file size. I've successfully compacted dbs approaching the 2 GB limit. When Access compacts, I believe the (simplified) process is:
1. create a new empty database
2. copy all objects to the new db on a component-by-component basis (records, indexes for tables) to eliminate the space created by temporary objects
3. rename the new db to the same name as the old db.
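For what it's worth, a compact can also be kicked off from code via DAO - this is only a sketch with placeholder paths, and note it works on a closed file (you can't compact the db you currently have open this way):

    ' Sketch only: compact a closed back-end file. Paths are placeholders
    ' and the destination file must not already exist.
    Sub CompactBackEnd()
        DBEngine.CompactDatabase "C:\Data\Sales.accdb", "C:\Data\Sales_compacted.accdb"
    End Sub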
I would also look at the data being stored - is it normalised? If not, then normalise it. It can yield a significant saving in space.
For example, say you are saving 250k invoices each week. If this has come through as a flat file containing customer name and address, invoice number and date, and line-item detail in a single record, normalising it into the three different components to eliminate duplicates can have a significant benefit.
OK, you'll need an additional index, but if the keys are longs rather than text it will still make a considerable saving. I've recently completed a similar exercise for a client - circa 2m records was bringing them close to the 2 GB limit; normalising the data brought this down to around 300k.
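As a rough sketch of what the normalised layout might look like (all table and field names and sizes here are made up for illustration), the three components with long-integer keys could be created like this:

    ' Sketch only: one possible normalised layout; all names are illustrative.
    Sub CreateNormalisedTables()
        Dim db As DAO.Database
        Set db = CurrentDb
        ' One row per customer instead of repeating name/address on every invoice record
        db.Execute "CREATE TABLE Customers (" & _
            "CustomerID COUNTER CONSTRAINT PK_Customers PRIMARY KEY, " & _
            "AccountNo TEXT(10), CustomerName TEXT(20), Address TEXT(60));", dbFailOnError
        ' Invoice header carries only a 4-byte long FK back to the customer
        db.Execute "CREATE TABLE Invoices (" & _
            "InvoiceID COUNTER CONSTRAINT PK_Invoices PRIMARY KEY, " & _
            "CustomerID LONG CONSTRAINT FK_Inv_Cust REFERENCES Customers (CustomerID), " & _
            "InvoiceNo LONG, InvoiceDate DATETIME);", dbFailOnError
        ' Line-item detail hangs off the invoice header
        db.Execute "CREATE TABLE InvoiceItems (" & _
            "ItemID COUNTER CONSTRAINT PK_InvoiceItems PRIMARY KEY, " & _
            "InvoiceID LONG CONSTRAINT FK_Item_Inv REFERENCES Invoices (InvoiceID), " & _
            "ItemDetail TEXT(255), Qty LONG, Price CURRENCY);", dbFailOnError
    End Sub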
Here is a simplistic example (index pointers are ignored, for instance). Consider a customer account number, name and address - perhaps on average 10 chars for the account number, 20 chars for the name and 60 for the address, a total of 90 chars. So, not normalised, 1m records require 1m * 90 = 90,000,000 bytes of storage, plus indexing the customer account number at 1m * 10 = 10,000,000. A total of 100,000,000 bytes.
Say in those 1m records you have 50,000 distinct customers and you normalise it. So now you have:
50,000 * (90 bytes for the data + 4 for the PK + 4 for its index) = 4,900,000
plus 1m * (4 for the FK + 4 for its index) for the rest of the table = 8,000,000
a total of 12,900,000 bytes - 12.9% of the original storage requirement.
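If you want to sanity-check those figures, the same back-of-envelope sums can be run from the Immediate window (a sketch only, using the assumptions above):

    ' Sketch only: reproduces the rough figures above (overheads/index pointers ignored).
    Sub EstimateSavings()
        Const Records As Double = 1000000      ' 1m flat-file records
        Const Customers As Double = 50000      ' distinct customers within them
        Dim flatBytes As Double, normBytes As Double
        flatBytes = Records * 90 + Records * 10                    ' data + account-number index
        normBytes = Customers * (90 + 4 + 4) + Records * (4 + 4)   ' customer table + FK and index
        Debug.Print flatBytes; normBytes; Format(normBytes / flatBytes, "0.0%")
    End Sub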
Something to consider....