You don't say whether the database is shared over a network or confined to a single system, so some of what I suggest might not apply.
A common issue that crops up with Access, and the reason it has to be compacted ("Compact and Repair") so often, is that you need enough memory and enough free disk space to do Save operations, queries, and other niceties.
Factors affecting this requirement include:
1. Do you have enough physical memory and swap space for Access to build a complete virtual image of the new database that you are about to save?
2. If you are on a networked drive, is a quota in effect for your folder, and do you have at least as much free space as the Save needs to create a new, unnamed copy of the database? (Access deletes the old copy and renames the new one afterward.) A rough headroom check is sketched just after this list.
3. Does your client have an established regimen of compacting on a regular and frequent basis? Because, you see, Access does not voluntarily garbage-collect itself. Every temporary query you run chews up free space, causing Access databases to "bloat" like a puffer-fish in peril. The only way to reclaim that space is to compact.
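If you want a quick way to eyeball the headroom question in points 1 and 2, here is a minimal sketch (Python, with a made-up UNC path and a made-up safety margin) that compares the database's current size with the free space on its volume. Keep in mind it sees the volume's free space, not any per-folder quota, so a quota can still bite you even when this check passes.

```python
import os
import shutil

# Hypothetical path -- point this at the real database file.
DB_PATH = r"\\server\share\customer.mdb"

def headroom_report(db_path):
    """Compare the database size with the free space on its volume.

    A compact (or a design-change save) writes a complete new copy of the
    database before deleting the old one, so you want free space of at
    least the current file size, plus a comfortable margin.
    Note: this reports volume free space, not folder quotas.
    """
    db_size = os.path.getsize(db_path)
    usage = shutil.disk_usage(os.path.dirname(db_path))
    print(f"Database size : {db_size / 1024**2:,.1f} MB")
    print(f"Free space    : {usage.free / 1024**2:,.1f} MB")
    if usage.free < db_size * 1.2:   # 20% margin is an arbitrary guess
        print("WARNING: not enough headroom for a save/compact cycle.")

if __name__ == "__main__":
    headroom_report(DB_PATH)
```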
Now, the frequency of compaction DIRECTLY AFFECTS the size requirements for my questions 1 and 2, because it is possible to have enough space/quota for a freshly compacted copy of the database, but not enough for one that has been allowed to bloat for a few days without cleanup.
Only you and your customer can establish the "bloat" rate for that database. Check the size each day, work out an average growth rate, figure out the maximum size you can reach before you are in deep doo-doo, and make sure you start a compaction before things get too deep.
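If it helps, here is a rough sketch of that routine in Python: log the size once a day, work out the average growth, and kick off a compact while there is still room to spare. The file paths, the Office install location, and the size ceiling below are all placeholders you would have to fill in; the /compact command-line switch is a real Access switch, but it needs exclusive access to the database (nobody else can have it open).

```python
import csv
import datetime
import os
import subprocess

# All of these names are assumptions -- adjust them for the real setup.
DB_PATH = r"\\server\share\customer.mdb"
LOG_PATH = r"C:\logs\db_size_log.csv"
MSACCESS = r"C:\Program Files\Microsoft Office\root\Office16\MSACCESS.EXE"
MAX_SIZE_MB = 1500   # the "deep doo-doo" ceiling you decide on

def log_size_and_maybe_compact():
    """Record today's size, estimate the daily growth rate, and start a
    compact (via Access's /compact switch) before the bloat gets deep."""
    size_mb = os.path.getsize(DB_PATH) / 1024**2
    today = datetime.date.today().isoformat()

    # Append today's measurement to a simple CSV log (date, size in MB).
    rows = []
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH, newline="") as f:
            rows = list(csv.reader(f))
    rows.append([today, f"{size_mb:.1f}"])
    with open(LOG_PATH, "w", newline="") as f:
        csv.writer(f).writerows(rows)

    # Average growth per logged day, assuming one entry per day.
    if len(rows) >= 2:
        first, last = float(rows[0][1]), float(rows[-1][1])
        growth_per_day = (last - first) / (len(rows) - 1)
        print(f"Average growth: {growth_per_day:.1f} MB/day")

    # Compact while there is still headroom, not after it is too late.
    if size_mb > MAX_SIZE_MB * 0.8:
        subprocess.run([MSACCESS, DB_PATH, "/compact"], check=True)

if __name__ == "__main__":
    log_size_and_maybe_compact()
```

Run something like that from a nightly scheduled task and you will have the growth history and the early warning in one place.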