There are a few reasons a database gets big. From what I've seen of your posts, I'm betting on my item #5:
1. You don't compact it often enough.
2. You have long strings in places where a lookup to a separate table might save space. E.g., storing literal strings from a listbox or combobox where 'Limit To List' is true. If the source list is long enough, reducing the field from the full text string to an index into a lookup table can save lots of space. Sometimes you have to be relentless about it. (More about this one later.)
3. Scrupulously dump anything that can be recomputed on the fly through a query, even if it currently lives in a table somewhere. (See the sketch just after this list.)
4. Where possible, link to binary data objects rather than embed or import them. Particularly if they are static in size.
5. You might just have a lot of data, dude. Live with it.
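To illustrate #3: instead of storing a computed value (say, a line total) in the table, let a query rebuild it every time you need it. A minimal sketch, with made-up table and field names standing in for your own:

   SELECT [OrderID], [Quantity], [UnitPrice],
          [Quantity] * [UnitPrice] AS LineTotal
   FROM [OrderDetails];

The LineTotal column costs you nothing on disk; it gets recomputed each time the query runs.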
-------------------------------------------------------------------------
In order to determine whether #2 is the case, open each table one at a time in datasheet view. Using the A>>Z or Z>>A buttons in the toolbar, sort on each text field, one field at a time. (Of course, don't save the table that way. And you can't do sorts on a Memo field so easily.) See if doing so reveals cases where you have text that reappears very frequently. In other words, a particular raw table shows repetitious values in some fields.
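If clicking through every field that way gets tedious, a sorted query does the same job. A sketch, with [YourTable] and [YourTextField] standing in for your real names:

   SELECT [YourTextField]
   FROM [YourTable]
   ORDER BY [YourTextField];

Scroll the result and watch for long runs of identical values.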
If the number of rows is large enough and the field is long enough and the number of repetitions is high enough, it might be a candidate for treatment as suggested in #2. Of course, you now ask me to define "enough" .... (I knew you were going to do that!) Well, actually I can't define it cleanly. Because what you are facing is a "return on investment" situation. You have to tell yourself whether the suggested method returns enough space to help you.
How do you decide how much space you get back?.... (Darn, knew you were going to ask THAT one, too.) At least this time I can answer better.
Make a totals query that does a group-by of that field and has a second field that holds the count of the primary key field for that group. In effect, this second column tells you how often the given value repeats. You can do a sight inspection, or perhaps write a second query against the first one to compute the average of that count column. This number, in some references, would be called the "Cardinality" of the table with respect to that field. It tells you how many records to expect, on the average, if you made that field into an index and probed each index value.
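In SQL view, those two queries might look something like this. This is only a sketch; [YourTable], [YourTextField], and [YourKeyField] are placeholders, and qryRepeatCounts is whatever you save the first query as:

   SELECT [YourTextField], Count([YourKeyField]) AS RepeatCount
   FROM [YourTable]
   GROUP BY [YourTextField];

   SELECT Avg([RepeatCount]) AS AvgRepeats, Count(*) AS DistinctValues
   FROM [qryRepeatCounts];

AvgRepeats is the number I'm calling cardinality here, and DistinctValues tells you how many rows the lookup table would need.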
The higher this number, the more you will gain from converting the raw text field to a lookup in a separate table. If the cardinality comes out between 1 and 2, it ain't worth it. If the value is up in the 3-5 range, then it PROBABLY ain't worth it unless the strings are really big. If in the 5-10 range, it COULD be worth it to make the change unless the strings are really small. If we are in the teens or above, you are talking about a good candidate for a lookup. Because at that point, if you take the average size of each string times the number of records holding one, you can make an estimate of how much you save, and it could be big. And if the cardinality is over 100, you have hit the jackpot!
Suppose, for example, that the cardinality works out to 100 and the average string size is at least 20. Say that you have about 1000 commonly used strings. (This implies a table with 100,000 records, by the way.)
OK, the difference in size between the string and an integer lookup code is 20 - 2 = 18 bytes. Over 100,000 records, converting the field frees up 18 x 100,000 = 1,800,000 bytes in your main table. The lookup table would have 20 bytes per string plus 2 bytes for the lookup code, or 22 bytes per record, times 1000 common strings = 22K bytes. You would remove 1.80M bytes and add back something that takes up 22K bytes. Net savings, 1.78M bytes. OK, it's an extreme case. But it gives you an idea of what can happen if you look hard enough.
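If the numbers say go, the conversion itself is only a handful of statements. A rough sketch using the same placeholder names as above (run each statement as its own query in Access; note that a Long Integer key is 4 bytes rather than the 2 in the example, which doesn't change the story much).

First, build the lookup table from the distinct strings:

   SELECT DISTINCT [YourTextField] INTO [tblLookup]
   FROM [YourTable];

Give it an AutoNumber key, and give the main table a matching foreign key:

   ALTER TABLE [tblLookup] ADD COLUMN [LookupID] COUNTER;

   ALTER TABLE [YourTable] ADD COLUMN [LookupID] LONG;

Fill in the foreign key, then drop the old text field:

   UPDATE [YourTable] INNER JOIN [tblLookup]
   ON [YourTable].[YourTextField] = [tblLookup].[YourTextField]
   SET [YourTable].[LookupID] = [tblLookup].[LookupID];

   ALTER TABLE [YourTable] DROP COLUMN [YourTextField];

Back up the database before you try this, and remember item #1: the space doesn't actually come back until you compact.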
Now, here's a variation on that theme. Suppose that when sorted, the table shows a lot of near-matches in a given field, similar but not actually equal. Can you modify the similar values so that they ARE equal? Then do the lookup conversion?
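If the differences are just capitalization and stray spaces, one cheap way to force the near-duplicates together is an update query. A sketch only, and back up the table first; [YourTable] and [YourTextField] are placeholders again:

   UPDATE [YourTable]
   SET [YourTextField] = Trim(UCase([YourTextField]));

After that, rerun the totals query and see whether the repeat counts climbed.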
Data analysis of this sort can be a real pain. Look up the word 'tedium' in a dictionary and you'll see an Access DBA doing one of these analyses. The good news is that you don't need to do it often.
Don't forget that Access itself can help you with sorting, counts, and averages. And the table analyzer doesn't always catch cases like this.