Access 'Compact database' = festival of suck

peskywinnets

So I have a database, about 180MB, I compacted it ....it dropped down to 150MB - pretty reasonable you might think.

Well, I wasn't happy with the size reduction, so I created a new blank database & then imported everything in from the old database...the size of the new database with *everything* imported is 48MB.

Therefore, I suggest if your database is getting too large & compacting isn't having much effect, simply create a brand new database & import everything - it only takes about a minute or so to do.
 
1TB drive costs about £38.97
You saved about 4 millipence
I don't think it's worth the minute or so :)
 
Moreover, the database size will rapidly climb back up to the compacted size of the original when queries are run and forms and reports are loaded as the background stuff they require is regenerated in the new database.
 
1TB drive costs about £38.97

That'll be useful for Access's 2GB limit.
You saved about 4 millipence

....& as my mother used to say "look after the millipences & the pences look after themselves"

Moreover, the database size will rapidly climb back up to the compacted size of the original when queries are run and forms and reports are loaded as the background stuff they require is regenerated in the new database.

Quite possibly...but like I say it took me a minute to do ...I'll see how this grows over the next week.
 
Your opening description bothers me tremendously. I know that Access tends to accumulate a lot of gunk, but a 3-to-1 manual compaction ratio (150 to 48) is a bit extreme even for Access, particularly if the compact/repair operation was performed to get from 180 to 150.

The operation of a compact/repair function SHOULD be nearly identical in effect to the manual copy-over of tables and other detritus, in that the compaction visits every DB object one at a time and copies it analogously to a manual export to another DB file. It SHOULD act just like doing the export to a new, blank database by checking every exportable object in the export dialog box. I have to ask what you didn't copy.

No, I'm not being cruel (at least, not intentionally) and not being a smart-arse. But that is like the old comedy routine of the mechanic who takes apart a car and puts it back together, but he has a few parts left over.

Unless you had a SERIOUSLY corrupted database, that extreme result should not happen. On the other hand, if you did a compact/repair and had a badly corrupted database, Access should have told you when it was going to barf. Maybe it is like a Shakespeare play ("Much Ado About Nothing"), but this just doesn't sound right.
 
pesky,

Do you do regular compact and repair? And backups?
With over 200 posts, is this the first compact and repair where you have noticed the size difference before/after?
More info re Compact/repair here.
 
Doc's comments made me curious so I tried this on one of our databases. Here's what I got:

Original Size: 15,728,640 bytes
After Compact: 10,604,544 bytes
Initial Size of Imported Version: 20,709,376 bytes
Compacted Size of Imported Version: 9,888,744 bytes

Didn't help much in this case, and one thing I noticed is that, of course, the option settings weren't the same in the imported version. They're important to us, so setting them up again would add more time to this.
 
The operation of a compact/repair function SHOULD be nearly identical in effect to the manual copy-over of tables and other detritus, in that the compaction visits every DB object one at a time and copies it analogously to a manual export to another DB file. It SHOULD act just like doing the export to a new, blank database by checking every exportable object in the export dialog box. I have to ask what you didn't copy.
Actually I can see it happening in one of my databases too. The FE, after a few days of use, grows from around 4MB to 180MB, and a simple compact and repair doesn't change much (it goes down to around 150MB) - but I know why. Access "caches" pictures that I display on forms and reports - pictures that aren't embedded in the database; I only store paths to them in simple text fields. But they still take up space in the db, in the system table MSysResources. So apart from C&R I also delete all records with Type="img" from this table before compacting, and my db nicely goes down to 4MB again.
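If you want to automate that cleanup, a minimal VBA sketch might look like the one below. It assumes the MSysResources table exists in your file format (it appears in .accdb files) and that your settings allow deleting from system tables; the procedure name is made up for illustration.

```vba
' Sketch: purge Access's cached image resources before compacting.
' Assumes MSysResources exists (.accdb format) and is writable.
Public Sub PurgeImageCache()
    On Error Resume Next      ' table is absent in some formats
    CurrentDb.Execute "DELETE FROM MSysResources WHERE [Type] = 'img'", dbFailOnError
    On Error GoTo 0
    ' Then run Database Tools > Compact & Repair (or use Compact on Close)
    ' so the freed pages are actually reclaimed.
End Sub
```

A compact is still needed afterwards: deleting rows only marks pages as free; the file doesn't shrink until the database is compacted.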
 
cyanidem - you are right, I had forgotten about OLE objects. They CAN'T be so easily copied by the export operation, and I don't ever store embedded objects. I store LINKS to the items for picture fields, or I open the object using automation and a file spec. But that might be part of the original problem - excessive OLE storage vs. links to the objects.
 
The_Doc_Man - please read my post again. I also don't store OLE objects; I store links to them only. And I display those linked pics on forms and reports and that's it - they're never stored (at least by me, because Access does as it pleases). Apparently every time a new object is displayed on a form, report, whatever, it creates a cached copy in MSysResources.
 
I regularly compact & repair....but I have a seriously bloated database. Several times per day, a fair amount of data gets added to a staging table, then 'massaged' prior to being added to the final table. The data then gets deleted from the staging table...the end result = massive bloat!

yes, yes I know...I know (I need to approach this database differently) but frankly it all works & the thought of now starting to change things to make it less bloaty makes me recoil in horror (this is the database I use for running my one-man business - if I didn't have the database I'd be lost...so I'm loath to do much in the way of major changes).

At one stage - mainly from hanging out around these parts - I was sold on the concept of normalised data, splitting out repeat data into tables...so I did, but then had *massive* issues pulling in data from numerous tables to create self-populating emails via VBA - it sent me gaga. So I still have a hulking "Who's your Daddy" database!!

Therefore: regular compacts & repairs (which never report any errors), with the odd 'start afresh & import' now & then.

About 18 months ago this same database once got near the 2GB mark (this was back in the day when I was blissfully unaware of bloat & compacting!!!)....compacting at near 2GB made almost no difference, but when I created a new database & imported the bloated database it dropped from almost 2GB to 40MB!!!! (this illustrates how bloaty my database is!)

I now compact & repair regularly ...but it doesn't put much of a dent in the overall size (typically, I'll see it drop from say 180MB to 155MB etc.)

This is why I'm so astonished at the drop in size just by creating a new database & importing *everything* in from the bloaty database (hence the somewhat controversial title!).

Incidentally, I don't store OLE data in my database at all.

That said, there's a 'bugger' of a P.S. to this...previously when creating a new database, then importing the old one, selecting all tables, queries, forms, reports & relationships...everything worked afterwards. But this time I received a few errors during the import & some queries aren't quite working as they should in the new database, so I've had to go back to the larger bloaty database again :-(
 
pesky,

If you have a large database - say several years of data - and some of that data isn't used regularly, you could move parts into a separate database. If you work with a split db (FE/BE) you could have a second BE for the "archived data". If you are constantly filling temporary tables, or modifying forms, queries, reports and modules, then you are depleting the space Access has available to you. To get this space back (for your use) you do a compact and repair. Sometimes Access gets confused and the glitch results in corruption - it could be a control, an index... - and often a compact and repair resolves the issue. If not, then the next option is to create a blank database into which you import all objects.
And, for good database management and optimum comfort level, do regular backups.
Good luck.
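For anyone who wants to script the "blank database, import all objects" step, here is a hedged VBA sketch, run from the new, empty database. The source path is a placeholder, and only tables and queries are shown; forms, reports, macros and modules follow the same DoCmd.TransferDatabase pattern with acForm, acReport, acMacro and acModule.

```vba
' Sketch: import tables and queries from a bloated database into the
' current (blank) one. "C:\Data\OldBloated.accdb" is a placeholder path.
Public Sub ImportEverything()
    Const SRC As String = "C:\Data\OldBloated.accdb"
    Dim db As DAO.Database, tdf As DAO.TableDef, qdf As DAO.QueryDef
    Set db = DBEngine.OpenDatabase(SRC)
    For Each tdf In db.TableDefs
        If Left$(tdf.Name, 4) <> "MSys" Then    ' skip system tables
            DoCmd.TransferDatabase acImport, "Microsoft Access", SRC, acTable, tdf.Name, tdf.Name
        End If
    Next
    For Each qdf In db.QueryDefs
        If Left$(qdf.Name, 1) <> "~" Then       ' skip hidden bound queries
            DoCmd.TransferDatabase acImport, "Microsoft Access", SRC, acQuery, qdf.Name, qdf.Name
        End If
    Next
    db.Close
End Sub
```

Note that option settings, relationships and references are not carried over by this loop; as mentioned above, those need re-creating in the new file.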
 
I regularly compact & repair....but I have a seriously bloated database. Several times per day, a fair amount of data gets added to a staging table, then 'massaged' prior to being added to the final table. The data then gets deleted from the staging table...the end result = massive bloat!

Putting temporary data in the Front End is a very common bad practice. I consider it akin to writing a shopping list in the margins of a reference book.

Use a separate disposable database for temporary data. I call such databases Side Ends. The user's AppData folders are an appropriate location to put them.
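A Side End can be created on the fly. This is only a sketch of the idea - the file name, table and location below are illustrative, not a prescription:

```vba
' Sketch: build a disposable "Side End" for temp data in AppData\Local.
Public Function CreateSideEnd() As String
    Dim p As String
    p = Environ$("LOCALAPPDATA") & "\MyApp_SideEnd.accdb"   ' placeholder name
    If Len(Dir$(p)) > 0 Then Kill p        ' disposable: rebuild each session
    Dim db As DAO.Database
    Set db = DBEngine.CreateDatabase(p, dbLangGeneral)
    db.Execute "CREATE TABLE StagingData (ID AUTOINCREMENT PRIMARY KEY, Payload TEXT(255))"
    db.Close
    ' Link it from the FE if needed:
    ' DoCmd.TransferDatabase acLink, "Microsoft Access", p, acTable, "StagingData", "StagingData"
    CreateSideEnd = p
End Function
```

Because all the staging churn happens in this throwaway file, the FE itself never bloats, and "compacting" is as simple as deleting and re-creating the Side End.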
 
Galaxiom, having a front-end temporary table is merely a mechanism for keeping that temporary user data separate from the rest of the world, thus minimizing shared locks and collisions. The Compact/Repair-On-Close works pretty well if you just arrange to erase the temp tables first, and that is easy to do if your control panel or switchboard stays open until you exit.

Making a separate private copy of a third file for these tables that are also private is - to my feeble old mind - very much akin to splitting a table into two tables with a 1:1 relationship. And my analogy is to make a shopping list in the margins of a disposable calendar book - because my users do NOT directly run the shared front-end. They copy it. The FE file already is disposable.

So from a pragmatic viewpoint, why do I want to add a third file to an FE/BE situation when the FE is perfectly handy? Not only that, but remember that some folks use tricks they have learned here in this very forum - such as the trick of launching a script that re-copies the FE file locally and then launches that copy. If that is the case, then adding that third file is like gilding the lily. (Although I'll freely admit that some of my earlier databases were closer to skunk cabbages.)
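The clean-temp-tables-before-exit arrangement described above might be sketched like this, hooked to the switchboard form's Unload event; the table names are placeholders:

```vba
' Sketch: empty the FE's temp tables as the switchboard closes, so
' Compact on Close can reclaim the space. Table names are illustrative.
Private Sub Form_Unload(Cancel As Integer)
    Dim t As Variant
    For Each t In Array("tblStaging", "tblTempImport")
        On Error Resume Next               ' a table may not exist in every copy
        CurrentDb.Execute "DELETE FROM [" & t & "]", dbFailOnError
        On Error GoTo 0
    Next t
    ' Requires File > Options > Current Database > "Compact on Close" to be ticked.
End Sub
```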
 
I suspect the reason for "bloat" possibly relates to stored queries, execution plans and the like. Who knows exactly what Access actually keeps in the monolithic .accdb file.

Reimporting all objects to a new blank database will mean that these stored objects will no longer exist, so one of the downsides to the reduction in size will be the additional time required to construct the stored objects once again.

So personally - by all means C&R from time to time, but reimport only in special circumstances of some "deadly" corruption.

fwiw, I rarely use side ends, I do import temporary tables to the front end. I cannot recall ever having to C&R the front end.

I struggle to imagine what would increase a front end to nearly 2GB.

typically my larger front ends are up to 70MB, and backends between 500MB and 1GB. The larger backends will typically have hundreds of thousands of records.
 
Galaxiom, having a front-end temporary table is merely a mechanism for keeping that temporary user data separate from the rest of the world, thus minimizing shared locks and collisions.

In my case I have multiple processes being run on a large subset of data from a very large linked table. The queries use joins that are partway to Cartesian self joins. I copy that data to a local table to reduce the network traffic and speed up the processing.

BTW I actually abhor temp tables. The one database where I use one was an early project, before I got into SQL Server as the backend. Building it again now I would do the job in a stored procedure on the server and avoid the temp table entirely.

The Compact/Repair-On-Close works pretty well if you just arrange to erase the temp tables first, and that is easy to do if your control panel or switchboard stays open until you exit.

I prefer to avoid the Compact/Repair entirely. It demolishes the query plans etc so delays the first run. I know it is generally not a long delay but it can be significant on complex queries.

And my analogy is to make a shopping list in the margins of a disposable calendar book - because my users do NOT directly run the shared front-end. They copy it. The FE file already is disposable.

If the FE is disposable there isn't much point compacting it. I prefer to copy the FE only if it is a new version. That reduces network traffic, and avoids both the delay while the file is copied from the server and the query plan demolition mentioned above.

So from a pragmatic viewpoint, why do I want to add a third file to an FE/BE situation when the FE is perfectly handy?

My users' FE is in their AppData\Roaming folder, so it is preserved as part of their profile. Side Ends go into AppData\Local. I delete them as soon as they are no longer required, but should that fail, they won't linger after a logoff.
 
I am a little late to the party, but I have a question (or two) related to this topic and thought it better to ask here rather than start a new thread; there is some GOOD stuff on here and I thought it would be more comprehensive for those who read this in the future.

I have inherited a DB application that uses three BEs. It appears one of the BEs is for older records that are to be used for historical reporting. It was suggested earlier in this thread that this was a good practice - and I would think so too.

What I would like to know is why this is. I have read about the 2GB ceiling. Do multiple BEs keep this from happening? When an FE links to multiple BEs, does Access not sum up the total size?
 
I have read about the 2GB ceiling. Do multiple BEs keep this from happening? When an FE links to multiple BEs, does Access not sum up the total size?

It is individual tables that are linked, so the total size of the backends does not matter to the front end. However, Access has a 1GB recordset limit, so this could potentially limit unions over large tables in multiple backends.

I think the limit to the Access database size originally stemmed from the 2GB file size limit in the old FAT32 file system. Access (or more precisely its engine) needs to be able to read and write at a lower level than most apps so that it can update records without writing the entire file each time.

Microsoft would not have rewritten this to support the larger files in NTFS because it had other products, such as SQL Server, for that market. Even the Express version does 10GB, while the full version handles up to half an exabyte per database.
 
I think the limit to the Access database size originally stemmed from the 2GB file size limit in the old FAT32 file system.

Galaxiom, it was my understanding that the problem is in the 32-bit memory model under which the Access image runs. You get a 4GB address space. The first part (2GB) is where Access and its DLL files map. The second part is more amorphous and relates to the idea of "virtual" files - which is something you can do in VB to make a file become part of your memory management space for direct bit-twiddling. The BE files become virtual files. The internal memory pointers that we don't see in detail are the cause of this limit. I think Access 2.0 and Ac97 had this same 2GB limit for FAT16 files with large disk allocation units, because I seem to recall once having an Ac97 DB on a FAT16 disk with a large allocation unit. The 2GB limit wasn't qualified by the disk mapping format - only by the disk size.

Hey, after 20+ years, I can forget stuff so I could be wrong. But I don't think so.
 
