Bloat

edwarric1

I am running 5 append queries that increase the size of a linked database from 0.5 GB to 1 GB! How can I reduce this bloat? I need to run these queries regularly and therefore have to compact the linked database each time.
 
Can you set the database to compact on close?
 
So they are append queries and not "make table" queries?
 
Do make-table queries reduce bloat, since they are append queries?
I do compact after each run but was concerned about the bloat!
Database doubles in size!
 
Do make-table queries reduce bloat, since they are append queries?
No, make-table queries would bloat it further.
I do compact after each run but was concerned about the bloat!
Database doubles in size!

What kind of data? Where's it coming from, how many fields, how many records and how are you importing it?
 
The database has 2 tables with 85,000 records and 137 fields each; they start at 360 MB but double in size when I delete and append new records to them from another linked database.
 
Just to confirm:

you are deleting 85,000 records, and reinserting them all?

Why not just do compares and append the new ones / update the changes?

That must be quicker, with much less bloat.
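The compare-and-append idea can be sketched in SQL. This is a minimal sketch using Python's sqlite3 as a stand-in for Jet; the table and column names (Txns, NewTxns, ID, Amount) are invented for illustration. In Access you would build the equivalent as an update query plus an unmatched (append) query.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Existing table and the incoming month's data (hypothetical names/columns)
cur.execute("CREATE TABLE Txns (ID INTEGER PRIMARY KEY, Amount REAL)")
cur.execute("CREATE TABLE NewTxns (ID INTEGER PRIMARY KEY, Amount REAL)")
cur.executemany("INSERT INTO Txns VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
cur.executemany("INSERT INTO NewTxns VALUES (?, ?)", [(2, 25.0), (3, 30.0)])

# Update rows that already exist but changed...
cur.execute("""
    UPDATE Txns
    SET Amount = (SELECT n.Amount FROM NewTxns n WHERE n.ID = Txns.ID)
    WHERE EXISTS (SELECT 1 FROM NewTxns n WHERE n.ID = Txns.ID)
""")
# ...and append only the genuinely new rows
cur.execute("""
    INSERT INTO Txns (ID, Amount)
    SELECT n.ID, n.Amount FROM NewTxns n
    WHERE NOT EXISTS (SELECT 1 FROM Txns t WHERE t.ID = n.ID)
""")
con.commit()
print(sorted(cur.execute("SELECT ID, Amount FROM Txns").fetchall()))
# → [(1, 10.0), (2, 25.0), (3, 30.0)]
```

Because only the changed and new rows are written, far fewer pages are touched than with a full delete-and-reload, which is where the reduced bloat comes from.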
 
New records have no reference to previous months as they are from a different financial month.
 
Database as 2 tables with 85,000 records and with 137 fields each that start at 360mb but doubles in size when a delete and append new records to them from another linked database.
137 fields in an Access table sends me a warning signal that perhaps you should look at your design and check whether your data is properly normalised. As a rule, Access tables should be tall and thin, not short and wide.
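The "tall and thin" point can be shown with a toy schema. This sketch uses sqlite3 as a stand-in for Jet, and the names (WideBalances, Balances, AccountID, Period, Amount) are invented; the idea is that a repeating group of similar columns usually belongs as rows in a child table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Short and wide: one column per period (the 137-field pattern in miniature)
cur.execute("CREATE TABLE WideBalances (AccountID INTEGER, Jan REAL, Feb REAL, Mar REAL)")
cur.execute("INSERT INTO WideBalances VALUES (1, 100.0, 110.0, 120.0)")

# Tall and thin: one row per account per period
cur.execute("CREATE TABLE Balances (AccountID INTEGER, Period TEXT, Amount REAL)")
cur.executemany(
    "INSERT INTO Balances VALUES (?, ?, ?)",
    [(1, "Jan", 100.0), (1, "Feb", 110.0), (1, "Mar", 120.0)],
)

# A new period is just more rows; the wide table would need a schema change
cur.execute("INSERT INTO Balances VALUES (1, 'Apr', 130.0)")
print(cur.execute("SELECT COUNT(*) FROM Balances").fetchone()[0])  # → 4
```

A tall table also means the monthly load only appends the current period's rows instead of rewriting every wide record.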
 
So are you deleting the table for last month, and reloading a new table for this month?

How are you doing this - are you creating a new table each time?

It might be easier to DELETE * FROM the table, then INSERT INTO it - this will retain any indexes you have created, and may also not give you so much bloat.


It depends how big the records are, of course - but 80,000 records can hardly be 500 MB, so something strange is going on.
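The delete-then-insert-in-place approach looks like this, again with sqlite3 standing in for Jet; Monthly, Staging and the index name are invented for the sketch. The point is that the index survives the reload, which would not be true if the table were dropped and recreated each month.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Monthly (ID INTEGER, Amount REAL)")
cur.execute("CREATE INDEX idx_monthly_id ON Monthly (ID)")
cur.execute("CREATE TABLE Staging (ID INTEGER, Amount REAL)")
cur.executemany("INSERT INTO Staging VALUES (?, ?)", [(1, 5.0), (2, 7.5)])

# Reload in place: delete everything, then append the new month's rows
cur.execute("DELETE FROM Monthly")
cur.execute("INSERT INTO Monthly SELECT ID, Amount FROM Staging")
con.commit()

# The index is still there because the table itself was never dropped
indexes = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='Monthly'")]
print(indexes)  # → ['idx_monthly_id']
print(cur.execute("SELECT COUNT(*) FROM Monthly").fetchone()[0])  # → 2
```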
 
The delete and append queries run from another database that links to this 500 MB one; they delete the old records and append new data, which then doubles the size of the linked database. i.e. the database has 2 tables with 90,000 records, both with 137 fields, but I can't see why it bloats so much!
 
It occurred to me afterwards - you are importing the new table, so my idea might help a bit, but probably not much.

So - new data table:
90,000 records times 137 fields, depending on the size of each field - say 10 bytes average - is roughly 120 MB of data alone.
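That back-of-envelope figure checks out; the 10-byte average per field is the post's assumption, not a measured value:

```python
records = 90_000
fields = 137
avg_bytes_per_field = 10  # assumed average, as in the post

raw_bytes = records * fields * avg_bytes_per_field
print(raw_bytes / 1024 / 1024)  # ~117.6 MB of raw data, before any indexes or overhead
```

So each monthly load carries on the order of 120 MB of raw data, before Jet adds index and page-management overhead on top.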

Then deleting the existing data (or adding this new table to the existing table) probably adds the same again...

Then add indexes etc. (it's not as simple as just extending a text file) - so it probably does make a big difference.

It's probably not so much bloat as just a super-big database - the normal reasons for bloat are images and temporary files (you do have the latter) - but I think you just have a big data requirement.


a) do you ever run out of space in the dbs?
b) is compacting an issue?
c) perhaps using a SQL backend would make a difference - certainly you could go bigger than 1 GB or 2 GB (depending on Jet version)
 
Dave,

a) I never run out of space in the dbs
b) No problem compacting
c) SQL backend - don't have one of those!

I have split the 2 tables of 90,000 records into 2 separate databases. One has 121 fields and the other 137 fields. After deleting and appending fresh data, the one with 137 fields bloats from 188 MB to 377 MB, whereas the one with 121 fields bloats from only 171 MB to 189 MB! Is this because of the extra 16 fields?

rgds
Rich
 
As I say, I think it's just the sheer size of the data files you are handling.

Access needs to store the data - it also needs to store any temporary files, although you can get the space back from these by compacting. So importing a big file into a new table, and then copying that table into another table, will increase the database size - if these files are very large, then the database will grow to a very large extent.

It does not indicate any errors. The only issues are that
a) your database may exceed the maximum size allowed by Jet
b) you may not be able to transport it so easily via a memory stick, say

Depending on what the files are that you are importing, you may be able to treat them as an external linked data source, which might reduce the size of your database - but that may or may not be appropriate.

Assuming that you do need to import them, Access has its own paradigm for handling files - the new data is not just concatenated as additional text. It is all managed by Access, which also needs to maintain indexes etc. for many of your fields. I am sure Access doesn't even necessarily store text fields as identical text strings - since strings are stored in variable lengths, Access must surely use some algorithm to split strings into chunks (probably a linked list of some sort), so that it can easily compute the location of any string within the physical .mdb file. So there is certainly an overhead for the database manager (i.e. Access) over and above the size of the data.

What has already been pointed out is that a table with 137 fields is unusually wide - and you may be able to structure this in a better way. The fact that you have been able to split this into a file of 137 fields and another file of 121 fields definitely indicates something is amiss. If the data were normalised, then you would not be able to achieve any reduction in this way.
 
