db to ->> ?

Thread starter: mission2java_78 (Guest)
My db has grown ... even after importing all objects into a fresh db, running compact and repair, using the analyzer, creating better modules, etc.

I've tried all that and I'm still at about 20 megs. I'm quite certain you cannot convert an mdb to an exe, if I remember correctly... or maybe I'm wrong.

Is it possible to convert my db so its a lot smaller with the same functionality?

Jon
 
A few ideas that may help:

If not already done, I think you should split your DB into BE/FE to divide the problem into two parts:
- The general trend for the BE containing the data is to keep growing (that's life ;) ). Not much you can do about that except set up regular compacting (compact on close...).
- For the front-end you can go for an MDE (a further compilation state), but the size gains are not necessarily tremendous, the process is irreversible, and some functionality is lost (the possibility to create new queries, etc.). The other possibility is to try a decompilation (make sure to create a back-up before, and to read up on the process first). I have already managed important gains with that method.

HTH
 
Yes, my db is in a FE/BE format.
I don't think an MDE will make too much of a significant change. I've heard of other tools that can reduce the size of a db, just not sure where to look. Any more input is welcome.
jon
 
There are a few reasons a database gets big. From what I've seen of your posts, I'm betting on my item #5.

1. You don't compact it often enough.

2. You have long strings in places where a lookup to a separate table might save space, i.e. storing literal strings from a listbox or combobox where 'Limit To List' is true. If the source list is long enough, reducing the field from the text string to an index from a lookup table can save lots of space. Sometimes you have to be relentless about it. (More about this one later.)

3. Scrupulously dump anything that can be recomputed on the fly through a query, even though it might exist in a table somewhere.

4. Where possible, link to binary data objects rather than embed or import them. Particularly if they are static in size.

5. You might just have a lot of data, dude. Live with it.

-------------------------------------------------------------------------

In order to determine whether #2 is the case, open each table one at a time in datasheet view. Using the A>>Z or Z>>A buttons on the toolbar, sort on each text field, one field at a time. (Of course, don't save the table that way. And you can't sort a Memo field so easily.) See if doing so reveals cases where text reappears very frequently. In other words, a particular raw table shows repetitious values in some fields.

If the number of rows is large enough and the field is long enough and the number of repetitions is high enough, it might be a candidate for treatment as suggested in #2. Of course, you now ask me to define "enough" .... (I knew you were going to do that!) Well, actually I can't define it cleanly. Because what you are facing is a "return on investment" situation. You have to tell yourself whether the suggested method returns enough space to help you.

How do you decide how much space you get back?.... (Darn, knew you were going to ask THAT one, too.) At least this time I can answer better.

Make a totals query that does a group-by of that field and has a second field that holds the count of the primary key field for that group. In effect, this second column tells you how often the given value repeats. You can eyeball it, or perhaps write a third query to compute the average count from the second query. This number, in some references, would be called the "cardinality" of the table with respect to that field. It tells you how many records to expect, on average, if you made that field into an index and probed each index value.
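In Access you would build this as a totals query in the query designer; the same idea can be sketched with Python's sqlite3 module, which speaks a very similar SQL dialect. The table and field names here (Orders, Supplier) are made up for the illustration:

```python
import sqlite3

# In-memory stand-in for the Access table; names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (ID INTEGER PRIMARY KEY, Supplier TEXT)")
con.executemany(
    "INSERT INTO Orders (Supplier) VALUES (?)",
    [("Acme",)] * 4 + [("Globex",)] * 4 + [("Initech",)] * 2,
)

# First query: group by the text field, count the primary key per group.
rows = con.execute(
    "SELECT Supplier, COUNT(ID) AS Reps FROM Orders "
    "GROUP BY Supplier ORDER BY Supplier"
).fetchall()

# Second step: the average of those counts is the cardinality figure
# described above -- here, 10 rows over 3 distinct values.
avg_reps = sum(reps for _, reps in rows) / len(rows)
print(rows)                  # [('Acme', 4), ('Globex', 4), ('Initech', 2)]
print(round(avg_reps, 2))    # 3.33
```

With a real table you would just point the two queries at your own field; the average tells you whether the field is worth converting.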

The higher this number, the more you will gain from converting the raw text field to a lookup in a separate table. If the cardinality is a fraction between 1 and 2, it ain't worth it. If the value is up in the 3-5 range, then it PROBABLY ain't worth it unless the strings are really big. If in the 5-10 range, it COULD be worth it to make the change unless the strings are really small. If we are in the teens or above, you are talking about a good candidate for a lookup. Because at that point, if you take the average size of each string times the number of strings, you can make an estimate of how much you save, and it could be big. And if the cardinality is over 100, you have hit the jackpot!

Suppose, for example, that the cardinality works out to 100 and the average string size is at least 20. Say that you have about 1000 commonly used strings. (This implies a table with 100,000 records, by the way.)

OK, the difference in size between the string and an integer lookup code is 20 - 2 = 18 bytes per record. As-is, the field takes up 100,000 x 20 = 2.0M bytes. Converted, the codes take only 100,000 x 2 = 200K bytes in the main table, and the lookup table adds 20 bytes per string plus 2 bytes for the code, or 22 bytes per record, times 1000 common strings = 22K bytes. Net savings: about 1.78M bytes. OK, it's an extreme case. But it gives you an idea of what can happen if you look hard enough.

Now, here's a variation on that theme. Suppose that the table shows a lot of similarities though not actual equality in a given field when sorted. Can you modify the similar fields so that they ARE equal? Then do the compression?

Data analysis of this sort can be a real pain. Look up the word 'tedium' in a dictionary and you'll see an Access DBA doing one of these analyses. The good news is that you don't need to do it often.

Don't forget that Access itself can help you with sorting, counts, and averages. And the table analyzer doesn't always catch cases like this.
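If a field does turn out to be a good candidate, the conversion itself is mechanical: build the lookup table from the distinct values, then swap the strings for codes. In Access you would do this with a make-table query plus an update query; here is the same rewrite sketched with Python's sqlite3, using the same hypothetical Orders/Supplier names as before:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (ID INTEGER PRIMARY KEY, Supplier TEXT)")
con.executemany("INSERT INTO Orders (Supplier) VALUES (?)",
                [("Acme",), ("Globex",), ("Acme",), ("Globex",)])

# 1. Build the lookup table from the distinct strings.
con.execute("CREATE TABLE Suppliers "
            "(SupplierID INTEGER PRIMARY KEY, Name TEXT UNIQUE)")
con.execute("INSERT INTO Suppliers (Name) "
            "SELECT DISTINCT Supplier FROM Orders ORDER BY Supplier")

# 2. Add a code column and fill it from the lookup via a correlated subquery.
con.execute("ALTER TABLE Orders ADD COLUMN SupplierID INTEGER")
con.execute("UPDATE Orders SET SupplierID = "
            "(SELECT SupplierID FROM Suppliers WHERE Name = Orders.Supplier)")

# 3. The text column can now be dropped (in Access: delete the field
#    in table design view, then compact). Each row carries a small code
#    instead of the full string.
rows = con.execute("SELECT ID, SupplierID FROM Orders ORDER BY ID").fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 1), (4, 2)]
```

Forms and reports then display the string again through a combo box or a join back to the lookup table.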
 
Doc, I've followed all of these... but have resorted to the following:
To reduce the size of a db you can use the following trick.
Use this on a copy of the database.
Open the copy, rename the AutoExec macro so the db won't start automatically, and close the db.
Create a shortcut to the copy of your mdb.
Open the properties of the shortcut.
In the target, enter the following:
"C:\Program Files\Microsoft Office\Office\MSACCESS.EXE" "C:\Documents and Settings\jhermiz\Desktop\Copy of IMS.mdb" /decompile
Check the path to Access, the path to the copy of IMS, and the name you gave the copy, then close the shortcut.
Double-click the shortcut to open the copy of the db.
Open a module and select Debug > Compile from the menu.
Close it and look at the size. It should have decreased a lot.


With success!
Jon
(My db is now 4 megs in space...what a difference.)
 
