Help with Large Database!

mugen_rsx

Registered User.
Local time
Today, 09:56
Joined
Dec 1, 2004
Messages
20
I have split the FE and BE, but performance got really slow. I keep my BE on a Win XP Pro machine. After I made some changes in the registry, things got a lot better: it used to take almost a minute to open, and now it takes only a few seconds. Here is a link:
http://www.winguides.com/registry/
Sorry, I forgot which tweak I applied, because it was a month ago.


However, I am now facing another problem. I have thousands of records entered EVERY DAY, and the db has jumped from 6 MB to 30 MB in a short time. I think I need to come up with a solution for this. It still runs very smoothly right now, but I'm not sure it will stay that smooth after the BE reaches 500 MB! Not many users are on it, only 3 (at most) at a time. Could anyone give me some advice here?

I would also like to know whether I can pull some OLD DATA out to another file, so the BE would not be so heavily loaded.
 
:) Hello mugen!
Do you compact the database?
I suggest you compact the database every day.
Click on "Tools", "Database Utilities", "Compact and Repair Database".

MStef alias Štef
 
I forgot: if you have Access 2000 or A2002, click on "Tools", "Options", "General", and select "Compact on Close".
 
Yes, I did. It helps a bit, but I just worry about the long run. I am completely satisfied with Access right now, and I really don't want to change. Am I inputting too much data for Access?
 
You have to compact the BE and FE separately.
 
If you are entering thousands of records per day, then Access may not be the RDBMS for you; SQL Server may be better.

I'm not exactly sure of the cause of your problem, but here are a few thoughts.

If compacting does reduce the size, then it sounds as if you are adding records and then deleting something, which creates the spare space that compacting recovers.

If this is the case, then I think you need to look at the data you are adding on a daily basis.

If you are adding and then deleting, which is what causes the bloat, then I think what happens is:

Day 1: fully compacted; new records added; spare space created from record removal; the database bloats.

Day 2: more new records, which do not actually add much to the size of the database because of the spare space created on day 1; some records deleted, creating more spare space.

Day 3: basically as day 2.

So if you are getting rapid growth initially and it then settles down, I suggest you look at the way you are processing the data.

If the rapid growth continues, then upsizing is, I think, the only option.

I am not an expert in this area at all; these are just my thoughts.

HTH

Len
 
Basically, there are many ways that Access will generate a BUNCH of slack space on a daily basis. One of them is to import data into the DB through a temporary or staging table. Then erase the staging table. Every deleted record adds to the bloat.

However, you can also generate a lot of slack if you involve lots of huge JOIN queries in the import process even if only to do lookups. You see, the JOIN query generates a temporary LIST of matching records (technically, a list of POINTERS to matching records....) and when you are done, you get rid of the list just like you get rid of deleted records.

Various ways exist to minimize daily bloat. For instance, if your data comes to you in text, don't import it using a wizard to load a temp table that later is the source for an append. Write some VBA code to directly open the file for input and parse out the fields. Directly update the targeted table through a table-type recordset.
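
As a rough sketch of that idea, assuming a comma-delimited text file (the path, table, and field names below are all invented for illustration):

Sub ImportTextDirect()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim f As Integer
    Dim txt As String
    Dim parts() As String

    Set db = CurrentDb
    ' Table-type recordset: rows go straight into the table,
    ' with no staging table or temporary query lists to clean up.
    Set rs = db.OpenRecordset("tblOrders", dbOpenTable)

    f = FreeFile
    Open "C:\Data\daily.txt" For Input As #f    ' example path
    Do While Not EOF(f)
        Line Input #f, txt
        parts = Split(txt, ",")                 ' assumes comma-delimited
        rs.AddNew
        rs!OrderNo = parts(0)                   ' example field names
        rs!Customer = parts(1)
        rs!Amount = CCur(parts(2))
        rs.Update
    Loop
    Close #f

    rs.Close
    Set rs = Nothing
End Sub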

If your data comes in through Excel, perhaps you should LINK to the Excel sheet and process that as a read-only table. Or again, there are ways to write VBA to directly open Excel as an application object and import that way.
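
For the linking route, one line of VBA attaches the worksheet as a table you can query in place; the file path and table name here are just examples:

' Link the first worksheet instead of importing it; no staging table needed.
DoCmd.TransferSpreadsheet acLink, acSpreadsheetTypeExcel9, _
    "tblDailyFeed", "C:\Data\daily.xls", True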

One source of bloat that doesn't have a high profile is indexing. If your main table has a LOT of indexes, thousands of inserts a day leads to having to rewrite the indexes thousands of times per day. Which means that though each potential rewrite is small, a thousand of ANYTHING is not as small. And if you have, say, six to ten indexes on the table, you are frequently expanding EACH INDEX.
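
If that turns out to be your main cost, one way to ease it is to drop the secondary indexes before a big load and rebuild them once afterwards. Jet's SQL DDL can do this; the index, table, and field names below are made up:

Sub BulkLoadWithFewerIndexes()
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Drop secondary indexes so thousands of inserts only touch
    ' the table and its primary key.
    db.Execute "DROP INDEX idxCustomer ON tblOrders"
    db.Execute "DROP INDEX idxOrderDate ON tblOrders"

    ' ... perform the bulk insert here ...

    ' Rebuild each index once, after the load.
    db.Execute "CREATE INDEX idxCustomer ON tblOrders (Customer)"
    db.Execute "CREATE INDEX idxOrderDate ON tblOrders (OrderDate)"
End Sub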

Unfortunately, the bad part about Access is that you cannot see the nature of the slack space inside the Access .MDB file. In other words, there is no simple way to do a slack-space taxonomy; if you could, you would be able to target your space-saving strategy a lot better.
 
Thanks everyone!! All the info is really helpful for me.
I am just a beginner with Access and I really appreciate your help.

Compacting does help a lot in my case; it actually reduced the file to less than 10 MB :D . I am still taking my time to observe what causes all the slack space. From my current observation, just under 0.7 MB of data is stored every day. I believe it will be more than 1 MB once everything is fully functioning.

One thing is on my mind right now, and I'm not sure whether it would be suitable. Right now, I have one FE and one BE. I would like to do what The_Doc_Man mentioned: keep my current BE for the staging tables and create a new BE to store my final data. Is this a good idea? Or should I just add more tables to my current BE?

I want to keep my working BE file small for as long as possible. Since I am working on a network share with 3 computers, a smaller BE file should give better performance.

I just need to share my current working orders with the other computers. All the completed (finished) orders I would like to store somewhere else. I guess this could reduce my BE size and also keep me running for years.

I never use append queries or delete queries, but I think I need them right now for copying data and deleting all the temp (working) data. Roughly, I have something like the sketch below in mind.
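
A minimal sketch of what I mean, assuming an archive file and a Completed flag (the table name, path, and field are placeholders I made up):

Dim db As DAO.Database
Set db = CurrentDb
' Append finished orders into the archive file, then remove them
' from the working BE. Jet's IN clause targets the external .mdb.
db.Execute "INSERT INTO tblOrders IN 'C:\Data\Archive.mdb' " & _
           "SELECT * FROM tblOrders WHERE Completed = True", dbFailOnError
db.Execute "DELETE * FROM tblOrders WHERE Completed = True", dbFailOnError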
 
Using an intermediate table causes bloat; it doesn't relieve it.

My suggestion is to learn to live with frequent DB compacting operations until you can comfortably write some VBA code to bypass intermediate tables almost completely. Anything that involves queries or joins will merely contribute to the bloat factor.

On the other hand, it is possible to have a database other than your primary BE as a repository for temporary tables that you LINK to your primary BE. And there is no reason why you couldn't compress that DB at will, reducing the bloat of the primary BE. Basically, I guess what I'm saying is that DB bloat is a fact of life with Access and intermediate tables. The best you can hope to do is choose WHERE the bloat will reside.
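
A sketch of that arrangement, assuming a side file called TempWork.mdb (all names and paths below are placeholders): link the temp table once, and compact the side file whenever you like.

Sub LinkStagingTable()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    ' Link tblStaging from a separate .mdb so its bloat never
    ' lands in the primary BE.
    Set tdf = db.CreateTableDef("tblStaging")
    tdf.Connect = ";DATABASE=C:\Data\TempWork.mdb"
    tdf.SourceTableName = "tblStaging"
    db.TableDefs.Append tdf
End Sub

Sub SquashTempDb()
    ' CompactDatabase cannot compact in place: it writes a new file,
    ' which you then swap in. Nobody may have the temp file open.
    DBEngine.CompactDatabase "C:\Data\TempWork.mdb", "C:\Data\TempWork_tmp.mdb"
    Kill "C:\Data\TempWork.mdb"
    Name "C:\Data\TempWork_tmp.mdb" As "C:\Data\TempWork.mdb"
End Sub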
 
Thanks Doc_Man! I am now working on a secondary BE file to store that rarely used data.

My VBA knowledge is very limited, but I think it is time to learn some more to handle what I am doing now. So basically, we should pull data directly from the tables rather than going through any queries: use VBA as much as possible and avoid queries.

I use a lot of queries, more than I actually need. I think this is one of the reasons for such a huge bloat in my db. In many cases, pulling from a table directly would do the job, but I still made a query for it. The reason... well... I don't know either. This is what I was taught at school, but I may be wrong, because it was 2-3 years ago that I learned Access. My professor ALWAYS created a query, so I just followed what he did. Almost all my tables have a matching query. I guess it is also time to clean up all those useless queries.
 
Crazy idea for you

I may have overstated the case a bit and I don't want to mislead you. If you must use a query to perform an append or update, use it. However, there are some practices to avoid if you have VERY LARGE updates to perform on a regular basis.

First, nesting a query within a query (perhaps within a third query) causes each query to create a list of record pointers that must temporarily be stored somewhere and which can contribute to bloat. This should not be that large because the lists aren't that large. Nor am I sure that the lists always get stored in the DB if you have enough physical memory. This is not always clear with Access.

Second, staging data to a secondary (temporary) table inside your db creates an actual table that must eventually be released. And this is likely to be proportional in size to the amount of data to be updated/appended. Further, Access flat out sucks at space reclamation while it is running. It just can't reclaim space "on the fly" like some systems can. This is the actual cause of bloat - the inability to do dynamic space reclamation.

Third, if you use a secondary table and it has indexes on it, the indexes must also be released eventually. Again, this will be proportional in size to the amount of data involved. Yet indexes are useful. Maybe in some cases, unavoidable.

What I am saying is: reduce temporary table use to an absolute minimum if it is in your primary DB. Minimize the number of indexes on the temp table, down to a primary key and nothing else, perhaps. Try to avoid nesting queries over large record sets as part of the append, make-table, or update queries that you run.

Where it makes sense and where you have the expertise to do so, using VBA to read a file and do direct recordset operations (particularly if it is an append operation but also for some updates) is the most space-efficient method. Like all alternatives, this also involves a trade-off: complexity to code vs. minimization of space requirements.

BUT you can also open a second database file (.mdb) via linking or other methods. If you put your temp tables in the second database, you can perform some maintenance operations on that second DB while users are searching the primary one. An erase query to clean up tables in the secondary DB performed from workstation A won't affect the speed of a select query performed by workstation B if the select query doesn't require an operation in that secondary DB.
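
For instance, the cleanup can open the side file directly, so the erase query never touches the primary BE at all (the path and table name are the same made-up placeholders as above):

Sub CleanTempTables()
    Dim tmpDb As DAO.Database
    ' Open the secondary .mdb directly; the delete runs there and
    ' takes no locks on anything in the primary BE.
    Set tmpDb = DBEngine.OpenDatabase("C:\Data\TempWork.mdb")
    tmpDb.Execute "DELETE * FROM tblStaging", dbFailOnError
    tmpDb.Close
End Sub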

Further, you can even keep the secondary DB on another disk or in another folder. Now here, I'm going to offer a thought for experimentation that might be more than you want to do, but maybe it is workable. Think about it before you try it 'cause it isn't trivial.

Right now, you have some users sharing the BE database. But technically, this works because their FE databases point to the BE database for the tables they use.

If you had a custom FE that WASN'T a true copy of the FE the others use, you could link some tables inside your private copy that are not in the shared FE. Your temporary update table definitions could live there and nowhere else. In that case, if the temp DB is local to the same drive where the special FE resides, an update operation will be faster than it would be if everything were in the BE DB, and the temp table erasure would be totally invisible to other users. Nor would it involve file or table locks on the BE DB, since the temp tables aren't there.

The cost of this is, in essence, double-entry bookkeeping: your "special" FE has to be updated separately every time your primary FE gets updated. Now, you can point to the primary FE from the special FE and do the appropriate import operations; it's just that you've added a layer of complexity. On the other hand, if it works to reduce the bloat factor of your DB, maybe it is a good idea. That will have to be your call.
 
