Access is slowing down big time!

Goalie35
New member
Joined: Jan 18, 2005
Messages: 5
I have an e-commerce site with an MS Access database that worked fine for the first six months we were up and running.

Our business has picked up recently, however, and for the past several days the Access database has been running at a snail's pace. It's so bad now that our sales have taken a huge hit because customers get frustrated and leave. Are there things I should do to tweak my Access DB so it runs more quickly, or should I look into upgrading to SQL Server?

Thanks in advance.

-Goalie35
 
Access is excellent for small databases. It was never meant for storing large amounts of data.

If you're using it for an enterprise e-commerce database where multiple users are logging on and you're seeing heavy transaction volume, you would be much better off moving the database to a full SQL Server installation.
 
As an interim measure, consider doing regular repair/compact procedures until you can do the conversion.

Access DBs have the same problem that a lot of MS software has: reclaiming dead space ain't worth a rat's patootie. Jet never gives back the space from deleted records on its own, so the file just keeps growing.

Anything you do in an Access DB can bloat the file with deleted but not-yet-reclaimed records. Compacting and repairing takes care of that. Since your business depends on this data, I would make three copies during the process...

1. Before you do anything.
2. After repair.
3. After compaction.

ONLY after a successful compaction should you discard copies 1 and 2; keep the copy from step 3 as your backup. A scripted version of the whole procedure is sketched below.
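If you want to script that instead of clicking through the UI, here's a minimal sketch in Python driving the DAO COM object via pywin32. Everything specific here is an assumption for illustration: the path is made up, and the ProgID assumes DAO 3.6 / Jet 4.0 (the usual engine for an .mdb of that era). Note that DAO's compact also performs the repair, so steps 2 and 3 collapse into a single pass:

```python
import shutil
import win32com.client  # pywin32; assumes DAO 3.6 (Jet 4.0) is installed

DB = r"C:\inetpub\shop\shop.mdb"  # hypothetical path to the live DB

# Copy 1: before you do anything.
shutil.copy2(DB, DB + ".before")

# Steps 2/3: CompactDatabase repairs and compacts in one pass and
# writes the result to a NEW file (it will not overwrite in place).
engine = win32com.client.Dispatch("DAO.DBEngine.36")
engine.CompactDatabase(DB, DB + ".compacted")

# Swap the compacted file in only after the line above succeeds;
# keep the pre-compact file around until you've verified the swap.
shutil.move(DB, DB + ".precompact")
shutil.move(DB + ".compacted", DB)
```

Only once the new file opens cleanly should you throw away the .before and .precompact copies.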

Also, I would seriously examine your system's swap-space and physical-memory limits. Most modern versions of Windows have performance monitoring built in; look for depleted swap space, frequent swapping, etc. Anything that incurs swapping overhead will eat your lunch in a heartbeat! You combat that overhead with more memory or bigger swap limits, which implies that you CAN increase the swap limits.
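If you'd rather script that check than sit in front of Performance Monitor, here's a quick sketch using Python's psutil package. The library is my suggestion, not something built into Windows, and the thresholds are placeholders:

```python
import psutil  # third-party: pip install psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"RAM : {vm.percent:5.1f}% used of {vm.total // 2**20} MB")
print(f"Swap: {sw.percent:5.1f}% used of {sw.total // 2**20} MB")

# Sustained heavy swap use while the site is busy is the red flag here.
if sw.percent > 50 or vm.percent > 90:
    print("Memory pressure: add RAM, raise swap limits, or trim the workload.")
```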

Also, if you have the time in your schedule, I would run a DEFRAG during your down time.

If you have a second disk that has the room, consider moving either the DB or the swap files to the second disk so that they don't contend with each other for space allocation and big block moves.
 
After I posted the above, I had another thought.

Given the nature of your business, only you can answer this. What is the longevity of records in your DB? (I.e. consider their life cycle.)

If you can split the DB into two parts (warning: I'm not talking about an FE/BE split), you can keep one DB for historical research on anything older than time X and use the other for current/recent data. Then devise a procedure that archives the current/recent records to the historical DB and deletes them from the live DB only after the archive step succeeds. That shrinks the tables and indexes your everyday queries have to wade through, which speeds things up. You can still reach the archive from your primary DB using linked tables. A rough sketch of the archive step follows below.
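Here's roughly what that archive step could look like scripted in Python with pyodbc. The driver string is the classic Jet ODBC driver; the paths, the Orders table, the OrderDate column, and the cutoff date are all invented for the example. Access SQL's IN clause lets you write straight into the other .mdb:

```python
import pyodbc  # assumes the Jet ODBC driver ("Microsoft Access Driver") is installed

LIVE    = r"C:\inetpub\shop\shop.mdb"       # hypothetical paths
ARCHIVE = r"C:\inetpub\shop\shop_hist.mdb"
CUTOFF  = "#2004-07-01#"                    # an Access date literal: "time X"

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=" + LIVE, autocommit=False
)
cur = conn.cursor()

# Copy the old records into the historical DB first...
cur.execute(
    f"INSERT INTO Orders IN '{ARCHIVE}' "
    f"SELECT * FROM Orders WHERE OrderDate < {CUTOFF}"
)
# ...then delete them from the live DB inside the same transaction,
# so a failed copy never costs you data.
cur.execute(f"DELETE FROM Orders WHERE OrderDate < {CUTOFF}")
conn.commit()
conn.close()
```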

Another thing to do is review how many indexed fields you have versus how often you actually use those indexes (i.e., how often you search for something based on them). If you have more than two or three indexes, any index that isn't part of a frequent search should be removed. Every index adds overhead to each update, insert, and delete as the table grows, and once the tables get big enough, the overhead of carrying too many indexes gets really nasty.
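To take that inventory from a script, pyodbc can enumerate a table's indexes through the standard ODBC catalog call. As before, the path, the Orders table, and the idxShipCity index name are hypothetical:

```python
import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\inetpub\shop\shop.mdb"
)
cur = conn.cursor()

# List every index on the table so you can compare it against the
# searches you actually run day to day.
for row in cur.statistics("Orders"):
    if row.index_name:
        print(row.index_name, "->", row.column_name)

# Dropping an index you never search on is one Access SQL statement:
# cur.execute("DROP INDEX idxShipCity ON Orders")
# conn.commit()
```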

The combination of archiving and reducing the number of indexes might help your performance while you convert to a more robust db program.
 
