DB when split over network goes slowly - is it structure?

isaacski

Hi All,
This is my 4th or 5th db created since November, when I knew nothing of Access or VBA. This db was built with the hope of being "normalized". I am now worried that I've pushed the limitations too far, which is causing the db to be slow and to barely work when split and placed in an ancient network folder.

Background: This db is used for a group of analysts to track scores and their assignments for evaluations performed for our employees on their performance. There are a number of other things that need to be tracked along with just the score, i.e. trends (or what areas were scored down and why), audits (for newbies to get checked).

Considering that I'm working with 10-12 analysts who aren't comfortable with technology, much less scary "Access" databases, I've designed a dashboard-style interface for them to use. This database with its current amount of data (which currently is just reference data) works great locally.

I was wondering if someone might be willing to take a look and let me know if there's anything that should be changed, if I'm on the right track, or if this db is completely and utterly incorrect...
 
I would recommend using either MS SQL Server Express or MySQL (preferred since it is a free Open Source database). Use MS Access as a front-end only.

We initially had an MS Access back-end on a local server. At first it was OK, but as we got more records it became slow and unreliable. Consequently we moved it to MS SQL Server and it has been working fine ever since.

At home I use MS Access as a front-end and MySQL as the back-end. Works fine.
 
Thanks for the reply, Steve. I've spoken to the tech mgr and his worry was that utilizing SQL Server Express would leave the data open. Is it that the data is stored on Microsoft's server(s) and accessed through a personalized account? (similar to a photobucket account?) I've researched the option as I believe SQL Server Express is what we need but there isn't really a lot of concrete information about where it is stored... (probably because they assume anyone who is considering this knows where it is stored, ugh.)
 
I had a quick look - you don't have much in terms of referential integrity. Look up the concept. But what makes everything slow is that you barely use indexes. Read the links I gave you, and set up indexes on fields used in WHERE clauses, GROUP BY clauses, and joins. Using an unindexed field in query criteria is like looking for a name in a phonebook that is NOT sorted alphabetically - a lot of pain. Tables without proper indexing will be slow in SQL Server too, so forget the server for now and fix your db first.
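To make the advice concrete - a hedged sketch in Access DDL, with hypothetical table and field names (a Scores table with an AnalystID foreign key and an EvalDate field used in criteria); substitute your own:

```sql
-- Index the foreign key used in joins:
CREATE INDEX idxScoresAnalystID ON Scores (AnalystID);

-- Index a field used in WHERE / GROUP BY criteria:
CREATE INDEX idxScoresEvalDate ON Scores (EvalDate);
```

You can run DDL like this from a query in SQL view, or set the same indexes in table design view via the Indexed property - the effect is the same.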
 
One problem is that as the back end grows, queries have to read more and more of the file, and you start bogging down your network. An Access back end is just a file that the front end reads using the SMB protocol (Server Message Block), pulling the relevant parts of the file across the wire. If you run a summary query on a large, unindexed table, you will probably read every part of the file - literally. That is where your time goes.

Things to do to make a shared back-end go faster:

1. Index things that you search most often, particularly if you search more than you modify. Normalize as much as possible. Structures don't have to be fine-tuned - but a really bad structure CAN get in the way.

2. Minimize the data sizes of fields in your records because you get more efficiency with smaller records. If you don't need longword integers for everything, don't use them. Consider ways to defer complex joins until you are ready to work with a near-final result set. I.e. don't generate queries that drag along translations of fields that support lookups for names and such. Generate results and translate only when there is nothing else left to do. Also sometimes known as the "travel light" rule. Don't carry baggage you don't need.

3. Consider using some local tables in the front end as temporaries for computations or lookups based on relatively static data.

4. Make it a regular practice to compact and repair your database - and make backups every time you do a compact and repair. If there is a way to strip out or archive older data, do that BEFORE you compact and repair. Anything that reduces the size of the back end helps tremendously.

5. When nothing else works, upgrade the back end. If your network is already as fast as it is going to get, then you have to reduce the amount of traffic - and that usually means replacing the passive Access back-end file with an active server engine such as SQL Server or MySQL.
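To illustrate the "travel light" idea from point 2, here is a hedged sketch with hypothetical table and query names: aggregate on the indexed IDs first, and join the lookup table for names only against the small, near-final result set:

```sql
-- qryScoreSummary: aggregate on the indexed ID only - no name lookups yet.
SELECT AnalystID, Avg(Score) AS AvgScore
FROM Scores
WHERE EvalDate >= #1/1/2013#
GROUP BY AnalystID;

-- Final query: translate IDs to names only when nothing else is left to do.
SELECT a.AnalystName, s.AvgScore
FROM qryScoreSummary AS s
INNER JOIN Analysts AS a ON a.AnalystID = s.AnalystID;
```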
 
One other thing - if any of your users are connecting over anything other than a wired LAN, you are likely to see poor performance.

A long-distance WAN will be slower. A wireless connection will be slower, and may be less reliable.
 
I didn't open the application but one of the most important things you can do to speed up a split database is to create a persistent connection in the front end that doesn't close until you close the program. Have you tried this?
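In case it helps, a minimal sketch of a persistent connection, assuming a hypothetical back-end path (DAO code for a standard module in the front end):

```vba
Option Explicit
' Held for the life of the application so the connection
' (and the .laccdb lock file) is negotiated only once.
Private dbBackEnd As DAO.Database

Public Sub OpenPersistentConnection()
    ' Path is an assumption - point it at your own back end.
    Set dbBackEnd = DBEngine.Workspaces(0).OpenDatabase( _
        "\\Server\Share\Evaluations_be.accdb")
End Sub

Public Sub ClosePersistentConnection()
    If Not dbBackEnd Is Nothing Then
        dbBackEnd.Close
        Set dbBackEnd = Nothing
    End If
End Sub
```

Call OpenPersistentConnection from your startup form's Open event, and ClosePersistentConnection when the application shuts down.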

SHADOW
 
3. Consider using some local tables in the front end as temporaries for computations or lookups based on relatively static data.

Much good advice from Doc above but I disagree with this one.

Local tables for unchanging data is fine.

However temporary data should not be written into the front end. It should go into a separate local database. I call this the Side End. Search the forum for this term to see previous discussions.

If you write to the front end, it will need to be compacted. The front end should be treated as an application file, not a data file.
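A hedged sketch of the side-end idea, with hypothetical names (DAO code; the commented SELECT ... INTO shows one way to push temporary results into the side end instead of the front end):

```vba
' Create (once) a separate local database for temporary tables,
' so the front end itself is never written to.
Public Sub EnsureSideEnd()
    Dim strPath As String
    strPath = Environ("TEMP") & "\MyApp_SideEnd.accdb"
    If Dir(strPath) = "" Then
        DBEngine.CreateDatabase strPath, dbLangGeneral
    End If
    ' Write temp data into the side end, e.g.:
    ' CurrentDb.Execute "SELECT * INTO tmpResults IN '" & strPath & _
    '     "' FROM qrySomeBigQuery"
End Sub
```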
 
Galaxiom, of course you are correct that a second local DB is a good idea, but I was just avoiding trying to confuse the OP with too much at once. Sometimes I do that without intending to overload folks. If you hadn't noticed, sometimes I tend to ramble on.
 