Give me speed!

bigal.nz

Hi Access peeps,

I am looking to make my database faster.

Some information:

The database has about 10,000 records (and growing). It has one main table, and the users' front end is split from it - they access it across an enterprise LAN, with nominally about 5 users at a time in the DB.

The primary key is an auto-generated ID, and I have indexed a few other fields that are used as WHERE conditions in the queries (like date and status, since most often we are only dealing with recent events).
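
For what it's worth, the extra indexes were created along these lines (the table and field names here are just placeholders, not the real ones):

CREATE INDEX idxEventDate ON tblMain (EventDate)
CREATE INDEX idxStatus ON tblMain (Status)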

I compact/repair regularly, which does improve things, but the improvement is short-lived.

Any other suggestions? Could I archive old data in another table? If so, how?

Wouldn't archiving in this way make it difficult to search for stats covering the whole period since the database was created?

Cheers in advance

Al
 
10k records should not bog it down.
Be sure you compact regularly. But it could also be:
network speed
cat5 cables
PC speed
PC RAM
 
What is the database about? What sort of processing is involved (updates, read-only, etc.)?
Was it slow when you tested it before going to production?
Has it just become slow recently?
 
One main table may be OK, but it implies the data is not normalised.
Splitting the db is the right thing to do, but it is not clear from your post whether each user has their own front end located on their machine or they are all using the same front end on the server. They should each have their own copy on their machine.

10k records is tiny, archiving will not achieve anything.

Sounds like you have indexing covered.

Take a look at this link for more ideas

https://www.access-programmers.co.uk/forums/showthread.php?t=291269
 
It is all very well to have indexes but if the queries are not designed to use them then they won't help.

A classic error with date fields is to use the Year() and Month() functions to select records for a particular month when the criteria should use a Between construct.
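
For example (table and field names are made up just to illustrate), this forces Access to evaluate the functions on every record, so an index on the date field is ignored:

SELECT * FROM tblMain WHERE Year(EventDate) = 2017 AND Month(EventDate) = 6

whereas this keeps the criteria on the raw field, so the index can be used:

SELECT * FROM tblMain WHERE EventDate BETWEEN #06/01/2017# AND #06/30/2017#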
 
For that matter, if your queries call ANY VBA function specific to your application, such as a formatter or a numeric formula, the query has to stop and call that code once for each record, and VBA is interpreted, not compiled. SQL aggregates aren't so bad, but if you have a DLookup in your query you have also just done yourself in.
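
To illustrate with made-up table and field names, something like this has to call interpreted VBA once per record:

SELECT OrderID, DLookup("CustomerName", "tblCustomers", "CustomerID=" & CustomerID) AS CustName FROM tblOrders

while a join lets the database engine do the same lookup in one pass:

SELECT o.OrderID, c.CustomerName FROM tblOrders AS o INNER JOIN tblCustomers AS c ON o.CustomerID = c.CustomerID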
 
10k records should not bog it down.
Be sure you compact regularly. But it could also be:
network speed
cat5 cables
PC speed
PC RAM

Sorry for the belated reply on this one.

The computers are Core i5 with 8 GB RAM, on Fast Ethernet over Cat5.
 
Just checking - have you read the other four posts on your thread?
 
One main table may be OK, but it implies the data is not normalised.
Splitting the db is the right thing to do, but it is not clear from your post whether each user has their own front end located on their machine or they are all using the same front end on the server. They should each have their own copy on their machine.

10k records is tiny, archiving will not achieve anything.

Sounds like you have indexing covered.

Take a look at this link for more ideas

https://www.access-programmers.co.uk/forums/showthread.php?t=291269

Thanks for that. A little more information:

The database is split (is that different to normalised?) and each user accesses it via a front end on their respective PC. The back end is on the server.

It's got progressively slower as time has gone on, and compact/repair helps for only a matter of a few hours.
 
Just checking - have you read the other four posts on your thread?

It may also be worth mentioning that one of the fields is a Memo field that stores largish volumes of text (about a page's worth).
 
Split and normalised are two different things.

Split refers to the backend being split from the front end (which you have done).

Normalisation refers to how tables and relationships are constructed. Unnormalised data (typically as seen in Excel) will almost by definition be slow, because the volume of data brought across the network is higher, the queries are more complex, and maintenance costs are higher.
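
A rough sketch of the difference, using made-up table and field names. Unnormalised (Excel-style), everything in one wide table that repeats the customer details on every row:

CREATE TABLE tblEvents (EventID AUTOINCREMENT PRIMARY KEY, EventDate DATETIME, CustomerName TEXT(100), CustomerPhone TEXT(30), Status TEXT(20), Notes MEMO)

Normalised, with the repeating details moved into their own table and referenced by key:

CREATE TABLE tblCustomers (CustomerID AUTOINCREMENT PRIMARY KEY, CustomerName TEXT(100), CustomerPhone TEXT(30))
CREATE TABLE tblEvents (EventID AUTOINCREMENT PRIMARY KEY, EventDate DATETIME, CustomerID LONG, Status TEXT(20), Notes MEMO)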

Without knowing more, if you are trying to apply Excel logic and processes to Access, you are on the wrong path.
 
Bigal.nz - I suggest that you look up the topic of "normalization" in this forum, or "database normalization" on the web.

As to having a big memo field, the next question is "how big"? 500 bytes? 1k? 10k? That is important because memo fields require special treatment. They cannot be stored with the relevant record so require extra handling for retrieval.
 
As to having a big memo field, the next question is "how big"? 500 bytes? 1k? 10k? That is important because memo fields require special treatment. They cannot be stored with the relevant record so require extra handling for retrieval.

Will have to check size. Do you store them in a related table?
 
No. You understand that MS doesn't post full details of the internals of everything that Access does. However, I recall reading about this once.

Because there is a record-size limit of one disk buffer, which is 2 KB (or is it 4 KB? it doesn't really matter), and a "Long Text" field (the new name for "Memo" fields) can exceed that buffer size, there is a place-holder in the record that points to the separately stored long text item. The pointer is part of the record. The long text item is USUALLY physically close to the record that references it, but there is neither a guarantee nor a need for the record and the memo field to be adjacent to each other, internal-address-wise.
 
