Addressing performance issues

CJ_London

Super Moderator
Staff member
Local time
Today, 18:33
Joined
Feb 19, 2013
Messages
12,602
As your db expands and there are more active users, it is quite possible you will experience performance issues and may be considering upsizing to SQL Server or MySQL.

Before considering such a transition, you should review your current design:

Overview
From a performance perspective, Access should be treated more like a web application than an Excel workbook, and designed with the same considerations in mind: primarily, minimising network traffic.


Tables
1. are the tables properly normalised? If not, queries will be more convoluted and slower.
2. have all lookup and multivalue fields been removed from tables? They are not compatible with other database engines, so will need to be removed anyway if you are upsizing; in the meantime they are a drag on performance.
3. are the tables properly indexed? Any field used in joins, or regularly used for searching and/or sorting, should be indexed - but not fields which mainly contain nulls or a limited range of values, such as boolean fields. See this link for more information - http://www.access-programmers.co.uk/forums/showthread.php?p=1516326#post1516326
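As a sketch of point 3, indexes can be created in the table designer or with DDL run as a query; the table and field names below are hypothetical:

```sql
-- index the foreign key used in joins (hypothetical names)
CREATE INDEX idxCustomerID ON tblOrders (CustomerID);

-- index a field regularly used for searching and sorting
CREATE INDEX idxOrderDate ON tblOrders (OrderDate);
```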

Queries
4. have all domain functions been replaced with subqueries/joined tables? Domain functions are slow and do not use indexing - see this link for alternatives: https://www.experts-exchange.com/articles/1921/Access-Techniques-Fast-Table-Lookup-Functions.html
5. are UDFs (user-defined functions) designed efficiently, and reviewed to see if they can be moved to SQL? UDFs are generally slower than SQL but sometimes cannot be avoided - see the link in 4 above regarding the use of Static variables, which can improve performance.
6. have initial wildcards in searches (i.e. Like "*something*") been removed, with users trained to use them only when required? An initial * negates the use of indexes, with a resultant impact on performance - see the link in 3 above.
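To illustrate point 4 (table and field names here are hypothetical), a per-row domain function can usually be replaced with a join, which the engine can optimise using indexes:

```sql
-- Slow: the domain function is evaluated once per row
SELECT OrderID,
       DLookUp("Rate", "tblRates", "RateID=" & [RateID]) AS Rate
FROM tblOrders;

-- Faster: the equivalent indexed join
SELECT O.OrderID, R.Rate
FROM tblOrders AS O
INNER JOIN tblRates AS R ON O.RateID = R.RateID;
```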

Forms/Reports
7. have these been designed to return only the records and fields required before being populated? i.e. do not just use a whole table as a recordsource.
8. have these been designed not to populate subforms/subreports until required? As a minimum, on or after the main form's Current event.
9. have any controls with domain functions been reviewed to see if the data can be included in the recordsource instead?
10. are combo/listbox rowsources designed to list only what is required, in terms of both width and depth? A combo listing 20,000 customer names is not efficient and slows the loading of a form. Instead, leave the rowsource blank until the user has typed a few characters, then populate it based on what was typed.
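A minimal sketch of point 10, assuming a hypothetical customer combo and table: the rowsource stays empty until the user has typed a few characters.

```vba
' Populate the combo only once three characters have been typed
' (control, table and field names are hypothetical).
Private Sub cboCustomer_Change()
    If Len(Me.cboCustomer.Text) >= 3 Then
        Me.cboCustomer.RowSource = _
            "SELECT CustomerID, CustomerName FROM tblCustomers " & _
            "WHERE CustomerName Like '" & Me.cboCustomer.Text & "*' " & _
            "ORDER BY CustomerName"
    Else
        Me.cboCustomer.RowSource = ""   ' too few characters - keep it empty
    End If
End Sub
```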

Modules
11. do they have Option Explicit just below Option Compare Database?
12. are they compiled?
13. have you done a decompile?
14. are the front ends distributed as .accde, with each user having their own copy?

15. has CurrentDb been assigned to a database object which is then referred to in code, rather than calling CurrentDb each time?
16. does the front end maintain a persistent connection with the back end?
- if 15 is done right, it will
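Points 15 and 16 can be sketched as follows (module and procedure names are illustrative, not prescriptive):

```vba
' In a standard module: one Database object for the whole session.
' Because the object stays alive, it also maintains a persistent
' connection to the back end (point 16).
Public curDB As DAO.Database

' Call once at startup, e.g. from an autoexec macro or the first form's load
Public Sub InitApp()
    Set curDB = CurrentDb
End Sub
```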

Access
17. has Name AutoCorrect been turned off?
18. has the front end been compacted?

Network
19. does it have sufficient bandwidth for the number of users and level of traffic?
20. can bandwidth be increased?


Much of the above would need to be addressed anyway to take advantage of the performance benefits of upsizing the back end - and if there is no other reason for upsizing, such as db size, improved security or a high volume of concurrent users, addressing it may mean you don't need to upsize at all.
 

static

Registered User.
Local time
Today, 18:33
Joined
Nov 2, 2015
Messages
823
Nice post.

Access SQL isn't slower than SQL Server SQL per se. A badly designed database will be slow regardless.

6. I know what you're saying but don't understand your point from a design perspective. If a user is able to design queries, performance is their own concern, not the developer's. As a developer you shouldn't want users to have to, or be able to, create anything.
9. There is never a good reason to use domain functions. Learn SQL.
11. has no impact on performance.
12. negligible
15. don't understand this. Dbengine(0)(0) would be faster than currentdb anyway.
16. A persistent connection only prevents recreation of the .ldb file. Negligible impact on performance.


The biggest benefit of having a server is that the amount of data crossing the network is highly reduced. And that's just not something you can get around with Access.
 

CJ_London

@static Thanks for the comments:

6. I was really referring to search forms. Many developers design a search form and in code set the filter or criteria as

"[somefield] like '*" & me.searchcontrol & "*'"

sometimes even when the values are numeric.

This automatically makes for a slow response because indexing cannot be used, and in many instances users know the first characters, so the initial * is not required. I actually do train users about using wildcards (it's also there in the manual/help page), and my code inspects the search field for them and builds the string accordingly (i.e. = or Like). Some struggle, but most 'get it'.
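The idea above can be sketched like this, reusing the hypothetical control and field names from the earlier snippet:

```vba
' Inspect the search text for wildcards and only use Like when
' the user actually supplied them (names are hypothetical).
Private Sub cmdSearch_Click()
    Dim crit As String
    If InStr(Me.searchcontrol, "*") > 0 Then
        ' user supplied wildcards - Like is required
        crit = "[somefield] Like '" & Me.searchcontrol & "'"
    Else
        ' exact match - the index on [somefield] can be used
        crit = "[somefield] = '" & Me.searchcontrol & "'"
    End If
    Me.Filter = crit
    Me.FilterOn = True
End Sub
```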

11. probably not; I just included it because many users don't use it, so there is potential for weird things to happen which might impact on performance

12. as for 11

15. The point is that each time CurrentDb is referenced a new instance is created (which is why RecordsAffected always returns 0 when read from a fresh CurrentDb call), and depending on the code flow this can lead to memory issues. Declaring a global curDB As DAO.Database and assigning CurrentDb to it when the first code is run overcomes this. I've not tried DBEngine in this context, so no idea if RecordsAffected will work or not.
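A short sketch of why this matters (the table and criteria are hypothetical): RecordsAffected must be read from the same Database object that ran the Execute.

```vba
' Read RecordsAffected from the object that ran the query.
Dim db As DAO.Database
Set db = CurrentDb
db.Execute "UPDATE tblOrders SET OrderStatus = 'Closed' " & _
           "WHERE OrderDate < #01/01/2020#", dbFailOnError
Debug.Print db.RecordsAffected   ' count of rows updated by the Execute above
' CurrentDb.RecordsAffected here would return 0 - it is a brand new instance
```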

16. Disagree - a persistent connection can make a difference because there is still an overhead in opening the file

I fully agree with your comment that Access SQL isn't slower than SQL Server SQL per se. But I would say that SQL Server, by the nature of the beast, is often running many different concurrent applications, which can impact performance. Last year I was asked by a client to move a highly interactive application off SQL Server and back to Access because the performance was about four times slower. This was due to insufficient resources being available for the app, and IT being unwilling to address the issue in any meaningful way.
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 05:33
Joined
Jan 20, 2009
Messages
12,113
4. have all domain functions been replaced with subqueries/sub tables? - domain functions are slow and do not use indexing

Many years ago I had a discussion on this site about the effect of indexing on DCount().

Testing (see post 17) showed that an index on a field facilitated an almost instant response while several seconds were required to run the DCount on the same field with no index.

We concluded that DCount does use indexes.
 

The_Doc_Man

Immoderate Moderator, Former MVP, Retired SysAdmin
Staff member
Local time
Today, 12:33
Joined
Feb 28, 2001
Messages
18,378
Static - in re #11: using Option Explicit has a potentially major effect for beginners, but less so for experienced programmers, who learn to turn it on to help them debug.

If you DON'T use Option Explicit and have an implicit declaration, the undeclared variable becomes a Variant. The problem then is that the pseudo-code has to determine the current type of the variable at run time, because Variants are nearly "faceless": you are dealing with a data structure rather than a data element. Although I've not gone as far as to reverse-engineer the code, I have seen some evidence via system calls that Variants are examples of a general Descriptor, such as occurs in another version of BASIC with which I am intimately familiar.

The Variant structure in question CONTAINS the data element, but it contains a lot more that has to be tested first. In linear VBA code, this is a small cost of doing business. For a loop? Not so cheap.

I suppose that in theory, you COULD do the debugging such that your code is tight and clean, after which you could remove the statement. But it's the kind of statement that only makes a difference if you don't compile your code first.

As to #15 - a DAO.Database object: While DBEngine(0)(0) might be slightly faster - and I'll emphasize MIGHT in this context, it is harder to read. The libraries for DAO support the use of the { dbDAO.Execute SQL, dbFailOnError } construct, which is incredibly efficient when you are doing lots of queries. The ability to then get dbDAO.RecordsAffected makes for a great pay-off for the minor annoyance of remembering to instantiate the DAO object.

Regarding #16 - persistent connection: MASSIVE on performance when you are dealing with a back-end on a slow network. Unfortunately, we can't all ride 100 Gb Ethernet all of the time. With the Dept. of Defense, we USUALLY had hot networks, but some of the issues were in almost constant transition and we invariably had slow multipoint hops in some cases. If you have a persistent connection, everything rides the open connection and you skip access arbitration on every level of every directory for the BE. Even more important, you skip EVERY FIREWALL test because of the persistent connection. If you then have a slow network, the effect of re-negotiating the circuit is the modern equivalent of re-inventing the wheel for every foot of travel for your wagon.

Now share that back-end and put every file lock, every directory lock, and then every lock on the contents of the LDB file in a situation of lock contention. It's generally called a nightmare and you DON'T want to be in this situation.
 

The_Doc_Man

The obvious reason that it is faster to use DCount on an indexed field is that Access only needs to read the index, not the whole table, and that means that you fit more records in a disk buffer. Older versions of Jet might not have done this, but Galaxiom's experiment would quickly detect the difference. Heck, you could try it too.

Take a table with a No Dups index. Use the Timer() function to get milliseconds since midnight before and after you run the DCount() function on the indexed field. Then repeat the experiment on any non-indexed field. The difference between the after and before times is the time it took to run the function in milliseconds. If the tables in question are big enough, you WILL see a difference.
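The experiment described above can be sketched in a few lines (the table and field names are hypothetical; substitute a large table of your own):

```vba
' Time DCount on an indexed field vs an unindexed field using Timer()
Public Sub TimeDCount()
    Dim t As Single
    t = Timer
    Debug.Print "Indexed:", DCount("IndexedField", "tblBig"), Timer - t; "sec"
    t = Timer
    Debug.Print "Unindexed:", DCount("PlainField", "tblBig"), Timer - t; "sec"
End Sub
```

On a table big enough to matter, the indexed run should finish noticeably faster, per the discussion above.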
 

CJ_London

@Galaxiom - your link is comparing DCount for a whole table rather than one with criteria, and DCount in that situation can just refer to the TableDef to get the value, so indexing is not required. But I take your point, perhaps indexing is used - though still not as efficient as a count in SQL. I'll see if I can set up a test later to run the code in that post to get some definitive results.
 

static

As to #15 - a DAO.Database object: While DBEngine(0)(0) might be slightly faster - and I'll emphasize MIGHT in this context, it is harder to read. The libraries for DAO support the use of the { dbDAO.Execute SQL, dbFailOnError } construct, which is incredibly efficient when you are doing lots of queries. The ability to then get dbDAO.RecordsAffected makes for a great pay-off for the minor annoyance of remembering to instantiate the DAO object.

Regarding #16 - persistent connection: MASSIVE on performance when you are dealing with a back-end on a slow network. Unfortunately, we can't all ride 100 Gb Ethernet all of the time. With the Dept. of Defense, we USUALLY had hot networks, but some of the issues were in almost constant transition and we invariably had slow multipoint hops in some cases. If you have a persistent connection, everything rides the open connection and you skip access arbitration on every level of every directory for the BE. Even more important, you skip EVERY FIREWALL test because of the persistent connection. If you then have a slow network, the effect of re-negotiating the circuit is the modern equivalent of re-inventing the wheel for every foot of travel for your wagon.

15. Currentdb just returns a copy of DBEngine(0)(0).

16. So the 'connection' isn't a data connection; it's just acting as a file opener.

Of course, Access gets slower and slower with every active connection. If you have 20 users who all have the FE open, a persistent connection means the BE has 20 active connections even though most of the users probably aren't using it, causing worse performance for the few who are. Swings and roundabouts IMO.
 

The_Doc_Man

Static - in re point #15: The DAO.Database variable is a copy of what you get by referencing DBEngine(0)(0). The object variable, by having a type of DAO.Database, has properties that DAO routines won't question and that won't confuse Intellisense, so speed or no speed, it has advantages in clarity and ease of use.

Speed difference in execution? Probably low though not necessarily zero difference. Remember, using the indexes - even if 0 - to traverse the implied pointers involved in that double indexing reference under the component object model also takes a couple of instructions. Admittedly, not very many, and we ARE splitting hairs that, on modern systems, are GHz-width hairs. For the DAO.Database variable, which is an object, the address lookup for the targeted object is done once. Remember that objects in Access are actually pointers to a data structure. So there is a slight speed difference in favor of the object that is multiplied by how many times you reference it.

In re #16 - yes, you are correct. The persistent connection's benefit has nothing to do with data per se. It has everything to do with getting to that data residing on the BE's host file-server (because for a pure Access setup, that's all it is.) But for split DB's, that speed of data fetching IS part of the performance thereof.
 

static

DAO.Database, has properties that DAO routines won't question and that won't confuse Intellisense, so speed or no speed, it has advantages in clarity and ease of use.

DBEngine is DAO.

Code:
Class DBEngine
    Member of DAO

CurrentDb = DBEngine.Workspaces(0).Databases(0)

Code:
Property Databases As Databases
    read-only
    Default member of DAO.Workspace

And Intellisense works as normal.

I'm not saying everyone should use DBEngine(0)(0) instead of currentdb.
I only mentioned it because this is a thread about performance. But as you say, nobody would ever notice the difference.
 
