Best approach to large data sets in Access

JoseO

Registered User.
Local time
Today, 17:33
Joined
Jul 14, 2013
Messages
72
How many rows of data can I have in my Access database table before I have to seriously start worrying about awful performance?

I know this topic/question has been floating around the Internet for a number of years, but the answer always seems to be to walk away from Access to SQL Server or other platforms. What if walking away from Access is not an option? I am interested to know:

  • How much performance degradation are we talking about? Run a query/report, go make a cup of coffee, and ten minutes later the query/report completes?
  • Will users experience all sorts of problems and weird glitches with the front end?
  • What other shenanigans do I have to worry about?
I appreciate any answers, many thanks.
 

theDBguy

I’m here to help
Staff member
Local time
Today, 15:33
Joined
Oct 29, 2018
Messages
21,474
In most cases, I think it's not about the number of rows in a table but how the FE was designed that matters when it comes to performance. You could have a small amount of data, but if the forms are not designed to be efficient, then you can still have performance issues. The opposite (large data set + efficient forms = good performance) can also be true. How many rows do you have now?
 

Ranman256

Well-known member
Local time
Today, 18:33
Joined
Apr 9, 2015
Messages
4,337
I had a table with 1 million records, and queries came back in a matter of seconds, though not a full minute.
But I'd say it varies with PC speed, network speed, etc.

Currently our standard average is about 20k records per table. Still very fast. If you are storing super large volumes (millions), then I'd move to SQL Server.
 

GPGeorge

Grover Park George
Local time
Today, 15:33
Joined
Nov 25, 2004
Messages
1,873
That is a very subjective question, hinging on YOUR definition of "awful" performance.

Many other factors come into play besides the absolute number of records.

But give us some specifics that you are dealing with in your current situation. Perhaps having some context will help throw some light on the potential problem.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 18:33
Joined
Feb 19, 2002
Messages
43,291
The size limit for the BE is 2 GB. There is no limit to the number of rows in a table. I've worked with millions, but the tables were narrow and held mostly numbers.

Just because you have to upgrade the BE doesn't mean that you need to give up the Access FE.

If you expect that you might need to upgrade the BE sooner or later, you need to design the application with the idea that the BE will be SQL Server. That means you need to bind your forms to queries with selection criteria rather than using local filtering. You also need to understand how your queries will be processed by SQL Server. For example, you can't use VBA or user-defined functions (UDFs) in any clause except the SELECT, because they can't be processed by the server.
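Pat's point about binding to criteria-driven queries can be sketched in miniature. This is an illustrative stand-in only: SQLite plays the role of the server-side BE (neither Jet/ACE nor SQL Server is scriptable here), and the table and column names are invented.

```python
import sqlite3

# SQLite stands in for a server-side BE; Orders and its columns are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CustomerID INTEGER, Total REAL)")
con.executemany("INSERT INTO Orders (CustomerID, Total) VALUES (?, ?)",
                [(1, 50.0), (1, 120.0), (2, 80.0), (3, 200.0)])

# Anti-pattern: pull every row to the client, then filter locally --
# roughly what a form bound to a whole table with a local filter does.
all_rows = con.execute("SELECT OrderID, CustomerID, Total FROM Orders").fetchall()
local = [r for r in all_rows if r[1] == 1]

# Preferred: push the criteria into the query so only matching rows cross
# the wire. A plain parameter keeps the predicate server-resolvable,
# unlike a VBA/UDF call in the WHERE clause.
remote = con.execute(
    "SELECT OrderID, CustomerID, Total FROM Orders WHERE CustomerID = ?", (1,)
).fetchall()

assert local == remote  # same result; far less data moved on a real BE
```

On an in-memory toy the difference is invisible, but against a linked server table the second form lets the server do the filtering instead of shipping the whole table to the client.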
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 17:33
Joined
Feb 28, 2001
Messages
27,189
First, here are some published limitations.


Next, the answer to your question is "total size matters, piecemeal size matters less." Total size for a back-end file cannot exceed 2 GB, minus some overhead for system tables. However, whether you build wide records or narrow ones - or a combination of both - doesn't matter. I've had a narrow table frequently exceed 600K records, and many text-and-number tables have exceeded 100K records. Therefore, it really depends on what you are doing and how you divide things up. Sort of like the old game TETRIS, where different size-and-shape blocks have to be carefully dropped into place.

In practical terms, if your table is "update-centric" then you need to stay under roughly 1.0 to 1.3 GB (very approximate) for the BE file, because the Compact and Repair operation has trouble with REALLY big files - and also because "bloat" issues clog up the DB more often. If your tables are updated once and then used only for lookups, you can use more table space than if you are doing frequent updates.

One way to get around some of these limits is to put a logging table that does not have to reference anything via formal relationships into a second BE file dedicated to logs and other things that don't have declared relationships to the main data. That (a) gives you another 2 GB of table space in the logging BE and (b) reduces the strain on the main BE. NOTE, however, that you cannot establish relational integrity between two tables that live in different BE files.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 18:33
Joined
Feb 19, 2002
Messages
43,291
I develop ALL applications, no matter whether they are large or small, or whether or not they might end up in SQL Server, as if they WILL end up with SQL Server as the BE. Why? That way I don't have to make decisions regarding how to do certain things and, if I'm wrong, pay the price when I do end up having to convert. All my SQL Server choices work equally well for Jet/ACE, but not vice versa. Also, when the time comes to actually do the conversion, chances are 99% that I can get it done in a few hours without having to actually change anything in the app. It takes a few minutes, maybe ten, to upsize the data, and then the rest of the time is spent testing, testing, and more testing. Occasionally, I encounter a bad date during the upsize; SQL Server has different rules than Access regarding what is a valid date.

Part of this planning ahead is avoiding like the plague any Access-specific data type added with A2007 or newer, since they are incompatible with SQL Server. So multi-value fields, hyperlinks, table-level lookups, and attachments come to mind for starters.

And as Doc pointed out, the size of the BE is more constrained by having to do C&R operations due to bloat. The natural addition of records does not cause bloat; the BE grows, but that is not bloat. Bloat is wasted space that needs to be gotten rid of, and it is caused by the addition and then deletion of blocks of data. The occasional deletion of a record isn't a problem; it is the constant add/delete cycle, usually caused by importing data that has to be replaced frequently, which causes the issue.

You still have to C&R BEs that grow naturally so that Access can update its statistics and therefore optimize queries better, but once per month is usually sufficient.
 
Last edited:

JoseO

Registered User.
Local time
Today, 17:33
Joined
Jul 14, 2013
Messages
72
**THANK YOU SO MUCH** everyone for all the great, great answers. I have not started developing my DB because I first wanted to get expert thoughts/opinions before I proceed.

Once again, thank you so much for all of the great responses. I really appreciate it!
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 23:33
Joined
Sep 12, 2006
Messages
15,658
I think poor performance is more likely to be the result of indexing issues in many cases. What can be difficult is that Access might choose a different plan to resolve a query than the one that makes the best sense.

If you want to find sales in October above a certain amount, you would expect the query to first find the October sales and then check the amounts. A query that works the other way might not perform so well, but Access can be obstinate about being as smart as you want it to be.
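Gemma's indexing point can be demonstrated with a query planner you can actually inspect. Again this is a hedged sketch: SQLite stands in for the Jet/ACE engine (Access doesn't expose plans this way), and the Sales table and index names are invented for illustration.

```python
import sqlite3

# SQLite stand-in for the database engine; Sales and its columns are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales (SaleID INTEGER PRIMARY KEY, SaleDate TEXT, Amount REAL)")

october_query = (
    "SELECT * FROM Sales "
    "WHERE SaleDate BETWEEN '2023-10-01' AND '2023-10-31' AND Amount > 100"
)

# Without an index on SaleDate, the engine has no choice but a full table
# scan: it reads every row and tests both predicates against each one.
plan = con.execute("EXPLAIN QUERY PLAN " + october_query).fetchall()
assert any("SCAN" in row[-1] for row in plan)

# With an index on SaleDate, the planner can seek to the October range
# first and only then test the amounts -- the order gemma describes.
con.execute("CREATE INDEX idx_sales_date ON Sales (SaleDate)")
plan = con.execute("EXPLAIN QUERY PLAN " + october_query).fetchall()
assert any("idx_sales_date" in row[-1] for row in plan)
```

The same principle applies in Access: an index on the column carrying the selective criterion (the date here) gives the engine the option of narrowing first, whereas without it no amount of query rewriting forces an efficient order.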
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 18:33
Joined
Feb 19, 2002
Messages
43,291
A query that works the other way might not perform so well, but Access can be obstinate about being as smart as you want it to be.
I have discovered that in some cases, you can coerce Access into doing what you want by creating nested queries. So, even if you can write a single query, you can logically break it up to get Access to do it your way.

This is actually imperative if you are using a left join and you have criteria on the right-side table. Such a query will not return correct results. But if you make a query with just the right-side table and put the criteria there, then make a second query with the left join to that query, you get the correct results.
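Pat's left-join gotcha is easy to demonstrate concretely. The sketch below uses SQLite as a stand-in for Jet/ACE (the behavior is standard SQL semantics, not Access-specific), with invented Customers/Orders tables; the nested SELECT plays the role of the saved "right-side" query Pat describes.

```python
import sqlite3

# SQLite stand-in; table names and data are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Customers (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Orders (ID INTEGER PRIMARY KEY, CustomerID INTEGER, Status TEXT);
INSERT INTO Customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO Orders VALUES (10, 1, 'Open');
""")

# Criteria on the right-side table in the WHERE clause silently turn the
# LEFT JOIN into an inner join: Bob, who has no orders, disappears because
# his NULL Status can never equal 'Open'.
wrong = con.execute("""
    SELECT c.Name FROM Customers c
    LEFT JOIN Orders o ON o.CustomerID = c.ID
    WHERE o.Status = 'Open'
    ORDER BY c.ID
""").fetchall()

# Pat's fix: filter the right side in its own nested query first, then
# LEFT JOIN to that. Bob survives, paired with a NULL order.
right = con.execute("""
    SELECT c.Name FROM Customers c
    LEFT JOIN (SELECT * FROM Orders WHERE Status = 'Open') o
      ON o.CustomerID = c.ID
    ORDER BY c.ID
""").fetchall()

assert wrong == [('Ann',)]
assert right == [('Ann',), ('Bob',)]
```

In Access the nested SELECT would typically be a saved query that the outer query left-joins to, but the logic is identical: filter the right side before the join, not after it.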
 
