Can a database be too big?

kdirvin

Hello all,

My Access 2007 database is growing quite large (although well below Access's 2 GB limit). I am aware of the dangers posed by corruption and am taking steps to reduce that possibility. But can a database's sheer size ever threaten the integrity of the data?

Thanks!
 
When you get close to the 2 GB limit, insert queries fail and the user gets a message. Similarly, an update query can fail if it increases the size of the data or indexes. There is no real threat to the data itself.

I have a table with ten million records and that has not caused any problems.
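
If you want to fail gracefully before that message appears, here is a minimal sketch; the function name and the 90% threshold are just my suggestion, not anything built into Access:

Public Function BackEndNearLimit(strBEPath As String) As Boolean
    ' Jet/ACE files cannot exceed 2 GB; warn once we pass 90% of that.
    Const LIMIT_BYTES As Double = 2147483647#   ' the 2 GB hard ceiling
    BackEndNearLimit = (FileLen(strBEPath) > LIMIT_BYTES * 0.9)
End Function

You could call this at startup with the back end's path and prompt the user to archive or compact before inserts start failing.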
 
It might depend on how you use the database, though. As long as the fields on which you search are indexed, Access will be able to find the records you need efficiently.

However, if you retrieve data based on criteria that cannot use an index (e.g. a condition that forces Access to examine each row to decide whether it qualifies), you will definitely notice data retrieval slowing down.
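
To make that concrete, a quick sketch; tblOrders, OrderDate and the index name are placeholders, not anything from your database:

' Create an index on a frequently searched field.
Public Sub AddSearchIndex()
    CurrentDb.Execute "CREATE INDEX idxOrderDate ON tblOrders (OrderDate)", dbFailOnError
End Sub

' Index-friendly criterion - Jet can seek on idxOrderDate:
'   SELECT * FROM tblOrders WHERE OrderDate >= #01/01/2011#;
' Not index-friendly - wrapping the field in a function forces a scan of every row:
'   SELECT * FROM tblOrders WHERE Year(OrderDate) = 2011;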

So maybe you need to look at archiving some of the transactions, or collapsing some of them into a more compact summary structure for analysis.
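
A rough sketch of the archiving idea, assuming a tblTransactions/tblTransArchive pair with identical structure (all names invented):

Public Sub ArchiveOldTransactions(datCutoff As Date)
    Dim strWhere As String
    ' Backslash forces a literal "/" regardless of the user's locale settings.
    strWhere = " WHERE TransDate < #" & Format(datCutoff, "mm\/dd\/yyyy") & "#"
    With CurrentDb
        ' Copy the old rows to the archive table, then remove them from the live one.
        ' In production you would wrap these two statements in a transaction.
        .Execute "INSERT INTO tblTransArchive SELECT * FROM tblTransactions" & strWhere, dbFailOnError
        .Execute "DELETE FROM tblTransactions" & strWhere, dbFailOnError
    End With
End Sub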

The other thing is that, in general terms, a "big" system needs to be operated differently from a small-to-medium one.

E.g., take a sales ledger/accounts receivable.
With a small company you are likely to print out and examine the whole ledger.
However, in a large enterprise with hundreds of thousands or even millions of accounts, you just cannot deal with it that way. Instead, you have to use statistical techniques and manage by examining exceptions, so your app will need to be designed differently, with considerations like that in mind.

You may even get to a point where calculating things on the fly every time becomes inefficient, and then you have to think about breaking the "cardinal" database rule against storing calculated figures.
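
If you do go that route, it might look something like this batch refresh; tblAccounts, tblLedger and CachedBalance are illustrative names only:

Public Sub RefreshStoredBalances()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Set db = CurrentDb
    ' Total each account once, then store the result instead of recalculating on the fly.
    Set rs = db.OpenRecordset( _
        "SELECT AccountID, Sum(Amount) AS Bal FROM tblLedger GROUP BY AccountID", dbOpenSnapshot)
    Do While Not rs.EOF
        ' Str() always uses a period as the decimal separator, so the SQL is locale-safe.
        db.Execute "UPDATE tblAccounts SET CachedBalance = " & Str(rs!Bal) & _
                   " WHERE AccountID = " & rs!AccountID, dbFailOnError
        rs.MoveNext
    Loop
    rs.Close
End Sub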
 
If you're expecting continued growth, you might want to move to SQL Server as the back end to hold the data.
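
The front end can keep working largely unchanged through linked tables. A minimal sketch, assuming a DSN called MyServerDSN and a dbo.Orders table on the server:

Public Sub LinkSqlServerTable()
    ' Creates a linked table named Orders in the FE that points at SQL Server.
    DoCmd.TransferDatabase acLink, "ODBC Database", _
        "ODBC;DSN=MyServerDSN;Trusted_Connection=Yes;", _
        acTable, "dbo.Orders", "Orders"
End Sub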
 
You all have given me some great insights. I will use this going forward - thank you!!
 
Here's a method I use to keep database bloat under control.

1. Split the database to FE (front end) and BE (back end). Users only share the BE.

2. Where "snapshot" concepts make sense, make a table in the FE file - i.e. one that isn't shared (see the sketch after this list).

3. Drive whatever you can from the local (FE) tables.

4. Manually or automatically compact the FE file on exit or whenever it gets bloated.
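
Here's a rough sketch of steps 2-4, with tblCustomers standing in for a shared BE table and tblLocalCustomers as the invented local snapshot:

Public Sub RefreshLocalSnapshot()
    Dim db As DAO.Database
    Set db = CurrentDb
    On Error Resume Next
    db.Execute "DROP TABLE tblLocalCustomers"   ' ignore the error if it doesn't exist yet
    On Error GoTo 0
    ' SELECT INTO builds the snapshot inside the FE, so reads never hit the BE again.
    db.Execute "SELECT * INTO tblLocalCustomers FROM tblCustomers", dbFailOnError
    ' Step 4: have Access compact the FE automatically when it closes.
    Application.SetOption "Auto Compact", True
End Sub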

The thing you are fighting is that just about any query, but particularly a JOIN of large tables, creates temporary working space in your database file. However, with a split FE/BE, that temporary space is usually created in the FE, where it is NOT shared by other users. This means you can safely compact & repair whenever your copy gets bloated, and the bloat will not affect the BE quite so badly - or at all if you work it right.

Using the temporary FE tables is only a problem when doing an update, because you don't want to muck about in a copy of reality if that reality can be changed by another user. For report generation, this isn't really a serious issue.
 
The thing you are fighting is that just about any query, but particularly a JOIN of large tables, creates temporary working space in your database file. However, with a split FE/BE, that temporary space is usually created in the FE, where it is NOT shared by other users.

I strongly recommend not creating temporary tables in the Front End. It is completely unnecessary, as it is very easy to create and use temporary tables in another local database. Among other things, this avoids the need to compact the Front End.

Search the forum for Side End if you want to see the discussion of this concept.
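
In case it helps, a minimal sketch of that idea (the path and table names are invented): create a scratch database on the fly and target it with Jet's IN clause.

Public Sub BuildSideEndScratch()
    Dim strSide As String
    strSide = Environ("TEMP") & "\SideEnd.mdb"
    If Dir(strSide) <> "" Then Kill strSide              ' start from a fresh, unbloated file
    DBEngine.CreateDatabase strSide, dbLangGeneral, dbVersion40   ' empty local scratch file
    ' The IN clause sends the result set into the side file, not the FE,
    ' so the FE never bloats and the scratch file can simply be deleted later.
    CurrentDb.Execute "SELECT * INTO tmpReportRows IN '" & strSide & _
                      "' FROM tblOrders WHERE OrderDate >= Date() - 30", dbFailOnError
End Sub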
 
