How Big is 2GB?

HillTJ

To train a dog, first know more than the dog..
Local time
Yesterday, 23:00
Joined
Apr 1, 2019
Messages
712
Hi, I understand that the max database size supported by Access (unless I split it) is 2GB. I know this may seem a funny question, but realistically what does this mean (apart from the obvious)? For context, I'm beavering away at my manufacturing application, and many of the threads I've posted are a result of my progress. The application includes typical tables for a 'manufacturing' type app. My vision is to capture stock control, contacts, employee records, production reporting, QA records, dispatch etc. I welcome any feedback about members' experiences.
 

June7

AWF VIP
Local time
Today, 02:00
Joined
Mar 9, 2014
Messages
5,423
2GB will hold a lot. I have seen posters mention 'millions' of records. It depends on what is stored in the database. Embedding files in the db uses up the limit.

If you are concerned about exceeding the limit, look at free options such as MySQL or SQL Server Express, which has a 10GB limit. For anything bigger you will probably have to pay big $.
 

isladogs

MVP / VIP
Local time
Today, 10:00
Joined
Jan 14, 2017
Messages
18,186
Almost all the space taken up in an Access file is the data in the tables.
Queries, forms and reports occupy almost no space at all and, unless you have a huge amount of code, neither do macros/modules.

Unless you use things that bloat file size, such as attachment fields, or regularly run make-table queries, it is unlikely you will ever hit 2GB. Splitting your database makes that even more unlikely.
The largest FE file I have is around 150MB, and that's a monster.
The largest BE file is around 1.7GB, almost entirely due to one table with almost 3 million records and 50 fields. That is a reference data file, so it is never edited by users. However, when a typical BE file grows much above 1GB, its size should be closely monitored. With regular backups and occasional compacting it is unlikely to cause problems ... BUT if you ever do hit 2GB, your database may be permanently corrupted.
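Just as an illustration, you could check the BE size from the FE at startup with a few lines of VBA along these lines. This is only a sketch; the UNC path and the 1GB warning threshold are placeholders, not taken from any real app:

Code:
' Warn when the linked back-end file is getting large.
' The path and threshold below are examples only - adjust for your own setup.
Public Sub CheckBackEndSize()
    Const BE_PATH As String = "\\Server\Share\MyBackEnd.accdb"  ' placeholder path
    Const WARN_BYTES As Double = 1073741824#                    ' warn above ~1GB

    Dim sizeBytes As Double
    sizeBytes = FileLen(BE_PATH)

    If sizeBytes > WARN_BYTES Then
        MsgBox "Back end is " & Format(sizeBytes / 1048576#, "#,##0") & _
               " MB - time to compact or archive.", vbExclamation
    End If
End Sub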

For info, a BE in SQL Server Express can be up to 10GB.
 

theDBguy

I’m here to help
Staff member
Local time
Today, 03:00
Joined
Oct 29, 2018
Messages
21,357
Hi. As others have already said, the 2GB limit is a file size restriction. It doesn't matter what objects you have in the database; if they occupy disk space, they contribute to the limit. It's hard to say how many tables, records, forms or reports you can have, you'll just have to watch the file size. Doing a C&R (Compact & Repair) can recover some of the file size used, so you might try that occasionally.
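If you want to automate the C&R on the back end, one common pattern is to compact it from the front end while nobody else is in it. This is a minimal sketch only; the paths are placeholders and the BE must be closed while it runs:

Code:
' Compact a closed back-end into a new file, keep the old copy as a backup,
' then promote the compacted file. Paths are placeholders.
Public Sub CompactBackEnd()
    Const BE As String = "\\Server\Share\MyBackEnd.accdb"       ' placeholder
    Const TMP As String = "\\Server\Share\MyBackEnd_tmp.accdb"  ' placeholder

    If Dir(TMP) <> "" Then Kill TMP
    DBEngine.CompactDatabase BE, TMP

    FileCopy BE, BE & ".bak"   ' keep the uncompacted copy as a backup
    Kill BE
    Name TMP As BE
End Sub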
 

HillTJ

To train a dog, first know more than the dog..
Local time
Yesterday, 23:00
Joined
Apr 1, 2019
Messages
712
Thanks, your posts pretty much explain the 'scope' I have. I've learnt about 'attachment' fields, and with assistance from the forum I now archive attachments in a separate folder on a network drive (arnelgp & June7 helped with that). Thanks.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 05:00
Joined
Feb 28, 2001
Messages
26,996
First, you are not in trouble until a single table reaches 1 GB, because at that point an UPDATE that modifies every record will probably crash.

I frequently had tables with tens of thousands and even hundreds of thousands of records, for which serious normalization allowed me to compress the heck out of what I was storing. As I recall, I had maybe 25-30 fields in the main table, and a few descriptive (one/many) tables had more fields than that. The downside of serious normalization is that to get an "aggregate" record, I had five-way or more JOIN queries.

The tricks you play to make this work are simple, though.

1. Split the BE into two or three files. EACH can be up to 2GB, but typically you would split at the point where a file grew over 1GB.

2. Archive things to external files in another format. For instance, I never kept more than six months of event logs online. Once per month I archived the event logs to a text file, after which I erased the records, followed by a Compact & Repair (a minimal sketch of that pattern follows this list). This app also had records that could reach a state of "all required actions complete", and once those records were more than 90 days old we archived them to a text report as well. (The retention periods were based on the probable need for reports reaching back to those actions.)

3. We moved some temporary tables out of the back end into the front end, and used the "copy a fresh FE before launch" method to get rid of bloat.
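For what it's worth, the archiving in point 2 boils down to something like the sketch below. The table, query and folder names are only placeholders to show the shape of the idea:

Code:
' Archive event-log records older than six months to a text file, then delete them.
' Table, query, field and path names below are illustrative only.
Public Sub ArchiveOldEventLogs()
    Dim cutoff As Date
    cutoff = DateAdd("m", -6, Date)

    Dim archiveFile As String
    archiveFile = "C:\Archive\EventLog_" & Format(Date, "yyyymmdd") & ".txt"  ' placeholder folder

    ' "qryOldEventLogs" is assumed to be a saved query selecting the rows older than the cutoff.
    DoCmd.TransferText acExportDelim, , "qryOldEventLogs", archiveFile, True

    ' Remove the rows that were just exported.
    CurrentDb.Execute "DELETE FROM tblEventLog WHERE LogDate < #" & _
                      Format(cutoff, "mm\/dd\/yyyy") & "#", dbFailOnError
End Sub

Run a Compact & Repair on the BE afterwards and the freed space actually comes back.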

Eventually, if you have records that go back far enough, you will reach a point where they can be archived. If so, your app will reach an "equilibrium" state and you will know just how close you are to the ragged edge of ruin. However, I've never personally brought any app to its knees on size concerns. (Did, however, have speed issues now and then.)

More likely than not, you will find that size won't be the driving force that makes you move away from Access. It will be the need for a dedicated, active SQL-type server as the BE so that you can send pass-through queries and have the results processed on the server. As long as you are using native Access with a split FE/BE configuration, you lose more time to network delays than to anything else. Having an active server as a BE cuts the network load BIG-time.
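To illustrate what "processed on the server" means, here is a minimal pass-through sketch in VBA. The connection string, table and column names are placeholders; the point is that the SQL text goes to the server untouched, so only the summarised result set travels back over the wire:

Code:
' Minimal pass-through query sketch: the SQL is executed by the server,
' not by Access, so only the result set crosses the network.
' The connection string and SQL below are placeholders.
Public Sub RunPassThrough()
    Dim qdf As DAO.QueryDef
    Set qdf = CurrentDb.CreateQueryDef("")   ' temporary, unsaved QueryDef

    qdf.Connect = "ODBC;DRIVER={ODBC Driver 17 for SQL Server};" & _
                  "SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes;"
    qdf.SQL = "SELECT CustomerID, SUM(Total) AS Spend FROM dbo.Orders GROUP BY CustomerID;"
    qdf.ReturnsRecords = True

    Dim rs As DAO.Recordset
    Set rs = qdf.OpenRecordset(dbOpenSnapshot)
    Do Until rs.EOF
        Debug.Print rs!CustomerID, rs!Spend
        rs.MoveNext
    Loop
    rs.Close
End Sub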
 

HillTJ

To train a dog, first know more than the dog..
Local time
Yesterday, 23:00
Joined
Apr 1, 2019
Messages
712
OK, so I glean from this thread that I have no immediate concerns (if ever). Thanks to everyone who contributed.
 

Mark_

Longboard on the internet
Local time
Today, 03:00
Joined
Sep 12, 2017
Messages
2,111
There is one area in manufacturing you may want to look at: are you keeping a running log of machine sensor outputs?

If a machine is putting out dozens of reads from meters per minute, recording them may be very useful for tracking performance, but the logs they create will become massive rather quickly. If this is an area you are going into, please start a thread on it.
 

HillTJ

To train a dog, first know more than the dog..
Local time
Yesterday, 23:00
Joined
Apr 1, 2019
Messages
712
Mark, nothing that sophisticated. I'd love data logging of certain parameters, but that's a project for the future. Cheers
 

Martyh

Registered User.
Local time
Today, 06:00
Joined
May 2, 2000
Messages
192
I am currently working for the Canadian Armed Forces on an Access db that has over 1.2 million items with over 2.2 million instances of those items (a one-to-many relationship), as well as other information. It currently requires less than 1.2GB of storage.

I redesigned the previous database, which held the same amount of information but took up one front-end db and two back-end dbs requiring close to 5GB of storage!

A database, when properly designed, is a beautiful thing. When it is not, then it is just a list of many lists!!

My advice: Get some specific design advice from qualified people before you commit.

Regards,

Marty Habicht
 

HillTJ

To train a dog, first know more than the dog..
Local time
Yesterday, 23:00
Joined
Apr 1, 2019
Messages
712
Martyh, cool. Thanks for the insight. I agree with all your comments. Non-Access users don't understand our passion! My only problem is that my ambition outstrips my ability by a long margin. The support from this forum is priceless.
 

Guus2005

AWF VIP
Local time
Today, 11:00
Joined
Jun 26, 2007
Messages
2,645
Little bit off topic, but this bugs me:

Telecom providers are using GB wrong. 2GB is not 2,000,000,000 bytes.
That's why a new abbreviation was created: GiB. That is now the unambiguous binary unit.
GiB stands for gibibyte and GB stands for gigabyte. 1 GiB equals 2^30 bytes, or 1,073,741,824 bytes. Gibibyte is closely related to gigabyte (GB), which can be a synonym of gibibyte or can refer to 10^9 bytes (1,000,000,000 bytes) in accordance with the International System of Units (SI).
Source: https://www.quora.com/What-is-the-Difference-between-GiB-and-GB
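To put numbers on the difference: 1 GiB = 2^30 bytes = 1,073,741,824 bytes, so 2 GiB = 2,147,483,648 bytes, whereas 2 GB in the SI sense is 2 x 10^9 = 2,000,000,000 bytes, roughly 7% less.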

HTH:D
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 05:00
Joined
Feb 28, 2001
Messages
26,996
For what it is worth, Guus, that issue has been around for ages. For a while the convention was that Gb meant gigabits (used for network speeds) whereas GB meant gigabytes and was used for storage amounts, static or in motion. Too many people got the two backwards. But if you REALLY want to drive yourself cuckoo, open Windows Explorer and find a directory with some big files. When you highlight a file in a folder in "Details" view, you get a size at the bottom of the Explorer window. It will not match the size on the detail line, because the two figures use different bases for the computation: one is based on KB, the other on bytes, and the two numbers differ by a couple of per cent. It only shows up for large files.

I found out from some obscure article that the one based on KB isn't really based on KB; it is based on disk blocks, which are 1/2 KB = 512 bytes. But that is one reason for the discrepancy (or disk-crepancy?). Anyway, your observation and rant are noted and thoroughly commiserated with. Been there, done that, decided I didn't want the T-shirt.
 

aaronkempf

New member
Local time
Today, 03:00
Joined
Jul 7, 2020
Messages
14
I was recently playing with a dataset that had 43 million records in about 4GB. My biggest problem with the limitations in Access has to do with the COLUMN count, not the row count.

I was once interviewing at eBay for a VERY high-end job, and after 8 hours of grueling interviews this guy tried explaining to me that they kept hitting the 2GB limit in EXCEL. I mean, before Excel 2007 came out there was a row limit of what, 65,536? And they had pivot tables in Excel that were based on MULTIPLE PAGES of Excel data, and it was genuinely a major pain for them to be using a tool with such a firm limit.

I was actually being interviewed to build Office Web Components against SQL Server Analysis Services, embedded in a PowerPoint presentation. With Access 2000, 2002 and 2003, the Office Web Component (an ActiveX control that was also the basis for Data Access Pages) had, I think, a limit of 250k records in its web-based spreadsheet component instead of merely 65,536. But the real magic was the web-based pivot tables running against SQL Server Analysis Services.

When I worked for the 'World's Largest Software Company' developing antivirus software, I was solely responsible for a data warehouse that received a BILLION new records per DAY and ran on a Pentium 3 server. When I got there, every query took an hour to run. But simply by moving to a dimensional data mart and SQL Server Analysis Services, we had MOST queries running in under a second within 90 days.

In related news, I know of at least ONE vendor that uses TEN SQL Server Express databases partitioned together with a couple of UNION statements to let SQL Server Express scale to 100GB of raw data without major performance issues. I mean, a simple scan of 100GB of data is GOING to be slow if you only have 1GB of RAM to do it with.

I just think that MANY people vastly underestimate the power of SQL Server Express. I'd use it for nearly everything.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 05:00
Joined
Feb 28, 2001
Messages
26,996
@aaronkempf - Usually it isn't the power of the SQL Server's host that is the problem; it is the design of the DB it is running, including how well it has been equipped with indexes and how well-designed the relationships are.
 

HillTJ

To train a dog, first know more than the dog..
Local time
Yesterday, 23:00
Joined
Apr 1, 2019
Messages
712
Guys, I conclude that the 2GB limit is of no concern to me! Thanks for putting this in perspective.
 
