How to make a database that is open 24/7 faster and smaller?

omarrr128

Member
Local time
Today, 12:36
Joined
Feb 5, 2023
Messages
69
Hello,

So I've created a database that is used in a hospital. It is used 24/7, so closing the database to make changes isn't really an option.

Every few weeks the database slows down and sometimes crashes. After deleting all the records in the tables it gets better.

The backend has grown from about 5 MB to 150 MB, and the frontend has grown from about 5 MB to 400 MB.

I need some advice on how to make the databases run fast consistently without constant repair, and how to stop them from getting so big.

I've split the database already.

Compacting automatically or manually every so often isn't really an option.

What can I do to keep it running fast and small sized?
 

GPGeorge

Grover Park George
Local time
Today, 04:36
Joined
Nov 25, 2004
Messages
1,873
omarrr128 said:
So I've created a database that is used in a hospital. It is used 24/7 so closing the database to make changes isn't really an option. … What can I do to keep it running fast and small sized?
150 MB is not very large. 400 MB isn't very large either. An accdb can be up to 2 GB, and most developers consider 1 to 1.5 GB the maximum usable size. You are now at less than a third of that, at most.

The front end is growing, probably, because you are executing a lot of append and delete processes in it. Other causes of increase can be embedded images in attachment fields.

Sometimes, as developers, we have to bite the bullet, so to speak, and perform maintenance on a relational database application in non-critical times, not during our regularly scheduled work hours.

You are, in fact, already closing the database from time to time, correct? Namely, following a crash.

Plan and execute your maintenance in a more controlled way.

All of that said, I suspect the more significant problem could be that the code running constantly leads to memory leaks that slow it down, and eventually cause the crash.

Do you open recordsets, for example, during any processing steps? If so, do you always, always close them after using them?
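As a sketch of what that cleanup discipline looks like (the table name `tblVisits` is invented for illustration), the usual pattern is to close and release DAO objects in a cleanup handler so they are freed even when an error occurs:

```vba
Public Sub ProcessVisits()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    On Error GoTo CleanUp
    Set db = CurrentDb
    ' tblVisits is a hypothetical table, for illustration only
    Set rs = db.OpenRecordset("SELECT * FROM tblVisits", dbOpenDynaset)

    Do While Not rs.EOF
        ' ... process the current record ...
        rs.MoveNext
    Loop

CleanUp:
    ' Always close and release objects, even after an error
    If Not rs Is Nothing Then rs.Close
    Set rs = Nothing
    Set db = Nothing
End Sub
```

Falling through into the `CleanUp` label means the same release code runs on both the success and the error path.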

I suggest you not worry about the raw file size. Make sure your VBA is not the bottleneck.
 

plog

Banishment Pending
Local time
Today, 06:36
Joined
May 11, 2011
Messages
11,646
Without being able to dig into it and poke around we can only give you general advice. Actually, Microsoft has a list of things:


I'd also recommend a maintenance schedule where everyone knows it's not available. For example, every Monday at 6 am you take it offline, do whatever you do to make it run faster, then make it available again.
 

omarrr128

Member
Local time
Today, 12:36
Joined
Feb 5, 2023
Messages
69
GPGeorge said:
150 MB is not very large. 400 MB isn't very large either. … I suggest you not worry about the raw file size. Make sure your VBA is not the bottleneck.
When I compress the front end it goes down to 3 MB, so is that size increase up to 400 MB normal? And how much will it keep on going up?

And no, the whole database isn't closing after the crash. What I meant was that the front end crashes on one computer but will still be open on the other computers.

I am also leaving the workplace soon, so I cannot maintain it. I need it to be able to support itself as much as it can.
 

omarrr128

Member
Local time
Today, 12:36
Joined
Feb 5, 2023
Messages
69
plog said:
Without being able to dig into it and poke around we can only give you general advice. … I'd also recommend a maintenance schedule where everyone knows it's not available.
A maintenance schedule isn't really possible, as I will be leaving the workplace and no one else is really knowledgeable enough to maintain it.

They may be able to compact and repair, but I need to do anything else I can beforehand.
 

Galaxiom

Super Moderator
Staff member
Local time
Today, 21:36
Joined
Jan 20, 2009
Messages
12,852
Are you creating tables, or adding and removing temporary records from tables in the front end? If so, that will cause bloat.

Move any of this activity into a separate temporary database that is created on demand.
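One way to do that (a sketch; the path, `tblWork`, `tblSource`, and `Backend.accdb` are all placeholder names) is to create a scratch .accdb with DAO, work inside it, and delete the file afterwards, so the bloat never touches the front end or the real back end:

```vba
Public Sub BuildScratchDb()
    Dim ws As DAO.Workspace
    Dim tmpDb As DAO.Database
    Dim tmpPath As String

    tmpPath = Environ("TEMP") & "\scratch.accdb"
    If Dir(tmpPath) <> "" Then Kill tmpPath    ' discard any old copy

    Set ws = DBEngine.Workspaces(0)
    Set tmpDb = ws.CreateDatabase(tmpPath, dbLangGeneral)

    ' Pull working data in from the back end; names are hypothetical
    tmpDb.Execute "SELECT * INTO tblWork FROM " & _
        "[;DATABASE=" & CurrentProject.Path & "\Backend.accdb].tblSource", dbFailOnError

    ' ... do the temporary processing against tmpDb here ...

    tmpDb.Close
    Set tmpDb = Nothing
    Kill tmpPath                               ' the bloat leaves with the file
End Sub
```

Because the scratch file is recreated on demand, it never needs a compact and repair of its own.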
 

Galaxiom

Super Moderator
Staff member
Local time
Today, 21:36
Joined
Jan 20, 2009
Messages
12,852
Keep a fresh front end in a central location and copy it to the users' sessions every time they open the application.
This can be done with a batch file.
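A minimal .bat along those lines (the server share and local folder are example paths only) might look like:

```bat
@echo off
REM Copy a fresh front end from the server to this user's machine,
REM then open the local copy. Paths below are examples only.
set SRC=\\Server\AccessApps\HospitalFE.accdb
set DEST=C:\AccessApps\HospitalFE.accdb

if not exist C:\AccessApps mkdir C:\AccessApps
copy /y "%SRC%" "%DEST%"
start "" "%DEST%"
```

Users launch the batch file from a shortcut instead of opening the front end directly, so every session starts from a clean, un-bloated copy.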
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 06:36
Joined
Feb 28, 2001
Messages
27,191
I worked on a U.S. Naval Reserve Personnel machine that had to be up pretty much no matter what you did to it. But that is not consistent with any normal operating system. Windows, UNIX, OpenVMS, and a few others reach a moment where you ABSOLUTELY POSITIVELY must drop the system to allow for garbage collection and reclamation. Access "bloat" (the phenomenon where the files grow in size) is one example of the garbage collection problem. Call it a cost of doing business. Even during Operation Desert Shield and later Desert Storm, our systems had built-in down time to allow reboots and restarts.

My weekly down time for my Access security-related operations tracking was that we got everyone to agree on a time that the DB could go down with minimum impact. Oddly, because of night-shift operations being busy due to things happening in other time zones, we picked Noon on Fridays. When I took my systems down, the BE would get backed up, then Compressed & Repaired. If there was a new FE pending for release, I would stage that as well. I had one hour to get everything done. As long as I was assiduous in my testing leading up to the maintenance hour, it was not a problem.

Part of the problem is that both Windows and Access (both Microsoft products) use similar memory allocation and reclamation methods, which are inherently imperfect. What causes the bloat is that when you take out a chunk of memory as a scratchpad in which to work, and something else then does the same thing to an adjacent piece of memory, the release process can be unable to re-merge the released memory chunks, so these disconnected and unusable chunks slowly build up.

For Windows, what happens is that either Windows itself or some program just crashes if you wait too long to reboot and the system scratchpad gets depleted; i.e., you don't get a choice. Though I must say that Win10 is FAR better at longevity than Win6 or Win7 ever were. And let's not even talk about earlier versions, OK? The memories are too painful.

For Access, the bloat keeps going until you deplete system virtual memory because too much of the DB is bloat and it clogs up the Virtual Memory (SWAP & PAGE) file. There, a reboot doesn't help. You need to run the Compact & Repair to force proper memory reclamation. You have yourself documented in your request just how much bloat you reclaim.
omarrr128 said:
What can I do to keep it running fast and small sized?

It is the nature of the beast that you cannot unless you can do away with DELETE and UPDATE operations altogether. But even then, SELECT queries use some scratchpad memory to organize the records to be returned, which would lead to eventual bloating - just not as often and not as bad.

Even the "Big Boys" need down time to do things like optimize indexes and reallocate data bucket loads. Those were ORACLE terms but the concept is similar for other vendors like SQL Server, SYBASE, INFORMIX, and (now defunct) ShareBase. It's a fact of life that running software gets "dirty" and needs periodic maintenance. If you don't build that into your schedule and MAKE IT STICK, you will have a crashed and unrecoverable database on your hands.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 07:36
Joined
Feb 19, 2002
Messages
43,293
omarrr128 said:
When I compress the front end it goes down to 3 MB so is that size increase up to 400 MB normal?
No, your code or processes are causing bloat.

@Galaxiom suggested a fresh copy every time the user opens the FE. That is a method I use. My users have a shortcut that runs a .bat file loaded on the server and the .bat file has four lines of code. One of them copies the master copy of the FE to the local C drive.

To handle bloat caused by temporary add/delete actions, use a template BE that contains empty tables. When you need to refresh the temp data, copy the master template to your local drive, then run the append process to import the temp data. You never have to worry about deleting it, because you always get a clean, empty database each time you import.

If you are using make-table queries, you should probably rethink the process and just bind your reports to select queries. If that is too much trouble to change, then use a template table for that action as well: instead of a make-table query, use append queries to append data to the template BE, which is replaced each time you run the process.
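That template-BE refresh can be sketched like this (the file names `TempTemplate.accdb`, the table `tblReportData`, and the query `qrySelectReportData` are all invented for illustration):

```vba
Public Sub RefreshTempData()
    Dim tmpPath As String
    tmpPath = Environ("TEMP") & "\TempData.accdb"

    ' Overwrite the local working copy with the empty master template;
    ' the old, bloated copy is simply discarded
    FileCopy CurrentProject.Path & "\TempTemplate.accdb", tmpPath

    ' Append fresh data into the clean template; nothing to delete later
    CurrentDb.Execute _
        "INSERT INTO tblReportData IN '" & tmpPath & "' " & _
        "SELECT * FROM qrySelectReportData", dbFailOnError
End Sub
```

Replacing the file wholesale takes the place of both the delete step and the compact, which is where the bloat was coming from.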
 

ebs17

Well-known member
Local time
Today, 13:36
Joined
Feb 7, 2020
Messages
1,948
omarrr128 said:
When I compress the front end it goes down to 3 MB so is that size increase up to 400 MB normal?
No, it's scary; a lot of clumsy and incomplete actions must be happening there.

I think you have to work on two main points here.

1) VBA code
Objects such as recordsets should be closed as early as possible (=> recordset.Close), all the more so when such objects are created and used in tight loops.
You should also look at your variables and their lifetimes. Here, too, free up memory individually as soon as possible.

2) Temporary records and temporary tables
These should be avoided as far as possible, both in the front end and in the back end with the master data.
Write correct queries that solve a complete task instead of slogging through saved intermediate steps.
If temporary tables are required, move them consistently into an additional temporary backend (as has already been said). You don't have to dispose of garbage that you don't generate in your actual work environment.
 

ebs17

Well-known member
Local time
Today, 13:36
Joined
Feb 7, 2020
Messages
1,948
omarrr128 said:
The backend has gone up to 150 MB now from like 5 MB
Is the backend also an Access file?

If the application is important and runs 24/7, a server-based database management system (SQL Server & Co.) should definitely be used here. Such an active system is a service, not just a file; it allows maintenance such as backups at runtime and provides much more security.
 

Gasman

Enthusiastic Amateur
Local time
Today, 12:36
Joined
Sep 21, 2011
Messages
14,311
omarrr128 said:
Maintenance schedule isn't really possible as I will be leaving the workplace … They may be able to compact and repair but I need to do anything else I can beforehand
I always created documentation so that anyone else could look after and use the DBs I made.
 

amorosik

Member
Local time
Today, 13:36
Joined
Apr 18, 2020
Messages
390
omarrr128 said:
So I've created a database that is used in a hospital. It is used 24/7 so closing the database to make changes isn't really an option. … What can I do to keep it running fast and small sized?


Do you mean you put the information used by hospital staff, who need 24/7 access, into a classic Access file?
 

AngelSpeaks

Active member
Local time
Today, 06:36
Joined
Oct 21, 2021
Messages
417
I've never heard of any IT process not having downtime, and I worked at a hospital for six years. I went into labor, seven weeks early, during a scheduled downtime and had to tell the nurses the day of my ultrasound so they could manually look it up.

Pat's suggestion is probably the best option.
 

ebs17

Well-known member
Local time
Today, 13:36
Joined
Feb 7, 2020
Messages
1,948
omarrr128 said:
How to make Database ... faster
Use optimal programming.
Do you have enough self-criticism to appreciate that something put together that works isn't necessarily designed to work WELL?
 

NauticalGent

Ignore List Poster Boy
Local time
Today, 07:36
Joined
Apr 27, 2015
Messages
6,341
omarrr128 said:
Maintenance schedule isn't really possible as I will be leaving the workplace and no one else is really knowledgeable enough to maintain it.
This is unacceptable. If the application is critical or even important, they will NEED someone to maintain it. NO system is maintenance free.

They need to keep you on retainer, train someone to run the maintenance routine, or hire your replacement. If they fail to do so, they will need to have a back-up plan for when something goes awry - and it will, probably at the worst possible time.
 

mike60smart

Registered User.
Local time
Today, 12:36
Joined
Aug 6, 2017
Messages
1,911
omarrr128 said:
So I've created a database that is used in a hospital. It is used 24/7 so closing the database to make changes isn't really an option. … What can I do to keep it running fast and small sized?
Are you able to upload a copy of the database with no data so that we can look at the structure of the tables?
 

Cotswold

Active member
Local time
Today, 12:36
Joined
Dec 31, 2020
Messages
528
It is unusual to have these issues with a small database.

I'd look at the users. Are they opening multiple copies on the same PC and leaving them open, maybe for days? Are they switching off the PC without correctly closing one or more copies of the software? Maybe a form is left open in editing mode for hours or days. Do a little spying around and check how they are actually using your software; maybe write something to check on them. Set deletes to hide records instead of removing them, so you can see how many deletes they are doing and why, and record how long a screen stays open in editing mode on a per-user basis.

Users can be very resourceful in devising methods of doing things you would never imagine. Your software should only allow the users to do what you allow them to do. If you don't want them to have multiple copies open, stop it from happening.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 12:36
Joined
Sep 12, 2006
Messages
15,658
omarrr128 said:
Maintenance schedule isn't really possible as I will be leaving the workplace … They may be able to compact and repair but I need to do anything else I can beforehand
What do you mean by "leaving the workplace"? Do you mean no longer being available to manage the database, or just taking leave? That's exactly why IT departments are resistant to Access: the organisation can become dependent on something that has deficiencies, with no ongoing support mechanism in place.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 06:36
Joined
Feb 28, 2001
Messages
27,191
Cotswold said:
It is unusual to have these issues with a small database. … If you don't want them to have multiple copies open stop it from happening.

The only way to get that level of reclaimable bloat is to have a lot of updating or a lot of insert/delete activity. Leaving a DB open but doing nothing (including no timers running in the background) does not cause bloat. Being in edit mode does not contribute to bloat until you actually commit the edit. Opening multiple copies will dink around with the file locking, but if the copies are open AND IDLE they do not contribute to bloat. When an ACCESS database is doing nothing, it is static in terms of I/O load, data growth, and CPU usage.

Your last paragraph is absolutely correct, though. In my first "real" job, we understood that our best testers were the customers of our products, because their people would do things we couldn't even imagine. Which is why for us, "user" was and still is a 4-letter word.
 
