Compact/Repair problem

Pat Hartman

Super Moderator
Staff member
Joined
Feb 19, 2002
Messages
43,266
I use Carbonite at home and at one of my clients, BUT it is important to have many backups during active development. I have two types of backup in my FEs.

I keep a table in the FE recording when the last backup was made. When the hidden opening form closes, code checks whether anything was updated since the last backup and, if it is "me" logged in, asks to make a backup. That just copies the FE to a local backup folder and to a server backup folder. I keep the backups for a couple of weeks, so there are frequently 20 or more.

The other backup technique, which I either automate like the first or run manually, is to export all objects to text. I do this if I have a database that tends to corrupt. It allows me to rebuild it from the text versions.
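The close-time copy described above can be sketched in a few lines. This is a minimal illustration, not Pat's actual code (which is VBA inside the FE); the function name and folder arguments are made up for the example.

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_frontend(fe_path, backup_dirs):
    """Copy the front-end file to each backup folder with a timestamp suffix,
    so repeated backups never overwrite one another."""
    fe = Path(fe_path)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    copies = []
    for folder in backup_dirs:
        dest = Path(folder) / f"{fe.stem}_{stamp}{fe.suffix}"
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(fe, dest)  # copy2 preserves file timestamps
        copies.append(dest)
    return copies
```

Passing both a local folder and a server share as `backup_dirs` gives the two-destination behavior described in the post.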
 

The_Doc_Man

Immoderate Moderator
Staff member
Joined
Feb 28, 2001
Messages
27,175
Where I work, we have an Active Directory setup with a dedicated file server. All important data goes to a partition on the file server. The server has a RAID 10 setup that creates two exact mirrors across four drives.
Every 6 hours, backup software mirrors all shared files to three external hard disks and one NAS drive.
Every 12 hours the contents of the file server are mirrored to Dropbox, every 24 hours to Microsoft OneDrive, and every 48 hours to Google Drive.

Are we too cautious? :)

The correct answer is yes, no, and maybe. Which one is correct depends on how much value is placed on the data being copied AND on two other factors: How difficult would it be to reconstitute the data from other sources? And how much user disruption is involved in making those backup copies? HINT: There is a LOT of depth behind the second question.

There is a little "gotcha" for databases, because it is possible for a "transparent" backup to capture an unusable mirror of the file: for instance, one taken while the database happened to be going through the internal overhead associated with an insert, update, or delete, which sometimes results in massive physical restructuring of the tables.

I know from personal experience that this "gotcha" is possible for ORACLE, because I won a bet with the "external backup" software rep. He bought me a steak dinner at a decent restaurant in Fort Worth after an incident when his backup software was unable to correctly restore an ORACLE database even though we followed every step of the documented process. After that incident, about 15-20 years ago, ORACLE had to introduce a new back-end state called "BACKUP" in which all tables were stabilized and all ongoing transactions were logged in a TEMP area that was visible to service users touching altered tables. In short, one initiated the state, took the image, and then discontinued the BACKUP state, which caused ORACLE to "roll forward" the transactions that were still pending. I.e. it played "catch up."

When you took the image of the DB in that BACKUP state, you could restore the database to the date/time (and instantiation number) of the moment of completion of establishing the BACKUP state. You then had to roll forward any pending transactions. If you didn't do that state change, you could not restore anything because the copy operation copied mixed (non-coherent) table instantiation numbers.

Why am I telling you this about an Access DB? Because Access doesn't have a BACKUP state where it stabilizes the main tables so you can make a coherent backup. To assure a good backup of an Access backend, you must assure that no one is in it at the moment. Now, all of that staging of copies of the backup? That level of expense is up to you. It would meet U.S. Department of Defense guidelines regarding handling of backups up to and including regulations on offsite storage of operational backups. So my answer for the staging of the backup files is "no, not too cautious" - if the file contents are that valuable to you.

The key to the value of the whole process is what you do when you make the backup copy from the "live" file to the backup file. At that moment, if the DB is in a precarious state, you've got nothing. If you take pains to assure the backup is good, or if the back end file is a non-Access SQL server or ODBC-type server that has a BACKUP state or equivalent AND you use it for backups, then you are as good as gold.
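One practical way to apply Doc's "no one is in it at the moment" rule is to check for the lock file Access maintains while a database is open (`.laccdb` for `.accdb` files, `.ldb` for the older `.mdb`). A rough sketch, with made-up names; note that the check is only a best-effort guard, since a user could still open the file between the check and the copy:

```python
import shutil
from pathlib import Path

def backup_if_idle(be_path, dest_dir):
    """Copy an Access back-end only if its lock file is absent,
    i.e. nothing currently has the database open."""
    be = Path(be_path)
    # Access creates <name>.laccdb (or .ldb for .mdb) while the file is open
    for ext in (".laccdb", ".ldb"):
        if be.with_suffix(ext).exists():
            return None  # someone is in the database; skip this cycle
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / be.name
    shutil.copy2(be, dest)
    return dest
```

For a real deployment you would also want to verify the copy afterwards (for example by opening it), since the lock-file check alone cannot guarantee the coherent snapshot Doc describes.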
 

KitaYama

Well-known member
Joined
Jan 6, 2022
Messages
1,541
I have two types of backup in my FE's. ......... The other backup technique, which I either automate as the first one or run manually is to export all objects to text.


@Pat Hartman
I have a process very similar to what you explained. After each backup, I keep the latest 5 backups and delete the rest with VBA.

So without touching anything, I always have only 5 backups in the specified path.
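The keep-the-latest-five pruning can be sketched like this (KitaYama's version is VBA; this is an illustrative Python equivalent with hypothetical names):

```python
from pathlib import Path

def prune_backups(folder, pattern="*.accdb", keep=5):
    """Delete all but the newest `keep` backup files matching `pattern`,
    judging age by file modification time."""
    files = sorted(Path(folder).glob(pattern),
                   key=lambda p: p.stat().st_mtime,
                   reverse=True)  # newest first
    removed = []
    for old in files[keep:]:
        old.unlink()
        removed.append(old)
    return removed
```

Run right after each backup copy, this keeps the folder at a constant five files.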
 

KitaYama

Well-known member
Because Access doesn't have a BACKUP state where it stabilizes the main tables so you can make a coherent backup. To assure a good backup of an Access backend, you must assure that no one is in it at the moment.
@The_Doc_Man
I wasn't aware of most of what you explained. I appreciate your detailed explanation. It's always good to read your posts.
But we have a rule here: no Access back end. Everything goes into SQL Server. It's a shame we don't have a dedicated SQL Server machine; it's installed on the DC (Domain Controller). A stored procedure saves a backup of the database to a folder on the file server every 2 hours (triggered by Windows Task Scheduler).
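The body of such a stored procedure is typically little more than a `BACKUP DATABASE` statement. A hedged sketch (database name and share path are placeholders, not KitaYama's actual setup):

```sql
-- Full backup with a date/time-stamped file name; MyDb and the UNC path are placeholders
DECLARE @file nvarchar(260) =
      N'\\FileServer\SqlBackups\MyDb_'
    + CONVERT(nvarchar(8), GETDATE(), 112)                              -- yyyymmdd
    + N'_'
    + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), N':', N'')          -- hhmmss
    + N'.bak';

BACKUP DATABASE MyDb
    TO DISK = @file
    WITH COMPRESSION, CHECKSUM, INIT;
```

Unlike a raw file copy of an Access back end, `BACKUP DATABASE` produces a transactionally consistent image even while users are connected, which is exactly the "BACKUP state" behavior Doc describes that Access lacks.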
 

Pat Hartman

Super Moderator
Staff member
I was quite happy to have the number I have. I can end up with 3-4 generated in one day since they are generated each time I close the database. I had to go back 15 backups last week to get the one before I broke the form. That's why I don't do automatic deletes.

PS, I am only talking about FE backups, which are made on the developer's computer under the developer's control. BE backups are totally different and, as Doc said, difficult during the day while there is constant activity.
 

KitaYama

Well-known member
I had to go back 15 backups last week ..........................That's why I don't do automatic deletes.
Because of the space available to us on the file server (5 GB per user), we cannot keep too many files around.
I don't publish a new version of the FE until it's final.
In cases like the one you described above, I extract the object from a copy of the FE on a client PC.
 

Pat Hartman

Super Moderator
Staff member
@KitaYama I'm not talking about publishing anything. I'm talking about how I back up MY DEVELOPMENT copy of the FE. The server backups handle backing up the released copy of the production version of the FE. I usually keep only 3 there and except for the current master, they are always zipped to minimize the chance that someone might try to use them.

In order to protect myself from my own stupidity, every time I close the database, the code looks at who the user is and, if it is me, asks to make a backup. If I made changes since the FE was opened, no matter how insignificant, I say yes. That is why I end up with so many. When I'm working on site at a client, I back up to the server (or an attached external drive) and to my local drive.

If you have limited space on the server, then plug in a portable hard drive used just for your development backups. Even terabyte-size drives don't cost all that much these days. The drive I have connected at home would have cost HUNDREDS of THOUSANDS of dollars in 1968 when I took my first job writing COBOL, and it would have been half the size of my office. I paid about $250 for it and it has been worth every penny. I'm sure we've all had one of those moments where time stops and we think, "I shouldn't have done that," and then the world comes crashing down on us.
 

The_Doc_Man

Immoderate Moderator
Staff member
It's a shame we don't have a dedicated sql server and it's installed on DC (Domain Controller)

Now THAT is considered (by the U.S. Dept. of Defense) to be a hazard. Since you are NOT on a military site or U.S. civil government site, you can do that. But in general, if you can afford it, you do not mix and match servers and their roles. A DB server is not a file server is not a Domain Controller is not an Applications server is not.... you get the message. The U.S. D.o.D. believes in - and mandates - purity of purpose. Partly also because if the server that just died had only one purpose, it will be easier to get that service back up quickly.
 
