Data Loss during Network Interruption

Pisteuo

I have an Access 2007 database that is split, with the back-end on a shared server that is experiencing unidentified network problems. While working in the front-end, users periodically receive the following message:

“Your network access was interrupted. To continue close the database and then open it again.”

If a user is working in a form that contains queried data, the data is lost and the file becomes corrupt. This happens even though the front-end is on the user's computer and no queries are actively running. My question is: why must the network connection stay stable even though the database is split and no queries are occurring? The form's record source is a linked table, but I assumed that connection isn't needed until the form is closed. Could you shed some light on how linked table connections behave? Thank you.
 
This is a somewhat generic description of the subject, but let's take the example of an Excel or Word file and compare it to Access (actually we're comparing them to the Access database engine, which is Jet/ACE). When you work with Excel or Word, whether the file is on your local hard drive or on a network share, a copy of the entire file is loaded into the local machine's RAM while you work with it. When you close and save the file, the updated file is written back to disk, replacing the original. If there is an interruption - a network interruption, a power loss at the local machine, whatever - the worst that happens is that the changes you were working on don't get saved. The original file remains intact in its entirety on the hard drive.

That's not the case with a file-server engine like Jet/ACE. When you open an Access front-end (one with a Jet/ACE back-end), it loads a portion of the original file into local RAM for processing (whatever portion it needs based on the design of the application). Now you have a file that is scattered across various places: some parts on the hard drive (network or local), and other parts in memory on the local machine(s) that have the file open. The file is essentially re-assembled and saved once all the front-ends have been closed (or at least all the objects that hold a data connection) and the processed data has been sent back to the file on the hard drive. This is by design, for speed and efficiency, so the entire file doesn't have to travel across the network each time it is accessed. The downside is that if there is an interruption in the connection between the hard drive and the local RAM, portions of the file are lost, the file cannot be re-assembled, and you end up with a corrupt database.

Now, if you use a client-server database for the back-end, like SQL Server, this does not happen. In a client-server application, the client (i.e. the Access front-end) simply sends requests to the server. All the processing is done on the server and only the results are sent back to the client. All of the original data remains intact on the server, so if there is a loss of connection, again, the only thing you might lose is the changes you were working on at the time of the interruption.
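To make the contrast concrete, here is a small conceptual sketch in Python of the two models described above. It is purely illustrative: the class and method names are invented for this example and are not real Jet/ACE or SQL Server APIs. The point is simply where the data lives when the connection drops.

```python
class FileServerSession:
    """File-server model (how Jet/ACE behaves, conceptually): the client
    pulls raw pages from the shared file into local RAM, and changed
    pages are only written back when the session closes."""

    def __init__(self, shared_file):
        self.shared_file = shared_file   # pages living in the back-end file
        self.local_pages = {}            # dirty pages held in local RAM
        self.connected = True

    def edit_page(self, page_no, data):
        if not self.connected:
            raise ConnectionError("Your network access was interrupted.")
        # The edit exists only in local RAM for now.
        self.local_pages[page_no] = data

    def close(self):
        if not self.connected:
            # Dirty pages in local RAM can never be written back:
            # the file on the server is now incomplete/corrupt.
            raise ConnectionError("dirty pages lost; file may be corrupt")
        # Normal close: pages are re-assembled into the shared file.
        self.shared_file.update(self.local_pages)


class ClientServerSession:
    """Client-server model (how SQL Server behaves, conceptually): the
    client sends a request; the server applies it immediately against
    its own copy of the data."""

    def __init__(self, server_tables):
        self.server = server_tables
        self.connected = True

    def execute(self, table, key, value):
        if not self.connected:
            raise ConnectionError("request failed; server data untouched")
        # The change is committed server-side as soon as the call returns.
        self.server.setdefault(table, {})[key] = value
```

In the file-server case, dropping `connected` between `edit_page` and `close` leaves the shared file missing the edited pages; in the client-server case, a dropped connection can only lose requests that were never sent, never data the server already holds.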
 
Thank you for your explanation. This confirmed my plans to split to SQL Server.
 
