Is Access reliable?

ahmed_optom

Hi all,

I have been building an access database for our healthcare company for the past few months.

I have learnt so much. However, we have had several corrupt database issues, and I'm starting to think that Access just isn't up to it.

I am using a split database, with a batch file to load fresh front end files for about 20 users.

Several times a day, a front end will just become corrupt; I ask the user to restart, and then it works. Today all the front ends just stopped working, with a run-time error on start-up. Nothing was done to the front end in the past few days; it has been deployed for over a week. I tried to recover it, and now the database is corrupted, the autonumbers are all reset, and the data was being overwritten.

Is this poor design? I noticed the computer that has the most trouble is connected via wi-fi and regularly drops out. I'm starting to think that this is the problem, but I would hope that Access would not be so fragile as to lose the database every time a computer crashes or a wi-fi signal drops.

What are your thoughts? I'm in two minds as to whether to completely redevelop the Access database, or ditch it and buy a software package.
 
Have you tried a recompile and a compact/repair yet? Yes, Access is reliable if it is used the way it was intended. If your code is error-prone, no software will be fun to work with.
 
What are the specifics regarding the FE batch file? Also, does every user have their own copy of the FE on their computer?
 
Have you tried a recompile and a compact/repair yet? Yes, Access is reliable if it is used the way it was intended. If your code is error-prone, no software will be fun to work with.


Yes, I did that. The front end now opens, I can see the repair bar at the bottom, and then it says "the last time you opened it, it caused a serious error" etc.

It also says "id is not an index in this table".

I appreciate it may be my code; however, I don't understand how the front ends fail, and why it works on some machines and not others. They are all identical hardware.
 
Decompile is a separate operation. Did you do that for sure?


 
What are the specifics regarding the FE batch file? Also, does every user have their own copy of the FE on their computer?

Yes, the batch file runs on start-up, pulls a copy of the FE from a network folder, then places it on the user's desktop and runs it.

On most computers this works fine. Some seem not to copy properly.
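For reference, a minimal FE-deployment batch file of this kind might look like the sketch below (all paths and file names are placeholders, not the actual setup). Copying to a temporary name and renaming only on success helps avoid launching a half-copied file, which could explain the machines that seem not to copy properly.

```bat
@echo off
rem Hypothetical FE deployment script - all paths are placeholders.
set SRC=\\server\share\FrontEnd.accdb
set DEST=%USERPROFILE%\Desktop\FrontEnd.accdb

rem Copy to a temp name first, so a failed copy never replaces a good FE.
copy /Y "%SRC%" "%DEST%.tmp"
if errorlevel 1 (
    echo Copy failed - keeping the existing front end.
) else (
    move /Y "%DEST%.tmp" "%DEST%"
)

start "" "%DEST%"
```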

Then there are the backend issues... A computer crashed whilst making a new entry; this seemed to create a duplicate ID on the backend, and nothing seemed to be able to fix it. I immediately copied the backend and then tried to fix it by compact and repair, manually removing conflicting data.

Now if I cycle through records, it goes through them at random rather than in order, and the autonumbers are overwriting existing data.

Like I have said, I appreciate it could be my coding etc.; however, when I use a website with a MySQL backend, if I close the browser or my PC crashes, it shouldn't crash the entire website. I feel Access in a split form is too fragile.
 
Decompile is a separate operation. Did you do that for sure?



Yes, I decompiled the front end; it created a copy which was even more corrupted.

It just left me with the MSysCompactError table with 200+ entries; some are of the form

You tried to assign the Null value to a variable that is not a Variant data type.

Some are about permissions.

I have a working copy of the front end; it's just that when I try to connect it to a backup of the backend, the autonumbers are all reset and the data gets overwritten.
 
Take the indexes off the table with duplicates. Manually remove the records that are duplicated, or write a SQL statement. Then add your index back on. Did you try that on the backend?

The question I have now is how did the program let you bypass the table index?
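A sketch of the "write a SQL statement" step above, assuming the table and key are named PatientTable and PatientID (hypothetical names, not the poster's actual schema):

```sql
-- Find the IDs that occur more than once (table/field names are placeholders).
SELECT PatientID, Count(*) AS HowMany
FROM PatientTable
GROUP BY PatientID
HAVING Count(*) > 1;

-- Inspect those rows and delete the unwanted copies,
-- then re-create the primary key / index on PatientID.
```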
 
What does your table relink routine look like? Have you used DSN-less connections?
 
Take the indexes off the table with duplicates. Manually remove the records that are duplicated, or write a SQL statement. Then add your index back on. Did you try that on the backend?

The question I have now is how did the program let you bypass the table index?




Yes, I did that, and the autonumbers are all reset and the data is not in order. So, for example, if I have a form with patient details and I cycle with the navigator to the next patient, and the next, and so on, it will start on patient 4, then go to patient 2456, then patient 67, etc. It should be 1, then 2, 3, 4, etc.


If I started again, made a new database, and hosted it on a MySQL server, would the chances of such massive database corruption be lowered? I know this is impossible to answer, but I just don't know whether this is my coding or I was asking too much of Access.
 
What does your table relink routine look like? Have you used DSN-less connections?

I relinked several times: firstly when I tried to restore the data from a backup, and now on my home PC, where I have the FE and BE next to each other on my desktop. I relinked the tables, and they work.

I was unaware of DSN-less connections, and I am googling them now. Are they a potential way to restore database integrity or prevent corruption in the future?
 
I would wager that if you google this type of problem, you will find that a very low percentage of Access users worldwide have it, so I am thinking you need to make new tables, extract the data from the old tables, and reload the data.

Then start working on trapping the error.
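A minimal sketch of the kind of error trapping meant here, in VBA (the routine name and save logic are made up for illustration):

```vba
' Hypothetical example of trapping and logging errors instead of
' letting an untrapped runtime error kill the front end.
Public Sub SaveVisit()
    On Error GoTo ErrHandler

    ' ... the actual save logic goes here ...

ExitHere:
    Exit Sub

ErrHandler:
    ' Record the error somewhere persistent, then fail gracefully.
    Debug.Print "Error " & Err.Number & ": " & Err.Description
    MsgBox "The record could not be saved. Error " & Err.Number, vbExclamation
    Resume ExitHere
End Sub
```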

You are going to run into headaches if you try to scale with MySQL instead of something like SQL Server Express.


 
Doesn't sound like a corruption issue, just the records not being displayed in the order you expect. Try sorting on the ID number.

Note that records in any db (Access/SQL Server/MySQL etc.) are stored in no guaranteed order. You have to provide an order, such as ID or date of entry.
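For example, basing the form on a sorted query rather than on the raw table (table and field names assumed for illustration):

```sql
-- Records come back in no guaranteed order unless you ask for one.
SELECT *
FROM PatientTable
ORDER BY PatientID;
```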

Perhaps someone has accessed the backend table directly and changed the default order for the table.

Access is pretty stable but, as mentioned by others, only if used as intended. If your users are connecting via a wireless network, for example, I would not recommend Access as a backend; use SQL Server or MySQL, which can handle interruptions to the back end during transactions better - although you will still need code to recover the situation if a transaction is interrupted.
 
I was unaware of DSN-less connections, and I am googling them now. Are they a potential way to restore database integrity or prevent corruption in the future?



DSN-less connections are a way to use native code to standardize your method of automatically establishing connections to data sources like MySQL, spreadsheets, Access databases, etc.
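A common DSN-less relink pattern in Access VBA looks roughly like this (the server, database, and driver names are placeholders; this is a sketch, not a drop-in routine):

```vba
' Hypothetical DSN-less relink routine: points every linked ODBC table
' at the backend using a connection string instead of a saved DSN.
Public Sub RelinkTables()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Const CONNECT As String = _
        "ODBC;Driver={SQL Server};Server=MyServer;Database=MyDb;Trusted_Connection=Yes;"

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then   ' only touch linked tables
            tdf.Connect = CONNECT
            tdf.RefreshLink
        End If
    Next tdf
End Sub
```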

Also, if you are using test data, share the files; it is easier to help that way.


 
Doesn't sound like a corruption issue, just the records not being displayed in the order you expect. Try sorting on the ID number.

Note that records in any db (Access/SQL Server/MySQL etc.) are stored in no guaranteed order. You have to provide an order, such as ID or date of entry.

Perhaps someone has accessed the backend table directly and changed the default order for the table.

Access is pretty stable but, as mentioned by others, only if used as intended. If your users are connecting via a wireless network, for example, I would not recommend Access as a backend; use SQL Server or MySQL, which can handle interruptions to the back end during transactions better - although you will still need code to recover the situation if a transaction is interrupted.


There are also duplicate primary IDs.

E.g. there may be two patients with an ID of 4. I don't know how this could have happened.
 
Upload your files if you need more help


 
Pat,

I pretty much did what you suggested (in a roundabout way) and have now recovered the database to a functional state with most of the data intact. Thanks.

Pat, I am trying to understand the first issue you raised about autonumbers. I do have forms with subforms that have the same table as their record source. For example, I have a form with a subform that simply shows the previous record set for that particular patient. E.g. for patient ID 2, in the main form you are adding today's visit data, and the subform shows the previous visit's information.

Is that the kind of form set up which you are referring to?
 
Pat,
I do have forms with subforms that have the same table as their record source. For example, I have a form with a subform that simply shows the previous record set for that particular patient. E.g. for patient ID 2, in the main form you are adding today's visit data, and the subform shows the previous visit's information.


Maybe your question is solved already.

I wanted to test it so I made a test database with no macros.

The database forms mimic your scenario, I think. There is a client form that tracks power of attorney history and client activity history, and shows the most recent client activity detail.

Some code would need to be added to refresh the record sources if any dates were modified, causing the most recent date to change, but other than that it seems to work as your requirements indicate.

The real kicker with the test db is that the Person table holds powers of attorney, clients, and employees, so the subforms on the client sheet connect to the Person table many times with no issues.
 


I noticed the computer that has the most trouble is connected via wi-fi and regularly drops out. I'm starting to think that this is the problem, but I would hope that Access would not be so fragile as to lose the database every time a computer crashes or a wi-fi signal drops.

Here is the problem. The world of TCP/IP has multiple ways to characterize things, but one particular dichotomy stands out for this class of problem. There are protocols that operate via a formal, persistent connection to a socket, and protocols that are effectively stateless, supplementing each exchange with cookies. Simple examples: FTP and its family are connection-oriented protocols; HTTP and its family are, at the application level, stateless.

Various physical connection types are more or less suitable for one type over the other. As it happens, wireless connections do better with stateless, cookie-based setups, whereas hard-wired physical connections better support the protocols that use persistent, socket-like connections.

Access normally uses SMB (Server Message Block) for file transfers, because that is Microsoft's preferred protocol for file sharing. For better or worse, this is a connection-oriented protocol that works much better over hard-wired physical connections. The protocol establishes sequence numbers in the message headers so that it can tell when a message was lost. By contrast, web-based exchanges are brief, and the contents of the most recent cookie act like a bookmark, so the outbound call from your system can "tell" the server where it left off in the most recent data exchange.

This distinction is of incredible importance when you have a network prone to data loss, and wireless connections are the MOST prone to connection loss. What happens is that you are pumping a lot of data across your link - perhaps several records at a time, or several HUNDRED records if it is a big SQL statement. When you send a very brief transmission, it costs you nothing to pass in a cookie or two as bookmarks.

But when your protocol does not use cookies, the information on your current progress lives in the socket, not in a cookie. If you lose the connection, you lose your place, because TCP/IP rarely, if ever, allows a reconnection. (You can thank your site's security managers for that restriction, even though it is probably a good one in general.) So in essence, you lose your bookmark from the socket's data structure and can never get it back, due to the "no reconnection" rule.

Now, mechanically, what happens is that you do a bulk update via SQL, and Access has to read in the table, perform the update, and write back the updated records. But if in the middle of this you drop your signal, you have written back only a fraction of the whole table. As a result, that half-written table has become corrupt.

Bottom line? Your wi-fi connection is not well suited to a database using SMB protocols. Find another connection, or disenfranchise the wi-fi users - they are almost certainly the source of your data corruption. There is no complete solution to this problem unless you can use Citrix or some other remote connection manager, and even with Citrix the problem is not 100% solved - it is just improved.
 
As mentioned in post #13 - but very briefly. :D

I've never tested it in this sort of scenario, but using BeginTrans/CommitTrans may provide some protection, in that the transaction is rolled back if not committed. But personally I would use SQL Server/SQL Server Express/MySQL.
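A sketch of the BeginTrans/CommitTrans idea using a DAO workspace transaction (the table and field names are invented for illustration):

```vba
' Hypothetical use of a workspace transaction so a failure mid-update
' rolls back instead of leaving a half-written table.
Public Sub UpdateInTransaction()
    Dim ws As DAO.Workspace
    Set ws = DBEngine.Workspaces(0)

    On Error GoTo RollbackOut
    ws.BeginTrans
    CurrentDb.Execute "UPDATE PatientTable SET Reviewed = True;", dbFailOnError
    ws.CommitTrans
    Exit Sub

RollbackOut:
    ws.Rollback   ' nothing is written unless CommitTrans succeeds
    MsgBox "Update failed and was rolled back: " & Err.Description
End Sub
```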
 
