Split database, SLOW performance

dungstar

Registered User | Joined Mar 13, 2002
I split my database, compacted and repaired it, did as much as the Performance Analyzer suggested without compromising functionality and controls, converted the front end into an MDE file, and put the BE on a network path (with all the tables linked, of course), with the FE hosted locally on the testing workstation.

It seems to work but is EXTREMELY slow in performance. It takes over a minute to load a single form.

Before the database was split, when the BE was hosted locally, there was no speed problem; forms loaded in no more than a few seconds.

Specs:
The machines I tested on were a Pentium 4 with Win2000 and Office XP, and a Pentium 3 with Win2000 and Office 2000.
The database was an Access 2000 db developed under Access XP.

Is there a fairly simple way to remedy this problem or at least improve the speed so my users don't run their fists through the monitor in frustration and come hounding after me? Save me!
 
I ran into this same situation -- a split Access 2000 DB slowing down forms as they loaded -- and I found nothing in the MS Knowledgebase that directly addressed the issue. One man's bug is another man's feature.

In the absence of a solid answer I did something I hate to do: winged it with some quick, informal experiments. I found that leaving the DB in one piece, compiled and then converted to an MDE and copied onto a server (with shortcuts on users' desktops pointing to it) resulted in much better performance on this LAN.

Regards,
Tim
 
Thanks, Tim. I was thinking about trying that. I'll do that.

Do you run into any major problems if multiple users are accessing the database?
 
D,

Did I have any problems with record-locking using one whole file on a server rather than multiple front end files on desktops linked to a backend? No -- but I made this change before the DB was put into service. And, without really reflecting on it, I'm not sure why it would make any difference -- in the context of record-locking -- since under both scenarios users would be competing for data in the same tables. (It may, however, make a difference in network traffic -- a few posts on that in this forum.)

Your question did trigger another thought: there is, I vaguely recall, a "record-locking" bug in A2K. If you set record locking to the edited record but start the app from a shortcut, page locking occurs in some circumstances... See Access 2000 Developer's Handbook Vol II for this (I think).

Regards,
Tim
A link that may or may not be useful
 
If you leave your database unsplit for multiple users on a network then you may encounter major database corruption problems. The best advice is to split it. You could always split it and have each user accessing the same frontend on the server, but I prefer to give each user their own frontend. I'm afraid a performance hit is a side effect, but it's really the only way to run an Access database in a multi-user environment. Search the forum for "split database" and you'll find mountains of material on it.

Make sure you've got the latest service packs installed on each machine running Windows 2000. You could also download the latest MDAC component from Microsoft for each machine:

http://www.microsoft.com/data

It's been suggested that converting the backend to 97 helps with network traffic as it's less prone to bloating.

I also read something from Ricky Hicks at Utter Access:

"I have seen a performance issue with split over non-split databases with Access 2000 but not with Access 97 or 2002.

I can see where this could drive you nutz ... as it makes no sense.

The 6 values should load with no performance issues.

Have you tried placing a copy of both files on your local machine and testing performance? If the app is still sluggish ... it would rule out network problems."

So I would assume from that: if your database works fine locally, it might be a network problem.

Something else I do is save all my queries as named query objects; I don't leave the SQL buried in the Query Builder. That seems to help (and you've probably got to try everything before the IT guys will agree to look at the network in general!)

As you may gather, it's been a bit of an issue for me too! I sympathise totally. I have a client with a 20MB database running peer to peer which opens like a dream. I have another client running Small Business Server with Access 2000 that's about 13MB in size. It gets to 2pm and slows down completely until the backend is compacted.

Good luck, don't give in!
 
Are your tables linked via the UNC path, or a mapped drive? I used to prefer the UNC path, but I tried a mapped drive, and now it takes a few seconds for the main form to open, as opposed to the 30-60 seconds it was taking when I linked through the UNC path.
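If anyone wants to try the same comparison, relinking can be scripted with DAO rather than done table by table. A minimal sketch, where \\server\share\backend.mdb is a placeholder path, not an actual location from this thread:

```vba
' Illustrative only: relink every linked table to the back end via a UNC path.
' Swap the Connect string for ";DATABASE=S:\backend.mdb" to test a mapped drive.
Sub RelinkToUNC()
    Dim db As DAO.Database
    Dim td As DAO.TableDef
    Set db = CurrentDb
    For Each td In db.TableDefs
        ' Only linked tables have a non-empty Connect string;
        ' local and system tables are skipped.
        If Len(td.Connect) > 0 Then
            td.Connect = ";DATABASE=\\server\share\backend.mdb"
            td.RefreshLink   ' re-establish the link against the new path
        End If
    Next td
End Sub
```

Running this once for each link style makes it easy to time the same form open under both configurations.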
 
I've been sitting here for the past few hours testing out different methods.

The tables were originally linked to a mapped location. I've also tried the full UNC path. I've tried placing the split database FE & BE on the server, and the same thing with the BE converted to 2002, as well as a local FE linked to the same 2002 BE database.

Almost all of these methods took over a minute. I have found that hosting the FE locally is slightly but noticeably faster than having it on the server. I have also observed that a 2002 BE improves performance. A 2002 BE linked to an FE running on the local machine got it just under one minute, which is still not saying much, but it's an improvement nonetheless.

The fastest configurations (fastest first) were:
1. Unsplit db - local (obviously)
2. Split db - local FE and BE
3. Unsplit db - server

The last appears to be the best option at this point. This was what Tim suggested, but the issue that remains, as DBL pointed out, is data corruption.

I'm beginning to suspect that our network simply can't handle it very efficiently. I copied a 9MB file into a folder on the server and it took 55 seconds.
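That copy speed alone suggests a network problem. A quick back-of-the-envelope check in the Immediate window:

```vba
' Rough throughput for the 9 MB / 55 s copy described above:
' 9 MB * 8 bits per byte = 72 megabits, spread over 55 seconds.
' 72 / 55 is roughly 1.3 Mbit/s -- an order of magnitude below
' even old 10 Mbit Ethernet, let alone a 100 Mbit LAN.
Debug.Print Format(9# * 8# / 55#, "0.0") & " Mbit/s effective"
```

If the raw file copy can't do better than that, no amount of tuning inside Access will make linked tables feel fast.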

--------
One of the methods in the link provided by Tim was this:
"You can greatly enhance performance when opening the main database and opening tables and forms by forcing the linked database to remain open. To do this, create an empty table in the linked database, and link the table in the main database. Then use the OpenRecordset method to open the linked table. This prevents the Microsoft Jet database engine from repeatedly opening and closing the linked database and creating and deleting the associated .ldb file."

I'm not quite sure where to use this OpenRecordset method, how it works logically, or whether it will work.

For example, I opened a form and waited over a minute for it to load. Then I closed it and opened it again. Because the .ldb file still exists, the second load should be faster than the first, but there was no noticeable change in response time (maybe 5 seconds quicker).
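For what it's worth, the quoted technique usually looks something like the sketch below. The table name tblDummy is illustrative (any empty linked table works); the key is that the recordset variable lives at module level, so the connection stays open for the life of the app rather than just for one form load.

```vba
' Persistent-connection sketch: keep one recordset on a linked table open
' for the whole session so Jet doesn't repeatedly open/close the back end
' and recreate its .ldb file. Declared at module level so it stays in scope.
Private rsPersist As DAO.Recordset

Sub OpenPersistentConnection()
    ' Call this once at startup (e.g. from the opening form's Load event).
    ' tblDummy is an assumed empty table in the back end, linked into the FE.
    Set rsPersist = CurrentDb.OpenRecordset("tblDummy", dbOpenSnapshot)
End Sub

Sub ClosePersistentConnection()
    ' Call this when the application shuts down.
    If Not rsPersist Is Nothing Then
        rsPersist.Close
        Set rsPersist = Nothing
    End If
End Sub
```

Closing and reopening a form won't show the benefit by itself: the .ldb only persists while *some* connection is open, which is exactly what the module-level recordset provides.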
-----

Thank you all for all your help and all your patience, because I'm losing mine... If you have any other suggestions, please let me know.
 
I have read with great interest the above.

I am currently running a database, which is not split, on a server, with twelve users at any point in time. Each machine has a shortcut to the whole database.

Access 2K file size about 5.0MB.

I have been considering splitting for a while now, but........it runs fine (touch wood) at the moment and I only compact once a month.

What is the worst-case scenario a corruption could cause? I copy the complete file as a backup each day.

What do I do - To split or not to split - that is the question

Richio
 
I'm thinking through the same issues, and in spite of all I've read on newsgroups and in books, the fact remains that a split database may not always be the best solution. Two things I've had to look at very very closely in light of performance issues after splitting:

1. Split database requires more, and more complex, maintenance
2. Split database (almost) always decreases speed of record access

Both of these issues involve multiple other issues as well. E.g., as a contractor, the maintenance issue is huge: I need to be confident that the databases I build will be stable, and that when problems arise, there are significant measures the users can take to solve problems before they call me in. With a split database, there is the need to train someone in the basics of networking and db maintenance.

I advised a client to compact and repair daily; corruption was occurring more often, so we increased compact/repair to twice daily. Problems escalated. We finally quit compacting/repairing, and the corruption episodes have diminished! So the standard rules may not always be best, given the thousands of variables in any given situation, and the consequent impossibly big time investment to track down every problem.

So, if your users can get along well with a nightly backup and a shared unsplit mdb, then why not?

Danny
 
I don't think I've ever run a complete database on a network with multiple users where I haven't encountered corruption problems, or record locking problems at the very least, so if you've got this far without splitting it you're doing well.

I've never found that a split database is higher maintenance, in fact I find it's the opposite. If there's a problem with the frontend I can work on it without disrupting the users and issue them with a new frontend once it's been tried and tested. If you try to make changes to the frontend of an unsplit database while users are accessing it for data entry then you'll either not be able to save the changes as it's not exclusively opened or end up with major corruption problems. If you kick them out to make the changes then nobody's happy. I always split my databases for this very reason, even if it's being run on one PC with one user.

As I've said previously, I'm running a 20MB database peer to peer with no obvious performance problems; in fact it opens quicker than it does on the single machine I work on! If the PCs and the network/server are up to it, you shouldn't have a problem.
 
Here is my 2 pence worth.

I have developed network based apps since Access 2.0 first came out in 1994. I have done this at 3 different client sites.

1) One MDB versus split FE/BE.
Little or no difference in Access 2.0 or 97. I am just starting to use Access 2K and have no significant experience with that.

2) Install FE on each desktop versus install on server.
Access 2.0 and Access 97 again, no Access 2K opinion.
Depends on the network. The current client has a very fast, high-bandwidth LAN. Local FEs are a little faster, but server FEs are fast enough. At a previous client site, local FEs were noticeably faster.

3) MSACCESS.EXE on workstation versus on server
Access 2.0 and Access 97 again, no Access 2K opinion.
About the same as 2). I test Access 2.0 and Access 97 apps with a full copy of Access on my desktop and with a runtime version on a server. The local version of Access runs apps a little faster than the server version.

RichM
 
The plot thickens.

I have two fairly unsophisticated A2K DBs (sitting whole on a Windows 2000 Adv Server box) that have run on a LAN for about a year without any problems (or, I should say, I have not been told of any problems). No more than two users at a time access the apps, for a short time each day.

About two months ago I put into production another A2K app. While there is the possibility that up to 8 users could be accessing it at any one time (this on a 50-client, mixed-OS subnet), more likely no more than 2 or 3 will ever run the thing at once, and not throughout the day. It was this app that I originally planned to split, but I found, after putting the pieces into their respective places at the client's, that the forms loaded far too slowly. So I put Humpty Dumpty back together again.

After reading DBL's message in this thread, my blood pressure surged and I e-mailed my client, asking if they had had any sort of corruption problems with the DB. No, they said. Still, now I worry, since what happens to somebody can happen to anybody.

One other thing: as long as your DB is not using secured tables, maintenance of an unsplit DB is not terribly difficult. With a copy of the DB on my desktop at Pono Inc, I make my enhancements. I then delete all tables. At the client's, I import the tables from the production DB before copying the new file into the network directory. Downtime is never more than five minutes.

Working on the premise that it's not wise to put all your eggs in one basket, the split DB appeals to me -- however, a 30 to 40 second load time for a form is simply unacceptable.

The horror! The horror!

Regards,
Tim
 
Sorry Tim, didn't mean to upset the blood pressure! Just thought it was worth passing on the problems I've had with unsplit databases. I like the security a split frontend gives me when dealing with a client's data. I have, in the past, overwritten an entire database, including data-crammed tables, with a brand spanking new database with empty tables. My blood still runs cold when I think about that one!

I suppose "if it ain't broke, don't fix it" might be a good phrase at this point. It sounds as if your client's database isn't likely to encounter record locking problems so you might get away with it.

There is loads on the subject in the archives of the Forum and if you read through it, those that know will always say to split it. I know the performance issue is the main problem but I'd look to the network traffic rather than the application itself.

Dawn
 
Well........all this has made my decision so much easier to make.

Its Friday....it's still working fine at the moment (unsplit).....the weekend is nearly upon us and the sun is out (somewhere)

seriously.....I have decided to stay unsplit for the time being, but will report back on any problems.

richio
 
I just manually split my Access 2003 database into a backend and frontend per:
http://support.microsoft.com/?kbid=304932

Both the Front End and Back End files are on the network server.
I go into the network and open the Front end directly from the server.

It's taking a good 15 seconds to open a form!!!

Is this a network issue or the slowness coming from the front end grabbing data from the linked tables in the backend database?
 
The front end should be on the local hard drive. Only the back end should be on the network share. In a multi-user setup this becomes very important.
 
I hope this helps.

Access is very capable of being split, with the BE put on a proper server and a decent LAN (100Mbit+). I have had Access FE/BE running between two buildings: one side had a 100Mbit network, the other 10Mbit coaxial cabling, with an infrared unit linking the two buildings. It was further developed to run on a Terminal Server using a VPN between two London sites and one in New York, along with 16,000 images averaging 30Kb each. I have had only one instance of corruption (a locked record) in 10 years, although the coaxial was a mare.

Yes, it is a pain to load individual frontends onto each PC. But this means only data calls cross the network, NOT the entire application (executable).

The first thing you must do is create a table with one record; I called this record Company. Then I created a Menu form bound to this table and set this Menu as the Access default form to load on startup. So now we have a persistent hook into the BE. This really makes a difference, due to the ldb file and the locking mechanism.

The next consideration was the server's storage: it had two volumes, one holding 90% of the RAID configuration (750GB), the other 10%. The backend was on the server, presented on the S drive as Data.mdb. The backend needs to be on the smaller volume.

The other factor is that if you allow unfiltered interrogation of the data, there may be too much that needs to be pulled across the network. I only had tens of thousands of records, but in most cases each table was filtered by the user.

The same principle applies over a WAN using a Terminal Server: all that crosses the WAN are screen updates.

To make progress, I would start by copying your BE into another file. I would create a table with one record and link it into the local PC's mdb file. Create a form with its RecordSource set to that one-record table. Add whatever you need to test your application, make an mde version, and use that to test performance. See if this makes a difference; I got response times down from 15 seconds to almost immediate on searches using a form.

Simon
 
Part of the problem may also be how you set up the database's locking.

I have set up a split FE/BE in which the FE is on each desktop sharing a common BE file. However, I've taken certain steps.

1. NO form ever opens a table as a recordsource or in underlying VBA DAO recordsets. Instead, every form (and report, for that matter) opens a QUERY even if it is the moral equivalent of SELECT * FROM tUNDERLYING ;

2. When I open recordsets, they are always set to optimistic locking. The queries are set to minimum locking or optimistic locking.

3. The problem we see isn't FE/BE related; it is network-contention related. In the morning, what I do takes a few seconds. By afternoon, an update that takes less than 20 seconds on a massive recordset turns into more than 20 MINUTES for a simple one.

4. The corruption occurs even for this configuration, usually when the network gets glitchy and dumps users or just slows them down to the limit of their (im)patience.
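Points 1 and 2 above can be sketched in DAO. The names qryCustomers and the field edits are illustrative, not from the actual application; the point is binding to a saved query and requesting optimistic locking explicitly:

```vba
' Sketch of points 1 and 2: open a saved QUERY (never a raw table) with
' optimistic locking. With dbOptimistic, the lock is held only during
' Update, not from the moment Edit is called (that would be pessimistic
' locking, which holds locks far longer under network contention).
Sub OpenWithOptimisticLocking()
    Dim rs As DAO.Recordset
    ' qryCustomers is an assumed saved query, e.g.:
    '   SELECT * FROM tblCustomers;
    Set rs = CurrentDb.OpenRecordset("qryCustomers", dbOpenDynaset, 0, dbOptimistic)
    If Not rs.EOF Then
        rs.Edit
        ' ... change field values here ...
        rs.Update    ' the lock is taken and released only around this call
    End If
    rs.Close
End Sub
```

The fourth argument to OpenRecordset is the LockEdit setting, which is easy to overlook since most examples stop at the Type argument.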

We are working on getting a backend file server for our department, one that does not involve cross-country hops to the centralized server we are currently using.

I super-highly recommend the split FE/BE database. I keep a digitally signed, compacted copy of the FE file on the BE server. The file knows its own version and can tell when it is out of sync with the BE, so when I push up a new version, the folks with the old version on their desktops get warned to just make a new copy of the FE. (Yes, I could do that for them, but my priorities are to get other features working first.)
 