So I've been having a multi-user challenge with a split frontend/backend database using MS Access 2010.
The backend resides on a network file server, with copies of the frontend on multiple user workstations.
The issue is that action queries such as updates and appends run fine with a single user. But as soon as a second user connects to the backend, performance collapses: an update query that typically takes about 1 second with a single user takes almost 2 minutes with 2 users. Both frontends hold a perpetual connection to the backend.
I've Googled and dug through many a forum and thread trying to figure out what was going on. I came across one in this "General" forum where the situation seemed similar:
"Split Database sometimes VERY slow (more than 1 user?)" (I apparently can't attach a link yet until I have 10 posts?)
In that thread there was mention of a handy program called Wireshark, which monitors network traffic, so I thought I'd give it a try as well.
To troubleshoot, I created a simple split database configuration. The backend contains two tables: one exists only to hold a perpetual connection from the frontend, while the other is the target of an update query. The frontend links to the two backend tables and contains one additional local table plus the update query. The update query joins the local frontend table to one of the linked backend tables and performs a simple field update.
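For anyone who wants to reproduce the setup without downloading the attachments, the update query is just the usual join-and-set pattern in Access SQL. The table and field names below are made up for illustration, not the ones in my sample files:

```sql
-- tblLocal lives in the frontend; tblLinked is the linked backend table.
-- Joins on a shared key and updates a single field -- nothing fancy.
UPDATE tblLinked INNER JOIN tblLocal
    ON tblLinked.ID = tblLocal.ID
SET tblLinked.SomeField = tblLocal.SomeField;
```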
To create the perpetual connection to the backend, I opened the frontend on each computer and then manually opened one of the linked tables in both frontends. This linked table is not involved in the update query.
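As an aside, the usual way to hold the connection open without leaving a table window open is a few lines of VBA in the frontend. This is a sketch of that common trick, not code from my attached files, and the backend path is made up:

```vba
' Module-level handle keeps the backend open for the life of the frontend,
' so the backend lock file isn't created and torn down on every query.
Dim dbBackend As DAO.Database

Public Sub OpenBackendConnection()
    ' Call this from the startup form's Load event.
    ' Replace the path with your own backend location.
    Set dbBackend = DBEngine.OpenDatabase("\\server\share\Backend.accdb")
End Sub
```

Either approach (open table window or VBA handle) accomplishes the same thing for testing purposes.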
I then opened Wireshark and fired the update query on one of the local boxes. Much to my shock and horror, I found a continuous stream of Lock Request / Lock Response pairs in the SMB traffic...almost 900 transactions per second! The Lock Requests were coming from the file server where the backend resides and the Lock Responses from the local box.
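If anyone wants to look for the same traffic on their end: the locking traffic shows up in Wireshark as the SMB1 LOCKING_ANDX command, and a display filter like the one below should isolate it (0x24 is the SMB1 command code for LOCKING_ANDX):

```
smb.cmd == 0x24
```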
While I believe I found the cause of the slowdown, I still can't figure out what's triggering the lock request / response behavior. I tried every available client-setting variation for record locking...but to no avail.
I really hope it's not a file server setting that needs tweaking, because I don't have admin rights to make those kinds of changes; that falls to our IT department. And getting IT to change anything takes an act of Congress...and we all know how long it takes Congress to act on anything! /moan
I've attached the test split frontend / backend databases with the sample data I'm using to troubleshoot, in case someone wants to look at them and see if there's something I didn't set up right. It's pretty straightforward and simple; you would just have to relink the frontend tables to the backend in your own test environment.
So would anyone have an idea of what may be going on with the locking requests? :banghead: