Access databases and VPN tunnels

Elmobram22

Registered User.
Local time
Today, 19:07
Joined
Jul 12, 2013
Messages
165
Hi all,

I have a VPN tunnel across three locations at work. Because of the tunnel and the setup, it is very, very slow. I have tried splitting my database and putting the back end on the network, but it is unusable. I have ended up splitting the database into three and keeping one at each location. This is fine until information needs to move between databases, at which point none of the IDs match. Is there a way of syncing the databases in the early hours, or something similar? Obviously the IDs will differ if two people use the databases at once, but I was thinking there might be a way of having one main database that all the info is imported into, with each local database then replaced by that main one in the early hours. Does that even make sense?? Either way, I'm stuck with the equipment, and transferring the data manually is a pain. :banghead:

Cheers,

Paul
 
Can you not use remote desktop via the VPN? If each user had a separate remote desktop session you would completely avoid the problem.

The only viable alternative over very slow connections is really to use unbound forms and manually create your ID field when you save the record. There are quite a few discussions of this approach on here.
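To make the "manually create your ID" idea concrete, here is a minimal VBA sketch. The table (tblOrders), field names, and per-site numbering scheme are all illustrative assumptions, not anything from this thread; note also that DMax-based numbering is not safe if two users at the *same* site save at the same instant without a retry.

```vba
' Sketch only: save a record from an unbound form, generating an ID
' that cannot collide across the three sites. Adjust names to your schema.
Private Sub cmdSave_Click()
    Const SITE_ID As Long = 1           ' 1, 2 or 3 - unique per location
    Dim db As DAO.Database
    Dim lngNextID As Long
    Set db = CurrentDb
    ' IDs of the form SITE_ID * 1000000 + sequence stay unique per site
    lngNextID = Nz(DMax("OrderID", "tblOrders", _
                        "OrderID \ 1000000 = " & SITE_ID), _
                   SITE_ID * 1000000) + 1
    db.Execute "INSERT INTO tblOrders (OrderID, CustomerName) VALUES (" & _
               lngNextID & ", '" & Replace(Me.txtCustomer, "'", "''") & "')", _
               dbFailOnError
End Sub
```

The site-prefix scheme also helps with the original poster's merge problem: rows from different locations can be consolidated without renumbering.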
 
Hate to say it, but the VPN part of the problem is a smokescreen. VPNs are not significantly slower than unencrypted networks when the hosting CPUs are modern (I work with them all the time). A desktop with a 2.8-3.2 GHz 32-bit or 1.4-1.6 GHz 64-bit CPU will not be significantly affected by going over a VPN; the computations are not that demanding. A slow network is your real culprit. One immediate question: what else rides that network besides your DB traffic?

Now to the question of how to make this work. I fought with a dog-slow network for a couple of years. Your solution is to do everything you possibly can in the front end and only touch the back end when something actually has to be updated. Use queries set for optimistic locking. Use temporary tables in the FE that hold everything you need, so that a single UPDATE or INSERT query can run against the BE based on a JOIN of the temporary FE tables to the permanent BE tables. This still won't turn a sow's ear into a silk purse, so also look into whether you can obtain a slightly faster network. Temporary FE tables and some leg-work minimize both the duration and the number of touches you make on the BE - but of course, you will NEVER get rid of all BE touches.
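The "stage locally, then one joined action query" pattern above might look like this sketch. The table names (tmpPriceChanges local in the FE, tblProducts linked from the BE) are assumptions for illustration:

```vba
' Sketch: apply a batch of staged edits to the back end in ONE query.
' tmpPriceChanges is a local FE table already filled with the changes;
' tblProducts is the linked BE table. Names are illustrative.
Public Sub PushStagedChanges()
    Dim db As DAO.Database
    Set db = CurrentDb
    ' One joined UPDATE = one touch of the BE instead of one per row
    db.Execute "UPDATE tblProducts INNER JOIN tmpPriceChanges " & _
               "ON tblProducts.ProductID = tmpPriceChanges.ProductID " & _
               "SET tblProducts.Price = tmpPriceChanges.NewPrice", _
               dbFailOnError
    db.Execute "DELETE FROM tmpPriceChanges", dbFailOnError
End Sub
```

Over a slow link, collapsing many row-by-row writes into one set-based query is where most of the win comes from.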

In the final analysis, running over a snail-net connection is going to cause headaches: a frequent need to repair the DB, and compaction - which at least won't hurt much in that case, because you will have extra down-time anyway. You will see all sorts of issues where impatient users somehow manage to lock a file.

Splitting the DB into FE and BE will be a mandatory defense, because having local copies of the FE reduces the number of BE locks you have to take out. You honestly don't give a rat's patootie about FE locks as long as the FE is private - but every BE lock is visible to (and affects) every user of the BE file.

Another useful approach is a DAO database object and its .Execute method. With it you can issue an SQL action query that traps failed or inconsistent updates and reports them to you automatically. You can look the method up with the Google-brain, so I'll let you find the details. In essence, you want it because (a) it will TRAP failed transactions (unlike DoCmd.RunSQL) and (b) it runs SQL action queries as fast as possible, since it has less overhead than DoCmd.RunSQL. There are down-sides to the method, but it is a reasonable response to the problem of minimizing the number and duration of BE-file touches.
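A minimal sketch of the trapped .Execute pattern described above; the table and field names are illustrative, not from the thread:

```vba
' Sketch: run an action query via DAO .Execute with error trapping.
' dbFailOnError raises a trappable VBA error instead of failing silently,
' unlike DoCmd.RunSQL. Table/field names here are made up for illustration.
Public Sub RunTrappedUpdate()
    Dim db As DAO.Database
    Set db = CurrentDb
    On Error GoTo ExecFail
    db.Execute "UPDATE tblOrders SET Shipped = True " & _
               "WHERE OrderID = 1000001", dbFailOnError
    Debug.Print db.RecordsAffected & " row(s) updated"
    Exit Sub
ExecFail:
    MsgBox "Action query failed: " & Err.Description
End Sub
```

Checking db.RecordsAffected afterwards is a cheap sanity test that the query touched the rows you expected.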
 