Working from home with an Access database over VPN

CJ_London

Super Moderator
Preamble: this is extracted and tweaked from a report I provide to a client. Because every app is different, it is more about WHAT you can do rather than HOW. Please don't respond to this thread asking HOW; start your own thread and reference this one if necessary. However, I will be happy to see responses with other WHATs :geek:

Covid has forced many companies to have employees working from home. Once the crisis is over, it is likely that working from home will become the norm – if not permanently then perhaps 2-3 days a week.

The issue with Access is that performance can suffer when connecting to a back end over a VPN. VPN/wireless will always (at least at this point in time) be slower than a hardwired network, which is the environment where Access works best.

However, poor performance may not be the fault of Access and can be addressed to a certain extent. For example, it may be that the home user does not have a fast enough connection, whether for budget reasons or simply because a faster connection is not available.

If the connection is not the problem, then it is more likely down to how the developer has designed the front end and back end. Moving the back end to SQL Server is unlikely to show any significant performance improvement without taking advantage of some of the better SQL Server functionality, such as views and pass-through queries. In fact performance can be worse, since SQL Server may have higher-priority calls on its services.

When working over the internet, it is important to minimise the volume of data traffic to provide good performance. So you need to ‘think like a website’.

There are four basic options: Terminal Server, back end modifications, front end modifications and replication/synchronisation. Which suits your organisation depends on how the Access application is used.

Terminal Server

The easiest and most effective solution, but at a cost. It requires no modification to the front end other than relinking to the back end once it has been relocated.

The principle is that both front end and back end are located on the terminal server. The front end sits in each user's profile area (equivalent to their local drive) and the back end in a folder accessible to all users (equivalent to the back end's server location on the corporate network). The only internet traffic generated is keypress and mouse events one way and screen refreshes the other. Performance-wise, this will be very close to users having the back end on their local drive.

Another benefit is that since the Access app is located on the server, users do not need a local copy (they do need a licensed copy on the server, or can use the free runtime version). Further, the potential for corruption from a dropped connection is reduced, and you are not limited to the Windows OS: any OS that can connect to Terminal Server, such as macOS or Linux, can be used.

If you decide to go this route, you may consider that other shared files should also be on this server, perhaps to the extent that you no longer need a corporate server connected to a LAN, which may help offset or eliminate the licensing cost.


Back End modifications

The objective is to make the back end as responsive as possible. Basically this means tables need to be properly normalised and to make effective use of indexes. Any field that is regularly used for linking to other tables, as criteria for forms or reports, or for sorting should be indexed, with the exception of fields with a limited range of values such as yes/no; these can still be indexed but it will have little impact on performance.

How much this will improve performance really depends on how many new indexes are added and how relevant they are to the day-to-day processes.

Other settings that can slow down the back end include using lookups in table design and allowing subdatasheets. Also avoid using the attachment field – these tend to contain large objects which bulk up any records that are fetched.
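By way of illustration, indexes can be added in table design or with a one-off DDL statement run against the back end. A minimal sketch, assuming a hypothetical tblOrders table with CustomerID and OrderDate fields (the back end path is also an assumption):

Code:
Public Sub AddBackEndIndexes()
    Dim db As DAO.Database
    ' Open the back end directly so the DDL runs against its tables
    Set db = DBEngine.OpenDatabase("\\server\share\backend.accdb")
    ' Index the foreign key used for linking to other tables
    db.Execute "CREATE INDEX idxCustomerID ON tblOrders (CustomerID)", dbFailOnError
    ' Index a field regularly used as criteria or for sorting
    db.Execute "CREATE INDEX idxOrderDate ON tblOrders (OrderDate)", dbFailOnError
    db.Close
End Sub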


Front End modifications

The objective is to minimise the amount of data that is called from the back end to populate the front end, thereby reducing network traffic.

After Terminal Server, this is probably where most performance improvements can be found, particularly in combination with back end modifications.

The basic rules to follow are:
  • Do not base forms or reports on a table, or on a query without criteria.
If a form is based on a table or query with 10,000 records then all 10,000 records will be fetched, to be subsequently filtered. Even if you use the OpenForm WHERE parameter, all records are still returned – the WHERE is actually a filter. The same goes for subforms (the LinkMasterFields/LinkChildFields properties act as a filter on the subform), and forms set to data entry mode will still bring through the underlying records as defined by the RecordSource (see the first sketch after this list).
  • The same might also apply to combos and listboxes if they fetch large numbers of records (such as the names of 10,000 customers for the user to select from).

  • Use criteria to minimise the amount of data to be fetched.
If a user is looking for a customer whose name starts with D, apply this as criteria before fetching data from the back end. For 10,000 customers, simplistically only around 400 will start with D – a 96% reduction in the amount of data fetched.
  • Avoid chaining queries.
For example, query1 gets all data from the tables, then query2 gets a subset of query1, and query3 applies final criteria to get a subset of query2. Apply the criteria in query1 and you probably don't need queries 2 and 3, and the back end has a lot less work to do.
  • Avoid using domain functions (DLookup, DSum and the like) and certain user-defined functions in queries (those that open another recordset). This is particularly relevant if the back end is not Access.

  • Another tweak to consider is loading static data into temporary tables or recordsets when the app first opens, to be used during the life of the session to populate combos and listboxes as and when required, so they do not need to be fetched time after time. If the data is not quite static, consider repopulating on a timer or via a button the user can click to refresh (see the second sketch below).
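To illustrate the first two rules, here is a minimal sketch of a search form that fetches nothing until the user supplies criteria. The form, control and table names (frmCustomers, txtSearch, cmdSearch, tblCustomers, CustomerName) are all assumptions for illustration:

Code:
' Form module for frmCustomers - its RecordSource is left empty at design time
Private Sub cmdSearch_Click()
    ' Put the criteria in the SQL itself so only matching rows cross the network,
    ' rather than opening on the full table and filtering afterwards
    If Len(Me.txtSearch & vbNullString) > 0 Then
        Me.RecordSource = "SELECT * FROM tblCustomers " & _
                          "WHERE CustomerName LIKE '" & Replace(Me.txtSearch, "'", "''") & "*'"
    End If
End Sub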

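And a sketch of the static data tweak: pull a small lookup table into a local temporary table once at startup, so combos and listboxes can be repopulated all session without touching the back end. The table names (tblProductTypes linked, tmpProductTypes local) are again assumptions:

Code:
' Run once when the app opens, e.g. from an AutoExec macro or startup form
Public Sub CacheStaticData()
    ' Drop any previous copy, then fetch the lookup rows across the network once
    On Error Resume Next
    CurrentDb.Execute "DROP TABLE tmpProductTypes"
    On Error GoTo 0
    CurrentDb.Execute "SELECT * INTO tmpProductTypes FROM tblProductTypes", dbFailOnError
    ' Combos/listboxes can now use tmpProductTypes as their RowSource for the session
End Sub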
Replication/synchronisation

This harks back to the days when users had to work remotely from the office environment. These days the requirement is low, but it can still be valid in situations where there is no internet connection or it is very slow.

The principle is that the user works with a back end on their local machine. When they are next back in the office, or perhaps have access to a suitably fast internet connection, they can synchronise their data with the master on the server.

Synchronisation is typically a two-way street – new and amended data is uploaded from the user to the master database, and new and amended data is downloaded to the user, so that at that point in time both sets of data are the same. This requires a number of queries for each table.

This does require care, and modifications to the back end tables to record information such as when a record was last changed and when it was last downloaded. It is also necessary as part of the synchronisation process to identify clashes, i.e. where two users have amended the same record since they both downloaded from the master. A decision has to be made as to which amendment is correct – or perhaps both are correct, because each user modified a different field.
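As a minimal sketch of the upload direction only, assuming each table carries a LastModified date/time field maintained by the front end, with local tables prefixed loc and linked master tables prefixed mst (all names hypothetical, clash detection omitted):

Code:
Public Sub PushLocalChanges(ByVal lastSyncedAt As Date)
    Dim db As DAO.Database
    Set db = CurrentDb
    ' Update master rows that were amended locally since the last synchronisation
    db.Execute "UPDATE mstCustomers INNER JOIN locCustomers " & _
               "ON mstCustomers.CustomerID = locCustomers.CustomerID " & _
               "SET mstCustomers.CustomerName = locCustomers.CustomerName, " & _
               "mstCustomers.LastModified = locCustomers.LastModified " & _
               "WHERE locCustomers.LastModified > #" & _
               Format(lastSyncedAt, "yyyy-mm-dd hh:nn:ss") & "#", dbFailOnError
    ' Insert rows created locally that do not yet exist in the master
    db.Execute "INSERT INTO mstCustomers SELECT locCustomers.* FROM locCustomers " & _
               "LEFT JOIN mstCustomers ON locCustomers.CustomerID = mstCustomers.CustomerID " & _
               "WHERE mstCustomers.CustomerID IS NULL", dbFailOnError
End Sub

The download direction mirrors this with the table roles reversed.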

Hope viewers find this helpful
 

isladogs

MVP / VIP
Excellent summary - thanks.
Perhaps it is also worth stressing that where wi-fi connections are involved, replication/synchronisation while physically connected to the network is likely to be the best way of managing data transfer without corruption.
 

Cronk

Registered User.
You could add a SQL Server back end and pass-through queries.
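For instance, a pass-through query hands its SQL straight to SQL Server, so the filtering and joins run server-side and only the results travel back. A minimal DAO sketch – the connect string, view name and query name are all assumptions:

Code:
Public Sub CreatePassThroughQuery()
    Dim qdf As DAO.QueryDef
    ' Create a saved pass-through query in the front end
    Set qdf = CurrentDb.CreateQueryDef("qptOpenOrders")
    ' The ODBC connect string tells Access to send the SQL to the server unparsed
    qdf.Connect = "ODBC;DRIVER={ODBC Driver 17 for SQL Server};" & _
                  "SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes;"
    qdf.SQL = "SELECT OrderID, CustomerName FROM dbo.vwOpenOrders WHERE Status = 'Open';"
    qdf.ReturnsRecords = True
End Sub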
 
