Runtime Access slow opening

Hi everyone,
I'm going to try to explain my problem. English is not my native language, so I'm sorry if I make any mistakes.
My company has a piece of software made in Access, around 700 MB in size, which I believe makes it a big app. The data is in SQL Server, so all the tables are linked into MS Access. Our business architecture is:
1- Our clients (around 300) log in to our servers
2- Each client has one .accdb file on a network shared drive
3- Our server structure is all virtual, with a load balancer: one server for AD and GPO plus one shared drive for all the servers, one server with SQL Server, and around six servers where the clients connect.
Our problem is that it is slow when we open MS Access, and sometimes when we open some forms. It is not that the queries are slow; it seems to be Access in general. For example, when we open the Access file it takes around 20 seconds before our first queries start to run.
Before, we had a monolithic structure where the files, SQL, AD, GPO, etc. were all on one server, and it was faster: opening took around 5 seconds. Now it can take up to 1 minute. Does anyone have any idea what it could be?
Again, I'm sorry for my English.
 
What happens if you move the Access file to the client's local machine? Generally speaking, Access doesn't like running from a networked drive.

Also, if you have no data stored in the Access file, 700 MB is a huge front-end file. Are you sure that is correct?
 
The file can't be moved to the client machine; the clients connect directly to our servers over an RDP connection.
And yes, the file is 700 MB. It is a very big app; I'd say it has around 6,000 forms, subforms, queries, reports and VBA modules.
 
Ah okay, so the client file is in an RDP folder location.
That should be very fast if the RDP server is on the same backbone network as the SQL server?

What resources are available to each RDP client?
Does each user have their own desktop folder, or are they shared? Sharing a front end, especially one that large, would not be wise.
If you ping the SQL server from the RDP desktop, what sort of results do you get?
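For example, something like this from one of the RDP desktops (the server name is hypothetical):

Code:
ping -n 10 SQLSERVER01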
 
We have six virtual servers with 2 sockets, 32 virtual CPUs and 32 GB of RAM; normally we have around 30 users per server.
Each user uses their own front end. The databases in SQL Server are separate too: each user has their own database, and the biggest one is around 2 GB.
The files are on the shared drive, at disk\files\[name of the client]\the access files.
When the user logs in over RDP, the session opens and the file opens automatically. But even if an admin opens the file with full privileges, it is slow.
When I ping the SQL server I get this:
 

Attachments

  • ping.png (screenshot of the ping results)
Well, it's not the connectivity by the looks of it.

It could be memory. How much memory is each user actually allocated? (Or is that the 32 GB you refer to?)
I would recommend a minimum of 8 GB for a large database front end, and 16 GB would be preferable.

Make sure that Access recognises the FE file location as a trusted location.
I have seen that cause a real slowdown.
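Since the Access runtime has no Trust Center UI, the usual way is a per-user registry entry. A minimal sketch, assuming Access 2016 (Office 16.0); the share path is hypothetical, and the Location99 key name just needs to be unique under Trusted Locations:

Code:
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Access\Security\Trusted Locations]
"AllowNetworkLocations"=dword:00000001

[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Access\Security\Trusted Locations\Location99]
"Path"="\\\\server\\files\\ClientName\\"
"AllowSubfolders"=dword:00000001
"Description"="FE folder"

Note that AllowNetworkLocations is needed before a UNC path will be honoured as trusted.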
 
We have 32 GB of RAM per virtual server, and each virtual server hosts around 30 users. In the old architecture we had a monolithic server with 128 GB of RAM for around 150 users, and it was pretty fast. I also disabled the VBA warnings about trusted locations in regedit.
 
I also disabled the VBA warnings about trusted locations in regedit.

I don't think disabling the warnings will get around the issue.
I don't know what causes it, but Access running from a non-trusted location, with an untrusted networked location as the back end, can become very slow.

Humour me: just update one slow user's RDP folder properties and test it.
 
Since you are concerned about a language problem, let me see if I got this right from the descriptions above.

You have a complex in-house network including Active Directory, formal group policy management, and SQL Server. All servers are virtual. Six of those servers are set aside for load balancing of your user apps. Each virtual server for users has 32 CPUs and 32 GB of RAM. This would imply that the VM host machine has about 128 physical CPUs and 128 GB of RAM. (If this is not true, then what is the correct size of the VM host box?)

Of your 300 clients, perhaps 30 at a time log in to your system, which load-balances them to one of the six user servers. This translates to about 5 users per server under typical usage. Each user uses RDP to log in. Each user has a private folder that holds all of their files on a shared volume. Each user has a separate database on the SQL server, so no two users touch the same data files. You have a very fast network, as evidenced by the PING.

Here is where your evidence becomes interesting. At one time, you had ALL functions in the same (non-virtual) environment. It was reasonably fast. Now, with everything separated virtually, it is up to 12 times slower. Since the hardware is the same, this has to be a network configuration issue. My question is therefore, how are network permissions managed? More specifically, how are the network permissions to shared resources managed?

You might need to check with your IT shop for this. There is one primary Windows mechanism to grant users permission to use file objects like devices, folders, and individual files. This mechanism is the Access Control List (ACL) which is a data structure appended to each file object individually. The list consists of Access Control Entries (ACEs) that say "grant READ-ONLY access to user XYZ" or whatever user and permissions you want to grant.

Two main styles of ACE are used to grant permissions to a resource. The first style creates an ACE holding a client's individual internal ID plus the permissions, and appends that ACE to the object's ACL. The second style creates an artificial name to be used as a group identifier: you assign the group identifier and permissions in an ACE, add that ACE to the resource's ACL, and then add each client as a member of the group. More precisely, in the second style you grant the client the right to "hold" the group identifier in addition to his/her own individual identifier. Think of it as two keys on a key ring. For reasons discussed below, you would usually prefer the group style when possible.
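To illustrate the group style: a single folder grant like the one below (the group name and path are hypothetical) puts one group ACE on the folder that every member benefits from, instead of hundreds of individual ACEs:

Code:
icacls "D:\files" /grant "MYDOMAIN\AccessUsers:(OI)(CI)RX"

Here (OI)(CI) makes the grant inherit to files and subfolders, and RX is read-and-execute.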

Your path designation is disk\files\[name of the client]\the access files, which is reasonably compact. BUT the question is which method is used for the disk and files elements of that path. For the rest of the path, using individual or group identifiers makes no difference to speed of access. However, using individual identifiers for the disk and files elements would be a mistake. There are some "standard" groups that would make sense for those two elements, like Authenticated Users. But in general, authenticating permissions by individual IDs costs a lot. EACH LEVEL of a folder or an application file touch must undergo "access arbitration" (lower-case a for access as in "accessibility", not big-A for Access databases). This arbitration compares the ACL on the object against the user's list of "held" identifiers. When you have 300 users and are using the individual identifier method, the expectation is that on average you must look at 150 user identifiers. But if you use the group ID method, you look at 1 identifier. If you have a good IT group who actually know what they are doing, they will have done it the right way: group IDs where possible. But it is the first question I would ask.

A second question is how many versions of Access are installed in this environment? The ideal case is one per user but that might be impractical for monetary reasons. One per load-balancer is more in line with normal operations. One per VM host box means you would have contention for MSACCESS.EXE, the executable portion of Access itself. And the increased Windows Lock Management load could slow you down significantly. Note that Windows Lock Management has NOTHING TO DO with the .LDB or .LACCDB files created when you open Access on an application. Windows LM occurs BEFORE Access even THINKS about opening its own lock files.

I would also point out that TECHNICALLY, if you didn't buy the right kind of license for Access, you might be in violation of the EULA that goes with every installation. There IS such a thing as getting a multi-user license for shared server situations.
 
Thank you for such a detailed answer. I'm going to try to answer all your questions the best way possible, using bold text.

You have a complex in-house network including Active Directory, formal group policy management, and SQL Server. All servers are virtual. Six of those servers are set aside for load balancing of your user apps. Each virtual server for users has 32 CPUs and 32 GB of RAM. This would imply that the VM host machine has about 128 physical CPUs and 128 GB of RAM. (If this is not true, then what is the correct size of the VM host box?)
We have 2 host machines running Proxmox; at the moment, for each core we are allocating 2.5 vCPUs:
Machine 1: 2x Intel Xeon Silver 4110 (32 cores) and 256 GB RAM
Machine 2: 2x Intel Xeon E5-2680 v4 (56 cores) and 256 GB RAM

Machine 1 has:
Server 1 runs SQL Server, with 24 vCPUs and 48 GB of RAM. The SQL version is SQL Server Web Edition.
Server 2 runs AD and GPO, with 8 vCPUs and 16 GB of RAM; this server also has the shared disk, 4 NVMe drives in RAID 10.
Servers 3 and 4 are web servers, each running IIS with 16 vCPUs and 32 GB of RAM.
Servers 5 and 6 are the servers where users connect; they have 32 vCPUs and 32 GB of RAM each.

Machine 2 has:
Server 1 is another AD and GPO server, with 8 vCPUs and 32 GB of RAM; this one is just for redundancy, in case the other AD server fails.
Servers 2, 3, 4 and 5 are the servers where users connect; they have 32 vCPUs and 32 GB of RAM each.


Of your 300 clients, perhaps 30 at a time log in to your system, which load-balances them to one of the six user servers. This translates to about 5 users per server under typical usage. Each user uses RDP to log in. Each user has a private folder that holds all of their files on a shared volume. Each user has a separate database on the SQL server, so no two users touch the same data files. You have a very fast network, as evidenced by the PING.

Here is where your evidence becomes interesting. At one time, you had ALL functions in the same (non-virtual) environment. It was reasonably fast. Now, with everything separated virtually, it is up to 12 times slower. Since the hardware is the same, this has to be a network configuration issue. My question is therefore, how are network permissions managed? More specifically, how are the network permissions to shared resources managed?

You might need to check with your IT shop for this. There is one primary Windows mechanism to grant users permission to use file objects like devices, folders, and individual files. This mechanism is the Access Control List (ACL) which is a data structure appended to each file object individually. The list consists of Access Control Entries (ACEs) that say "grant READ-ONLY access to user XYZ" or whatever user and permissions you want to grant.

Two main styles of ACE are used to grant permissions to a resource. The first style creates an ACE holding a client's individual internal ID plus the permissions, and appends that ACE to the object's ACL. The second style creates an artificial name to be used as a group identifier: you assign the group identifier and permissions in an ACE, add that ACE to the resource's ACL, and then add each client as a member of the group. More precisely, in the second style you grant the client the right to "hold" the group identifier in addition to his/her own individual identifier. Think of it as two keys on a key ring. For reasons discussed below, you would usually prefer the group style when possible.

Your path designation is disk\files\[name of the client]\the access files, which is reasonably compact. BUT the question is which method is used for the disk and files elements of that path. For the rest of the path, using individual or group identifiers makes no difference to speed of access. However, using individual identifiers for the disk and files elements would be a mistake. There are some "standard" groups that would make sense for those two elements, like Authenticated Users. But in general, authenticating permissions by individual IDs costs a lot. EACH LEVEL of a folder or an application file touch must undergo "access arbitration" (lower-case a for access as in "accessibility", not big-A for Access databases). This arbitration compares the ACL on the object against the user's list of "held" identifiers. When you have 300 users and are using the individual identifier method, the expectation is that on average you must look at 150 user identifiers. But if you use the group ID method, you look at 1 identifier. If you have a good IT group who actually know what they are doing, they will have done it the right way: group IDs where possible. But it is the first question I would ask.
We have around 120 clients; some of them have just 1 user, others have more than one. In total there are around 300 users who can log in over RDP, each with their own Access file.
Our AD is set up this way:
We have a main group for these clients (let's call it MAIN), then a specific group for each client (let's call it SUBGROUP); each SUBGROUP is a member of MAIN, and the users are members of their SUBGROUP. On each client's folder (where the Access files are) we configured security to give full permissions to that client's SUBGROUP.
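Roughly like this, sketched with the ActiveDirectory PowerShell module (the group, user and path names are illustrative, not our real ones):

Code:
# Nesting as described above: users -> SUBGROUP -> MAIN
New-ADGroup -Name "MAIN" -GroupScope Global
New-ADGroup -Name "SUB-ClientA" -GroupScope Global
Add-ADGroupMember -Identity "MAIN" -Members "SUB-ClientA"
Add-ADGroupMember -Identity "SUB-ClientA" -Members "user1", "user2"
# The client's folder grants full control to that SUBGROUP only
icacls "D:\files\ClientA" /grant "MYDOMAIN\SUB-ClientA:(OI)(CI)F"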


A second question is how many versions of Access are installed in this environment? The ideal case is one per user but that might be impractical for monetary reasons. One per load-balancer is more in line with normal operations. One per VM host box means you would have contention for MSACCESS.EXE, the executable portion of Access itself. And the increased Windows Lock Management load could slow you down significantly. Note that Windows Lock Management has NOTHING TO DO with the .LDB or .LACCDB files created when you open Access on an application. Windows LM occurs BEFORE Access even THINKS about opening its own lock files.

I would also point out that TECHNICALLY, if you didn't buy the right kind of license for Access, you might be in violation of the EULA that goes with every installation. There IS such a thing as getting a multi-user license for shared server situations.
We have all the licenses: for Windows, Windows CALs (for RDP access) and SQL Server. For Access we use the free Access runtime; at the moment we are using the 2016 version.

I'm going to check all the other questions with the person who manages Proxmox and report back.
I'm starting to believe the problem is the shared drive; somehow Access doesn't like it very much.
 
I don't think disabling the warnings will get around the issue.
I don't know what causes it, but Access running from a non-trusted location, with an untrusted networked location as the back end, can become very slow.

Humour me: just update one slow user's RDP folder properties and test it.
Thank you for your time.
I have tested the two options.
With a trusted location added, it is slightly slower than with the trust checks disabled via the VBA warnings registry key.

About giving you access, I must speak with my manager to see if I can give you a test account.
 
Adding a trusted location is a one-off operation for a user account.
I'm confused by you disabling the checks completely, as Access might still be trying to check whether the source or FE is in a trusted location.

What I'm asking is for you to enable them and set the FE location as trusted.
 
Adding a trusted location is a one-off operation for a user account.
I'm confused by you disabling the checks completely, as Access might still be trying to check whether the source or FE is in a trusted location.

What I'm asking is for you to enable them and set the FE location as trusted.
I already did that.
I tested with the trusted-location checks on and the FE path added as a trusted location. With the checks on, it was slightly slower than the tests I did with them off.
 
OK, that description in post #10 was clear enough. Your hardware has enough memory and enough physical CPUs for the job, but the fact that it worked fast before you implemented the virtual split means it was never a hardware issue anyway. Using the Access runtime should work OK, and it will allow for reasonable security with regard to folks who might try to meddle with anything. You used group identifier methods, so that seems relatively efficient.

Does everyone use all 6,000 forms that you say you have in the front end, or is that just a result of trying to cram everything into one DB front end because folks have multiple back ends? (But that is curiosity, not an issue that should cause slowdowns.)

The next thing to check, and you might need your IT people to help with this (depending on how they have set their permissions, you might not be able to see this yourself): when everything was together in one giant memory space, all of the required file locking structures (managed by Windows Lock Manager) were in LOCAL memory, in the same physical environment. When you split everything into virtual environments with their own separate addresses, you forced use of Windows Distributed Lock Management, in which each virtual system becomes a sub-manager for the domain's multiple file systems. The rule is that a file resource is owned and managed by a single host, the host for which the file resource is considered "local." This means that lock management traffic jumps to the network because of the rules of domain-level sharing.

You need to look at the frequency of network collisions. In the "everything is local" configuration there were NO collisions. Once you virtualized everything, you introduced network traffic. It is entirely possible that this slowdown is due to file-sharing overhead. Even though it is all in the same physical box, when you went to a virtual environment you forced Windows to treat the boxes separately. The VM hosting software knows where everything is, but Windows cannot use that information to take shortcuts. I can easily believe that a 12x slowdown would occur due to the introduction of network pathways where you were using memory pathways before.
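One more practical step: timestamp your own startup milestones so you can see where those 20+ seconds actually go. A minimal VBA sketch, assuming you can add a module to the FE and call it from your startup code; it logs to a text file because the runtime has no Immediate window:

Code:
' Hypothetical helper, not part of your app: call LogStage "start",
' LogStage "links checked", LogStage "main form open", etc.
Public Sub LogStage(ByVal stageName As String)
    Static startTime As Single
    Dim f As Integer
    If startTime = 0 Then startTime = Timer   ' first call marks time zero
    f = FreeFile
    Open Environ("TEMP") & "\fe_startup.log" For Append As #f
    Print #f, Format(Timer - startTime, "0.00") & "s  " & stageName
    Close #f
End Sub

The gaps between successive entries in the log will tell you whether the time goes before your code runs at all, or in a specific form or link check.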
 
OK, that description in post #10 was clear enough. Your hardware has enough memory and enough physical CPUs for the job, but the fact that it worked fast before you implemented the virtual split means it was never a hardware issue anyway. Using the Access runtime should work OK, and it will allow for reasonable security with regard to folks who might try to meddle with anything. You used group identifier methods, so that seems relatively efficient.

Does everyone use all 6,000 forms that you say you have in the front end, or is that just a result of trying to cram everything into one DB front end because folks have multiple back ends? (But that is curiosity, not an issue that should cause slowdowns.)
Yes. This is management software for public and private schools. The software is almost 15 years old, and it is me and one other person who develop new features and fix bugs every day. We have already moved some heavy work into Python scripts and T-SQL, but of course doing that for the whole application is almost impossible.
The next thing to check, and you might need your IT people to help with this (depending on how they have set their permissions, you might not be able to see this yourself): when everything was together in one giant memory space, all of the required file locking structures (managed by Windows Lock Manager) were in LOCAL memory, in the same physical environment. When you split everything into virtual environments with their own separate addresses, you forced use of Windows Distributed Lock Management, in which each virtual system becomes a sub-manager for the domain's multiple file systems. The rule is that a file resource is owned and managed by a single host, the host for which the file resource is considered "local." This means that lock management traffic jumps to the network because of the rules of domain-level sharing.

You need to look at the frequency of network collisions. In the "everything is local" configuration there were NO collisions. Once you virtualized everything, you introduced network traffic. It is entirely possible that this slowdown is due to file-sharing overhead. Even though it is all in the same physical box, when you went to a virtual environment you forced Windows to treat the boxes separately. The VM hosting software knows where everything is, but Windows cannot use that information to take shortcuts. I can easily believe that a 12x slowdown would occur due to the introduction of network pathways where you were using memory pathways before.
I'm going to check this with our IT people.
 
