What PC hardware/software to speed up Microsoft Access?

amorosik
What PC hardware/software is needed to run Microsoft Access quickly?
Suppose Access is the only application running on the computer.
What hardware, operating system, and configuration are recommended to maximize the performance of the running application?
 
Stable hardwired connection, 1 Gb/s or better, to the server hosting your FE.
Optimize your application's performance for core users. That will probably do more for performance on modern computers than high-end hardware.
Access x64.
Fast P2 hard drive.
16 GB or more memory.
Fast single-core speed on your processor.
 

You say that the same procedure is faster on Access 64-bit than on the 32-bit version? Mmmm...
What exactly does "P2 hard drive" mean?
Fast single-core CPU: then for Windows, is the Intel i9 285K currently the best?
 
x64 will give you a larger memory space.
P2: one of the single-board style drives that connects directly to the motherboard (i.e. an M.2/NVMe stick), as opposed to an SSD that plugs in like an HDD.
Core speed would be more important than core count with Access.
 
Your question at least IMPLIES that you are asking about the fastest stand-alone Access you can get, that ISN'T being shared. If this is a multi-user Access situation, then remember that whatever machine hosts the FE file is the one doing all the work. The BE file is, at that point, just a file server and ISN'T running Access. So you might want to clarify the proposed configuration. IF this is a standalone single-user system, you could run it as a monolithic DB rather than split. Then, a single system can be optimized to make it work better. Also, on a single-user system, file locking is moot because (a) no one is in competition so minimum lock collisions and (b) all file ops are local to the machine in question so NO network issues and all file locks are managed locally as well.

ASSUMING a stand-alone system: Presumably you would be on a new computer and new version of Office, so assume Win11. You would look up things where Win11 sends feedback to Microsoft - for example, among the "Personalization" settings - and turn off such options. Copilot is the perpetrator of many components where Win11 sends back info or takes frequent logs, all of which are big performance bleeders. You can look online for YouTube warnings of Win11 "spyware" operating under the guise of "making your Windows experience better." I recall at least 8 or 9 things I did to prevent the "spyware" (I mean "benevolent Microsoft optimization feedback") from being a resource pig worse than Windows is already.

Concur with RonPaii's comments regarding "optimize app performance" - Access will certainly allow you to set it up to NOT do certain things like full "relation scans" (table scans) if you can use at least some level of indexing. There are also things you can do to avoid excessive queries.
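To give one concrete example of the indexing point (my sketch, with made-up table and field names - adjust to your own schema), a single DDL statement run from VBA will index a search/join field so the engine can seek rather than scan:

    ' Minimal sketch: index a commonly searched field so Jet/ACE can
    ' seek on it rather than scan the whole table.
    ' "Customers" and "CustomerName" are hypothetical names.
    Public Sub AddCustomerIndex()
        CurrentDb.Execute "CREATE INDEX idxCustomerName ON Customers (CustomerName);"
    End Sub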

If you have a multi-core CPU that has two classes of active core, look into the concept of "affinity" - associating an app with the higher class of core. I have a gaming PC with 28 threads, 8 of which are high-performance and the rest "economical" (i.e. lower power). The speed difference is not quite 2:1, so if you can assign Access to a "performance" core (if you have one), you get a performance boost.
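For what it's worth, you don't need an external tool for affinity - a couple of Win32 declarations will do it from VBA. A minimal sketch, assuming (and you MUST verify this for your own CPU, e.g. in Task Manager or Sysinternals Coreinfo) that the first 8 logical processors are the performance cores:

    Option Explicit

    ' Win32 calls to pin the current (Access) process to chosen logical CPUs.
    Private Declare PtrSafe Function GetCurrentProcess Lib "kernel32" () As LongPtr
    Private Declare PtrSafe Function SetProcessAffinityMask Lib "kernel32" _
        (ByVal hProcess As LongPtr, ByVal dwProcessAffinityMask As LongPtr) As Long

    Public Sub PinAccessToPerformanceCores()
        ' ASSUMPTION: bits 0-7 (&HFF) are the 8 performance cores on this
        ' hypothetical hybrid CPU. Check your own core layout first.
        If SetProcessAffinityMask(GetCurrentProcess(), &HFF) = 0 Then
            Debug.Print "SetProcessAffinityMask failed."
        End If
    End Sub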

Getting more memory is cheaper these days, but the studies of virtual memory - which is ALWAYS active for user-mode Windows programs - show that past a certain point, adding more memory has no significant effect. To clarify - after Windows reboots and allows you a login prompt, everything you see, except for some Windows "kernel mode" effects, is done with Windows in "memory managed" mode i.e. you see no physical addressing. It's all virtual. And past a certain point, throwing memory at an app won't help it. Because it isn't what you have, it's what you do with what you have.

The goal is most easily tested using Task Manager >> Performance >> Memory; look at the "Memory composition" bar chart just below the dynamic memory-usage chart that updates every few seconds. In the composition bar chart you will find four values reported. Leftmost is "In use" - things that are active in memory. Next to that is "Modified" - "dirty" pages waiting to be written back to disk. Next is "Standby" - pages that were recently active while the program that used them is still running; if that program wants a page given back to it, the page is rapidly available. Finally, rightmost is "Free" - memory that has absolutely no current assignment. Any program wants memory? Grab it from Free.

What you want is to have some Free memory, but it doesn't have to be much. If you hover the mouse pointer over each section, it will tell you how many MB are in that section. Optimum performance means having enough physical memory (RAM, to be clear) to NOT run out of Free memory.
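If you would rather read those numbers from code than from Task Manager, the GlobalMemoryStatusEx API reports them. A rough sketch (assuming 64-bit Access, since the LongLong type needs it; "Available" here is roughly Standby + Free in Task Manager terms):

    Option Explicit

    ' Buffer filled by GlobalMemoryStatusEx; all sizes are in bytes.
    Private Type MEMORYSTATUSEX
        dwLength As Long
        dwMemoryLoad As Long
        ullTotalPhys As LongLong
        ullAvailPhys As LongLong
        ullTotalPageFile As LongLong
        ullAvailPageFile As LongLong
        ullTotalVirtual As LongLong
        ullAvailVirtual As LongLong
        ullAvailExtendedVirtual As LongLong
    End Type

    Private Declare PtrSafe Function GlobalMemoryStatusEx Lib "kernel32" _
        (ByRef lpBuffer As MEMORYSTATUSEX) As Long

    Public Sub ShowMemoryHeadroom()
        Dim ms As MEMORYSTATUSEX
        ms.dwLength = LenB(ms)               ' the API requires the size up front
        If GlobalMemoryStatusEx(ms) <> 0 Then
            Debug.Print "Total RAM (GB):  "; Format(ms.ullTotalPhys / 2 ^ 30, "0.0")
            Debug.Print "Available (GB):  "; Format(ms.ullAvailPhys / 2 ^ 30, "0.0")
            Debug.Print "Memory load (%): "; ms.dwMemoryLoad
        End If
    End Sub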

On my gaming beast, when NOT running Access or any games, I have about 6.2 GB in use, which includes all of the Windows overhead. Access with a monstrous FE and a really big, single BE is not more than 4 GB, so for me, running a huge Access app would bring me to just over 10 GB, and if I had 16 GB of RAM, I would predict just under 6 GB of Free memory. If I only had 8 GB of RAM, I would probably be doing a lot of swapping, which kills performance. There's an old O/S saying: when the O/S swaps, the world stops. If you have Free memory, you are unlikely to be swapping much, if at all.

Swapping is when something IN memory gets copied out to your virtual-memory paging file in order to release RAM for other use. The raw size of your virtual memory is visible under Settings >> Performance Options; from that mini-panel, find the Virtual Memory setting that controls the size of the paging file. You can use Performance Monitor to determine paging-file usage - you can find tutorials online on how to do that. But what you really want is to not swap at all.

I think I've beat this horse to death, so I'll stop here and check back later if you have more specific questions.
 
If you're 100% local, no network involved?
Then having a really great and fast hard drive is the ticket.

In most cases (at least these days), Access is not limited by CPU cycles - we have boatloads of those.

Hence, in nearly all cases, it's not CPU, but WHEN Access has to use/get/grab external resources - that means things like network(s) and hard drives.

So, while RAM and CPU these days are plentiful, and in general not an issue (that is helped by faster hardware)?

Well, next up comes the hard drive - and these days, they are also now stupid fast.

However, what has NOT kept up with the amazing processing power revolution we have in terms of hardware?

Networks - they have not improved at anywhere near the rate of CPUs, RAM, disk drives etc.

So, in nearly all cases, the bottleneck is not going to be better/faster hardware, but minimizing the use of external resources - files on a shared server, and the network trip to the database, be it a shared folder for a pure Access back end or a hit on SQL Server. The instant your computer has to use external resources is pretty much the same instant all that extra hardware falls down and sits waiting for those external resources...

R
Albert
 

Let's assume that the Access procedure is running on PC1, which accesses data on a locally installed SQL Server through the ODBC driver.
 
Let's assume that the RAM is sufficient to completely contain both the operating system and all the applications currently running.
 
What hardware, operating system, and configuration are recommended to maximize the performance of the running application?
Hardware: use the best that exists.
OS: Windows 11 - you don't want to use an outdated OS.
Configuration: only the Access install and Windows 11.

Let's assume that the Access procedure is running on PC1, which accesses data on a locally installed SQL Server through the ODBC driver.
Add SQL Server and the ODBC Driver to the configuration.
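For completeness, a DSN-less linked table is one common way to wire that configuration together from VBA. This is only a sketch - the driver name, database, and table names below are assumptions, not details from this thread:

    ' Minimal sketch: link one table, DSN-less, to a local SQL Server.
    Public Sub LinkSqlTable()
        Dim db As DAO.Database
        Dim td As DAO.TableDef
        Set db = CurrentDb
        Set td = db.CreateTableDef("dbo_Orders")    ' hypothetical local name
        td.Connect = "ODBC;Driver={ODBC Driver 17 for SQL Server};" & _
                     "Server=localhost;Database=MyDb;Trusted_Connection=Yes;"
        td.SourceTableName = "dbo.Orders"           ' hypothetical source table
        db.TableDefs.Append td
    End Sub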

Access will use as many system resources as the processor can allocate to it. When your CPU has fewer tasks to manage, it can devote more cycles to Access, making the application appear to run faster when fewer programs are active. Conversely, if you're running a resource-intensive application, such as 3D rendering software, a modern game, or heavy AI workloads, your Access performance will suffer. A CPU-only system is especially vulnerable to this, since there's no dedicated hardware to offload the work; for the examples given, a dedicated graphics card would be the ideal offload.

So, unless you limit or optimize those demanding processes, they can monopolize the processor and starve Access of the resources it needs.

Malware can have a similar effect. If your system is compromised and background processes are consuming CPU time (e.g., botnet or "zombie" activity), Access performance will drop accordingly.

If you're certain that only Access and its required components are running, then performance limits are largely determined by your VBA programming practices, database design, UI efficiency, and overall IT environment.
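On the "VBA programming practices" point, one classic example is wrapping bulk writes in a transaction so the engine commits once instead of once per row. A minimal sketch, with a made-up table and field:

    ' Minimal sketch: one commit for 10,000 inserts instead of 10,000 commits.
    ' "T1" and "F1" are hypothetical names.
    Public Sub FastBulkInsert()
        Dim ws As DAO.Workspace
        Dim db As DAO.Database
        Dim i As Long
        Set ws = DBEngine.Workspaces(0)
        Set db = CurrentDb
        ws.BeginTrans
        For i = 1 To 10000
            db.Execute "INSERT INTO T1 (F1) VALUES (" & i & ");", dbFailOnError
        Next i
        ws.CommitTrans                  ' flush everything in one shot
    End Sub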
 
Let's assume that the RAM is sufficient to completely contain both the operating system and all the applications currently running.

OK, that suggests you need not less than 7 GB of RAM plus the size of your Access app when IT is fully loaded. Since it is unwise to have asymmetric memory module sizes, 8 GB is not quite enough and the next symmetrical size would be 16 GB. You CAN have odd RAM mixes like 1 x 8 GB plus 1 x 4 GB, giving 12 GB, but that asymmetry prevents you from setting your hardware to automatically interleave.

For the best memory interleaving, you want 16 GB of RAM in a 4-module x 4 GB configuration with the hardware set up for 4-way address interleave. That will shave quite a few percent off memory internal latency. I don't know if you can set interleave via ordinary BIOS settings; it used to be a jumper on a particular device a couple of decades ago, but in the 2010-2016 time frame I recall I could set interleaving in my machine's UEFI boot-up manager. Because of irregular data element sizes for some instructions, it is not quite double the speed for 2-way or quadruple for 4-way interleave, but there IS a big advantage to memory interleaving when possible. I doubt you can get MORE than 4-way interleaving because most systems only support 4 memory modules.

The other thing to do - and this is now more commonplace to be able to find - is to make the storage device interface as wide (bits per single-cycle transfer) as possible and DMA-enabled. You would ask "but why WOULDN'T it be DMA-enabled?" To which the answer is "a USB 3.0 serial disk interface." If speed is your need, use a PC equipped with PCIe bus architecture and an NVMe-compatible SSD. Second choice is an SSD on a SATA interface, but PCIe/NVMe is fastest because of the way it ties into the backplane of your box.
 
Let's assume that the Access procedure is running on PC1, which accesses data on a locally installed SQL Server through the ODBC driver.
Ok, now that changes things a wee bit, but again, with hardware where it stands?
Well, since the network bottleneck is gone?
Well, as long as one has a multi-core CPU - and these days, that's probably a minimum of 6 threads, and more cores?
Then that will suffice, and once again, throwing more CPUs, or more RAM, at this?

It's not really going to help much.
As noted, I certainly recommend a newer NVMe drive as compared to, say, a SATA SSD.
And depending on what tools one has loaded while working?
Well, 16 GB is probably the minimum, and say 24-32 GB is better. I mean, during the day, the "hogs" I have open are Visual Studio (open nearly all day), Outlook, and some browsers, along with SSMS and SQL Express. These run on my computer all day long. Outlook and Visual Studio seem to be the real hogs these days. But, since I do lots of web stuff all day long? Well, now I have:

Visual Studio running.
SQL Express server running.
SSMS running.
IIS running (Internet Information Services - the web server).
Outlook.
Access.

And then the issue becomes whether you have, or want, any VMs running - I have lots of those, but I don't run them all day.

However, this conversation was limited to JUST Access, and now the question introduces SQL Server into the mix.

I think I would trade some CPU cycles (and cores) for more disk drive speed. In many cases, if you talk to gamers, they will state that loading a game from a SATA SSD is not really any slower than loading from an NVMe drive, despite the NVMe drive being rated 5x faster.
But why?
Well, it turns out most game assets are highly compressed, and it's the CPU that limits the load times of the game (the un-compressing and un-raveling of all the game assets into memory is very much CPU-bound, not drive-speed-bound). So you don't see a big jump in performance - it's still largely CPU-bound to "assemble" the game assets.

However, in pure database land? Then drive speed REALLY is going to help more than CPU speed.

And during the day? It's quite often that I'll take a copy of a database from, say, the production server and "restore" it locally.
That database is about 7-8 GB in size. So the faster (local) NVMe drive is VERY noticeable when I restore that database - and I do this often.
(We often do this to track down a bug - be it in Access or from the web site - both share the same database.)

And it will depend on what kinds of updates and SQL one is doing here. For me, I don't notice much speed difference in database queries etc. from more CPU cycles. But I MOST certainly benefit from and notice improved development times with faster hard drives - such as when restoring and backing up databases, or working with larger projects in Visual Studio.

I thus can't share a lot more than the above - but a minimum of 24 GB, a fast hard drive, and say at least 12 or more CPU threads?
After that, I really don't have first-hand experience. Access is single-threaded, and SQL Server can use more cores/threads - but with just one user (you, the developer), even throwing more threads at SQL Server is not going to help, since threads are a "sideways" scaling issue.

By "sideways" scaling?
That means more users and a larger workload can be handled, but that does not mean that each single user will see their work completed faster by adding more threads/CPU's. So, a single developer box tends to not be thread starved anyway. And if you not thread starved, then adding more cores/threads not going to help you.

Going from, say, a 3.5 GHz CPU to 4 GHz? That will only shave a bit over a second off a 10-second operation (10 x 3.5/4 = 8.75 s) - you probably won't notice that.
Same for most queries and SQL stuff. However, SQL backups and restores etc.? Oh boy, that's where I really smile and enjoy the benefits of a good fast hard drive - such things help the developer more during a given day than a few more CPU cycles...

R
Albert
 
Years ago, back in the olden days, speed was a real issue for me with Access.
Then I read about the permanent connection and it wasn't a problem from that day on. Windows 97 was probably the first OS where database systems benefited (since C-DOS). Before that, some thought that 3 to 5 seconds to pull a record was acceptable. I've always thought that after 1995 any spec of machine will do just fine for industry and commerce.
 
Could not agree more!

And I'm not a big hardware guy, and not a gamer.

Really, today we are spoiled with hardware. So, for me? Just about any machine today is more computer than we could dream about years ago.

Often, when dragging and copying an Access database - say 130 MB? You can drag and drop that like nothing!
And to think that one file is larger than my first 40 MB hard drive, and in fact larger than my first 100 MB drive!

It's just incredible what we do without even being amazed anymore. That file above? I can't even feel a delay when I copy it!

I suppose if I were more of a hardware person, I could perhaps add additional advice here - but I really can't add much more than what I have shared here.

For the most part, a decent computer today with a good hard drive is more than ample for 99% of developers...

Now, if one is doing video editing and graphics work?
Well, then I am just wandering into an area where I don't have the experience to recommend hardware...


R
Albert
 
SQL Server is best run on a Windows Server edition.
Use the latest tech for your server: not just a simple PC, use a server-class machine.
For a workstation, there are 13th-gen Intel CPUs now.
 

OK, I wrote SQL Server + ODBC only to define a system.
But for the Access code? Which hardware + software system reaches the best performance?
 
...Going from, say, a 3.5 GHz CPU to 4 GHz? That will only shave a bit over a second off a 10-second operation - you probably won't notice that...

1 second from a better CPU
1 second from a better disk
1 second from a better OS
makes 3 seconds!!!
 
Past a certain point, you will not squeeze any more blood from the turnip. Latency is easy - until it isn't. It is not enough to say that you will use physically interleaved memory if somewhere in the mix you run into disk-based latency or network-based latency or O/S based latency or user-based competition latency. And before you say "but I'm running as a single user"... no, you are not. Nobody is. Because your other user is the 350 system service tasks running something that shows up on Task Manager as "SVCHOST.EXE" or something similar. You are running literally HUNDREDS of service tasks. And you can't shut ALL of them down.
 
I think you are looking for some hardware magic bullet that will run software fast no matter how it's written. There is no such thing!

For example, an earlier post with sample code to calculate perfect numbers from 2 to 10000 ran in 5 seconds on a 4-year-old i7 with 32 GB of memory and a P2 drive, optimized for 3D modeling. Its replacement, with an i9, 4 times the memory, and a faster P2 drive, ran the same code in 1 second. That is almost a pure CPU load; Access will load the I/O, not the CPU.
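The original sample code isn't shown in this thread, but a CPU-bound test of that sort would look something like this sketch (my reconstruction, not the code from the earlier post):

    ' Minimal sketch: brute-force search for perfect numbers from 2 to 10000.
    ' Pure CPU work - no disk or network - so it scales with core speed.
    Public Sub FindPerfectNumbers()
        Dim n As Long, d As Long, total As Long
        For n = 2 To 10000
            total = 0
            For d = 1 To n \ 2                   ' sum the proper divisors
                If n Mod d = 0 Then total = total + d
            Next d
            If total = n Then Debug.Print n      ' prints 6, 28, 496, 8128
        Next n
    End Sub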

Another example, not Access: when I was learning 3D, a user had a sheet-metal part with thousands of holes in a grid pattern; it was taking 20 minutes to calculate. With a change in modeling methods, processing was instant.

Access is the same: poor data design cannot be fixed by adding better hardware.
 
A poor design will benefit from improved hardware performance just as much as a good design
 
