What PC hardware/software will speed up Microsoft Access?

However, I have seen significant performance gains when running Access 2010 MSI-installed apps on newer hardware with Windows 7, because that version of Access is not a Click-to-Run resource hog, and it has a smaller memory and storage footprint than the newer C2R versions that have to authenticate online with Microsoft. The Microsoft Account login is a real hog. Win7 doesn't have that problem, and I'm still running Win10 without an MS Account. I don't think you can run Win11 without logging in with an MS Account.

Windows 7 is also not a hog like Windows 10 and 11, which have embedded telemetry, Copilot, and a thousand other processes (which I call virtual users) running simultaneously. SMALLER IS BETTER!!! Try launching newer Office versions without being connected to the internet and see what happens.

When launching an A2010 MSI-based app for the first time after booting up my Win7 box, I briefly see the MS Access splash popup flicker on the screen. On subsequent launches I never see the splash flicker at all. That's because MSACCESS.EXE and the app are already cached in memory. And with new NVMe SSD storage running at up to 14.9 GB/sec, most everything runs at lightning speed. On Win10, the same A2010 MSI app is notably slower.

However, no matter how much CPU power, memory, and storage your box has, Access is limited to 2 GB of memory and one CPU core, because Access is not multi-threaded.
Is the 2 GB limit your application's memory limit? Doesn't Access/Windows cache table data in memory as it's loaded?
 
Okay, we totally agree.
It is just the way it is.
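For what it's worth, that 2 GB ceiling is, strictly speaking, a hard limit on the size of a single .mdb/.accdb file (and a 32-bit MSACCESS.EXE process tops out around 2 GB of addressable memory). A minimal sketch of a helper to watch how close a back-end file is getting; the function name is my own invention, not anything from Access:

```python
import os

# Access databases (.mdb/.accdb) have a hard 2 GB file-size limit, and a
# 32-bit MSACCESS.EXE process tops out around 2 GB of addressable memory.
ACCESS_LIMIT_BYTES = 2 * 1024 ** 3  # 2 GiB

def headroom_report(db_path: str) -> str:
    """One-line report of how close a database file is to the 2 GB limit."""
    size = os.path.getsize(db_path)
    pct = 100.0 * size / ACCESS_LIMIT_BYTES
    return f"{db_path}: {size / 1024 ** 2:.0f} MB used ({pct:.1f}% of the 2 GB limit)"
```

Pointing it at a back-end file nightly (and compacting before the percentage gets high) is the usual defensive move.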

Now, of course, we could ask: what is the maximum hardware setup to get max performance here?

Well, every day, at one client's location, I remote into a server. How amazing and fast is this server?
Well, it cost over $100,000 - in fact quite a bit more!!

And boy, let me tell you, fast is fast!

But what is the key technology in that server that makes it run so fast? (Hint: it's not really the 32 CPUs it has, and it's not really the insane gobs of RAM it has, either.)

And if a hard drive (now NVMe or SSD) fails?
You can pull it out while the server is running and just pop in a new drive. It will auto-format, and being part of a RAID array?
Well, it's fail-safe.
And so are the multiple power supplies. You can slide a power supply out while the computer is running, pop in a new one, plug it in, and the computer just keeps on running as if nothing happened!!! They are REALLY amazing bits of hardware.

And now the server room? Used to be full of about 18 computers?

Nope, all gone - just one computer now - everything is a VM on that box.

What really makes that server fly? And can you afford to "mimic" what that super-duper server has?

Yes, you can, and the answer is RAID drive technology.

So, good desktop computers and their motherboards support RAID.

That means you can, say, buy four fast 1 TB NVMe drives.

You RAID them all together. Those 4 TB of drives thus become 2 TB, or even 1 TB, of usable storage, depending on the RAID level.

And that means if one drive fails? You can pull it out while the computer is running and put in a new one, without the computer stopping.

And the speed? Well, you can increase the speed of the drives by a factor of 4x. Not 30%, but a whopping 4x increase in drive speed (depending on how you RAID multiple drives together).
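To put rough numbers on the capacity and speed trade-off, here is a back-of-envelope sketch for four 1 TB drives; the read-speed multipliers are idealized best cases I'm assuming, not a promise from any particular controller:

```python
# Back-of-envelope numbers for 4 x 1 TB drives under common RAID levels.
# The read-speed multipliers are idealized best cases; real gains depend
# on the controller, workload, and queue depth.
DRIVES, SIZE_TB = 4, 1

layouts = {
    # level: (usable fraction, ideal read-speed multiplier)
    "RAID 0 (stripe)":         (1.0, DRIVES),         # fast, no redundancy
    "RAID 10 (stripe+mirror)": (0.5, DRIVES),         # half the space, redundant
    "RAID 5 (parity)":         ((DRIVES - 1) / DRIVES, DRIVES - 1),
    "RAID 1 (4-way mirror)":   (1 / DRIVES, DRIVES),  # reads can fan out
}

for name, (frac, mult) in layouts.items():
    usable = DRIVES * SIZE_TB * frac
    print(f"{name}: {usable:.1f} TB usable, ~{mult}x read speed")
```

RAID 0 keeps all 4 TB and the full 4x reads, but with zero redundancy; RAID 10 is the usual compromise, which is why four 1 TB drives often end up as 2 TB (or, fully mirrored, 1 TB).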

As noted, most operations don't run a lot faster than on my laptop.
However, my current laptop has an NVMe drive, and the previous one had a SATA SSD.

When using my SSD laptop? Well, that server was a "joy" to work on, and it was "oh so much" faster.

However, with the NVMe drive in my laptop? Well, sure, the server is still somewhat faster, but I don't notice a lot of difference, and I can't say being on that server saves me noticeable time and benefits. Of course, one area where it does would be VMs. And VMs are one of the most useful developer technologies. Need a test web server? Just spool up a new VM. Need a test copy of Win 11, and want to test your Access application installer system? Spool up a new VM. Want to test, say, 2-4 computers and a server? Just spool up four computers and one server - all networked VMs, all existing without having to buy or obtain new hardware.

Want to run and test 64-bit and 32-bit Access? Again, VMs are the answer.

So, what word processing is to a personal assistant? Well, that's what VMs are to developers!!! Immensely useful.

But if you're looking for the ultimate in performance?

Go with a RAID drive setup.

It can double or quadruple your drive-space cost, but it will also double or quadruple your drive speed!

I can't really think of any other technology that can speed up a computer today the way RAIDing your drives can.

That's what these fast servers use, and I can say wow - they really do perform fast when it comes to anything involving files...

So, if you're looking for the ultimate in performance, then I suggest exploring the use of RAIDed drives...

R
Albert
 
Well, every day, at one client's location, I remote into a server. How amazing and fast is this server?
Well, it cost over $100,000 - in fact quite a bit more!!
And boy, let me tell you, fast is fast!
....
R
Albert

Exactly what desktop computer are you comparing it to?
I mean, if you're used to using a low-end PC, you might be surprised by the performance of any SSD RAID.
Is it possible to know the CPU/RAM/disk + software configuration of the super-fast server?
 
But it doesn't answer the question posed

But only an honest friend will tell you when you've asked the wrong question.

Still, if you want to know what will make a hardware difference, it is the backplane architecture.

I've mentioned that I upgraded my home machine when my old beast died on me. But that old beast was a 64-bit bus with 16 GB RAM and an SSD on a SATA interface. With 4 CPU cores and thus 8 threads, at 3.6 GHz, it was a cranking good machine. Not material to the question, but it had a darned good video card for its time. But it still wasn't the hottest thing in existence, because internally it didn't use PCIe backplane architecture, didn't have huge intermediate caches on the memory bus, and didn't have look-ahead instruction staging. One of my machines out at work, though it was only a 1.6 GHz box, could out-perform my 2x-faster CPU, because it had the memory and I/O architecture of a server, with multiple independent I/O paths via Fibre Channel technology and instruction staging (sometimes referred to as "instruction pre-fetch"). In case anyone is wondering, it was an Intel Itanium server based on the "Longhorn" chipset. I know about "outperform" because there were some benchmark programs I could get from Navy sources, and it was legal to run them on my home machine since I sometimes used it to remote-connect to the Navy beast box.

I liked it because that box had the DAMNEDEST math capabilities, including all the way out to IEEE X format (128-bit floating point). When I did the statistics for my monthly performance reports, there were NEVER any math rounding errors in the intermediate calculations.
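You don't need Itanium hardware to see why wider intermediate precision matters. A small Python illustration, using the decimal module at 34 significant digits (comparable in precision to a 128-bit IEEE format, though decimal rather than binary), shows the rounding error that ordinary 64-bit floats pick up:

```python
from decimal import Decimal, getcontext

# 64-bit binary floats accumulate rounding error in intermediate sums:
print(sum([0.1] * 10))       # 0.9999999999999999, not 1.0

# Widening the working precision (34 significant digits, comparable to a
# 128-bit IEEE format) makes the same sum come out exact:
getcontext().prec = 34
print(sum(Decimal("0.1") for _ in range(10)))  # 1.0
```

Scale that up to thousands of intermediate terms in a statistics run and the difference stops being cosmetic.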
 
But only an honest friend will tell you when you've asked the wrong question.
Yes you are right
But mine is not wrong. :ROFLMAO:

One of my machines out at work, though it was only a 1.6 GHz box, could out-perform my 2x faster CPU... In case anyone is wondering, it was an Intel Itanium server based on the "Longhorn" chipset. ...

With Itanium, at 1.6 GHz, hmmm... maybe this?
$529K USD, without disks and without RAM, obviously.
Hopefully they'll offer a discount.
 
First, the Integrity server is what we had. But they certified it for OpenVMS as well; your link no longer shows that. I guarantee you, I was NEVER able to saturate that machine. Ever. We had the 8-thread version, whereas the link is to the 16-thread version - two 4-core (8-thread) chips instead of one. But I monitored system states. On an 8-thread system, I never saw a saturation level of more than 2.5/8. I ALWAYS had 5 threads open. We used to refer to the Integrity server running OpenVMS with 20-30 users online at once as "the world's biggest I/O wait state." If I had the end-of-month load, CPU might reach 2.5/8, but I might see 20 out of 25 user processes in a voluntary wait state - either I/O or a "SUSPEND" (which usually meant one task was waiting on another task). There is also the matter of its cost, a mere $528K in US dollars.

Second:
Yes, you are right.
But mine is not wrong.

Given that your question is about Access, which is already one of the less efficient programs out there, let's say this: if I had to spend $528K to buy a machine to run Access a little faster, I would SERIOUSLY consider looking at a way to do this in another language that is true-compiled rather than pseudo-compiled. The software cost might be less than $500K. AND my earlier comment about bottlenecks is especially relevant, since I personally know the performance of this machine: you will get no more speed out of the machine than you can push through the I/O devices.
 
Just for clarification, Tandem is now a fully-owned subsidiary of HPE, though they didn't start that way. Both Tandem and DEC had "non-stop" computers in which, if one machine failed, another could take up its work in a matter of seconds. They DID advertise "non-stop," but they didn't advertise "won't slow down a bit during transitions." VAXclusters literally could, with a shared-memory setup, have computers that could take over an active thread if a host failed. I never had a VAXcluster with shared memory, but I did manage more than a couple of VAXclusters. Sweetest architecture you've ever seen, until things got too fast for the CISC architecture. But it was OK, because a version of OpenVMS was released that worked on RISC architecture, and eventually on the Integrity server. In theory they could have increased the CPU internal clock speed, but VMS spent most of its time with users in an I/O wait state where they weren't executing instructions anyway, so what was the purpose? (Rhetorical...)
 
Exactly what desktop computer are you comparing it to?
I mean, if you're used to using a low-end PC, you might be surprised by the performance of any SSD RAID.
Is it possible to know the CPU/RAM/disk + software configuration of the super-fast server?
You are 100% correct. As noted, with my older computer, I "really" noticed the speed difference.
With a newer (and not really high-end) laptop, that daily speed difference was no longer an "oh wow, the server is oh so nice" experience.

So, that server is still fast, but as noted, standard hardware has improved so much that it's not a huge deal anymore. It's been some time since I had to restore a database on that server (and most often, I'm inside of some VM). So there's not a whole lot of difference versus, say, restoring a SQL Server database on my laptop (that database is about 8 gigs in size, and I do that multiple times during a typical work week).
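The arithmetic behind "not a whole lot of difference" is easy to sketch; the throughput figures below are ballpark sequential rates I'm assuming, not measurements of this server:

```python
# Idealized time to move an 8 GB database backup at typical sequential
# throughputs, ignoring protocol overhead and parallelism:
size_gb = 8
for name, gb_per_s in [("Gigabit LAN", 0.125), ("SATA SSD", 0.55), ("NVMe SSD", 7.0)]:
    print(f"{name}: ~{size_gb / gb_per_s:.0f} seconds")
```

Once the laptop has an NVMe drive, the local restore stops being the slow path; the network hop becomes the bottleneck either way.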

The one thing I have considered was buying a laptop that supports RAIDing the drives you put into it - something I've not really researched.

But, for a good desktop computer? Then RAID options are available.

As for the server specs? Gee, I've not really looked - not my job, so to speak...

It is a DELL server as far as I know. And I don't think the number of CPUs in it is maxed out (I think each CPU board has 4 physical CPUs, each with multiple "logical" CPUs).

However, recently, a few servers we have are Lenovo ones, since that's what the main supplier of hardware (for over 100+ desktops and other things) tends to sell us right now.

So, for example, our web server (which shares the same database as the desktops running the Access application) has now been moved off of that main big server, and now runs on 100% its own hardware box. On that server, the web site runs inside of a VM. Thus, this server is 100% separate hardware, and the computer is NOT part of the company domain anymore. This was done for reasons of security. So, that physical server sits behind some hardware network security gear and is isolated in its own DMZ zone. It has no general company domain rights, and even file access to the company network is "carefully" managed and restricted.

And total disk storage? Oh, well, it's hundreds of TBs right now - I'll have to ask the IT guy how much, but it's boatloads (and network storage devices - SANs - are used for this).
But the main OS and most things (including SQL Server) are on that main super server's drives - a heavily RAIDed setup...

If I can remember during this week, I'll ask for the actual specs...
The CPU is of course an Intel server Xeon - Xeon Silver 4410Y's, if I recall correctly.

So, server CPUs often don't have higher cycle rates (GHz clock speeds), but they are optimized for multi-threaded workloads, and those types of CPUs have really good multi-socket support - so high memory bandwidth, etc. (and error-correcting RAM, etc.).

So, each of those CPUs? They have 12 cores / 24 threads. And they don't have "little" cores and "big" ones, but are all uniform. And they are of course tuned for stability over raw CPU cycle rates (GHz). And those CPUs support massive memory bandwidth - 256 GB/s (a new Intel Core i9 runs at about 90 GB/s, if you're wondering).

So, I don't know how many of those 12-core units are inside the server - but it's at least 4 of them, so that would be 48 cores, and each of those 12-core units looks like 24 CPUs (hyper-threads). So that looks to be about 96 CPUs, if we're counting logical processors...
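Spelling out that core math (the socket count of 4 is my reading of "at least 4 of them", not a confirmed spec):

```python
# Logical-processor count for the Xeon Silver 4410Y setup described above.
sockets = 4            # assumption: "at least 4 of them"
cores_per_socket = 12  # the 4410Y is a 12-core part
threads_per_core = 2   # Hyper-Threading

physical_cores = sockets * cores_per_socket
logical_cpus = physical_cores * threads_per_core
print(physical_cores, "physical cores,", logical_cpus, "logical processors")
# 48 physical cores, 96 logical processors
```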

But back on earth? For us mere mortals?
You can actually get as many CPU cycles, or even more, on a desktop processor. Those server CPUs thus scale "sideways" and allow more users and threads - it's not necessarily the case that each thread is going to be faster than on a desktop.

But, it is amazing what money can buy.........


R
Albert
 
x64 will give you a large memory space.
M.2, one of the single-board style drives that connects directly to the motherboard, as opposed to an SSD that plugs in like an HDD.
Core speed would be more important than core count with Access.
If you think about it, Access is more likely to be I/O bound than CPU bound. Once the BE is shared on a server, data transfers on your network will be the bottleneck. Our current servers have mirrored solid-state drives hosting the BE on an all-Cat6 network.
 