Sonic, I think you miss the point.
If the database is used for queries only - no writing - then once it is clean it is HIGHLY unlikely to become corrupted. That kind of read-only usage is typical of "data mining" and "decision management" systems, and those are incredibly common.
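Just to make that concrete, here is a minimal sketch (Python and SQLite purely for illustration - the file name and table are made up, and the same idea applies to any engine): open the database read-only, so queries can run all day with no chance of a write corrupting it.

import sqlite3

# Hypothetical file and table names - any query-only database works the same way.
DB_PATH = "warehouse.db"

# mode=ro opens the file read-only: SELECTs run fine, any attempt to write
# fails, so a clean database stays clean.
conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)

for row in conn.execute("SELECT COUNT(*) FROM sales"):
    print(row)

conn.close()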
Multi-user concurrency and network load were not considered because the experiment was done to isolate DB engine speed from the other speed-killing factors. That is the nature of experiments: you isolate one factor at a time, which is analogous to partial differentiation in mathematics. You vary one thing to see what effect it has while everything else is held constant. So what the article did is fine by my standards of experimentation.
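A rough sketch of what such an experiment boils down to (Python, with a hypothetical engines dictionary of connection-like objects standing in for whatever the article actually timed): same query, same data, same machine, one user, no network - only the engine varies.

import time

def timed(fn, *args):
    # Wall-clock time for one call.
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

def time_engines(engines, sql, repeats=5):
    # Everything is held constant except the engine under test;
    # take the best of a few runs to smooth out noise.
    results = {}
    for name, engine in engines.items():
        results[name] = min(timed(engine.execute, sql) for _ in range(repeats))
    return results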
You talk about those Enterprise Edition servers. Do you want to know the REAL difference between my Intel i7-2600 system and a server-class machine? Memory size, wider interleaving, and a faster disk bus. We both have hardware multi-threaded CPUs (my i7-2600 runs 8 threads on 4 cores), and we both have memory matched to the back-plane - and my system's back-plane was designed for gaming, so it is pretty fast.
If you have a stand-alone machine, even one as dinky as my little 16 GByte DELL XPS 8300, you can run multiple threads and the memory sub-system still takes data in 64-bit gulps at a 3.2-3.4 GHz rate. Even a fiber-channel interface as slow as 2 Gbit in place of my SATA interface would let me run faster than I already do, and FC interfaces faster than that are available if I wanted them.
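To put rough numbers on that (raw line rates only, ignoring encoding and protocol overhead, using the figures mentioned above):

# Back-of-the-envelope line rates in Gbit/s - not measured numbers.
memory_gbps = 64 * 3.2   # 64-bit transfers at 3.2 GHz ~= 204.8 Gbit/s
sata3_gbps = 6.0         # SATA III line rate
fc2_gbps = 2.0           # the 2 Gbit fiber-channel interface mentioned above

print(f"memory path: {memory_gbps:6.1f} Gbit/s")
print(f"SATA III:    {sata3_gbps:6.1f} Gbit/s")
print(f"2 Gbit FC:   {fc2_gbps:6.1f} Gbit/s")
# Whatever the disk interface is, the memory path is a couple of orders of
# magnitude wider - which is why keeping data in memory matters so much.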
I worked with U.S. Navy servers that were nothing more than fancy multi-core systems with big memory, a fast disk bus, and a good network card. Inside the box? The CPU in the server was the same as the one in my desktop. The memory was the same. The memory interface was a little different because it allowed 4-way interleaving and supported a faster DMA (fiber) interface. But it still supported EISA buses.
Do you really know why the "server-class" machines are so fast? Because they have big memory, in the 192-256 GByte range. But it's the same memory I use in my system, balanced against back-plane speed. So why do those machines seem so much faster?
Your job runs at about the same speed on one of those machines as it does on my system - but those machines time-share across multi-threaded hardware AND they don't have to do as much paging or swapping. With big memory you can keep multiple big jobs resident at the same time. THAT is why the server-class machines get so much done. They don't swap.
But Windows is Windows. At startup, a Windows process does the same thing on my Win10 Home box as it does on Windows Server 2012. The loader points the memory-management tables at the segments of the .EXE file that hold the executable image, then jumps (GOTO) to the first instruction, which triggers a cascade of page faults as the new image is pulled into memory. Same difference either way.
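You can watch the same mechanism from user code. Here is a small sketch (Python's mmap module, purely to illustrate demand paging - this is not the Windows loader itself, and the file name is made up): setting up the mapping is nearly free, and the data only comes off the disk when each page is first touched.

import mmap
import os

PATH = "big_image.bin"   # hypothetical file - any large file on disk will do
PAGE = mmap.PAGESIZE

with open(PATH, "rb") as f:
    size = os.fstat(f.fileno()).st_size
    # Setting up the mapping is cheap: bookkeeping only, no I/O yet.
    view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    total = 0
    # Touching one byte per page is what actually pulls data in from disk.
    # Each first touch is a page fault - the same cascade a freshly started
    # .EXE triggers when it begins executing.
    for offset in range(0, size, PAGE):
        total += view[offset]

    view.close()

print("touched", (size + PAGE - 1) // PAGE, "pages, checksum", total)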
In the final analysis, the article is saying that the folks looking at the performance of the "monster machines" are looking at aggregate performance, usually as a time-share system. If you want to isolate specific comparisons and see which ones are significant, you leave those factors out and concentrate on software speed - which is exactly what the article did.