Bruce,
On almost all LANs (most networks use Ethernet), there are no ties.
Only one signal from one PC can be moving over the network wire at a time (think of "the wire" as a single cable looping through every PC on the local area network). If two signals are on the wire at once, there is a collision and both signals die. The competing PCs, each with an ear to the wire, hear their packets expire, wait a random period, and re-send.
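That wait-a-random-period dance has a name -- binary exponential backoff -- and it can be sketched in a few lines of Python. The slot time and the cap below are the classic 10-Mbps Ethernet values; treat this as a rough illustration, not the actual wire protocol:

```python
import random

def backoff_delay(collisions, slot_time_us=51.2):
    """Rough sketch of Ethernet binary exponential backoff.

    After the n-th collision, a station waits a random number of
    slot times chosen uniformly from 0 .. 2^min(n, 10) - 1 before
    re-sending, so repeat collisions get progressively less likely.
    """
    k = min(collisions, 10)                # window is capped at 2^10 slots
    slots = random.randint(0, 2 ** k - 1)  # random backoff, in slot times
    return slots * slot_time_us            # delay in microseconds
```

Because each colliding PC picks its own random count, the odds of them colliding again shrink with every retry.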
If the wiring is centralized with a switch (rather than a hub), each signal races over its own individual wire, collision-free, to the switch, where it is processed. This is why a switch is faster than a hub, all else being equal -- no collisions.
That is, if Computer A and Computer B both send a signal bound for the server -- via the switch -- the switch determines who goes first over the wire that runs from it to the server.
The server processes the signals it receives and hands them to the app. If the server is busy with other tasks, it's conceivable that two requests for the same record in the same app, made at different times, will queue up as though they had arrived simultaneously.
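That queuing-up is just first-in, first-out. A minimal sketch in Python (the station and record names are made up for illustration -- the server isn't literally running this):

```python
from queue import Queue

# Requests from two workstations land in the server's queue and are
# processed strictly one at a time, in arrival order -- even two
# requests for the same record made at nearly the same moment.
requests = Queue()
requests.put(("Computer A", "record 42"))
requests.put(("Computer B", "record 42"))

served = []
while not requests.empty():
    station, record = requests.get()  # FIFO: first in, first served
    served.append(station)
```

Whoever's packet won the race to the queue gets served first; the other simply waits its turn -- no tie, no conflict.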
As you can see, it's ultimately the last link in the chain (the server) that determines what gets pushed to the app and when, and then it's the app that deals with the multiple requests arriving from the server, via the switch/hub, from the workstations. Traffic from other apps on the network, and server performance, can both influence net speed and reliability.
The point: Assuming I didn't just make all this up, networks are built to handle "simultaneous" access and, at least on paper, so is Access.
A thought: If your app works as advertised, you may see some frustration from the routing trio when a job they want to schedule disappears and then, perhaps (I don't know if you built this in), later comes back because one of them decides to throw it back into the bin, so to speak. That is, you must guard against jobs getting left behind, if you haven't already.
Regards,
Tim