In plain English, though somewhat detailed: what is it about splitting that makes the difference?
The problem stems from having multiple copies of Windows and of Access running in the memory of each individual user's workstation, NOT in the server's memory. These copies DON'T KNOW about each other. The memories are separate and do not communicate with each other. Windows handles the overall file information; Access handles the file's content information. Let's consider two users, #1 and #2.
When Windows reads data about a file or Access reads data about a table, that information is in the local memory of the workstation. If user #2 then comes along behind user #1 and somehow changes the information on the server, and if user #1 then acts on the (now obsolete) information it read only a moment ago, you get data update conflicts. Remember, these updates are NOT coordinated. They can occur unpredictably since people are driving the process. (It is the way people work, not the process itself, that makes it unpredictable.)
If user #1 then writes an update based on that incorrect copy, changing the database in the process, that action in essence negates, overwrites, or otherwise alters things done by user #2.
This is the REAL meaning of "destructive interference", also called "left hand not knowing what the right hand is doing." And losing user #2's data would be the SIMPLER of the two possible bad results.
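The "lost update" described above can be reduced to a minimal sketch. This is plain Python, not Access internals; the record, field name, and values are made up purely for illustration, and the dictionaries stand in for the server file and each workstation's private cache.

```python
# Minimal sketch of a "lost update": two workstations each cache a record,
# then write back independently with no coordination between them.

shared_file = {"balance": 100}   # stands in for the database file on the server

# Both users read (cache) the record into local workstation memory.
user1_cache = dict(shared_file)
user2_cache = dict(shared_file)

# User #2 adds 50 and writes back first.
user2_cache["balance"] += 50
shared_file.update(user2_cache)   # server now holds 150

# User #1, still acting on the now-stale cache, adds 25 and writes back.
user1_cache["balance"] += 25
shared_file.update(user1_cache)   # server now holds 125, NOT 175

print(shared_file["balance"])     # user #2's change has been silently lost
```

Neither copy of the program did anything "wrong" from its own point of view; the damage comes entirely from the two private memories not knowing about each other.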
One of the things that each copy of Access does when it launches for a given file is to remember where things are located in that database. It copies all sorts of pointers to its local memory so that it can quickly find things later. One of those things is the size of the file. Another is the place where unused space starts at the end of the current database file.
You might interject here... "But what about deleted objects?" Access doesn't delete things right away. It just MARKS them for deletion. It doesn't reclaim that space until you do a Compact & Repair. So there IS a point of "first free space in file" that is not affected by prior deletions that haven't been compacted away yet.
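The mark-for-deletion behavior can be sketched the same way. Again, this is an illustrative Python model, not the actual Jet/ACE page layout: "pages" here is just a list, and the functions are hypothetical stand-ins for the ideas of flagging, finding free space, and compacting.

```python
# Sketch of "mark for deletion, reclaim at compact": deleted pages stay in
# the file, so the first-free-space point keeps moving forward until a
# Compact & Repair rewrites the file without them.

pages = [{"id": 1, "deleted": False},
         {"id": 2, "deleted": False},
         {"id": 3, "deleted": False}]

def delete_page(pages, page_id):
    # Access-style delete: flag the page, do NOT reclaim its space
    for p in pages:
        if p["id"] == page_id:
            p["deleted"] = True

def first_free_space(pages):
    # Free space still begins AFTER every page, deleted or not
    return len(pages)

def compact(pages):
    # Compact & Repair: rewrite the file, dropping the flagged pages
    return [p for p in pages if not p["deleted"]]

delete_page(pages, 2)
print(first_free_space(pages))   # still 3: deletion didn't shrink the file
pages = compact(pages)
print(first_free_space(pages))   # 2: space reclaimed only at compact time
```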
Since Access uses some of the database file's space to hold long lists for open queries, users opening SELECT queries DO update the Access workspace, even if they never update the tables. If two users' operations hit the same area of the workspace at the same time, the workspace gets overwritten. When that happens, you get a CORRUPTED workspace and the database stops working. It will have to be recovered, and you always keep your fingers crossed that you CAN recover it.
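That overwrite can be sketched as two unsynchronized writers hitting the same region of a shared scratch area. The byte layout and values below are invented for illustration; the point is only that the last writer wins and the first writer later follows a pointer that is no longer its own.

```python
# Sketch of workspace contention: two users write a 4-byte "pointer" into
# the same slot of a shared workspace with no locking between them.

workspace = bytearray(8)   # stands in for a shared region of the file

def write_pointer(offset, value):
    # no lock is taken: each copy of the app assumes the slot is its own
    workspace[offset:offset + 4] = value.to_bytes(4, "little")

write_pointer(0, 0xAAAAAAAA)   # user #1 stores its query-list pointer
write_pointer(0, 0xBBBBBBBB)   # user #2 overwrites the same slot

# User #1 later follows "its" pointer and gets user #2's value instead.
print(hex(int.from_bytes(workspace[0:4], "little")))
```

From user #1's point of view the workspace is now garbage, which is exactly the corrupted-workspace situation described above.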
That is the structural, internal reason why sharing a monolithic database is a bad idea.
The best way that has been discovered to avoid file corruption is to split the structural part from the data part. This is commonly referred to as a front-end/back-end or FE/BE split. When this happens, the locks and pointers associated with the FE file are PRIVATE and there can be NO contention for them. The "window of opportunity" for a collision of conflicting use totally vanishes (closes) for anything in the FE file.
The technique of using optimistic locking of data in the BE file allows for the "window of conflict opportunity" to be closed MOST of the time. This means that with judicious coding of queries and other data-access techniques, you open the data tables only when you actually need them.
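Optimistic locking can be sketched as a version check at write time: read the row plus a version number, edit with no lock held, and apply the update only if the version is still what you read. The table, field names, and version scheme below are illustrative, not the actual Jet/ACE mechanism.

```python
# Sketch of optimistic locking: detect a conflicting edit at write time
# instead of holding a lock for the whole editing session.

table = {"row1": {"name": "Smith", "version": 1}}

def read_row(key):
    # snapshot the row and remember the version it had when read
    row = dict(table[key])
    return row, row["version"]

def try_update(key, new_values, expected_version):
    current = table[key]
    if current["version"] != expected_version:
        return False            # someone else wrote first: report a conflict
    current.update(new_values)
    current["version"] += 1
    return True

_, ver1 = read_row("row1")      # user #1 starts editing
_, ver2 = read_row("row1")      # user #2 starts editing

print(try_update("row1", {"name": "Jones"}, ver2))   # True: user #2 wins
print(try_update("row1", {"name": "Smyth"}, ver1))   # False: conflict seen
```

The key design point matches the paragraph above: no lock is held while the user thinks and types, so the window of conflict stays closed except for the brief moment of the write itself, and a genuine collision is detected rather than silently overwritten.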
Consider this: When a user is keying data into a form, Access takes a snapshot of the data from the table to fill in the form. The lock is held for about the time it takes to read one disk block, which is milliseconds. Access doesn't need to lock the underlying table again until the user is ready to actually write the form's contents back into that table. Until then, all the work is in the FE and the BE is minimally involved.
You do the FE/BE split to stop file corruption. For a unified (non-split) shared database, there is no way I have ever seen to stop corruption.
There is no other factor that needs to be described. If you want a usable shared database that doesn't need recovery, doesn't incur frequent data loss, and doesn't experience heavy downtime, split it and choose the FE update methods wisely.
Doing a shared monolithic Access database doesn't work well if it works at all. Doing a shared split Access database using FE/BE split techniques DOES work and quite nicely.
It is actually a cut-and-dried decision. Split databases to minimize the chance of file corruption caused by destructive interference from simultaneously operating copies of the application that don't know about each other.
Everything else listed in the article provided by the Gent is true, but this is the REAL reason you split databases. All the other benefits are ancillary.