Let's say we wrote client software that allowed us to interact with the .mdf files directly, bypassing the daemon.
The instant we use this software, all the problems we associate with Jet rear their ugly heads, because they are an inherent fact of how we read and write to the file. By handing requests off to the daemon, we give the daemon exclusive control over the files, so there is never corruption from fragmentation or a sharing conflict; the daemon resolves all potential conflicts before anything is even written to the file.
Without a daemon, Jet basically runs on the client's side, accessing the file remotely under a cooperative policy. But even with cooperative locking, the file still gets fragmented during use, and when another client connects to the same file it fragments further; if two clients need to access the same page, corruption is very likely. This is why everyone talks about splitting the database. Microsoft gave us the option to store everything, data and application, in a single document, and that's mighty convenient when you're the only user and not interested in getting cozy with database theory. Yet this simplicity becomes a liability the second a second user wants to use the same file, and splitting must be learned to advance to that stage.
Moving to a different backend is basically a similar advancement. Instead of several users cooperatively reading and writing the same file, we hand the job of reading and writing to a daemon and thus free ourselves from the problems inherent in sharing a file. It's even more important to realize that the problems with reading and writing a shared file are not just application issues; they're also operating system and hardware problems, so they can't be easily solved by writing an application to handle the concurrent edits... oh, wait, we did! It's called a daemon, of course!
Next, Galaxiom is quite astute; the process of linking is in fact very simple. I daresay it's easier to link tables than it is to automate Excel! We get those nice little GUIs to build a new DSN and choose it, then pick from a list of tables to link, and we're done. We don't even lose the ability to edit queries in the query builder using linked tables! Thus George Wilkinson is also correct to point out that the transition is already seamless and we don't have to change everything at once; we can set up a quick-and-dirty solution by importing the data into the new backend, linking the tables, and keeping all the existing queries and logic, then migrate the rest piece by piece as performance becomes a consideration. It's already simple!
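And for anyone who'd rather script it than click through the GUI, the whole linking step boils down to a few lines of DAO. Here's a minimal sketch; the DSN name "MyBackend" and the table names are hypothetical, so substitute your own:

```vba
' Minimal sketch: link one backend table into the front-end via a DSN.
' "MyBackend" and "dbo.Orders" are placeholders; use your own names.
Sub LinkOdbcTable()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    Set tdf = db.CreateTableDef("Orders")   ' local name for the linked table
    tdf.Connect = "ODBC;DSN=MyBackend;"     ' the DSN built in the GUI
    tdf.SourceTableName = "dbo.Orders"      ' the table on the backend
    db.TableDefs.Append tdf                 ' appending the TableDef is the "link"
End Sub
```

Run it once and the linked table shows up ready to use in the query builder like any local table.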
With FileMaker, the architecture is different because each table is its own file. While FileMaker claims it can handle up to 7 terabytes, a single table (i.e., a single file) is limited to...
2 GB!
<Monty Python>It's the same!</Monty Python>
In short, we can achieve exactly the same thing by making one .mdb per table and linking them all into a common front-end, and we'd have much greater capacity, as long as no single table has to grow beyond 2 GB (just like FileMaker).
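To make that concrete, here's a minimal sketch of the one-table-per-.mdb arrangement in DAO; the folder path and table names are hypothetical:

```vba
' Minimal sketch: link one table from each single-table .mdb backend
' into the current front-end. Paths and names are hypothetical.
Sub LinkSplitBackends()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim names As Variant
    Dim i As Integer

    names = Array("Customers", "Orders", "Products")
    Set db = CurrentDb
    For i = LBound(names) To UBound(names)
        Set tdf = db.CreateTableDef(names(i))
        ' Each backend .mdb holds exactly one table of the same name.
        tdf.Connect = ";DATABASE=C:\Data\" & names(i) & ".mdb"
        tdf.SourceTableName = names(i)
        db.TableDefs.Append tdf
    Next i
End Sub
```

The front-end sees three ordinary linked tables, but each one lives in its own file with its own 2 GB ceiling, which is exactly the FileMaker arrangement.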
Well, to be fair, I'm quite sure FileMaker handles multiple files better than Access does (it has to; that's its architecture, and they'd want to optimize for it), so the process would be smoother in FileMaker. But to claim it is not bound by the 2 GB file limit is not totally accurate. It still is; the limit is just worked around by breaking each table out into its own file.
Now, think about it for a second. Why did Microsoft decide to give us an .mdb that can contain everything in a single file?
I think the answer would be simplicity.