razaqad, this question comes up in more systems than just Access. At my site, my main machine is running an older version of ORACLE. We still have to take special actions regarding when and how we back up the db files.
You see, the issue here is that the database isn't on the disk. ("What?" you say, "what is this bloody MDB file good for, then?")
The real database is the set of pointers and cached records in each user's workspace - which resides on each user's local machine. The database file on the shared file server disk is just a shared file. It contains NOTHING that hasn't been written from a workstation. It contains NOTHING that has not yet been committed. It is the running sum of all transactions only up to the point at which you backed up the file. BUT...
You have no idea what transactions were running inside that database. You have no idea whether the re-indexing operation is complete. You don't know if a long string of buffers was being written back while you did whatever you were doing to make the copy. You don't know if someone was rebuilding a table and was only halfway through at the time. You don't know if an import was complete. All of that is because you can only "see" two machines: your own, and the shared file server where the MDB resides. Everything else is on some other machine, and you have no reasonable way to see what those machines were doing. That's a Windows design feature, not an Access flaw, either. Good security says you CAN'T see what was going on outside your own system.
If you attempt to run a backup on an open database (note I did not exclude ORACLE here), you have no assurance of the quality of that dataset unless you know who isn't in it. Namely, that everybody isn't in it. Put another way, that nobody IS in it. That is the only way to be sure the tables are fully updated with every transaction that is supposed to be there.
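To make that concrete, here's a minimal sketch (in Python, with hypothetical paths) of the kind of sanity check a backup script can make before touching the file. Access keeps a .ldb (or .laccdb, for newer formats) lock file next to the database while anyone has it open, so its absence is a reasonable sign that the file is idle. It's no substitute for a real downtime window where users are actually locked out, but it will at least stop you from blindly copying a file that is visibly in use.

```python
import shutil
import sys
from datetime import datetime
from pathlib import Path

# Hypothetical locations -- substitute your own share and backup target.
MDB_PATH = Path(r"\\fileserver\share\payroll.mdb")
BACKUP_DIR = Path(r"\\backupserver\backups")

def backup_if_idle(mdb_path: Path, backup_dir: Path):
    """Copy the MDB only when no user appears to be connected.

    Access creates a .ldb/.laccdb lock file alongside the database while
    anyone has it open; its absence is a reasonable -- though not
    bulletproof -- indication that the file is idle.
    """
    for suffix in (".ldb", ".laccdb"):
        if mdb_path.with_suffix(suffix).exists():
            print(f"Lock file ({suffix}) present -- users are still in the database.")
            return None

    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    target = backup_dir / f"{mdb_path.stem}_{stamp}{mdb_path.suffix}"
    shutil.copy2(mdb_path, target)  # copy2 preserves file timestamps
    return target

if __name__ == "__main__":
    result = backup_if_idle(MDB_PATH, BACKUP_DIR)
    sys.exit(0 if result else 1)
```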
Even though I'm on a military computer that serves a world-wide audience with round-the-clock access, I have to set aside some time every day so I can do some backup and storage area network operations with my databases closed. I won't tell you how painful it was to get the military to agree to that request, even for a lousy little personnel pay system. But it is the only way to assure that when we attempt to restart our operation from a backup copy, we can do so.
In October of 2002, Hurricane Lili forced us to attempt a restart at our remote site. Because at that time we didn't realize how important it was to toss out users for a daily backup cycle, we were unable to come up: the data image was only halfway updated, and therefore internally inconsistent. We were lucky that Lili veered away from our primary site and didn't crash anything of ours.
By August of 2005, we had daily downtime for our special operations and were able to come up at our remote site after Katrina took out New Orleans. I cannot stress enough how important it is to take whatever steps you need to ASSURE (not just guess and hope) that you have usable copies of your data that can be re-activated from a backup/replication copy.
I happened to have a system for which the backup and storage area network data replication issues had been solved. Well over 40% of our projects had not yet addressed the problem at all, and another 20% had given it only lip service. They were down for anywhere from two weeks to two months, all because they ASSUMED things were OK.
So, razaqad, if this database is really important to your company, my advice to you is to strongly emphasize to your management team the need to ASSURE proper copies exist by setting aside a backup time. Don't ASSUME. ASSURE instead. You'll sleep better at night knowing you can ride through a disaster. (Not to mention that if the pungent brown detritus ever DOES hit the rotating air destratification device, you'll come out smelling like a rose and looking like a genius.)