The file system never misses the chance to report size correctly because it isn't required to look at anything inside the file itself. File size is recorded in the NTFS structures called extent pointers, retrieval pointers, file record segments, or some similar name. The file's record segment points to the physical locations of the file's blocks on the random-access storage device (HDD or SSD or "thumb drive"), and that same metadata records both the number of blocks allocated and the actual byte count, so the file system can report the total file size without ever reading the file's contents.
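As a quick illustration, VBA's FileLen function comes back essentially instantly even for a file near Access's 2 GB limit, precisely because the answer comes from that metadata rather than from reading the file. (The path below is hypothetical - substitute your own.)

```
' Hypothetical path - substitute your own backup file.
' FileLen returns the byte count recorded in the file system's
' metadata; the file's contents are never read, so this is
' instantaneous regardless of file size.
Dim lngSize As Long
lngSize = FileLen("C:\Backups\MyDatabase.accdb")
Debug.Print lngSize & " bytes"
```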
However, it is POSSIBLE that the backup system altered something in the header that causes the file's size to be fixed at allocation time. I am not familiar enough with the details of special allocations to say for certain. But it IS possible that your backup system created a file record segment of a particular size to hold the file in a single (necessarily contiguous) lump.
Here is a chance to kill two birds with one stone. I'm TOTALLY on board with the others who suggest splitting your database. So... do the split. Access will produce two files: a front end (FE) holding the queries, forms, reports, and code, and a back end (BE) holding the tables. The FE will be smaller than the BE. I could well imagine that your Compact & Repair (C&R) does nothing to the BE from one run to the next, so after the split, also run a C&R on the BE separately. If it still does not shrink, move the file to a working area, create a new blank DB file, migrate everything from the old BE into the new one, and put THAT in the shared area.
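The Database Splitter wizard (Database Tools > Move Data) will do the initial split for you. If you later rebuild the BE into a fresh file as described above, the FE's links have to be repointed. A rough DAO sketch of that relink step (the path parameter is hypothetical - pass your own BE location):

```
' Rough sketch: repoint every linked table in the FE at a new BE file.
' Assumes plain Access-to-Access links (Connect starts ";DATABASE=").
Public Sub RelinkTables(ByVal strNewBE As String)
    Dim tdf As DAO.TableDef
    For Each tdf In CurrentDb.TableDefs
        If Len(tdf.Connect) > 0 Then          ' linked tables only
            tdf.Connect = ";DATABASE=" & strNewBE
            tdf.RefreshLink                   ' re-establish the link
        End If
    Next tdf
End Sub
```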
Your comment about split DBs running slower would make no sense in THIS context because you are already sharing the monolithic version on a common drive, yes? If you do the split correctly and then look into a topic called "persistent connection" you can make your DB run FASTER and safer. I'll try to be kind, but let's just say I have NEVER seen a split DB that was shared run slower than a monolithic DB that was shared by the same group of users. NEVER.
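The "persistent connection" trick is simply to open the BE once at application startup and hold that connection for the life of the session, so the lock file on the share gets created once rather than being created and torn down over and over. A minimal sketch, assuming a hypothetical UNC path to your BE:

```
' Sketch of a "persistent connection": hold an open Database object
' on the BE for the whole session. This keeps the BE's lock file
' alive on the share, avoiding repeated create/delete overhead.
' The UNC path is hypothetical - substitute your own.
Private dbPersistent As DAO.Database

Public Sub OpenPersistentConnection()
    Set dbPersistent = DBEngine.Workspaces(0).OpenDatabase( _
        "\\Server\Share\Backend.accdb")
End Sub

Public Sub ClosePersistentConnection()
    If Not dbPersistent Is Nothing Then
        dbPersistent.Close
        Set dbPersistent = Nothing
    End If
End Sub
```

Call OpenPersistentConnection from your startup form's Open event and ClosePersistentConnection on the way out.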
Sharing the monolith means that EVERY ACTION incurs file locking on a shared system, which means a ton of lock arbitration over the network. Splitting the DB so that your users have individual private copies of the FE contents means that they incur NO lock arbitration for their private copy contents. Oh, sure, they take out locks - but (a) they are LOCAL locks as opposed to network-based locks - a speed difference of milliseconds per individual lock, and (b) there can BE no lock collisions on private files because nobody else can SEE those private files. Thus your lock collisions would be confined to the BE file. Just be sure that you don't use PESSIMISTIC locking on any form or query. (One would hope that you don't let your users directly see infrastructure...)
So why is this an advantage now? Once you do the split, you are halfway to the goal of eventually moving your system to an SQL back-end server, because the tables are already isolated and you won't have to repeat the split for the FE components. You would just relink the FE tables to point at the new server.