Size of the database does not change

isladogs

MVP / VIP
Local time
Today, 13:36
Joined
Jan 14, 2017
Messages
18,186
In your new database, run Debug > Compile in the VBE. The size should increase again, as the file will then include the compiled code.
 

Isaac

Lifelong Learner
Local time
Today, 06:36
Joined
Mar 14, 2017
Messages
8,738
I've read this whole thread carefully, including all the possible reasons suggested, but none of them seems like the 'winning' one to me.
It's very strange.
And my conclusion is to agree with those who suggested you buy a lottery ticket. Don't forget about us, though
 

moke123

AWF VIP
Local time
Today, 09:36
Joined
Jan 11, 2013
Messages
3,849
agree with those who suggested you buy a lottery ticket.
Especially since it's $1,390,000,000 between mega-million and powerball. I just got back from the store.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 13:36
Joined
Sep 12, 2006
Messages
15,613
The file system always reports size correctly because it isn't required to look at anything inside the file itself. File size is determined by data in NTFS structures called extent pointers, retrieval pointers, file record segments, or some similar name. The file system reads the file's record segments, which essentially point to the physical locations of the file's blocks on the random-access storage device (HDD, SSD, or "thumb drive"), counts the blocks, and totals that for the file size.
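The point above — that the reported size comes from file-system metadata, not from reading the file's contents — can be demonstrated directly. A minimal Python sketch (POSIX-only, since `st_blocks` isn't populated on Windows, though NTFS extent records embody the same idea; the file name is made up):

```python
import os
import tempfile

# Create a sparse file: logical size 10 MB, but almost no blocks allocated.
path = os.path.join(tempfile.mkdtemp(), "sparse.dat")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024 - 1)  # jump 10 MB in without writing any data
    f.write(b"\0")                # one real byte at the very end

st = os.stat(path)
logical = st.st_size              # the "size" Explorer/ls report, from metadata
allocated = st.st_blocks * 512    # blocks actually reserved on disk
print(f"logical={logical} allocated={allocated}")
```

The logical size comes straight from the file's metadata; the allocated size can be far smaller (sparse file) or, as discussed below, far larger (fixed allocation).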

However, it is POSSIBLE that the backup system altered something in the header that would cause the file's allocation to be fixed. I'm not familiar enough with the details of special allocations to say for sure, but it IS possible that your backup system created a file record segment sized to hold the file in a single (necessarily contiguous) lump.

Here is a chance to kill two birds with one stone. I'm TOTALLY on board with the others who suggest splitting your database. So... do the split. This will cause Access to make two files: a front end (FE) and a back end (BE). The FE will be smaller than the BE. I could well imagine that your C&R would do nothing to the BE from one run to the next. After the split, run a C&R on the BE separately. If it does not shrink, move the file to a working area, create a new blank DB file, migrate everything from the old BE into the new one, and put THAT in the shared area.

Your comment about split DBs running slower would make no sense in THIS context because you are already sharing the monolithic version on a common drive, yes? If you do the split correctly and then look into a topic called "persistent connection" you can make your DB run FASTER and safer. I'll try to be kind, but let's just say I have NEVER seen a split DB that was shared run slower than a monolithic DB that was shared by the same group of users. NEVER.
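A "persistent connection" just means holding one connection to the BE open for the life of the FE session, so Jet doesn't repeatedly create and tear down its lock file for every operation. A rough local model in Python (file names are illustrative; the real Jet overhead involves network round-trips, so the actual gap is far larger than this local demo shows):

```python
import os
import tempfile
import time

d = tempfile.mkdtemp()
lock = os.path.join(d, "backend.laccdb")  # stand-in for Jet's lock file

# Without a persistent handle: the lock file is created and deleted
# around every operation.
t0 = time.perf_counter()
for _ in range(1000):
    open(lock, "wb").close()
    os.remove(lock)
churn = time.perf_counter() - t0

# With a persistent handle: one open keeps the lock file alive, so
# each subsequent operation has no setup/teardown to do.
keeper = open(lock, "wb")
t0 = time.perf_counter()
for _ in range(1000):
    pass  # the lock file already exists; nothing to create or delete
steady = time.perf_counter() - t0
keeper.close()

print(f"churn={churn:.4f}s steady={steady:.4f}s")
```

The churned version pays the create/delete cost a thousand times; the persistent version pays it once.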

Sharing the monolith means that EVERY ACTION incurs file locking on a shared system, which means a ton of lock arbitration over the network. Splitting the DB so that your users have individual private copies of the FE contents means that they incur NO lock arbitration for their private copy contents. Oh, sure, they take out locks - but (a) they are LOCAL locks as opposed to network-based locks - a speed difference of milliseconds per individual lock, and (b) there can BE no lock collisions for private files because nobody else can SEE those private files. Thus your lock collisions would be held to the BE file. Just be sure that you don't use PESSIMISTIC locking on any form or query. (One would hope that you don't let your users directly see infrastructure...)
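The private-copy argument can be modelled directly: locks can only collide on a file that more than one user has open. A Python sketch using `flock` (Unix-only, and a simplified whole-file stand-in for Jet's byte-range locks in the `.laccdb` file; all file names are invented):

```python
import fcntl
import os
import tempfile

d = tempfile.mkdtemp()
shared_be = os.path.join(d, "backend.laccdb")    # everyone opens this
private_fe = os.path.join(d, "frontend.laccdb")  # one user's private copy

user_a = open(shared_be, "wb")
user_b = open(shared_be, "wb")  # a second user on the same shared file

# User A takes an exclusive lock on the shared BE first.
fcntl.flock(user_a, fcntl.LOCK_EX | fcntl.LOCK_NB)
try:
    fcntl.flock(user_b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    collided = False
except OSError:
    collided = True  # user B collides: someone else holds the lock

# A private FE copy is opened by exactly one user, so an exclusive
# lock on it can never collide with anyone.
only_user = open(private_fe, "wb")
fcntl.flock(only_user, fcntl.LOCK_EX | fcntl.LOCK_NB)

print("collision on shared BE:", collided)
for f in (user_a, user_b, only_user):
    f.close()
```

The shared file produces a collision; the private file cannot, because nobody else ever opens it.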

So why is this an advantage now? Once you do the split, you are halfway to the goal of eventually making your system work on an SQL back-end server because the tables are already isolated and you don't have to repeat the split for the FE components. You just have to update the locations of everything.

I like the idea of a "fixed" size. I actually have a database that's large and doesn't seem to change size. I've never thought too hard about it, but I'll import it into a new db and see what happens. Won't be for a while, though!
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 08:36
Joined
Feb 28, 2001
Messages
26,999
I'm basing this on a rather tenuous linkage. First, Dave Cutler wrote the bulk of OpenVMS and Windows NT. (He did VMS first.) A huge (maybe even astounding) part of WinNT and subsequent versions was based on VMS structures, including the file system; NTFS is like OpenVMS ODS-2 on steroids. Second, in the On-Disk Structure for VMS it is possible, though rare, to fix the size of a file, leaving a HUGE amount of slack space. If you then make a backup copy of a file with a fixed allocation and don't use special backup options, the backup matches the allocated size rather than the actual file size. Making an image backup copy would preserve ALL of the settings, INCLUDING that "fixed allocation" option.

Therefore, I am GUESSING, and freely admit it, but there is precedent for a fixed allocation that is larger than the actual file it holds. However, making a new database and exporting everything into it would break the chain of fixed allocation.
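That scenario — a file whose recorded size far exceeds its real content, and a plain copy that faithfully preserves the inflated size — is easy to reproduce. A Python sketch (the `.accdb` names are purely illustrative; this models the allocation behaviour, not Access itself):

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "data.accdb")
backup = os.path.join(d, "backup.accdb")

with open(orig, "wb") as f:
    f.write(b"real content")              # only a handful of real bytes
os.truncate(orig, 120 * 1024 * 1024)      # "fix" the size at 120 MB of slack

shutil.copyfile(orig, backup)             # a plain copy keeps the inflated size

size_orig = os.path.getsize(orig)
size_backup = os.path.getsize(backup)
print(size_orig, size_backup)
```

Exporting the real content into a fresh file, rather than copying the old one, is what breaks the chain: the new file's size reflects only the data actually written.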
 

CJ_London

Super Moderator
Staff member
Local time
Today, 13:36
Joined
Feb 19, 2013
Messages
16,553
It's always a possibility that the file is already corrupted and that this manifests as a failed C&R because internal pointers are mixed up. It's a similar scenario to when a file reaches 2 GB: compacting has no effect, and you have to import everything into a new db.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 08:36
Joined
Feb 28, 2001
Messages
26,999
The size in the OP's case wasn't 2 GB. More like 120+ MB. But "already corrupted" would make sense, except that I would expect a C&R to vehemently report a failure.
 

CJ_London

Super Moderator
Staff member
Local time
Today, 13:36
Joined
Feb 19, 2013
Messages
16,553
don't disagree, just throwing it in as a possibility
 

CJ_London

Super Moderator
Staff member
Local time
Today, 13:36
Joined
Feb 19, 2013
Messages
16,553
and some remote to office laptop desktop via Citrix,
If you really are using Citrix (I wasn't aware you could remote-connect to an office machine using Citrix), then place the BE on the Citrix server and a copy of the front end in each user's profile on that server. That will solve the performance issues with working wirelessly and working remotely.
 

nortonm

Registered User.
Local time
Today, 13:36
Joined
Feb 11, 2016
Messages
49
Whenever I get a broken/corrupt record, most of the text changes to what looks like Chinese characters, usually in the table that holds newly submitted records rather than in any feeder tables. You could open the main table and have a quick scroll around to see if any records look like that. If they do, try deleting them and redoing the table: create a new table and move your clean records into it. Then, as others have advised, split the DB as soon as possible; it's very thin ice to have multiple users in and out of a single unsplit database.
 
