Size of the database does not change

isladogs

MVP / VIP
Local time
Today, 22:14
Joined
Jan 14, 2017
Messages
18,219
Who knows. As already stated, it's unimportant compared to corruption. You may have lost data due to corruption without realising it.

But if you're really bothered, try importing a large table from another DB and compare the size before and after. Then delete it again, compact, and compare again.
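If it helps, here is a minimal sketch of that before/after check driven from outside Access, assuming pywin32 is installed; the file paths are placeholders, not your real ones:

```python
# Sketch of the size comparison suggested above (pywin32 assumed).
# The .accdb paths are hypothetical placeholders - substitute your own.
import os
import win32com.client

DB = r"C:\data\problem_db.accdb"              # hypothetical source database
COMPACTED = r"C:\data\problem_db_compact.accdb"

def size_mb(path):
    """Logical file size in MB as reported by the file system."""
    return os.path.getsize(path) / (1024 * 1024)

print("Before:", round(size_mb(DB), 1), "MB")

# ... import the large table in Access, note the size, then delete it again ...

# Compact & Repair into a new file via the Access object model.
access = win32com.client.Dispatch("Access.Application")
access.CompactRepair(DB, COMPACTED, False)    # (source, destination, log errors?)
access.Quit()

print("After C&R:", round(size_mb(COMPACTED), 1), "MB")
```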
 

theDBguy

I’m here to help
Staff member
Local time
Today, 14:14
Joined
Oct 29, 2018
Messages
21,473
This is very true ... but (back to our sheep) why is the size the same?
How much of the data is different? File size is measured in chunks. Maybe there's enough room in the current chunk to accommodate all the records in your tables. The file size would change as soon as you fill up the current chunk and a new chunk has to be added.
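To illustrate the idea with made-up numbers (the 32 MB "chunk" below is an invented figure for the example, not Access's actual growth increment):

```python
import math

def reported_size_mb(data_mb, chunk_mb=32):
    """File size rounded up to the next whole chunk (illustrative only)."""
    return math.ceil(data_mb / chunk_mb) * chunk_mb

print(reported_size_mb(118))  # 128 - some data
print(reported_size_mb(125))  # 128 - more data, same reported size
print(reported_size_mb(130))  # 160 - current chunk filled, a new one is added
```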
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
Who knows. As already stated, it's unimportant compared to corruption. You may have lost data due to corruption without realising it.

But if you're really bothered, try importing a large table from another DB and compare the size before and after. Then delete it again, compact, and compare again.
Good idea! Thanks ...
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
How much of the data is different? File size is measured in chunks. Maybe there's enough room in the current chunk to accommodate all the records in your tables. The file size would change as soon as you fill up the current chunk and a new chunk has to be added.
Chunk? Interesting ...where can I read about chunks?
 

moke123

AWF VIP
Local time
Today, 17:14
Joined
Jan 11, 2013
Messages
3,920
Just lately, the corporate laptops (~1600) were upgraded from Win 7 to Win 10, and from Office 2010 to Office 365.
I haven't seen anyone comment on this, but is everyone connecting with wireless laptops?
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
I haven't seen anyone comment on this, but is everyone connecting with wireless laptops?
Yes ... and some remote in to their office laptop/desktop via Citrix, while others take laptops home and log into the corporate network ...
 

isladogs

MVP / VIP
Local time
Today, 22:14
Joined
Jan 14, 2017
Messages
18,219
I added a "big" table from the external database and ran C&R - the size increased by 20 MB (140 instead of 120). Then I deleted this table and ran C&R again - but the size remains the same, 140 MB ...
Perhaps someone else was using it when you tried to compact it ... if so, there's more potential for corruption.

And now you tell us that your simultaneous users are on WiFi with various methods of remoting in.
How your database is still functional is a miracle.

Better be careful of those killer chunks as they may byte you badly
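As an aside, one cheap sanity check before compacting a shared file is to look for the lock file Access leaves beside the database. A sketch, with a hypothetical network path:

```python
# Check for the Access lock file before running Compact & Repair.
# The UNC path below is a placeholder for your shared drive.
from pathlib import Path

db = Path(r"\\server\share\problem_db.accdb")   # hypothetical shared database

for suffix in (".laccdb", ".ldb"):              # .laccdb for .accdb, .ldb for .mdb
    lock = db.with_suffix(suffix)
    if lock.exists():
        print(f"{lock.name} is present - someone may still have the database open.")
        break
else:
    print("No lock file found - it should be safe to compact.")
```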
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
Perhaps someone else was using it when you tried to compact it ... if so, there's more potential for corruption.

And now you tell us that your simultaneous users are on WiFi with various methods of remoting in.
How your database is still functional is a miracle.

Better be careful of those killer chunks as they may byte you badly
There was no ldb file ...
 

theDBguy

I’m here to help
Staff member
Local time
Today, 14:14
Joined
Oct 29, 2018
Messages
21,473
Chunk? Interesting ...where can I read about chunks?
I hope the following links help explain what I mean.


 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
There was no ldb file ...

I hope the following links help explain what I mean.


Thank you ... I would expect that other Access databases (even after C&R) would also behave like the one in question, i.e. show the same size again and again ... but this is not the case; they always shrink after compacting.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 16:14
Joined
Feb 28, 2001
Messages
27,183
The file system never fails to report the size correctly because it isn't required to look at anything inside the file itself. File size is determined by data in the NTFS structures called extent pointers or retrieval pointers or file record segments or some similar name. The file system looks at the file's record segments, which essentially point to the physical location of the file's blocks on the random-access storage device (HDD or SSD or "thumb drive"), counts the blocks, and totals them for the total file size.

However, it is POSSIBLE that the backup system altered something in the header that would cause the file's allocation to be fixed in size. I am not familiar enough with the details of special allocations, but it IS possible that your backup system created a particular-sized file record segment to hold the file in a single (necessarily contiguous) lump.

Here is a chance to kill two birds with one stone. I'm TOTALLY on-board with the others who suggest splitting your database. So... do the split. This will cause Access to make two different files, which will be a front-end and back-end. The FE will be smaller than the BE. I could well imagine that your C&R would do nothing to the BE from one run to the next. After the split, also run a C&R on the BE separately. If it does not shrink, then move the file to a working area and create a new blank DB file. Migrate everything in the BE file to the new BE and put THAT in the shared area.

Your comment about split DBs running slower would make no sense in THIS context because you are already sharing the monolithic version on a common drive, yes? If you do the split correctly and then look into a topic called "persistent connection" you can make your DB run FASTER and safer. I'll try to be kind, but let's just say I have NEVER seen a split DB that was shared run slower than a monolithic DB that was shared by the same group of users. NEVER.

Sharing the monolith means that EVERY ACTION incurs file locking on a shared system, which means a ton of lock arbitration over the network. Splitting the DB so that your users have individual private copies of the FE contents means that they incur NO lock arbitration for their private copy contents. Oh, sure, they take out locks - but (a) they are LOCAL locks as opposed to network-based locks - a speed difference of milliseconds per individual lock, and (b) there can BE no lock collisions for private files because nobody else can SEE those private files. Thus your lock collisions would be held to the BE file. Just be sure that you don't use PESSIMISTIC locking on any form or query. (One would hope that you don't let your users directly see infrastructure...)

So why is this an advantage now? Once you do the split, you are halfway to the goal of eventually making your system work on an SQL back-end server because the tables are already isolated and you don't have to repeat the split for the FE components. You just have to update the locations of everything.
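For what it's worth, a sketch of the "update the locations of everything" step after the split, done with DAO through pywin32. The paths and the "DAO.DBEngine.120" ProgID are assumptions for the example - adjust them to your installation:

```python
# Relink a split front end to a new back-end location via DAO (pywin32).
# Paths and the DBEngine ProgID are assumptions - adjust to your setup.
import win32com.client

FE = r"C:\Users\me\app_FE.accdb"               # hypothetical local front end
NEW_BE = r"\\server\share\app_BE.accdb"        # hypothetical shared back end

engine = win32com.client.Dispatch("DAO.DBEngine.120")
fe_db = engine.OpenDatabase(FE)

for td in fe_db.TableDefs:
    if td.Connect:                             # only linked tables have a connect string
        td.Connect = ";DATABASE=" + NEW_BE
        td.RefreshLink()                       # point the link at the new back end

fe_db.Close()
```

The "persistent connection" mentioned above is usually handled inside the front end itself: open a DAO Database object (or a small recordset) against the back end when the application starts and keep it open, so the lock file on the share isn't repeatedly created and torn down.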
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
The file system never fails to report the size correctly because it isn't required to look at anything inside the file itself. File size is determined by data in the NTFS structures called extent pointers or retrieval pointers or file record segments or some similar name. The file system looks at the file's record segments, which essentially point to the physical location of the file's blocks on the random-access storage device (HDD or SSD or "thumb drive"), counts the blocks, and totals them for the total file size.

However, it is POSSIBLE that the backup system altered something in the header that would cause the file's allocation to be fixed in size. I am not familiar enough with the details of special allocations, but it IS possible that your backup system created a particular-sized file record segment to hold the file in a single (necessarily contiguous) lump.

Here is a chance to kill two birds with one stone. I'm TOTALLY on-board with the others who suggest splitting your database. So... do the split. This will cause Access to make two different files, which will be a front-end and back-end. The FE will be smaller than the BE. I could well imagine that your C&R would do nothing to the BE from one run to the next. After the split, also run a C&R on the BE separately. If it does not shrink, then move the file to a working area and create a new blank DB file. Migrate everything in the BE file to the new BE and put THAT in the shared area.

Your comment about split DBs running slower would make no sense in THIS context because you are already sharing the monolithic version on a common drive, yes? If you do the split correctly and then look into a topic called "persistent connection" you can make your DB run FASTER and safer. I'll try to be kind, but let's just say I have NEVER seen a split DB that was shared run slower than a monolithic DB that was shared by the same group of users. NEVER.

Sharing the monolith means that EVERY ACTION incurs file locking on a shared system, which means a ton of lock arbitration over the network. Splitting the DB so that your users have individual private copies of the FE contents means that they incur NO lock arbitration for their private copy contents. Oh, sure, they take out locks - but (a) they are LOCAL locks as opposed to network-based locks - a speed difference of milliseconds per individual lock, and (b) there can BE no lock collisions for private files because nobody else can SEE those private files. Thus your lock collisions would be held to the BE file. Just be sure that you don't use PESSIMISTIC locking on any form or query. (One would hope that you don't let your users directly see infrastructure...)

So why is this an advantage now? Once you do the split, you are halfway to the goal of eventually making your system work on an SQL back-end server because the tables are already isolated and you don't have to repeat the split for the FE components. You just have to update the locations of everything.
Interesting ... good to know ... thank you ... I'll try to split during the weekend ....
 

theDBguy

I’m here to help
Staff member
Local time
Today, 14:14
Joined
Oct 29, 2018
Messages
21,473
Thank you ... I would expect that other Access databases (even after C&R) would also behave like the one in question, i.e. show the same size again and again ... but this is not the case; they always shrink after compacting.
That depends: are these "other databases" you are speaking of also non-split and remotely accessed via WiFi? I think there are different levels of corruption, and I'm guessing your subject database may be corrupt at some level.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 16:14
Joined
Feb 28, 2001
Messages
27,183
Chunk? Interesting ...where can I read about chunks?

Your chunks are based on file record segments, which are pointers to the location of the disk allocation units associated with the file. To read about the chunks in their proper context, look up those two terms: file record segments and disk allocation units.

The "chunks" are elements of NTFS (the New Technology File System) introduced with Windows NT in 1993. Disks from back in the days of MS-DOS (and before that) were allocated in units called "clusters" by some companies or "allocation units" by others. A disk has a "natural" layout of blocks, but it was found that cluster size 1 (1 block = 1 allocation unit) was inefficient for larger disks due to the extensive overhead of remembering where each block belonging to a file was located. For comparison, look at the FAT16 file system.

In NTFS, a disk has a fixed-size allocation unit of 4, 8, 16, or 32 disk blocks. A disk block is still exactly 512 bytes, but the larger AU size allows larger files to require less overhead to locate the relevant AUs for a given file. The overhead is called a file record segment, which points to a given AU starting point on the disk and also contains a "number of AUs" for that segment. By physical / geometric necessity, all AUs in a single segment MUST be logically contiguous. A single file that has a single file record segment is automatically contiguous, whether it was originally built that way or got that way as the result of running a disk defragger. (NOTE: Never run a defragger on a solid-state disk.)

One chunk ON DISK is called a "segment" which explains why a file-record segment pointer gets its name.
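As a rough illustration of allocation units, here is a sketch that reads a volume's cluster size and shows how a file's logical size rounds up to whole clusters. Windows-only; the drive and file path are placeholders, and note that this per-cluster rounding is far too small to explain a 20 MB difference:

```python
# Read the volume's allocation-unit (cluster) size and round a file's
# logical size up to whole clusters. Windows-only; paths are placeholders.
import ctypes
import math
import os

DRIVE = "C:\\"
FILE = r"C:\data\problem_db.accdb"             # hypothetical file

sectors_per_cluster = ctypes.c_ulong()
bytes_per_sector = ctypes.c_ulong()
free_clusters = ctypes.c_ulong()
total_clusters = ctypes.c_ulong()

ctypes.windll.kernel32.GetDiskFreeSpaceW(
    ctypes.c_wchar_p(DRIVE),
    ctypes.byref(sectors_per_cluster),
    ctypes.byref(bytes_per_sector),
    ctypes.byref(free_clusters),
    ctypes.byref(total_clusters),
)

cluster = sectors_per_cluster.value * bytes_per_sector.value
logical = os.path.getsize(FILE)
on_disk = math.ceil(logical / cluster) * cluster

print(f"Cluster size: {cluster} bytes")
print(f"Logical size: {logical} bytes; size on disk (rounded to clusters): {on_disk} bytes")
```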
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
That depends: are these "other databases" you are speaking of also non-split and remotely accessed via WiFi? I think there are different levels of corruption, and I'm guessing your subject database may be corrupt at some level.
That's true
 

Minty

AWF VIP
Local time
Today, 22:14
Joined
Jul 26, 2013
Messages
10,371
Just as another option, and it might help identify some problems, create a new blank database.

Import all the objects from the problem child into this new blank database.

See how big it is?
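A sketch of that import step driven from outside Access with pywin32; the paths, the DAO ProgID, and the numeric Access constants (acImport = 0, acTable = 0) are assumptions, and this copies tables only - queries, forms, reports and modules would need their own TransferDatabase calls:

```python
# Import the tables from the problem database into a fresh blank one
# via the Access object model (pywin32 assumed; paths are placeholders).
import win32com.client

OLD = r"C:\data\problem_db.accdb"        # hypothetical problem database
NEW = r"C:\data\problem_db_clean.accdb"  # hypothetical new blank database

AC_IMPORT, AC_TABLE = 0, 0               # assumed Access constant values

access = win32com.client.Dispatch("Access.Application")
access.NewCurrentDatabase(NEW)           # create the new blank database

engine = win32com.client.Dispatch("DAO.DBEngine.120")
src = engine.OpenDatabase(OLD)
for td in src.TableDefs:
    if not td.Name.startswith("MSys"):   # skip the hidden system tables
        access.DoCmd.TransferDatabase(AC_IMPORT, "Microsoft Access", OLD,
                                      AC_TABLE, td.Name, td.Name)
src.Close()
access.Quit()
```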
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
Just as another option, and it might help identify some problems, create a new blank database.

Import all the objects from the problem child into this new blank database.

See how big it is?
Good idea ... will try
 

valeryk2000

Registered User.
Local time
Today, 17:14
Joined
Apr 7, 2009
Messages
157
Just as another option, and it might help identify some problems, create a new blank database.

Import all the objects from the problem child into this new blank database.

See how big it is?
86 MB instead of 120 ... will check how it works ...
 
