C & R

pdanes

Is there any way to detect when a database was compacted on its previous close?
 
The method used by C&R creates a new database (and copies elements from old to new), so the creation date on the newest version might be a hint. The update date changes every time you open it, but I think the creation date is not affected by a simple open/close.

Please be aware that there are issues if that is an automated C&R. They have been known to fail and leave the DB in a really bad state.
 
No bloat, no automated C&R. I have one temp table in which I reset the autonumber field. I never ran into it during my testing, and the user never encountered it in the 16+ years he's been using the app, until yesterday. He was pounding furiously on one specific function that used that code, and suddenly got an error about too many fields defined. I did a little reading, discovered what it was and told him to do a C & R. That fixed it - turns out Access does something goofy internally like marking the old autonumber field as inactive and creating a new one. Naturally, after a sufficient number of cycles through this code, the app hits the limit, just like putting too many controls onto a form. C & R fixes it. (Exactly this kind of rubbish is what gives me a rash about Microsoft - instead of coding it properly to begin with, or fixing it when the problem is uncovered, they leave in such garbage and rely on a patchwork solution like C & R.)

I was thinking about putting a counter in this code and doing a C & R on close when the counter reaches a certain number, but I also wanted to be able to detect whether the user had done a C & R manually, so I could reset the counter then as well.

However, that's probably not a good approach overall. I think I will simply recode that module to do what I was doing with the autonumber field some other way.

(BTW, how do you "always replace the FE each time it is opened"? How would you do that, and why?)
 
Hm. The range of values for the Long Integer, which is the data type for AutoNumbers, is -2,147,483,648 to 2,147,483,647.

So even if your user created only positive values, that is 2 billion, 147 million, 483 thousand, 647 records possible. Are you sure the user was able to create that many records in one day? That would be an extremely heavy load indeed, it would seem.

Unless the user could show you a table with an AutoNumber that large, I'd look elsewhere for the actual problem. There is one, no doubt, but this may be the wrong path.
 
(BTW, how do you "always replace the FE each time it is opened"? How would you do that, and why?)

The overview is that you build a very short batch job to copy the "master" FE to overwrite the prior copy, then you launch what you just copied. You do it as a way to automatically update the FE so that if you have "fixed" something, everybody gets the fixed version on the next launch. As a side effect of this approach, even if you DIDN'T make an update, the users get a fresh copy of the FE that has not yet had a chance to get bloated by use.
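The copy-then-launch batch job described above can be sketched roughly as follows. This is a minimal POSIX-shell stand-in (the real thing would be a Windows .bat using copy and start); all paths and file names here are hypothetical demo values:

```shell
#!/bin/sh
set -e
# Demo setup: stand-ins for the server share and the user's local folder.
mkdir -p /tmp/demo_server /tmp/demo_local
echo "master build" > /tmp/demo_server/MyApp_master.accdb

MASTER=/tmp/demo_server/MyApp_master.accdb   # pristine master FE
LOCAL=/tmp/demo_local/MyApp.accdb            # user's working copy

# Step 1: overwrite the prior (possibly bloated) local copy with the master.
cp -f "$MASTER" "$LOCAL"

# Step 2: launch what was just copied. On Windows this would be something like:
#   start "" "%LOCAL%"
echo "launching $LOCAL"
```

Every launch thus starts from a fresh, compact copy, and any fix placed in the master reaches every user on their next start.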

Since the FE is where "the action" occurs, it is also where temporary lists and other transient data structures are built during queries, particularly if you have layered queries. Those temporary structures contribute to bloat.


This might help. The article is #7 of the indicated thread. It shows a simple four-line batch job and discusses the subject in greater depth.
 
Hm. The range of values for the Long Integer, which is the data type for AutoNumbers, is -2,147,483,648 to 2,147,483,647.

So even if your user created only positive values, that is 2 billion, 147 million, 483 thousand, 647 records possible. Are you sure the user was able to create that many records in one day? That would be an extremely heavy load indeed, it would seem.

Unless the user could show you a table with an AutoNumber that large, I'd look elsewhere for the actual problem. There is one, no doubt, but this may be the wrong path.
No, you misread my post. I did not overrun the maximum allowable number IN an autonumber field - I reset the autonumber field to start at 1 again.

The clods at MS somehow coded this action to not actually reset the numbering, but instead create a NEW autonumber field in the same table, while leaving the old one hidden somewhere in the guts of the table. Tables can only have 255 fields, apparently including such debris.

I work with this user pretty intensively, and as he is constantly coming up with new ideas, or changing his workflow and thereby his requirements, he gets a new version quite often, and of course, a new version is always freshly compacted by me before shipping, so this has never come up before. But in the last few days, he came up with a project that suddenly made massive use of the code that does this reset, and for the first time in the life of the project, managed to overrun the field count limit.

I had no idea Access worked this way - I assumed restart a numbering sequence meant exactly that and nothing else. (I know, assume...). I'd love to apply a steel-toed size 14 to the rear end of whoever dreamed up this bit of stupidity.
 
I'm not justifying the sloppy code but I'm sure that the developer convinced himself it would never be a problem because autonumbers have no meaning and so resetting them over and over again would never happen.
"No meaning" is only one way to use an autonumber field. I was using it as a sequence number - seemed a handy, no-code way to get sequential numbers. And it does do that, very nicely. There are no other actions that mess with this temp table, so I have no problems of missing numbers or any of the other issues that bedevil incorrect use of this feature. Trivial to code around, now that I know what the problem is, but if they allow the start number to be reset AT ALL, it seems mind-numbingly incompetent to do it in a way that crashes after a certain number of repeats.
 
The overview is that you build a very short batch job to copy the "master" FE to overwrite the prior copy, then you launch what you just copied. You do it as a way to automatically update the FE so that if you have "fixed" something, everybody gets the fixed version on the next launch. As a side effect of this approach, even if you DIDN'T make an update, the users get a fresh copy of the FE that has not yet had a chance to get bloated by use.

Ah, yes. Thank you. I have looked at that mechanism, and it doesn't quite suit my purposes.

I have two main 'types' of customers. One is a multi-user environment, talking to a SQL Server backend. There of course I have separate front ends, but I use a version number. When an FE opens, it compares its own version number to the version on the server. When there is a newer one on the server, the user is asked if he wants to download the new version (default Yes). That then starts a download / replace / restart process. But the FE alone is around 100 MB, so I don't want to irritate the users by unnecessarily copying it every time. To update to a newer version, I just put the new version with the new number on the server, and each user can get it next time he starts.
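The version-compare step just described can be sketched as a hedged shell illustration (the actual check runs in VBA when the FE opens; the file names and version format below are invented for the demo):

```shell
#!/bin/sh
set -e
# Demo setup: a version number published on the server.
mkdir -p /tmp/demo_srv
echo "1.07" > /tmp/demo_srv/version.txt

LOCAL_VER="1.06"                             # version built into this FE
SERVER_VER=$(cat /tmp/demo_srv/version.txt)  # version published on the server

# When the server version differs, offer the update (default Yes in the app),
# which then kicks off the download / replace / restart process.
if [ "$SERVER_VER" != "$LOCAL_VER" ]; then
    echo "update to $SERVER_VER" > /tmp/demo_srv/decision.txt
fi
```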

The other type is a single user, and there I deliver the app as a single-file setup. Splitting it would introduce pointless complexity, especially since the users often work on multiple computers, including taking a notebook (with database) into the field, to a vacation cottage, to a conference and so on. A single-file setup means they can simply copy the DB onto a flash drive and take it wherever they want - to work, to a home machine, to a portable notebook, even to other people's computers, if they want to consult with someone. As long as the target computer has a compatible version of Access, they're fine. When I deliver a new version to such a user, it is an empty shell, with the suffix "New" appended to the name. On start-up, the DB examines itself. When it sees all tables empty and its own name with the "New" suffix, it 'knows' that it is the new version, so it asks the user where the prior version (with all the current data) is. It then reads in the data, often doing some modifications or conversions in the process, if the new version requires it, then spawns and executes a script that renames the old version with the suffix "_Archive" and a date-time stamp, renames the "New" version to the old version name and restarts the new (now full) version with the stock name. The process takes less than a minute. If it crashes (happens occasionally), the user simply lets me know what happened and continues on with the old version, as if nothing had happened. When the process completes successfully, he again continues, with the new version named exactly as the old one was.
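The rename dance at the end of that import can be sketched as follows, as a POSIX-shell stand-in for the spawned script (the real one is generated on Windows; all names are demo placeholders):

```shell
#!/bin/sh
set -e
# Demo setup: the user's current DB and the freshly filled "New" version.
mkdir -p /tmp/demo_app && cd /tmp/demo_app
echo "old data"  > MyDb.accdb
echo "new build" > MyDbNew.accdb

STAMP=$(date +%Y-%m-%d_%H-%M-%S)

# 1. Park the old version under an archive name with a date-time stamp.
mv MyDb.accdb "MyDb_Archive_${STAMP}.accdb"

# 2. Promote the "New" version to the stock name, then restart it.
mv MyDbNew.accdb MyDb.accdb
echo "restarting MyDb.accdb"
```

If anything goes wrong before step 2, the archived copy is still intact, so the user can simply continue with the old version.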
 
The clods at MS somehow coded this action to not actually reset the numbering, but instead create a NEW autonumber field in the same table, while leaving the old one hidden somewhere in the guts of the table.
Why should compacting the backend create a new data field?
I cannot reproduce the problem, neither with DBEngine.CompactDatabase nor with ALTER TABLE ... ALTER COLUMN ... COUNTER(1,1).
Does this only occur during automatic compact when closing?
 
Why should compacting the backend create a new data field?
I cannot reproduce the problem, neither with DBEngine.CompactDatabase nor with ALTER TABLE ... ALTER COLUMN ... COUNTER(1,1).
Does this only occur during automatic compact when closing?
No - resetting the starting number of the autonumber field is what (apparently) creates a new column, while leaving behind some hidden dregs of the former one. C & R clears out such debris. Reset the start enough times (~255) without C & R, and you get the "too many fields" error. Try putting your ALTER ... COUNTER code in a loop, and see how many repetitions you get before it crashes.
 
Try putting your ALTER ... COUNTER code in a loop, and see how many repetitions you get before it crashes.
Code:
Const ResetCounterDDl As String = "ALTER TABLE table1 ALTER COLUMN id COUNTER(1,1)"

Dim i As Long
For i = 1 To 10000
    'CurrentProject.Connection.Execute ResetCounterDDl
    CurrentDb.Execute ResetCounterDDl, dbFailOnError
Next
=> no crash.

Note: Compact also resets the counter to the smallest possible value.
 
Replacing the FE each time you open it has nothing to do with version checking. Version checking ensures that the user is running the application correctly and that the FE and BE are in sync.

Updating a single-user monolithic app is certainly possible but I would still split the app and have the user copy two files instead of one. Automatically connecting to the BE is easy enough if they are in the same folder. You could give him a copy script that zips the two files into a single file. In this case, in addition to the version match, I might use a table in each database with a counter that updates each time the app is opened to ensure that they are copied in sync. Then when the counter is zero in the FE, the BE is resync'd with the new FE.
Yes, when you replace it always, you do not need to check the version. I do NOT replace it always, hence the version check. I only replace when needed, in the multi-user systems.

For the single-file configuration, I know there are other ways to do this, and I understand that a split system offers some advantages to even a single user. It also has disadvantages, the primary one of which is additional complexity. I know, I -could- provide a script to do the copying, I -could- tell the user to copy a folder instead of a file, I -could- put version numbers into both FE and BE to ensure that the user didn't copy only one of the files instead of both. I -could- do any number of things to compensate for the disadvantages of a split system, but the point is, I don't need to, and I don't particularly want to. This works, the users are used to it, and everyone is perfectly satisfied. The app makes regular backup copies of itself into a subfolder, in case something gets corrupted. The user deletes those a few times per year, when they start getting too numerous.

Splitting a database sometimes makes sense, and other times not. It is not universally a good move - a single-file system is often a better configuration, primarily for the simplicity.
 
Code:
Const ResetCounterDDl As String = "ALTER TABLE table1 ALTER COLUMN id COUNTER(1,1)"

Dim i As Long
For i = 1 To 10000
    'CurrentProject.Connection.Execute ResetCounterDDl
    CurrentDb.Execute ResetCounterDDl, dbFailOnError
Next
=> no crash.

Note: Compact also resets the counter to the smallest possible value.
Well, maybe a reset doesn't create a new field if the autonumber has never been incremented? I don't know - I just heard about the problem yesterday morning from the user, and haven't investigated the details myself yet, but that is the first thing that occurs to me - no reset if the counter has not yet actually moved off the starting point. Maybe, in your loop, try adding a record to the table to trigger the autonumber incrementer and then reset?
 
Looking into this some more, it seems that it's not JUST the autonumber. Quoting directly from The Company:

learn.microsoft.com/en-us/office/troubleshoot/access/too-many-fields-defined-error

"Access also creates a new field (increasing the internal column count by 1) for every field whose properties you modify."

Words fail me...
 
Code:
Dim i As Long
For i = 1 To 10000
    With CurrentDb
        .Execute "insert into table1 (T) select top 100 T from table2", dbFailOnError
        .Execute "delete from table1", dbFailOnError
        .Execute ResetCounterDDl, dbFailOnError
    End With
Next
=> no crash.
Tested with Access 365 version 2405
Can you reproduce the problem?

learn.microsoft.com/en-us/office/troubleshoot/access/too-many-fields-defined-error
... TableDef is used and not DDL.
 
@pdanes - My first question to you would be to check the table when this is done to see if it renamed the autonumber field. Normally, you don't care about the name of the field THAT much because, as noted, autonumbers are in theory not supposed to have any meaning other than "unique record number." If there is no table that depends on the table in question (i.e. no child table exists), then there are also no links involving this field and you would have no collateral issue in a "hidden" rename.

Your question about resetting the seed number on an empty table vs. a non-empty table is a valid question. @Josef P. - if you still have the code where you can try it, insert a record before you reset the sequence number to 1. See if THAT exhibits the odd behavior.

@pdanes - my second question for you is this: When this reset operation occurs, are error messages and warnings disabled? Because I can easily believe that Access takes some kind of evasive action, but I can't believe it does so silently. If Access has ANY failings in that regard, it is frequency of error reporting and/or warnings (normally, too often).

EDITED by The_Doc_Man: @Josef P. - you posted the 2nd experiment report just before I closed my comments. So thanks for doing the test!
 
Yes, you do. If the user makes a mistake and opens Access and clicks on the last used database, the new FE will not be downloaded and the one in the local directory will be used. This wouldn't be terrible unless this was the day that the versions changed. Better safe than sorry.

The only problem I see with your monolithic solution is that if the user downloads a new version and copies it to work offline without opening it first so it can run the copy code, he ends up off site with no data. Otherwise, as long as the data transfer is automated, it should be fine.
I suppose there is always some way for a creative enough user to mess things up. So, you have an FE that checks - what? You put a new FE on the server and at the same time update a version number somewhere in the BE, and the FE checks to see if they match? Also a good method.

My single-file solution has one key feature, and that is the name. For example, I have one in the field for a lichenologist, and hers is named Lichen.accdb - wildly imaginative, I know. When I send her a new, empty version, it is named LichenNew.accdb, and that is what she puts into the same folder and runs, as her first step. The code sees its own name and empty tables, so it asks her for the name and location of her current data, with the default being Lichen.accdb and the current folder. All she needs to do is hit Enter. The current data is read into LichenNew.accdb, doing any necessary manipulations along the way. When finished, it spawns a VBScript and kills itself. The script watches for the LichenNew.laccdb file to disappear, and for a Database.accdb to appear and disappear again (the new version does a C & R on close after reading in the current data). When all that settles, it renames Lichen.accdb to (e.g.) Lichen_Archive_2024-05-29_10-49-37.accdb, renames LichenNew.accdb to Lichen.accdb, with pauses between each action to allow the Windows file system to catch its breath, then restarts the newly renamed Lichen.accdb, and finally, deletes itself. So, she always knows that the app she runs, which contains her data, is named Lichen.accdb, and LichenNew.accdb exists only for the duration of the import, then vanishes on its own.
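The "wait until Access has really closed" part of that script hinges on polling for the .laccdb lock file to vanish. A rough shell sketch of just that step (the real script is VBScript on Windows; the backgrounded one-second delay below merely simulates Access shutting down and releasing the lock):

```shell
#!/bin/sh
# Hypothetical lock-file path for the demo.
LOCK=/tmp/LichenNew.laccdb
touch "$LOCK"

# Simulate Access releasing its lock a moment later.
( sleep 1; rm -f "$LOCK" ) &

# Poll until the lock file disappears, i.e. Access has fully closed;
# only then is it safe to start renaming database files.
while [ -e "$LOCK" ]; do
    sleep 1
done
echo "lock released; safe to rename"
```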

Yes, she could copy the new, empty database somewhere and be stuck, but there is a limit to just how much hand-holding I can do. These users are not computer experts, but they are not idiots - they are highly respected natural scientists, often world-class authorities in their fields. When I give them clear and concise instructions, I can enjoy a reasonable expectation that they will be able to follow them. And again, the single-file configuration vastly simplifies everything - they know that THIS file is where their database is, period. Even if I wanted to go to a split system, beyond any doubt, the first response from every one of them would be, "Why all this? I liked it better when there was only one thing. Can't you put it back the way it was?"
 
Code:
Dim i As Long
For i = 1 To 10000
    With CurrentDb
        .Execute "insert into table1 (T) select top 100 T from table2", dbFailOnError
        .Execute "delete from table1", dbFailOnError
        .Execute ResetCounterDDl, dbFailOnError
    End With
Next
=> no crash.
Tested with Access 365 version 2405
Can you reproduce the problem?


... TableDef is used and not DDL.
Okay, you've got me - I don't know. I will be digging into this and changing the code to not use autonumber, but I will try to reproduce the problem myself before I dismantle the existing setup - I'm really curious now. Here is the entire routine that bombs:
Code:
Public Sub ResetTmpTable(ByVal x$)
With CurrentProject.Connection
    .Execute "DELETE * From tmp" & x & "AutoID"
    .Execute "ALTER TABLE tmp" & x & "AutoID ALTER COLUMN RaditPodle Number"
    .Execute "ALTER TABLE tmp" & x & "AutoID ALTER COLUMN RaditPodle COUNTER(1, 1)"
End With
End Sub
The tables that this works on contain only two fields - an ID number, used for a join to the appropriate record table, and the RaditPodle field, which is purely a sequence number, used later in an Order By clause, to export the records in a specific order. The ordering specification can be a zoo, so I scan the table according to the ordering spec, write the record IDs into this table with that autonumber set to start at 1, then export them via a join, ordered by this autonumber.

The second ALTER statement is the line that crashes. But as I look at it now, I'm wondering if the first ALTER is not the source of the problem. Again, the online help says: "Access also creates a new field (increasing the internal column count by 1) for every field whose properties you modify." I don't recall now why that first ALTER is even there, but I wonder if THAT is not the line that increments the field counter, and I have been blaming an actually innocent autonumber reset.

Well, time to get to work on this. I wonder what it will turn out to be.
 
Your update method for the single db is clever but complicated.

Well, if you can think of something that is simpler AND lets me keep it all in one file AND does not make the user's life any more complicated, I'd love to see it.

They don't actually need to know about the BE if you provide a copy script that always copies both the FE and the BE.

Sure, but again, that makes the entire setup more complex. They could be copying to a flash drive, and the drive letter could be anything. They could be putting it into OneDrive, to let the OD mechanism do the copying and transferring to a different computer, and pick it up there. They could be sending it to someone via FTP, or some online file transfer mechanism. And they can have it in different places in different computers. They could be making a backup copy. They could be copying to several different flash drives. They could be moving it to another drive in the same computer, because the current drive is running out of space. Probably other possibilities as well. Do you want to write a copy script that deals with all that? I don't.

And again, the first response would be,
"Why a copy script? I know how to copy a file."
There are two files now. (Actually, at least three, if you count the copy script - maybe more, for zipping and unzipping, etc.)
"Why two files?"
And away we go...
 
Is it perhaps related to deleted data fields? I was able to reproduce this error.
The space is only released again after Compact.

.Execute "ALTER TABLE tmp" & x & "AutoID ALTER COLUMN RaditPodle Number"
This line is also unnecessary. It will probably force a new data field in the background. => 1 deleted field + 1 new field => deleted fields take up space until the next compact.
 
