(BTW, how do you "always replace the FE each time it is opened"? How would you do that, and why?)
No, you misread my post. I did not overrun the maximum allowable number IN an autonumber field - I reset the autonumber field to start at 1 again.

Hm. The range of values for Long Integer, which is the data type for AutoNumbers, is -2,147,483,648 to 2,147,483,647.
So even if your user created only positive values, that is 2 billion, 147 million, 483 thousand, 647 records possible. Are you sure the user was able to create that many records in one day? That would be an extremely heavy load indeed, it would seem.
Unless the user could show you a table with an AutoNumber that large, I'd look elsewhere for the actual problem. There is one, no doubt, but this may be the wrong path.
"No meaning" is only one way to use an autonumber field. I was using it as a sequence number - it seemed a handy, no-code way to get sequential numbers. And it does do that, very nicely. There are no other actions that mess with this temp table, so I have no problems with missing numbers or any of the other issues that bedevil incorrect use of this feature. Trivial to code around, now that I know what the problem is, but if they allow the start number to be reset AT ALL, it seems mind-numbingly incompetent to do it in a way that crashes after a certain number of repeats.

I'm not justifying the sloppy code, but I'm sure that the developer convinced himself it would never be a problem, because autonumbers have no meaning and so resetting them over and over again would never happen.
The overview is that you build a very short batch job to copy the "master" FE to overwrite the prior copy, then you launch what you just copied. You do it as a way to automatically update the FE so that if you have "fixed" something, everybody gets the fixed version on the next launch. As a side effect of this approach, even if you DIDN'T make an update, the users get fresh copy of the FE that has not yet had a chance to get bloated by use.
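A minimal sketch of such a launcher batch job, assuming hypothetical paths (a master FE on a network share, a local working copy under the user's profile - adjust names to your environment):

```bat
@echo off
rem Hypothetical paths - substitute your own share and FE name.
set MASTER=\\server\apps\MyApp.accdb
set LOCAL=%LOCALAPPDATA%\MyApp\MyApp.accdb

rem Overwrite the local FE with the master copy on every launch,
rem so users always start from a fresh, un-bloated file.
if not exist "%LOCALAPPDATA%\MyApp" mkdir "%LOCALAPPDATA%\MyApp"
copy /Y "%MASTER%" "%LOCAL%"

rem Launch the freshly copied FE.
start "" "%LOCAL%"
```

Users get a shortcut to this batch file instead of to the FE itself, so the copy step cannot be skipped.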
Why should compacting the backend create a new data field?

The clods at MS somehow coded this action not to actually reset the numbering, but instead to create a NEW autonumber field in the same table, while leaving the old one hidden somewhere in the guts of the table.
No - resetting the starting number of the autonumber field is what (apparently) creates a new column, while leaving behind some hidden dregs of the former one. C & R clears out such debris. Reset the start enough times (~255) without C & R, and you get the "too many fields" error. Try putting your ALTER ... COUNTER code in a loop, and see how many repetitions you get before it crashes.
I cannot reproduce the problem, neither with DBEngine.CompactDatabase nor with ALTER TABLE ... ALTER COLUMN ... COUNTER(1,1).
Does this only occur during automatic compact when closing?
Try putting your ALTER ... COUNTER code in a loop, and see how many repetitions you get before it crashes.
Const ResetCounterDDl As String = "ALTER TABLE table1 ALTER COLUMN id COUNTER(1,1)"
Dim i As Long
For i = 1 To 10000
'CurrentProject.Connection.Execute ResetCounterDDl
CurrentDb.Execute ResetCounterDDl, dbFailOnError
Next
Yes, when you replace it always, you do not need to check the version. I do NOT replace it always, hence the version check. I only replace when needed, in the multi-user systems.

Replacing the FE each time you open it has nothing to do with version checking. Version checking ensures that the user is running the application correctly and that the FE and BE are in sync.
Updating a single-user monolithic app is certainly possible but I would still split the app and have the user copy two files instead of one. Automatically connecting to the BE is easy enough if they are in the same folder. You could give him a copy script that zips the two files into a single file. In this case, in addition to the version match, I might use a table in each database with a counter that updates each time the app is opened to ensure that they are copied in sync. Then when the counter is zero in the FE, the BE is resync'd with the new FE.
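A sketch of the FE/BE version check described above, assuming a one-row table tblVersion (field VersionNum) in the BE and a matching constant compiled into the FE - all names are hypothetical:

```vba
' Hypothetical: bump this constant each time you publish a new FE,
' and update tblVersion.VersionNum in the BE at the same time.
Private Const FE_VERSION As Long = 42

Public Function VersionsMatch() As Boolean
    Dim beVersion As Variant
    ' tblVersion is a linked table in the BE with a single row.
    beVersion = DLookup("VersionNum", "tblVersion")
    VersionsMatch = (Nz(beVersion, -1) = FE_VERSION)
End Function
```

Call this at startup and refuse to continue (or trigger a re-copy of the FE) when it returns False.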
Well, maybe a reset doesn't create a new field if the autonumber has never been incremented? I don't know - I just heard about the problem yesterday morning from the user, and haven't investigated the details myself yet, but that is the first thing that occurs to me: no reset if the counter has not yet actually moved off the starting point. Maybe, in your loop, try adding a record to the table to trigger the autonumber incrementer and then reset?
Note: Compact also resets the counter to the smallest possible value.
Dim i As Long
For i = 1 To 10000
With CurrentDb
.Execute "insert into table1 (T) select top 100 T from table2", dbFailOnError
.Execute "delete from table1", dbFailOnError
.Execute ResetCounterDDl, dbFailOnError
End With
Next
=> no crash.
... TableDef is used and not DDL.
learn.microsoft.com/en-us/office/troubleshoot/access/too-many-fields-defined-error
I suppose there is always some way for a creative enough user to mess things up. So, you have an FE that checks - what? You put a new FE on the server and at the same time update a version number somewhere in the BE, and the FE checks to see if they match? Also a good method.

Yes, you do. If the user makes a mistake and opens Access and clicks on the last used database, the new FE will not be downloaded and the one in the local directory will be used. This wouldn't be terrible unless this was the day that the versions changed. Better safe than sorry.
The only problem I see with your monolithic solution is that if the user downloads a new version and copies it to work offline without opening it first so it can run the copy code, he ends up off site with no data. Otherwise, as long as the data transfer is automated, it should be fine.
Okay, you've got me - I don't know. I will be digging into this and changing the code to not use autonumber, but I will try to reproduce the problem myself before I dismantle the existing setup - I'm really curious now. Here is the entire routine that bombs:
Tested with Access 365 version 2405
Can you reproduce the problem?
... TableDef is used and not DDL.
Public Sub ResetTmpTable(ByVal x$)
With CurrentProject.Connection
.Execute "DELETE * From tmp" & x & "AutoID"
.Execute "ALTER TABLE tmp" & x & "AutoID ALTER COLUMN RaditPodle Number"
.Execute "ALTER TABLE tmp" & x & "AutoID ALTER COLUMN RaditPodle COUNTER(1, 1)"
End With
End Sub
Your update method for the single db is clever but complicated.
They don't actually need to know about the BE if you provide a copy script that always copies both the FE and the BE.
.Execute "ALTER TABLE tmp" & x & "AutoID ALTER COLUMN RaditPodle Number"
This line is also unnecessary. This will probably force a new data field in the background => 1 deleted field + 1 new field => deleted fields require space until the next compact.
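If that is right, the routine could be trimmed to a single ALTER per reset - a sketch only, keeping the original table and column names, on the assumption that the intermediate conversion to Number is what leaves a deleted field behind:

```vba
Public Sub ResetTmpTable(ByVal x$)
    With CurrentProject.Connection
        ' Clear the temp table first.
        .Execute "DELETE * FROM tmp" & x & "AutoID"
        ' Reset the counter directly, skipping the conversion to Number,
        ' which appears to create a deleted-field remnant each time.
        .Execute "ALTER TABLE tmp" & x & "AutoID ALTER COLUMN RaditPodle COUNTER(1, 1)"
    End With
End Sub
```

Whether this avoids the "too many fields" buildup would still need the loop test above to confirm.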