TableDef record count

Pat Hartman

Super Moderator
Staff member
Local time
Today, 12:34
Joined
Feb 19, 2002
Messages
43,296
Therein lies the problem. Split the FE and BE and you don't need the convoluted data transfer. When you change the FE, you just replace the master on the server and the next time the user opens the app, the new version gets downloaded. There are various ways to do this, from a simple batch file (which is what I use) to complex apps that do more than just replace the FE. You will find them here.

When you have to change the BE, that is a different process. If the user is local, it is pretty easy. You can ask him not to use the app for a few minutes/hours while you change his BE (after backing it up, of course). If he is remote, you create a "conversion app". This app adds columns and adds tables, and maybe corrects existing data, such as running a query to convert SLS to null. If you have a structural change to make, the conversion is more complex, but it should still be automated using DAO code or DDL queries. When I create apps for users whose systems I don't have easy access to, I use the "conversion app" method. One thing though - I never delete tables from the BE or columns from tables. I leave the old data there.
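For illustration, one step of such a conversion app might look like this using DAO and DDL queries. The database path, table, and column names here are hypothetical; only the SLS-to-null idea is taken from the post above.

```vba
' Sketch of one "conversion app" step, run against the user's back end.
' Path, table, and column names are illustrative, not from the thread.
Sub UpgradeBackEnd()
    Dim db As DAO.Database
    Set db = DBEngine.OpenDatabase("C:\Data\Backend.accdb")

    ' Add a new column to an existing table
    db.Execute "ALTER TABLE Orders ADD COLUMN Notes TEXT(255)", dbFailOnError

    ' Add a new table
    db.Execute "CREATE TABLE AuditLog (ID AUTOINCREMENT PRIMARY KEY, " & _
               "LoggedAt DATETIME, Action TEXT(50))", dbFailOnError

    ' Correct existing data, e.g. convert a placeholder value to Null
    db.Execute "UPDATE Orders SET Status = Null WHERE Status = 'SLS'", dbFailOnError

    db.Close
End Sub
```

Using dbFailOnError makes each DDL or action query raise a trappable error instead of failing silently, which matters when the conversion runs unattended on a remote user's machine.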
 

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
Therein lies the problem. Split the FE and BE and you don't need the convoluted data transfer. When you change the FE, you just replace the master on the server and the next time the user opens the app, the new version gets downloaded. There are various ways to do this, from a simple batch file (which is what I use) to complex apps that do more than just replace the FE. You will find them here.

When you have to change the BE, that is a different process. If the user is local, it is pretty easy. You can ask him not to use the app for a few minutes/hours while you change his BE (after backing it up, of course). If he is remote, you create a "conversion app". This app adds columns and adds tables, and maybe corrects existing data, such as running a query to convert SLS to null. If you have a structural change to make, the conversion is more complex, but it should still be automated using DAO code or DDL queries. When I create apps for users whose systems I don't have easy access to, I use the "conversion app" method. One thing though - I never delete tables from the BE or columns from tables. I leave the old data there.
Yeah, I figured someone would bring this up. Splitting is one way of doing things, but not the only way, and it is certainly not 'the problem' here. I have intentionally not split this one, because it needs to remain portable, and the single-file design is far better for that.

And I am NOT deleting tables or columns. I am reading data from one DB into another. That all works perfectly and I have no intention of changing it. Splitting this DB would introduce a new level of complexity for both the user and me, and have zero benefit for anything.

All I want is to figure out why this RecordCount property returns the wrong value. I do not want to get side-tracked into discussions about other things, like the merits of splitting a database.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 12:34
Joined
Feb 19, 2002
Messages
43,296
You can do whatever floats your boat. I would never create a monolithic app for a user. Period.
 

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
You can do whatever floats your boat. I would never create a monolithic app for a user. Period.
Jesus - nobody is asking you to. But being pedantic about anything is usually a mistake. There are situations where a monolith is a better solution. If you insist on avoiding it for no reason except that you think splitting is 'always the right way', you add extra work for yourself and extra complexity for the user.

And all this has NOTHING to do with what I asked about. If you're going to engage with me, can you please at least stick to the topic?
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 12:34
Joined
Feb 19, 2002
Messages
43,296
The discussion is relevant because your problem, now that we know what the actual problem is, is caused by the use of non-standard procedures. If you wish to use non-standard procedures, that is your choice. I was merely pointing out that your choice is non-standard. If you think splitting is more work than what you are doing, is it because you've never split a database before and don't know how to relink tables? You joined the forum over 12 years ago. You've made very few posts, so you don't come to help people. You come to be helped. Am I supposed to automatically know your level of expertise?

If you are not interested in further discussion, just mark the thread as closed and we'll leave it at that.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 11:34
Joined
Feb 28, 2001
Messages
27,191
(Quote from post #7) I have a development machine with 365, and one of my users has an older one with 2007. Both do it.

Do you have any machines that DON'T exhibit the behavior? When theDBguy says it works OK for him, that eliminates some possible cases. For instance, working on one machine but not two others sounds like a configuration issue, which in turn is usually caused by either having an old library or having an issue among the library references. The installed libraries stay with the machines even if you move this monolithic app of yours. If you take the app from place to place, it upgrades its library references where needed. BUT if you move the app back to a lower version, the libraries do not automatically downgrade.
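One way to compare the reference configuration across machines is to enumerate the VBA project's library references. This is only a sketch; it requires the "Trust access to the VBA project object model" option in the Trust Center.

```vba
' List the project's library references, flagging any broken ones.
' Requires "Trust access to the VBA project object model" in Trust Center.
Sub ListReferences()
    Dim ref As Object   ' VBIDE.Reference if VBA Extensibility is referenced
    For Each ref In Application.VBE.ActiveVBProject.References
        Debug.Print ref.Name, ref.Major & "." & ref.Minor, ref.IsBroken
    Next ref
End Sub
```

Running this on a machine that shows the problem and one that doesn't would reveal any version mismatch among the DAO/Office libraries.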

You said "one of my users has an older one..." which is of uncertain semantics. Do you mean that another of your users has some version on which this problem does NOT occur?

Earlier (post #18) you showed us a DCount("1", "Prilohy") and that made me pause for a moment. Do you actually have a field named "1" in that table? Because that is the syntax you presented. The interesting thing about DCount is that you could miss extant records counting that way, because if there is a null involved, DCount won't count it UNLESS you are doing a DCount("*", "Prilohy").
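For illustration, the difference between the two forms (the field name SomeField is hypothetical; Prilohy is the table from the thread):

```vba
' With a real field name, DCount skips rows where that field is Null,
' while "*" counts every row. SomeField is a hypothetical field name.
Sub CompareDCounts()
    Debug.Print DCount("*", "Prilohy")          ' counts all rows
    Debug.Print DCount("SomeField", "Prilohy")  ' skips rows where SomeField is Null
End Sub
```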
 

isladogs

MVP / VIP
Local time
Today, 17:34
Joined
Jan 14, 2017
Messages
18,237
I've stayed out of this thread until now but have watched, skimming posts, as it has deteriorated.

As stated earlier, .RecordCount gives -1 for linked tables, but that is irrelevant here as the database isn't split.
Whether or not it should be split is, in my opinion, not relevant to the question originally posted.
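For illustration, a quick way to see the -1 behavior for linked tables is to loop the TableDefs collection and check each table's Connect string:

```vba
' Local tables report their record count; linked tables (those with a
' non-empty Connect string) report -1 for TableDef.RecordCount.
Sub ShowTableCounts()
    Dim tdf As DAO.TableDef
    For Each tdf In CurrentDb.TableDefs
        If Left$(tdf.Name, 4) <> "MSys" Then    ' skip system tables
            Debug.Print tdf.Name, (Len(tdf.Connect) > 0), tdf.RecordCount
        End If
    Next tdf
End Sub
```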

The code written in post #18 should work perfectly. I've used similar code myself on numerous occasions.
I've just tested your exact code on local tables and it worked fine for me.
In my case the Debug results were 58, 0, 0.
I don't know why your second figure would be other than zero.

Whilst you could use .RecordsAffected, I see no benefit in doing so in this situation.

Whilst I always use DCount("*", "TableName"), your variant using "1" gives exactly the same output. Using "1" doesn't mean it is a real field.
 
Last edited:

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
The discussion is relevant because your problem, now that we know what the actual problem is, is caused by the use of non-standard procedures. If you wish to use non-standard procedures, that is your choice. I was merely pointing out that your choice is non-standard. If you think splitting is more work than what you are doing, is it because you've never split a database before and don't know how to relink tables? You joined the forum over 12 years ago. You've made very few posts, so you don't come to help people. You come to be helped. Am I supposed to automatically know your level of expertise?

If you are not interested in further discussion, just mark the thread as closed and we'll leave it at that.
It is NOT caused by any non-standard procedures, and I am not using any non-standard procedures anyway. I am DELETING ALL RECORDS FROM A TABLE. Period. There is nothing non-standard about an SQL DELETE command. Whatever else is going on around that is completely irrelevant to the topic.

Yes, I do know how to split databases. I do so when it's appropriate, like when communicating with an external server. I do not do so when it would be a poor design choice, like this one, where the user wants the entire database to be flash-drive portable. He copies his single .accdb file onto a USB stick and he's done. Forcing this user to keep track of (at least) two files, with the potential to forget one while copying the other, would be a very poor design choice. I will not force my users to jump through pointless hoops because of someone else's views on 'proper' procedures.

I should never have even mentioned this, because it always prompts a flood of irrelevant comments.

The topic is the RecordCount property of a TableDef object. Nothing else.
 

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
Do you have any machines that DON'T exhibit the behavior? When theDBguy says it works OK for him, that eliminates some possible cases. For instance, working on one machine but not two others sounds like a configuration issue, which in turn is usually caused by either having an old library or having an issue among the library references. The installed libraries stay with the machines even if you move this monolithic app of yours. If you take the app from place to place, it upgrades its library references where needed. BUT if you move the app back to a lower version, the libraries do not automatically downgrade.

You said "one of my users has an older one..." which is of uncertain semantics. Do you mean that another of your users has some version on which this problem does NOT occur?

Earlier (post #18) you showed us a DCount("1", "Prilohy") and that made me pause for a moment. Do you actually have a field named "1" in that table? Because that is the syntax you presented. The interesting thing about DCount is that you could miss extant records counting that way, because if there is a null involved, DCount won't count it UNLESS you are doing a DCount("*", "Prilohy").
No. All machines on which I run this act the same way. The end user is stuck in 2007, so when I develop a new version on my modern laptop with 365, I run it through an old server on which I also still have 2007, and then send it to him. The problem occurs on all of them.

The only thing I can see that might be influencing something is that the problematic table is in a relationship. But it is on the 'one' side, and the table on the 'many' side gets emptied first, so it shouldn't be running into any cascading conflicts.

The DCount only counts records. I have to put something in that first argument or DCount complains. I use "1" as a placeholder when I'm not interested in any specific field. But putting in the name of one of the actual fields gives the same result. The issues with an actual field name and nulls could result in an undercount, but that's not an issue for this situation - all fields in this table are required.
 

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
I've stayed out of this thread until now but have watched, skimming posts, as it has deteriorated.

As stated earlier, .RecordCount gives -1 for linked tables, but that is irrelevant here as the database isn't split.
Whether or not it should be split is, in my opinion, not relevant to the question originally posted.

The code written in post #18 should work perfectly. I've used similar code myself on numerous occasions.
I've just tested your exact code on local tables and it worked fine for me.
In my case the Debug results were 58, 0, 0.
I don't know why your second figure would be other than zero.

Whilst you could use .RecordsAffected, I see no benefit in doing so in this situation.

Whilst I always use DCount("*", "TableName"), your variant using "1" gives exactly the same output. Using "1" doesn't mean it is a real field.
Thank you. Yes, using "*" might be more readable. I've just gotten into the habit of using "1". I'd like to use an empty string, to indicate that I have no interest in any field, but that throws an error.

As for the original topic, I have no clue why it doesn't work, and apparently nobody else does either. There are other tables in the DB that show the correct count. I'm beginning to wonder if maybe just that table is somehow corrupted, and I could fix it by deleting and re-creating the entire table. Worth a shot, I guess. I'll try it.
 

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
Well, it looks like that was it - a corrupt table definition. I removed the relationship, deleted the table, did a C&R, decompile, another C&R, created a new table with the same definition and added the relationship back in, and did a final C&R.

Loaded the tables in the standard manner, using the code where the DB examines itself on startup and requests a location of the data when it sees it is empty. It read in with no problems, then I ran the ClearDB routine, which empties all the tables prior to shipping to the user. Both DCount and TableDef().RecordCount now give the proper value of zero, so I'm going to consider this solved.
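The ClearDB routine itself isn't shown in the thread; a hypothetical sketch of such a routine, assuming deletion order doesn't violate any relationships, might look like this:

```vba
' Hypothetical reconstruction of a ClearDB-style routine: empties every
' non-system local table before the file is shipped to the user.
' A real version would need to empty child tables before parent tables
' when enforced relationships without cascade delete are present.
Sub ClearDB()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Left$(tdf.Name, 4) <> "MSys" And Len(tdf.Connect) = 0 Then
            db.Execute "DELETE FROM [" & tdf.Name & "]", dbFailOnError
        End If
    Next tdf
End Sub
```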

Thank you to everyone who took a run at this - it was a weird one.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 11:34
Joined
Feb 28, 2001
Messages
27,191
I'm beginning to wonder if maybe just that table is somehow corrupted, and I could fix it by deleting and re-creating the entire table. Worth a shot, I guess. I'll try it.

Given that you reported a case where you got a specific non-zero number for the .RecordCount after a delete, corruption cannot be ruled out. If so, then deleting all the records might not work correctly anyway. The DELETE process might stop when it hits the corruption.

As you suggest, it MIGHT be enough to just delete and manually recreate the table, followed by a Compact & Repair.

I cannot dispute your choice to consider a "corruption" solution. Good luck on the attempt.

EDIT: Looks like our posts crossed each other at the same time.
Glad to see the problem is fixed AND that in the aftermath, it was a simple enough fix.
 

theDBguy

I’m here to help
Staff member
Local time
Today, 09:34
Joined
Oct 29, 2018
Messages
21,474
Well, it looks like that was it - a corrupt table definition. I removed the relationship, deleted the table, did a C&R, decompile, another C&R, created a new table with the same definition and added the relationship back in, and did a final C&R.

Loaded the tables in the standard manner, using the code where the DB examines itself on startup and requests a location of the data when it sees it is empty. It read in with no problems, then I ran the ClearDB routine, which empties all the tables prior to shipping to the user. Both DCount and TableDef().RecordCount now give the proper value of zero, so I'm going to consider this solved.

Thank you to everyone who took a run at this - it was a weird one.
Glad to hear you got it sorted out. Cheers!
 

isladogs

MVP / VIP
Local time
Today, 17:34
Joined
Jan 14, 2017
Messages
18,237
Given that you reported a case where you got a specific non-zero number for the .RecordCount after a delete, corruption cannot be ruled out. If so, then deleting all the records might not work correctly anyway. The DELETE process might stop when it hits the corruption.

Yes, a delete might stop where it hits a corrupted record but, in that case, using DCount would also have given a non-zero value.
So that wouldn't explain the discrepancy in the two values after running the delete query that were reported at the start of this thread.

The OP has a solution...but not an explanation.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 11:34
Joined
Feb 28, 2001
Messages
27,191
The question is whether DCount could follow the table's contents past the corruption point, because if it couldn't, then after the deletion it would return a 0 (representing as far as it got). Then the before-and-after non-zero .RecordCount is how many records were there to start, then how many records were not deleted because they were behind the corruption point.

This assumes that the record count in question is actually not representative of a normalized result. I.e. if the .RecordCount is kept as a property, not a "hidden DCount" method, then it is updated only when an action query adds or removes records. (Just like the method we DON'T recommend for inventory-on-hand counting.) I'm not saying that is actually what happens, but that is one way it COULD happen. Call it speculation. I've been known to go out on a limb before.
 

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
Yes, a delete might stop where it hits a corrupted record but, in that case, using DCount would also have given a non-zero value.
So that wouldn't explain the discrepancy in the two values after running the delete query that were reported at the start of this thread.

The OP has a solution...but not an explanation.
True, I don't really have an explanation - corruption is such a vague and all-encompassing word for so many problems, and similar-appearing problems can have vastly different causes. But trying to trace exactly what was happening at the machine-code level is more than I am willing to tackle. Obviously the two counts use different methods, since they give different results - beyond that, I haven't a clue.

Mostly I was concerned that there was something strange about the TableDef().RecordCount property that I was overlooking, but seemingly no - it does exactly what one would expect. Given what resources I have to investigate the innards of a malfunctioning Access instance (essentially none), that will have to suffice.

I appreciate your insights on the matter.
 

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
One more note to this. After replying to Colin about Access obviously using two different methods, I got to wondering about performance, so I wrote a tiny test routine to count the records in the largest table in this app, almost 200,000 records.

Here it is. I ran it three ways, each time commenting out two lines and testing only the performance of the third.
Code:
Sub testcnt()
Dim i&, j&, t As Date
t = Now
For i = 1 To 10000
    j = CurrentDb.TableDefs("Zaznamy").RecordCount
    j = DCount("1", "Zaznamy")
    j = CurrentDb.OpenRecordset("Select Count(0) From Zaznamy").Fields(0)
Next i
Debug.Print Format$(Now - t, "#.#######")
End Sub
The results of the tests, in the same order as the code, are:

,0001157
,0002546
,0002662

And the vast majority of the time involved is the CurrentDb operation. When I put the loop inside a With CurrentDb block, I had to bump the iterations from ten thousand to a million just to get a non-zero elapsed time from the .RecordCount property. The other two methods were hardly affected at all by being inside the block. So I'm guessing that this property is stored as a value, updated by Access's internal equivalent of a trigger, while the other two methods DO count the entire table each time they are called.
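The With CurrentDb variant described above might look roughly like this (a sketch based on the description, not the actual test code):

```vba
' Timing variant with CurrentDb hoisted out of the loop, so only the
' .RecordCount property read itself is measured. Table name is the one
' used in the earlier test.
Sub testcnt2()
    Dim i As Long, j As Long, t As Date
    t = Now
    With CurrentDb
        For i = 1 To 1000000    ' far more iterations needed to register any time
            j = .TableDefs("Zaznamy").RecordCount
        Next i
    End With
    Debug.Print Format$(Now - t, "#.#######")
End Sub
```

Hoisting CurrentDb matters because each call to CurrentDb creates and refreshes a new Database object reference, which dwarfs the cost of reading a cached property.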
 
Last edited:

isladogs

MVP / VIP
Local time
Today, 17:34
Joined
Jan 14, 2017
Messages
18,237
The question is whether DCount could follow the table's contents past the corruption point, because if it couldn't, then after the deletion it would return a 0 (representing as far as it got). Then the before-and-after non-zero .RecordCount is how many records were there to start, then how many records were not deleted because they were behind the corruption point. This assumes that the record count in question is actually not representative of a normalized result. I.e. if the .RecordCount is kept as a property, not a "hidden DCount" method, then it is updated only when an action query adds or removes records. (Just like the method we DON'T recommend for inventory-on-hand counting.) I'm not saying that is actually what happens, but that is one way it COULD happen. Call it speculation. I've been known to go out on a limb before.
I don't like speculating as I tend to get burnt!

I've tested on a table with a badly corrupted record.
[Attached image: CorruptedRecord.png]


I then ran the following routine to measure the record count using two methods before & after deletion.

[Attached image: RecordCountTests.png]


As you can see, the corrupted record was deleted and, in this case, the two methods did give exactly the same result each time.

It would obviously be worth rechecking in a case where the corrupted record isn't deleted.

I've attached the test database with the corrupted record.

@Petr Danes
You may be interested in a couple of my speed test articles.
This article has recently been updated & extended.

This one is about to be rewritten with additional tests & different conclusions.

Setting CurrentDb does indeed take up a measurable time. In my latest tests, I found the following:
[Attached image: AvgSetClearTimes.PNG]


These are average times in milliseconds to set a database variable & to set & clear the variable.
Clearing takes an insignificant amount of time.
Setting using CurrentDb is comparatively slow at about 0.325 milliseconds, though the time is still very small.
 

Attachments

  • CorruptionTest.zip
    82.7 KB
Last edited:

Josef P.

Well-known member
Local time
Today, 18:34
Joined
Feb 2, 2023
Messages
827
Complementary to Colin's tests:

For the time measurements, it should be noted that the creation of CurrentDb is also measured.

With CurrentDb inside loop:
Code:
Private Sub TestCountCurrentDb()

    Const Maxloops As Long = 1000
    Dim i As Long
    Dim cnt As Long

    Dim T As QpcStoppuhr
    Set T = New QpcStoppuhr

    T.StartT
    For i = 1 To Maxloops
        cnt = CurrentDb.TableDefs("TableA").RecordCount
    Next
    T.StopT
    Debug.Print "TableDef.RecordCount", cnt, T.Time


    T.StartT
    For i = 1 To Maxloops
        cnt = DCount("1", "TableA")
    Next
    T.StopT
    Debug.Print "DCount       .", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With CurrentDb.OpenRecordset("TableA", dbOpenTable, , dbReadOnly)
            '.MoveLast ' not required by dbOpenTable!
            cnt = .RecordCount
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS.RecordCount tbl", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With CurrentDb.OpenRecordset("TableA", dbOpenDynaset, dbReadOnly)
            .MoveLast
            cnt = .RecordCount
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS.RecordCount dyn", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With CurrentDb.OpenRecordset("select count(*) from TableA", dbOpenDynaset)
            cnt = .Fields(0)
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS + Sql Count dyn", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With CurrentDb.OpenRecordset("select count(*) from TableA", dbOpenForwardOnly)
            cnt = .Fields(0)
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS + Sql Count fwd", cnt, T.Time

End Sub
Result:
Code:
TableDef.RecordCount         65536         383,1742
DCount       .               65536         1072,1626
RS.RecordCount tbl           65536         285,2852
RS.RecordCount dyn           65536         2565,4264
RS + Sql Count dyn           65536         1383,167
RS + Sql Count fwd           65536         1169,7228
=> Winner: CurrentDb.OpenRecordset("TableA", dbOpenTable).RecordCount

But now with a reference of CurrentDb (created outside loops):
Code:
Private Sub TestCountDbRef()

    Const Maxloops As Long = 1000
    Dim i As Long
    Dim cnt As Long
    Dim db As DAO.Database

    Set db = CurrentDb  '<--- create instance outside measurement

    Dim T As QpcStoppuhr
    Set T = New QpcStoppuhr

    T.StartT
    For i = 1 To Maxloops
        cnt = db.TableDefs("TableA").RecordCount
    Next
    T.StopT
    Debug.Print "TableDef.RecordCount", cnt, T.Time


    T.StartT
    For i = 1 To Maxloops
        cnt = DCount("1", "TableA")
    Next
    T.StopT
    Debug.Print "DCount       .", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With db.OpenRecordset("TableA", dbOpenTable, dbReadOnly)
            '.MoveLast ' not required by dbOpenTable!
            cnt = .RecordCount
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS.RecordCount tbl", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With db.OpenRecordset("TableA", dbOpenDynaset, dbReadOnly)
            .MoveLast
            cnt = .RecordCount
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS.RecordCount dyn", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With db.OpenRecordset("select count(*) from TableA", dbOpenDynaset)
            cnt = .Fields(0)
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS + Sql Count dyn", cnt, T.Time

    T.StartT
    For i = 1 To Maxloops
        With db.OpenRecordset("select count(*) from TableA", dbOpenForwardOnly)
            cnt = .Fields(0)
            .Close
        End With
    Next
    T.StopT
    Debug.Print "RS + Sql Count fwd", cnt, T.Time

End Sub
Result:
Code:
TableDef.RecordCount         65536         2,0105
DCount       .               65536         1256,3409
RS.RecordCount tbl           65536         107,7835
RS.RecordCount dyn           65536         2167,2701
RS + Sql Count dyn           65536         944,8136
RS + Sql Count fwd           65536         938,5551
Winner: TableDef.RecordCount
... I think this is a cached value (2 ms for 1000 loops).
 
Last edited:

Petr Danes

Registered User.
Local time
Today, 18:34
Joined
Aug 4, 2010
Messages
150
Crap - spoke too soon. A few more hours of work and the problem is back. In the same table, with the same incorrect ghost count, and a new table added to the mix, also with a non-zero count on an empty table. Whatever is going on, it's not reliable enough for me to use. I'm giving up for now - I've switched everything to DCount.
 
