Relative Speed

rich.barry

Can anyone tell me which is the faster of the following two methods, and whether the difference will be significant?

1) DLookup a field in a query
or
2) Open a recordset and see if it has a recordcount>0

All I'm trying to do is find out whether a record meeting certain criteria exists.

Also, is anyone able to elaborate on how DLookup works (i.e. what makes it slow) so that I can gauge when to use it and when alternative code is advisable.

Thanks for the help.

Richard
 
A serious question I’m sure.

A serious reply to be sure.

First part of the question…
You seem to have stated one particular requirement so I’ll go with that.
Under normal circumstances it makes no discernible difference whatsoever.
(If you have 100,000+ records it might change, but there is no guarantee.)

Second part of the question…
"DLookup works (i.e. what makes it slow)"
It is not slow… it is being run by the same processor that runs everything else.
If it appears to be slow it is solely because it is doing more than we appreciate.
If it is doing more than we appreciate then it is protecting us from ourselves.
In other words…you get what you pay for and sometimes more. (But there is a penalty in speed.)
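
To put a picture on ‘doing more than we appreciate’… conceptually, each DLookup call behaves something like the little home-made function below. It is only a sketch of the idea, not the real implementation, and MyDLookup is a made-up name. Every call builds a fresh SQL statement, opens a recordset, reads one value and throws the lot away, and that is where the price is paid.

Code:
'Conceptual stand-in for DLookup - an illustration only, not the real thing.
Public Function MyDLookup(strField As String, strDomain As String, _
                          Optional strCriteria As String = "") As Variant
    Dim rst    As DAO.Recordset
    Dim strSQL As String

    strSQL = "SELECT " & strField & " FROM " & strDomain
    If Len(strCriteria) > 0 Then strSQL = strSQL & " WHERE " & strCriteria

    Set rst = CurrentDb.OpenRecordset(strSQL, dbOpenSnapshot)

    If rst.EOF Then
        MyDLookup = Null            'no matching record
    Else
        MyDLookup = rst.Fields(0)   'first value of the first record
    End If

    rst.Close
    Set rst = Nothing
End Function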

Which is faster???
Who cares?…there are more things to be concerned with than raw speed.
If Access is being used for raw speed then it is being used out of context.

When all else fails…time it!

Regards,
Chris.
 
Thanks for your reply, Chris, although not all of it is helpful. When it takes approximately 10 seconds per record to upload from floppy, I do care about speed, and it makes sense to have whatever environment one programs in running optimally.

Based on your advice, however, I did insert a series of timers to find out what was taking the time.
The two DLookups took 6 seconds each time round. Interestingly enough, when I moved the opening of my recordset (the one I'm adding data to) so that it happens once outside the read loop, rather than opening and closing it for each cycle of the loop, the whole thing sped up to about 0.5 seconds per record.
Why this affected the time taken to do the DLookups, I don't know, but I'm happy with the end result.
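
In rough terms, the change was this shape (heavily simplified; the real loop reads a lot more fields):

Code:
Dim Record As New ADODB.Recordset

'Before: the recordset was opened and closed for every record read from the floppy.
'Do
'    Record.Open "BendingTests", CurrentProject.Connection, adOpenKeyset, adLockOptimistic
'    '...read a record from the file, AddNew, Update...
'    Record.Close
'Loop

'After: open once before the loop, reuse it inside, close once at the end.
Record.Open "BendingTests", CurrentProject.Connection, adOpenKeyset, adLockOptimistic
Do While Not EOF(1)                 'the text file is already open as #1
    '...read a record from the file, AddNew, Update...
Loop
Record.Close
Set Record = Nothing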

Regards

Richard
 
Well I think Pat hit the nail on the head, if I read her correctly.

It’s not the method that is expensive in time; it’s the number of times it is called.

In the first question the DLookup is in a query and is called once per record???
To be correctly compared with a recordset operation, that recordset operation would also have to be called once per query record. In other words, when running a test that compares two methods of doing the same job, we must ensure that they are doing the same job.

Now if we keep the number of calls the same for both, and throw in something as slow as working from a floppy, then I suspect it would make very little difference at all. (One thing about working from a floppy would be getting the floppy up to speed, so hitting the floppy every 6 seconds may also involve waiting for that to happen. Hitting the floppy once and dragging the entire file into memory may be the fastest approach. Another way to optimize the process would be to scrap the floppy idea and use one of the many RAM sticks available; they are much more reliable and a whole lot faster.)

But I only ‘suspect’ that and would have to time it.
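
As for dragging the entire file into memory in one hit, something along these lines is the shape I have in mind. Untested airware again, and the file name is only an example.

Code:
Dim intFile     As Integer
Dim strWhole    As String
Dim astrLines() As String

intFile = FreeFile
Open "A:\Results.txt" For Input As #intFile     'example file name only

strWhole = Input$(LOF(intFile), #intFile)       'one read pulls the whole file into memory
Close #intFile                                  'the floppy is finished with at this point

astrLines = Split(strWhole, vbCrLf)             'then parse the lines from memory at leisure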

Any chance of having a look at exactly what you are doing and how you are doing it?

Regards,
Chris.
 
Code:
'   64,000 records.
'   Single pass on hard disk.
'   850MHz PIII

Option Explicit
Option Compare Text

Public Declare Function timeGetTime Lib "Winmm.dll" () As Long


Sub TestIt()
    Dim lngStart      As Long
    Dim strSQL        As String
    Dim strPlace      As String
    Dim rstLocalities As DAO.Recordset
    
    lngStart = timeGetTime()
    
    strPlace = DLookup("Localities", "tblLocalities", "ID = 32000")

    MsgBox timeGetTime() - lngStart & "   " & strPlace    '  0 or 10 milliseconds

    strSQL = "SELECT ID, " & _
                    "Localities " & _
             "FROM tblLocalities " & _
             "WHERE ID = 32000"

    lngStart = timeGetTime()
    
    Set rstLocalities = CurrentDb.OpenRecordset(strSQL, dbOpenDynaset)
    
    strPlace = rstLocalities!Localities
    
    Set rstLocalities = Nothing
    
    MsgBox timeGetTime() - lngStart & "   " & strPlace    '  0 or 10 milliseconds

End Sub

:confused:
 
Sorry Pat, roller blades are totally out of the question…I’d spend a week in hospital. :D
 
Thanks for the DLookup explanation Pat. Interesting and much appreciated,

Chris - below is a cut-down bit of code. Basically the DLookups have been used for my error trapping.
Possibly the way to remove them is to put an error trap and use the duplicate record error and the absence of a parent record to build my information strings instead (there's a rough sketch of that after the code below). I'll consider any other ideas also.

The physical aspects of the situation: a standalone PC is connected to a QC test machine and saves its results to floppy, which are then uploaded to a QC database. The join field is sample time, which the person testing types in for each sample he tests. The upload has to check that the same data is not uploaded twice, and that the sample time corresponds to a record in the database. In both cases, something graceful must be done if something is wrong.

Code:
'define a whole lot of variables
skipped = 0

Open filename For Input As #1

Record.Open "BendingTests", CurrentProject.Connection, adOpenKeyset, adLockOptimistic

Do
    Do
        Input #1, dummy
    Loop While dummy <> "Date & Time" And Not EOF(1)

    If EOF(1) Then Exit Do

    Input #1, testdate
    '......a whole lot more input variables off floppy

    'make a horrible string to check to make sure you don't upload the same data twice
    ConCat = CDate(SampleDate) & TestType & TestLength & TestWidth & TestDepth & TestSpan & MoR & MoE & Shear

    'DLookup check to see if the horrible string exists in the database
    DoesExist = DLookup("[BendingConCat]", "qryBendingConcatenated", "[BendingConCat] = '" & ConCat & "'")

    'DLookup check to see that the join key (SampleDate) exists in the database. Chances are if it doesn't, there is a typo in the file being uploaded
    LabCutExist = DLookup("[LabCutTime]", "LabCuts", "[LabCutTime] =#" & Format(SampleDate, "mm/dd/yy hh:nn") & "#")

    'If data not in table and LabCut exists then upload it
    If IsNull(DoesExist) And Not IsNull(LabCutExist) Then

        With Record
            .AddNew
            !Sampledate = Format(Sampledate, "dd/mm/yy hh:nn")
            '......and all the other fields
            .Update
        End With

    'If data not in table and LabCut does not exist then build an error string of items to check
    ElseIf IsNull(DoesExist) And IsNull(LabCutExist) Then
        NoLabCutStr = NoLabCutStr & SampleDate & " "
        NoLabCut = NoLabCut + 1

    'if data already in table
    Else
        skipped = skipped + 1
    End If
Loop

Close #1

Record.Close

If skipped > 0 Then
    response = MsgBox(skipped & " tests were not imported as duplicate test times exist in the database ", vbOKOnly, "Duplicate Entry")
End If

If NoLabCut > 0 Then
    response = MsgBox("The following lab cut times do not exist in the database." & vbCrLf & NoLabCutStr & vbCrLf & "Amend either the database, or the sample marking and text file, and re-import.", vbCritical, "Lab Cut Error")
End If

Set Record = Nothing
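
For what it's worth, the error-trap alternative I mentioned might look roughly like this. It assumes a unique index on the table (on the concatenated fields, or on SampleDate) so that a duplicate upload makes the Update itself fail instead of being checked for first; the missing lab cut check would still need a lookup or a join, and the exact error raised would need to be confirmed before relying on it.

Code:
'Sketch only - assumes a unique index so that a duplicate makes .Update fail
On Error Resume Next
With Record
    .AddNew
    !Sampledate = Format(SampleDate, "dd/mm/yy hh:nn")
    '......and all the other fields
    .Update
End With
If Err.Number <> 0 Then          'treat a failed Update as a duplicate - check Err.Description
    skipped = skipped + 1
    Err.Clear
End If
On Error GoTo 0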
 
Pat…your answer is crap! (Hard core and intended to be provocative.)

Travelling between NY and DC is one variable.
A train or plane is another variable.

How do these analogies relate to the problem at hand?

NY to DC equates to quantity?
Train or plane equates to method?
Please make your correction now or mark as correct by your silence.

To quote in a somewhat chronological fashion…

My quote…
It’s not the method that is expensive in time it’s the number of times it is called.

Your quote…
Obviously the time to run a DLookup() a single time will be indistinguishable from running a query with DAO or ADO.

Pat, to be quite honest with you, I would not even bother with you except for the fact that you know what you are doing. (Hard core and intended to be provocative.)

Regards,
Chris.
 
Pat, beware the missing icons. ;)

What might be ‘conjecture’ in the Connecticut office is ‘crap’ on the Queensland factory floor.
Don’t suppose you will go halves on ‘balderdash’? :)

But you did nail it on the head.

When the data set is one, none of the methods makes any discernible difference.
As the data set grows the DLookup and Recordset methods may start to show some difference.
As the data set grows even further neither method may be fast enough and a join would be the answer.

The method chosen is dependent on the size of the data set in just the same way as the distance to travel chooses the method of transportation. (If the transportation is available.)

If the size of the data set were unknown then it would be wise to choose the fastest…the join. (If the method is available.)
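
For the record, by ‘the join’ I mean something of this shape, where the whole comparison is done in one statement instead of one domain lookup per record. The table and field names are made up purely for the example.

Code:
Dim strSQL As String

'One query finds every incoming record that has no match in the master table.
strSQL = "SELECT tblIncoming.* " & _
         "FROM tblIncoming LEFT JOIN tblMaster " & _
         "ON tblIncoming.SampleTime = tblMaster.SampleTime " & _
         "WHERE tblMaster.SampleTime Is Null"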

Correctly or incorrectly, that’s how I read your reply. You seem to have got it correct the first time and then also seem to have contradicted it.

That’s why I said ahhhh… balderdash? :D

By the use of your icon I’ll assume you’re not a happy chappy with me, as they say… understandable, and I apologize for the factory floor language.

Don’t let this silly old bugger upset you… no one, least of all me, denies the amount of good work you do. :)

Kind regards,
Chris.
 
G’day Richard

OK, I can’t test this because I don’t have the sample data and I don’t use ADO.

Therefore this is airware.

I can’t see how you would avoid the two DLookups here, nor can I see how you might speed them up. The small piece of timing done on the DLookup and Recordset methods indicates about equal performance; not much to be gained.

If anything is slow, it looks like taking one variable at a time from the floppy; I guess nine for each record and possibly many more. One thing that might help is to do a Line Input, if you can, and parse the string.
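
Something like this is the shape I mean, assuming the file is comma delimited. Untested airware, as before.

Code:
Dim strLine      As String
Dim astrFields() As String

Open filename For Input As #1
Do While Not EOF(1)
    Line Input #1, strLine              'one read per record instead of one per field
    astrFields = Split(strLine, ",")    'then pull the fields apart in memory
    '...assign astrFields(0), astrFields(1), ... to the variables...
Loop
Close #1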

The only other thing I can suggest and recommend is the use of a RAM stick. I’ve had great success with the JetFlash brand but there are others.

If you like, can you post some sample data and I’ll try to build a DAO version?

Just a zipped text file under 100K should do.

Regards,
Chris.
 
Sorry if I offended you.

Pat, it was not only unnecessary to apologize if you offended me, it is technically impossible. :D

All I ever wanted, and needed, is a correct answer.

Correctness lives by degree, crap (sometimes expressed in different ways) inhibits a true explanation, and hence the difference of opinion between the office and the factory floor…between you and me.

I do not want or need or even think that your apology is needed.

What we do is do what we do.

You and I might differ in opinion but that’s all it is, a difference of opinion, nothing more.

Big kiss to make up? :D

Kind regards,
Chris.
 
There is another way to do this that might be easier. A divide-and-conquer method worked for old J. Caesar; it might work for you.

Import the data to a temp table. Don't EVER work directly from the floppy if there is any way to avoid it. Minimize your exposure to this slow device.

Now that you have a temp table populated, you CAN do a join of that table with your REAL data table. Once the JOIN query is created, you can open the recordset, do a .MoveLast, and get an accurate count of its population.

Then, if the population is zero, you aren't waiting on a bloody floppy spin to append anything. If the population is NOT zero, you know you have an append to do - but since it is now on your faster disk, you can get it done quicker.

And in either case, when you are done you can erase the temp table.

Now, why does this work better?

You yourself complained about speed issues. This approach isolates the costs of working with this data set and gives you ways of avoiding some of the work in some cases.

Your problem was to find matches AND to append records. Trying to do it all in one pass forced you to consider some complexities AND some involvement with a really slow device.

So the solution is to minimize your exposure to the vagaries of the slow device. Change your problem just slightly so that you load from the slow device in a straight-through operation but to a temp table rather than committing it to your master table in a single operation.

Then SEPARATELY evaluate the JOIN table to see if you have to do the next step.

Then do the APPEND step only if and when and where you have to.

Then clean up after yourself with a query to erase the temp table.

What good is this approach? Each individual step is simpler, easier to write, easier to maintain, easier to understand. Only the import step is slow to run. All other steps ought to be pretty fast.

And you can tie this together any way you want, including macros or vba behind a control panel form or just about anything else that would trigger some code.
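
To put a rough shape on it - every table, query, and file name below is a placeholder, not something from your database - the pieces might look like this:

Code:
Dim db  As DAO.Database
Dim rst As DAO.Recordset

Set db = CurrentDb

'1. Import the floppy file to a temp table in one straight-through pass.
DoCmd.TransferText acImportDelim, , "tblTempImport", "A:\Results.txt"

'2. Evaluate the JOIN of the temp table against the real table.
'   (qryTempJoinedToMaster is the saved query that identifies what needs appending.)
Set rst = db.OpenRecordset("qryTempJoinedToMaster", dbOpenDynaset)
If Not rst.EOF Then rst.MoveLast            'get an accurate count of its population
If rst.RecordCount > 0 Then
    '3. Do the APPEND step only if and when you have to.
    db.Execute "qryAppendNewTests", dbFailOnError
End If
rst.Close
Set rst = Nothing

'4. Clean up after yourself.
db.Execute "DELETE * FROM tblTempImport", dbFailOnError
Set db = Nothing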
 
Thanks for all the input, people. I will rewrite the error trap to trap only on problems rather than check everything.
As it is now a couple of years since I wrote that particular piece of code, I can't remember the rationale for doing it the way I did. I only decided it was a dog a couple of weeks back when I ended up having to do some testing and uploading myself. It's amazing what other people will put up with when they don't know better!

If there is an error in the text file being uploaded, you end up going to the text file, fixing the mistake, then uploading the file again. The second time round, everything apart from the previously incorrect record is now a duplicate and triggers the error trap, but as long as the user knows why, this is not a problem.

Best Regards

Richard
 
Nothing proved, nothing solved…nothing gained.

A “Long Day's Journey Into Night”, a “Much Ado About Nothing”.

On the Queensland factory floor…
 
