Speed up SQL statement in VBA

Hi,


I have this procedure that takes a few seconds to run. It simply inserts a single row of data into a table, so I don't think it should take that long.



How can I speed this up to be near instantaneous?



Code:
Sub TrackChanges(ByVal TableName As String, ByVal ProductKey As Long, ByVal FieldChanged As String, ByVal ValueChanged As String, ByVal ModifiedBy As String, ByVal ModifiedDate As Date)

    Dim sql As String
    
    On Error GoTo UnknownError
    DoCmd.SetWarnings False
    sql = "INSERT INTO tblTrackChanges(TableName, ProductKey, FieldChanged, ValueChanged, ModifiedBy, ModifiedDate) " & _
          "Values('" & TableName & "', " & ProductKey & ", '" & FieldChanged & "', '" & ValueChanged & "', '" & ModifiedBy & "', #" & ModifiedDate & "#) "
    DoCmd.RunSQL sql
    
endProcedure:
    
    DoCmd.SetWarnings True
    Exit Sub
    
UnknownError:
    OpenErrorForm Err.Number, Err.Description, "PublicSubs", "TrackChanges", ""
    GoTo endProcedure

End Sub
 
Hi. Just as a test, try moving the table into the local front end and see if the speed changes. If so, the delay may be down to network traffic. If the speed is the same, then we can try another approach.
 
If you have a lot of indexes on the target table, inserts will take longer as the amount of data in that table increases.

Also, I would consider using CurrentDb.Execute, which does not require turning the warnings off and back on. I believe those steps may shave a few nanoseconds off the process.
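For example, the insert could be run like this (a minimal sketch, reusing the sql string already built in the procedure):

Code:
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Execute does not trigger the confirmation prompts, so SetWarnings is not needed
    ' dbFailOnError makes a failed insert raise a trappable error instead of failing silently
    db.Execute sql, dbFailOnError

With that in place, the DoCmd.SetWarnings calls can come out entirely.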

Have a look for speed tests in the sample databases for some comparisons of different query methods.
 
I would try a version of the procedure that passes its parameters ByRef (see the sketch below). It could be faster if the passed values are on the large side, although all the parameters here seem smallish. Not that I think you'd notice any difference from cycling the warnings, but I do agree it's not the best approach. At least you properly constructed an error handler around it. So many do not.
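For illustration, that would just mean changing the declaration line, something like this (a sketch only; the body stays the same):

Code:
Sub TrackChanges(ByRef TableName As String, ByRef ProductKey As Long, ByRef FieldChanged As String, _
                 ByRef ValueChanged As String, ByRef ModifiedBy As String, ByRef ModifiedDate As Date)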
 
Try timing it.
I use this to log thousandths of a second:

Code:
Public Declare Function my_gettickcount _
    Lib "kernel32" Alias "GetTickCount" () As Long

Store the count before and after, and see how long the operation takes.
Maybe the bottleneck is somewhere else.

Then try some of the suggestions. I use this to execute a SQL statement:
CurrentDb.Execute sqlString, dbFailOnError

Code:
Dim timers(1 To 2) As Long

timers(1) = my_gettickcount
' run the insert statement here
timers(2) = my_gettickcount

MsgBox "Time taken = " & (timers(2) - timers(1)) / 1000 & " seconds"
 
As long as we're talking speed here, note that when the line

CurrentDb.Execute mySqlHere

is processed, a copy of the db object is created and persists only as long as it takes to process the line. I would declare a db object if there will be multiple uses of CurrentDb, so that one instance of it persists from the beginning to the end of a process (see the sketch below). In fact, it might be better to just get in the habit of doing so. Future code modifications could end up with multiple actions where each one creates a new db object. It's not as if simply using CurrentDb saves any memory allocation, as that allocation will occur whether you Dim a variable for it or not.
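Something like this, for illustration (a sketch only; sql1 and sql2 are placeholder statements, not from the OP's code):

Code:
    Dim db As DAO.Database
    Set db = CurrentDb    ' one database object reused for every Execute call

    db.Execute sql1, dbFailOnError
    db.Execute sql2, dbFailOnError

    Set db = Nothing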
 
Code:
endProcedure:
    
    DoCmd.SetWarnings True
    Exit Sub
    
UnknownError:
    OpenErrorForm Err.Number, Err.Description, "PublicSubs", "TrackChanges", ""
    GoTo endProcedure

End Sub

Just as a point of clarification, wouldn’t a Resume endProcedure be required to properly handle and clear the error?
 
Concur with NG. Using a GoTo to terminate the error handler is wrong: Resume is what clears the error and takes VBA out of error-handling mode. You WILL have issues if an error occurs. Among other things, I wonder if you can even take another event after taking the error trap and ending it that way. See the corrected exit section below.
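For reference, the exit section would then look like this (same labels as the original, only the last line of the handler changes):

Code:
endProcedure:

    DoCmd.SetWarnings True
    Exit Sub

UnknownError:
    OpenErrorForm Err.Number, Err.Description, "PublicSubs", "TrackChanges", ""
    Resume endProcedure

End Sub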
 
Agree with all the previous comments about shaving small amounts off the time.
Also with the Resume instead of GoTo in the error handling.

I have written several speed comparison test articles both here and on my website.
Particularly relevant is the article on optimising queries where I go through a number of different methods which can make a significant difference to the time taken. See http://www.mendipdatasystems.co.uk/speed-comparison-tests-8/4594556613

Hope you find it useful

EDIT I also compared 6 different methods of timing. See http://www.mendipdatasystems.co.uk/timer-comparison-tests/4594552971

From my tests, either the Timer function or timeGetTime was the most consistent/reliable. Like GetTickCount, these can measure in millisecond intervals, though in reality accuracy is around 1/60 second for each method as they are all based on the system timer. There is also a high resolution timer that measures to less than 1 microsecond … but it is less accurate!
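For a quick ad hoc test without any API declaration, the built-in Timer function can be used in much the same way (a minimal sketch; Timer returns seconds since midnight as a Single, so avoid it for anything that might span midnight):

Code:
    Dim tStart As Single
    tStart = Timer

    ' run the insert statement here

    MsgBox "Time taken = " & Format(Timer - tStart, "0.000") & " seconds"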
 
Getting back to the original question, when was the last time the db was compacted?
 
You got comments about testing this locally in your front-end file, but my question is: what is the normal back end of this database, or is it even split at all? From what you showed, we can't see the context, so we don't know what to expect as "normal."
 
