Does Access optimize VBA code?

spikepl

Local time
Today, 07:42
Joined
Nov 3, 2010
Messages
6,142
In some languages you can write utterly sloppy code, repeating unnecessary work in loops and so on, and yet the compiler's optimizer can to some extent save your skin by throwing much of the idiocy away internally.

While coding in VBA we have an interpreter rather than a compiler... but when we do Debug -> Compile, is that code optimised in any way?

I suspect not, but would appreciate confirmation one way or the other.
 
in the vba developers handbook there is an introductory section featuring a discussion between Ken Getz (the author) and MS personnel (without looking up the name) explaining a lot of how the compiler produces optimised compiled code. worth reading if you want to know more.
 
Thanks, unfortunately I do not have that book.

I had a tour around Google looking for excerpts and unfortunately the first part of the book is not available just like that. But I ran into this:

http://www.scribd.com/doc/6889711/Office-Vba-Code-Optimization

Judging by some of the 21 optimization tips, the code is taken as is, so the compiler doesn't seem to do much optimization, and one should therefore write sensible code.
 
the article explains a lot of the stuff it does, like constant folding and the like.

i think it must be a pretty smart compiler.
 
Okay - maybe I should invest in that book.

But until that time, the immediate indication from the article I did find is that it is pretty dumb and that I should take care in coding. If I follow that doctrine then I cannot lose, even if I am wrong.
 
well i think it's worth designing efficient algorithms and looping constructs for things.

i find it hard to get out of the habit, but it explains why, eg,

If Len(mystring) = 0 is more efficient than If mystring = "".
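A minimal sketch of how one might time the two empty-string tests in the Immediate window (the variable names and iteration count are invented for illustration, not from the article; `Timer` has coarse resolution, so a large loop count is used to make the difference measurable):

```vba
' Compares the two empty-string tests from the discussion above.
' Len() reads the string's stored length, while = "" forces a
' string comparison, which is why Len() tends to win.
Public Sub CompareEmptyStringTests()
    Dim myString As String
    Dim i As Long
    Dim hits As Long
    Dim t As Single

    myString = ""

    t = Timer
    For i = 1 To 1000000
        If Len(myString) = 0 Then hits = hits + 1
    Next i
    Debug.Print "If Len(myString) = 0 : "; Timer - t; " seconds"

    t = Timer
    For i = 1 To 1000000
        If myString = "" Then hits = hits + 1
    Next i
    Debug.Print "If myString = """"     : "; Timer - t; " seconds"
End Sub
```

Run it a few times and ignore the first run, since disk caching and background activity in Windows can skew a single measurement.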

i bought a used copy of the book from amazon for about £5 incl delivery. an old version, but extremely useful. loads of code. eg, the used copy of the top one here is only £3 odd.

http://www.amazon.co.uk/s/?ie=UTF8&...hvpone=&hvptwo=&hvqmt=b&ref=pd_sl_uit262sh0_b
 
Thanks - that's a reasonable buy :D
 
That article is probably the most interesting thing I have read in a while on programming. I love it when people actually measure the performance.
In each case I ran the tests on a Pentium Pro 200 with 64MB of memory (which certainly removes the lack-of-hardware issue from the possible set of problems).
:D

On modern hardware, enhancing the human readability and maintainability of the code is probably more important than the raw speed.
 
One thing which is seldom asked about these timing tests is: does it make any real difference?

Part of the answer to that question requires a definition of ‘real difference’, yet how many times do we see ‘real difference’ defined?

Windows is interactive with people, it is event driven. If people do not notice the difference then is the difference real? Sure we can time it to the millisecond but will they notice it?

Consider feedback to the user via the screen. The screen may only update every 16 or 20 milliseconds because it is governed by the screen refresh rate and that is, generally, governed by mains frequency, 60/50 Hertz. So in the unnoticeable time of one screen refresh, a CPU with a one-nanosecond clock has performed at least 16 million clock cycles. Even if the data bus is running 10 times slower than the CPU clock, that still amounts to 1.6 million memory accesses which go unnoticed by the user.

In the article by Ken it is stated that every little bit counts; but does it really?

An example from some years ago:-
It was asked on this site; what is the fastest way to write to a Text Box on a Form?
Eight methods were put forward as to how to do it. The test indicated that there was less than about 20% difference between the eight methods. The real surprise was that by turning off screen echo, all the times were about 1800% faster.
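The screen-echo point can be sketched like this in Access VBA (the form name `frmDemo` and control name `txtStatus` are invented for the example; the real gain comes from suspending repaints, not from how the write is coded):

```vba
' Writes repeatedly to a text box with screen echo suspended.
' The repaint cost normally dwarfs any difference between write
' methods, so Application.Echo False is the big lever here.
Public Sub FastTextBoxWrites()
    Dim i As Long
    On Error GoTo CleanUp          ' ensure echo is always restored
    Application.Echo False         ' suspend screen repainting
    For i = 1 To 10000
        Forms("frmDemo")!txtStatus = "Row " & i
    Next i
CleanUp:
    Application.Echo True          ' never leave echo switched off
End Sub
```

The `On Error GoTo CleanUp` matters: if echo is left off after an error, the whole Access window appears frozen to the user.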

Another aspect is when it stated that smaller code does not necessarily run faster.
I’ll go further and say… generally speaking, the larger the code the faster it will run.
As almost always there is a caveat here: we must assume that both sets of code for testing are written with the same degree of functionality.

We start with the understanding that the fastest code of all is the code that is seldom executed.
We add more testing code to determine if the slow bulk of code really needs running. We add an If statement based on a Boolean expression on a single line of code. If that If statement is True we then move to the next inner If statement in a second test of increasing complexity, and so forth. We do not combine all tests on the one line because they will all need to be evaluated. So, by increasing the size of the code, we evaluate if the slow bulk of code really needs to be executed. And we do that by starting with the fastest outer test and progressing to the most complex inner test.

-----------

The tests by Ken are still valid but we now have to ask; are the results still real?

They are VBA versus VBA tests and, under most circumstances today, I think not.
I think that "every little bit counts" actually counts for very little.

Chris.
 
@Galaxiom & ChrisO

I agree that since the cost ratio of hardware/meatware is steadily decreasing, the performance issue is in many instances entirely academic.
 
Code within a loop is more critical than code that executes only once.

One thing they didn't mention, which was very important when I was creating transactions that required 2-second response time in an environment with thousands of concurrent users and millions of rows of data, is avoiding physical I/O whenever possible. In many cases that translates to letting the database engine report errors rather than proactively looking for duplicates. The assumption is that the vast majority of the time the user will not be entering a duplicate value, so doing a read to look for the value is a waste of time since most of the time no dup will be found. It is better to do the insert/update, let the database engine report an error, and deal with the error instead of preventing it.
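A hedged sketch of that "insert first, handle the error" pattern in Access VBA (the table and field names are invented; error 3022 is the Jet/ACE "duplicate value in a unique index" error, and `dbFailOnError` makes `Execute` raise it instead of failing silently):

```vba
' Attempts the insert directly and traps the duplicate-key error,
' saving the extra read that a pre-check would cost on the common
' path where no duplicate exists. (A production version would use
' a parameterised QueryDef rather than string concatenation.)
Public Function InsertCustomer(ByVal CustID As String) As Boolean
    On Error GoTo ErrHandler
    CurrentDb.Execute _
        "INSERT INTO tblCustomers (CustID) VALUES ('" & CustID & "')", _
        dbFailOnError
    InsertCustomer = True
    Exit Function
ErrHandler:
    If Err.Number = 3022 Then          ' duplicate key: the expected case
        MsgBox "Customer " & CustID & " already exists."
    Else
        Err.Raise Err.Number           ' anything else is unexpected
    End If
End Function
```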
 
One thing they didn't mention, which was very important when I was creating transactions that required 2-second response time in an environment with thousands of concurrent users and millions of rows of data, is avoiding physical I/O whenever possible.

Imagine what attention to speed they have in systems that analyse stock market trading and automatically transact based on rules and patterns.

Traders are concerned with milliseconds so they can get the jump on emerging market trends. So much so that there was talk recently of putting in a new communications fibre that cut a few milliseconds off the time it took to get trading information across the Atlantic. Multimillions of dollars on hardware investment for such a small gain so it is obviously important.

I am sure they would be paying utmost attention to every speed optimisation possible. No doubt they don't use VBA.;)
 
The main thing about testing code for speed is to specify the conditions of the test.

The test conditions must remain static and we can only vary the method.
If we don’t do that then we are comparing apples with oranges and the results become meaningless.

Chris.
 
The main thing about testing code for speed is to specify the conditions of the test.

The test conditions must remain static and we can only vary the method.
If we don’t do that then we are comparing apples with oranges and the results become meaningless.

Definitely. But as the article points out, it is very hard to maintain constant conditions because we don't really know what is going on in the background in Windows.

To some extent we can compare apples with oranges so long as we are interested in properties like the basic shape but it becomes completely pointless once a few unexpected bananas are thrown in too. :D
 