Better Form Performance

NauticalGent

Good morning AWF,

I wavered on whether to post in this forum or in the Forms section...this one seems more appropriate. If the Admins see it differently, please feel free to move it.

As I have stated in earlier posts, I have inherited a MONSTER project. I have been slowly addressing the most pressing issues and working through the list.

The original developer (there have been at least 3 that I can determine) was VERY code-centric and had a semi-decent grasp of form and table design. The table structure is next after I tackle this one - the form in question seems to be corrupting the BE on a regular basis.

The form has 30-some-odd controls, half of them text boxes with DLookup control sources. I will address that too!

OK, rant over; my question:

Access 2010, Split DB, BE is in a shared folder on our LAN.

The module behind this form contains over 1,800 lines of code! The main sub has about 1,000 lines by itself and is the heart and soul of what this form is for.

The form allows the user to parse formatted Outlook e-mails that are linked to the DB. It performs 4 to 5 loops (depending on the data), assigning text to variables and writing it to the appropriate tables.

As a side note, there is no Option Explicit line, so NO variables are declared...I will be fixing that too.

My question is: Would it improve the performance and perhaps make the process more stable if I made it more modular? For example, if I were to make each loop a separate Procedure or even a separate Module, would it make a difference or is having all the code in one spot the best practice?

As always, thanks in advance...

NG
 
That sub's too big!

Sent from my SM-G925F using Tapatalk
 
I agree...just on principle alone. So it is safe to assume making it modular would be beneficial?
 
It wouldn't make a difference. But it will make it much slower because of the additional calls being made to execute those modules you create.
The advantage is that it is easier to debug, since the code is already broken down.
You can always add a comment marking each step, i.e.:

Code:
'1. process 1 <your comment here>
...<code>
...
...
'2. process 2 <comment>
...<code>
...
...
 
So it is safe to assume making it modular would be beneficial?
Probably beneficial in terms of readability, and perhaps reuse of some code, but I doubt it will have a performance benefit unless you discover 'better' ways of doing things (which would potentially be easier if the code is modularised) - but then that is not the same thing, not that it matters :)

Keeping the code in one place won't make a difference to performance; simplifying the code usually does, because there is less to interpret.
 
Good stuff guys, thanks a lot. With this in mind I can focus on more pressing matters.

Regarding the form in question, I guess the thing to do would be to research effective parsing techniques to determine if there is a way to streamline that process.

Thanks again!
 
But it will make it much slower because of the additional calls being made to execute those modules you create.

This was a concern for me as well. It would be the equivalent of putting down what you were doing, going off to do something else and then coming back to where you were.

But do you think it would be that big of a performance hit?
 
But it will make it much slower because of the additional calls being made to execute those modules you create.
This was a concern for me as well

Personally I've never noticed a difference - the code is compiled either at run time for a .accdb, or in advance when a .accde is created. At worst it is a bit like a query needing to create an execution plan before executing - a saved query (i.e. a QueryDef) will already have the plan, whereas an SQL string executed in VBA needs to create the query plan each time it executes.
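
To illustrate the difference, a minimal sketch - the query and table names (qryUpdatePrices, tblProducts) are invented for illustration:

Code:
Sub CompareQueryExecution()
    Dim db As DAO.Database
    Set db = CurrentDb
    
    'Saved QueryDef: Access has already worked out the execution plan
    db.Execute "qryUpdatePrices", dbFailOnError
    
    'Inline SQL string: the plan is built from scratch on every call
    db.Execute "UPDATE tblProducts SET Price = Price * 1.1", dbFailOnError
End Sub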
 
. . . But will make it much slower because of additional call being made to execute those modules you will make.
Here is code that runs timings on three methods of doing a multiplication on two doubles. In one case the subroutine call executes FASTER than the embedded routine. Can you see why?
So calling a subroutine, I think we can conclude, is no slower than assigning a value to a variable.
Code:
Private Const MAX As Long = 1000000

Private Sub TestSubroutineTimings()
    Const P1 As Double = 1.23456789
    
    Dim clock As Single
    Dim result As Double
    Dim i As Long
    Dim n1 As Double, n2 As Double
    
    'Method 1: all inline - two assignments plus a multiply on every pass
    clock = Timer
    For i = 1 To MAX
        n1 = P1
        n2 = P1
        result = n1 * n2
    Next
    Debug.Print Format(Timer - clock, "0.0000") & " secs, no sub"
    
    'Method 2: a single call - the whole loop runs inside the sub
    clock = Timer
    DoMultiplyLoop P1, P1
    Debug.Print Format(Timer - clock, "0.0000") & " secs, loop in sub"

    'Method 3: one function call on every pass - MAX calls in total
    clock = Timer
    For i = 1 To MAX
        result = DoMultiply(P1, P1)
    Next
    Debug.Print Format(Timer - clock, "0.0000") & " secs, loop calls sub"
    Debug.Print "***"
    
End Sub

Sub DoMultiplyLoop(n1 As Double, n2 As Double)
    Dim i As Long
    Dim result As Double
    
    For i = 1 To MAX
        result = n1 * n2
    Next
End Sub

Function DoMultiply(Num1 As Double, Num2 As Double) As Double
    DoMultiply = Num1 * Num2
End Function
Copy and paste it into a standard module and see for yourself what is fast, and what is not, over 1 million iterations.
 
1000 and 1800 lines of code in one routine seems excessive. I would suggest reviewing the code and breaking it into distinct tasks/functions with some comments (MZ-Tools, perhaps, to add a header to each procedure). My view is that it won't make a whole lot of difference in speed/performance, but it will make the code more maintainable and "testable". You can then do some tests and improve performance if need be. As others have said, you will probably find some reusable pieces.
The major task is to find out what those 1000 and 1800 lines are actually doing (especially in business terms).

Option Explicit is a necessary step.
Prudent use of Debug.Print statements to test "uncertain areas" can help with tuning and with understanding the code.
As MarkK suggests, do some timing and adjust based on knowledge - no guessing.

Good luck.
 
Guys, as always, I appreciate your insight and advice. I will adapt MarkK's benchmark code once I get all set up and let you know how it goes.

May just need you all again so don't wander off too far...

NG
 
Just wanted to ask a 2 cent question.
RE: Split DB, BE is in a shared folder on our LAN.

Please describe where the Front-End is located.
Is the Front-End located on the LAN or does each PC have a copy of the Front-End?

Observation:
One of my FE DBs is around 90 MB (compacted). Many of the forms carry a lot of business rules, as it supports a Regulatory activity.
The code at all levels is absolutely huge and complex.
Think of a tax-code program: if one option is selected and several text box values are evaluated, then an optional textbox or code path may or may not come into play for that situation.

If it is run from a copy on a LAN, it runs very, very slow.
If a copy of the master is put onto the PC workstation, it runs very fast.

Unlike Excel or Word, which load the entire file from the LAN into workstation RAM, Access only loads a portion of the project into workstation RAM. Those round-trips over the LAN for code can cost a lot of time.

My example above is actually using Thin-Client: it loads a copy of the Access project on each virtual machine, which links to a SQL Server BE on a server rack close by.

http://www.kallal.ca/Articles/ThinClientsand.net.html
 
Gent, my method of speeding up code was to find ANY cases where a recordset was involved and see if by some gyration I could convert that to a sequence wherein I generated a query - even if it had to be dynamically built - and then executed it using a DAO database object and the .Execute method.

The reason is that EVERY INSTRUCTION of that VBA is interpreted, but some things take longer than others, and where you get "killed" in performance is a nested loop. Nest two or three loops and the number of times through the innermost loop will positively eat your socks.

If you can change a recordset "FOR EACH" loop into an SQL statement that can be handled by a (DAO database).Execute, the code in the implied loop (the one that has to exist in the .Execute query processor) is true-compiled, not pseudo-compiled like VBA. The best pseudo-code emulator I ever saw was no better than about 20 to 1 in terms of compiled instructions per pseudo-instruction (because, of course, you have to take breakpoints and calls and argument management into account for the pseudo-code).
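
As a concrete sketch of that swap - tblMessages and the Processed field are invented names, not from your app:

Code:
Sub MarkProcessed_Slow()
    'Row-by-row recordset loop - every pass through it is interpreted VBA
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("SELECT * FROM tblMessages WHERE Processed = False")
    Do While Not rs.EOF
        rs.Edit
        rs!Processed = True
        rs.Update
        rs.MoveNext
    Loop
    rs.Close
End Sub

Sub MarkProcessed_Fast()
    'Same result as one set-based statement - the loop happens inside
    'the compiled query engine instead of in pseudo-code
    CurrentDb.Execute "UPDATE tblMessages SET Processed = True WHERE Processed = False", dbFailOnError
End Sub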

As to "big subroutine" vs. "bunch of smaller subroutines" - I can only say that if you CAN make it more modular, do so. Even you end up not finding anything "wrong" with the code when you are finished, it is my belief (and I'm not trying to sell you short here) that you can't possibly see inefficiencies in 1800 lines of code because you will have trouble grasping all 1800 lines as a gestalt. Most people cannot imagine more than 5, 6, or 7 separate objects at a time. I hesitate to guess how many pieces of code you would have to hold in your mind to analyze 1800 lines' worth of logic.

This is clearly a case where Caesar had it right - divide and conquer!
 
Please describe where the Front-End is located.
Is the Front-End located on the LAN or does each PC have a copy of the Front-End?

Each user has a copy of the FE on their PC. The program itself runs fast enough. The issue is that when the appropriate work-center does their periodic import, there can be anywhere from 50-250 messages to process. It doesn't really take all that long but there is a lot of activity a user does not see.

There are 4 RSs being looped through, with records being updated, inserted or deleted...a very volatile condition from my viewpoint. As I have shared in other posts, the BE has been corrupted three times in the month that I have been here. From what I have been told by the "natives", restores are common.

The first thing I did was add a module that forces everyone except the initiator to log off and exit the application (Access) before the import is run. So far so good, but once I looked under the hood and saw all the *stuff* going on, the "good-idea fairies" attacked with a united front.

As far as Thin Client or a SQL Server goes, that is a bridge too far. This is on a classified government system, and getting that kind of hardware/infrastructure literally takes an act of Congress. Right, Doc_Man??!?

The activity that handles our IT issues (NCTS) is having our SharePoint server upgraded and has agreed to give me some real estate, and will even have its Subject Matter Expert (SME) assist me with migrating the BE - unfortunately, the site licenses and the Navy's own security regs will not allow me to publish the application to the web; yes, there IS such a thing as a classified Web.

Went down a rabbit-hole here but felt like I should elaborate some...
 
Gent, my method of speeding up code was to find ANY cases where a recordset was involved and see if by some gyration I could convert that to a sequence wherein I generated a query - even if it had to be dynamically built - and then executed it using a DAO database object and the .Execute method...

...If you can change a recordset "FOR EACH" loop into an SQL statement that can be handled by a (DAO database).Execute, the code in the implied loop (the one that has to exist in the .Execute query processor) is true-compiled, not pseudo-compiled like VBA.

Doc, I am flattered that you think I have a CLUE as to WTH you are talking about!

Actually, I do understand the concept but applying it is where, as you so eloquently stated, "my socks get eaten".

Just how does one change a 'FOR EACH' loop into an SQL statement? I thought that compiled VBA code was the most basic, streamlined set of instructions at our disposal? Very intrigued but also clueless, so preach on!

BTW, this is the same application that earned you the Challenge Coin almost a year ago.
 
Ok, after a GDB session, I think I know what you were getting at:

https://msdn.microsoft.com/en-us/library/office/ff197654.aspx

This bit is what drove it home:

"The Execute method is valid only for action queries. If you use Execute with another type of query, an error occurs. Because an action query doesn't return any records, Execute doesn't return a Recordset. (Executing an SQL pass-through query in an ODBCDirect workspace will not return an error if a Recordset isn't returned.)"

I misunderstood what you were saying; for some reason I thought you were alluding to doing away with the loop process and somehow replacing it with the db.Execute method.

If I understand this correctly, I would replace my DoCmd.RunSQL code with the .Execute code...?
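
Something like this, if I have it right? A minimal sketch - tblStaging, MsgID and the SQL string are invented stand-ins for whatever the code actually builds:

Code:
Dim strSQL As String
strSQL = "DELETE FROM tblStaging WHERE MsgID = 12345"

'Old way: goes through the Access UI layer, with confirmation prompts
DoCmd.RunSQL strSQL

'New way: straight through DAO - no prompts, and dbFailOnError raises
'a trappable error (and rolls the statement back) if anything goes wrong
CurrentDb.Execute strSQL, dbFailOnError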

It's funny, at work they all think I am some type of Access Demi-God but when I come here I feel Thick as a Brick.

Maybe I should change my user name to ThickAsBrick....?
 
just a thought

Whether 1800 lines of code is right or not is not a given; it depends on what the form is doing.

I think RX alluded to this.

Now, whether the form design is optimal....
You mention a lot of controls based on lookups?

So is the form unbound, or could these lookups be included in the form's RecordSource query?
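
For instance, instead of each text box carrying its own DLookup control source, a join in the RecordSource can return the same values in one hit. A sketch - tblOrders, tblCustomers and the field names are invented:

Code:
'Per-control lookup, repeated on every text box and re-run for every record:
'    =DLookUp("CustName", "tblCustomers", "CustID=" & [CustID])

'Joined RecordSource instead - bind each text box straight to a field:
'    SELECT o.*, c.CustName, c.CustPhone
'    FROM tblOrders AS o INNER JOIN tblCustomers AS c ON o.CustID = c.CustID;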

If the form works, you might be spending a lot of time on something that is not that important at present.

Reengineering code is not to be taken lightly.

By all means put Option Explicit in, and see if it still works. It may not, since undeclared variables will have been treated as Variant data types; by declaring and typing them you may get mismatch errors. You will also need to consider the scope of any variables you do declare.
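
A sketch of the sort of thing that surfaces - the variable name and values here are invented:

Code:
Option Explicit    'forces every variable to be declared

Sub ParseAmount()
    'Before: Amount was never declared, so it was a Variant and would
    'silently store the string "N/A" from a malformed e-mail.
    Dim Amount As Currency
    
    Amount = "N/A"    'now run-time error 13 (Type Mismatch) - the bug surfaces here
End Sub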
 
I was unclear. I inferred from the nature of the form that you make some selections and then do some serious processing that seems to take forever. I say inferred because if there are no loops, even 1800 lines of linear code isn't going to slow you down that much.

If you are updating, inserting, or deleting something in a recordset loop, whether it is a DO WHILE or DO...UNTIL or some other construct like that, you can often gain tremendous efficiency if it is at all possible to express the logic in SQL. Perhaps if you can construct a series of preparatory queries and one that "gets the job done", you can still gain. But if this form is having speed problems and yet doesn't process bulk data once you make your selection, then my comments were off target.

I may have confused you, so let me get slightly pedantic. Gent, there are only three or four ways (simplified here) to step through records.

1. Do individual DLookups in a loop based on variables.
2. Open a recordset and step through it in a loop.
3. DoCmd.RunSQL (for action queries)
4. (dao).Execute (for action queries)

Those are in order of speed, slowest first. What I was talking about with compilers and pseudo-compilers is that VBA is not true-compiled. It generates pseudo-code, and an emulator processes the pseudo-code. Even compiled VBA has no machine-level instructions in it (that I know of). Compare with VB6, which IS a true compiler and can produce machine code for a specific machine.

The problem faced by VBA compilation is that Windows and Office have to run on machines that don't always have the same machine code set available. Some older machines simulate SINGLE and DOUBLE variable handling, yet will run Office 32-bit versions perfectly well. Some don't support the new QUAD (integer) data type available in 2013, but that version of Office still runs without crashing. (QUAD might have been available in 2010 or 2007, but I didn't happen to look because I didn't need it when those versions were current.) I'm sure that other examples can exist as well.

The trick is that by interpreting the code, the interpreter can execute the lowest common denominator - pseudo-code - by first looking up what the hardware CAN do, and then calling simulations/emulations of what the hardware CAN'T do. Usually they do this based on some math library that gets loaded based on something in the Hardware Abstraction Layer.

Having said that, the next step is to understand that built-in functions are part of a .DLL file that contains instructions compiled down to the machine level. Methods are often compiled code, too! We can have many layers between the graphic interface and the machine code, and your best efficiency comes from getting closer to the machine code by calling things written at that layer.

Therefore, while there is absolutely no substitute for the versatility of VBA, there is a penalty for flexibility. The trick is to find the right balance.

The OTHER part of the problem, reducing the size of the 1800 lines of VBA, is more a matter of maintainability, not efficiency. However, if you see enough "stuff" going on repeatedly in those 1800 lines, you might get a handle on how to reduce the size of the mess to a manageable level.

I'm going to look over your original description at the top of this thread and think about anything else I might suggest. Just remember, the more we know about what you are trying to do, at least in broad-brush terms, the better we can fill in some of the blanks for you.
 
Nothing in the main comment jumped out at me specifically, but something else did...

to research effective parsing techniques to determine if there is a way to streamline that process

The question with parsing will always be a matter of predictability. Your parser logic can only shrink if you can make your input less random in some way. When you get into parsing theory, usually based on finite automata, you eventually recognize that, explicitly or implicitly, your parser has a bunch of states. For instance, in the simplest terms, states like "Found address line" or "Found keyword 'submarine'" or stuff like that. The only way to make parsing work more efficiently for you is to have fewer states OR to grab more data from within a given state. And THAT happens when you have some level of control over the format of what is sent to you, or when you have precise enough specifications of what was sent that you don't need to look for so many flippin' keywords to find something.
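
As a crude illustration of the state idea (a sketch only - the states and markers here are invented, not your NMG format):

Code:
Sub ParseMessageBody(strBody As String)
    Dim varLines As Variant
    Dim i As Long
    Dim strState As String
    
    varLines = Split(strBody, vbCrLf)
    strState = "SEEK_HEADER"    'start state
    
    For i = LBound(varLines) To UBound(varLines)
        Select Case strState
            Case "SEEK_HEADER"
                'fewer keywords to test means fewer states and a faster parse
                If Left$(varLines(i), 5) = "FROM:" Then strState = "IN_HEADER"
            Case "IN_HEADER"
                If varLines(i) = "BEGIN BODY" Then strState = "IN_BODY"
            Case "IN_BODY"
                If varLines(i) = "END BODY" Then Exit For
                'grab data here - a predictable format lets one state do more work
        End Select
    Next
End Sub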

Therefore, the next question is, do you have any say whatsoever over the stuff sent via Outlook? And are you parsing the text body of the message, an attachment, or what?
 
Therefore, the next question is, do you have any say whatsoever over the stuff sent via Outlook? And are you parsing the text body of the message, an attachment, or what?

Doc,

I sent you a PM. If you have SIPR access, I can send you the code. Also, the "e-mails" are actual NMGs from OIX so the format is fairly constant.
 
