Alternatives to Domain Aggregate Functions (AC2007)

AOB

Hi guys,

I have a form which has 8 textboxes representing the earliest and latest dates of records available in 4 (independent) tables. The textboxes are locked / disabled; they are purely there for informational purposes, so users know what range of dates have been imported when the form is loaded.

I was using simple domain aggregate functions as the control source for each textbox, e.g. :

Code:
=Nz(DMin("[BalanceDate]","[tblBalances]"),"")

And switching DMin for DMax (to get the latest rather than the earliest date) and changing the table and field names as appropriate.

This worked fine when the DB was standalone, but when I split it, the form takes an absolute age to load (and to recalculate after the import process is run).

Looking into it, the general advice for split databases seems to be to avoid domain aggregate functions (according to this article anyway)

But it doesn't suggest a suitable alternative? The only thing I could think of was to create a query for each date, along the lines of :

Code:
SELECT Min([tblBalances].[BalanceDate]) AS EarliestBalance
FROM [tblBalances]

And bind each textbox to its corresponding query. But this is just as slow, if not slower?

So I was wondering if anybody could suggest the 'correct' way of doing this? All I want is the smallest or largest date in one (indexed) field in each table - surely not a big ask and something which should be possible to set up to run relatively quickly, no?

Thanks

Al
 
What you can do is open a recordset to get the data you want and then populate your controls.

Where you put this code is up to you and depends on the required refresh frequency.

So if the data is static for the life of the form, then you would put it in the form's Load event. If it needs to refresh every time you change record, you put it in the Current event. If it is time-based, the Timer event, etc.

The code would be:
Code:
Dim rst As DAO.Recordset
Set rst = CurrentDb.OpenRecordset("SELECT Min(BalanceDate) AS EarliestBalance FROM tblBalances")
Me.mycontrol.Value = rst!EarliestBalance   ' write the value directly to the unbound control
rst.Close
Set rst = Nothing
Note that you may be getting several bits of data from the same table, in which case the recordset can include these as well, so it only needs to be run once, e.g.
Code:
Dim rst As DAO.Recordset
Set rst = CurrentDb.OpenRecordset("SELECT Min(BalanceDate) AS EarliestBalance, Max(BalanceDate) AS LatestBalance FROM tblBalances")
Me.mycontrol.Value = rst!EarliestBalance     ' both values come back in a single query
Me.myothercontrol.Value = rst!LatestBalance
rst.Close
Set rst = Nothing
 
... you might want to include dbOpenSnapshot or dbOpenForwardOnly with that too.
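For example, the same lookup with a read-only recordset (names as in the earlier post):

Code:
Dim rst As DAO.Recordset
' dbOpenSnapshot gives a read-only copy; dbOpenForwardOnly is lighter still
Set rst = CurrentDb.OpenRecordset("SELECT Min(BalanceDate) AS EarliestBalance FROM tblBalances", dbOpenSnapshot)
Me.mycontrol.Value = rst!EarliestBalance
rst.Close
Set rst = Nothing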
 
Thanks guys

It's still a bit sluggish (i.e. still not as quick as the unsplit version with the DAFs) but an improvement on before, to the point where I believe it will suffice!

Or more importantly, to the point where I can move on to the next performance lag I've suffered since splitting the DB. It will be a painstaking process of working through each query and module to identify a more efficient way of retrieving or passing data across the network.

(Why oh why do there have to be such performance issues with split databases!! :banghead: )
 
It's not the fact it's split - it is where the back end resides. If you have a slow network connection, you will have a slow response time. Note that domain functions are nearly always slow in comparison to using SQL.

Some things to bear in mind - not exhaustive

Keep the 'width' and 'depth' of each returned recordset as small as possible. So for example, if you have a form bound to a whole table, with filters used to 'go to' the required record, modify the form's RecordSource to include your filter as criteria instead of updating the filter. Then you are only bringing through one record rather than hundreds or thousands (see the sketch after this list).

Ensure you have indexing on all the appropriate fields. Note that some fields are not normally worth indexing (e.g. yes/no fields, lookup fields) basically where there are only a few values (yes/no only has 2 for example)

If your table design is using lookups in the field design - remove them
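On the first point, a minimal sketch (field and variable names are illustrative):

Code:
' Instead of binding the form to the whole table and filtering locally:
'   Me.Filter = "BalanceID = " & lngID : Me.FilterOn = True
' ...put the criteria into the RecordSource so only one record crosses the network:
Me.RecordSource = "SELECT * FROM tblBalances WHERE BalanceID = " & lngID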
 
Note that some fields are not normally worth indexing (e.g. yes/no fields, lookup fields) basically where there are only a few values (yes/no only has 2 for example)

Logically that is correct. However, I recall an article explaining that in the case that ALL criteria fields are indexed, then Access is able to employ a special optimiser for the query.

from http://www.fmsinc.com/tpapers/faster/

Take Advantage of Rushmore Query Optimization

Microsoft Jet uses Rushmore query optimization whenever possible. Rushmore is applied to queries run on native Access data, and on linked FoxPro and dBASE tables. Other linked data types do not support Rushmore optimization. To ensure that Rushmore optimizations are used, create indexes on all fields that are used to restrict a query's output. If you have queries that don't contain indexes on fields used in the query's restriction clause, Rushmore is not used.
 
@Spike - This quote in the FMS link is the philosophy I have followed up to now.
Don't Index Fields with Lots of Identical Data
Don't apply indexes to fields that contain much the same data. For example, indexing a Yes/No field is almost always a performance degrading operation. Similarly, if you have a number field that only contains two or three values, an index wouldn't be a good idea. To check the number of unique entries in an index, use the Access DistinctCount property. Compare this value to the number of records in the table and you can quickly see if the index is doing you any good.
But thanks for the heads up - I'll have to investigate further.
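For anyone wanting to check that, a minimal DAO sketch (table name is illustrative; run it against local tables, as RecordCount is not reliable for linked ones):

Code:
Dim db As DAO.Database
Dim idx As DAO.Index
Set db = CurrentDb
' Compare each index's unique-value count to the table's record count
For Each idx In db.TableDefs("tblBalances").Indexes
    Debug.Print idx.Name, idx.DistinctCount
Next idx
Debug.Print "Records:", db.TableDefs("tblBalances").RecordCount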

Not sure if Rushmore is used in ACE but for the OP here is a more detailed link about Rushmore for JET and further suggestions on optimising queries

http://support.microsoft.com/kb/112112
 
Thanks guys!

Some great reading material here, I will plough through it and see what I can find to help me.

@CJ_London, In response to your original suggestions, I am using all of those recommended methods (i.e. applying as much filtering in the base query; indexing appropriate fields; no lookups in any field design)

Still getting incredible performance drag (from virtually instantaneous form loads to delays of 20-30 seconds or longer) post-split. I think I can claw some performance back by redesigning some of my queries (in particular, those involving subqueries, which I've found Access struggles with, post-splitting)

I also have a neat little function (from another DB I built) to check the drive mapping on the local machine and relink the tables using the mapped name rather than the UNC - hoping I might get some performance back as there would be fewer directory levels to navigate each time. I may also try to house the backend closer to the root directory on the network to remove a couple more, although unfortunately I only have the authority to go back to within 2 or 3 folders.

What's annoying me is, while I would expect a bit of a lag on large / complex queries (of which I have several), I'm getting the same issue with very rudimentary queries. For example, comboboxes whose RowSource is a simple 2-field query (ID + Text) on a single static table with only a handful of records. I have one form with 4 such combos, each querying two fields off a single table. The form takes about 20 seconds to load up and, even more irritatingly, draws each combobox in sequence as it is initialised (so you start with a blank form and then each combobox 'appears' every couple of seconds). Making selections from each combobox also takes several seconds to update the record, making the whole process extremely cumbersome and not user-friendly whatsoever.

But in the standalone, it was super-fast and really quite neat. It's incredibly frustrating when you do all the development work (I have been working on this particular DB for 8 months solid now!) and then when you finally come to deliver it to users, the split version is just horrible.

Anyway, I'll read up on your suggested articles, try my relinking function and fiddle with some queries - if there's anything else that anybody would suggest I focus on, any advice, as always, is hugely appreciated!

Thanks again everybody

Al
 
If your project has been under development for a while, there may well be a big chunk of redundant VBA code slowing things down - have you tried decompiling/recompiling the front end? (This is not the same as compacting, which you should do after decompiling.)

Also, I presume you have compacted the back end?
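(For reference, decompiling is normally done by launching Access with the undocumented /decompile switch against the master .accdb - the paths here are purely illustrative:)

Code:
"C:\Program Files\Microsoft Office\Office12\MSACCESS.EXE" "C:\Databases\MyFrontEnd.accdb" /decompile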
 
FE is compiled - not sure how to decompile? (I thought any changes to the VBA automatically meant the FE was in a decompiled state?) BE is compacted / repaired as well.

Not sure if it has an impact but the FE is an .accde as opposed to a .accdb? I wanted to protect the underlying code and forms from accidental tampering (enough work gone into this without mischievous meddling!)
 
Nice, that's handy, I like that. Have saved the shortcut for future reference!

The master .accdb was reduced from 8.7MB to 7.0MB after a decompile / recompile / compact. The .accde stayed exactly the same size (7.5MB), which I thought was odd (but I definitely overwrote the old version from the new master - weird?)

I also did a compact & repair on the BE - size dropped from 5MB to 4.5MB (data gets added every day; I hope to build in some functionality to allow an automated C&R on the BE at regular intervals, i.e. once a week, but it's on my 'phase 2' to-do list - in the meantime I just need to C&R it manually myself out-of-hours)

No appreciable improvement in performance though
 
Woah!!

Okay, I added my table-relinking module from my other DB and call it when the DB loads. Significant improvement in terms of speed. Almost back to where it was pre-split. I guess the number of directory levels has more of an impact than I first thought?

(I put the module together when I had similar performance issues on the other DB, which is considerably less complex, after some reading and research on the importance of reducing directory levels on linked tables - but I didn't get results like this?? Even the DAFs, which I haven't removed yet, are flying along - I may even keep them as they are!)

This has put a real spring in my step let me tell you, I was foreseeing weeks of redesign to claw that performance back! It's still not as fast as the standalone, but it's not a million miles off it. It's certainly 'useable' anyway! :D:D
 
Glad you got it sorted - so what does your path look like now?
 
My design is such that the BE sits on a network drive, and I have a .bat file that makes a copy of the FE (also on the network) to the user's local machine (C:\)

The BE sits in a folder no less than 10 directory levels deep (I know... but this is the infrastructure I have to work with...!)

\\server\dir1\dir2\dir3\dir4\dir5\dir6\dir7\dir8\dir9\dir10\MyBE.accdb

However, I know my users have drives mapped from about the 6th level in, e.g.

X:\ -> \\server\dir1\dir2\dir3\dir4\dir5\dir6\

So I relink the tables to :

X:\dir7\dir8\dir9\dir10\MyBE.accdb

Before you ask, my module is generic in that the drive letter, and indeed the folder to which it is mapped, can vary from user to user.

So one person could have their X drive mapped to \\server\....\dir6 while another could have their Y drive mapped to \\server\....\dir7, but the relink process just looks for common strings between the existing links of each table (UNC) and the mapped drives on the local machine, and replaces the UNC links with the mapped versions.

If somebody doesn't have the drive mapped at all, it leaves the link as it is (which would suck for them in terms of performance but all they have to do to remedy it is map the drive)
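For anyone curious, a minimal sketch of that idea (this is not AOB's actual module - names and structure are illustrative):

Code:
Public Sub RelinkToMappedDrives()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim drives As Object
    Dim i As Integer

    Set db = CurrentDb
    ' EnumNetworkDrives returns alternating items: "X:", "\\server\...\dir6", ...
    Set drives = CreateObject("WScript.Network").EnumNetworkDrives

    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then                  ' linked tables only
            For i = 0 To drives.Count - 2 Step 2
                If InStr(1, tdf.Connect, drives.Item(i + 1), vbTextCompare) > 0 Then
                    ' Swap the UNC prefix for the mapped drive letter and relink
                    tdf.Connect = Replace(tdf.Connect, drives.Item(i + 1), drives.Item(i), , , vbTextCompare)
                    tdf.RefreshLink
                    Exit For
                End If
            Next i
        End If
    Next tdf
End Sub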
 
Interesting to see that reducing the levels by 'only' that much makes such a difference.

Have you thought about modifying your .bat to create a new mapped drive direct to the directory?
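For example, one extra line at the top of the .bat (Z: is just an illustrative letter):

Code:
net use Z: \\server\dir1\dir2\dir3\dir4\dir5\dir6\dir7\dir8\dir9\dir10 /persistent:yes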
 
Yeah I was slightly surprised as well but it is considerably quicker.

I wouldn't be overly keen on modifying individual users' mapped drives; I have a few of these DBs on the go and I like to keep them consistent where possible. I could easily run out of drive letters. Plus people get confused when they see more than 3 or 4 mapped drives in Windows Explorer... Some people struggle with the C:\...

That said, if this DB works the way I hope it will, I'm hoping to find another network server to house it, such that it can be made available to more people (in different locations), at which point I will probably have to do just that. Food for thought anyway.
 
AOB
Forgive me if this has already been said in an earlier post.
I had some success in improving performance by creating a persistent connection to the back end. Also, having forms with no Record Source, but setting it in the form's OnOpen/OnLoad event (I can't remember which I used). I did the same with all list box and combo box controls: no Row Source property at design time, but set in the form's OnOpen/OnLoad as well.
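A minimal sketch of both ideas (the path, table and control names are illustrative):

Code:
' In a standard module - holding this Database object open for the life of
' the application keeps the back end's lock file open, so later queries
' skip the repeated file open/close overhead
Public dbPersistent As DAO.Database

Public Sub OpenPersistentConnection()
    Set dbPersistent = DBEngine.OpenDatabase("X:\dir7\dir8\dir9\dir10\MyBE.accdb")
End Sub

' In the form's module - bind the form and its combos only once it loads
Private Sub Form_Load()
    Me.RecordSource = "SELECT * FROM tblBalances"
    Me.cboAccount.RowSource = "SELECT AccountID, AccountName FROM tblAccounts"
End Sub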
 
