A Function that returns a Recordset

@dalski Although likes are nice, they are not helpful. Please tell people whether my last response solved your problem and you are going to switch to domain functions, OR you are going to remove the repeating group.
I cannot see how a domain function is the route when these queries are getting executed tens of thousands of times, against recordsets of around 10k records. They are in around 30 different locations (the callers, I mean). Maybe an idea to submit a request to site development to remove the like button :p - I know you really don't like that thing, but it seems so rude not to say thanks with a like after your generosity.
Not possible.
In my amateur perception it just seems so inefficient to create a query for each individual field one wants returned, & that's what led me to this thread. It's looking up an ID & returning one of several fields I need, dependent on where it is executed from.

Are you returning one record with lots of fields? If so, a type can make things a lot easier to read and manage.
Code:
Type ResourceRecord
  Description As String
  BuyRate As Currency
  DiscountPercent As Double
  ResourceType As String
  UnitUse As String
End Type
Exactly right, I'm trying to replicate your masterful Treeview, but with a Subform. Trying to avoid recursion by using structured logic with SQL.

1 - I'm using a class, & class composition (sub-classes), to replicate what is being read on the current record of the subform, in lieu of your type. I know we don't have inheritance in VBA, but I'm replicating from some of your awesome tutorials & much reading/experimenting. That seems reasonable for a single record & gives me the benefit of encapsulation to reduce complexity.
2 - So that's fine for the single record being read in the Subform, & the classes provide methods for sorting & adding records... But to maximise speed I'm using queries to look up the headers... instead of your recursion.
 
I did not realize until now that you're working with hierarchical data. This is one area where SQL Server really shines, with elegant support for it. If SQL Server is an option, check this out.
Note: Access does not support the hierarchyid data type, so you'll work with it in stored procedures, and return this value as a short string.
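For readers who do end up on SQL Server: even without the hierarchyid type, a recursive common table expression can walk a whole hierarchy in one statement. A minimal sketch against a hypothetical table tblNode(NodeID, ParentID, NodeName):

```sql
-- Hypothetical schema: tblNode(NodeID, ParentID, NodeName),
-- where top-level rows have ParentID = NULL.
WITH NodeTree AS (
    -- anchor: the root rows
    SELECT NodeID, ParentID, NodeName, 0 AS Depth
    FROM tblNode
    WHERE ParentID IS NULL
    UNION ALL
    -- recursive step: join children to rows already found
    SELECT n.NodeID, n.ParentID, n.NodeName, t.Depth + 1
    FROM tblNode AS n
    INNER JOIN NodeTree AS t ON n.ParentID = t.NodeID
)
SELECT NodeID, ParentID, NodeName, Depth
FROM NodeTree;
```

Access's own SQL dialect has no recursive CTEs, so a statement like this would have to run server-side, e.g. via a pass-through query.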
 
Thanks Tom, I've spent my life savings on the project, quit my job & am about to go bankrupt :ROFLMAO:. The long-term plan is to go MSSQL Server/Azure, but for now I just need to get it working & get back to work before I am homeless. :eek: That's a great link, so thank you.
 
In my amateur perception it just seems so inefficient to create a query for each individual field one wants returned
But there is something wrong with the logic behind what you are doing. Whatever you are trying to do with the record should probably be done all at once rather than broken into tiny pieces. So, either the record contains a repeating group or a mishmash of unrelated fields or your processing logic is strange.

You say you are using querydefs because of their efficiency, yet you also want to vary them. Don't use querydefs for that; do it the way some of the others do, by embedding your SQL and executing the hard-coded strings. You can't have it both ways. The POINT of using a querydef is that the saved execution plan makes the query more efficient for the second and subsequent uses. But if you change the structure of the query, then the execution plan needs to be rebuilt. The two requirements are incompatible, and that is that.
 
Thanks for the input; it cannot be done at once because the process I'm replicating is recursion. 'Repeating group': I'm not exactly sure what you mean by this, but as a regular user I'm sure you're familiar with MajP's Treeview. That's what I'm up to, but with a Subform, & as I said, instead of recursion I am using structured logic with SQL.

1 -
You say you are using querydefs because of their efficiency, yet you also want to vary them. Don't use querydefs for that; do it the way some of the others do, by embedding your SQL and executing the hard-coded strings. You can't have it both ways. The POINT of using a querydef is that the saved execution plan makes the query more efficient for the second and subsequent uses. But if you change the structure of the query, then the execution plan needs to be rebuilt. The two requirements are incompatible, and that is that.
The QueryDef is being used tens of thousands of times. It is a parameterized query. If the point you are making is that the QueryDef gets recompiled when a different parameter is fed to it, then I get your point now, so thank you. But I did not think that was the case, from the book or from your previous posts, so I don't think this applies here. The perception I got from the book & your posts is that the benefit is saving recompilation of the actual QueryDef itself, NOT of the parameter; the query structure is the same, just with a different parameter. As it is being executed so many times, every benefit is desired.
So am I reading you right that you are saying a parameterized query loses all benefit of pre-compilation when fed a new parameter?
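For concreteness, the pattern being debated (open the saved QueryDef once, then feed it a new parameter per lookup) could be sketched like this; the query name "qLookupResource", the parameter name "prmID", and the field name are hypothetical:

```vba
' Sketch only: reuse one saved parameterized query across many
' lookups instead of rebuilding SQL strings each time.
' Assumes a saved query "qLookupResource" with parameter [prmID].
Public Function LookupDescription(ByVal ID As Long) As String
    Static qd As DAO.QueryDef        ' created once, reused on later calls
    Dim rs As DAO.Recordset

    If qd Is Nothing Then
        Set qd = CurrentDb.QueryDefs("qLookupResource")
    End If
    qd.Parameters("prmID") = ID      ' only the parameter value changes
    Set rs = qd.OpenRecordset(dbOpenSnapshot)
    If Not rs.EOF Then LookupDescription = rs!Description
    rs.Close
End Function
```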

2 -
But there is something wrong with the logic behind what you are doing. Whatever you are trying to do with the record should probably be done all at once rather than broken into tiny pieces. So, either the record contains a repeating group or a mishmash of unrelated fields or your processing logic is strange.
I can't see how recursion could be done with a single SQL statement. Again, I'm not getting defensive; I have nowhere near your technical ability. But maybe the complexity of what I'm doing is not clear; nor would it be with the limited info I have provided.
 
Unless you're going to be using each of those functions multiple times, seems like overkill.
Simpler and faster than a bunch of DLookups.

Code:
Option Compare Database
Option Explicit

' Sample Class to search recordset and return values

Private Type MyTableType
    qd As DAO.QueryDef
    rs As DAO.Recordset
End Type
Private This As MyTableType

Private Sub Class_Initialize()
    ' Do nothing
End Sub

Private Sub Class_Terminate()
    With This
        If Not .rs Is Nothing Then
            .rs.Close
            Set .rs = Nothing
        End If
        If Not .qd Is Nothing Then
            Set .qd = Nothing
        End If
    End With
End Sub

Public Property Let LookupID(ByVal PrimaryKey As Long)
    With This
        If Not .rs Is Nothing Then
            .rs.Close
            Set .rs = Nothing
        End If
        If .qd Is Nothing Then
            Set .qd = CurrentDb.QueryDefs("MyTableQuery")
        End If
        .qd.Parameters("ID") = PrimaryKey
        Set .rs = .qd.OpenRecordset()
        If .rs.RecordCount = 0 Then
            .rs.Close
            Set .rs = Nothing
        Else
            .rs.MoveFirst
        End If
    End With
End Property

Public Property Get Description() As String
    If Not This.rs Is Nothing Then
        Description = This.rs.Fields("Description")
    End If
End Property

Public Property Get Location() As String
    If Not This.rs Is Nothing Then
        Location = This.rs.Fields("Location")
    End If
End Property

Public Property Get SomeOtherField() As String
    If Not This.rs Is Nothing Then
        SomeOtherField = This.rs.Fields("SomeOtherField")
    End If
End Property
 
Exactly right, I'm trying to replicate your masterful Treeview, but with a Subform. Trying to avoid recursion by using structured logic with SQL.
Hope you're not planning on getting rich by selling it. There is almost no market for stuff like that in the Access community. If that isn't why you are doing it then I would question why you are wasting time reproducing something that works very well. You'd finish the project a lot sooner if you just used MajP's version and concentrated on your actual functional requirements.
 
I can't see how recursion could be done with a single SQL statement.
I've handled standard BOM processing with simple queries when the maximum number of levels is known. This was of course before SQL Server added the hierarchical features. Today, I would use SQL Server if that was the problem I needed to solve, because it has the best tools for the job. Sadly, Access is stuck in a time warp as far as SQL is concerned. Until MS added the abomination data types in 2007 to make Access compatible with SharePoint (and less compatible with SQL Server et al), I don't recall ANY changes to its SQL, so Access is stuck at the basic ANSI SQL-92 level, I believe. You can overcome that by using pass-through queries if you need to. The only real issue with pass-through queries is that you can't update a form bound to one, and I think you can't bind subforms to pass-throughs at all, though that may no longer be true. So that can limit you to unbound forms or in-memory DAO recordsets that you bind to the form.
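For the fixed-depth case mentioned above, the classic Access-SQL approach is one self-join per level. A rough sketch, assuming a hypothetical tblBOM(PartID, ParentID, PartName) and a known maximum of three levels:

```sql
-- Access requires the extra parentheses around stacked joins.
SELECT L1.PartName AS Level1,
       L2.PartName AS Level2,
       L3.PartName AS Level3
FROM (tblBOM AS L1
LEFT JOIN tblBOM AS L2 ON L2.ParentID = L1.PartID)
LEFT JOIN tblBOM AS L3 ON L3.ParentID = L2.PartID
WHERE L1.ParentID IS NULL;
```

The obvious limitation is that a fourth level requires editing the query, which is exactly why recursive CTEs or hierarchyid on SQL Server are the better tools when depth is unbounded.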
 
Last edited:
Simpler and faster then a bunch of dLookups.
That is your opinion, but you didn't offer any justification regarding what makes the wrapper simpler or what makes it faster. As I said, if you are using the DLookups multiple times from different places, having a "wrapper" protects you should you need to make a change that must be replicated every place the domain function is used. This is the only justification I can see for adding the overhead of "wrappers". The wrapper functions are not more efficient in any way, since each still runs a DLookup(), which is itself inefficient. They also add lines of code which must be executed. What I object to with the wrappers is that individually they mask what is actually happening, so if you need to know that the wrapper is using a query rather than a table, for example, you have to step out of line to look at the wrapper function. And there is something inherently irritating to me about creating procedures that would be only a single instruction if written in line.

Reusable code is worth writing, but separating out code that is not actually reused simply makes debugging harder and obfuscates the code's logic path. The other downside of a "wrapper" that replaces a single line of code is that some developer in the future could easily change the function, not realizing that the change will break other calls to it.

I'm not saying that you shouldn't create "wrappers", only that you have a reason and that reason should include multiple uses from different sections of code.
 
That is your opinion, but you didn't offer any justification regarding what makes the wrapper simpler or what makes it faster. As I said, if you are using the DLookups multiple times from different places, having a "wrapper" protects you should you need to make a change that must be replicated every place the domain function is used. This is the only justification I can see for adding the overhead of "wrappers". The wrapper functions are not more efficient in any way, since each still runs a DLookup(), which is itself inefficient. They also add lines of code which must be executed. What I object to with the wrappers is that individually they mask what is actually happening, so if you need to know that the wrapper is using a query rather than a table, for example, you have to step out of line to look at the wrapper function. Reusable code is worth writing, but separating out code that is not actually reused simply makes debugging harder and obfuscates the code's logic path. The other downside of a "wrapper" that replaces a single line of code is that some developer in the future could easily change the function, not realizing that the change will break other calls to it.

I'm not saying that you shouldn't create "wrappers", only that you have a reason and that reason should include multiple uses from different sections of code.
I agree this would be overkill for a one-off lookup of a single column. The OP indicated repeated lookups of multiple columns and the possibility of keeping a recordset open to do it. Assuming this is part of a single function call, instantiating the class once and using it repeatedly for each column would be faster, with the added benefit of the class handling Nulls and other data issues. Also, passing a class instance is as fast as passing a recordset reference.
 
In my amateur perception it just seems so inefficient to create a query for each individual field one wants returned, & that's what led me to this thread. It's looking up an ID & returning one of several fields I need, dependent on where it is executed from.

So... create a wrapper function that passes in three parameters: one is the named query, one is the value you want to use for the effective LOOKUP-equivalent operation, and the third is the text name of the specific field you want to return.

This next is AIR code to suggest a general format, it might require a little bit of playing. Things in braces {} are things you would have to name.

Code:
Public Function SpclQVal(ByVal SpcQ As String, ByVal SpcRID As {whatever data type the record ID uses}, ByVal SpcFld As String) As Variant

Dim prs As DAO.Recordset
Dim cdb As DAO.Database
Dim pqd As DAO.QueryDef
   Set cdb = CurrentDb
   Set pqd = cdb.QueryDefs(SpcQ)
   pqd.Parameters("{whatever name is used for the record's ID field}") = SpcRID
   Set prs = pqd.OpenRecordset(dbOpenSnapshot, dbReadOnly)
   SpclQVal = prs.Fields(SpcFld).Value
   prs.Close
   Set prs = Nothing
End Function

That has NO error checking and is VERY rough - but it shows a "wrapper" function that could retrieve any one (and ONLY ONE) field from the record picked up based on the value of SpcRID.
 
Thanks Ron, that addresses my requirements in obtaining different fields from the same QueryDef. But I am not keeping the recordset open, so that is not an issue.

Thanks DocMan - that addresses my requirements more closely & is similar to what I've been up to, but I was lacking the return-field parameter, thank you.

It seems madness to go creating an individual query for each individual field I want returned when I already have all of those fields in a query that's already built.

You say you are using querydefs because of their efficiency, yet you also want to vary them. Don't use querydefs for that; do it the way some of the others do, by embedding your SQL and executing the hard-coded strings. You can't have it both ways. The POINT of using a querydef is that the saved execution plan makes the query more efficient for the second and subsequent uses. But if you change the structure of the query, then the execution plan needs to be rebuilt. The two requirements are incompatible, and that is that.
Pat - This is the entire reason I am using a QueryDef: it is used multiple times. I do not believe changing the parameter changes the 'structure of the query', & the benefit of pre-compilation still applies when entering a different parameter into the QueryDef. Granted, I'm going through a wrapper function, which will add overhead, but I'd sooner have a tiny bit more overhead and store 1 query as opposed to 10.
 
So... create a wrapper function that passes in three parameters: one is the named query, one is the value you want to use for the effective LOOKUP-equivalent operation, and the third is the text name of the specific field you want to return.

This next is AIR code to suggest a general format, it might require a little bit of playing. Things in braces {} are things you would have to name.

Code:
Public Function SpclQVal(ByVal SpcQ As String, ByVal SpcRID As {whatever data type the record ID uses}, ByVal SpcFld As String) As Variant

Dim prs As DAO.Recordset
Dim cdb As DAO.Database
Dim pqd As DAO.QueryDef
   Set cdb = CurrentDb
   Set pqd = cdb.QueryDefs(SpcQ)
   pqd.Parameters("{whatever name is used for the record's ID field}") = SpcRID
   Set prs = pqd.OpenRecordset(dbOpenSnapshot, dbReadOnly)
   SpclQVal = prs.Fields(SpcFld).Value
   prs.Close
   Set prs = Nothing
End Function

That has NO error checking and is VERY rough - but it shows a "wrapper" function that could retrieve any one (and ONLY ONE) field from the record picked up based on the value of SpcRID.

For a generic lookup the class will still work, you will need to provide more information.

With AnyTableClass
    .QueryToOpen = "Your Named Query"
    .ParmName = "Your ID Lookup Parameter"
    .LookupID = YourKeyValue
    Description = Nz(.ColumnFromRS(1))
    Location = Nz(.ColumnFromRS(2))
End With

Code:
Option Compare Database
Option Explicit

' Sample Class to search recordset and return values

Private Type MyTableType
    qd As DAO.QueryDef
    rs As DAO.Recordset
    QueryName As String
    ParmName As String
    FieldCount As Long
    IdWasFound As Boolean
End Type
Private This As MyTableType

Private Sub Class_Initialize()
    ' Do nothing
End Sub

Private Sub Class_Terminate()
    With This
        If Not .rs Is Nothing Then
            .rs.Close
            Set .rs = Nothing
        End If
        If Not .qd Is Nothing Then
            Set .qd = Nothing
        End If
    End With
End Sub

Public Property Let QueryToOpen(ByVal QueryName As String)
    This.QueryName = QueryName
End Property

Public Property Let ParmName(ByVal SearchParm As String)
    This.ParmName = SearchParm
End Property

Public Property Let LookupID(ByVal PrimaryKey As Long)
    With This
        If Not .rs Is Nothing Then
            .rs.Close
            Set .rs = Nothing
        End If
        If .qd Is Nothing Then
            Set .qd = CurrentDb.QueryDefs(.QueryName)
        End If
        .qd.Parameters(.ParmName) = PrimaryKey
        Set .rs = .qd.OpenRecordset()
        If .rs.RecordCount = 0 Then
            .IdWasFound = False
            .FieldCount = 0
            .rs.Close
            Set .rs = Nothing
        Else
            .FieldCount = .rs.Fields.Count
            .IdWasFound = True
            .rs.MoveFirst
        End If
    End With
End Property

Public Property Get IdWasFound() As Boolean
    IdWasFound = This.IdWasFound
End Property

Public Property Get ColumnFromRS(ByVal ColID As Long) As Variant
    If Not This.rs Is Nothing Then
        ColumnFromRS = This.rs.Fields(ColID)
    End If
End Property
 
Unless you're going to be using each of those functions multiple times, seems like overkill
If I am using a domain aggregate function I am probably using it more than once. Even if I am uncertain of its reuse I tend to go ahead and put a wrapper around all domain aggregate functions. I find it to be the opposite. You end up wasting time if you do not wrap it.

1. If I think I will only use it once, it always turns out I need it again, so it is just easier to wrap it to begin with. It is double the work to then make a central function later.
2. Rarely is this a "one liner". At a minimum there is some validation of the inputs. The wrapper allows for more complex validation of inputs and outputs
3. Far easier to write, read, and debug. If you write a stand alone wrapper you can test and debug easily from the immediate window
4. Makes your code far easier to read
Code:
CurrentTrips = GetNumberOfTripsOnPath(pathID)
vs
Code:
If Not IsNull(pathID) Then
  currentTrips = DCount("*", "tblPathTrips", "pathid_FK = " & pathID)
End If
5. If there is any chance you are using it in a query or calculated control, you are totally wasting your time by not wrapping it into a function. Painful to write and more painful to debug in an expression.
6. Far more understandable when you chain the functions
Code:
Public Function GetNumberAvailablePiggybacks(pathID As Variant, EdgeID As Variant) As Long
  Dim CurrentDriver As Long
  If Not IsNull(pathID) And Not IsNull(EdgeID) Then
    CurrentDriver = GetDriverFromPath(pathID)
    If CurrentDriver <> 0 Then GetNumberAvailablePiggybacks = DCount("*", "qrySelectPiggyback", "edgeID_FK = " & EdgeID & " AND driverID <> " & CurrentDriver)
  End If
End Function
There is a wrapper function to get the current driver, and then the current driver is excluded from the count of available drivers.
 
It's hard to deal with hierarchies in recordsets, but it's way easier if you use classes. Imagine a class that stores its Field.Values in a Scripting.Dictionary keyed by Field.Name...
Code:
Private fields_   As New Scripting.Dictionary
Private children_ As New Scripting.Dictionary

Function Create(Fields As dao.Fields) As cMyDataItem
'   This object stores data from a single row in a table.

    Dim fld As dao.Field
    
    For Each fld In Fields
        fields_(fld.Name) = fld.Value
    Next
    Set Create = Me
End Function

Property Get Key() As String
    Key = TypeName(Me) & " " & fields_("TableID")
End Property

Property Get Text() As String
    Text = fields_("SomeDictionaryField")
End Property

Property Get Icon() As String
    Icon = "DataItemIcon"
End Property

Property Get Children() As Scripting.Dictionary
    Set Children = children_
End Property

Then, if you have a function...
Code:
Function GetMyDataItems() As Scripting.Dictionary
    Const SQL_SELECT = "SELECT * FROM tMyDataItems"
    
    Dim item As cMyDataItem
    
    Set GetMyDataItems = New Scripting.Dictionary
    With CurrentDb.OpenRecordset(SQL_SELECT)
        Do While Not .EOF
            Set item = New cMyDataItem
            GetMyDataItems.Add !TableID.Value, item.Create(.Fields)
            .MoveNext
        Loop
        .Close
    End With
End Function
... you return a Dictionary keyed by ID corresponding to all the data in the table, filled with class instances each corresponding to a row in that table with all the field values keyed by name. This is very fast. Moving the data across the network is slower than constructing this dictionary.

Then do the same with another table, say all the child rows for each cMyDataItem, which might be cMyDataChild objects, but this time when you enumerate the table, you assign each instance to the Children dictionary of the parent it belongs to, like....
Code:
Sub AssignMyDataChildren(MyDataItems As Scripting.Dictionary)
    Const SQL_SELECT = "SELECT * FROM tMyDataChildren"
    
    Dim item As cMyDataChild
    
    With CurrentDb.OpenRecordset(SQL_SELECT)
        Do While Not .EOF
            ' create child with all data from the row
            Set item = New cMyDataChild
            item.Create .Fields
            ' assign child to the Children Dictionary of the correct parent
            MyDataItems(item.ParentID).Children.Add item.ChildID, item
            .MoveNext
        Loop
        .Close
    End With
End Sub

Now you have a totally complete, fully resolved, in-memory hierarchical data structure. This is waaaaaay faster than opening a distinct recordset for each parent to find its particular children. Also, if you maintain a Dictionary of all the Child objects (rather than just distribute them to their parents, as this code does) then loading and distributing the children of the children is equally fast and simple.

And, presenting this data in a tree is much simpler, because the data structure itself is already hierarchical. Similarly each of the data classes that compose the hierarchy, since they are free of their recordset, can expose whatever fields they contain by any name, so if all the data class nodes in your hierarchy expose fields Key, Text, Children and Icon, building a tree can be accomplished with a single recursive loop.
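As a rough illustration of that single recursive loop (not MarkK's actual code), something like this could render the finished Dictionary hierarchy into a TreeView; the literal 4 stands in for the tvwChild constant so the sketch also works late-bound:

```vba
' Sketch only: walk the in-memory hierarchy built above and add a
' TreeView node per item. Assumes each item exposes Key, Text and
' Children as in the cMyDataItem example.
Public Sub AddBranch(tv As Object, Items As Scripting.Dictionary, _
                     Optional ByVal ParentKey As String = "")
    Dim itm As Variant
    For Each itm In Items.Items
        If Len(ParentKey) = 0 Then
            tv.Nodes.Add , , itm.Key, itm.Text            ' root node
        Else
            tv.Nodes.Add ParentKey, 4, itm.Key, itm.Text  ' 4 = tvwChild
        End If
        AddBranch tv, itm.Children, itm.Key               ' recurse
    Next
End Sub
```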
 
Thanks RonPaii, another nice technique there.

Thanks MajP, yes it makes far more sense to wrap in a function in my mind & I find it comforting that I discovered the multiple benefits on my own.

Thanks MarkK.
Imagine a class that stores its Field.Values in a Scripting.Dictionary keyed by Field.Name...
This is such an elegant solution to hierarchical data in Access it beggars belief. I'm inexperienced with procedures that replicate constructors & find them a little scary at the moment. It was a tough read up until this part, at which I found myself actually cheering out loud 🥳
Then do the same with another table, say all the child rows for each cMyDataItem, which might be cMyDataChild objects, but this time when you enumerate the table, you assign each instance to the Children dictionary of the parent it belongs to, like....

  • If duplicating table contents in memory, then surely there is a limit to the number of items/ class instances/ dictionaries which can be stored in memory?
Sadly I'm too far down the line to use this on this project, I think, but when it's complete I will absolutely be looking to redesign the project & implement this strategy.
 
Thanks RonPaii, another nice technique there.

Thanks MajP, yes it makes far more sense to wrap in a function in my mind & I find it comforting that I discovered the multiple benefits on my own.

Thanks MarkK.

This is such an elegant solution to hierarchical data in Access it beggars belief. I'm inexperienced with procedures that replicate constructors & find them a little scary at the moment. It was a tough read up until this part, at which I found myself actually cheering out loud 🥳


  • If duplicating table contents in memory, then surely there is a limit to the number of items/ class instances/ dictionaries which can be stored in memory?
Sadly I'm too far down the line to use this on this project, I think, but when it's complete I will absolutely be looking to redesign the project & implement this strategy.
I previously posted sample databases using tree controls.

Multi-value field (MVF) using tree control - fills the tree using recursion; it's easy to follow in the debugger because of the limited data. See the TreeFillAvailable function in the Form_MVF_Control_Tree form.

This 2nd attached ACCDB works with Northwind to supply a contact form using a tree control. It does not use recursion; instead it expands branches on demand by calling functions for each type of contact. You will need the Northwind db in the same folder. This one may be more like what you are looking for.
 


This 2nd attached ACCDB works with Northwind to supply a contact form using a tree control. It does not use recursion; instead it expands branches on demand by calling functions for each type of contact. You will need the Northwind db in the same folder. This one may be more like what you are looking for.
IMO that is unusable, because it does not show which nodes are expandable: there are no plus signs. You have to guess whether you can expand a node or not. For something this simple it may suffice, but in a tree of any size with uneven branch depth it would not. The user is not going to want to click on everything and guess whether it will expand.

No Plus.png


In order to get around that, for every record that has at least one child you have to add one child node as well. This then makes the + sign visible. I discuss that in detail here. This is the method I use for loading extremely large trees (10s of thousands of nodes), which I refer to as light loading, vs full loading which loads all nodes.

In contrast, see the image below. Expandable nodes show the plus sign, and it is clear what can expand and what cannot. However, this tree has 10k nodes and at this time only 33 nodes are actually loaded: one per visible node and an additional one for each plus sign.
So Charlotte has no child nodes, but the rest of the nodes do.
When you expand, you then flesh out the expanded level and add one child node for each expanded node that has at least one child.
Expand.png

In that thread you can also view @Edgar_'s example of loading a tree with a two-pass technique that does not require recursion. However, this loads all the nodes rather than loading dynamically.

I gave up writing code to load a tree a long time ago. I load all my trees with a single line of code using the class module here. It only requires a properly formatted query.

This thread and the class demo/support some advanced concepts:
1. Single line of code to load trees
2. Drag and drop
3. Auto numbering
4. Icons and dynamic icons
5. Light and full loads

Most importantly, it demos using the properties of the tree control to support associations to the table, a kind of pseudo-binding. It shows you how to add, edit, and delete nodes or records and sync the two up.
 
If duplicating table contents in memory, then surely there is a limit to the number of items/ class instances/ dictionaries which can be stored in memory?

Yes... you know you have hit the limit when you see the error message "Heap overflows Stack" (or its most common partner, "Stack overflows Heap"). Which of those you get depends on what specific action hits the limit first. There is also the family of error messages "Out of memory", "Out of virtual memory", "System resources exceeded" and a few more of that ilk. You cannot easily determine how much memory you have from inside Access without using some API calls. However, anything involving recursive subroutine calls will surely increase your risk of consuming memory quickly. And, unfortunately, a tree diagram potentially includes recursion. Don't get me wrong, recursion is a useful tool. It just requires you to be careful.
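If you do want a rough number before building a huge in-memory structure, one of those API calls is GlobalMemoryStatusEx. A hedged sketch for 64-bit VBA:

```vba
' Sketch only (64-bit VBA): ask Windows how much physical memory
' is free via the GlobalMemoryStatusEx API.
Private Type MEMORYSTATUSEX
    dwLength As Long
    dwMemoryLoad As Long
    ullTotalPhys As LongLong
    ullAvailPhys As LongLong
    ullTotalPageFile As LongLong
    ullAvailPageFile As LongLong
    ullTotalVirtual As LongLong
    ullAvailVirtual As LongLong
    ullAvailExtendedVirtual As LongLong
End Type

Private Declare PtrSafe Function GlobalMemoryStatusEx Lib "kernel32" _
    (ByRef lpBuffer As MEMORYSTATUSEX) As Long

Public Function AvailPhysMB() As Double
    Dim ms As MEMORYSTATUSEX
    ms.dwLength = LenB(ms)                 ' the API requires the size up front
    If GlobalMemoryStatusEx(ms) <> 0 Then
        AvailPhysMB = ms.ullAvailPhys / 1024 ^ 2
    End If
End Function
```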

I learned this the hard way the first time my genealogy database ran into a low-level case of incest, in which two people got married who were cousins - 4th cousins once-removed. My original recursion handler didn't like that very much, because (I admit it) I was careless and didn't expect a recursion loop. But I got one. And got a lot of the aforementioned error messages.
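A cheap guard against exactly that kind of loop is to carry a Dictionary of visited keys down the recursion. A sketch, with hypothetical names (tblNode, NodeID, ParentID):

```vba
' Sketch only: recursive walk that refuses to revisit a node,
' so a cycle in the data cannot recurse forever.
Public Sub WalkTree(ByVal NodeID As Long, ByVal Visited As Scripting.Dictionary)
    If Visited.Exists(NodeID) Then Exit Sub   ' cycle detected: stop here
    Visited.Add NodeID, True

    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT NodeID FROM tblNode WHERE ParentID = " & NodeID, dbOpenSnapshot)
    Do While Not rs.EOF
        WalkTree rs!NodeID, Visited           ' recurse into each child
        rs.MoveNext
    Loop
    rs.Close
End Sub
```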
 
