Recommendations for calling Stored Procedures with Parameters

MajP

I am converting my first DB to SQL Server and teaching myself to use SQL Server as a backend. To teach myself as much SQL Server as possible, I am doing everything on the server side even though the database is relatively small. It is, however, pretty complex code, since it is a self-referencing hierarchical database with some very complex coding. So I am doing as much as I can using stored procedures and pass-through queries.

For parameterized SPs, how do you recommend calling them? I built a single pass-through and call it in code to change the SQL string. That seems easy, and if I move the database it would require just one query to update the connection. I assume the other way would be to have a settings table to store the connection information and then build the query dynamically; then just change the settings table if it is moved. Any recommendations or other approaches? Thanks
 
From several searches I see the same two approaches. It does seem the recommendation is just to use a pass-through. Any reason to use the ADO recordset method?
 
For parameterized SPs, how do you recommend calling them?

I often use VBA to build up the literal SQL string (as you are probably very used to doing), and end up with a saved query object whose underlying .SQL property has been manipulated to:

exec [dbo].[procedurename] 'value1', 'value2', 'value3'

That is all you need if you just want to execute the SP. The SP can of course end with a SELECT statement, which then lets you get records from that saved query object as usual, provided its ReturnsRecords property is set to True, and subject to other gotchas when using SPs to return records.
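A minimal VBA sketch of that pattern (the query name qryBucket and the procedure/parameter values are hypothetical, and it assumes the pass-through's connect string is already set):

' Repoint an existing pass-through query at a stored procedure call.
' Remember to escape embedded single quotes in the values.
Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.QueryDefs("qryBucket")
qdf.SQL = "exec [dbo].[procedurename] 'value1', 'value2', 'value3'"
qdf.ReturnsRecords = True   ' True only if the SP ends with a SELECT

' Then open it like any saved query
Dim rs As DAO.Recordset
Set rs = qdf.OpenRecordset(dbOpenSnapshot)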
 
I built a single pass-through and call it in code to change the SQL string. That seems easy, and if I move the database it would require just one query to update the connection.
With my less than two years' experience with SQL Server as a backend, this is the method that worked best for me. In my case, I had to build the string anyway, and changing the SQL property of a single PT query made the most sense - only ONE query name to remember.
 
  • Editable recordsets,
  • easier parameter handling,
  • and output parameters and return values.
That bit of knowledge would have come in handy. But then I have avoided ADO like the plague. I was already intimidated using SQL Server and stored procedures and didn't need more "stuff" to clog up my brain.
 
  • Editable recordsets,
  • easier parameter handling,
  • and output parameters and return values.
Thanks. Yeah, I forgot this whole project is really read-only, so I was not thinking about updating, but that would be important. Is ADO preferred over linked tables? I would assume it is more efficient, since you are processing on the server side.
I will have to play with it more to see what is easier in the parameter handling, since it seems pretty straightforward with SPs too.
I have not played with SP output parameters, but why would ADO be more advantageous for dealing with outputs? Thanks.
I became somewhat proficient in writing ADODB back when we were told DAO was going away, so I do not have a preference either way.
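From what I remember of ADODB, the output-parameter pattern looks roughly like this (untested sketch; the procedure and parameter names are made up):

Dim cn As ADODB.Connection, cmd As ADODB.Command
Set cn = New ADODB.Connection
cn.Open "your connection string here"

Set cmd = New ADODB.Command
Set cmd.ActiveConnection = cn
cmd.CommandType = adCmdStoredProc
cmd.CommandText = "dbo.SomeProc"          ' hypothetical procedure
cmd.Parameters.Append cmd.CreateParameter("@NodeID", adInteger, adParamInput, , 42)
cmd.Parameters.Append cmd.CreateParameter("@Depth", adInteger, adParamOutput)
cmd.Execute
Debug.Print cmd.Parameters("@Depth").Value ' output value available after Execute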
 
MajP,

You will definitely want a server-side VIEW.
The view can be built on a recursive CTE, which is very cool for traversing your hierarchy.

The view is accessible either as a linked table or as an ADO recordset.

Google recursive CTE.
Google materialized views if you want to index.
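A rough sketch of such a view (table and column names are made up):

-- Walk a self-referencing table from the roots down
CREATE VIEW dbo.vwNodeTree AS
WITH Tree AS (
    SELECT NodeID, ParentID, 0 AS Lvl
    FROM dbo.Node
    WHERE ParentID IS NULL                         -- anchor: root rows
    UNION ALL
    SELECT n.NodeID, n.ParentID, t.Lvl + 1
    FROM dbo.Node AS n
    INNER JOIN Tree AS t ON n.ParentID = t.NodeID  -- recursive step
)
SELECT NodeID, ParentID, Lvl FROM Tree;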

HTH,
Wayne
 
MajP,

For parameters, look at table-valued functions.
They can use your recursive CTEs.
They look like tables, but AFAIK you can't link them ... you can only reach them as ADO entities, though they can be used freely in SQL Server queries.
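A rough sketch of an inline TVF (names made up):

-- Parameterized "view": the subtree under a given root
CREATE FUNCTION dbo.fnSubTree (@RootID int)
RETURNS TABLE
AS
RETURN
(
    WITH Tree AS (
        SELECT NodeID, ParentID, 0 AS Lvl
        FROM dbo.Node
        WHERE NodeID = @RootID
        UNION ALL
        SELECT n.NodeID, n.ParentID, t.Lvl + 1
        FROM dbo.Node AS n
        INNER JOIN Tree AS t ON n.ParentID = t.NodeID
    )
    SELECT NodeID, ParentID, Lvl FROM Tree
);

-- Usage from a pass-through: SELECT * FROM dbo.fnSubTree(42);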

Sorry, typing on phone S L O W L Y.

Wayne

P.S. Experiment in SSMS; SQL Server's querying is way ahead of MS Access in most respects.
 
I agree with Wayne - I use stored procedures to return complex datasets sometimes, and tbh they aren't normally something you would have a need to update.

CTEs are very useful with recursive data, as are the inbuilt ROW_NUMBER and partitioning functions for returning very specific datasets.
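For example, a quick sketch of the latest-row-per-group pattern (table and columns are hypothetical):

SELECT NodeID, EventDate
FROM (
    SELECT NodeID, EventDate,
           ROW_NUMBER() OVER (PARTITION BY NodeID ORDER BY EventDate DESC) AS rn
    FROM dbo.NodeEvent
) AS x
WHERE rn = 1;   -- only the most recent event per node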
 
@WayneRyan,
Thanks. This is one reason I chose to migrate this DB: to leverage recursive CTEs. I knew these existed, so I was already planning to use them. The genesis of the DB was to display a family-tree-like sort that required traversing the tree both in depth and in breadth. The code and the queries got pretty complex, and putting them all together was hard to keep straight. I have read up on table-valued functions and am trying to understand where to use them in place of SPs. The author describes them as a parameterized view in terms of flexibility, and states he uses SPs for actions and table-valued functions for selecting. I have not gotten to the recursive part yet, but I will try to use some table-valued functions in place of stored procedures.
 
+1 for whoever commented on TVFs ... I had no clue how unbelievably valuable they were (from several different angles) until not that long ago, when I worked for a bank's SQL development team on huge call-center data with a lot of parameterized 'stuff' and a lot of collaborative SQL objects in general.

Boy, you talk about versatility - the table-valued function, yeah.

As far as indexing views goes, more power to whoever has found them beneficial, but I hammered away at those for a while and found so many 'gotchas' (things that disqualified the view from being indexable) that I mentally abandoned them in favor of simply making sure my source tables were indexed just right and my view SQL was as well optimized as possible. But I am not criticizing; if you find them helpful, that's great.

@MajP I do the same thing too - a saved PT query object; I call it qryBucket and just dump SQL into it as needed.

Lastly, one of the (many) nice things about properly saving/storing/organizing/structuring stuff on SQL Server and then minimally coding the Access invocation of it (i.e. exec dbo.procname 'value', 'value') is this:

Even though you might start out assuming that Access surely ought to let you paste ANY valid T-SQL code chunk into a pass-through query and run it ... such is not quite the case. There are a variety of tiny things (which make NO sense to me) that, if included in your T-SQL, will make a pass-through query unable to work properly - with vague error messages that will generally lead you nowhere.

But, encapsulated in an SP, most if not all of those problems seem to fade.

Overall, SQL Server allows a veritable mountain of advantages in organization, scripting (as opposed to bare ANSI SQL statements), commenting, version control, etc. ... I find that a compelling reason, but I appreciate Pat's perspective too; it's all a matter of balance and how much is worth the time of converting.

A lot of my apps which do end up using SQL Server as a back end someday get slated for conversion by the inimitable ".Net team" ( :ROFLMAO: ), and the first thing they're going to do is put a 100% bulletproof server-side setup in place anyway.
 
@Pat Hartman
If I understand what you are saying, then in many cases simply linking the tables and using local queries is sufficient, even if the table on the backend is very large. My understanding is that Access is much more efficient at using linked tables than it was years ago. Besides form filters, what else counts as "old school Access practices like filters on forms"?

My BEs frequently contain million-row tables, but that is irrelevant unless I bind a form to that table and expect to filter it locally with Access.
So if you have a table of that size, are you saying you would need to use a pass-through query, or would you use a local query against a linked table and bind that to the form to reduce the records bound to the form? Thanks.

For this current project, I am purposely doing everything on the server to practice converting queries and using the power of SS, but I want to understand the different ways of doing things: what the best approach is, as well as what is easier and still viable.
 
@MajP Most of the principles of a separate backend apply, e.g. don't drag every record onto a form if you don't need to.
Where you don't need to display a combo for drop-down options, create views on the server to display lists instead of joining to lookup-value tables in Access.

There are many advantages to SQL Server. I suspect if we all thought about it, the list would be very extensive.
Things that immediately spring to mind are support for complex calculated joins and, if necessary, joins on different data types without complaining.
The inbuilt string aggregation, row numbering and partitioning functions are incredibly efficient.
In a stored procedure you can create indexed temporary tables, manipulate them, and join them to other temp tables and real tables, then output a result in a really efficient fashion that lets you work in a set-based process without iterating over rows.
MERGE statements allow you to insert, update and delete in one process if applicable.
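A minimal MERGE sketch, assuming hypothetical target and staging tables:

MERGE dbo.Customer AS tgt
USING dbo.CustomerStaging AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED THEN
    UPDATE SET tgt.CustomerName = src.CustomerName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, CustomerName) VALUES (src.CustomerID, src.CustomerName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;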

We use a lot of Azure SQL backends, so moving data around has more of an overhead than on a local SQL Server. Because of that, we tend to move almost all the heavy processing to the server. We also use local tables for many of the lookup values and have routines in place to keep them updated and accurate. The cloud-based storage medium has many benefits for our clients.

I'm sure once you experiment further you will be a strong convert to the improvements available to you.
 
Minty brought up a list of Cadillac features for sure, +1.
I once learned windowing and extra-fancy partitioning, but forgot all of it due to never having a use case to practice on.

Good reminder on the string aggregation (if you meant STRING_AGG). On this issue I admit to being like an "old dog": I learned FOR XML PATH in the past and am now so in the habit that I have never used STRING_AGG, even though I'm sure I should.
Probably neither of them provides an optimization advantage, since they are usually used in the context of a sub-optimal correlated subquery anyway (RBAR). But this reminds me ... I am still using FOR XML PATH and really must stop and get on board.
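For anyone comparing, the two patterns side by side (table and columns made up):

-- STRING_AGG (SQL Server 2017+)
SELECT NodeID, STRING_AGG(TagName, ', ') AS Tags
FROM dbo.NodeTag
GROUP BY NodeID;

-- the old FOR XML PATH habit (works on older versions)
SELECT n.NodeID,
       STUFF((SELECT ', ' + t.TagName
              FROM dbo.NodeTag AS t
              WHERE t.NodeID = n.NodeID
              FOR XML PATH('')), 1, 2, '') AS Tags
FROM (SELECT DISTINCT NodeID FROM dbo.NodeTag) AS n;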

Just SS's way of letting you freely type step after step after step - with comments and meaningless white space - even if JUST for readability and ease of development, is amazing.
In each step you can do extremely helpful things, like Minty points out with indexing a temp table.

One of my favorite recent additions is DROP ... IF EXISTS.

No longer do I have to endlessly look up the two old methods of testing whether an object exists (even a session temp table), whose syntax I could never remember - I can be lazy.
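For example (SQL Server 2016+; the names are made up):

DROP TABLE IF EXISTS #Staging;
DROP PROCEDURE IF EXISTS dbo.SomeOldProc;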

Then I also think of the system views, like INFORMATION_SCHEMA.
Talk about useful! You can search them for any code text in the definition of any view or SP - it makes it a lot easier to find potential impacts, see what reports or processes already exist, etc. It's nearly like the ability to read your fellow developers' minds.
Unless you have a very strict environment where the DBAs have decided thou shalt not.
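A quick sketch of that kind of search (the search text is made up; note that INFORMATION_SCHEMA truncates long definitions, so sys.sql_modules is the more thorough option):

SELECT ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_TYPE
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%OrderHeader%';

-- sys.sql_modules also covers views and isn't truncated
SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
       OBJECT_NAME(object_id) AS ObjectName
FROM sys.sql_modules
WHERE definition LIKE '%OrderHeader%';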

You can tell I kind of like SQL Server.
 
I am converting my first DB to SQL Server and teaching myself to use SQL Server as a backend. To teach myself as much SQL Server as possible, I am doing everything on the server side even though the database is relatively small.
It's advisable to read a book or take a short course on SQL rather than using a live project to teach yourself.

After the course/book, once you have read and understood some principles, you can then work on a live project.
 
@Pat Hartman,
In the scenario where you leverage linked tables and local queries, do you do anything special to handle subforms that may also be based on a very large table? If the main form is based on a local query returning one record, are the subform records handled efficiently?
 
@Pat Hartman,
In the scenario where you leverage linked tables and local queries, do you do anything special to handle subforms that may also be based on a very large table? If the main form is based on a local query returning one record, are the subform records handled efficiently?
You may find this interesting:
 
@Pat Hartman,

I tried this myself and loaded subforms from the master form's OnCurrent event - another trick I learned from his series. I didn't notice that big of a performance boost.

Once I learned about table variables, views and other server-side filtering tricks, I abandoned this method. My remote site in the Middle East noticed a MASSIVE performance improvement, especially once I added some FillCache code.
 
But you didn't answer my question. Using this method, is the subform updateable, AND can you get the master/child link to work, or do you need to populate the FK with your own code in the subform's BeforeInsert event?
If memory serves, I was able to update the subforms normally, but this was four years ago, and once I started using table variables I reverted back.
 
