[Solved] Query Execution Time

In SQL Server, queries that are similar enough can reuse the same cached execution plan. The optimiser can recognise the similarity between ad hoc queries well beyond just a change in the values embedded in their SQL strings.
erm in theory yes, in practice...
Building an execution plan is usually not the problem...
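If you want to see what your server is actually doing, the plan cache DMVs show how many times each cached plan has been reused. Just a sketch; the LIKE filter is a placeholder for your own query text:

-- usecounts stuck at 1 for many similar ad hoc statements means plans are NOT being shared
SELECT cp.usecounts,
       cp.objtype,       -- 'Adhoc', 'Prepared', 'Proc', ...
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourTableName%'   -- placeholder filter
ORDER BY cp.usecounts DESC;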
 
To be honest, I doubt that I could understand much of it even if they did explain it to me. Set theory isn't trivial.
One of the key things for a database to use an index is for it to consider the index current.
PK indexes on integer values stay current because new values are only ever appended at the end, so rebuilding the index will not help much.

Varchar indexes tend to go out of date much faster, because the values tend to be less sequential in nature.
Like the key value here:
[Unique Ref] = '" & Me.[Order_Number].Value & "-" & Me.[Tray_ID].Value & "-" & i & "'
at the very least you will get
1
2
3
and then '10' will be inserted in between the '1' and the '2', then 11, 12, 21, 22, etc... until '100' gets inserted into the middle again.
Inserting into the middle of an index causes it to be considered out of date, while appending to the end does not. i.e. zero-padding the values:
001
002
010
100
as varchars will keep the inserts sequential, so the index performs much better over time without the need to rebuild.
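To see the ordering effect directly, here is a purely illustrative sketch (the table variables and values are made up):

-- Unpadded numeric strings sort lexically, so new rows land in the middle of the index
DECLARE @unpadded TABLE (UniqueRef varchar(20) PRIMARY KEY);
INSERT INTO @unpadded VALUES ('1'), ('2'), ('3'), ('10'), ('100');
SELECT UniqueRef FROM @unpadded ORDER BY UniqueRef;   -- 1, 10, 100, 2, 3

-- Zero-padded strings sort in the same order they are generated, so new rows are appended
DECLARE @padded TABLE (UniqueRef varchar(20) PRIMARY KEY);
INSERT INTO @padded VALUES ('001'), ('002'), ('003'), ('010'), ('100');
SELECT UniqueRef FROM @padded ORDER BY UniqueRef;     -- 001, 002, 003, 010, 100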

As far as I know, and as I was recently told by an MS engineer, indexes in SQL Server do not hash their keys, so finding shorter values is easier for the database than finding longer ones. That is another reason why integers are better than even doubles, and why varchars are not so good either.
Of course, nowadays database servers tend to be "monster machines" with multiple solid-state drives, so the difference between one indexed column type and another is hardly measurable in a reliable way.

Worse is fighting the optimizer sometimes... we have some reports and queries that, for reasons unknown, run fine 99 times out of 100, but on the 100th the optimizer decides on a different path to resolve the query... changing query time from 15 seconds to 15 hours :(
It has had MS engineers going crazy for over 6 months now (since December 2019), some even saying "it's impossible" until they see the proof.
The issue seems to have been reduced now to 999 times out of 1000... but it is not funny trying to fix it.
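For what it's worth, one blunt workaround for that kind of "usually fast, occasionally catastrophic" statement is to force a fresh plan each time it runs; a sketch only, with invented names:

DECLARE @CustomerID int = 42;        -- invented example parameter
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o                 -- invented table
WHERE o.CustomerID = @CustomerID
OPTION (RECOMPILE);                  -- pay a small compile cost every run, never reuse a bad cached plan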
 
The other clever thing with SQL Server is backups and restores.

I think unicorns and black holes are involved; how else can you explain how SQL Server can back up 3-4 GB of data in about 3-4 seconds, and restore it in about 6-7 seconds? (Numbers made up from memory, but you get the drift.) :alien:
 
Well thanks to you all for your support. It appears my execution efficiency was primarily down to the lack of indexes.

It is now executing in a matter of seconds! :cool:

I have to say that I'm a little annoyed with myself, as creating indexes is normally the first port of call for poor query performance, and in my naivety I misread the Indexable|Yes property as Indexed|Yes! I've recently had the back-end moved from Access to SQL Server, so I'm fresh when it comes to SSMS.
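For anyone else who ends up here, the fix in SSMS boiled down to something along these lines (the table name is just a stand-in for mine):

-- Hypothetical version of the index that fixed the lookups
CREATE NONCLUSTERED INDEX IX_TrayDetails_UniqueRef
    ON dbo.TrayDetails ([Unique Ref]);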

On the plus side - through your help and advice I have learned some really useful things.

Just for info, as it was mentioned a couple of times: uDataSet() is a 2D array that is populated as a user processes a tray.

Thanks again.
 
Great to hear you got some improvement!
 
@Pat Hartman is correct, parameterising your queries / stored procedures is definitely the way to get the best out of your execution plans.
Great description of the differences between passing literal values and parameter-based execution plans here https://www.sqlshack.com/query-optimization-techniques-in-sql-server-parameter-sniffing/
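To make the difference concrete, a minimal sketch (the table and column are stand-ins): the first form gets a separate ad hoc plan for every distinct literal, the second reuses a single parameterised plan:

-- Ad hoc: each distinct literal value can produce its own cached plan
SELECT * FROM dbo.TrayDetails WHERE [Unique Ref] = '12345-7-1';

-- Parameterised: one plan, reused for every value passed in
EXEC sp_executesql
     N'SELECT * FROM dbo.TrayDetails WHERE [Unique Ref] = @Ref;',
     N'@Ref varchar(50)',
     @Ref = '12345-7-1';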
Well ... that's a little different. Parameter sniffing is actually a third problem down the road. Simply parameterizing your SPs and walking away will, as Pat said, make it more likely that an execution plan is cached and re-used... and that is precisely what can cause the parameter sniffing problem.
Having a cached execution plan keep being re-used is not necessarily what you should want.

Parameter sniffing is a negative side effect of doing precisely that: the problem is that the engine keeps using a cached execution plan when, ideally, it shouldn't.

I guess what I am trying to say is that, up until now, the emphasis in the posts has been as if we always want an execution plan to be cached and reused, and Minty's excellent article points out that is not necessarily the case. I'll take the cost of generating a new plan over using an incorrect cached one most days of the week, but ideally, of course, you would design for a balance between the two. The cost of generating a new plan versus the cost of using a bad cached plan can potentially be one to a hundred.
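If you do get bitten by a sniffed plan, one well-known way to tell the optimiser not to trust the sniffed value is shown below; a sketch only, the procedure and table are invented:

-- Invented procedure showing a common parameter sniffing mitigation
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    -- builds the plan from average density statistics instead of the first sniffed value;
    -- OPTION (RECOMPILE) is the heavier alternative if you want a fresh plan per call
    OPTION (OPTIMIZE FOR UNKNOWN);
END;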
 
While we are on the subject of caching query plans it is worth mentioning the server configuration option relating to it.
Optimize for ad hoc workloads

It prevents a query plan from being cached on its first run, but stores a small stub of each query so the server can recognise the query if it is run again, at which point it will cache the full plan. So if your server is doing a lot of one-off queries, it stops the plan cache memory filling up with plans that will never be reused.

The option is off by default so all plans are cached. You should only consider changing the setting after thoroughly understanding the costs and benefits in the context of your server.

Pinal Dave has a short blog post about it with a good cautionary tale.
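For reference, it's an advanced option, so enabling it is a couple of sp_configure calls; treat this as a sketch and test it somewhere safe first:

-- Turn on 'optimize for ad hoc workloads' (it is off by default)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;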
 
Nice, thank you for posting.
 
