Access queries slow when using SQL Server tables

tucker61

Registered User.
Local time
Today, 02:51
Joined
Jan 13, 2008
Messages
239
Hi all, I have just moved two tables from my Access back end into SQL Server. I already had eight tables that I look up data from on a different SQL Server.

When I run local queries in Access that use tables from both servers, they seem at least five times slower than when they were local tables, but when I open the linked tables directly they open instantly. The same goes for running the code in SQL Server: it is almost instant.

So I am not sure what is causing the delay. Could it be that the tables are in different locations?

A couple of my thoughts: do I need to open and close the connection to the SQL tables, or will this happen automatically?

Should I write out the code in VBA instead of using the design view for queries?

One of my tables, which is causing the issue, has its data deleted every morning and new data appended; a query is then run against it to update a local table. Should I change this to a temp table within Access, then delete it once I have done my updates?
 

Isaac

Lifelong Learner
Local time
Today, 02:51
Joined
Mar 14, 2017
Messages
4,308
When I run local queries in Access that use tables from both servers, they seem at least five times slower than when they were local tables, but when I open the linked tables directly they open instantly. The same goes for running the code in SQL Server: it is almost instant.
In many cases, writing T-SQL in, for example, SQL Server Management Studio is simply going to be a lot faster than using Access SQL against the ODBC link. (People will often temper this expectation by reminding us that poorly optimized queries might still perform poorly in T-SQL, and that's true enough, but my experience has still shown me that the same ANSI SQL run on the server often simply runs a lot faster.) One thing I strongly recommend you look into is leveraging the ability to write pass-through queries in MS Access. Optimization discussions aside for the moment, pass-through queries are the way you will be able to replicate that "same when running the code in SQL Server, it is almost instant" scenario.

Should I write out the code in VBA instead of using the design view for queries?
If the end product is a QueryDef object in the Access database, I can't see using the Design View vs. manipulating the .SQL property of the QueryDef as being any different at all, all else being equal.

One last thought. In a scenario like a sizeable company with structured departments, roles, and I.T., it is exceedingly common for a SQL Server shop to hand out views for the Access people to connect to. And of course, as I'm sure you know, the possibility that the 'table' you are connecting to is really a view opens up all sorts of optimization questions, because querying the view requires the view to run, and the view might be designed horribly and run very slowly. Even if you find out it is a view, you may or may not be able to do anything about it, or even be granted access to see its definition, but it's a stone worth trying to turn over IMO.

On the table that you are deleting from and inserting into: is it in SQL Server? It's possible that dropping the index(es) on the table prior to the insert could make the insert run faster, and again, a pass-through query would enable all of this to occur. You can also consider restructuring your delete-and-insert routine into a MERGE statement in SQL Server, and consider things like using DELETE vs. TRUNCATE, etc.
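As a rough sketch of those two options (all table and column names here are hypothetical, since the original post doesn't give them), the morning delete-and-insert could either be a TRUNCATE plus INSERT, or a single MERGE, run server-side via a pass-through query:

```sql
-- Option 1: TRUNCATE is minimally logged and usually faster than DELETE
-- for clearing a whole table (it does require ALTER permission on the table).
TRUNCATE TABLE dbo.DailyData;

INSERT INTO dbo.DailyData (ID, Amount)
SELECT ID, Amount
FROM dbo.SourceData;

-- Option 2: MERGE avoids clearing the table at all; it updates matched rows,
-- inserts new ones, and removes rows no longer in the source, in one statement.
MERGE dbo.DailyData AS target
USING dbo.SourceData AS source
    ON target.ID = source.ID
WHEN MATCHED THEN
    UPDATE SET target.Amount = source.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Amount) VALUES (source.ID, source.Amount)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```

The MERGE route also preserves the table's indexes and statistics across the refresh, rather than repopulating from empty each morning.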

Those are my thoughts for now. I will let someone else comment on the open/close connection question; I'm less knowledgeable about the tricks that can help in that area.
 

tucker61

Registered User.
Local time
Today, 02:51
Joined
Jan 13, 2008
Messages
239
Thanks, let me digest this properly and get back to you tomorrow.

I have only been using SQL Server for the last few weeks, and my main contact at work is on holiday for the next week.
 

isladogs

CID VIP
Local time
Today, 10:51
Joined
Jan 14, 2017
Messages
14,689
Using SQL Server will greatly improve stability and security but, as already noted, it won't necessarily improve performance. In fact, as you have found, it can do the opposite.
do I need to open and close the connection to the SQL tables, or will this happen automatically?

No! You should maintain a persistent connection to SQL Server. This will reduce the time taken by all queries because there will be no need to open the connection each time.
The way this is normally done is to open a hidden form bound to a small SQL table at startup and keep this running in the background.
Ideally the form will be based on a table with only one record and one field to minimise the load.

However, you will still need to optimise queries for maximum performance benefit. For example, only pull the records you need, and avoid using VBA functions in your SQL queries...
 

Galaxiom

Super Moderator
Staff member
Local time
Today, 19:51
Joined
Jan 20, 2009
Messages
12,268
When Access runs a query against tables linked from SQL Server, the ODBC interface will pass the query to the server to process if possible. This cannot happen when running queries in Access that are linked from two different servers because the servers don't know about each other. Consequently, huge numbers of records may be getting returned from each server for Access to process. Performance will be terrible if you are writing large amounts of data back to one of the servers from the other, no matter how you do it.

The servers should be linked so they can communicate directly. Queries can be set up on one of the servers as a view and Access linked to them. It isn't quite as fast as being on the same server but way faster than having Access in the middle.
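For illustration of that setup (server, database, and table names below are invented), a view on one server can reach the other via four-part naming once the servers are linked, so the cross-server join runs server-side and Access just links to the view:

```sql
-- Runs on the first server; LinkedServerB must already be configured
-- (e.g. via sp_addlinkedserver) before this view will resolve.
CREATE VIEW dbo.vwCombinedOrders
AS
SELECT o.OrderID,
       o.OrderDate,
       c.CustomerName
FROM dbo.Orders AS o
INNER JOIN LinkedServerB.SalesDB.dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
```

Access then links to dbo.vwCombinedOrders as if it were a single table, and the data only crosses between the two servers, never through the client.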

Stored procedures or pass-through queries can be used for action queries. Stored procedures can also return recordsets via ADO commands and be used on forms. Procedures can take parameters, and this gives the best performance.

Hopefully you have a willing and capable DBA who can set up the linked servers. It is quite involved if Windows Authentication is used right through, especially from 2012 servers onward where they use Kerberos. A Service Principal Name (SPN) has to be configured, and the domain admin will need to enable delegation for the SQL Server Engine service.
 

Isaac

Lifelong Learner
Local time
Today, 02:51
Joined
Mar 14, 2017
Messages
4,308
If you do use linked servers (a lot of places I've worked eschew them for security reasons), you have to craft your SQL very carefully. Simply joining two tables (and thus potentially dragging all unnecessary records across the wire) is not the way to go. One alternative is to write a block of code that pulls a more discrete dataset across from the linked server into a #temp table (or something along those lines), then later code that joins to that.
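A rough sketch of that pattern (all object names are hypothetical): filter the remote rows first, land them in a local #temp table, then do the join locally.

```sql
-- Pull only the rows needed from the linked server into a local temp table.
-- The WHERE runs before the data crosses the wire.
SELECT CustomerID, CustomerName
INTO #Customers
FROM LinkedServerB.SalesDB.dbo.Customers
WHERE Region = 'West';

-- Then join locally, so the remote server only ever ships the filtered set.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN #Customers AS c
    ON c.CustomerID = o.CustomerID;

DROP TABLE #Customers;
```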
 

Galaxiom

Super Moderator
Staff member
Local time
Today, 19:51
Joined
Jan 20, 2009
Messages
12,268
I was forced to use linked servers for a couple of years due to version compatibility problems with applications. The newer server 2016 didn't support an early enough version of SQL Server until the old application was upgraded.

I didn't have any problems with performance, but it is certainly possible to get it wrong. Just remember to use a structure where the WHERE can be applied before any JOINs, to pare down what comes from the other server.
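One way to guarantee the filter runs on the remote side (again, server and table names here are made up) is OPENQUERY, which sends the inner statement to the linked server verbatim, so only the matching rows ever come across:

```sql
-- The string passed to OPENQUERY executes entirely on LinkedServerB;
-- note the doubled single quotes inside the literal.
SELECT o.OrderID, r.CustomerName
FROM dbo.Orders AS o
INNER JOIN OPENQUERY(LinkedServerB,
    'SELECT CustomerID, CustomerName
     FROM SalesDB.dbo.Customers
     WHERE Region = ''West''') AS r
    ON r.CustomerID = o.CustomerID;
```

By contrast, a plain four-part-name join leaves it to the optimizer to decide what to ship, which can mean the whole remote table.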

While it is certainly possible to open up security holes if it is done badly, there is no real issue if it is done properly, with Kerberos passing the security context and only the Engine service permitted for delegation. It only grants the same permissions the user would have had if directly connected to the linked server, and only on the engine.

The temptation can be to use one of the simple security options available with Linked Servers to make it work without having to deal with Kerberos.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 05:51
Joined
Feb 19, 2002
Messages
31,466
Two database engines (ACE and SQL Server) cannot collude to form one result set. ALL the data must be in one place or the other. Access helps you out with this. When you join a Jet/ACE table to one from SQL Server (or another RDBMS), Access sends a request to the RDBMS and downloads every single row of the RDBMS table. It will restrict columns if you have done that in your query. It will also restrict rows if you have criteria on the server-side table columns. Then it performs the join on your PC, in local memory. This resultset might even be updateable, which is the magic of it. However, do not expect the process to be efficient. You need to get all the tables moved to SQL Server rather than keeping them here and there.
 

pbaldy

Wino Moderator
Staff member
Local time
Today, 02:51
Joined
Aug 30, 2003
Messages
35,007
One alternative is to write a block of code that pulls a more discrete dataset across from the linked server into a #temp table (or something along those lines), then later code that joins to that.

One of the methods I've used to do that is table-valued functions. I wanted to be able to use something in a FROM clause, and I couldn't with a stored procedure (which may have been down to my lack of knowledge). I pass criteria to them so the minimum number of records is pulled from the other server.
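As a sketch of that approach (the function, linked server, and tables are all hypothetical), an inline table-valued function takes parameters and can sit directly in a FROM clause, which a stored procedure cannot:

```sql
-- Inline TVF: the parameter filters the remote rows before they come across.
CREATE FUNCTION dbo.fnCustomersByRegion (@Region nvarchar(50))
RETURNS TABLE
AS
RETURN
(
    SELECT CustomerID, CustomerName
    FROM LinkedServerB.SalesDB.dbo.Customers
    WHERE Region = @Region
);
GO

-- Usable anywhere a table is, including joins:
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.fnCustomersByRegion(N'West') AS c
    ON c.CustomerID = o.CustomerID;
```

Because an inline TVF is a single SELECT, the optimizer can expand it into the outer query much like a parameterized view.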
 

Isaac

Lifelong Learner
Local time
Today, 02:51
Joined
Mar 14, 2017
Messages
4,308
Excellent! A couple of years ago I spent about eight months with a major bank, and on that particular team TVFs were almost exclusively what they used in many circumstances. I became quite enamored of them! Prior to that point, I have to admit, I don't think I had ever used one.

This discussion, and your suggestion (which is a great one, and probably usually superior to using a temp table), reminds me of a harsh awakening I got years ago when I was trying to do a little work in Oracle and posted a question in a PL/SQL forum about how I could use a temp table the way I did in SQL Server. The responses ranged from harsh to dismissive to enlightening.
 

pbaldy

Wino Moderator
Staff member
Local time
Today, 02:51
Joined
Aug 30, 2003
Messages
35,007
I'll get slammed too then, as I've used temp tables in stored procedures. Typically when a direct join to that data is too slow.
 

Isaac

Lifelong Learner
Local time
Today, 02:51
Joined
Mar 14, 2017
Messages
4,308
They can be awfully handy for troubleshooting, though, when you can see what's in the temp table at your leisure well after the overall procedure runs!
 

pbaldy

Wino Moderator
Staff member
Local time
Today, 02:51
Joined
Aug 30, 2003
Messages
35,007
well after the overall procedure runs!

I thought their life ended with the stored procedure? Or are you using one of the options other than #? I use the # variety, but test by returning its results during development:

SELECT Blah
INTO #Whatever
FROM SomeTable   -- a FROM clause is needed; SomeTable is a placeholder like Blah

SELECT *
FROM #Whatever

I've never tried to get the results later, I can see that could be handy.
 

wvmitchell

New member
Local time
Today, 02:51
Joined
Sep 4, 2020
Messages
17
Local temp tables starting with # are dropped when the proc ends, but global temp tables starting with ## persist afterwards.
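A minimal illustration of the two scopes:

```sql
CREATE TABLE #LocalTemp (ID int);
-- Visible only to the creating session (or proc); dropped automatically
-- when that proc or session ends.

CREATE TABLE ##GlobalTemp (ID int);
-- Visible to every session; persists until the creating session ends
-- and no other session is still referencing it.
```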
 

Isaac

Lifelong Learner
Local time
Today, 02:51
Joined
Mar 14, 2017
Messages
4,308
You're right about procedures. I was thinking of simply running a block of code, in which case regular #temp tables do persist for the life of the connection, even after the code has run. Which is what I meant about them being particularly handy to SELECT from during development/troubleshooting.
 
