Converting BE from Access 2003 to SQL Server

sparks80

Hi,

I have a multi-user Access 2003 database which is split into BE and distributed FE. The operating system for all users is Windows XP SP3.

I have been researching transferring the tables (BE) and queries (FE) to Microsoft SQL Server. The reason for doing so is that the Access database becomes very slow with multiple users (up to 20 at any given time).

I know that there are differences in SQL syntax between the two database applications, and I was wondering if there are any tools that help convert Access queries to Microsoft SQL Server views.

Also, can anyone share any tips with me to ease the process?

Thanks
 
Hi sparks80,

I am currently working on a migration from Access to MySQL, and I found this article very helpful. It is fairly straightforward.

http://www.kitebird.com/articles/access-migrate.html

However I am not sure how to transfer queries directly.

You may have noticed that the equivalent MySQL queries can be obtained from the MS Access queries themselves. You might want to give that a try. Hope this helps.
 
Thank you so much for the reply.

We have Microsoft SQL Server at work, and I was thinking about transferring the data to that.

If there is too much work involved I will certainly consider MySQL - I have used this myself in the past for simple applications.
 
Thank you, Pat, for taking the time to write such a detailed and clear explanation of how to make the transition. It is difficult sometimes to form a coherent view of the advantages and disadvantages of the various options when there is so much conflicting information out there.

Lots of people have expressed the opinion that SQL Server views are faster than Access queries, so it is very interesting to hear you say otherwise. It is possible that the advantage has been gained by limiting the number of records retrieved, or by better optimised code.

Some of my queries are going to be slow by definition - one uses a custom function to perform a Newton-Raphson iterative calculation, and that is going to take time whatever system you use!

Thankfully I have taken care with all tables and queries to use CamelCase, and have always given tables, queries, forms, controls, fields etc. a useful prefix (like tbl, qry, frm, ctl). This will hopefully make life a little easier.
 
For most of my forms and reports I already retrieve a limited recordset, so I can't see any major difficulties. I can't think of any instances where I have included custom functions in the criteria of my queries, but it is useful to know to avoid these as a general principle. I can think of several examples where I have used custom functions in the SELECT statement, but these benefit from a limited recordset anyway, as VBA functions are usually slower than native functions.
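Just to check that I have understood the principle (the function and table names here are invented), the difference would be between an Access query such as

SELECT TestID, TestDate FROM tblTests WHERE fnMyVbaFunction(TestDate) = True

where the custom VBA function cannot be handed to the server, so every row has to come across the network to be tested locally, and a criterion built only from native operators such as

SELECT TestID, TestDate FROM tblTests WHERE TestDate >= #01/01/2012#

which the server can evaluate itself, so only the matching rows are returned.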

I'm into uncharted territory with pass-through queries, as I have only ever dealt with Microsoft Access in any meaningful sense. I have just read through a good tutorial, and it makes sense. Hopefully as you say with good design most of the Access queries will work sufficiently without the need to convert them.

Thanks again for your invaluable advice
 
The ultimate speed solution with SQL Server is the Stored Procedure.

A PassThrough query must be compiled and an execution plan created by the server each time it is run. Stored Procedures are precompiled by the Server and are blisteringly fast.

Stored procedures also generate minimal network traffic, because the only data sent to the server is the name of the procedure and any parameters, rather than the whole query text.
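
As a very rough sketch (all of the object names below are made up, not from your database), a simple stored procedure might look like this:

-- Illustration only: table, column and procedure names are invented.
CREATE PROCEDURE dbo.uspGetTestsForCamera
    @CameraID int
AS
BEGIN
    SET NOCOUNT ON;
    -- Only the rows for the requested camera ever leave the server.
    SELECT TestID, cameraID, TestDate
    FROM dbo.tblTests
    WHERE cameraID = @CameraID;
END

From Access you then call it with a pass-through query whose SQL is nothing more than

EXEC dbo.uspGetTestsForCamera @CameraID = 3;

so all the work happens on the server and only the result set travels over the network.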
 
Thanks,

I will take a closer look at stored procedures - I am very new to this so don't know all the jargon yet! It seems like they are broader and much more flexible in terms of what they can do.

Interestingly I have just read that caching of the execution plan (or not) only really causes performance issues in very few cases.

However in either eventuality it sounds like writing properly optimised code will be better achieved using stored procedures than views.

For a lot of my simpler queries I suspect a pass-through query will be more than sufficient. But for network- or processor-intensive queries I will see whether stored procedures are more efficient than this.
 
Interestingly I have just read that caching of the execution plan (or not) only really causes performance issues in very few cases.

Many of the most advanced optimisation techniques only amount to milliseconds, so they only start to really count when there are large numbers of users frequently running the process.

However in either eventuality it sounds like writing properly optimised code will be better achieved using stored procedures than views.

Views don't support parameters so they really are of limited value. There are many good things about stored procedures and they are certainly worth learning to use.

A pass-through is just a single query, but a stored procedure can do multiple steps all in one call. They support output parameters and can create temporary tables. Take a look at the WITH construct (common table expressions) they support too. They are incredibly powerful.
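
As a sketch of what I mean (all names invented), here is a procedure that uses a WITH common table expression and returns a row count through an output parameter:

-- Illustration only: object names are invented.
CREATE PROCEDURE dbo.uspLatestTestPerCamera
    @RowsReturned int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- WITH defines a common table expression: the latest test date per camera.
    WITH LatestTests AS (
        SELECT cameraID, MAX(TestDate) AS LastTestDate
        FROM dbo.tblTests
        GROUP BY cameraID
    )
    SELECT t.TestID, t.cameraID, t.TestDate
    FROM dbo.tblTests AS t
    INNER JOIN LatestTests AS lt
        ON t.cameraID = lt.cameraID
       AND t.TestDate = lt.LastTestDate;

    -- Output parameter: how many rows the SELECT above returned.
    SET @RowsReturned = @@ROWCOUNT;
END

and you would call it with something like

DECLARE @n int;
EXEC dbo.uspLatestTestPerCamera @RowsReturned = @n OUTPUT;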

There are many good resources online. I found this one was really good.

http://sqlserverpedia.com/wiki/Transact_SQL_Coding_Techniques
 
Of course, the other thing with a multi-user Access-only system is to look at the design of that.

Many of the things Pat pointed out initially are equally relevant to Access systems. Careful table, query and form design can have an impact on general application speed anyway.
 
My personal experience is that after transferring from an Access 2003 back end to a SQL Server back end there was initially a deterioration in performance. After addressing a few bottlenecks I managed to get back to the same performance.

For me the transfer was not done for performance reasons but for pure concurrency reasons.
 
Thanks to everyone who has replied, the information is very helpful.

It sounds daft, but I had never really thought about why communications over a network become slow. Reducing the amount of data that needs to be transferred through careful selection of records, together with further optimisation of code, has to be the starting point for improving performance, no matter what hardware or software is being used. After all, upgrading equipment or purchasing new software is expensive compared with changing a few lines of code, or even many lines of code!

I have a form in my database for entering performance measurements for a gamma camera. The form is based on a table with the fields TestID (Autonumber), cameraID (Long), TestDate (Date) and then a series of test results (mostly Single). There is one set of test results per day.

Poor practice would be to base the form directly on the table, because then all of the records have to be retrieved each time the form is opened. Therefore it would be better to have some controls on the form where the user can select year and month.
The year could be a combobox based on a pass-through query that would return a single record per year, rather than every date.
The month combobox would contain fixed values.
Finally, the main form would return a limited number of records for the selected year and month, along the lines of the sketch below.
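
Roughly, I am imagining something like this (I have had to guess a table name of dbo.tblGammaTests; the real name will differ):

-- Pass-through query behind the year combobox: one row per year.
SELECT DISTINCT YEAR(TestDate) AS TestYear
FROM dbo.tblGammaTests
ORDER BY TestYear DESC;

-- Pass-through query (or stored procedure) behind the main form.
-- The FE would rebuild this string in VBA from the two combobox values;
-- the date range below is just a placeholder for March 2012.
-- A range on TestDate, rather than YEAR()/MONTH() in the criteria,
-- lets the server use an index on that column.
SELECT TestID, cameraID, TestDate
FROM dbo.tblGammaTests
WHERE TestDate >= '20120301' AND TestDate < '20120401';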

Is this the right way of thinking?
 
