Converting BE from Access 2003 to SQL Server

sparks80
Hi,

I have a multi-user Access 2003 database which is split into a back end (BE) and a distributed front end (FE). The operating system for all users is Windows XP SP3.

I have been researching transferring the tables (BE) and queries (FE) to Microsoft SQL Server. The reason for doing so is that the Access database becomes very slow with multiple users (up to 20 at any given time).

I know that there are differences in the SQL syntax of the two database applications, and I was wondering whether there are any tools that aid in converting Access queries to Microsoft SQL Server views.

Also, can anyone share any tips with me to ease the process?

Thanks
 
Hi sparks80,

I am currently working on a migration from Access to MySQL and I found this article very helpful. It is fairly straightforward.

http://www.kitebird.com/articles/access-migrate.html

However I am not sure how to transfer queries directly.

You may have noticed that the equivalent MySQL queries can be generated from the MS Access queries themselves. You might want to give that a try. Hope this helps.
 
Thank you so much for the reply.

We have Microsoft SQL Server at work, and I was thinking about transferring the data to that.

If there is too much work involved I will certainly consider MySQL - I have used this myself in the past for simple applications.
 
There is no reason to transfer the queries and doing so will cause SUBSTANTIAL work as you rewrite your application to use unbound forms.

To convert the app, use the Upsizing Wizard to transfer only the tables, and then link them to the app. Choose the option to add timestamps to all tables. This is a system field used when an update arrives at SQL Server to determine whether someone else has updated the record between the time you read it and the time you write it back. If you don't include this column, every column of the replacement record has to be compared with every column of the stored record to determine what needs changing.

Of course, back up before you do this because it does change your database. You should also make sure that your table and column names are suitable for SQL Server. If you've used good technique, avoiding special characters, embedded spaces, and reserved words, you won't have any errors.

Run the wizard once to get a feel for how it will handle your tables. If you get errors, delete the server database, delete the converted app, and go back to your backup. Fix the problems, make another backup, and try again. I've never had to do this more than twice, but that's because I understand the things that cause problems and avoid them in all applications, not just the ones I upsize.

Once you are happy with the upsize, you'll need to review your DAO and ADO code. You will need to add dbSeeChanges to many of your .Execute and .OpenRecordset statements:
Set rsMember = qdMember.OpenRecordset(dbOpenDynaset, dbSeeChanges)
This tells Access to retrieve the value of the identity (autonumber) column from SQL Server; without it, these statements fail with runtime error 3622 on tables that have an identity column.
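For example, in context (a sketch only; the table name and SQL are made up, and the same flag applies to action queries run with .Execute):

Dim db As DAO.Database
Dim rs As DAO.Recordset

Set db = CurrentDb

' Opening a dynaset on a linked SQL Server table that has an
' IDENTITY column raises runtime error 3622 without dbSeeChanges.
Set rs = db.OpenRecordset("tblMember", dbOpenDynaset, dbSeeChanges)

' Action queries against such tables need the flag too.
db.Execute "UPDATE tblMember SET Active = 0 WHERE MemberID = 42", _
    dbFailOnError + dbSeeChanges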

The other thing you'll need to look at is how your forms work. With a SQL Server back end you absolutely do not want your forms bound directly to tables, or even to queries with no selection criteria. If they are, they just sit there sucking data from the server until all rows have been transferred to memory on your computer. This could cause your DBA to threaten you with bodily harm for using all that bandwidth unnecessarily.

People who have slow Access applications think they can just switch to SQL Server and make it better. Not so. Access is actually faster, but we digress. The whole object of the switch is to reduce the data footprint of the app, and the only way to do that is to optimize your queries. Make sure your forms select the absolute minimum set of records and columns. If your forms currently rely on filters rather than queries with WHERE clauses, you'll need to modify the forms.
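As a rough illustration of the difference (table, field, and control names invented):

' Relying on a filter: the form is still bound to the whole table,
' so every row crosses the network before the filter is applied.
Me.RecordSource = "tblTest"
Me.Filter = "CameraID = 3"
Me.FilterOn = True

' Restricting in the RecordSource: the server does the selecting and
' only the rows and columns the form actually needs come back.
Me.RecordSource = "SELECT TestID, TestDate, Result1 " & _
                  "FROM tblTest WHERE CameraID = 3"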

And finally, review your queries. If you use UDFs (user-defined functions) or VBA functions, they cannot be directly translated to SQL Server's SQL, so Access will need to process them locally. If the functions are in the SELECT clause, that is no problem: the query is sent to the server sans functions, the data is returned, and Access applies the functions locally. If the functions affect the selection of rows, that becomes a problem. In that case, rather than sending the query to the server (Access attempts to make all queries "pass through", by the way), Access requests all rows from all the tables involved and runs the query locally. This should be avoided at all costs!
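To make the distinction concrete (a sketch; MyVBAFunction stands in for any UDF and the table is invented):

Dim strGood As String, strBad As String

' Fine: the WHERE clause can be sent to the server, and Access
' applies MyVBAFunction locally to the rows that come back.
strGood = "SELECT TestID, MyVBAFunction(Result1) AS Adjusted " & _
          "FROM tblTest WHERE CameraID = 3"

' Problem: the server cannot evaluate MyVBAFunction, so Access
' pulls every row of tblTest and does the filtering itself.
strBad = "SELECT TestID, Result1 " & _
         "FROM tblTest WHERE MyVBAFunction(Result1) > 0"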

Most of my applications end up as SQL Server or some other RDBMS but all can swap a Jet or ACE back end by simply relinking the tables. That is the power of Access.
 
Thank you Pat for taking the time to write such a detailed and clear explanation of how to make the transition. It is difficult sometimes to form a coherent view of the advantages and disadvantages of the various options when there is so much conflicting information out there.

Lots of people have expressed the opinion that SQL Server views are faster than Access queries, so it is very interesting to hear you say otherwise. It is possible that the advantage was gained by limiting the number of records retrieved, or by better optimised code.

Some of my queries are going to be slow by definition. One uses a custom function to perform a Newton-Raphson iterative calculation, and that is going to take time whatever system you use!
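For the curious, the shape of such a function is roughly this (the f(x) used here is just a placeholder):

Public Function SolveNewton(ByVal x As Double) As Double
    ' Newton-Raphson: repeat x = x - f(x)/f'(x) until f(x) is near zero.
    Dim fx As Double, dfx As Double
    Dim i As Integer
    For i = 1 To 50
        fx = x * x - 2#        ' placeholder f(x); this one finds Sqr(2)
        dfx = 2# * x           ' its derivative f'(x)
        If Abs(fx) < 0.0000000001 Then Exit For
        x = x - fx / dfx
    Next i
    SolveNewton = x
End Function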

Thankfully I have taken care with all tables and queries to use CamelCase, and I have always given tables, queries, forms, controls, fields, etc. a useful prefix (like tbl, qry, frm, ctl). This will hopefully make life a little easier.
 
Good naming standards save a lot of work in the long run.

It is kind of amazing, but Access with local Jet/ACE tables is faster than Access linked to the same tables in SQL Server. But we almost never run a database of any size with the BE on our C: drives. Once you move the BE to a server, the advantage lessens and speed becomes dependent on the network connection. The advantage of switching to SQL Server (if you make the necessary changes to your forms and queries) is that you have control over how much or how little data is actually transferred over the wire. If you request a single record from the server, that's all that gets sent.

People are very confused about the impact of pass-through queries and think they must use them to prevent Access from bringing down all the rows of a table just to get one record. But Access really is smarter than that. It attempts to "pass through" EVERY query; you just need to make sure you don't get in the way with UDFs and VBA functions. Access queries are slightly slower than pass-through queries, but not enough that you have to do something special. The reason is that Access has a conversation with the server for each query it sends, whereas pass-through queries are simply sent. In either case, the data returned is the same.

The other advantage of using Access queries rather than pass-through queries is that Access queries use Access's variant of SQL, whereas pass-through queries use the target database's variant of SQL. The upshot is easily seen if you ever have to convert your Access app from Jet/ACE to SQL Server, or from SQL Server to Oracle or DB2. With Access queries, you simply relink the tables and you're good to go. With pass-through queries, you'll be rewriting EVERY query because of the ever-so-slight syntax differences. I have several applications that, due to the clients' needs, are set to swap back ends at the push of a button.
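For what it's worth, the relinking itself can be a short DAO routine along these lines (a sketch; the connection strings are examples only):

Public Sub RelinkTables(ByVal strConnect As String)
    ' strConnect might be, for SQL Server:
    '   "ODBC;DRIVER=SQL Server;SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes"
    ' or, for a Jet back end:
    '   ";DATABASE=\\MyServer\Data\MyBE.mdb"
    Dim tdf As DAO.TableDef
    For Each tdf In CurrentDb.TableDefs
        If Len(tdf.Connect) > 0 Then    ' linked tables only
            tdf.Connect = strConnect
            tdf.RefreshLink
        End If
    Next tdf
End Sub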
 
For most of my forms and reports I already retrieve a limited recordset, so I can't see any major difficulties. I can't think of any instances where I have included custom functions in the criteria of my queries, but it is useful to know to avoid these as a general principle. I can think of several examples where I have used custom functions in the SELECT statement, but these benefit from a limited recordset anyway, as VBA functions are usually slower than native functions.

I'm into uncharted territory with pass-through queries, as I have only ever dealt with Microsoft Access in any meaningful sense. I have just read through a good tutorial, and it makes sense. Hopefully as you say with good design most of the Access queries will work sufficiently without the need to convert them.

Thanks again for your invaluable advice
 
The ultimate speed solution with SQL Server is the Stored Procedure.

A pass-through query must be compiled, and an execution plan created by the server, each time it is run. Stored procedures are precompiled by the server and are blisteringly fast.

Stored procedures also generate minimal network traffic, because the only thing sent to the server is the name of the procedure and any parameters, instead of the whole query.
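From Access, one common way to call one is a pass-through query whose SQL is just the EXEC statement (a sketch; the procedure name and connection string are invented):

Dim qdf As DAO.QueryDef
Dim rs As DAO.Recordset

Set qdf = CurrentDb.CreateQueryDef("")   ' temporary pass-through query
qdf.Connect = "ODBC;DRIVER=SQL Server;SERVER=MyServer;" & _
              "DATABASE=MyDb;Trusted_Connection=Yes"
qdf.SQL = "EXEC dbo.uspGetTests @CameraID = 3"   ' all that goes over the wire
qdf.ReturnsRecords = True
Set rs = qdf.OpenRecordset(dbOpenSnapshot)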
 
Thanks,

I will take a closer look at stored procedures. I am very new to this, so I don't know all the jargon yet! It seems like they are broader and much more flexible in terms of what they can do.

Interestingly I have just read that caching of the execution plan (or not) only really causes performance issues in very few cases.

However, in either eventuality, it sounds like properly optimised code is better achieved with stored procedures than with views.

For a lot of my simpler queries I suspect a pass-through query will be more than sufficient. But for network- or processor-intensive queries I will see whether stored procedures are more efficient than that.
 
sparks80 said: "Interestingly I have just read that caching of the execution plan (or not) only really causes performance issues in very few cases."

Many of the most advanced optimisation techniques only amount to milliseconds, so they only start to really count when there are large numbers of users frequently running the process.

sparks80 said: "However, in either eventuality, it sounds like properly optimised code is better achieved with stored procedures than with views."

Views don't support parameters, so they really are of limited value. There are many good things about stored procedures, and they are certainly worth learning to use.

A pass-through is just a single query, but a stored procedure can do multiple steps in one call. Stored procedures support output parameters and can create temporary tables. Take a look at the WITH construct they support too. They are incredibly powerful.
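Getting at an output parameter from Access is easiest with ADO (a sketch; the server, database, procedure, and parameter names are invented):

Dim cn As New ADODB.Connection
Dim cmd As New ADODB.Command

cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
        "Initial Catalog=MyDb;Integrated Security=SSPI;"

With cmd
    .ActiveConnection = cn
    .CommandType = adCmdStoredProc
    .CommandText = "dbo.uspCountTests"
    .Parameters.Append .CreateParameter("@CameraID", adInteger, adParamInput, , 3)
    .Parameters.Append .CreateParameter("@TestCount", adInteger, adParamOutput)
    .Execute
    Debug.Print .Parameters("@TestCount").Value   ' value set by the procedure
End With

cn.Close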

There are many good resources online. I found this one to be really good.

http://sqlserverpedia.com/wiki/Transact_SQL_Coding_Techniques
 
Of course, the other thing with a multi-user Access-only system is to look at the design of that.

Many of the things Pat pointed out initially are equally relevant to Access systems. Careful table/query/form design can improve general application speed anyway.
 
My personal experience is that after transferring from an Access 2003 back end to a SQL Server back end, there was initially a deterioration in performance. After fixing a few bottlenecks I managed to get back to the same performance.

For me, the transfer was done not for performance reasons but purely for concurrency reasons.
 
Thanks to everyone who has replied, the information is very helpful.

It sounds daft, but I had never really thought about why communications over a network become slow. Reducing the amount of data that needs to be transferred, through careful selection of records and further optimisation of code, has to be the starting point for improving performance, no matter what hardware or software is being used. After all, upscaling equipment or purchasing new software is expensive compared with changing a few lines of code, or even many lines of code!

I have a form in my database for entering performance measurements for a gamma camera. The form is based on a table with the fields TestID (Autonumber), cameraID (Long), TestDate (Date) and then a series of test results (mostly Single). There is one set of test results per day.

Poor practice would be to base the form directly on the table, because then all of the records would have to be retrieved each time the form is opened. Therefore it would be better to have some controls on the form where the user can select the year and month.
The year could be a combobox based on a pass-through query that would return a single record per year, rather than every date.
The month combobox would contain fixed values.
Finally, the main form would return a limited number of records for the selected year and month.
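In code I imagine the form doing something roughly like this (table and control names made up):

Private Sub cmdShow_Click()
    ' First day of the selected month, and first day of the next month.
    Dim dtStart As Date, dtEnd As Date
    dtStart = DateSerial(Me.cboYear, Me.cboMonth, 1)
    dtEnd = DateSerial(Me.cboYear, Me.cboMonth + 1, 1)   ' month 13 rolls into the next year

    ' Only the selected month's rows come across the wire.
    Me.RecordSource = "SELECT * FROM tblTest " & _
                      "WHERE TestDate >= #" & Format(dtStart, "yyyy-mm-dd") & "# " & _
                      "AND TestDate < #" & Format(dtEnd, "yyyy-mm-dd") & "#"
End Sub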

Is this the right way of thinking?
 
That's a start. What I do with forms that access large recordsets is create the form with an embedded query that selects ID = 0, so it never returns any records. Then I use combos and text boxes on the form to "filter" the data by whatever makes sense for that form. I use the selection criteria to populate a different query and then replace the RecordSource with that query. Normally the criteria are simple enough that I use a saved querydef as the filtering RecordSource query. Occasionally the filtering is sufficiently complex that I build an SQL string in code. Keep in mind that this method causes bloat! If you use it, you must compact regularly.
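A sketch of that pattern (the query, table, and control names are placeholders):

' The form opens with an embedded RecordSource such as:
'   SELECT * FROM tblTest WHERE TestID = 0
' It matches nothing, so the form opens instantly and empty.

Private Sub cmdSearch_Click()
    ' Rewrite the saved filtering query from the user's criteria,
    ' then point the form at it. Rewriting querydef SQL is what
    ' causes the bloat mentioned above.
    Dim qdf As DAO.QueryDef
    Set qdf = CurrentDb.QueryDefs("qryTestFilter")
    qdf.SQL = "SELECT * FROM tblTest WHERE CameraID = " & Me.cboCamera
    Me.RecordSource = "qryTestFilter"
End Sub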
 
