Help with my next step (MySQL?)

wolfwolf

New member
Local time
, 21:17
Joined
May 25, 2022
Messages
5
Hi!
Small intro:
I have a small construction company here in Brazil, and I want it to grow. To do this I need a system to store, relate and report all the data I have, and Excel can't handle it anymore. So I reached the conclusion that a database is the way to go, and I love the Access FE; it has all the functions I need. Except for an online connection.
I would really like to put my database online. Going offline is unthinkable today, and in the future I will need to be able to access this data when traveling, working from home, etc.
I have only 3 people who will actively use the database right now.

Where I am right now:
I have built my main functionality in Access, and it works perfectly. So the next step is to test the sharing functionality. I don't have a server or LAN in my company, just wi-fi. I know Access doesn't work well with cloud storage and can go bad over wi-fi. My first bet was MySQL.
So I exported my tables to a MySQL db on my website as a test. The speed drop was insane! I have only some sample data to test with, and it takes 1 to 5 seconds to open every form, even on a 300 Mbps connection. (How is this possible? By comparison, several people can watch 1080p videos simultaneously on 300 Mbps!)

My question is: is it possible I am doing the queries SO WRONG that the speed is this slow? Or is MySQL just that bad? If I used SQL Server or Azure, would it be faster?

What do people mean by using SQL Server / MySQL / Azure as the BE and Access as the FE? Do I need to split the db to do this? Or do I simply link the external db to my accdb file (this is what I did with MySQL)? If I split, I end up with 2 files; what do I do with the BE file?

I don't want to use RDC; it would just be a paid temporary solution. If it has to be paid, I would choose Azure or similar.

My last resort is to use it over wi-fi for now, and in the future move it to some online MS service.
 
The problem usually is that you have to adjust your queries so that the server does the work, in what is called a pass-through operation. If your migration didn't change anything, you might be shuffling data back and forth between the FE file and the BE in the SQL server. If you don't adjust your queries to all be pass-through, you have radically increased the amount of work being done over the network, which is where "slow" comes in.
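A minimal VBA (DAO) sketch of what a saved pass-through query looks like from the Access side, assuming an existing ODBC data source; the DSN name, query name, and table/column names here are placeholders, not anything from this thread:

```vba
' Create a saved pass-through query with DAO.
' "MyServerDSN", "qryPT_OrderTotals", and the SQL are placeholders.
Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.CreateQueryDef("qryPT_OrderTotals")
qdf.Connect = "ODBC;DSN=MyServerDSN;"            ' set Connect BEFORE SQL
qdf.SQL = "SELECT CustomerID, SUM(Amount) AS Total " & _
          "FROM Orders GROUP BY CustomerID"      ' executed entirely on the server
qdf.ReturnsRecords = True
qdf.Close
```

Because the SQL text is sent to the server verbatim, it must be written in the server's dialect (MySQL, in this case), not Access SQL.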

I am not an expert in this, so all I can do is offer this answer and hope that one or more of my colleagues will step in.
 
I don't have a server ou LAN in my company, just wi-fi
You need a hard-wired connection.
 
An Azure / SQL / MySQL cloud-based back end will work fine, but you need to design your front end to not drag every record over the WAN.

Use Pass-Through Queries, Views or Stored procedures to do the heavy lifting.
Open forms without any records or only a single record, then use controls to filter or display more records.
Keep local FE copies of lookup table data that isn't changed frequently and update them on DB opening. Use those for combos on forms.
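A rough sketch of the "open forms without any records" idea; the form event, control name, and table name are invented for illustration:

```vba
' Open the form with the table's structure but zero rows,
' then fetch a single record only when the user picks one.
Private Sub Form_Open(Cancel As Integer)
    ' WHERE 1 = 0 brings the column structure over the wire, but no data
    Me.RecordSource = "SELECT * FROM tblCustomers WHERE 1 = 0"
End Sub

Private Sub cboCustomer_AfterUpdate()
    ' pulls just the one record the user asked for
    Me.RecordSource = "SELECT * FROM tblCustomers " & _
                      "WHERE CustomerID = " & Me.cboCustomer
End Sub
```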

As @arnelgp said - get everything hard-wired if it's your own server; wireless is useless for server data storage.
 
Use Pass-Through Queries, Views or Stored procedures to do the heavy lifting.
But don't get too carried away with pass-through. Simple queries are managed as pass-through via ODBC. If a query cannot be translated for the server, then it can be very slow. Joins between local tables and remote tables are to be avoided. Anything that requires Access resources, such as a user-defined function, will be a problem if selection is based on it.

Appropriate indexing of tables is vital. Included columns in indexes can make a huge difference to the query performance. Basically you create an index on a column then include the data from other columns so the engine can get all the information it needs by looking no further than the index.
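For example, in SQL Server syntax (table and column names are illustrative; MySQL has no INCLUDE clause, so there the usual equivalent is a wider composite index):

```sql
-- A "covering" index: the key column plus included data columns,
-- so the engine never has to touch the table itself for this query shape.
CREATE INDEX IX_Orders_CustomerID
    ON Orders (CustomerID)
    INCLUDE (OrderDate, Amount);
```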

There are so many things that can be done to improve performance with a database server backend, but as you found, the performance gains are not at all automatic and require that the designer work with the tools provided.
 
See this link about improving performance. It's more about the misconception that SQL Server/MySQL et al. will be faster, but it shows the sort of things you would need to consider to migrate.

Not sure why you don't want to consider a terminal server via RDC, or a VPN, if it is about cost - Azure etc. all have a cost.

Do I need to split the db to do this?
It should be split as a matter of course. You should never use an unsplit db, except perhaps during development or if there is only one user. Even then I recommend splitting, if only so you keep the two parts separate if something goes wrong. If unsplit, you risk losing everything.
If I split I end up with 2 files, what do I do with the BE file?
The FE goes on your local machine, each user having their own copy. The BE goes on a server somewhere (which you don't have), so a machine that can be designated as a server, where everyone has appropriate rights.
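When the BE moves, or each user gets a fresh FE copy, the table links can be repointed in code at startup - a sketch, with the DSN name as a placeholder:

```vba
' Repoint every linked table at the current BE on startup.
Dim db As DAO.Database
Dim tdf As DAO.TableDef
Set db = CurrentDb
For Each tdf In db.TableDefs
    If Len(tdf.Connect) > 0 Then          ' only linked tables have a Connect string
        tdf.Connect = "ODBC;DSN=MyServerDSN;"
        tdf.RefreshLink
    End If
Next tdf
```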
 
If I split I end up with 2 files, what do I do with the BE file?

If you split and ALL of your tables go into an SQL Server (or other SQL engine) database, that SQL database is your BE. You won't have a "native" Access file as the BE, because all of the tables will be served to you by your SQL server engine. In fact, if you DO have a "native" BE file that has anything in it, you might have done the split incorrectly.

Ideally, you have a no-data FE file (all forms, reports, modules, macros, and queries - but NO tables) and a no-code BE (tables and nothing else). You distribute the FE and NEVER allow it to be actively shared by two or more people. Everybody gets a private copy. Each FE should POINT to the BE data wherever it resides. Sharing the FE leads to lock arbitration over the network, which - I guarantee - will slow you down to a crawl. The more people sharing, the worse it will get.

Your slowdowns occur when you use the network more than you should, because each step managed in the PC hosting the FE file has to drag data across the net and then (in some cases) write it back across the net. This makes your operations run at network speed rather than memory or internal disk speed. From your network physical description, the speed differential may actually be as much as thousands or tens of thousands of times slower than it could be.
 
Thank you all for the answers!

Ok, no wi-fi, I got it. MySQL is the way to go.

Not sure why you don't want to consider terminal server as a RDC or use a VPN if it is about cost, Azure etc all have a cost.
As all of them have a cost, at least an online db can be used in the future to make web apps or PowerApps.

If you split and ALL of your files go into an SQL Server (or other SQL engine) data file, the SQL file is your BE file.
I have exported all my tables to MySQL, and now, when I split the file, it says: "There are no tables in this database. The BE database will be empty", and creates an empty BE. The resulting "FE" is exactly the same as it was before splitting, with the linked tables in it. Am I doing something wrong? Or are the linked tables supposed to be in the FE file?
 
You use the External Data tab -> New Data Source -> From Other Data Sources -> ODBC Datasource.
Follow the wizard.
 
You no longer need a BE Access database. You moved the tables to another database engine.
Your tables are now stored in SQL/MySQL. This is now your backend.
 
You no longer need a BE Access database.

Providing you have created the required tables in MySQL and successfully copied all your data across. So far as I am aware, MySQL doesn't support lookup fields, multi-value fields, OLE objects or attachment fields. Not sure about calculated fields. If you have any of these, they will need to be removed and alternative solutions found.
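For instance, an Access multi-value field is normally replaced on the server side by a junction table - a sketch with invented table and column names:

```sql
-- One row per (project, tag) pair replaces the Access multi-value field.
CREATE TABLE ProjectTags (
    ProjectID INT NOT NULL,
    TagID     INT NOT NULL,
    PRIMARY KEY (ProjectID, TagID),
    FOREIGN KEY (ProjectID) REFERENCES Projects (ProjectID),
    FOREIGN KEY (TagID)     REFERENCES Tags (TagID)
);
```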
 
an online db can be used in the future to make web apps or PowerApps.
Don't know about web apps, but MS is bringing out an Access Dataverse Connector that can be used with PowerApps. Currently under beta testing, but google to find out more.
 
Pretty much any cloud-based DB can be used with a web app.
Power Apps, I think, can also connect with most databases, but some require a premium connector.
 
I just posted the following in response to a similar question. You can provide a web-based approach by using Apache in conjunction with your database.
Have you considered whether MS Access would be the appropriate approach?
This is an Access website, so responses will be along the lines of supporting MS Access.
I would suggest, as an alternative to MS Access, looking at open-source databases that are free to acquire and use. That will also avoid concerns over ethics and piracy, which have been mentioned in prior thread posts. With "free", you get "cheap", which is one of your stated goals.

 
You no longer need a BE Access database. You moved the tables to another database engine.
Your tables are now stored in SQL/MySQL . This is now your backend.
Oh, ok, so I had already done all the steps. Everybody was talking about splitting the db, but actually there was nothing left to do.

Use Pass-Through Queries, Views or Stored procedures to do the heavy lifting.
Open forms without any records or only a single record, then use controls to filter or display more records.
Keep local FE copies of lookup table data that isn't changed frequently and update them on DB opening. Use those for combos on forms.
This is what I need to do in order to have a reasonably fast connection. Now I need to do my research and understand all these methods.

An Azure / SQL / MySQL cloud-based back end will work fine
Last questions, still related to this thread:
1- Is Azure / SQL Server any better (or faster) than MySQL?
2- If I set everything up in MySQL now, will it be easy to move to Azure later? Or does it have a different language, structure or something that makes the move a problem as big as moving from an Access BE to MySQL?
 
This is what I need to do in order to have a reasonably fast connection.
Not quite true - the objective is to minimise the amount of traffic - the data passed between the BE and the FE. Connection speed has little to do with it. So you will be aiming to do as much data processing on the server as possible.

With a front end on your local machine and the BE on the web, put the thought of performance improvement to the back of your mind. At best you will have something similar. Your reason for migrating is, or should be, more to do with access from anywhere, ability to use powerapps and perhaps data volume.

Is Azure / SQL Server any better (or faster) than MySQL?
They are all about the same as Access in an equivalent environment. There is some functionality within Azure/SQL Server/MySQL which performs better than the equivalent in Access, and typically these days the server will be mounted on SSDs rather than hard disks, but the main difference is the number of cores used. Access only uses one. The others can use more - it depends on what you are paying for. Also, these RDBMSs have the ability to prioritise certain processing - if your particular BE is low down the pecking order for resources, it can be very slow. Again, it depends on what you are paying for.

If I set everything in MySQL now, it's easy to move to Azure later?
Impossible to say without knowing the detail. The languages are all broadly the same, but there are differences. Some syntax is the same, some slightly different, and some functionality is in one but not the other. So it can be done; whether it is easy or not is another matter. A migration will take time either way.
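Two small examples of the kind of dialect differences involved (table and column names are illustrative):

```sql
-- MySQL: LIMIT and IFNULL
SELECT CustomerID, IFNULL(Total, 0) AS Total
FROM Orders ORDER BY Total DESC LIMIT 10;

-- SQL Server / Azure SQL: TOP and ISNULL
SELECT TOP 10 CustomerID, ISNULL(Total, 0) AS Total
FROM Orders ORDER BY Total DESC;
```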
 
This is truly a large many-faceted subject with lots of considerations.

MySQL is one good option, but not the only one. Don't zone in on it and not consider other options, such as a remote hosted SQL Server or SQL Azure. There are also other server based database engines, although not so well-known, perhaps.

WiFi is mostly bad in situations where you have a local Access FE and a local Access BE on premises. If you are working with a server based database, such as SQL Server or MySQL, the concern is lower. Corruption of the data is less likely and that's the main concern with WiFi.

You ask whether queries are the problem with speed. Possibly, but I think the internet connection between your premises and the remotely hosted (i.e. "Cloud") database, such as MySQL or SQL Server, is much, much more important. And that connection is also more important than the actual database engine you choose. If you have faster internet, it will be faster; if not, not. Once the data is on the server, that difference matters less. Don't focus on the end point, focus on the pathway to it.

It might even make more difference which Cloud Host you choose, for example. The hardware on which they host their database engine and the amount of RAM they allocate are just as crucial if not more so than the actual database engine they run.

My experience with MySQL is rather limited, but I know that there are differences in the SQL used by all database engines, and that's where you'll have the most issues. So, if you ultimately intend to be on SQL Azure, start on SQL Azure. Why not? Because you can get a hosted MySQL database for $5.00 a month, or free as part of your web host package? Is that REALLY the most important factor in running a business-critical application? Think carefully here. It may be that a free MySQL database on your web host's server is good enough. But this is supposed to be a business-critical application, is it not?

Interface design in Access is also highly important. Some previous posters have mentioned key factors. The basic principle here is that you should always move the least amount of data possible back and forth across the long internet wire. For example, if you follow the traditional Access approach in which a table is bound to a form and all records are available as soon as they are downloaded from the back end, then that's what will happen, except that you are waiting for all of the records to be pulled down from "the Cloud". There is a lot of excellent material around on designing Access interfaces for SQL Server database backends, so I won't rehash that. Again, anything you can do to minimize the need to open a connection to a remote database and pull a bunch of records through it is good. Anything you do that carelessly pulls a bunch of records is bad.

That leaves aside a lot of other important considerations, such as security of the data (better in a server-based database) and the power of the database engine itself. I've minimized those above, in order to emphasize what I think is more crucial to speed. However, they matter more as the amount of data you have to process grows over time.
 
