Change to NAS server problems (faster database)

viperlodz

New member
Local time
Today, 08:31
Joined
Apr 11, 2022
Messages
12
Hi,
I have a quite big database in MS Access and I'm having some problems.

Introduction:
I have 7 back-end databases (.accdb) located in a shared folder on Windows Server 2012. Each database is about 80-200 MB.
From these databases I created one front end.
Some tables have about 1,500,000 records. This is a problem because calculating the money takes about 1-3 minutes on this server; I created a button to calculate the balances.
About 6-8 users use this front end. Some days they enter a lot of new records (about 1,000-5,000).


I have two problems.
1. For about a year now the database sometimes becomes corrupted. Everyone must exit the program, and opening the corrupted database (back end) to repair it makes it work again. It's strange, because it worked well for about 6 years. I add new functionality every month, so I don't know what started causing this problem. It's also strange that it's sometimes a different .accdb database. I found a similar thread, but without a solution: error-unrecognized-database.322125
I don't know where to look for the problem.

2. Because of the speed problem I have tried some things to make it faster. First I tried changing to SQL Server: I migrated the database and tried relinking the tables to the SQL server.
I tried MariaDB (MySQL) and it was very slow. I don't know why, or where to look for the problem.
After that I created a local, read-only copy of the databases. Users run a .bat file which copies the databases to the local D: drive, and then they can use the database at good speed. I also upgraded the computers to better ones.
Now I am trying to replace the very old server running Windows Server 2012 with a better QNAP NAS TS-473 with 32GB of RAM, two 512GB M.2 disks, and 2x10TB drives. I use the SSDs as a cache for the folder with the .accdb databases. I created credentials for all users and mapped this disk on the Windows 10 Pro computers. All computers have wired connections through a very good router and switches: 1Gb to the clients, and the server has two 2.5Gb ports at full speed. With normal file transfers I can copy at full speed (about 110 MB/s across 5 computers).
I thought it would be better than the old server, but it isn't. On the old server it takes about 1 minute to calculate the balances. On the new QNAP it takes about 9 minutes. I have read about this and I think it's because of the Linux system. The old server has a normal 2TB disk.
I tried another method: an iSCSI disk. I thought it had solved my problem (it takes about 25 s to calculate), but after an hour I saw that it doesn't synchronize information between the computers... I have seen somewhere that I should use GFS2, ZFS, or GPFS, but I can't find out whether that would solve my speed problem.
I have only one more idea: map this disk by iSCSI to the Windows server and share that disk with the users.
Maybe you have a better solution.

Thanks for the help. I have spent a lot of hours looking for a solution without any success.
 

Ranman256

Well-known member
Local time
Today, 02:31
Joined
Apr 9, 2015
Messages
4,102
One of the Microsoft updates introduced a problem that frequently crashes and corrupts multi-user network databases.
We were getting this daily. Then there's a fix; run this on each PC.

Put this text into a text file named CacheFix.reg, then run it on the PC:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"FileInfoCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
"DirectoryCacheLifetime"=dword:00000000

We haven't had a crash in weeks.
 

GPGeorge

Grover Park George
Local time
Yesterday, 23:31
Joined
Nov 25, 2004
Messages
678
One of the Microsoft updates introduced a problem that frequently crashes and corrupts multi-user network databases.
We were getting this daily. Then there's a fix; run this on each PC.

Put this text into a text file named CacheFix.reg, then run it on the PC:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"FileInfoCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
"DirectoryCacheLifetime"=dword:00000000

We haven't had a crash in weeks.
Do you have a link to MS documentation for this fix? Is it something that we can bookmark as a reference?
 

Minty

AWF VIP
Local time
Today, 07:31
Joined
Jul 26, 2013
Messages
9,288
A QNAP NAS box is not designed to be a fast file server; it is a pretty quick data storage device.
Admittedly quite a well-specified one, but not the same as a dedicated RAID SSD file server with 100+ GB of RAM.

Moving to a SQL Server back end will help with the database size issues, but you would also need to use the power of the server to manipulate your data, i.e. get it to do the hard work locally rather than shifting everything over the network.
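To illustrate what "letting the server do the hard work" means in practice, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for SQL Server; the table and column names are invented for the example:

```python
import sqlite3

# In-memory database standing in for the server-side back end.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (customer_id INTEGER, amount REAL)")
con.executemany(
    "INSERT INTO payments VALUES (?, ?)",
    [(1, 100.0), (1, 50.0), (2, 200.0), (2, 25.0)],
)

# Slow pattern: pull every row across the network, then filter and sum locally.
rows = con.execute("SELECT customer_id, amount FROM payments").fetchall()
local_total = sum(amount for cid, amount in rows if cid == 1)

# Fast pattern: let the server compute the aggregate and return a single row.
(server_total,) = con.execute(
    "SELECT SUM(amount) FROM payments WHERE customer_id = ?", (1,)
).fetchone()

print(local_total, server_total)  # same answer, but only one row crossed the wire
```

With 1.5 million rows the first pattern ships the whole table over the network every time; the second ships one number.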
 
Last edited:

viperlodz

New member
Local time
Today, 08:31
Joined
Apr 11, 2022
Messages
12
One of the Microsoft updates introduced a problem that frequently crashes and corrupts multi-user network databases.
We were getting this daily. Then there's a fix; run this on each PC.

Put this text into a text file named CacheFix.reg, then run it on the PC:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"FileInfoCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
"DirectoryCacheLifetime"=dword:00000000

We haven't had a crash in weeks.
OK, I will try this. Thanks.

A QNAP NAS box is not designed to be a fast file server; it is a pretty quick data storage device.
Admittedly quite a well-specified one, but not the same as a dedicated RAID SSD file server with 100+ GB of RAM.

Moving to a SQL Server back end will help with the database size issues, but you would also need to use the power of the server to manipulate your data, i.e. get it to do the hard work locally rather than shifting everything over the network.
I use QTS Hero with ZFS. The system is on a 10TB RAID with the M.2 SSDs as a read/write cache. Disk read is about 3430 MB/s and write about 2600 MB/s.
I can also change that and reinstall the system on the SSDs without the cache.
Also, 32GB of RAM is, in my opinion, OK for this database size.
I think I need something similar to iSCSI where I can read and write, because iSCSI is very fast.

So is it better to buy a new server with Windows Server 2019?

What do you mean by doing the hard work locally? Can you give me some examples, or tell me what I should search for?
 

viperlodz

New member
Local time
Today, 08:31
Joined
Apr 11, 2022
Messages
12
I tried that other idea. I mounted the iSCSI disk on Windows Server 2012 and shared the disk with the other users. It seems I get about 30 seconds.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 01:31
Joined
Feb 28, 2001
Messages
22,821
I do not claim expertise on this, but when you switch to SQL Server, there are things you need to do to some queries to prevent Access from trying to load huge tables. You need to consider whether your big queries are PASSTHRU or still run on the front end. There is also the issue that, since you have multiple back-end files, there is a question of JOIN queries that touch tables in different back-ends. That would be a STRONG reason to combine it all into a single SQL Server (or equivalent) database.
 

CJ_London

Super Moderator
Staff member
Local time
Today, 07:31
Joined
Feb 19, 2013
Messages
14,401
From these databases I created one front end.
....
About 6-8 users use this front end.
Not sure if this is a translation issue, but each user should have their own copy of the front end, located on their local machine. Only the back end(s) should be on the server. If your users are all using the same copy, this implies the front-end copy is on the server, which means you are generating a lot more network traffic and taking a performance hit. Further, it is probably the reason you are getting corruption, as Access is not intended to be used in this way.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 01:31
Joined
Feb 28, 2001
Messages
22,821
Good question, CJ. IF this is a true "shared front-end" case then the network is ALSO getting clobbered by file lock management traffic.
 

viperlodz

New member
Local time
Today, 08:31
Joined
Apr 11, 2022
Messages
12
Not sure if this is a translation issue, but each user should have their own copy of the front end, located on their local machine. Only the back end(s) should be on the server. If your users are all using the same copy, this implies the front-end copy is on the server, which means you are generating a lot more network traffic and taking a performance hit. Further, it is probably the reason you are getting corruption, as Access is not intended to be used in this way.
Of course, all users have their own copy locally on Windows 10 Pro.
So there are 6 front ends and the shared back-end databases (7 databases, because I split the big database into separate files to speed it up).

Two years ago I had another idea: buy a Windows server and have the users share Windows sessions on it. In that case everything would work locally, but it was a very big price just to test it (the Windows licenses and a very powerful Windows server).

Today I got information from QNAP. They wrote to me to change the minimum SMB version. I had SMB 2 and I changed it to 3. We will see about the speed.
I checked it, but with a normal share from the QNAP it still takes about 9 minutes, so it didn't help.
 
Last edited:

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 01:31
Joined
Feb 28, 2001
Messages
22,821
The SMB protocol doesn't change that much when you upgrade to v2 or v3, because they are still the same types of exchange. The differences, as I understand it, are basically in how the network data packets are handled internally, plus a couple of new options for data buffer optimization in high-traffic situations. If the problem were caused by data buffer issues, it would imply that you had a limited amount of physical memory in the users' workstations, or that they were mistuned with respect to dynamic memory (a.k.a. the page file), which is on the System Control Panel under the Advanced settings for Virtual Memory.

Using Task Manager, you should be able to easily see (on the Performance pages) how your memory is holding out. As long as you don't have 0 for "physical memory available" you should be OK.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 02:31
Joined
Feb 19, 2002
Messages
36,352
I split the big database into separate files to speed it up
I don't believe that improves performance, and it could easily make it worse. The really bad thing, though, is that it eliminates all Referential Integrity between tables located in different BEs. If your tables are too big to be all together in one BE, then it is important to switch to SQL Server sooner rather than later, because RI is just too important to give up.
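A small illustration of what referential integrity buys you, sketched here with Python's sqlite3 (the table names are invented): the engine refuses a child row whose parent doesn't exist. This only works when both tables live in the same database, which is why splitting related tables across back-end files forfeits it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in to FK enforcement
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
con.execute(
    "CREATE TABLE payments ("
    " id INTEGER PRIMARY KEY,"
    " customer_id INTEGER REFERENCES customers(id))"
)
con.execute("INSERT INTO customers (id) VALUES (1)")

con.execute("INSERT INTO payments VALUES (1, 1)")  # parent exists: accepted
try:
    con.execute("INSERT INTO payments VALUES (2, 99)")  # no customer 99
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True  # the engine blocked the orphan row

print(orphan_rejected)
```

With the tables split across separate .accdb files, no engine sits between the two inserts, so the orphan row would silently succeed.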
 

viperlodz

New member
Local time
Today, 08:31
Joined
Apr 11, 2022
Messages
12
The SMB protocol doesn't change that much when you upgrade to v2 or v3, because they are still the same types of exchange. The differences, as I understand it, are basically in how the network data packets are handled internally, plus a couple of new options for data buffer optimization in high-traffic situations. If the problem were caused by data buffer issues, it would imply that you had a limited amount of physical memory in the users' workstations, or that they were mistuned with respect to dynamic memory (a.k.a. the page file), which is on the System Control Panel under the Advanced settings for Virtual Memory.

Using Task Manager, you should be able to easily see (on the Performance pages) how your memory is holding out. As long as you don't have 0 for "physical memory available" you should be OK.
I will check it, but in my opinion it's not a memory problem, because all the PCs are quite powerful: Ryzen 5650 with 16GB of DDR4-3200 memory. When it works with the Windows server it's a lot faster (about 10x). The Windows server has only one 1Gb Ethernet port. The new QNAP has two 2.5Gb ports, and we changed the router to use both 2.5Gb ports. It must be a problem with SMB and Linux.

I don't believe that improves performance, and it could easily make it worse. The really bad thing, though, is that it eliminates all Referential Integrity between tables located in different BEs. If your tables are too big to be all together in one BE, then it is important to switch to SQL Server sooner rather than later, because RI is just too important to give up.
I have tested it, and it's a very good idea. When I first split the database from one into three databases it was about 3 times faster. I didn't test again after the next split, but I will try recombining them and check the speed. I need to do some tricks to make it work well with these seven databases: in the front end's first view I open a blank table from each database so the connections to the important databases don't close (as I remember, it was 5 databases). I also have some databases where I migrate old data to archive it. Each year I move some data there and reduce the size of the important databases.

I must read more about changing to SQL Server, but all my tests didn't go well. It's a problem because the users run a lot of reports, and each report must get data from about 30 tables...
We are thinking about rewriting the whole program from scratch in PHP + SQL/MariaDB, but that would take about a year and cost a lot. I don't know how many things I must change in MS Access to get good speed with a SQL Server connection.
I have another question about SQL: what would be better now? I did tests about 2-3 years ago with MS SQL and MySQL over ODBC. Which is the better way to go? I think MS SQL, but maybe Oracle?

Daniel Pineault provided that registry fix at :

I'm unsure whether it's any better than the original disable-leasing fix
I did this trick, and since then it works without a crash :D
 

Minty

AWF VIP
Local time
Today, 07:31
Joined
Jul 26, 2013
Messages
9,288
If you are joining 30 tables in Access from a SQL Server back end, it will very possibly be slow.
If you create a View (a stored query) in SQL Server, it should be very quick, assuming the relevant indexes etc. are set up.

This is what we mean by "passing the hard work to the server": do the complex maths and/or functions, lookup-table joins, etc. on the server, then simply display the result in Access.
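As a rough sketch of that suggestion, again using Python's sqlite3 in place of SQL Server (all names invented): the join and aggregation are defined once in a view on the server, and the client only ever selects the finished rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE payments (customer_id INTEGER, amount REAL)")
con.execute("INSERT INTO customers VALUES (1, 'Kowalski')")
con.executemany("INSERT INTO payments VALUES (?, ?)", [(1, 100.0), (1, 50.0)])

# The join and aggregation live server-side, inside the view definition.
con.execute("""
    CREATE VIEW customer_totals AS
    SELECT c.name, SUM(p.amount) AS total
    FROM customers c JOIN payments p ON p.customer_id = c.id
    GROUP BY c.id, c.name
""")

# The front end only asks the view for the already-joined result.
row = con.execute(
    "SELECT name, total FROM customer_totals WHERE name = ?", ("Kowalski",)
).fetchone()
print(row)
```

In Access you would link to such a view (or query it) instead of linking the 10-30 base tables and joining them in the front end.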
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 02:31
Joined
Feb 19, 2002
Messages
36,352
each report must get data from about 30 tables...
Why? Have you broken up data into multiple tables that should be in a single table?
I don't know how many things I must change in MS Access to get good speed with SQL server connection.
The only way that converting to SQL Server will ever be faster (or even as good as) than Jet/ACE is if you allow the server to do the heavy lifting. Therefore, you MUST in all cases, use queries with criteria to limit the rows returned from the server. That means that your forms cannot be bound to queries with no criteria or directly to tables where you download ALL rows from ALL tables in the join and do the filtering locally. So, think through the application and think about which main forms return more than a single record. Those are where you would start. Sometimes, if the criteria is extensive or complicated, I build search forms and construct SQL strings rather than using saved querydefs. I run a count query to determine how many rows will be returned, then if it is just one row, I pass the SQL String to the single record form and open the selected record directly. If the query returns n records, I pass the SQL string to a not-updateable list form and display that. The user can then do additional filtering or just double click to open the single record form. If n is larger than some number (I decide individually what is rational), I either tell the user to change his criteria or I just give him a warning that the list form might be slow.

I have been doing this for 25+ years with SQL Server, Oracle, DB2, and at least a half dozen other RDBMSs. I always use bound forms. I rarely use pass-through queries, and even rarer is my use of stored procedures; SPs are reserved for the most complex reports. I do, however, use Views to improve join performance, and since the Views are updateable, this doesn't interfere with the use of bound forms. Many of my applications include tables with more than a million rows, so they are not tiny.
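The search-form workflow described above (count first, then route the SQL string to the right form) could be sketched roughly like this; the form names, threshold, and helper are placeholders for illustration, not the actual application:

```python
def route_search(criteria_sql: str, count: int, max_list_rows: int = 500) -> str:
    """Decide which form receives the constructed SQL string.

    `criteria_sql` would be built from the search form's controls;
    `count` is the result of a cheap SELECT COUNT(*) with the same criteria.
    """
    if count == 1:
        return "single_record_form"   # open the one matching record directly
    if count <= max_list_rows:
        return "list_form"            # read-only list; double-click to drill in
    return "warn_user"                # ask the user for narrower criteria first

# Hypothetical criteria strings, just to show the three routes.
print(route_search("CustomerID = 42", 1))
print(route_search("City = 'Lodz'", 120))
print(route_search("Amount > 0", 1_000_000))
```

The point of the count query is that it is cheap for the server to answer, so the expensive row-returning query only runs when the result set is known to be small.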
 

viperlodz

New member
Local time
Today, 08:31
Joined
Apr 11, 2022
Messages
12
Why? Have you broken up data into multiple tables that should be in a single table?

The only way that converting to SQL Server will ever be faster (or even as good as) than Jet/ACE is if you allow the server to do the heavy lifting. Therefore, you MUST in all cases, use queries with criteria to limit the rows returned from the server. That means that your forms cannot be bound to queries with no criteria or directly to tables where you download ALL rows from ALL tables in the join and do the filtering locally. So, think through the application and think about which main forms return more than a single record. Those are where you would start. Sometimes, if the criteria is extensive or complicated, I build search forms and construct SQL strings rather than using saved querydefs. I run a count query to determine how many rows will be returned, then if it is just one row, I pass the SQL String to the single record form and open the selected record directly. If the query returns n records, I pass the SQL string to a not-updateable list form and display that. The user can then do additional filtering or just double click to open the single record form. If n is larger than some number (I decide individually what is rational), I either tell the user to change his criteria or I just give him a warning that the list form might be slow.

I have been doing this for 25+ years with SQL Server, Oracle, DB2, and at least a half dozen other RDBMSs. I always use bound forms. I rarely use pass-through queries, and even rarer is my use of stored procedures; SPs are reserved for the most complex reports. I do, however, use Views to improve join performance, and since the Views are updateable, this doesn't interfere with the use of bound forms. Many of my applications include tables with more than a million rows, so they are not tiny.
It's not possible to store it in one table. There are a lot of tables where the data is very important, and I must pull information from those tables into one report. I have made a lot of improvements, but it's hard because there is a lot of important data in one report. This system is like bank credits, where there is a lot of weekly data with information about customer payments, payment history, and interest notes.
I made a mistake with the number 30. I checked some data, and the maximum is about 10 tables in one report.

Thanks for the information. For now I will implement the iSCSI disk mounted on Windows Server and shared from there to the others. It will take me time to learn how to move everything to SQL Server and create the views.

I'd like to add thanks to everyone. I tried to find help on another forum about 3-4 years ago, but I didn't get any good advice. Here I got more in 2 days.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 02:31
Joined
Feb 19, 2002
Messages
36,352
So far you didn't convince me that the tables are properly normalized. Can you post a picture of the table schema?
 

viperlodz

New member
Local time
Today, 08:31
Joined
Apr 11, 2022
Messages
12
So far you didn't convince me that the tables are properly normalized. Can you post a picture of the table schema?
I think I have some problems because I created it about 10 years ago as a temporary program, but it is still in use now. I have created a lot of new functionality, and there have been a lot of changes in the company, so I know about some of the problems, but without writing a whole new program I don't have an idea how to optimize it.

I can send all the tables, but it will be hard to understand them because of the Polish language.
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 02:31
Joined
Feb 19, 2002
Messages
36,352
OK but if you are breaking up the tables by date range or transaction type, that would be a problem.
 
