Client / Server design in a 100MB database.

ions

Dear MS Access Expert.

I am preparing for my upcoming MS Access development project. I have been researching Client / Server design recently.

My question is how aggressively should I push the Client / Server design in my upcoming project which will initially have 6 concurrent Users and 50MB of data.

I expect the database to grow by 10 MB max / year and 1 additional concurrent user / year up to 12 concurrent users max.

My backend will be .MDB (File Server) NOT SQL Server.

I have read that the best Client / Server design keeps the "wire cold". Ultimately this means binding the form's recordsource to one record.
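By way of illustration, "binding to one record" usually just means filtering the recordsource down to a single key value so only that row crosses the wire; a minimal sketch (form, table, and field names here are hypothetical):

```vba
' Open the detail form on one record only, so just that row is fetched.
' frmOrder, tblOrders, OrderID and lstOrders are hypothetical names.
Private Sub cmdOpenOrder_Click()
    DoCmd.OpenForm "frmOrder", _
        WhereCondition:="OrderID = " & Me.lstOrders.Value
End Sub

' Or set the recordsource directly when the form opens:
Private Sub Form_Open(Cancel As Integer)
    Me.RecordSource = "SELECT * FROM tblOrders WHERE OrderID = " & Me.OpenArgs
End Sub
```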

Now, I could give my forms tiny recordsources, but here is what bothers me: with the computer having 2 GB of RAM, won't the entire database always be in RAM anyway?

I discussed my earlier designs (not client / server) with end users and they don't experience any slowdown on Continuous Forms with 5000 records. In fact, they feel safe when they can scroll all the way back into the past using their mouse scroll wheel.

I don't foresee my upcoming development project needing SQL Server, since the database size will peak at about 150 MB with 12 concurrent users. Also, in 5 years, which is when the database will start to approach its maximum load, computers will be faster and RAM much larger.

What do you think? How much consideration should I give to Client / Server given my database specifications? In my experience, end users like continuous forms, as they let them see a broader picture of their business and spare them from always having to enter criteria.

Thanks for your feedback.

Peter.
 
You can start out with an Access back end. If the need arises, upgrade the back end to SQL Server and still use the same Access front end.
 
In my experience, Access front and back ends will not have a problem with this size of data and number of users. In some places you may need to design carefully to minimize the data moving over the network, but never to the point of single records; just a date subrange, say.
 
Gemma,

Suppose I pull a query over the wire that is 2 MB of data and acts as the recordsource for a continuous form. I believe that Access forms don't fetch all the data at once; they optimize the fetches according to the way the user traverses the form. (Please correct me if I am wrong here.) However, eventually all 2 MB worth of records should make it into the user's RAM. From that point on, the user can fetch data from RAM. (Please correct me if I am wrong here.)

However, suppose there is a concurrent user also updating records within that 2 MB worth of records. Will Access fetch only the updated records, or will the entire 2 MB be retrieved over the wire? (Suppose I execute Myform.Requery to requery the recordsource of the continuous form.)

If the entire 2 MB of data is retrieved, it is worth going with more of a client / server approach. If only the concurrent user's updated records pass over the wire, then I can go with larger recordsources for forms, since most of the fetches will be from RAM.

Thanks for your feedback.

Peter.
 
Peter,

Even if you use a client/server design, your programming logic can still cause poor performance. It is also true that with the proper programming logic, you can have an Access back end with 100,000 records and get great performance.

Your programming skills can also be a huge factor.

I would recommend starting with this:

Microsoft Access Performance FAQ

Also search MSDN for Jet optimization. Jet is the actual database engine used by Access for MDBs. Access really is just a front end to Jet.
 
Ions,

Your understanding is basically correct, with the caveat that, as HiTechCoach pointed out, it depends whether you write queries that don't force Jet to perform unnecessary evaluation.

With regard to concurrent users, each user maintains their own cache of rows, which is periodically refreshed every 30 seconds (by default; this can be changed for ODBC, though I'm not sure about linked Jet tables) even if the user does nothing, or refreshed when the user returns to a row they previously viewed.
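For what it's worth, the refresh intervals can also be adjusted from VBA as well as in Tools > Options; a sketch, assuming the standard Access option strings (verify the exact names on your version):

```vba
' Tighten the refresh intervals (in seconds). The option names below are
' the strings Access exposes for the Tools > Options settings; check them
' on your version before relying on this.
Application.SetOption "Refresh Interval (Sec)", 15        ' linked Jet tables
Application.SetOption "ODBC Refresh Interval (Sec)", 15   ' ODBC-linked tables
```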

Do understand that Access needs to download the full set of primary keys in order to make caching possible, and not restricting the recordsource with a WHERE condition, or at least an arbitrary TOP value, forces Access to download the entire set of keys. For a large table, that would be expensive, and for that reason it's always a good idea to have some restriction, such as showing only records for the current fiscal year, records for active clients, or orders placed within the last 3 months, with an option to browse the "archives", which are actually the same table but with a more lenient WHERE condition.
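A restricted recordsource along those lines might look like the following (table and field names are hypothetical, and the date cutoff would normally be computed, not hard-coded):

```vba
' Keep the wire cool: restrict the recordsource instead of binding the
' whole table. tblOrders / OrderDate are hypothetical names.
Private Sub Form_Load()
    ' Only this fiscal year's orders...
    Me.RecordSource = "SELECT * FROM tblOrders WHERE OrderDate >= #1/1/2010#"
    ' ...or cap the rowcount with Jet's TOP:
    ' Me.RecordSource = "SELECT TOP 200 * FROM tblOrders ORDER BY OrderDate DESC"
End Sub
```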
 
But note that 2 MB over a 100 Mb/s LAN does not generally cause a performance problem.

But 2 MB over a WAN, or over a wireless connection, may present a problem, which is why the recommendation is not to use Access with a Jet back end on anything other than a LAN.
 
ions,

Jet is a file-sharing system, not a client-server DBMS. Certain optimisations are possible but all the processing happens on the client machine and therefore all of the data necessary to complete that processing has to come over the wire. As others have said, in your case the data and number of users is tiny so that may not be a problem in terms of performance, but you should understand that it is not a client-server architecture.

Client-server DBMS is so powerful, cheap and easy to use these days that there is very little reason not to use it for any database shared on a network, whatever the front end may be. Why are you not considering SQL Server, MySQL or some other server DBMS?
 
Thanks Dportas

I am in the process of learning SQL Server and will most likely use ODBC links to SQL Server 2008 Express. The only restriction on SQL Server 2008 Express that bothers me is the 1 GB RAM limit, but the price is right: $0. I think the full version is about $6K, which is over budget.

However, if the project needs to be completed quickly I will stick with JET and DAO because I know it very well.

...but yes I would like to go to SQL Server if the project warrants it.
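If you do move to SQL Server Express later, the same Access front end can be pointed at it by relinking the tables; a minimal DSN-less sketch using DAO (server, database, and table names are hypothetical):

```vba
' Relink one table DSN-less to a SQL Server Express instance.
' MYSERVER\SQLEXPRESS, MyDb and tblOrders are hypothetical names.
Public Sub RelinkToSqlServer()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    Set tdf = db.TableDefs("tblOrders")
    tdf.Connect = "ODBC;Driver={SQL Server};" & _
                  "Server=MYSERVER\SQLEXPRESS;Database=MyDb;Trusted_Connection=Yes;"
    tdf.RefreshLink   ' re-establish the link against the new back end
End Sub
```

In practice you would loop over all linked TableDefs rather than relinking one table at a time.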
 
I need to take off my Access hat and put on my System Administrator's hat.

You do not necessarily pull all of a table into memory eventually, no matter what you think you are doing on Windows, as long as anything else is going on inside the machine in question. It is a matter of the paging dynamics of the particular platform, including the amount of scratchpad memory you set up in your swap file. Also, depending on the OS version, you might actually see some demand-paged virtual memory. If that happens, your data isn't in RAM; it is in the page file on disk, which is faster than dragging it over a network but slower than if it were in RAM.

Further, if Access really did this on a shared file, I'd have to shoot the developers who allowed it. Keeping everything in memory CAN'T happen unless you have chosen to allow inconsistent recordset operations, which risks data corruption in the long run. You WANT to keep track of who is accessing what parts of memory and who has locks out on what records. Using a cache is actually counterproductive in this scenario. You have to strike a balance between speed and data stability to get it right.

This is where having a "smart" backend server like any of many SQL server variants becomes an excellent choice. The backend servers are built to manage this balancing act and hide most of the dirty laundry from you. And the more concurrent users, the better off you will be with a dedicated back end SQL product.
 
