Access with Citrix

I wrote a front end version control utility that put the front end in a specific folder on the C drive. That's fine for a PC environment, but caused problems in a Terminal Server/Citrix environment. While users theoretically run their own copy, in reality they don't. I modified my utility to create a subfolder based on the logged in user, so now each person truly has their own copy. The quirky issues I was having went away.
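
A rough sketch of the idea (the folder layout, file names and function name here are placeholders, not the actual utility):
Code:
' Copies a master front end into a folder named after the logged-in user.
' Assumes the parent folder C:\MyApp already exists; all paths are illustrative.
Public Function CopyFrontEndForUser() As String
    Dim strUser As String
    Dim strUserFolder As String

    strUser = Environ("USERNAME")                      ' the logged-in user
    strUserFolder = "C:\MyApp\" & strUser

    If Dir(strUserFolder, vbDirectory) = "" Then
        MkDir strUserFolder                            ' give the user their own folder
    End If

    FileCopy "C:\MyApp\Master\FrontEnd.mde", strUserFolder & "\FrontEnd.mde"
    CopyFrontEndForUser = strUserFolder & "\FrontEnd.mde"
End Function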

Paul:

I've deployed an Access application over Terminal Services. It worked just great while they had about 15 users. Once they increased that to 25, it started to crash every day.

I never split the database into an FE/BE application, for the following reason: my understanding is that the split matters because, on a conventional LAN, the FE components add to the network traffic. I reasoned that in a Terminal Services environment this doesn't apply, because all the processing is done on one CPU anyway.

After reading your post, I see that you have had improved performance by splitting the database. Before I go through this effort (i.e. installing an auto-update scheme and then deploying the front end to 25 terminals), I want to make sure that this is a likely solution (or the beginning of a solution) to the crashing problem.

If you believe it will improve performance, can you provide guidance as to how to set up the individual directories to ensure that they are not using the same front end?

Thanks very much

SHADOW
 
This is how we handle our situation. In addition to the Terminal Server (TS), we have a File Server (FS) with a 700GB store, both on a 1000Mbit infrastructure. The data resides on the FS; on the TS, each user has an .mde file in a subdirectory (NewYork1-4, Central1-4 and Roaming1-4). There are 25 users in total, not all on the TS. The default directory in Access for each TS user is Access's default /Documents and Settings/User/... so that they can't overwrite each other's files.

The one thing we did find to be an issue was broadband performance, so we upgraded both the FS and TS ends to an SDSL line; we will do the same in New York because they have poor service.

I should also mention that we have VPNs but not Citrix.

Simon
 
My plan is to create folders within the main folder that contains the database, each with its own front end. There are 8 locations, all numbered, with 3-4 people at each location.

For example, if the file's name is MyDatabase.mdb, I will put the Mydatabase_BE.mdb in the main folder, and create folders 1-8. In each folder will be the mydatabase.mdb front end that links to the tables in the back end. Each user has to change their desktop shortcut from d:/database/Mydatabase.mdb to d:/database/(location code)/mydatabase.mdb, thus only having a few people in each front end instead of having 25 people in the front end.

Can I expect better performance this way rather than having all 25 in the same front end?

SHADOW
 
Last question first: whilst I can't guarantee performance, I suggest using the 8.3 naming convention and compiling the database into an .mde for each user.

User1-25/Network.mde (if it is remote)
Mapped Drive Letter:/Data.mdb

User1-25/Local.mde (if it is local)
D:/Data/Data.mdb

Separate .mde files are a much better approach. For the FE/BE solution, I would suggest having a front form bound to a table within the BE. This way the Data.ldb file is always open. This table can be a dummy table.
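
A rough sketch of that bound-form trick (the form and table names are placeholders): bind a small form to the dummy table in the BE and open it hidden at startup, so the connection, and hence the Data.ldb file, stays open for the whole session.
Code:
' Called from an AutoExec macro or startup routine. frmKeepAlive is assumed
' to be bound to a small linked table (the dummy table) in the back end.
Public Function KeepBackEndOpen()
    DoCmd.OpenForm "frmKeepAlive", WindowMode:=acHidden
End Function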

Have only one data table. Make sure that the Default File Location for each user is different from everyone else's.

You are very much on the right track.

Separate the data into its own file (it always remains an .mdb), then deploy an .mde with everything else (queries, forms, reports, macros and modules), link your tables from the data file into the .mde, and test it.
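
A minimal sketch of the relinking step, assuming the linked tables already exist in the front end (the procedure name and path are illustrative only):
Code:
' Point every linked Jet table in the front end at the given back-end file,
' e.g. RelinkTables "S:\Data.mdb"
Public Sub RelinkTables(strBackEnd As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If tdf.Connect Like ";DATABASE=*" Then    ' only Jet-linked tables
            tdf.Connect = ";DATABASE=" & strBackEnd
            tdf.RefreshLink                       ' re-resolve the link
        End If
    Next tdf
End Sub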

When I first did this it took 12 seconds to do a search that used to be instantaneous. So I designed a Menu Form, created a table called Companies, made a query with this table and bound this query to the Menu Form. Voila! Back to an instantaneous response.

Later I built interfaces that required separate files, and I was concerned that one was going to crash because it was so slow, but I applied the same principle as before: the Interfaces Menu now has a query with the table from the main database and the secondary database, linked together. Voila! Fast indeed!

This binding of the Menu Form may be frowned upon in some quarters, but it works.

Take your time, any problems let me know.

Simon
 
I have a number of Access apps running on Citrix (Presentation Server 4) with no problems. I have a combination of the BE MDB on a separate server and the main MDE on the Citrix server. The servers are running on a 1Gb backbone network and generally there are no delays.

The solution is ideal for those users who work from home and access the apps via Citrix Access Gateway (CAG).

I only have to replace a single file when doing updates to the main MDE so that makes life a lot easier.

As Bob Larsen said, it is not a panacea, but it does work. Luckily I have a good relationship with our internal IT and a good general IT knowledge, which helps.

Good luck with this one Battenburg.

HTH
 
Last question first: whilst I can't guarantee performance, I suggest using the 8.3 naming convention

What is the 8.3 naming convention and why should I use it?

...and compiling the database into an .mde for each user.

Why is an .mde a better choice than an .mdb for the front end? I plan to be updating the front end approximately once a week. Instead of using an auto-updater, all I have to do is copy the front end that I work on and paste it into the 8 folders. This should only take about 40 seconds. Creating an .mde for this purpose is an extra step. If you feel that it will improve performance then I'm happy to do it, but I'm not worried about users modifying the .mdb because of the way it's set up.

User1-25/Network.mde (if it is remote)
Mapped Drive Letter:/Data.mdb

User1-25/Local.mde (if it is local)
D:/Data/Data.mdb

That's approximately how I have done it, except that the folder's name is the ID of the office, so there are 8 folders.

Separate .mde files are a much better approach. For the FE/BE solution, I would suggest having a front form bound to a table within the BE. This way the Data.ldb file is always open. This table can be a dummy table.

Yes, the logon screen where users type in their username and password stays open all the time (but hidden), and I bound this form to a table that has some yes/no setting just for this purpose.

Make sure that the Default File Location for each user is different from everyone else's.

You mean the front end, correct? If so, then I described my scheme above.

You are very much on the right track.

I certainly hope so! The client is flippin' out because of the poor performance...

When I first did this it took 12 seconds to do a search that used to be instantaneous. So I designed a Menu Form, created a table called Companies, made a query with this table and bound this query to the Menu Form. Voila! Back to an instantaneous response.

You are describing the dummy table that's kept open persistently here, right?

This binding of the Menu Form may be frowned upon in some quarters, but it works.

Actually, Microsoft documents this on their support site.

Take your time, any problems let me know.
Simon

I most certainly appreciate your assistance.

SHADOW
 
Simon:

One thing I'm wondering is what the limitations of a decent server are. There is no way that a good server is maxed out with 25 people accessing it, is there...? I suppose that at some point they will have to get a second server to balance the load.

Another thing I'm wondering is why people use Citrix when it costs so much more than Microsoft's Terminal Services and does the same thing?

SHADOW
 
Now, starting from the top: you must appreciate that some of this advice is old, but for some stupid reason it was beneficial that the back-end data store (I should have been clearer) sit at the top level of the mapped directory and comply with the 8.3 naming convention, so we map D:/Databases/Data.mdb as S:/Data.mdb rather than S:/Databases/Data.mdb.

The .mde is a compiled version, so the forms and reports are ready and don't have to be constructed before they load. This has nothing to do with change control, just performance and reliability. The drawback is that users can't change forms, reports or modules, but I work on a master version of the FE, so if I make a ****-up it doesn't impact anyone else until I release the new front ends.

As for how you structure the front ends, I don't think it matters, forgive me for being a little vague. The reason I suggested you have separate subdirectories for each user is that if you publish a report to Word it saves the file, so it is better that each user has their own subdirectory to store these saved documents. The .mde can have the same name for all the users; only the path changes under this regime.

The binding of the Form you have got spot on.

The other areas to help performance are:

If you use AV software, get it to ignore .mde, .mdb and .ldb files.
Check the broadband performance at each site and compare it with what you expect:

A 2000Kbit SDSL line should run at about 1800Kbps allowing for overhead, roughly 200-220KB per second.

Check the PCs and server for spyware; Spybot and Ad-Aware are two we use. Blasted tracking cookies can degrade the broadband performance as well as each machine. I gained a 25% improvement in broadband speed at one site by killing off the tracking cookies.

Check the Registry of each PC:

HKLM\Software\Microsoft\Windows\CurrentVersion\Run

and see if there is anything there that should not be, but be careful. QuickTime was in here and, together with the tracking cookies, it meant that I had issues with the persistence of the mapped drives.

My absolute favourite is bloody large .pst files in Outlook (over 20,000 items) or having too many applications open. XP gets a little pissed.

Check the memory of the Terminal Server and/or File Server. We use a 2.8GHz Xeon IBM server with 1GB RAM and a three-drive RAID configuration on Windows 2003 Standard Edition. The file server is again Xeon-based with 2GB of RAM and 6 x 300GB drives; we have 15,000 images and need the storage.

It has been suggested that Terminal Services does not sit comfortably on the main server and that the two should be separated out.

Compact and repair the data store periodically, once it is backed up. Check the PCs to see what they are doing and check their specifications.

I nearly considered Citrix because I think either Windows Server 2000 or 2003 only supported 256 colours, and we are an art gallery. But Citrix was expensive and its compression ability didn't really apply to low-res images.

In summary, get your FE/BE setup working; if there are still performance issues, it's generally worth going through your environment as above.

Sorry for the length of this response, but it has taken time to get our environment working smoothly and efficiently. It takes work, but it is worth it, although at times it's a right ole pain in the ass.

Simon
 
The .mde is a compiled version, so the forms and reports are ready and don't have to be constructed before they load. This has nothing to do with change control, just performance and reliability.

Would using an MDB instead of an MDE cause the database to fail...?

The reason I suggested you have separate subdirectories for each user is that if you publish a report to Word it saves the file, so it is better that each user has their own subdirectory to store these saved documents. The .mde can have the same name for all the users; only the path changes under this regime.

In my case, I'm using Terminal Services. Therefore you need to provide a different path for each user or they will all be using the same FE. My belief is that the system is crashing because I have 25 people using the same FE, and by splitting them I'm HOPING to solve this problem. The reason I started posting here was to see if others here agree.

Check the Registry of each PC:

HKLM\Software\Microsoft\Windows\CurrentVersion\Run

What is that and what am I checking for? As far as I know, this server was set up JUST for this application.

Check the memory of the Terminal Server and/or File Server. We use a 2.8GHz Xeon IBM server with 1GB RAM and a three-drive RAID configuration on Windows 2003 Standard Edition.

I would have to email the company that handles the implementation of the system (who knows ZERO about MS Access, that's why I have to do all the research) to find the specs of the server.

It has been suggested that Terminal Services does not sit comfortably on the main server and that the two should be separated out.

What "two" are you referring to?

Thanks

SHADOW
 
Just wanted to chime in with what I decided to do.

I needed to provide connectivity to remote offices, and we do have terminal servers available, but after looking at Usenet and various other sites I decided that it would simply be too much hassle. So I decided to use MySQL as a back end: it uses the TCP/IP protocol, so there's no mucking around with software that may or may not work. For secure connections, I'll just give every user plink, which will be called by Access using shell commands to establish an SSH connection to the server.
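
A rough sketch of the plink idea (the plink path, host and account are placeholders): Access shells out to plink to open a local port forward, then the MySQL link points at 127.0.0.1.
Code:
' Opens an SSH tunnel with plink so Access can reach MySQL via 127.0.0.1:3306.
Public Sub OpenMySqlTunnel()
    Dim strCmd As String
    ' -N = no remote shell, -L = forward local port 3306 to the server's MySQL port
    strCmd = """C:\Tools\plink.exe"" -N -L 3306:localhost:3306 " & _
             "tunneluser@mysqlhost.example.com"
    Shell strCmd, vbHide
End Sub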

Total costs for acquiring MySQL server and PuTTY: $0.

:)
 
So I decided to use MySQL as a back end: it uses the TCP/IP protocol, so there's no mucking around with software that may or may not work. For secure connections, I'll just give every user plink, which will be called by Access using shell commands to establish an SSH connection to the server.

Total costs for acquiring MySQL server and PuTTY: $0.

:)

Actually, you pay MySQL $600 US to distribute an application that uses MySQL, but that is still a good deal.

The thing is, it's really not just a matter of a few clicks and you get an Access database all nicely back-ended into MySQL. You still need to split the forms and assign connections, and the MySQL database isn't completely compatible with all the queries that exist in Access. You are looking at quite a development project: a lot of work and a lot of research, not to mention the configuration. Another viable solution would be to upsize to SQL Server Express, which REALLY is free and supposedly more compatible with Access front ends. I probably will go this route eventually, but for a system with 25 users, I think that the TS solution is less work (although far from hassle-free!).

Thanks

SHADOW
 
Oh, definitely. I didn't say it was an easy solution, just a free solution, and it happens that I know more about MySQL than I do about SQL Server.

I think the big part here is that no matter what, you need to build a base Access database that is correctly configured for whatever back end you use before you can develop anything on it. Once you've done that, it's pretty easy to do whatever you need. Another aspect I like is that I get a bit more control over what is going on by reading the logs, which tell me more about Access than Access would've told me itself (and I'm sure you can do the same for SQL Server, too).

As for that $600 to distribute MySQL... that's news to me. You can download the server for free and interface with it from Access. Maybe you were thinking of the embedded server, which is an entirely different solution?
 
Oh, definitely. I didn't say it was an easy solution, just a free solution, and it happens that I know more about MySQL than I do about SQL Server.

Fair enough. There is just such an enormous amount of information available on SQL Server that I've read a lot on the topic.

Just a simple question: how DO you indicate to the Access front end where the MySQL back end is located if it's on a remote server? Is there somewhere you can enter an IP, or do you set up a DSN connection using MyODBC (entering the IP address that way), in which case you have to enter the DSN on each computer...?

I think the big part here is that no matter what, you need to build a base Access database that is correctly configured for whatever back end you use before you can develop anything on it.

True, but for now I think that just leaving the .mdb back end will be fine if the front ends are deployed correctly and the TS is configured properly. Many people report having gone this route with excellent results.

As for that $600 to distribute MySQL... that's news to me. You can download the server for free and interface with it from Access. Maybe you were thinking of the embedded server, which is an entirely different solution?

I called MySQL in 2005 when I was looking into this and they told me that you can download it and use it for free for yourself, but if you distribute it to a client as an application, you have to pay. Maybe things have changed, OR maybe you are using it for your own (or your company's own) usage, which is not my case.

SHADOW
 
I use a DSN-less connection with MyODBC.

Here's an example:
Code:
strConnection = "ODBC;DRIVER={MySQL ODBC 3.51 Driver};" & _
                 "Server=127.0.0.1;" & _
                 "Port=3306;" & _
                 "Option=" & opt & ";" & _
                 "Stmt=;" & _
                 "Database=world;" & _
                 "Uid=root;" & _
                 "Pwd=root"

I used a modified version of Doug J Steele's DSN-less connection code, using that connection string above.

As you can see, the connection string gives the IP address of the server, which MyODBC will reach via TCP/IP.
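
This isn't Doug's code, just a minimal sketch of the same idea (the local and remote table names are placeholders): the connection string becomes the Connect property of a linked table.
Code:
' Create a DSN-less linked table to a MySQL table using the string above.
Public Sub LinkMySqlTable(strConnection As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    Set tdf = db.CreateTableDef("tblCity")        ' local name in Access
    tdf.Connect = strConnection                   ' the DSN-less connection string
    tdf.SourceTableName = "city"                  ' table name on the MySQL server
    db.TableDefs.Append tdf
    db.TableDefs.Refresh
End Sub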

I suspect they may have been thinking about this differently: when you asked them about distributing it to a client, they thought you meant embedding the MySQL server in each front end, which isn't necessary (and would be totally extraneous) here. There is no need to distribute MySQL; the client only needs to have MyODBC (which is free, too) on their computer, and with the code above you can do it all from Access. So that's a total of two programs to give to my client for a secure connection: MyODBC and PuTTY (they already have MS Office with Access included, so that cost is already sunk, so to speak). I also believe you can configure MySQL to only accept connections from the LAN, so if you don't need or want remote access, you really don't need PuTTY for a secure connection.
 
Doug J Steele's DSN-less connection code, using that connection string above.

As you can see, the connection string gives the IP address of the server, which MyODBC will reach via TCP/IP.

So each client computer does have to have MyODBC installed, but does not need a DSN set up?

(The reason I'm asking is that any non-Jet database has to be set up the same way. When I get there, I'll have to read up on DSN-less connections.)

Another thing I should point out is that almost all of my forms are bound to tables. To convert to MySQL or SQL Server, I have to change all of them to connect via DAO or ADO. This is a task that I want to avoid if I can just split the database...


SHADOW
 
As a matter of fact, my forms are bound to MySQL tables. DSN-less connected tables are still loaded as linked tables and can be treated like any other linked tables. They just don't connect using a DSN.

That said, when I add a table, I use a DSN to link the table the first time, then run that code to update the table's connection so it's ready for distribution.

Just so you know, anything that can be linked via a DSN can also be linked without one. A DSN is supposed to make everything convenient, but people have reported performance gains without using a DSN to resolve the link, while others say it's better with a DSN. I opted to take out the middleman, so to speak.
 
As a matter of fact, my forms are bound to MySQL tables. DSN-less connected tables are still loaded as linked tables and can be treated like any other linked tables. They just don't connect using a DSN.

Oh...so what is the connection string for and where/how is it used? I thought it was to draw the data into an unbound form....

SHADOW
 
A connection string simply tells Access where to get the data from. You have probably seen how ADODB creates a connection but then needs SQL and an Execute command. It's basically similar: the connection tells Access where the data is, but you need the SQL/Execute to return the data.
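
A small illustration of that pattern (the connection string, table and column names are placeholders, and a reference to the ADO library is assumed):
Code:
Public Sub ShowAdoPattern()
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset

    ' The connection only says where the data lives...
    Set cn = New ADODB.Connection
    cn.Open "DRIVER={MySQL ODBC 3.51 Driver};Server=127.0.0.1;" & _
            "Database=world;Uid=root;Pwd=root"

    ' ...the SQL plus Execute is what actually returns the data.
    Set rs = cn.Execute("SELECT Name FROM country WHERE Code = 'USA'")
    Debug.Print rs.Fields("Name").Value

    rs.Close
    cn.Close
End Sub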
 
A connection string simply tells Access where to get the data from. You have probably seen how ADODB creates a connection but then needs SQL and an Execute command. It's basically similar: the connection tells Access where the data is, but you need the SQL/Execute to return the data.

If the tables are linked, why and when would you need to connect...?

Or at least where can I read about this technique?

SHADOW
 
Here's a good list of papers on Jet & ODBC.

This should be the first thing to read: Whitepaper on Jet and ODBC Connectivity.

That paper was for Jet 3.0, but I understand it is still valid for 4.0 (and I have not been able to find anything more recent anyhow). You can also download that paper and print it out if you want, but I can't find that link...

The whitepaper will tell you why/how Jet behaves with external data.
 
