Server in the Cloud

hfsitumo2001

If our server is in the cloud, we will put the backend in the cloud. Will our operation of this database be much slower compared with a local server?
 
Extremely slow. Unusable probably.
Terminal Services or Citrix would be much better if you need to support remote users.

BTW, make sure your cloud contract doesn't allow your provider to cancel your account with 24 hours' notice if they decide they don't want to do business with you anymore.
 
If you are lucky, the cloud server will support the SMB protocol. Without that, forget about Access entirely. It will depend on the services you bought, but most frequently, cloud services only offer whole-file transfer protocols such as (but not limited to) FTP. The strength of Access is that it uses a relatively cheap protocol (Server Message Block) to exchange PARTS of a file. It is the same protocol Windows uses to exchange data with printers and other "hard" input/output devices.

Before you commit to the cloud, verify with their tech support that they even support SMB. If they do not, you cannot put ANYTHING there except backup copies of things (from the Access viewpoint).
 
Server Message Block. (It was in parentheses in the message you questioned.)

Basically, there are now three versions of SMB on the internet. It is something that Windows supplies to those programs that ask to use it, and it is up to the Windows network drivers to negotiate with the partner node or device as to the highest level they can both support - I, II, or III - with more data security and certain other features added as you go up the versions. Usually, though, you don't actually care which level they select. The details of this "asking to use it" are a bit complex, and as a user of Access you never actually see it because Access does that behind the scenes.

The BIG feature of SMB vs. other file transfer protocols is that SMB can selectively transfer selected blocks of a file without having to send the whole thing at once. This is exactly what Access wants to do. You only have to read the blocks containing specific tables rather than the whole file. If an update occurs, your client machine only has to send the updated blocks back rather than overwrite the whole file. As you can guess, for a large file the idea of "whole file" transfers would very quickly load down the network. There are other pitfalls with the "whole file" approach as well, having to do with shared usage.
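
To make that concrete, here is a minimal VBA sketch of a block-level read; the UNC path and block layout are made up for illustration. Over an SMB share, only the requested byte range crosses the wire, whereas a whole-file protocol would have to copy the entire file first.

Code:
' Hypothetical illustration: read ONE 4096-byte block from a file on a share.
' Over SMB, only this byte range travels across the network - the same kind
' of partial read Access issues against a shared backend file.
Sub ReadOneBlock()
    Dim buf(1 To 4096) As Byte
    Dim f As Integer
    f = FreeFile
    Open "\\server\share\data.bin" For Binary Access Read As #f  ' placeholder path
    Get #f, 8193, buf    ' seek to byte 8193 and read just 4096 bytes
    Close #f
End Sub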
 
If you put the backend in Azure, it will work, but only if you have designed it for a remote backend in the first place.
 
If you put the backend in Azure, it will work, but only if you have designed it for a remote backend in the first place.
How is that done, Minty? Asking for a friend...
 
@NauticalGent That's quite a broad question, and I've only had 1 sip of coffee so far.
Just for your friend...

Some of the basics are as follows (and to be honest, a few of these apply to any good design that reduces network traffic):
  • Security - Azure relies on a whitelist of IP addresses. If you aren't connecting from your office with a fixed IP, you need to manage that.
  • Restrict the amount of data you pull into a form. By default, open a form with NO data; the user then selects a single record or a restricted data set to work with (see the first sketch after this list).
  • Keep local table versions of the more "static" lookup tables, e.g. countries, currencies, users, etc. If you use these in combos (which you will), they will populate instantly and not drag data over the network. To ensure they are up to date, reload them on startup (second sketch below).
  • Use views and stored procedures to do the "heavy lifting" of complex queries and data-manipulation processes. The server will deal with all of this and then return the results, normally much, much quicker than a locally run process.
  • This is especially true if you have a continuous form with a list of "things" that is only used for viewing: you can bring all the data together in a view, including the joined lookup values (customer names, employee names, etc.), which again saves dragging data across the network.
  • If at all possible, don't use locally written VBA functions in queries. SQL Server would have to return all the records/data so the function could do its thing in Access, which is really inefficient. Write the equivalent in a SQL function if required, or leverage some of the more powerful built-in functions to achieve the same thing (String_Agg() is a prime example that does a ConcatRelated-style aggregation very efficiently; third sketch below).
  • There are probably another hundred things I've forgotten.
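
For the "open with NO data" point, here is a minimal sketch; the form, table, and control names (tblOrders, txtOrderID, cmdLoad) are made up for illustration:

Code:
' Start the form empty, then load only the rows the user asks for.
Private Sub Form_Open(Cancel As Integer)
    ' WHERE 1=0 returns the field structure but no rows
    Me.RecordSource = "SELECT * FROM tblOrders WHERE 1=0"
End Sub

Private Sub cmdLoad_Click()
    ' Pull just the single order the user typed in
    Me.RecordSource = "SELECT * FROM tblOrders WHERE OrderID = " & _
                      Val(Nz(Me.txtOrderID, "0"))
End Sub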
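
For the local lookup tables, one common pattern is to empty and refill the local copies at startup; tblCountriesLocal and tblCountriesSvr are placeholder names for the local copy and the linked server table:

Code:
' Refresh a local lookup table from its linked server-side counterpart.
Sub ReloadLookups()
    With CurrentDb
        .Execute "DELETE FROM tblCountriesLocal", dbFailOnError
        .Execute "INSERT INTO tblCountriesLocal " & _
                 "SELECT * FROM tblCountriesSvr", dbFailOnError
    End With
End Sub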
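
And for the server-side heavy lifting and String_Agg(), a pass-through query keeps all the work on SQL Server, so only the finished result comes back. The DSN, table, and column names below are assumptions:

Code:
' Run T-SQL server-side via a temporary pass-through QueryDef.
' STRING_AGG (SQL Server 2017+) does the ConcatRelated-style concatenation
' on the server, so only one row per customer crosses the network.
Sub RunServerSide()
    Dim qdf As DAO.QueryDef
    Dim rs As DAO.Recordset

    Set qdf = CurrentDb.CreateQueryDef("")   ' unsaved, temporary QueryDef
    qdf.Connect = "ODBC;DSN=MyAzureDsn;"     ' placeholder connection string
    qdf.SQL = "SELECT CustomerID, STRING_AGG(OrderRef, ', ') AS Orders " & _
              "FROM dbo.Orders GROUP BY CustomerID;"
    qdf.ReturnsRecords = True

    Set rs = qdf.OpenRecordset(dbOpenSnapshot)
    If Not rs.EOF Then Debug.Print rs!CustomerID, rs!Orders   ' first row only
    rs.Close
End Sub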

That concludes part 1.
Once I remember the other things, or people with more knowledge than me chirp in, I can update it and maybe make it a sticky somewhere.
 
Not bad for a single sip of coffee, I am... my friend will be impressed!

Some of the basics are as follows (and to be honest, a few of these apply to any good design that reduces network traffic):
Exactly. I was looking for some arcane, esoteric method that applied solely to the Azure Cloud environment, but as usual, the solution is anticlimactic.

My current setup is about 25 users, the vast majority of them working from home using a Citrix VDI setup with either thin- or thick-client laptops. The Access application could use a lot of tweaking, and I am slowly working on that. I was hired primarily to write user and tech manuals, and I have not completed that task yet. The current DBs were developed and are maintained by two other individuals located in Pennsylvania and Alabama (I am in Virginia).

We were just informed that we would be migrating to an Azure Cloud/DaaS platform, and I pressed the panic button because the consensus here is that Access will not work in "the cloud".

The good news is that one of the big-wigs in the J6 (IT) department is knowledgeable about RDBMSs, including Access, and needed no convincing that we needed a different plan. He is willing to help us establish two instances on the existing SQL Server (one for real data, the other for test data) and all the happy-go-lucky *stuff* that comes with setting that up.

What I do not know is:
- Where will, or more importantly, where should, the FEs reside? Right now everyone has their own folder on the network with their own FE copy. When we make the move, if the FE resides on cloud storage, isn't that what we are trying to avoid?
- Azure seems to have its own SQL Server. I am not sure if the J6 is talking about using that to hold the BE or if he was talking about using the actual physical server located in Virginia. I am certain it's the latter, but even so, if the FE itself is "in the cloud", am I not only solving half the problem?

There are a lot of other things I am not aware of - not knowing what you do not know is scary and FRUSTRATING.

With all that, if anyone here wants to chime in, please do so...
 
