7 Gigabits Per Second WAN Bottlenecks

BlueSpruce

Well-known member · Joined Jul 18, 2025 · Messages 1,096
I have a fiber-optic modem rated at 7 gigabits per second, connected by a Cat 8 double-shielded Ethernet cable to a 10 Gbps NIC on my desktop. I am getting ~1 gigabyte per second (GB/s) transfer speed to a cloud backup share. I am wondering if I can reliably link an Access FE to a remote db if my WAN connection can maintain at least 1 GB/s. @Albert D. Kallal should be able to shed light on this.

2025-11-22,1702GMT.jpeg

RemoteShare.PNG
 
Quit complaining, most of us only get a gig to the house, and I'm lucky if I can get 800mb to my PC, and that is with a COAX/MoCA network. Truth be told, it's about 600mb.
 
Quit complaining, most of us only get a gig to the house, and I'm lucky if I can get 800mb to my PC, and that is with a COAX/MoCA network. Truth be told, it's about 600mb.
Are you confusing bits and bytes?

Do y'all think I could reliably link an Access FE to a remote db if my network connection can maintain at least 1 gigabyte per second?
 
If the question is directed at me, I am referring to megabytes, MB

A speed of 1 gigabit per second (1 Gbps) equals 125 megabytes per second (125 MB/s), because you divide the bits by 8. How much bandwidth does an Access FE need to reliably link to a remote db? I have heard an Access FE needs 1 Gbps on a LAN to work reliably with a BE on a server.
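That divide-by-8 arithmetic is easy to sanity-check. A minimal sketch (plain Python, nothing Access-specific, using decimal units as ISPs advertise them):

```python
def bits_to_bytes_per_sec(bits_per_sec: float) -> float:
    """Convert a link rate in bits/sec to bytes/sec (8 bits per byte)."""
    return bits_per_sec / 8

# 1 Gbps expressed in MB/s
print(bits_to_bytes_per_sec(1_000_000_000) / 1_000_000)  # 125.0

# The 7 Gbps plan from post #1, in MB/s
print(bits_to_bytes_per_sec(7_000_000_000) / 1_000_000)  # 875.0
```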
 
I use VDI/Citrix on a thick client for my primary job (maintaining an Access app) with my puny wired 600mb service; the performance is pretty impressive considering I am in Virginia and the server is in Ohio.
 
Doesn't it boil down to latency, not speed?

Latency, as in internet traffic lag?
Would a P2P connection minimize latency?
I predict that with the advancement of technology, we will soon be able to reliably link Access FEs with remote WAN DBs.
I can start testing with my current setup to see how reliable linking to a remote db is.
 
I have successfully run a native Access FE/BE setup for the U.S. Navy over 1 Gbit Ethernet, for 15-20 concurrent users. I have PAINFULLY run an Access FE/BE from New Orleans to Norfolk and back again (it's a LONG story) where at least one of the intermediate hops was 10 Mbit Ethernet. It ran. (Actually, no... it didn't run. It didn't even walk. It C-R-A-W-L-E-D. But it operated.) The problem was never that it couldn't operate. It was just that it couldn't operate quickly.

We switched from the innermost (and most secure) network to an intermediate level that no longer utilized the Norfolk connection. That's when I gave up the option to send encrypted e-mails from my app: I could still send e-mails, but the server that took part in certificate validation couldn't be reached. After that, the speed problem went away. However, that was an in-building network.

How far is remote? More precisely, how RELIABLE is the remote site's connection? If reliability is measured in percent and the instability is above 1%, the odds say your network will fail about once every hundred operations.
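A hedged way to put numbers on that warning: if each operation independently fails with probability p, a session of n operations completes cleanly with probability (1 - p)^n. The independence assumption and the sample figures below are illustrative, not from the post:

```python
def clean_session_probability(p_fail: float, n_ops: int) -> float:
    """Probability that n independent operations all succeed."""
    return (1 - p_fail) ** n_ops

# At 1% instability, a 100-operation session finishes without a single
# failure only about 37% of the time.
print(round(clean_session_probability(0.01, 100), 2))  # 0.37
```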

The question of latency in networks is simply one of how many hops you need to make in the standard store-and-forward algorithm. The speed of the network DOES figure into that, but what you really care about with networks isn't latency - it is drops, collisions, outages, that sort of thing. As long as the message makes it through, unless you pull up the network stats, you can't tell the difference between a network with lots of collisions and retransmissions and a network with just a slow transmission rate. And you don't care. It is the packet loss rate that eats your socks.
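One rough way to quantify how packet loss "eats your socks" is the Mathis approximation for steady-state TCP throughput, roughly MSS / (RTT * sqrt(p)). It is a rule-of-thumb model for a single TCP flow, and the segment size, RTT, and loss rates below are illustrative values, not measurements from this thread:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_sec: float, loss_rate: float) -> float:
    """Approximate upper bound on single-flow TCP throughput (Mathis model)."""
    return (mss_bytes * 8) / (rtt_sec * math.sqrt(loss_rate))

# 1460-byte segments over a 40 ms round trip: even a 7 Gbps link
# cannot push one TCP flow past these ceilings.
for p in (0.0001, 0.001, 0.01):
    mbps = mathis_throughput_bps(1460, 0.040, p) / 1_000_000
    print(f"loss {p:.2%}: ~{mbps:.1f} Mbps")
```

Even one packet in ten thousand lost caps a single flow near 30 Mbps at that RTT, which is why reliability matters more than the headline link speed.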
 
The speed of the network DOES figure into that, but what you really care about with networks isn't latency - it is drops, collisions, outages, that sort of thing. As long as the message makes it through, unless you pull up the network stats, you can't tell the difference between a network with lots of collisions and retransmissions and a network with just a slow transmission rate. And you don't care. It is the packet loss rate that eats your socks.
Agreed, so hopefully technology will soon reach a point where no TCP packets will be lost.
 
No, there's 8 bits in every byte.
Your screenshot in #1 is showing 1.08 GB/s. The "B" stands for Byte.
Your #1 message seems to convey disappointment that "7 gigabits" speed only results in this 1.08 GB/s figure.
I tried to point out the two are (almost) equivalent.
 
Well, things have vastly improved over the years.

So, while at one time, say a VPN between two locations, and running a split Access database?

Was not that practical.

Now, well, actually it is!

However, if the VPN was being used say by “at home” workers?

Then that falls down, since it's not all the fancy corporate hardware and fast network at play here; the limitation is the “at home” user's internet plan!

So, as networks become faster, better etc.?

Then the “ability” to run a split FE/BE over that network thus also improves.

So, assuming no VPN bottlenecks?

Then such a setup should work quite well.

Often the issue with a VPN?

Well, along the way the outside “internet” is in the way, and often the risk or issue is not the network speed, but whether it ever “drops” or “breaks”. That drop or break problem is the deal breaker here. And thus one would recommend moving the back end to SQL Server. And it's often not a speed issue to adopt SQL Server; it's a question of network reliability.

A split FE/BE simply does not tolerate a network connection that can stop or break even for a small amount of time. A FE to SQL Server can….


If that split FE/BE now runs well on a network?

And the design is to “limit” records pulled into the form?

Then I can’t see why this proposed setup would not work well…..


Remember, if you take a plain standard FE/BE, and it’s running over a network?
(no SQL server)

And then you bind the access form to a table with 1 million rows?


And then you do this:

DoCmd.OpenForm "frmInvoice", , , "InvoiceNum = 12343"

Access will actually only pull the ONE record over the network!

(assuming InvoiceNum column is indexed).


So, not knowing the application?

Well, if it runs ok over a network now?
Then I would be comfortable suggesting that it will run just fine with the proposed setup…

However, the instant some kind of VPN or remote users come into this mix?
Then it's SQL Server, or remote desktop all the way....

R
Albert
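The indexed-filter point above can be illustrated with any SQL engine's query plan. The sketch below uses SQLite (Python's stdlib `sqlite3`) as a stand-in for the Access/ACE engine, so it is an analogy rather than ACE internals, and the table and index names are invented: without an index the engine must scan every row to satisfy the filter; with one it seeks straight to the match.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Invoices (InvoiceNum INTEGER, Amount REAL)")
cur.executemany("INSERT INTO Invoices VALUES (?, ?)",
                [(n, n * 1.5) for n in range(20_000)])

query = "SELECT * FROM Invoices WHERE InvoiceNum = 12343"

# No index yet: the plan reports a full table scan.
before = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]
print(before)  # e.g. "SCAN Invoices"

# Index the filter column: the plan becomes a seek using the index.
cur.execute("CREATE INDEX idx_invoicenum ON Invoices (InvoiceNum)")
after = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]
print(after)  # e.g. "SEARCH Invoices USING INDEX idx_invoicenum (InvoiceNum=?)"
```

The exact plan wording varies by SQLite version, but the SCAN-versus-SEARCH distinction is the point: the indexed filter is what lets a linked-table setup fetch one record instead of the whole table.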
 

So despite having an ultra-fast internet connection, it's still recommended to optimise WAN traffic for a native Access FE/BE link. I am experimenting with the recommendations mentioned in the attached document.
 

Attachments

Agreed, so hopefully technology will soon reach a point where no TCP packets will be lost.

As long as networks require a physical transmission medium (even if it is wireless), the problem isn't that TCP packets are lost to protocol errors, but that the point-to-point layer can be interrupted by things that take too long to fix. Like lightning striking a wireless tower with a less-than-perfect ground connection. Or some person of limited intelligence slamming his obscenely huge pickup truck into a utility pole that carries fiber data for phone lines. Automatic re-routing has come a long way, but if you are in an isolated area, you might not have an alternate route. Since it is possible for a given network-related app to involve all seven layers of the OSI model, you have entirely too many ways to lose a connection.
 
So despite having an ultra-fast internet connection, it's still recommended to optimise WAN traffic for a native Access FE/BE link. I am experimenting with the recommendations mentioned in the attached document.
Yes.

In fact, the #1 developer time-saving tip I can share?

Any time you have a query with multiple joins?
Convert that to a SQL view, and link to that view.

But why this suggestion over, say, a pass-through query, or even a stored procedure on SQL Server?

Simple: cost and time vs the benefits.

Often the report (or form) will use some additional VBA code to create a filter. And often such code represents significant time and cost.

Well, converting to a view means that the code, the form, the report? The existing filters will THEN play nice with SQL Server. And MORE important is the cost and time to achieve first-rate results?
It is very low.

If you have a complex form, or report based on a complex SQL query?
If you move the query to a view, and link to that view (and even give it the same name as the client-side query)?

Well, now you're going to get about the same performance as if you used a pass-through query, or even a stored procedure.

But, the HUGE bonus is you don't have to change any client side code that creates a filter for the form/report.
And by filter, I mean use of the "where" clause that both forms and reports have.

So, say this:

docmd.OpenForm "frmInvoices",,,"InvoiceNum = 13243"

The above code does not require any changes. But if frmInvoices (or the report) is based on a multi-table query with joins?
Then you get server-side performance; the joins and SQL occur server-side, and ONLY the result is sent back down the wire.

Now, there are, as noted, other ways to achieve this, but adopting the view, and linking to that view?
Far less work than ANY other solution and approach.

Bottom line:
Views are your first friend and the best first step to improve Access-to-SQL Server performance.
9 out of 10 times, with this simple effort and change?

You get first-rate performance, and as noted, in nearly all cases, no changes to the client-side VBA code that no doubt should (and does) use a "where" clause when opening that form or report....

So, this simple change (using a view) tends to equal the performance of the significantly greater effort to create, say, a pass-through query or even a stored procedure.

So, my view on this?
Use views! - pun intended!!!!

R
Albert
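Albert's pattern can be sketched end-to-end. The snippet below uses SQLite (stdlib `sqlite3`) in place of SQL Server, and the table, column, and view names are all invented for illustration: the multi-table join lives in a named view, and the form's "where"-style filter is applied against that view, so the join runs server-side and only the matching row comes back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Invoices  (InvoiceNum INTEGER PRIMARY KEY,
                            CustomerID INTEGER, Total REAL);
    INSERT INTO Customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO Invoices  VALUES (12343, 1, 99.5), (12344, 2, 10.0);

    -- The multi-table join query becomes a named, server-side view.
    CREATE VIEW vwInvoices AS
        SELECT i.InvoiceNum, c.Name, i.Total
        FROM Invoices i
        JOIN Customers c ON c.CustomerID = i.CustomerID;
""")

# The client-side filter (Access's "where" argument) now targets the view;
# only the single matching joined row travels back to the client.
row = cur.execute("SELECT * FROM vwInvoices WHERE InvoiceNum = ?",
                  (12343,)).fetchone()
print(row)  # (12343, 'Acme', 99.5)
```

Because the view keeps the same shape (and can keep the same name) as the original client-side query, existing form and report filters keep working unchanged, which is the low-cost part of the trick.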
 
