Future of Access

All development environments have the same problem if you release without proper testing.
I agree but usually the bad module doesn't affect other parts of the application as it can with Access.

Having source control is excellent, especially when working with a team. Since I generally work alone, I am religious about backups, and I use export-to-text to avoid backing up corrupted versions of the database.
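In case it helps anyone, the core of my export routine looks something like this (the folder path is a placeholder, and the routine assumes the folder already exists):

' Export all code modules to plain text so backups don't carry
' binary corruption. Folder path is an example only.
Public Sub ExportObjectsToText()
    Dim db As DAO.Database
    Dim doc As DAO.Document
    Dim strFolder As String

    strFolder = CurrentProject.Path & "\Source\"
    Set db = CurrentDb
    For Each doc In db.Containers("Modules").Documents
        Application.SaveAsText acModule, doc.Name, strFolder & doc.Name & ".bas"
    Next doc
    ' Forms, reports, macros, and queries export the same way with
    ' acForm, acReport, acMacro, and acQuery.
End Sub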
 
I agree but usually the bad module doesn't affect other parts of the application as it can with Access.

Having source control is excellent, especially when working with a team. Since I generally work alone, I am religious about backups, and I use export-to-text to avoid backing up corrupted versions of the database.
I don't understand why you are treating errors in an Access module any differently than in any other environment. If I make a breaking change in a C# class, any other class importing and using that class will have a problem. The problem with Access and VBA is the limited IDE and the resulting difficulty of using a modern development process. I can shoot myself in the foot just as easily with C# as with VBA.
 
I don't understand why you are treating errors in an Access module any differently than in any other environment.
It seems that I cannot explain it well enough for you to see the problem.
 
It seems that I cannot explain it well enough for you to see the problem.
Maybe we are talking past each other, but once I started using joyfullservice for export and build and Git for source control, I started looking at the ACCDB FE more like a DLL. I can fork the source in Git to try something new. If a change to the production FE is needed before the fork is done, I can restore that version, make the change, test, and release, then go back to the fork and import those changes; or, when the fork is done, I can merge with main to include the changes made to main. The ACCDB or ACCDE is just a release; Git has the source with each object in its own file(s). BTW, I also generally work alone.
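Roughly, the Git side of that flow looks like this (branch names are made up):

git checkout -b new-feature        # fork the source to try something new
# ... production needs a fix before the feature is done ...
git checkout main                  # restore the released version
# make the change, test, build and release the FE, then:
git commit -am "Hotfix for production FE"
git checkout new-feature
git merge main                     # pull the hotfix into the fork
# ... when the feature is done ...
git checkout main
git merge new-feature              # fold the finished work back into main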
 
The problem with Access and VBA is the limited IDE and the resulting difficulty of using a modern development process. I can shoot myself in the foot just as easily with C# as with VBA.
I thought Visual Studio was more versatile than Access's IDE? Even if VBA code modules were stored externally in Windows files, you would still have to compile, link, and load the object files, and runtime errors can still happen even though the modules compiled.
 
I thought Visual Studio was more versatile than Access's IDE? Even if VBA code modules were stored externally in Windows files, you would still have to compile, link, and load the object files, and runtime errors can still happen even though the modules compiled.
I'm not using Visual Studio for VBA. The programming is done in the ACCDB; the add-on is used to export changes and, if I need to import from Git or roll back, to build that source into a new ACCDB, which is then compiled as normal. The joyfullservice add-on is not source control; Git handles that. The add-on is the interface and builder.

 
I'm not using Visual Studio for VBA. The programming is done in the ACCDB
I know that. I was referring to the C# you mentioned. It's a lot easier for a team to work with external code modules versus code encapsulated in ACCDB containers.
 

Regarding this post:

I was catching up on this thread and ran across this post, specifically the comment about a lost ODBC connection; there is a reason for that problem. We start with the fact that network traffic generally breaks down into the UDP and TCP families. ODBC traffic belongs to the TCP family of protocols - one of many. UDP is generally used for "connectionless" exchanges such as DNS lookups and streaming media. TCP usually involves "sessions", or persistent logical connections between the participants, often called sockets.

At least 20 years ago there was a security directive regarding network options configuration. Short answer: the ability to RECONNECT to a TCP-family session was determined by the U.S. Dept. of Defense to be too big a security risk.

The reason is because if someone was monitoring a particular military network and was using a Sniffer or other network real-time analyzer tool, and if that network died for a moment, the sniffer had enough information to take over the socket while the network was in flux. Basically, all you need is found in the packet header, including the socket ID, session ID, and connection sequence number provided by the prior network partner. That is, you don't need login-level credentials to do a RECONNECT because that login occurred previously and established the session-layer structure needed for the transactions.

The "very strong recommendation" was therefore disseminated that we should edit all of our network config files to ALWAYS contain "RECONNECT=NO" from that time on. For what it is worth, the SMB protocol used by Access is another member of the TCP family and is another reason why Access is so sensitive regarding network drops.
 
I know that. I was referring to the C# you mentioned. It's a lot easier for a team to work with external code modules versus code encapsulated in ACCDB containers.

Each team member works on their own copy of the FE, pushing out changes just as they would from a .NET IDE. Each export only exports the changed objects, not the entire FE. Commits are small and frequent.

One problem with Access is its tendency to update all identifiers with the same name if you change the case of any one of them. This causes more objects to be exported with only case changes, which can confuse a merge. To combat this I changed my naming convention for ALL named objects to PascalCase (SomeVariable), where before functions and properties were PascalCase and variables and parameters were camelCase (someVariable).

For example, if I add a function with a local variable userInput and then export, only that module is exported. But if I then add another function in a second module with a local variable UserInput, the casing of the first variable is changed too. On the next export both modules will be exported, even though logically the first module has not changed.
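A contrived illustration (module and variable names invented):

' Module1 - committed first
Public Function GetInput1() As String
    Dim userInput As String          ' declared camelCase
    userInput = InputBox("Value?")
    GetInput1 = userInput
End Function

' Module2 - added later
Public Function GetInput2() As String
    Dim UserInput As String          ' same name in PascalCase: the VBE
    UserInput = InputBox("Value?")   ' silently rewrites every userInput
    GetInput2 = UserInput            ' in Module1 to UserInput as well
End Function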
 
The reason is because if someone was monitoring a particular military network and was using a Sniffer or other network real-time analyzer tool, and if that network died for a moment, the sniffer had enough information to take over the socket while the network was in flux. Basically, all you need is found in the packet header, including the socket ID, session ID, and connection sequence number provided by the prior network partner. That is, you don't need login-level credentials to do a RECONNECT because that login occurred previously and established the session-layer structure needed for the transactions. The "very strong recommendation" was therefore disseminated that we should edit all of our network config files to ALWAYS contain "RECONNECT=NO" from that time on.
And is the above reason why ODBC always requires a login when reconnecting to a db server?
For what it is worth, the SMB protocol used by Access is another member of the TCP family and is another reason why Access is so sensitive regarding network drops.
In a previous post I mentioned that Access's file-server architecture and SMB are part of the reason why Access is intolerant of network noise and latency, but I was told SMB was irrelevant. We use Samba, which implements SMB 3.0, for on-premise, Linux-based db server connections with Access FEs on Windows workstations, and have not experienced any connection issues. All workstations have Gigabit network cards and we only use Cat 8 shielded Ethernet cables, so that also helps minimize disruptions. However, fault tolerance should be provided for Access FE WAN connections.
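For what it's worth, the relevant part of our smb.conf looks roughly like this (the share name and path are examples, and the oplock settings are the usual recommendation for shared Access files rather than anything I can prove is required):

[AccessData]
    path = /srv/accessdata
    read only = no
    # Opportunistic locking lets clients cache file regions locally,
    # which is risky for a multi-user .accdb/.mdb back end:
    oplocks = no
    level2 oplocks = no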
 
Since you are talking about the desire for a more stable ODBC connection or improved behavior of Microsoft Access in the event of a connection loss,
hasn't this already been improved since Access 2019?


ODBC connection retry logic


When Access loses the ODBC connection to an external data source, users have to close and re-open their Access databases in order to continue working with objects that use data from those external data sources. Access doesn't attempt to re-establish the dropped connection even if the connection is valid again, and there is nothing users can do to make Access try to reconnect to the external data source.


We've improved this experience so that now if Access loses the ODBC connection to an external data source, Access tries to reconnect to the data source automatically. In this scenario, you'll now see the following experience:


  • If Access loses the ODBC connection to an external data source you're trying to use, you'll see an error message describing the failure.
  • Access now attempts to reconnect to the data source automatically when performing operations that require a connection to an external data source. If the retry is successful, you can continue working with the data and database objects without having to close and re-open Access.
  • If the retry fails, you won't be able to work with objects that use the dropped external data sources but you can continue working with other areas of your database that aren't dependent upon the unreachable data sources.
  • If you attempt to use data or other objects again that rely on the external data connections, Access tries again to reconnect to the external data source.
 
Since you are talking about the desire for a more stable ODBC connection or improved behavior of Microsoft Access in the event of a connection loss,
hasn't this already been improved since Access 2019?

Yes, I remember seeing that, but it only applies to ODBC for SQL Server, and doesn't solve the main problem of network interference and delays causing accdb corruption.
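For drivers or versions that the built-in retry doesn't cover, the old workaround is to refresh the links yourself. A minimal sketch, assuming ODBC linked tables and a placeholder DSN:

' Re-establish dropped ODBC links by refreshing each linked table.
' The DSN in the connection string is a placeholder.
Public Sub RelinkOdbcTables()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Left$(tdf.Connect, 5) = "ODBC;" Then
            tdf.Connect = "ODBC;DSN=MyServerDSN;"   ' assumed DSN
            tdf.RefreshLink                         ' forces a fresh connection
        End If
    Next tdf
End Sub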
 
Seems to me you are arguing about the difference between a bus and a train. Both transport people and each have their own advantages and disadvantages. Which one you use depends on what matters to you. And sometimes you don’t have a choice.

If you don't want to use the train because it doesn't go where you want to go, and if Access is the bus and you don't like the uncomfortable seats, use a coach or a car.
 
In a previous post I mentioned that Access's file-server architecture and SMB are part of the reason why Access is intolerant of network noise and latency, but I was told SMB was irrelevant.

You may have gotten that answer but you didn't get it from me (unless I was trippin' at the time... but I don't do that.)

Yes, because SMB (any version) is a TCP-based protocol as opposed to UDP-based, Access IS sensitive to network drops. Switching to SMB 2 and later to SMB 3 simply means the session can more easily buffer packets and optimize its use of allocated memory resources. A feature of most protocols is the ability for the original sender to retry after a time-out. There can be a retry limit. In the final analysis, either the session layer keeps on going or the entire virtual connection collapses.

And is the above reason why ODBC always requires a login when reconnecting to a db server?

While the exact reason depends on whatever the network gurus said it was, and I didn't see that particular dispatch, I would say that it is a VERY HIGHLY LIKELY reason for the "login when reconnecting" requirement. The details of any protocol would be "ordained" by the IETF (Internet Engineering Task Force) and/or the IEEE (Institute of Electrical and Electronics Engineers). If you looked up IETF + ODBC, you might get a reference to RFC 2371, the "Transaction Internet Protocol", a standard for coordinating transactions over the internet. However, as it turns out, ODBC is just an API, not a wire protocol. There is no RFC document for ODBC.
 
You may have gotten that answer but you didn't get it from me (unless I was trippin' at the time... but I don't do that.)

Yes, because SMB (any version) is a TCP-based protocol as opposed to UDP-based, Access IS sensitive to network drops. Switching to SMB 2 and later to SMB 3 simply means the session can more easily buffer packets and optimize its use of allocated memory resources. A feature of most protocols is the ability for the original sender to retry after a time-out. There can be a retry limit. In the final analysis, either the session layer keeps on going or the entire virtual connection collapses.



While the exact reason depends on whatever the network gurus said it was, and I didn't see that particular dispatch, I would say that it is a VERY HIGHLY LIKELY reason for the "login when reconnecting" requirement. The details of any protocol would be "ordained" by the IETF (Internet Engineering Task Force) and/or the IEEE (Institute of Electrical and Electronics Engineers). If you looked up IETF + ODBC, you might get a reference to RFC 2371, the "Transaction Internet Protocol", a standard for coordinating transactions over the internet. However, as it turns out, ODBC is just an API, not a wire protocol. There is no RFC document for ODBC.
I would've thought good error detection, correction, and retransmission of packets were built into SMB and other networking protocols. However, earlier in this thread I was told Access is the problem.
 
I am a team member of four Access developers and we are concerned about the stability and future of Access.
Getting back to the original concern about the future: obviously, MS Access primarily survives by the grace of Microsoft. But going to the next level, is MS Access gaining users or losing them? This website can, to a degree, indirectly answer that question. If more people were implementing MS Access solutions, you would expect them to find this site. So, is the number of users of this website (or others like it) growing, stagnant, or declining?
 
Is the ODBC login requirement really because Access uses SMB?

Pat and I sometimes disagree on technical matters due to our different backgrounds. I'm a hardware, device driver, and networks person; Pat came up from a more applications-oriented starting point. We sometimes see things through different shades of rose-colored glasses. I offered my opinion based on my background, and she offered hers.

The truth is that when it was originally designed (back in the 1990s?), whatever Access used had to have a connection consistent with TCP standards, because database work in a local-area network environment was (and still is) primarily session-oriented. That requirement exists because TCP/IP has a fixed upper limit on data packet size and database transactions rarely fit completely in a single network packet. Therefore, you have to establish a framework to maintain continuity of context (to detect that a packet was lost), and it is that "sessions" requirement that brings TCP-family protocols into the picture. Because SMB is a TCP-family member, it has the ability in its handshake to say, "I got packet #12454 with a transmission error... please retransmit #12454," or the listener side could say, "Acknowledge proper receipt of packet #12454... send next packet in sequence." Note that a retransmission request is NOT a network drop, because the handshake is still active.

Because it is a file-sharing environment, you have to use a protocol that allows random access to arbitrary PARTS of a file. (Diverge for explanation: Excel does NOT use such a protocol, which means that any number of folks can simultaneously READ an Excel file but only one user at a time can WRITE to it. But that won't work for a shared multi-user database app.) SMB protocol allows tasks to selectively grab file pieces-parts over a LAN. As long as two "grabs" don't overlap, SMB is perfectly content - and so is Access.

So... does that mean that because Access uses SMB that it has that RECONNECT problem? Is there anything else that Access could have done?

No, nothing else. Access uses SMB because that is the protocol that allows shared writes within a given file in a session-oriented local area network environment. It is also known as the Windows File and Printer Sharing protocol. It is called SMB for short, Server Message Block for long, and Samba is an open-source implementation of it. Whatever you call it, SMB was there first! Which is the chicken and which is the egg?

The issue actually comes down to this: it is the requirements of the app that make it susceptible to connection-loss issues, and anything you design that doesn't involve witchcraft will be susceptible to session loss. SMB happens to be the name of the protocol that was used. It is the security requirements of the application that lead you to where we are now.
 
I think Doc is saying that the application cannot even request a reconnect. Could it open a new connection?

The other thing you could do is to force the bound form to close if it has been idle for a few minutes.
 
I think Doc is saying that the application cannot even request a reconnect. Could it open a new connection?

The other thing you could do is to force the bound form to close if it has been idle for a few minutes.

The problem with "RECONNECT" actually isn't whether you can restart the session. (You can't.) It is that after a drop, you can't finish an incomplete transaction. You can protect against total loss by always using BEGIN/COMMIT transactions, which means that a Compact & Repair could bring the DB back to a consistent state - but the partially complete transaction is always lost.

Remember that TCP/IP has to break up data streams into packets once we start talking networks - LAN or WAN. If you take a network drop after you have received half of the packets needed for a complete transaction AND it was not done with a formal transaction context, what you now have is a half-updated table. IF you could do a reconnect (different from a retransmit during a continuous session), you would have to be able to bring both sockets together and determine the last validly transmitted buffer - and hope the next one was still available. Can the app open a new connection? Absolutely. Would it have the ability to ask for the last part of the previously broken transmission sequence? I don't know of ANY protocol that does that. Doesn't mean it can never happen, but I've never seen that work.

It is the half-updated table that leads to the "Inconsistent database" error message that reveals a corruption event. THAT is the main mechanism of database corruption.
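In DAO terms, that protection looks something like this (the table and field names are invented):

' A minimal sketch of an explicit transaction: both statements commit
' together or not at all, so a drop never leaves a half-updated table.
Public Sub UpdateWithTransaction()
    Dim wrk As DAO.Workspace
    Set wrk = DBEngine.Workspaces(0)

    On Error GoTo RollbackIt
    wrk.BeginTrans
    wrk.Databases(0).Execute "UPDATE Orders SET Shipped = True WHERE OrderID = 1", dbFailOnError
    wrk.Databases(0).Execute "UPDATE Inventory SET Qty = Qty - 1 WHERE ItemID = 5", dbFailOnError
    wrk.CommitTrans
    Exit Sub

RollbackIt:
    wrk.Rollback    ' nothing partial ever reaches the tables
End Sub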
 
but the partially complete transaction is always lost.
I realize that, but what we're talking about here is a form left in an idle state, not one pending completion of an update. Nothing can ever be done if you lose the connection mid-update unless the update is actually inside a transaction.

Can the OP solve that specific problem by starting a new connection? I think if the users can be trained to always close the open form and go back to the menu, the reconnection can be handled more gracefully.
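Something like this on the form would handle the idle-close part; a minimal sketch, with an arbitrary interval and the form's KeyPreview set to Yes:

' Close a bound form after five idle minutes so it isn't holding a
' connection open when the network drops. The interval is arbitrary.
Private Sub Form_Load()
    Me.TimerInterval = 300000            ' 5 minutes, in milliseconds
End Sub

Private Sub Form_Timer()
    DoCmd.Close acForm, Me.Name, acSaveNo
End Sub

Private Sub Form_KeyDown(KeyCode As Integer, Shift As Integer)
    Me.TimerInterval = 300000            ' any keystroke restarts the countdown
End Sub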
 
