Future of Access

It is an issue if the person who installs the new code into the .accdb doesn't do the proper regression testing. I understand how the source code manager works. I use the export to/import from text feature for my own personal backups.
All development environments have the same problem if you release without proper testing. You can see daily examples on The Daily WTF site.
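
For what it's worth, those text backups are just calls to Access's built-in SaveAsText method; here is a minimal sketch (the backup folder is illustrative and its parent is assumed to exist):

```vba
' A minimal sketch of an export-to-text backup using Access's
' built-in SaveAsText. Assumes C:\Backups already exists; the
' folder layout and file extension are illustrative choices.
Public Sub BackupModulesAsText()
    Dim obj As AccessObject
    Dim strFolder As String
    strFolder = "C:\Backups\" & Format(Now, "yyyymmdd_hhnnss") & "\"
    MkDir strFolder
    For Each obj In CurrentProject.AllModules
        Application.SaveAsText acModule, obj.Name, strFolder & obj.Name & ".txt"
    Next obj
End Sub
```

LoadFromText does the reverse when a module needs to be restored.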

Moving to source control made it much easier to track changes and undo mistakes. I need to get much better at Git, but the basic features I use are very powerful.

By the way, the joyfullservice add-in is available on GitHub; you can fork it, build your own copy, and modify that copy. I did that when I saw something I wanted to change: after making my edits and testing them, I posted back and the changes were added to the main branch.
 
I agree but usually the bad module doesn't affect other parts of the application as it can with Access.

Having source control is excellent, especially when working with a team. Since I generally work alone, I am religious about backups, and I use export to text to prevent backing up corrupted versions of the database.
I don't understand why you are treating errors in an Access module any differently than in any other environment. If I make a breaking change in a C# class, any other class importing and using that class will have a problem. The problem with Access and VBA is the limited IDE and the resulting difficulty of using modern development processes. I can shoot myself in the foot just as easily with C# as with VBA.
 
It seems that I cannot explain it so you understand the problem.
Maybe we are talking past each other, but once I started using the joyfullservice add-in for export and build and Git for source control, I started looking at the ACCDB FE more like a DLL. I can fork the source in Git to try something new. If a change to the production FE is needed before the fork is done, I can restore that version, make the change, test, release, then go back to the fork and import those changes; or, when the fork is done, I can merge with main to include the changes made to main. The ACCDB or ACCDE is just a release; Git has the source with each object in its own file(s). BTW, I also generally work alone.
 
I thought Visual Studio was more versatile than Access' IDE? Even if VBA code modules were stored externally in Windows files, you still have to compile, link, and load the object files, and runtime errors can still happen even though the modules compiled.
I'm not using Visual Studio for VBA. The programming is done in the ACCDB; the add-in is used to export changes and, if I need to import from Git or roll back, it builds that source into a new ACCDB, which is then compiled as normal. The joyfullservice add-in is not source control, Git handles that. It is the interface and the builder.

 

Regarding this post:

I was catching up on this thread and ran across this post. Regarding the comment in the post about a lost ODBC connection, there is a reason for that problem. We start with the fact that network traffic generally breaks down into the UDP and TCP families. ODBC rides on the TCP family of protocols - one of many. UDP is generally used for "connectionless" traffic such as DNS lookups and streaming. TCP usually involves "sessions", persistent logical connections between the participants, often called sockets.

At least 20 years ago there was a security directive regarding network options configuration. Short answer: The ability to RECONNECT to a TCP-family session was determined by the U.S. Dept. of Defense to be too big of a security risk.

The reason is because if someone was monitoring a particular military network and was using a Sniffer or other network real-time analyzer tool, and if that network died for a moment, the sniffer had enough information to take over the socket while the network was in flux. Basically, all you need is found in the packet header, including the socket ID, session ID, and connection sequence number provided by the prior network partner. That is, you don't need login-level credentials to do a RECONNECT because that login occurred previously and established the session-layer structure needed for the transactions.

The "very strong recommendation" was therefore disseminated that we should edit all of our network config files to ALWAYS contain "RECONNECT=NO" from that time on. For what it is worth, the SMB protocol used by Access is another member of the TCP family and is another reason why Access is so sensitive regarding network drops.
 
I know that. I was referring to the C# you mentioned. It's a lot easier for a team to work with external code modules versus code encapsulated in accdb containers.

Each team member works on their own copy of the FE and pushes out changes, just like with an IDE such as Visual Studio for .NET. Each export only exports the changed objects, not the entire FE. Commits are small and often.

One problem with Access is its tendency to update all named objects with the same name if you change the case of any one of them. This causes more objects to be exported with only case changes, which can confuse a merge. To combat this I changed my naming convention for ALL named objects to PascalCase (SomeVariable), where before functions and properties were PascalCase and variables and parameters were camelCase (someVariable).

For example, if I add a function with a local variable userInput and then export, only that module is exported. Then if, in another module, I add another function with a local variable UserInput, the casing of the first variable is changed too. On the next export both modules will be exported even though logically the first module has not changed.
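
A minimal illustration of that re-casing behavior (module and variable contents invented):

```vba
' Module1 - written first and exported; the local is camelCase.
Public Function FirstFunction() As String
    Dim userInput As String
    userInput = "abc"
    FirstFunction = userInput
End Function

' Module2 - added later. Declaring the same identifier in PascalCase
' makes the VBA editor re-case every occurrence project-wide, so
' Module1's userInput silently becomes UserInput and Module1 shows
' up as "changed" on the next export.
Public Function SecondFunction() As String
    Dim UserInput As String
    UserInput = "xyz"
    SecondFunction = UserInput
End Function
```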
 
Since you are talking about the desire for a more stable ODBC connection or improved behavior of Microsoft Access in the event of a connection loss,
hasn't this already been improved since Access 2019?


ODBC connection retry logic


When Access loses the ODBC connection to an external data source, users have to close and re-open their Access databases in order to continue working with objects that use data from those external data sources. Access doesn't attempt to re-establish the dropped connection even if the connection is valid again and there is nothing that users can do to make Access try and reconnect to the external data source.


We've improved this experience so that now if Access loses the ODBC connection to an external data source, Access tries to reconnect to the data source automatically. In this scenario, you'll now see the following experience:


  • If Access loses the ODBC connection to an external data source you're trying to use, you'll see an error message describing the failure.
  • Access now attempts to reconnect to the data source automatically when performing operations that require a connection to an external data source. If the retry is successful, you can continue working with the data and database objects without having to close and re-open Access.
  • If the retry fails, you won't be able to work with objects that use the dropped external data sources but you can continue working with other areas of your database that aren't dependent upon the unreachable data sources.
  • If you attempt to use data or other objects again that rely on the external data connections, Access tries again to reconnect to the external data source.
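
A minimal sketch of how you might poke a linked table from VBA so that retry logic gets a chance to run; the table name and query are hypothetical:

```vba
' Probe a linked ODBC table. On Access 2019+ the attempt itself
' should trigger the automatic reconnect described above.
Public Function LinkIsAlive(ByVal tblName As String) As Boolean
    Dim rs As DAO.Recordset
    On Error Resume Next
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT TOP 1 * FROM [" & tblName & "]", dbOpenSnapshot)
    LinkIsAlive = (Err.Number = 0)
    If Not rs Is Nothing Then rs.Close
    On Error GoTo 0
End Function
```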
 
Seems to me you are arguing about the difference between a bus and a train. Both transport people and each have their own advantages and disadvantages. Which one you use depends on what matters to you. And sometimes you don’t have a choice.

If you don’t want to use the train because it doesn’t go where you want to go, and if Access is the bus and you don’t like the uncomfortable seats, use a coach or a car instead.
 
In a previous post I mentioned that Access' file server architecture and SMB are part of the reason why Access is intolerant to network noise and latency, but I was told SMB was irrelevant.

You may have gotten that answer but you didn't get it from me (unless I was trippin' at the time... but I don't do that.)

Yes, because SMB (any version) is a TCP-based protocol as opposed to UDP based, Access IS sensitive to network drops. Switching to SMB 2 and later to SMB 3 simply means the session can more easily buffer packets and optimize its use of allocated memory resources. A feature of most protocols is the ability for the original sender to retry after a time-out. There can be a retry limit. In the final analysis, either the session layer keeps on going or the entire virtual connection collapses.

And is the above reason why ODBC always requires a login when re-connecting to a db server?

While the exact reason depends on whatever the network gurus said it was, and I didn't see that particular dispatch, I would say that it is a VERY HIGHLY LIKELY reason for the "login when reconnecting" requirement. The details of any protocol would be "ordained" by the IETF (Internet Engineering Task Force) and/or IEEE (Institute of Electrical and Electronics Engineers). If you looked up IETF + ODBC, you might get a reference to RFC 2371, the standard for database transactions over the Internet, titled "Transaction Internet Protocol". However, as it turns out, ODBC is actually just an API that embodies an implementation of TIP. There is no RFC document for ODBC.
 
I am a team member of four Access developers and we are concerned about the stability and future of Access.
Getting back to the original (future) concern. Obviously, MS Access primarily survives by the grace of Microsoft. But going to the next level (in terms of the future of MS Access): is MS Access gaining users or losing users? This website, to a degree, can indirectly answer that question. If more people are implementing MS Access solutions, it can be expected that they will find this site. As such, is the number of users of this website (or others) growing, stagnating, or declining?
 
Is the ODBC login requirement really because Access uses SMB?

Pat and I sometimes disagree on technical matters due to our different backgrounds. I'm a hardware, device driver, and networks person. Pat came up through more of an applications oriented starting point. We sometimes see things through different shades of rose-colored glasses. I offered my opinion based on my background, she offered hers.

The truth is that when it was originally designed (back in the 1990s?), whatever Access used had to have a connection consistent with TCP standards, because database work in a local-area-network environment was (and still is) primarily session-oriented. That requirement exists because TCP/IP has a fixed upper limit on data packet size and database transactions rarely fit completely in a single network packet. Therefore, you have to establish a framework to maintain continuity of context (to detect that a packet was lost), and it is that "sessions" requirement that brings TCP-family protocols into the picture. Because SMB is a TCP-family member, its handshake has the ability to say, "I got packet #12454 with a transmission error... please retransmit #12454," or the listener side could say, "Acknowledge proper receipt of packet #12454... send next packet in sequence." Note that a retransmission request is NOT a network drop, because the handshake is still active.

Because it is a file-sharing environment, you have to use a protocol that allows random access to arbitrary PARTS of a file. (Diverge for explanation: Excel does NOT use such a protocol, which means that any number of folks can simultaneously READ an Excel file but only one user at a time can WRITE to it. But that won't work for a shared multi-user database app.) SMB protocol allows tasks to selectively grab file pieces-parts over a LAN. As long as two "grabs" don't overlap, SMB is perfectly content - and so is Access.

So... does that mean that because Access uses SMB that it has that RECONNECT problem? Is there anything else that Access could have done?

No, nothing else. Access uses SMB because that is the protocol that allows shared writes within a given file in a session-oriented local area network environment. It is also known as the Windows File and Printer Sharing protocol. But it is called SMB for short or Server Message Block for long, and implementations include the open-source Samba. Whatever you call it, SMB was there first! Which is the chicken and which is the egg?

The issue actually comes down to this: It is the requirements of the app that make it susceptible to connection loss issues and anything you design that doesn't involve witchcraft will be susceptible to session loss. SMB happens to be the name of the protocol that was used. It is the security requirements of the application that lead you to where we are now.
 
I think Doc is saying that the application cannot even request a reconnect. Could it open a new connection?

The other thing you could do is to force the bound form to close if it has been idle for a few minutes.
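
Something like this sketch, in the bound form's module (the five-minute threshold and the choice of activity events are assumptions):

```vba
Private mLastActivity As Date           ' time of the user's last activity

Private Sub Form_Load()
    mLastActivity = Now
    Me.TimerInterval = 60000            ' check once a minute
End Sub

Private Sub Form_KeyPress(KeyAscii As Integer)
    mLastActivity = Now                 ' typing counts as activity (KeyPreview = True)
End Sub

Private Sub Form_MouseMove(Button As Integer, Shift As Integer, X As Single, Y As Single)
    mLastActivity = Now                 ' so does moving the mouse over the form
End Sub

Private Sub Form_Timer()
    If DateDiff("s", mLastActivity, Now) >= 300 Then
        DoCmd.Close acForm, Me.Name, acSaveNo   ' closing the form releases its connection
    End If
End Sub
```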

The problem with "RECONNECT" actually isn't whether you can restart the session. (You can't.) It is that after a drop, you can't finish an incomplete transaction. You can protect against total loss by always using BEGIN/COMMIT transactions, which means that a Compress & Repair could bring the DB back to a consistent state - but the partially complete transaction is always lost.

Remember that TCP/IP has to break up data streams into packets once we start talking networks - LAN or WAN. If you take a network drop after you have received half of the packets needed for a complete transaction AND it was not done with a formal transaction context, what you now have is a half-updated table. IF you could do a reconnect (different from a retransmit during a continuous session), you would have to be able to bring both sockets together and determine the last validly transmitted buffer - and hope the next one was still available. Can the app open a new connection? Absolutely. Would it have the ability to ask for the last part of the previously broken transmission sequence? I don't know of ANY protocol that does that. Doesn't mean it can never happen, but I've never seen that work.

It is the "Half updated table" that leads to the "Inconsistent database" error message that reveals a corruption event. THAT is the main mechanism of database corruption.
 
When the form is truly idle and Access thinks (incorrectly) that it is still connected, my only question has to do with dealing with the old connection. In order to start a new connection, Access has to first let go of what it thinks it still has. But when it tries to do something with that connection, it ain't there no more. NOW we are talking mechanical issues inside of Access.

One method would be to just close the form in order to close the connection. IF the "close the channel" handshakes work correctly, then all else is clear sailing. You certainly could then attempt to open the form again. If an error occurs, it would likely be in the Form_Unload step. This method would be visible to the user as the form blinking out and returning and of course would have to be performed from another form - like a dispatcher form or something similar.

There is another method to consider. I believe if you attempted to reload or otherwise manipulate the form's .RecordSource, you would get an automatic disconnect attempt of the prior recordset before it tried to open the updated .RecordSource - whether an actual query or a named query or a named table. From what I remember, the form's CONTENT (in the bound controls) might blink but the form itself would stay open. That disconnect, now behind the scenes, is again the wild card in this mix.

Perhaps doing the .RecordSource manipulation in VBA with an "On Error Resume Next" in force during the disconnect/new-connect sequence would be adequate. Then all you would have left to worry about is whether you wanted the re-opened recordset to assume its prior position, which I suppose is possible using a search through the .RecordsetClone to help you establish the correct bookmark.
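
A sketch of that idea (the form's OrderID field and the surrounding names are hypothetical):

```vba
Public Sub RefreshFormConnection(frm As Form)
    Dim lngSavedID As Long
    On Error Resume Next                 ' tolerate errors during the disconnect/new connect
    lngSavedID = frm!OrderID             ' remember the row the user was on
    frm.RecordSource = frm.RecordSource  ' reassigning forces a close and reopen
    With frm.RecordsetClone              ' then try to restore the prior position
        .FindFirst "OrderID = " & lngSavedID
        If Not .NoMatch Then frm.Bookmark = .Bookmark
    End With
    On Error GoTo 0
End Sub
```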

The issue with formal handshakes is that folks think about establishing a connection but don't always remember that the protocol includes closure standards as well as new-connection standards. My remarks are based on understanding of TCP protocols but since Access is NOT OpenSource, I have no clue about that critical clean-out step of closing a failed connection.

Does that help?
 
Okay, but why is the dropped packet not detected by both SMB and the listener (Access), discarded, and retransmitted? It's not the case that the whole packet was dropped; rather, Access received part of the packet, or the packet contains garbage caused by interference, and now the accdb file is corrupted. Similar damage to when the power plug is pulled from a PC while it's running Windows.

Did you forget that in this hypothetical situation, we have dropped the network connection? There IS no retransmission if there is no network. And in this NO RECONNECT situation, there never will be. That data stream is now disrupted and unavailable.

As to network drivers, trust me on this. If your 'puter gets a half of a packet, it will NOT be accepted. At the very least, it would fail the checksum tests because the content checksums are at the tail end of each packet. But regarding the MOST probable detection method, the device driver will probably detect "loss of carrier" (which is a hardware condition) and not worry about anything else like checksums or protocol gaps.

Sorry to disagree with you, but even if a WHOLE PACKET was received, packet lengths are typically about 1500 bytes. Access typically transmits one disk buffer at a pop, and those are typically 4 Kbytes - the size of what is also called a disk cluster or a disk allocation unit. It would take two full packets and one final, shorter packet to send one complete disk buffer of 4K bytes.

If I have a network drop during an update of a single disk block, I still have only one chance in three of having sent the complete buffer even if I really DID send a complete packet.
 
The accdb may already be in a corrupt, inconsistent state even if you were to close and reopen the form. What if the problem happens while an update query or internal housekeeping is in progress? Given that Access is closed source, what would you suggest to the MS team for making Access more tolerant of network connection problems?

To the "What if" question... you corrupt the DB.

To the "may already be corrupt" part of the question, I think you would be unable to reopen the form.

To the "What would you suggest" question, you are asking a question that might not be answerable. IF you have a DB app that doesn't protect all updates by using a Transaction BEGIN/COMMIT sequence, your DB might not be recoverable after a net dropout. But this is a case of knowing that there is a risk and then not taking mitigating action. Every small-system DB I have ever seen was vulnerable in this way, and we CANNOT forget that Access was/is a small-system utility. The "big boys" have large memory models to play with that would allow them to automatically treat every network update as a transaction. The problem, of course, is that transaction-based systems take a little longer if you have to wait for the whole transaction to be set up before finally being committed. It's a price to pay for doing business.

Most people play the odds and don't ask Access to do what it wasn't designed to do - i.e. work over a WAN or work in an incompatible cloud.
 
You get what you pay for. If you want a "real" database engine (members: NOTE "REAL" IS QUOTED HERE) then you should expect to pay a big-boy price for it. SQL Server, ORACLE, DB2, SYBASE, ... you pays your money and you takes your chances.

In the meantime, Access does what it does for less than $1000 for a perpetual license.
 
Doc, you're making this harder than it needs to be.

I'm answering BlueSpruce's questions. He's the one asking why SMB doesn't detect and correct and retry. In fact, the transmission layer DOES all three of those things, but in the end analysis, at some point when all of the countdowns and timers run to zero, the connection has to let go. There, is that short enough for you?
 
So can the disconnection be detected sooner, the reconnection happen more quickly, and the app continue where it left off, without having to close/reopen the form or the FE?

If there is an actual hard disconnect, the carrier signal will be lost and that is detected immediately. In that case, the network driver sets a software countdown timer (standard = 30 seconds) before notification of the disconnect, because network drops are actually not that infrequent. It is not unknown to have a momentary glitch. If the carrier returns within the countdown, everything reverts to normal operation. The device driver is written to automatically resync and resume operations without your intervention. If a partial header came in, the network driver would send a NAK packet with the sequence number of the missed traffic packet. It would tick a counter in the device's error count and go on about its business. You would never see that recovery behind the scenes because it is handled by the ISO Transport (End-to-End) layer transparently to you.

If the disconnect is "softer" due to a Windows issue or a software bug in the network driver or electrical noise on the line (point-to-point layer issue), the driver gets the end of the message but still has the carrier signal. In response to this other error the network driver will trigger a NAK/RESEND sequence. Again, if it all happens within the driver's timeout level, the recovery is handled by the "network stack" software and you never even know it happened unless you ask to see the network driver's statistics, which would list error counts for each type of recognized error.

When you get an actual disconnect message, your connection was down longer than the network device's timeout. IF you see a timeout, there IS no RECONNECT because at that point the end-to-end network context has now failed.

Can you adjust the timeout? Yes, YOU can - but if your network partner isn't set to the same timeout, it doesn't matter. The shortest timeout wins (... well, actually, LOSES).

You are looking to see the issue sooner - but the operating system doesn't work that way. What I am about to describe is true for several operating systems. The I/O operation is a self-contained pseudo-task within the operating system. You give an I/O request packet to the O/S, which gives it to the driver and establishes an O/S level virtual thread on your behalf. At this point, you have NO VISIBILITY to the state of that thread - as a matter of security, because it is running with O/S-level privileges. That thread is now out of your control and the only early thing you can do is, if your I/O was "I/O & release" rather than "I/O & wait" then you COULD do an I/O Cancel. To the best of my knowledge, this is true for Windows, all UNIX flavors, VAX/VMS and Open/VMS, RSX-11M, and TOPS-10. I believe it is true for RSX-11S, DOS-11, and RSX-11D, but won't swear to those.

While that I/O packet is still considered valid and not yet satisfied, you DO NOT have a way to ask "Is my I/O request still active?" unless you are running with elevated O/S privilege AND know where to look to find the I/O request that is pending on the network device. For most users, you can't get there from here. The design of the O/S says that you have to wait for the I/O response and there is no valid way of anticipating. It is done that way to keep your mitts out of the works of the O/S to keep you from breaking something.
 
No, but unless there actually is an outage, the drop is temporary. Doc, you're making this harder than it needs to be. The system times out. The app doesn't discover the problem until the user tries to do something else, and then even finding out that there was a blip takes minutes. THAT is the problem we are trying to solve. Assuming the OP uses a timer to find out that the network has dropped and closes the open form, will there be a long wait when the user comes back to do something and opens the form again? I'm trying to figure out how to shorten the user's wait time, and I don't have a network to do it on, so I can't create a test.
You can quickly detect the drop by using WMI to set up a network monitor, then close all forms or Access when a drop is detected.
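
A minimal sketch of that idea, polling WMI from a hidden form's Timer rather than using a true WMI event sink (the 5-second interval is an assumption; status code 2 means "connected" per Win32_NetworkAdapter.NetConnectionStatus):

```vba
Private Sub Form_Timer()                 ' e.g. Me.TimerInterval = 5000 in Form_Load
    Dim svc As Object, nics As Object, nic As Object
    Set svc = GetObject("winmgmts:\\.\root\cimv2")
    Set nics = svc.ExecQuery( _
        "SELECT NetConnectionStatus FROM Win32_NetworkAdapter WHERE PhysicalAdapter = True")
    For Each nic In nics
        If Nz(nic.NetConnectionStatus, 0) = 2 Then Exit Sub   ' at least one live adapter
    Next nic
    DoCmd.Quit acQuitSaveNone            ' nothing connected: bail out before corruption
End Sub
```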
 
I stated earlier that you could close the form and re-open - IF the connection has by that time re-established itself. Which is a possibility. The related question "DOES ACCESS STILL NEED TO WAIT" can only be answered by testing the connection to see if you have one. (See @RonPaii remark regarding WMI). If you have a connection but the previously open form has now gone bonkers, try to close it and find out. That comes back to whether the form is still around or whether it will respond to a DoCmd.Close of the form, or whatever other method you try.

The real headache in this question is whether Access itself has actually EXITED because of the lost connection between the FE and the linked tables in the BE. That is going to depend on factors such as the way the network was set up and whether the App's error handling (provided by the app's author) intercepted the error that would possibly be fatal to Access itself. I have seen cases where Access died because it lost contact with a linked BE table. Therefore, in answer to the "NEED TO WAIT" question, I have to reply, "Damned if I know. It depends on too many factors."

You can quickly detect the drop by using WMI to set up a network monitor, then close all forms or Access when a drop is detected.

This presumes that Access is able to deal with that. Remember that Access is (a) single-threaded for code execution and (b) your code is not always in control if no relevant event has been recently triggered. This implies timer operations, and the problem there is that a complex timer routine is going to slow you down a lot. A MAIN program clearly could set up event interceptors to tell you what is going on, but you don't control all of the event vectors for MSACCESS.EXE, the MAIN program. That kind of hampers the detection process.
 
