dfenton
AWF VIP · Joined May 22, 2007 · Messages 469
Does this [use of NETBIOS and IPX/SPX] remain the case even today?
Novell switched to TCP/IP for its networking stack over 10 years ago, and NETBIOS is only going to be in use in peer-to-peer networks. I'm a huge fan of NETBIOS, actually -- it's incredibly fast and efficient -- but it's non-routable, so not really usable on a network of any size.
I'm not sure I understand the reasons for needing to ping the file every second. It seems to me it would make more sense to check the .ldb file based on an event rather than on a timer.
The reason is to show you updates from other users. This interacts with the refresh interval and with record locking (optimistic or pessimistic). It has to happen on a timer so that, for instance, you can be given a visual representation of the locking status of a record in the record selector.
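The timer-driven check described above can be sketched as a simple polling loop. This is a hypothetical illustration, not Jet/ACE internals: the lock-file name, the interval, and the functions are all invented for the example, and a real lock file encodes per-user lock state rather than mere existence.

```python
import os
import tempfile

# Illustrative refresh interval, akin to Access's refresh-interval setting.
REFRESH_INTERVAL_SECS = 1.0

def record_lock_status(lock_path: str) -> str:
    """Crude stand-in: report 'locked' if the shared lock file exists."""
    return "locked" if os.path.exists(lock_path) else "available"

def poll_once(lock_path: str, update_ui) -> None:
    """One timer tick: re-read the lock state and refresh the indicator."""
    update_ui(record_lock_status(lock_path))

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        lock = os.path.join(d, "backend.ldb")
        statuses = []
        poll_once(lock, statuses.append)   # no other user yet
        open(lock, "w").close()            # simulate another user opening the db
        poll_once(lock, statuses.append)
        print(statuses)  # ['available', 'locked']
```

The point of the timer (as opposed to an event) is exactly this: the front end has no way to receive an event from a passive file share, so it must re-check on an interval to keep the record-selector indicator current.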
That's what was puzzling me a bit when I discussed TCP being reliable -- but it does me no good to assume it's TCP being used. If we knew what protocol Windows uses for accessing files on a remote drive, that would help the discussion, I think.
It's irrelevant to Jet/ACE, which knows nothing about the networking layer. To Jet/ACE, a local hard drive and a file share on a remote server (mapped to a drive letter or not) are exactly the same. I don't think knowing the networking protocol helps the discussion at all, but it would help in troubleshooting. In general, there is no difference in terms of reliability in regard to Jet/ACE for the various network protocols because Jet waits a really long time (in computer terms) before it gives up on a spotty connection. On a local LAN, TCP/IP transmission errors are going to be corrected in milliseconds or less, and if they're not, it means something is badly wrong at a level either above TCP/IP (something's blocking packets) or at a level beneath it (physical wiring). I think that this is the case for all the common protocols, so I don't really think they make a difference.
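To make that layering concrete, here is a minimal, hedged sketch (Python, loopback echo server -- nothing Jet/ACE-specific, and the "query" string is purely illustrative) of the view an application has of TCP: it writes and reads an ordered byte stream, and any retransmission the stack performs underneath shows up only as latency.

```python
import socket
import threading

def serve_once(sock: socket.socket) -> None:
    """Accept one connection and echo back whatever arrives."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Loopback server on an ephemeral port (loopback won't actually drop
# packets; this only shows where the application-level view sits).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"SELECT * FROM lookup")  # request goes out as a byte stream
reply = cli.recv(1024)                # blocks until TCP delivers it intact
cli.close()
t.join()
srv.close()
print(reply)  # b'SELECT * FROM lookup'
```

Jet/ACE sits even further up: it issues file reads and writes against a path, and whether the redirector satisfies them locally or over SMB/TCP is invisible to it.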
I can't understand why they decided to make a single BLOB record of the whole collection of Access objects -- it seems quite backward to me. Not that it justifies sharing Access objects even in 97, anyway.
I think it was for compatibility with the VBE, but I could be wrong on that.
Well, to me, it's not just about performance [in re: caching lookup tables locally] but also about cutting down on unnecessary network traffic. Why saturate the wire with requests for the same data that does not change, or changes slowly?
The amount of data involved is minuscule, and has to be retrieved only once per session, anyway (if it's volatile enough to need to be re-retrieved, then it's not a candidate for local caching). Now, one argument for your approach might be that there is locking overhead across the wire that wouldn't be there for local tables, but the overhead of a read lock is very small. You know the tables are not going to be updated, so there's never going to be any write lock contention, so I'd say this is a really minor consideration.
By doing so, the concurrency improves and the application can then scale better.
I think you're futzing with optimizing sub-1% performance issues (substantially less than 1%, in fact). Record locking aside, the only data you're avoiding loading across the wire is that first request for the lookup table within each Access session. Since we're talking about lookup tables, we're really considering sub-1000-record tables. Even with larger static lookup tables (such as a zip code lookup table), I think Jet is going to be efficient enough (remember that it doesn't pull full tables, but uses the indexes to decide which data pages to retrieve) to make the difference between caching locally and pulling from the server too small to be worth the maintenance overhead.
This is the same principle as when working with a server-based RDBMS. It's easy to say that a query that takes one second to execute is acceptable, but if it could be optimized to do the same thing in a millisecond, that's even better, because the chances of contention are much reduced and users get a good experience out of the application without needing to break out the wallet for expensive hardware. Perhaps it's not justified with a Jet back end, but it certainly is with a server RDBMS back end, and I just think it's good principle to design a good citizen.
I think you're worrying about an amount of data that is at least 1 or 2 orders of magnitude below the threshold that is actually going to make any real performance difference. Without Jet's excellent data caching, or with much slower networks (your approach might be justified for large lookup tables on a 10BaseT network, but nobody is using that any more, right?), the extra work your approach entails might be worth the effort. But with modern networks (increasingly tending towards 1Gbps instead of just 100Mbps) and modern PCs (plenty of RAM for caching data and plenty of available CPU cycles), I think it's not necessary to do the extra work.
I'm not certain it ever was, to be honest.
Um, I thought I said that TCP will attempt to cover up the unreliable nature of the connection by requesting retransmission of packets whose checksums fail, and so forth, which manifests to the end user as a slow network connection. The application is ignorant of what TCP is actually doing to hide that fact. But that goes back to the question... is TCP/IP really the protocol that is used to share a file across the network? I would be floored if it were using a different protocol, especially one that does not check packets as they arrive, such as UDP.
The problem with WiFi is at a much lower level than the networking protocol -- it's the "physical" layer that is unreliable ("physical" in quotes, since it's the radio signal that I'm referring to). It's as though you had a wired network and periodically somebody unplugged your central hub. There's nothing any networking protocol can do about that -- and TCP/IP shuts down when the physical network layer is absent (try starting a ping xxx.xxx.xxx.xxx -n 100 and turning the switch off on your WiFi card). That is, when your network adaptor can no longer find the wireless network, it turns off all TCP/IP services. UDP or not, if the network device is disabled/inaccessible, there's no protocol that can overcome that.
Also, WiFi timeouts are at least an order of magnitude larger (and probably several orders of magnitude larger) than what Jet/ACE can recover from, and larger than what TCP/IP can recover from, as well. They're not too large for streaming protocols or HTTP requests, but that's because the clients for those protocols have been built with long timeouts, because they know the connections are unreliable when considered end-to-end. They are also stateless connections, so they can get away with buffering and resuming after a pause. Jet/ACE is the least stateless application you're ever going to encounter, so it just can't recover from anything other than the smallest disconnects.