Solved Sequential numbers

Just out of curiosity.
What would you choose instead of Access if you were free to choose?
I used Paradox when it was still DOS based and loved it. I haven't used it since 1995 so I'm not sure if it is even a good product anymore. To tell the truth, I love Access. The code behind the forms paradigm took some getting used to, but once the light bulb lit up, I have never looked back.
 
I'm still waiting for your brilliant solution to any of this.
Don't have one that hasn't already been provided, and even if I did, it would be completely lost on you.

By your own admission, Access is incapable of providing what you need, so why keep trying to pound that square peg into a round hole?

Why not simply go with what you THINK will work, join an applicable forum and make them sorry they accepted your request to join?
 
If sequential numbers are so bad, then why is practically everyone using them as a primary key?
Who says they are bad? The point of a primary key is that it is unique to a record. Nobody cares (or shouldn't) whether they are sequential or not, or whether there are gaps. What's bad is thinking they must be sequential.
 
I hope you forgive a stupid question. I'm trying hard to understand what you're planning to do.
If you don't have a good internet connection, how do you plan to sync your main server with your multiple back ends?
Saving to DVD and sending them by post?
Or waiting until the internet is back? In that case, what will happen while your multiple BEs are not synced? Because, as you say, it may take a week or more to replace the cables.

And another question, if you don't mind.
Why do you use $ instead of S in MS? I never understood why people who don't like Microsoft keep using its products.
Okay, because you asked. And nicely I might add--thank you for that.
1. Well, everything was done at one point by post. I survived on that for many years and 1/2" tape.
2. Ideally, with a good Internet connection synchronization can be close to immediate with the right primary key available for doing it. I've tested composite keys, and they're not a good solution based on my testing, for my application.
3. That's the whole point of having a BE at each remote site. They can keep working with what they've got until the WAN comes back.
4. I had Apple as a client. I don't buy any Apple products. If you think I appear arrogant here, you never sat in a meeting with Mr. Jobs. B. Gates wasn't quite so arrogant, he's a smart guy, but still "borrowed" a lot of code from others without their permission -- like from my Xerox team. I've got friends that have worked at IBM for decades, I hear stories. I'm not a fan of Google either, with their business and hiring practices. Sometimes we just have to decide on the lesser of two evils, or we have to go with what 90% of the desktops world-wide have. It's $ because that's what M$ and Bill wanted as they practiced anti-competitive behaviors that buried otherwise good, maybe even better technologies. Having worked at Ma Bell, I understand quite well that business mindset that needs some government oversight.
5. I'm not an Access hater, far from it. There's a lot of cool stuff there, even thirty years after its initial release, that still makes me go WOW! It's just that when I want to help my friend with their business problem, I'm hitting some serious walls with how closed Access is, how poor the documentation is, and how it hasn't been tended to. Most of my questions on this forum have been about getting around those limitations.
 
Don't have one that hasn't already been provided, and even if I did, it would be completely lost on you.

By your own admission, Access is incapable of providing what you need, so why keep trying to pound that square peg into a round hole?

Why not simply go with what you THINK will work, join an applicable forum and make them sorry they accepted your request to join?
I never said Access won't work. I have said it's a bit difficult to find a solution to some issues I face.
Okay, since you don't have an answer, and don't want to give one, which post number here solves the problem?
I'm looking into two, from others, that look promising (from two of the moderators), even if you think they are square.
Why don't you just look the other way when you see one of my questions? I know, without offering any real advice, you just enjoy being a side-line jerk. We all had to deal with your type in high school, all cocky and no substance, that made everyone else's life hell. We could just ask your brother how it was living with you oh round one, no? Too bad that chief didn't knock some better sense into you.
 
Why don't you just look the other way when you see one of my questions?
Done, and others will do the same as most have already done, or have you even noticed?

And when you wonder why no one will engage you, you can bookmark this thread for easy reference. Best of luck to you.
 
Done, and others will do the same as most have already done, or have you even noticed?

And when you wonder why no one will engage you, you can bookmark this thread for easy reference. Best of luck to you.
Wow, if what you say is true, then how did my question get to four pages? Even if two of them are your snotty comments.
As I said, two of the comments from moderators look promising. I guess you didn't notice that.
Now, if you'll excuse me, I'll get back to coding a solution to my friend's business needs, and the legal brief I need for tomorrow.
 
Who says they are bad? The point of a primary key is that it is unique to a record. Nobody cares (or shouldn't) whether they are sequential or not, or whether there are gaps. What's bad is thinking they must be sequential.
I can't see the post right now, but Nautical Nut Guy (who my post was directed at) said I'm trying to push a square peg into a round hole using sequential numbers (contrary to some of the posts by others and what M$ does with autonumbers). I never said primary keys have to be sequential. In fact, at the moment I'm using (your advice, I think it was) random ANPKs to avoid duplicates between two test systems. I did originally say I would like to get a sequential number into a reference field (not the PK) as a default, without having to scan the entire table for a DMax value. I see no problem with that question.
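For what it's worth, a minimal sketch of the counter-record idea (keeping the next reference number in its own one-row-per-name record instead of DMax-ing the whole table), using SQLite as a stand-in for the back end. The table and counter names here are made up for illustration:

```python
import sqlite3

def next_ref_number(conn, counter_name="CLAIM_REF"):
    """Atomically take the next value from a named counter row.

    This avoids a DMax-style scan of the big table: the next number
    is read and bumped in one transaction against a tiny counter table.
    """
    with conn:  # sqlite3 runs the reads and the bump as one transaction
        conn.execute(
            "INSERT INTO counters(name, next_val) VALUES (?, 1) "
            "ON CONFLICT(name) DO NOTHING",
            (counter_name,),
        )
        (value,) = conn.execute(
            "SELECT next_val FROM counters WHERE name = ?", (counter_name,)
        ).fetchone()
        conn.execute(
            "UPDATE counters SET next_val = next_val + 1 WHERE name = ?",
            (counter_name,),
        )
    return value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters(name TEXT PRIMARY KEY, next_val INTEGER)")
print(next_ref_number(conn), next_ref_number(conn))  # 1 2
```

Because the read and the increment happen inside one transaction, two concurrent users can't be handed the same number; in Jet/Access you'd need a pessimistic record lock to get the same guarantee.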
 
OK, I'll tell you how the Navy did this. I worked at the Navy Enterprise Data Center New Orleans for many years, and in fact retired from them.

We found a unique ID for our servicemen - Navy reservists - that wasn't the Social Security Number but was instead a service ID number that was unique and consistent among all Dept. of Defense services. My machine that served the U.S. Navy Reserve Personnel management operations talked to 18 different servers in perhaps 8-10 cities. We were not able to guarantee continuous connection because Wide Area Networks just aren't stable enough for real-time updates. So each of the 18 agencies built "transaction tapes" (which initially WERE tapes until the networks got better) to show what was being done for each service person.

We would "cut a tape" (build a transmittal file) that used the uniform DoD personnel identifier to show transactions. We had exchanged detailed record specifications for each type of transaction. As the networks improved, we developed a schedule of FTP or SFTP transmission times. We used the FTP family because as it happens, FTP is fault-tolerant. It sends several buffers in sequence and reassembles larger blocks of data at the other end. IF there is a drop-out, FTP's particular protocol allows retransmission of a damaged block. Not all TCP/IP methods do that. Our operations center would receive a transaction file and start a background job that would process the transactions to do updates into our tables. We would NEVER send the raw data from the tables. We ALWAYS couched it as a transaction.

The key to sanity is that each agency was the "official" source for something. BUPERS (Bureau of Naval Personnel) was the official source for any transaction whereby a reservist joined the USNR or separated from it. My own system was the "official" source of certain personnel transfers between Navy units and was also the source for mobilization and demobilization orders. DFAS (Defense Finance & Accounting Services) was the source for pay & expenses disbursement. Another group handled things like clothing allowances for military uniforms. Each of the other agencies had their thing. But here is how this was the key to sanity. You could send those transactions TO any server - but only the "official" source could send certain types of transactions. Any other source's transmissions were either ignored or stored with a flag.

SO if you needed to add someone to the members of a Navy Reserve unit, it had to come from BUPERS, never mind the fact that the guy you wanted to add was standing before the Personnel Chief Petty Officer at NAS Whidbey Island in Washington state. Oh, the chief could enter a request TO BUPERS for that person to be approved - but until the transaction came out of BUPERS, that person was not a reservist.

The big takeaway from all of this is ... a distributed system is never going to work well over a network of limited reliability unless you have a way to play traffic cop. The best way to play traffic cop is to designate a "definitive authority" that has to be the source of some particular type of transaction. If the transaction comes from anywhere else, it is not definitive and either is ignored or flagged as non-definitive. Once you have defined the "official authority" then that server is responsible for the official ID codes.
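That "definitive authority" rule fits in a few lines of code. A hedged illustration follows; the agency names and transaction types are invented stand-ins, not the real DoD codes:

```python
# Which source is the definitive authority for each transaction type.
# These mappings are made up for illustration.
AUTHORITY = {
    "ENLIST": "BUPERS",
    "TRANSFER": "NEDC",
    "PAY": "DFAS",
}

def apply_transactions(incoming, table):
    """Apply only transactions sent by their definitive authority.

    Anything from a non-authoritative source is kept but flagged,
    mirroring the 'ignored or stored with a flag' rule above.
    """
    flagged = []
    for txn in incoming:
        if AUTHORITY.get(txn["type"]) == txn["source"]:
            table[txn["member_id"]] = txn["payload"]  # definitive: apply it
        else:
            flagged.append({**txn, "flag": "non-definitive"})
    return flagged

table = {}
flagged = apply_transactions(
    [{"type": "ENLIST", "source": "BUPERS", "member_id": "A1", "payload": "active"},
     {"type": "ENLIST", "source": "NEDC", "member_id": "B2", "payload": "active"}],
    table,
)
# Only the BUPERS-sourced ENLIST is applied; the other is flagged.
```

The design point is that conflict resolution is decided by ownership of the transaction type, not by timestamps or merge rules, which is what keeps an unreliable network from corrupting the official record.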

I don't know if this will help you or not, but it is the closest experience I have to your multi-BE situation.
 
Since we are talking history, I started off with a single Apple IIe using UCSD pascal. A whole 8k of memory, no mouse, 2 x 140k floppy drives and a 40 x 21 char dimensioned monitor. Back then they were called micro computers and there were no programs to buy. My first project was to create a costing application so I learned how to create tables and when sequential searching became too slow, how to create indexes for faster retrieval. The 'program' sat on one floppy drive, the data on the other.

Once that was done, it was extended to provide production planning, stock control, waste management, purchase orders, despatch notes, bonus calculations and more. But we still only had the one computer, so I developed in the morning and at lunchtime the computer was taken to the production office for them to use the other programs - I would collect it again the following morning. This became inconvenient, so a second computer was purchased - and when there were updates to the app, I just copied the disc and swapped the copy for the old program disk on the production machine. Data used the same process and went the other way.

Then hard drives came along. We purchased a 40mb Corvus hard drive and were able to connect both computers to it, so now we had the program on the floppy drive and data on the hard drive. Apple became aware of what I was doing and it turned out I was one of the first people to do this in a commercial environment. I did meet Steve Jobs briefly when he was in the UK and I was asked if I would be prepared to go round the UK, talking to business groups about my experiences - primarily comparing the micro network over mini computers - advantages and disadvantages, tips and tricks. I did that for a year or so, but since I was training to be an accountant, I had to give it up as it was taking too much time out of my study time.

My network grew to 9 machines across the company before I left, and my next two jobs, although finance orientated, included replacing mini computers with micro networks and writing 'modern' ERP systems. The last of these was when I discovered Access.
 
Since we are talking history, I started off with a single Apple IIe using UCSD pascal. ...

My network grew to 9 machines across the company before I left, and my next two jobs, although finance orientated, included replacing mini computers with micro networks and writing 'modern' ERP systems. The last of these was when I discovered Access.
Cool, and lucky you; my first experiences were with a teletype machine and paper tape, onto which you had to type the program and then run it through the reader to execute on the remote system. I then got "lucky" with assembly, Fortran and COBOL on punch cards - I'm guessing about 15 years before the Apple IIe appeared. Later, I got into Trash-80s and the Commodore PET for options arbitrage analysis. And then we got unlucky with the network paradigm, which indeed distracts us from the simple task of coding an application.

And since we're on memory lane, here's the 30+ year-old code (actually older; this was an English-only "class procedure" for one newer form) to create autonumbers for one transaction table (we called them files back then) and a "sub-form" with 79 fields for a transaction I had in my "sailing-ship", kinda where this thread started. This is a subroutine that is analogous to an Access Class Object for a form, with all the logic to supplement the form (lots of error checking and foreign table updates):

Code:
SUBROUTINE Z980CATRA.S
* Copyright (c) 1986-2020 Advanced Data Business Systems, unpublished. All rights reserved.
* ADJUSTMENT TRANSACTION DEU SUBROUTINE
* CATALOG
* TWG 10/07/92
      GOSUB 99999
      INCLUDE INC.BP 999ADSCS1
      INCLUDE INC.BP 999ADSDEUCV
*
**    EQUATES
*
*
      EQU F.CATR TO FILES(11), F.CATRD TO FILES(12), F.CATRC TO FILES(13)
      EQU F.CATAB TO FILES(15), F.CACHD TO FILES(16), F.CATRHD TO FILES(17), F.CACUST TO FILES(18)
      EQU F.CABTCH TO FILES(24), F.CAFUND TO FILES(27)
*
      EQU CACUST TO PC(3)
**    EQUATE CATR FILE
      EQU CATR$CLM TO D(2), CATR$CUST TO D(3)
      EQU CATR$PERN TO D(5), CATR$BTCH TO D(6), CATR$PS TO D(7), CATR$ERR TO D(8)
      EQU CATR$UPDT TO D(9), CATR$WRK1 TO D(10), CATR$WRK2 TO D(11), CATR$TYPE TO D(12)
...
      EQU CATR$APPAMT TO D(61), CATR$DENAMT TO D(62), CATR$RESAMT TO D(63)
      EQU CATR$DENCD TO D(74), CATR$DENTEXT TO D(75)
      EQU CATR$ICMNT TO D(78), CATR$ECMNT TO D(79)
...
      ON TYPPR GOTO 100, 200, 300, 400, 500, 600, 700, 800, 900
100   ** AT PROGRAM ENTRY (E)
!     READ IN THE PERIOD NUMBER
      READ P(2) FROM F.CATAB, PCONT$COCD:'AOPT' ELSE P(2) =''
      CATAB$CPERN.AOPT = P(2)<10>
      IF CATAB$CPERN.AOPT EQ '' THEN PRINT 'CURRENT PERIOD NUMBER NOT SET'; STOP
      READ P(5) FROM F.CATAB, PCONT$COCD:'GRPD' ELSE P(5) =''
199   RETURN
200   ** BEFORE INPUT PROCESSING (B)
      BEGIN CASE
      CASE ATTR EQ 0; * ID.CATR
        ID = ID 'R(%6)'; IDF = ID
      END CASE
299   RETURN
300   ** COMPUTATION SUBROUTINE (=)
399   RETURN
400   ** AFTER INPUT SUBROUTINE (A)
      BEGIN CASE
      CASE ATTR EQ 0; * ID.CATR
        IF NOT(NFLAG) THEN
          READ CACUST FROM F.CACUST, CATR$CUST ELSE ERRM = 'CUSTOMER ITEM NOT FOUND'; BEFLAG =1; GOTO 499
        END
        IF CATR$TYPE EQ '' THEN CATR$TYPE = P(1)<1,1>
        IF CATR$TYPE NE P(1)<1,1> THEN ERRM = 'TRAN TYPE MUST BE A ': P(1)<1,1>: ' TO USE THIS SCREEN'; BEFLAG =1; GOTO 499
        IF CATR$PS GE 16 THEN ERRM = 'TRANSACTION MAY NOT BE ENTERED AGAINST AFTER THE UPDATE'; BEFLAG =1; GOTO 499
!       SAVE THE ORIGINAL AMOUNTS IN CASE THEY CHANGE
        FOR LCNT = 2 TO 21
          PC(1)<LCNT> = D(LCNT +50)
        NEXT LCNT
        IF CATR$PRTY EQ '' THEN CATR$PRTY =1
      CASE ATTR EQ 3; * CATR$CUST
  ...
600   ** FILE PROCESSING (F)
      IF TYPPR NE 6 THEN RETURN
      IF CATR$PERN EQ '' THEN CATAB$CPERN.AOPT = P(2)<10>; CATR$PERN = CATAB$CPERN.AOPT
      IF CATR$BTCH EQ '' THEN
        IF P(4) EQ '' THEN
          READVU BTCH FROM F.CATAB, 'CATR.NID', 3 ELSE BTCH =1
          LOOP
            PRINT @(0,23): TT$CEL: 'THE NEXT BATCH NUMBER IS ': BTCH: '. ENTER THIS OR A PREVIOUS BATCH NUMBER ':
            INPUT NBTCH:
          UNTIL NBTCH NE '' AND NBTCH MATCH '0N' AND NBTCH LE BTCH DO REPEAT
          IF NBTCH EQ BTCH THEN WRITEV BTCH +1 ON F.CATAB, 'CATR.NID', 3 ELSE RELEASE F.CATAB, 'CATR.NID'
          P(4) = NBTCH
        END
        CATR$BTCH = P(4)
      END
!     UPDATE THE AMOUNTS IN THE CATRHD FILE
   ...
!     NO ERRORS ALLOWED AFTER THIS
      IF NFLAG THEN
        READVU ID FROM F.CATAB, 'CATR.NID', 2 ELSE ID =1
        FFLAG =1
        LOOP
          ID = ID 'R(%6)'
          READV TEMP FROM F.CATR, ID, 1 ELSE READV TEMP FROM F.CATRD, ID, 1 ELSE READV TEMP FROM F.CATRC, ID, 1 ELSE FFLAG =0
        UNTIL NOT(FFLAG) DO ID = ID +1 REPEAT
        IDF = ID; CATR$ASSTR = ID; ID = ID +1
        WRITEV ID ON F.CATAB, 'CATR.NID', 2
  ...
899   RETURN
900   ** COMPLETION PROCESSING (X)
999   RETURN
99999 PROG = 'CA.BP 980CATRA.S 01-06-95 01:49PM JUST 011 TWG   11012 243'; RETURN
      END

I snipped out a bunch that wasn't necessary, but upon review I see that there are two autonumbers being generated, one for a batch number (like my original question) and one for the transaction ID (where this thread degenerated). As originally posted, it's quite trivial and needs about six lines of code (some of the orange code isn't specific to the autonumber assignment). I also had a generalized "autonumber" routine, but that one would be too esoteric, with indirect references, to post here. The mainline form program (kinda like Access Form run time code that you never see) is responsible for the modification of the transaction ID for synchronization. Because this particular application ran in a service bureau (cloud style) environment, the synching software and design isn't enabled. The idea was to show that "autonumbers" aren't anything new or complicated.

So, for the haters, you can see (if you look and comprehend) that I wasn't doing anything so different in my "sailing-ship" from what Access does today. They are both the same shape peg, just made from different material.

P.S. Sorry, I see after posting that the site removes the orange highlighting I placed in the code sections. Just look in 600 area for READV (It's like a DLookup only about 100x faster, more powerful and direct).

P.P.S. For those that see GOTOs ("OMG, he's got GOTOs!"), they do functionally the same as an Access Exit Sub (but with wrap-up). Relax. Notice how the language doesn't need On Error GoTo? That's because errors are handled with "then and else clauses" in the fundamental commands that could have an error. Oh, my poor, old sailing-ship was so antiquated compared to the modern motorized vehicle that Access is. Ha ha ha. Now, analyze the LOOP syntax and tell me that 50-year-old design isn't better than what Access has today.
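For readers who don't speak Pick BASIC, the autonumber logic in the 600 section boils down to: lock the counter record, probe until an unused ID is found, write the incremented counter back. A rough Python analogue of that pattern, with the names invented and a threading lock standing in for the READVU record lock:

```python
import threading

class RecordCounter:
    """Sketch of the READVU ... WRITEV pattern from section 600:
    lock the counter, skip IDs that already exist, write back the bump.
    """

    def __init__(self, existing_ids):
        self._lock = threading.Lock()      # stands in for the record lock
        self._next = 1                     # the stored counter value
        self._existing = set(existing_ids) # IDs already present in the files

    def next_id(self, width=6):
        with self._lock:
            # Probe forward past any ID that is already taken,
            # like the READV loop against the three transaction files.
            while f"{self._next:0{width}d}" in self._existing:
                self._next += 1
            new_id = f"{self._next:0{width}d}"
            self._existing.add(new_id)
            self._next += 1  # the WRITEV of the incremented counter
            return new_id

counter = RecordCounter(existing_ids={"000001"})
print(counter.next_id())  # 000002 - 000001 was already taken
```

The lock is what makes it safe with multiple users: nobody else can read the counter between the probe and the write-back, which is exactly what the record-level READVU lock bought the original code.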
 
And since we're on memory lane, here's the 30+ year-old code (actually older; this was an English-only "class procedure" for one newer form) to create autonumbers for one transaction table (we called them files back then) and a "sub-form" with 79 fields for a transaction I had in my "sailing-ship", kinda where this thread started. This is a subroutine that is analogous to an Access Class Object for a form, with all the logic to supplement the form (lots of error checking and foreign table updates):
Running that code (admittedly it makes no sense to me) versus setting a datatype to autonumber doesn't seem like a difficult choice.
 
OK, I'll tell you how the Navy did this. I worked at the Navy Enterprise Data Center New Orleans for many years, and in fact retired from them.

We found a unique ID for our servicemen - Navy reservists - that wasn't the Social Security Number but was instead a service ID number that was unique and consistent among all Dept. of Defense services. My machine that served the U.S. Navy Reserve Personnel management operations talked to 18 different servers in perhaps 8-10 cities. We were not able to guarantee continuous connection because Wide Area Networks just aren't stable enough for real-time updates. So each of the 18 agencies built "transaction tapes" (which initially WERE tapes until the networks got better) to show what was being done for each service person.

We would "cut a tape" (build a transmittal file) that used the uniform DoD personnel identifier to show transactions. We had exchanged detailed record specifications for each type of transaction. As the networks improved, we developed a schedule of FTP or SFTP transmission times. We used the FTP family because as it happens, FTP is fault-tolerant. It sends several buffers in sequence and reassembles larger blocks of data at the other end. IF there is a drop-out, FTP's particular protocol allows retransmission of a damaged block. Not all TCP/IP methods do that. Our operations center would receive a transaction file and start a background job that would process the transactions to do updates into our tables. We would NEVER send the raw data from the tables. We ALWAYS couched it as a transaction.

The key to sanity is that each agency was the "official" source for something. BUPERS (Bureau of Naval Personnel) was the official source for any transaction whereby a reservist joined the USNR or separated from it. My own system was the "official" source of certain personnel transfers between Navy units and was also the source for mobilization and demobilization orders. DFAS (Defense Finance & Accounting Services) was the source for pay & expenses disbursement. Another group handled things like clothing allowances for military uniforms. Each of the other agencies had their thing. But here is how this was the key to sanity. You could send those transactions TO any server - but only the "official" source could send certain types of transactions. Any other source's transmissions were either ignored or stored with a flag.

SO if you needed to add someone to the members of a Navy Reserve unit, it had to come from BUPERS, never mind the fact that the guy you wanted to add was standing before the Personnel Chief Petty Officer at NAS Whidbey Island in Washington state. Oh, the chief could enter a request TO BUPERS for that person to be approved - but until the transaction came out of BUPERS, that person was not a reservist.

The big takeaway from all of this is ... a distributed system is never going to work well over a network of limited reliability unless you have a way to play traffic cop. The best way to play traffic cop is to designate a "definitive authority" that has to be the source of some particular type of transaction. If the transaction comes from anywhere else, it is not definitive and either is ignored or flagged as non-definitive. Once you have defined the "official authority" then that server is responsible for the official ID codes.

I don't know if this will help you or not, but it is the closest experience I have to your multi-BE situation.
Gotcha and my experience was similar. Basically, I had a formalized way to transmit requests for change to the "BUPERS" and then transfigure, error check, report errors back, and update the primary table (file) with valid record change requests. Only it worked with dozens of clients against a half-dozen tables for each client. It migrated from tape to automated FTP to email attachments (if the client didn't have FTP available).

Here's the menu driver record for some of the file transfers:
(Just look at the numbered lines and the description for each menu job; the rest is just documentation and control codes for the driver program.)

Code:
TWG!07/27/90
* Copyright (c) 1986-2020 Advanced Data Business Systems, unpublished. All rights reserved.
CLIENT INTERFACE MENU
CA
M
CAMENUCNTL
CATBLDUL.MýCACUSTUL.MýCACUSTFUL.MýCACOMUL.MýCAACCUL.MýCAAUGUL.MýCACLMUL.MýCACCRED.MýýýýýCAEDITA.MýUTSRDL.TýUTSRUL.TýUTTULD
1ý5ý25
1ýTABLE DESCRIPTION IMPORT MENU
2ýCUSTOMER IMPORT MENU
3ýCUSTOMER FUND IMPORT MENU
4ýCOMMITMENT IMPORT MENU
5ýACCRUAL IMPORT MENU
6ýAUGMENT IMPORT MENU
7ýCLAIM IMPORT MENU
8ýCLAIM PAYMENT MENU
13ýEDI TRANSFER MENU
14ýSPECIAL REQUEST EXPORT
15ýSPECIAL REQUEST IMPORT
16ýTAPE UNLOAD AND DETACH

You can see a throwback to tape operations for clients that hadn't migrated to Electronic Data Interchange (EDI).
And then here's just one sub-menu and all its options:

Code:
TWG!03/01/00
* Copyright (c) 1986-2020 Advanced Data Business Systems, unpublished. All rights reserved.
CUSTOMER FUND IMPORT MENU
CA
M
MENUCNTL
CACUSTFNUL.SýCACUSTFNDL.TýCACUSTFUT.UýCACUSTFUC.UýCACUSTFULC.UýCACUSTFULC.RýCACUSTFULC.SýCACUSTFANA.UýCACUSTFULF.UýCACUSTFPUL.U
1ý5ý25
1ýNEW CUSTOMER FUNDS TO LOAD SCREEN
2ýEXPORT NEEDED CUSTOMER FUNDS
3ýCUSTOMER FUND IMPORT TRANSFER
4ýCUSTOMER FUND IMPORT CONVERT
5ýCOMPARE CUSTOMER FUND DATA
6ýCUSTOMER FUND COMPARE ERROR REPORT
7ýCUSTOMER FUND COMPARE ERROR SCREEN
8ýAUTO OK CUSTOMER FUND CHANGES
9ýLOAD FIXED CUSTOMER FUND CHANGES
10ýPURGE CUSTOMER FUND IMPORT

A Screen in that system is the same as a Form in Access.
BTW, one of my projects is to rewrite the big menu driver program, which had security, access control, etc., in Access so I don't have to use anemic switchboards. Menus have submenus and often go ten or twelve levels deep in an application.
 
Running that code (admittedly it makes no sense to me) versus setting a datatype to autonumber doesn't seem like a difficult choice.
Except the relevant code is 5 lines or less, and the main form program can modify, prefix or suffix an autonumber to do the kind of BE stuff I've been talking about. When I say Access is closed, I mean you have to use autonumbers their way or no way.
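What "prefix or suffix an autonumber" means in practice takes only a couple of lines. A sketch with invented site codes: each back end keeps its own local sequence, and the site prefix guarantees the composed IDs never collide across BEs:

```python
def make_be_id(site_code, seq, width=6):
    """Compose a back-end record ID: site prefix + zero-padded local sequence.

    Each site increments its own counter independently; the prefix keeps
    records from two sites from ever colliding when the BEs are synced.
    Site codes here are hypothetical.
    """
    return f"{site_code}-{seq:0{width}d}"

print(make_be_id("NOLA", 42))  # NOLA-000042
print(make_be_id("WHID", 42))  # WHID-000042 - same sequence, no clash
```

This is the piece Access's built-in autonumber doesn't expose: the composed value has to live in an ordinary text field populated by your own code, since the Autonumber data type can't be decorated.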
 
Just look in 600 area for READV (It's like a DLookup only about 100x faster, more powerful and direct).
I don't know how you would know this is correct, given your self-confessed deficit of knowledge of Access.
Domain functions (DMax, DCount etc.) are optimised and will use an index if it's present. They are efficient. Just don't use one in a query returning a large dataset.

It's coming out with this type of comment that makes your posts very disingenuous.

You keep comparing Access to programming constructs from 40 years ago, and it's a different tool.
(And I learnt to code in machine code, so might have a clue...)

When I say Access is closed, I mean you have to use autonumbers their way or noway.
That's because that's how they work, and because of what they are specifically designed to do.
Take a look at a SQL Server identity field, it has very similar constraints and again is designed for use in a very specific purpose.
 
I don't know how you would know this is correct, given your self-confessed deficit of knowledge of Access.
Domain functions (DMax, DCount etc.) are optimised and will use an index if it's present. They are efficient. Just don't use one in a query returning a large dataset.
...
That's because that's how they work, and because of what they are specifically designed to do.
Take a look at a SQL Server identity field, it has very similar constraints and again is designed for use in a very specific purpose.
Because I've done tests on DLookup, DMax and DCount in Access?
Look at the READV with nested ELSE clauses. One line of code. How many SQL statements will that need?
My plan is to avoid SQL Server, because it brings very little to the game for my user's needs.
As to the comparisons, I addressed that at the bottom of post #71.
 
I would also point out that for determining the number of records in a table, i.e. DCount("*","myTable"), the value is actually stored in the TableDef:

CurrentDb.TableDefs("myTable").RecordCount

My plan is to avoid SQL Server, because it brings very little to the game for my user's needs.
SQL Server Express and MySQL are free and can be installed on any hard drive/shared device. In fact, many RDBMSs have free versions. You can still use Access as the front end.

Benefits include the ability to use more sophisticated SQL commands, being less prone to corruption due to outages (apparently a problem for you), greater capacity and higher security - they may even help address your replication requirements if you are not going to use the inbuilt functionality. Performance-wise, there's no real benefit unless you require things like recursive functionality, which in Access needs a mixture of SQL and VBA and is definitely slower for any reasonably sized dataset.
 
