auto number

I would have thought it was obvious that the flag file was outside the app.

No, the flag is inside the app. What you have to get your head around is that the app uses additional system resources to regulate access to Access tables for certain ops.

The same thing can be done inside the app by locking a table. This can be combined with the allocation of the key from the same table without the need to use DMax.

It can be, but you are locking the table for any processing. If you look at the code above again, you will see it allows other users to interact with the table. The only thing that is restricted is adding new records for a split second while the key is inserted.

Store the next number as a single record in a field. Open a recordset with dbDenyRead + dbDenyWrite while the number is read and incremented.

You should take a look at this and you may understand why I am not a great fan of recordset locking. The gist of my reluctance is this (text from the link):

Checking for Errors with Recordset Locking
As with opening databases in exclusive mode, setting locks on Recordset objects can cause errors if the lock fails. You should use the four-step process described in the previous section: Turn off error handling, attempt the operation, check for errors and handle any that occur, and finally, turn on error handling.

BTW, you understand, don't you, that you cannot throw the lock when people are using the table ? This is a much greater probability than two people attempting to insert a new record within milliseconds of each other.

Galaxiom said:
There is no need for the user to be aware of the process. Instead of the clunky offer to the user to cancel the update or try again, simply catch the error and try again until you get one allocated.

Ok, finally a reasonable suggestion ! :cool: I should probably set up a counter and run the message only after the number of fails reaches 5, 6 or 10. Since we are talking milliseconds, chances are the user would never see the message.

Best,
Jiri
 

It is a shame that your code does not work 100% of the time.

Just in case I am not the only one who thinks Rain's mostly talking through his hat, here is a simple way to make the possibility of duplicating a program-issued key pretty damn unlikely. The technique does not lock the table but locks out the other users from the key-generating code. The code should be the last section of the form's BeforeUpdate. The I/O file used to lock users out has to exist on a common network share.
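
Since the post describes the technique without showing it, here is a minimal sketch of how such a flag-file lock might look; it is not the actual code referred to above, and the UNC path and the table/field names (tblOrders, OrderNo) are assumptions for illustration. A second user who tries to open the already-locked file gets error 70 (Permission denied), which is caught and retried, and in line with the counter idea mentioned a few posts back, the user is only asked about it after ten consecutive failures.

Code:
' A sketch only, under the assumptions stated above.
Private Sub Form_BeforeUpdate(Cancel As Integer)
    Const LOCK_FILE As String = "\\Server\Share\KeyLock.flg"  ' assumed path; file must already exist
    Dim f As Integer
    Dim tries As Integer

    ' ...validation and any other BeforeUpdate work comes first; as described
    ' above, the lock section is the last thing in the event.

    f = FreeFile
    On Error Resume Next
    Do
        Open LOCK_FILE For Binary Lock Read Write As #f   ' exclusive lock on the flag file
        If Err.Number = 0 Then Exit Do                     ' lock acquired
        Err.Clear                                          ' another user holds it (error 70)
        tries = tries + 1
        If tries >= 10 Then
            If MsgBox("Another user is allocating a number. Keep trying?", _
                      vbRetryCancel) = vbCancel Then
                Cancel = True
                Exit Sub
            End If
            tries = 0
        End If
        DoEvents
    Loop
    On Error GoTo 0

    ' Critical section: only one user at a time can reach this line.
    Me!OrderNo = Nz(DMax("OrderNo", "tblOrders"), 0) + 1

    ' Release the lock. The record itself commits just after BeforeUpdate ends,
    ' which is why the post above calls a collision unlikely rather than impossible.
    Close #f
End Sub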
 
No, the flag is inside the app. What you have to get your head around is that the app uses additional system resources to regulate access to Access tables for certain ops.

Semantics make poor bases for arguments.

It can be, but you are locking the table for any processing. If you look at the code above again, you will see it allows other users to interact with the table. The only thing that is restricted is adding new records for a split second while the key is inserted.

The only table I am locking for processing is the table that holds the single record with the next key to be allocated.

You should take a look at this and you may understand why I am not a great fan of recordset locking. The gist of my reluctance is this (text from the link):

No surprises there. As I said in my previous post, the error is caught and used to determine that the table is locked.

People often distrust what they don't understand.
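
To make the approach being defended here concrete, below is a minimal DAO sketch of the single-record counter table opened with dbDenyRead + dbDenyWrite; it is not the poster's actual code, and the back-end path and the table/field names (tblNextKey, NextKey) are assumptions. As described above, a failed OpenRecordset is simply caught and taken to mean the counter table is momentarily locked by another user.

Code:
' A sketch only, under the assumptions stated above.
Public Function GetNextKey() As Long
    ' In a split application the counter table lives in the back end, and a
    ' table-type recordset cannot be opened on a linked table, so the back end
    ' is opened directly here.
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim tries As Integer

    Set db = DBEngine.OpenDatabase("\\Server\Share\Backend.accdb")

    On Error Resume Next
    Do
        ' dbDenyRead + dbDenyWrite keep every other user out of the counter
        ' table for the instant it takes to read and advance the number.
        Set rs = db.OpenRecordset("tblNextKey", dbOpenTable, dbDenyRead + dbDenyWrite)
        If Err.Number = 0 Then Exit Do       ' got the lock
        Err.Clear                            ' someone else has it; try again
        tries = tries + 1
        DoEvents
    Loop Until tries > 100
    On Error GoTo 0

    If rs Is Nothing Then
        db.Close
        Err.Raise vbObjectError + 513, "GetNextKey", "Counter table is locked."
    End If

    rs.MoveFirst
    GetNextKey = rs!NextKey              ' hand out the current "next" value
    rs.Edit
    rs!NextKey = rs!NextKey + 1          ' advance the counter for the next caller
    rs.Update
    rs.Close                             ' closing the recordset releases the locks
    db.Close
End Function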
 
Looks like others are getting the Solo treatment!

Glad it's not just me:)
 
Solo712.

Before we go any further I think there is something which should be mentioned.

At least one of the moderators of this site thinks that the Watercooler forum should be treated with a great deal of tolerance but that level of tolerance does not apply to the technical forums.

Things such as:-
>>Just in case I am not the only one who thinks Rain's mostly talking through his hat.<<
and:-
>>an all-important point which you don't get<<
and:-
>>do yourself a favour and read a bit on winapi file lock/unlock functions.<<
can be read as both offensive and arrogant. So may I suggest you change your posting methods before some moderator takes action?

-----------------------------

A lot of technical arguments are based on false assumptions.

One of the false assumptions is that the programmer owns, or has the right to modify, the machine on which their program may run. In the business world that assumption is almost always false.

In the business world there are two fundamental differences. The first, and more common, is that we go to work and use our company’s machines. The second, and less common, is that we send our program to a client for it to run on their machine. In both cases, though, our program is not running on our machine.

It therefore becomes a process of least interference with a machine which is owned by someone else. Of course we need to interfere with their machine to some degree, else we could not put our program on their machine, but the interference needs to be minimised.

For example, we should not change regional settings, write things to the registry, change screen resolutions or dump needless files to their machine. Another way to look at it is; how does the user totally cleanup their machine if they want to remove our program?

So it’s not just a matter of installation but also a matter of cleanup if the user wants to do so. Keeping the interference minimised also means keeping the changes to their machine internal to our program. Having external files created only helps to spread the footprint of the installation.

-----------------------------

The DMax() + 1 or DMax() functions may not be necessary; there are other ways to do it. The primary reason for doing any solution is not to return a known value but to return a unique value. The fundamental way to return a unique value is to have the unique value in the table set to no duplicates. It does not matter if the key is single or compound, the same still applies, we let the table decide if the key is in violation of duplicates.

Ultimately, though, the violation of duplicates error is caught in the Form_Error event.
Now there is a lot of stuff written on the www which states we should prevent errors before they happen. While in most cases that may be true, this is not one of them. We could write volumes of code to try and prevent that error, and all of that code may have its own error, but nothing we can write will even come close to the protection offered by the table violation and the error raised by it in the Form trying to do it.

In essence then, the Form_Error event becomes the ‘final frontier’ for the duplicate error. It bypasses all our efforts to prevent the duplicate error and simply reports the failure. How we handle that error is up to us but the fact still remains we will not be able to ignore it.

Now it becomes the way to handle that error…
DMax() + 1 would not generally raise the error and would therefore bypass the Form_Error event. That may be seen as a good thing but is it really? It could work for many years without the need for the Form_Error event protection.

On the other hand DMax(), without the increment, will raise the Form_Error event under all conditions of unique key violation. So the fault condition is raised each and every time the update violates the table unique key. We do not have to wait for an error to occur over time in order to test the error. We create the error in the first place and therefore have only one path of recovery from that error to test.

Again in essence then…
Use the table to create the error of violation of unique key and use the single path of recovery in the Form_Error event. It is only that event which will accurately report a duplicate key violation. The recovery code, whatever it may be, goes in that event.
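
Since no code accompanies the argument, here is a minimal sketch of the pattern being described, with assumed table and field names (tblOrders, OrderNo); it is not ChrisO's actual code. Error 3022 is the engine's duplicate-values-in-index error, and acDataErrContinue suppresses the default message so the recovery code in Form_Error owns the response. The un-incremented DMax() in BeforeUpdate means the unique index rejects the first save of every new record, so the single recovery path is exercised, and therefore tested, every time; after it runs, the record stays dirty and the save is simply attempted again.

Code:
' A sketch only, under the assumptions stated above.
Private Sub Form_BeforeUpdate(Cancel As Integer)
    If Me.NewRecord And IsNull(Me!OrderNo) Then
        Me!OrderNo = Nz(DMax("OrderNo", "tblOrders"), 0)   ' deliberate collision
    End If
End Sub

Private Sub Form_Error(DataErr As Integer, Response As Integer)
    Const conDuplicateKey As Integer = 3022   ' duplicate values in a unique index

    If DataErr = conDuplicateKey Then
        ' The one recovery path: take a fresh value and suppress the default
        ' message; the record stays dirty and is then saved again.
        Me!OrderNo = Nz(DMax("OrderNo", "tblOrders"), 0) + 1
        Response = acDataErrContinue
    Else
        Response = acDataErrDisplay           ' everything else handled as normal
    End If
End Sub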

Chris.
 
Solo said:
No, the flag is inside the app. What you have to get your head around is that the app uses additional system resources to regulate access to Access tables for certain ops.
Semantics make poor bases for arguments.

Your semantics make it clear that you are not in the habit of thinking outside the box.

The only table I am locking for processing is the table that holds the single record with the next key to be allocated.

It's obvious to me that you, in the overwhelming urge to yep back at me, will not consider that the likelihood you will fail to lock a table via recordset on a busy system is at minimum several hundred times greater than the likelihood of having a collision in issuing a unique key. :eek:.

Jiri
 
Looks like others are getting the Solo treatment!

Glad it's not just me:)

I don't know about "Solo treatment" but I do know you have slunk out of a thread without a peep after I demonstrated to you you were wrong. Remember ? Here.

Best,
Jiri
 
Solo712.

>>It's obvious to me that you, in the overwhelming urge to yep back at me,<<

In the next 24 hours, please explain to me what the above line means in post #53.

I’m reasonably sure that Galaxiom could reply for himself but you also have to explain it to me.

Failure to explain correctly to me means 1 or 4 weeks, it’s your choice; no reply means 4 weeks.

Chris.
 
Solo712.
Before we go any further I think there is something which should be mentioned.

At least one of the moderators of this site thinks that the Watercooler forum should be treated with a great deal of tolerance but that level of tolerance does not apply to the technical forums.

Things such as:-
>>Just in case I am not the only one who thinks Rain's mostly talking through his hat.<<
and:-
>>an all-important point which you don't get<<
and:-
>>do yourself a favour and read a bit on winapi file lock/unlock functions.<<
can be read as both offensive and arrogant. So may I suggest you change your posting methods before some moderator takes action?

-----------------------------

I don't mind getting under people's skin, especially if they appear too smug.

A lot of technical arguments are based on false assumptions.

Pray, tell ! :rolleyes:

One of the false assumptions is that the programmer owns, or has the right to modify, the machine on which their program may run. In the business world that assumption is almost always false.

What are you talking about ? I own some of the machines on which I program. And, hell, I will program pretty well any way I please. I will certainly not be constrained by the lack of imagination of some people just because they think of themselves as experts.

In the business world there are two fundamental differences. The first, and more common, is that we go to work and use our company’s machines. The second, and less common, is that we send our program to a client for it to run on their machine. In both cases, though, our program is not running on our machine.

Really ? Never knew that ! And what did you say 'the two fundamental differences' were ? It's kind of hard to figure, you know ? :cool:

It therefore becomes a process of least interference with a machine which is owned by someone else.

What becomes 'a process of least interference with a machine which is owned by someone else ' ? And how does it follow what you said before ? :confused:

Of course we need to interfere with their machine to some degree, else we could not put our program on their machine, but the interference needs to be minimised.

Maybe you should have tried this snotty, witless babble thirty-four years ago when I started with computers. Maybe I would have been impressed, though I doubt it. I am a retired IBMer. Before IBM I wrote commercially/professionally successful programs. Among them was my own DB library built on the Btrieve engine.

For example, we should not change regional settings, write things to the registry, change screen resolutions or dump needless files to their machine. Another way to look at it is; how does the user totally cleanup their machine if they want to remove our program?

So it’s not just a matter of installation but also a matter of cleanup if the user wants to do so. Keeping the interference minimised also means keeping the changes to their machine internal to our program. Having external files created only helps to spread the footprint of the installation.

The only way for me to describe politely what you are saying is: stupid. It is you and your buddy Des who are talking down to people here, so obviously sooner or later you will run into people like me who will talk back to you - especially when they see there is much less there than advertised.

I see no reason why an intelligent Access applications designer would characterize external files the way you do. Yes, they add an element of system maintenance, and that element of course should be weighed against the benefit they provide. But they are system resources, no different than, say, the WinAPI or the registry. Whether they should be used or not is not a matter of some simplistic "you should never", "you must avoid", or some preconceived notions of what is allowed and what is not. It is strictly a question of whether they should be used in preference to something of equivalent effect but lesser cost.

The DMax() + 1 or DMax() functions may not be necessary; there are other ways to do it. The primary reason for doing any solution is not to return a known value but to return a unique value. The fundamental way to return a unique value is to have the unique value in the table set to no duplicates. It does not matter if the key is single or compound, the same still applies, we let the table decide if the key is in violation of duplicates.

I guess you still remember the argument we had about the domain aggregates vs unique compound indexes. Funny, out of the legion that rose with flaming swords against me, no one showed up to defend the idea when similar requests were made afterwards. (lastly here.) Funny also to see you defending methods that are by your own reckoning "not necessary" and contradict your own philosophy.

Ultimately, though, the violation of duplicates error is caught in the Form_Error event. Now there is a lot of stuff written on the www which states we should prevent errors before they happen. While in most cases that may be true, this is not one of them. We could write volumes of code to try and prevent that error, and all of that code may have its own error, but nothing we can write will even come close to the protection offered by the table violation and the error raised by it in the Form trying to do it.

In essence then, the Form_Error event becomes the ‘final frontier’ for the duplicate error. It bypasses all our efforts to prevent the duplicate error and simply reports the failure. How we handle that error is up to us but the fact still remains we will not be able to ignore it.

What you say does not make sense to me. Of course it is better to prevent an error if you can, always better than dealing with the consequences. Always, in everything. This is as much true about looking before stepping into shit, as it is in preventing DB errors which may cause - as the saying goes - an indeterminate state of the machine. To hell with your final frontier. If there is a way of preventing two users contending for table insert at a reasonable maintenance cost, you should do it - period.

Now it becomes the way to handle that error…
DMax() + 1 would not generally raise the error and would therefore bypass the Form_Error event. That may be seen as a good thing but is it really? It could work for many years without the need for the Form_Error event protection.

On the other hand DMax(), without the increment, will raise the Form_Error event under all conditions of unique key violation. So the fault condition is raised each and every time the update violates the table unique key. We do not have to wait for an error to occur over time in order to test the error. We create the error in the first place and therefore have only one path of recovery from that error to test.

This is obscurantist logic which is impossible to follow. Why would I be assigning a DMax() value to a key during an insert ? And you do not need Form_Error with DMax() + 1 unless you want to have your own messages. Did you not just say, let the system handle it ? It handles it without the need for Form_Error. Why do you want to mess with the engine ?

Again in essence then…
Use the table to create the error of violation of unique key and use the single path of recovery in the Form_Error event.
Chris.

Looks like another one who has to wear sunglasses when gazing at his programming navel not to be blinded by the dazzle ! :rolleyes:

(no hard feelings though ! :))

Best,
Jiri
 
Maybe you should have tried this snotty, witless babble thirty-four years ago when I started with computers. Maybe I would have been impressed, though I doubt it. I am a retired IBMer. Before IBM I wrote commercially/professionally successful programs. Among them was my own DB library built on the Btrieve engine.

Ah. This explains why you are using a file to flag a lock. Such things were commonplace many years ago.

It also explains why you rejected composite indexes in the other thread you mentioned. In 1979 one had to minimise the resources in every way possible. (I started programming in 1977).


And since you never learn anything because you never listen to anyone else it is no surprise that you still base your programming techniques on antiquated concepts.

One thing that does surprise me though. Someone with your extensive experience yet you didn't know how an Is Not Null condition affects the results of a query with an Outer Join.:confused:


Moreover, although you actually admitted in that thread that you were wrong, you seem unperturbed that it was your mistake, given the arrogant and abusive way you derided CJ_London when you claimed he was wrong.

I really don't object to people being forthright about their opinions, especially when they back them with sound, up-to-date explanations for their reasons. However you just seem to return inane abuse whenever anyone disagrees with you.

Personally I look forward to you being suspended from this site. (That is what ChrisO was intimating earlier.)
 
@Solo

Code:
I do know you have slunk out of a thread

Reason is you were spending all your time prevaricating and not producing an answer, and I did advise that that is what I was going to do - I notice you have not posted the whole thread.

I want to spend my time helping the original poster, not getting into an argument. This thread is now post #57? The last time the original poster posted was #40 - way too much for a relatively simple request.

So, since I want to get on with helping others, I will also be 'slinking off' this one
 
@Solo

Code:
I do know you have slunk out of a thread

Reason is you were spending all your time prevaricating and not producing an answer, and I did advise that that is what I was going to do - I notice you have not posted the whole thread.

I want to spend my time helping the original poster, not getting into an argument. This thread is now post #57? The last time the original poster posted was #40 - way too much for a relatively simple request.

So, since I want to get on with helping others, I will also be 'slinking off' this one

Anyone who cares to see which one of us is "prevaricating" has only to open the link in the post #54 above and see for himself/herself.

Incidentally, for the record, it was not I who discouraged the original poster with insulting comments.

Jiri
 
Ah. This explains why you are using a file to flag a lock. Such things were commonplace many years ago.

It also explains why you rejected composite indexes in the other thread you mentioned. In 1979 one had to minimise the resources in every way possible. (I started programming in 1977).

FYI, the last time I used the Btrieve engine (in Pervasive PSQL) was 2006.

And since you never learn anything because you never listen to anyone else it is no surprise that you still base your programming techniques on antiquated concepts.

I would not listen to someone who proposes to throw recordset locks on busy systems when inserting a program-defined key, that's for sure.

One thing that does surprise me though. Someone with your extensive experience yet you didn't know how an Is Not Null condition affects the results of a query with an Outer Join.:confused:

I owned up to it - misreading "Left" for "Inner". It's interesting that you should find fault with that even though my solution worked and CJ_London's, who accuses me of "prevaricating", didn't.

Moreover, although you actually admitted in that thread that you were wrong, you seem unperturbed that it was your mistake, given the arrogant and abusive way you derided CJ_London when you claimed he was wrong.

A join was a dumb choice either way, and since you saw the code, you know it !

I really don't object to people being forthright about their opinions, especially when they back them with sound, up-to-date explanations for their reasons. However you just seem to return inane abuse whenever anyone disagrees with you.

Look in the mirror, bud !

Jiri
 
How about we all walk away from this thread as mindless bickering doesn't help the original poster and certainly doesn't resolve the issue. Obviously the majority of the forum will never agree with Solo on primary key/autonumber functionality as it seems to be a recurring theme. Even so, the bickering only leaves the thread starter confused. I don't blame that person for staying away once that started.

Solo, I suggest you treat anyone here as if they are capable of firing you. Would you have said half of the things you said on this thread and others to your boss? If you want to burn off steam about this, take it to the Watercooler. I'm sure Galaxiom and Rain will be more than happy to continue the discussion there away from the help forum. At least in there, you have to be pretty offensive to be threatened with suspension.
 
Solo712.

You are banned until the 12-May-2013 or until someone else with the authority lifts this ban.

In the future you will be welcome, indeed encouraged, to contribute a different opinion to others. But please consider the language you use, and where you use it, else you will have no future on this site.

Chris.
 
gizmogeek please refrain from deleting threads to which others have contributed, even if you do feel that some of those contributions are counterproductive.
 
Okay just to clear things up. The moderator wrote in that thread that it should be deleted and a new one started. A few others on your side of the forum mentioned it too, so it was deleted.
Thank you!

Does anyone have any insight into this problem of the "auto number"?
 
I think to summarise where we were, you required a way of determining the next number to be assigned as a PO number or similar - like autonumber, but not using the autonumber field type because that was already being used as a record ID.

The overriding solution offered was to use Max(Orderno)+1, either as a DMax function or a subquery. The debate then extended to when that should be applied, where again the overall recommendation was to apply it in the BeforeUpdate event of the form.
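
As a minimal sketch of that recommendation, assuming a table tblOrders with the Orderno field referred to above (an illustration of where the assignment sits, nothing more):

Code:
Private Sub Form_BeforeUpdate(Cancel As Integer)
    ' Allocate the next number as late as possible, just before the record is
    ' saved, to keep the window for two users colliding as small as possible.
    If Me.NewRecord And IsNull(Me!Orderno) Then
        Me!Orderno = Nz(DMax("Orderno", "tblOrders"), 0) + 1
    End If
End Sub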

There was also some debate as to what else needed to be covered in a multi user environment to ensure two separate users did not assign the same number due to an unfortunate timing coincidence.

As to whether a definitive solution was finally provided - I have no idea:)
 
I think that the original poster has not answered the question as to why it is required.

Why is it required to have the next Purchase Order Number prior to the Purchase Order itself being created?

If we can get an answer to that question we may be in a better position to help.

Chris.
 
Why is it required to have the next Purchase Order Number prior to the Purchase Order itself being created?
It isn't! When a new PO is created it needs to be assigned a number - that would be the last created PO number+1 (probably my grammar!)
 
