Incremental Sequential Numbering by Year via Calculated Field

stormwaterguru

I think I've seen some advisement to not use calculated fields directly in tables. However, I'd still like to know if there is a way to do the following:

Assign incremental sequential number by Year, i.e. YYYY-DDMM-xxx (xxx being the incremental number within the Year the entry occurs)

The reason I would like to have it calculated in the table is that the data may be imported rather than entered via an Access form.

From what I can tell, DCount() would be the way to establish the incremental sequence by year. Since DCount() is not usable in a table's calculated field, I don't see a method to do this. It seems like the best I could do is pull the date formatted as above and tack on the AutoNumber ID to ensure it remains unique.

Other ideas to achieve the ideal formatting seem to sit outside the table level, using a query or report. Would this be correct?
 
The advice is to not have a field storing more than one value. Better to have it as a separate 'counter' field and calculate it when the record is saved. The date part would be the date formatted as required, to which you can concatenate the counter field when required.

As far as importing in, import via an append query to your table rather than directly to the table.

And I have to ask what you see as the benefit of what you want to do. You would have a text field, which will be slower to search/filter than a numeric field, and even assuming you could use a calculated field, calculated fields can't be indexed.
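The "calculate it when the record is saved" approach might look like the sketch below, in a form's BeforeUpdate event. All names here (tblEntries, EntryDate, SeqNum) are assumptions, not the OP's actual schema:

```vba
' Sketch: assign the per-year counter when the record is saved.
' Table/field names are hypothetical.
Private Sub Form_BeforeUpdate(Cancel As Integer)
    If Me.NewRecord And IsNull(Me.SeqNum) Then
        ' DMax() finds the highest counter already used this year;
        ' Nz() handles the first record of a new year.
        Me.SeqNum = Nz(DMax("SeqNum", "tblEntries", _
            "Year(EntryDate) = " & Year(Me.EntryDate)), 0) + 1
    End If
End Sub
```

The display string can then be built wherever it is needed, e.g. Format([EntryDate],"yyyy-ddmm") & "-" & Format([SeqNum],"000"), without storing the combined value.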
 
I think I've seen some advisement to not use calculated fields directly in tables.
The advice is to not use fields that you calculate yourself. It is perfectly OK to use the calculated data type, although there is so little you can do with it that it probably won't save you any time. As I read the rest of your question, though, you are asking about something else, which is more commonly referred to as a sequence number.

I have never had a need to add a sequence number to imported data, but I think that you can't use an append query to calculate the sequence number. Why? The calculation relies on looking at existing data and determining how many records for the group already exist by using dMax() NOT dCount(). But the entire query would be processed in memory before any new row is added so all rows would end up with the same sequence number. If I'm wrong, someone will jump in with the solution. If I'm not wrong, then the solution is a VBA code loop: read the input file and, one record at a time, insert the row into the table with the calculated value using DAO and .AddNew.
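The VBA loop described above might be sketched like this. It assumes a staging table (tblImport) and destination table (tblEntries) with an EntryDate and a SeqNum field; those names are placeholders:

```vba
' Sketch of the read-one-insert-one approach with DAO and .AddNew.
' Because each row is committed before the next is read, dMax() sees
' the rows added by earlier passes of the loop, which is why this works
' where a single append query does not.
Public Sub ImportWithSequence()
    Dim db As DAO.Database
    Dim rsIn As DAO.Recordset, rsOut As DAO.Recordset

    Set db = CurrentDb()
    Set rsIn = db.OpenRecordset("tblImport", dbOpenSnapshot)
    Set rsOut = db.OpenRecordset("tblEntries", dbOpenDynaset)

    Do Until rsIn.EOF
        rsOut.AddNew
        rsOut!EntryDate = rsIn!EntryDate
        rsOut!SeqNum = Nz(DMax("SeqNum", "tblEntries", _
            "Year(EntryDate) = " & Year(rsIn!EntryDate)), 0) + 1
        rsOut.Update          ' commit so the next dMax() sees this row
        rsIn.MoveNext
    Loop

    rsIn.Close
    rsOut.Close
End Sub
```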

And finally, I would store the sequence number as a separate field rather than mushing it together with the other fields, in almost all situations. You should also add a compound unique index to prevent accidents and ensure that the generated sequence will be unique within the group. To create a multi-field index, you MUST use the Index dialog: the table design view, although it will allow you to create a multi-field PK, will only let you create a single-field index.

Open the Index dialog. On the first totally empty line, add a name for the index and the first field. Set it as unique. Move to the next line. Leave the index name blank and add the second field. Do the same thing for the rest of the fields - up to a max of 10.
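The same compound unique index can also be created in code with Access DDL, if you prefer. A minimal sketch, assuming a year field and counter field named SeqYear and SeqNum on a table called tblEntries:

```vba
' DDL alternative to the Index dialog (names are assumptions).
' The UNIQUE constraint spans both fields, so the counter only has to
' be unique within each year.
CurrentDb.Execute _
    "CREATE UNIQUE INDEX uxYearSeq ON tblEntries (SeqYear, SeqNum);", _
    dbFailOnError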

(Attachment: UniqueIndex.PNG)
 
IMO, a sequence number in table data is a waste. It adds friction to the process of saving a row, and provides zero return on investment.
• Payoff: It does not describe when the item was created. It does not provide any meaningful sort or filter opportunities. It has no relationship to any properties of the object the row describes. It does not uniquely identify a row.
• Payoff = zero.

• Cost: You have to invent the data, it has to be sequential, you have to validate, you have to pick up where you left off, you have to index to avoid duplicates, you have to restart at arbitrary points in time. If users can delete rows you have gaps. If gaps don't matter, see "Payoff:" above. If gaps matter, you must constantly enumerate, find, and insert or re-evaluate.
• Cost = considerable.

What is the payoff in your mind? What problem does it solve?
 
Why? The calculation relies on looking at existing data and determining how many records for the group already exist by using dMax() NOT dCount(). But the entire query would be processed in memory before any new row is added so all rows would end up with the same sequence number. If I'm wrong, someone will jump in with the solution.

Pat, I do not believe you are wrong, but the acid test is whether the query executes quickly or slowly. Remember that VBA and SQL are in two different memory environments.

Count, Max, Avg, and the other SQL aggregate functions are different from DCount, DMax, DAvg, and the other domain aggregates. Count() can operate within SQL - but DCount() can't. Oh, it will provide a count - but it had to slip back to the VBA environment to do it, if I recall correctly. I believe the all-SQL way to do this is a sub-query that provides the Max() of the field; I recall using something like that for my old check-book tracker.
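The all-SQL shape being described here might look like the correlated sub-query below (table and field names are assumptions). Note that Pat's caveat still applies if this is dropped into an append query: the sub-query sees only committed rows, so a multi-row append would still hand out the same "next" number repeatedly:

```sql
SELECT E.EntryID,
       E.EntryDate,
       (SELECT Nz(Max(E2.SeqNum), 0) + 1
        FROM tblEntries AS E2
        WHERE Year(E2.EntryDate) = Year(E.EntryDate)) AS NextSeq
FROM tblEntries AS E;
```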
 
Pat, I do not believe you are wrong, but the acid test is whether the query executes quickly or slowly. Remember that VBA and SQL are in two different memory environments.
There is no doubt that queries are faster than VBA loops but there are things that I don't believe you can do in a query and so you must use a VBA loop.

Try the query first. Make sure to use dMax() and NOT dCount(). If you use dCount(), you will generate duplicate sequence numbers if you ever delete a row. Even if your policy is no deletes, there is no need to risk using dCount(). Always use dMax() regardless. This is a defensive "driving" lesson. When you have a 100% solution, use it. Don't use the solution with a known point of failure.

Doc, I'm pretty sure that a record has to actually be committed to be found by dMax() and the records are committed at the end of the query, not one by one while the query executes.
 
OP isn’t clear whether they are appending a single record or multiple records. If the latter you can use a row number function starting from a known position determined using dmax
 
data may be imported in rather than interacted with via an Access form.
Pretty unusual to import files containing single records but certainly possible.
If the latter you can use a row number function starting from a known position determined using dmax
Do you have an example using "row number" whatever that is?
 
Do you have an example using "row number" whatever that is?

see attached.

Because this is an append query, it doesn't need to be more sophisticated than it is. The only requirement is that a unique value from one row to the next is passed; if it isn't, you will get a duplicate.

So if your values are

1
2
3
3
4
you will get a duplicate rownumber on the second 3

and if they were in the order

1
2
3
4
3

you won't get a duplicate row number

I have other versions that are stable in a select query and handle sorting and filtering (with this version, selecting a row triggers a requery, so the numbers will change).

you can see some variations of rownumber here
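Since the attachment itself isn't visible here, the sketch below shows one common shape for this kind of rownumber function: a Static counter that increments whenever the value passed in differs from the previous call, and resets when called with -1 (as in the UPDATE example later in the thread). This is a guess at the technique, not CJ's actual code:

```vba
' Hypothetical sketch of a query-driven row number function.
' Consecutive duplicate inputs get the same number, which matches the
' 1,2,3,3,4 behaviour described above.
Public Function rownumber(vValue As Variant) As Variant
    Static lCount As Long
    Static vLast As Variant

    If vValue = -1 Then        ' reset call, e.g. rownumber(-1) in WHERE
        lCount = 0
        vLast = Null
        rownumber = False
        Exit Function
    End If

    If Nz(vLast, "") <> Nz(vValue, "") Then lCount = lCount + 1
    vLast = vValue
    rownumber = lCount
End Function
```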
 


The sequence number is not "over all" it is "over group" so I don't think that solves the problem.
 
So modify the DMax() to have criteria identifying the group. I'm not prepared to spend my time on something that is poorly defined without a clear understanding of why it is required.
 
CJ, perhaps you should try to use dMax() to create a sequence number for a group in an append query to see what happens. And then reread #3. Please prove me wrong.
 
No idea what you mean. Provide some data and the expected result from that data
 
This is not my problem, it is the OP's. You keep saying to use a query. Pick one of your own tables. Group by anything. Create a sample that increments a sequence number within a group in an append query. I don't believe you can use an append query because the sequence number is generated in memory but it references data from the table. And new records have not yet been committed. Please prove me wrong. My attempts generate the next sequence number multiple times. So if the current sequence number is 185 and I append 10 rows, the sequence number is 186 - 10 times - which is what I would expect.
 
Sorry, I didn't notice that you posted the sample. Using the static variable works for the "over all" situation. Maybe the OP can figure out how to make it work for his "over group" situation.
 
No, this cannot be accomplished by a simple calc in table.

Generating a custom unique identifier is a fairly common topic. I had to do this for tracking laboratory samples. Samples are numbered sequentially for each year. The numbers had to be accounted for, so no gaps, and code deals with that. Data was not imported, so the number was assigned when the user initiated a new record. Review https://www.accessforums.net/showthread.php?t=23329

Assigning this new data to imported records can be accomplished with or without VBA.

I tested this mod to CJ's query and it worked: IIf(Year(Date()) <> Year(SaleDt),DMax("cnum","table6"), 0)+rownumber([rep] & [sale]) AS Expr1
Then INSERT without rownumber() - had to add a PK field to Table3:
INSERT INTO Table6(rep,sale,saledate,cnum)
SELECT rep,sale,saledt, Nz(DMax("cnum","table6","Year(SaleDate)=" & Year(SaleDt) & " AND PK<" & [PK])+1,1)+DCount("*","Table3","Year(SaleDt)=" & Year(SaleDt) & " AND PK <" & [PK]) FROM Table3;

An UPDATE action so Table3 is not needed but using rownumber() again:
UPDATE Table6 SET cnum = IIf(Year(Date()) <> Year(SaleDate),DMax("cnum","table6"), 0)+rownumber([PK])
WHERE cnum IS NULL AND (((rownumber(-1))=False));
Now UPDATE without rownumber() function (tested with 3 years of data):
UPDATE Table6 SET cnum =Nz(DMax("cnum","table6","Year(SaleDate)=" & Year(SaleDate) & " AND PK<" & [PK])+1,1)+DCount("*","Table6","Year(SaleDate)=" & Year(SaleDate) & " AND PK <" & [PK] & " AND cnum IS NULL")
WHERE cnum IS NULL;

Since domain aggregates in SQL can really slow performance, a VBA recordset might be more efficient.
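A recordset version of that UPDATE might look like the sketch below, reusing the Table6/SaleDate/cnum/PK names from the queries above. It assumes the unnumbered (Null cnum) rows sort after the already-numbered rows within each year; if they can interleave, more bookkeeping would be needed:

```vba
' Sketch: number the Null-cnum rows per year with one pass of a
' recordset instead of per-row domain aggregates.
Public Sub NumberNewRows()
    Dim rs As DAO.Recordset
    Dim lSeq As Long, lYear As Long

    Set rs = CurrentDb.OpenRecordset( _
        "SELECT SaleDate, cnum FROM Table6 ORDER BY SaleDate, PK", _
        dbOpenDynaset)

    Do Until rs.EOF
        If Year(rs!SaleDate) <> lYear Then
            lYear = Year(rs!SaleDate)
            ' Pick up where the existing numbering for this year left off.
            lSeq = Nz(DMax("cnum", "Table6", _
                "Year(SaleDate)=" & lYear), 0)
        End If
        If IsNull(rs!cnum) Then
            lSeq = lSeq + 1
            rs.Edit
            rs!cnum = lSeq
            rs.Update
        End If
        rs.MoveNext
    Loop
    rs.Close
End Sub
```

Only one DMax() per year is needed this way, rather than two domain aggregate calls per row.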

Could also run an UPDATE on the entire table to ensure no gaps, but if this number has meaning to people, that could negate its usefulness.

An alternative is to use the RunningSum property of a textbox on a report; again, deleting or sorting records would reassign the sequence and might lose usefulness.

Several assumptions involved and we really need more info.
 
Maybe the OP can figure out how to make it work for his "over group" situation.
It is perfectly doable by modifying the rownumber function and ensuring the query is sorted by 'group'. But without data and a clearer understanding of the requirement, I'm not prepared to spend time providing a further example.
 
