Handle Overflow Errors In A View (SQL Server / Access)

dalski
I need calculated columns with high-precision, high-scale DECIMAL types in views/SP's in SQL Server. As the calculations pass through several layers of high-precision multiplication/division operations, the precision requirements are huge. The problem is that sometimes big numbers will be entered by the user in different columns; sometimes large amounts in column A, small amounts in B, big amounts in C... with many permutations and more columns than mentioned.

Ideally the DECIMAL type would handle the large numbers & precision I need, and I would design safely within its constraints. That isn't possible, as the precision & scale required are too big. It may seem like overkill, but I'm pretty sure the math is correct. The best workaround seems to be CASTing each layered calculation, reducing precision whilst maintaining high scale (decimal places). That works well within the precision & scale limits. However, it leaves me vulnerable to overflow errors should the integer part (the digits before the decimal place) exceed the storage capacity. In arithmetic the scale (dp) will be truncated to accommodate a larger integer part when the result outgrows the cast type; that's fine, the DECIMAL type behaves that way for good reason, and an overflow error is not thrown on the scale side. No probs there.
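To illustrate what I mean, a sketch (table & column names below are hypothetical; the first query shows the silent scale-trimming, which I believe is documented to bottom out at a scale of 6 for multiplication/division):

Code:
-- Without intermediate CASTs, SQL Server's decimal arithmetic rules apply:
-- DECIMAL(38,20) * DECIMAL(38,20) wants precision 77, which is capped at 38,
-- and the scale is cut down (to a minimum of 6) rather than raising an error.
SELECT CAST(1.0 AS DECIMAL(38,20)) * CAST(1.0 AS DECIMAL(38,20)) AS SilentScaleTrim;
-- returns 1.000000 (scale reduced from 40 to 6, no error)

-- The layered-CAST workaround: pin every intermediate result to a type whose
-- integer digits & scale I control, e.g. DECIMAL(19,6):
SELECT CAST(CAST(t.ColA * t.ColB AS DECIMAL(19,6)) * t.ColC AS DECIMAL(19,6)) AS Result
FROM dbo.MyTable AS t;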

Views
They're the simplest data sources for my forms in Access. However, I cannot see how to catch an overflow error in a SQL Server view, nor how to handle that error in MS Access.
When an overflow error occurs in a view, the user gets no notification of an error on the form. It's as if the data source detaches: catastrophic failure. One would think the transaction would be rolled back & the user notified. We also need to hit the "Refresh All" button in Access after entering a rogue value; otherwise the user is unaware that a sinister value just caused an overflow error.

Testing the many possible permutations of data entry isn't going to be feasible; there are simply too many. Ideally I want to capture the overflow error in SQL Server & revert the transaction, with a notification in Access.
Any ideas on what to do? In SP's it seems TRY...CATCH blocks can capture an arithmetic overflow error, but you can't use them in views.
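Roughly what I have in mind for the SP route, as a sketch (the proc, table & column names are hypothetical; 8115 is SQL Server's arithmetic-overflow error number). Access would call this via a pass-through query and should then surface the THROW as an ODBC error:

Code:
CREATE PROCEDURE dbo.uspGetCalculations
AS
BEGIN
    BEGIN TRY
        SELECT CAST(t.ColA * t.ColB AS DECIMAL(19,6)) AS Result
        FROM dbo.MyTable AS t;
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 8115  -- arithmetic overflow
            THROW 50001, 'Overflow in calculated column - check input magnitudes.', 1;
        ELSE
            THROW;  -- re-raise anything else unchanged
    END CATCH
END;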
 
Your math is wrong for the types you are using and the result is the error output you see in the form. I would suggest you look into Numerical Methods to come up with an algorithm that will do the calculation to the precision you require.
 
Thanks Ron, I understand what's causing the error. I can't find much by googling Numerical Methods. I'm pretty sure my basic math is correct; if I need 9,999,999.999 in a certain column, multiplying/dividing through several others with increased scale, the numbers quickly get big. Maybe I need to look at the float type then? It seems float can go very wrong on multiplication/division, even addition:
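For example, a minimal check (plain literals like 0.1 are DECIMAL in T-SQL, so FLOAT has to be forced):

Code:
SELECT CASE
         WHEN CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT) = CAST(0.3 AS FLOAT)
         THEN 'equal' ELSE 'not equal'
       END AS FloatAddition;
-- 'not equal': the doubles nearest 0.1 & 0.2 sum to ~0.30000000000000004,
-- which is not the double nearest 0.3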
 
Numerical methods is a complex subject. I minored in it in school but have not done anything really deep in a long, long time. With that said, I would suggest looking for outside support for an algorithm, AFTER you better define the parameters and limits of the inputs and outputs. In fact, with that information posted, you may get more support from the group.
 
You've got a few basic choices.

Scaled integers (such as CAST( ... AS DECIMAL( m, n ) )) will overflow when the integer part exceeds (max # of digits) minus (# of decimal places), even when auto-scaling is in play.
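For instance, in SQL Server terms (a quick sketch):

Code:
SELECT CAST(1234.5678 AS DECIMAL(6,2));  -- OK: 1234.57, excess scale is rounded away
SELECT CAST(12345.678 AS DECIMAL(6,2));  -- Msg 8115: the integer part needs 5 digits,
                                         -- but DECIMAL(6,2) only allows 6 - 2 = 4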

IEEE 754 numbers allow for more precision at the cost of a lot of overhead. See also: https://en.wikipedia.org/wiki/IEEE_754

Most often, you run into Single or Double precision (7 digits or 15 digits) with floating decimal points. However, some computers - e.g. the Intel Longhorn CPUs - have Quadruple precision (34 digits) hardware. I used it once for snorts and giggles. Man, it was bizarre. Insanely precise.

Perhaps you could look for a math library that claims to be compatible with VBA and IEEE 754 Quad precision. You might also find that you could access such extended-precision numbers by doing your computations in some other language that supports or emulates extended numbers. At least when you use floating point, you don't have to worry about constantly rounding the intermediate values to preserve the range of the integer part of your number. Floating point truncates from the right, so it sheds excessive precision at the low end of the numbers in question. If you're worried about the integer part, that is what gets preserved the longest.
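A quick SQL Server illustration of that graceful degradation, next to the scaled-integer hard stop:

Code:
-- A 20-digit literal (numeric(20,0)):
SELECT CAST(99999999999999999999 AS FLOAT);          -- fine: 1E+20, low-order digits shed
SELECT CAST(99999999999999999999 AS DECIMAL(18,2));  -- Msg 8115: integer part cannot fit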

In any computer that lacks IEEE 754/Quad, your only choices would be simulation or a continual and very tedious attention to the number of digits you are generating. You have to live within your resource limits, and I fear that your extreme demand for precision is going to bankrupt your efforts. You just don't have the digital resources. Capacity is capacity and when you exceed it, computers complain.
 
I have looked and cannot find any reference to an Intel Longhorn CPU, but I do remember the name as the code name for Windows Vista. Do you have any other memory of that system? Would it be something much bigger than an Intel-based computer?
 
AI did find that the following support native quadruple precision:
1. IBM POWER9 and POWER10
2. IBM zSeries (mainframes)
3. Some older HP/Intel Itanium (IA‑64) systems
 
Thanks both, dear lord this is too much for me; I struggle with basic addition of two numbers 🤮. I'm trying my utmost to reduce precision with DECIMAL.

If rounding worked before Let coercion, then I think I'd be OK to use float, but the inaccuracies terrify me. I appreciate the external-library suggestion Doc, it's a good one, but I'm short on time & capability for that. I'm trying my utmost to get the number sizes down.

Float As A Solution
Is suggesting float not as bad as it seems? If Let coercion didn't exist & I could round to x dp's, I think I'd be OK, like in Excel; the trick is to round to specified dp's to avoid floating-point arithmetic errors. BUT I think Let coercion renders the round useless, as per the other thread.
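On the server side the trick would look something like this sketch; whether the value survives the trip through VBA's Let coercion is the open question:

Code:
-- FLOAT arithmetic, then one final CAST to fix the dp's:
SELECT CAST(CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT) AS DECIMAL(10,4)) AS Pinned;
-- 0.3000: the binary noise sits below the 4th dp and is rounded away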
Most often, you run into Single or Double precision (7 digits or 15 digits) with floating decimal points.

Based on FLOAT(53), this 'precision' confuses me: the docs show a float ranging up to ~1.79E+308, which looks like a direct contradiction with only 15-or-so significant digits, until you realise the 308 is the exponent range (magnitude), not significant figures. I know binary can't accurately represent some numbers, but it had me puzzled. Presumably the 53 is the 52 stored fraction bits plus the implicit leading bit; of the remaining 64 − 52 = 12 bits, 11 are exponent & 1 is sign. That's making sense, though I'm getting sidetracked with computer science here as opposed to my issue at hand 🤦‍♂️.
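Checking that in SQL Server (assuming FLOAT(53), i.e. a 64-bit double):

Code:
-- ~15-17 significant digits: an 18-digit integer loses its low-order digits...
SELECT CAST(123456789012345678 AS FLOAT);  -- 1.2345678901234568E+17
-- ...while the exponent can still reach roughly ±308:
SELECT 1.5E+308;                           -- fine, below the ~1.79E+308 maximum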
 

You are finding that floating point has its limits. If you are multiplying a long list of floating-point numbers, losing precision on each multiplication, then at some point, no matter how many decimal places you have, you will end up with a nonsense value.
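For example, even plain addition drifts; a T-SQL sketch:

Code:
-- Add 0.1 (as a double) one hundred thousand times; the exact answer is 10000.
DECLARE @f FLOAT = 0, @i INT = 0;
WHILE @i < 100000
BEGIN
    SET @f += CAST(0.1 AS FLOAT);
    SET @i += 1;
END;
SELECT @f AS Accumulated;  -- slightly above 10000, e.g. 10000.0000000188...:
                           -- the representation error compounds per operation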
 
The box I had was an HP IA-64 Itanium: 8 CPU threads, an advanced PCIe internal data bus, support for fiber-channel external disks at VERY high speed, and 8 GB RAM running at 2.1 GHz with a 64-bit data width, which for OpenVMS was enough for 30-40 simultaneous users, because OpenVMS is not a memory hog like Windows.

The general comments from OpenVMS users were that the O/S on a good machine was the world's biggest I/O wait state. Even with a back-end ORACLE Enterprise database manager and some clunky non-Windows front-end software, I could NEVER get that thing bogged down. It would average 30 users at peak usage times, I would have Free memory (directly comparable to the same concept in Windows, 'cause MSFT stole the memory management design) at maybe 30-40% of my 8 gigs, and the CPU "busy" load would RARELY reach 2.5 of 8 threads busy. The U.S. Navy loved it for what it did, even though it was an oddball because of OpenVMS as the O/S. Hardly ever had hardware failures either. It wasn't perfect, but it was a real crankin' machine.

However, the latest trends of having 24 threads/12 cores on a desktop machine running at that same speed tended to make the Itanium obsolete, eventually. Even so, it was a nightmarishly fast machine and would qualify that way today.
 
A lot of the solid performance could be attributed to OpenVMS. I fondly remember my Novell servers. They never went down and they never slowed down. I hosted Access MDB BEs on them and they never got corrupted. The exact same BE never performed as well on Windows servers.
 
