Handle Overflow Errors In A View (SQL Server / Access)

dalski

I need calculated columns with high-precision, high-scale DECIMAL types in views / stored procedures in SQL Server. As the calculations go through several layers of high-precision multiplication/division operations, the precision requirements are huge. The problem is that users will sometimes enter big numbers in different columns; sometimes large amounts in column A, small amounts in B, big amounts in C... with many permutations, and more columns than mentioned.

Ideally the DECIMAL type would be able to handle the large numbers and precision that I need, and I would design safely within the constraints of the type. That is not possible, as the precision and scale of the numbers are too big. It may seem like overkill, but I'm fairly sure the maths is correct. The best way around this seems to be CASTing the type in each layered calculation, reducing precision while maintaining high scale (decimal places). That is a brilliant way to work within the precision and scale limits. However, it leaves me vulnerable to overflow errors should the integral part (the digits before the decimal point) exceed the storage capacity. It seems the scale (decimal places) will be truncated to accommodate a larger integral part when the value exceeds the cast type; that is fine, the DECIMAL type behaves this way for good reason, and no overflow error is thrown on the scale. Excellent, no problems there.
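To make the layered-CAST idea concrete, here's a simplified sketch of what I mean (the view, table and column names vwLayeredCalc, tblInput, ColA/ColB/ColC are made up for illustration, not my real schema):

CREATE VIEW dbo.vwLayeredCalc AS
SELECT  t.ID,
        -- Each intermediate layer is forced back down to DECIMAL(19,6) so the implicit
        -- precision/scale growth of multiplication does not exhaust the DECIMAL(38,...) limit.
        CAST(
            CAST(t.ColA * t.ColB AS DECIMAL(19,6))   -- layer 1: multiply, then cut precision
            / NULLIF(t.ColC, 0)                      -- layer 2: divide (guard against divide-by-zero)
        AS DECIMAL(19,6)) AS Result
FROM dbo.tblInput AS t;
-- An overflow is only thrown when the integer part of a layer no longer fits in 19 - 6 = 13 digits.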

Views
They're the simplest data sources for my forms in Access. However, I cannot see how to catch an overflow error in a SQL Server view, nor how to then handle that error in MS Access.
When an overflow error occurs in a view, the user gets no notification of an error on the form. It's as if the data source detaches; a catastrophic failure. One would think the transaction would be rolled back and the user notified. We also need to hit the "Refresh All" button in Access after entering a rogue value, otherwise the user is unaware that a sinister value just caused an overflow error.

Testing the many possible permutations of data entry isn't going to be feasible; there are simply too many. Ideally I want to capture the overflow error in SQL Server and revert the transaction, with a notification in Access.
Any ideas on what to do? In stored procedures it seems TRY...CATCH blocks can capture an arithmetic overflow error, but you can't use them in views.
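For reference, this is roughly the stored-procedure shape I was thinking of with TRY...CATCH; just a sketch with made-up object names (uspRecalc, tblInput), not my actual code:

CREATE PROCEDURE dbo.uspRecalc
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.tblInput
        SET    Result = CAST(ColA * ColB / NULLIF(ColC, 0) AS DECIMAL(19,6));
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;   -- re-raise so the caller (e.g. a pass-through query in Access) sees the overflow
    END CATCH
END;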
 
Your math is wrong for the types you are using, and the result is the error output you see in the form. I would suggest you look into numerical methods to come up with an algorithm that will do the calculation to the precision you require.
 
Thanks Ron, I understand what's causing the error. I can't see much when Googling numerical methods. I'm fairly sure my basic maths is correct; if I need 9,999,999.999 in a certain column, multiplying/dividing through several others with increased scale means the numbers quickly get big. Maybe I need to look at the FLOAT type then? It seems float can go very wrong on multiplication/division, even addition.
 
Numerical methods is a complex subject. I minored in it in school but have not done anything really deep in a long, long time. With that said, I would suggest looking for outside support for an algorithm, AFTER you better define the parameters and limits of the inputs and outputs. In fact, with that information posted, you may get more support from the group.
 
You've got a few basic choices.

Scaled integers (such as CAST( ... ) AS DECIMAL( m, n )) will overflow when the integer part exceeds (max # of digits) - (# of decimal places) when auto-scaling is used.
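A quick illustration of that boundary (made-up values; DECIMAL(7,3) leaves 7 - 3 = 4 digits for the integer part):

SELECT CAST(1234.5678 AS DECIMAL(7,3));  -- 1234.568 : extra scale is simply rounded away
SELECT CAST(12345.678 AS DECIMAL(7,3));  -- Msg 8115: arithmetic overflow (5 integer digits do not fit)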

IEEE 754 numbers allow for more precision at the cost of a lot of overhead. See also: https://en.wikipedia.org/wiki/IEEE_754

Most often, you run into Single or Double precision (7 digits or 15 digits) with floating decimal points. However, some computers - e.g. the Intel Longhorn CPUs - have Quadruple precision (34 digits) hardware. I used it once for snorts and giggles. Man, it was bizarre. Insanely precise.

Perhaps you could look for a math library that claims to be compatible with VBA and IEEE 754 quad precision. You might also find that you could access such extended-precision numbers by doing your computations in some other language that supports or emulates extended numbers. At least when you use floating-point numbers, you don't have to worry about constantly rounding the intermediate values in order to preserve the range of the integer part of your number. Floating point truncates from the right, so it sheds excessive precision at the low end of the numbers in question. If you worry about the integer part, that is what gets preserved the longest.
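To see the "truncates from the right" behaviour (an illustrative value only):

SELECT CAST(12345678901234567890.123 AS FLOAT);  -- ≈ 1.23456789012346E+19 : the magnitude survives, the low-order digits do not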

In any computer that lacks IEEE 754/Quad, your only choices would be simulation or a continual and very tedious attention to the number of digits you are generating. You have to live within your resource limits, and I fear that your extreme demand for precision is going to bankrupt your efforts. You just don't have the digital resources. Capacity is capacity and when you exceed it, computers complain.
 
However, some computers - e.g. the Intel Longhorn CPUs - have Quadruple precision (34 digits) hardware. I used it once for snorts and giggles. Man, it was bizarre. Insanely precise.
I have looked and cannot find any reference to an Intel Longhorn CPU, but I do remember the name as the code name for Windows Vista. Do you have any other memory of that system? Would it be something much bigger than an Intel-based computer?
 
AI did find that the following support native quadruple precision:
1. IBM POWER9 and POWER10
2. IBM zSeries (mainframes)
3. Some older HP/Intel Itanium (IA‑64) systems
 
Thanks both; dear lord, this is too much for me, I struggle with basic addition of two numbers 🤮. I'm trying my utmost to reduce precision with DECIMAL.

If rounding worked before Let coercion then I think I'd be OK to use FLOAT, but the inaccuracies terrify me. I appreciate the external-library suggestion, Doc; it's a good one, but I'm short on time and capability for that. I'm trying my utmost to get the number sizes down.

Float As A Solution
Is suggesting FLOAT not as bad as it seems? If Let coercion didn't exist and I could round to x decimal places, I think I'd be OK, as in Excel: the trick of rounding to specified decimal places to avoid floating-point arithmetic errors. BUT I think Let coercion renders the rounding useless, as per the other thread.
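In SQL Server terms, the rounding trick I mean would look something like this (0.1 + 0.2 is just the classic illustration, not my real data):

SELECT CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT)                         AS RawFloat,   -- ≈ 0.30000000000000004
       ROUND(CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT), 2)               AS Rounded,    -- 0.3
       CAST(CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT) AS DECIMAL(10,2))  AS AsDecimal;  -- 0.30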
Most often, you run into Single or Double precision (7 digits or 15 digits) with floating decimal points.

Based on FLOAT(53), this 'precision' confuses me, because it seems that a float being good for up to 308 significant figures is a direct contradiction. I know binary can't accurately represent some numbers, but it still seems a direct contradiction. Presumably 53 is fraction + 1 sign; 64 bits - 53 = 11 for the exponent. That's making sense, though I'm getting sidetracked with computer science here as opposed to my issue at hand 🤦‍♂️.
 
You are finding that floating point has its limits. If you are multiplying a long list of floating-point numbers, losing precision on each multiplication, then at some point, no matter how many decimal places you have, you will end up with a nonsense value.
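You can watch the error creep in with a trivial loop; an illustration only, the exact final value will vary:

DECLARE @f FLOAT = 0.0, @i INT = 0;
WHILE @i < 1000000
BEGIN
    SET @f = @f + 0.1;   -- 0.1 has no exact binary representation, so each step adds a tiny error
    SET @i += 1;
END;
SELECT @f AS FloatSum;    -- expect exactly 100000; typically you get something like 100000.000001333 instead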
 
1. IBM POWER9 and POWER10
2. IBM zSeries (mainframes)
3. Some older HP/Intel Itanium (IA‑64) systems
The box I had was an HP IA-64 Itanium: 8 CPU threads, an advanced PCIe internal data bus, support for fiber-channel external disks at VERY high speed, and 8 GB of RAM, running at 2.1 GHz with a 64-bit data width - which for OpenVMS was enough for 30-40 simultaneous users, because OpenVMS is not a memory hog like Windows.

The general comment from OpenVMS users was that the O/S on a good machine was the world's biggest I/O wait state. Even with a back-end Oracle Enterprise database manager and some clunky non-Windows front-end software, I could NEVER get that thing bogged down. It would average 30 users at peak usage times, I would have Free memory (directly comparable to the same concept in Windows, 'cause MSFT stole the memory management design) at maybe 30-40% of my 8 gigs, and the CPU "busy" load would RARELY reach 2.5 of 8 threads busy. The U.S. Navy loved it for what it did, even though it was an oddball because of OpenVMS as the O/S. Hardly ever had hardware failures either. It wasn't perfect, but it was a real crankin' machine.

However, the latest trends of having 24 threads/12 cores on a desktop machine running at that same speed tended to make the Itanium obsolete, eventually. Even so, it was a nightmarishly fast machine and would qualify that way today.
 
A lot of that solid performance could be attributed to OpenVMS. I fondly remember my Novell servers. They never went down and they never slowed down. I hosted Access MDB back ends on them and they never got corrupted. The exact same BE never performed as well on Windows servers.
 
Based on FLOAT(53), this 'precision' confuses me, because it seems that a float being good for up to 308 significant figures is a direct contradiction. I know binary can't accurately represent some numbers, but it still seems a direct contradiction. Presumably 53 is fraction + 1 sign; 64 bits - 53 = 11 for the exponent.

You apparently are thinking about it wrong. There are not "308" significant figures in a DOUBLE.

Think REALLY HARD about scientific notation in school where you had numbers like Avogadro's number, 6.0221408 x 10^23, the approximate number of atoms in one mole of any element or compound, which doesn't have 23 significant digits. It has eight. But it has an explicit scaling exponent that tells you the magnitude of the number. In a DOUBLE number, you have a sign bit that relates to the sign of the mantissa, the digits of the expressed number. You have 52 binary digits which gives you the equivalent of about 15 decimal digits and maybe a little bit more, not a full digit's worth. That 308 is the equivalent of the 11-bit binary exponent in what is called "excess" notation.
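A quick sanity check of where the ~308 actually comes from (the exponent range, not digits of precision):

SELECT POWER(CAST(2.0 AS FLOAT), 1023)         AS LargestBinaryPower,  -- ≈ 8.99E+307
       LOG10(POWER(CAST(2.0 AS FLOAT), 1023))  AS DecimalMagnitude;    -- ≈ 307.95, i.e. the "308"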

Scientific notation "floats" the fractional point to the left or right (whether in binary or decimal or octal or hex... you get the idea). To the left of that point, you have integer-qualified digits. To the right of that point, you have fraction-qualified digits. When the position of the fractional point is in the middle of the digit sequence, you either have a non-normalized scientific number OR you have an ordinary decimal number that has no explicit exponent.

You are absolutely agonizing over this precision problem. You are banging your head into a wall. Don't do that to yourself. You obviously believe you need some complex rounding and scaling algorithm. I refuse to fully believe that.

The Currency data type is probably the best thing for you to use, giving you 15 integer digits and 4 fractional digits. If you compute interest or fractional splits of your numbers, it will lose fractional digits below 0.0001, but NOBODY computes that far. If you use a DECIMAL data type you have 28 digits tops, and if you use 16 of them in the fraction, you have only 12 digits left for the integer part.

Here is a rule of thumb for estimating scaling. First approximation: 2^10 ~ 10^3 - for estimation purposes. So... how many bits do you have? For the SINGLE you have 32 bits = 1 sign, 23 mantissa, 8 for exponent. So you have 23 bits. For non-zero numbers you actually have 24 because of something called "hidden-bit" normalization. But you have 23 or 24 bits. Divide that by 10. You get 2 and a fraction. So 2x3 = 6, you get six and a fraction digits out of that 24-bit binary number. Let's do DOUBLE. You have 1 sign, 11 exponent, and 52 bits. (Again, 53 with hidden-bit normalization). So 52/10 = 5 and a fraction. 5x3 = 15, so you get a little over 15 digits out of that 52-bit mantissa.

What this comes down to is: how many digits do you REALLY need? I don't accept the answer you've been giving us, because I've seen the way you've been flailing around. To quote that great actor Strother Martin in the movie Cool Hand Luke, "What we have here is a failure to communicate."

In case anyone is wondering, this 2^10~10^3 approximation is a quick-and-dirty way to change numerical base on a number, from 2 to 10 based on the way we are doing it. It's the fast way to find the number of digits you can expect out of a bit stream.

Works for integers, too. What is the number of digits in a 32-bit integer? 32/10 = 3, with 2 bits left over. 3x3 = 9, so 1,000,000,000 x the 2 bits left over... 2^2 = 4 so the number is about 4 billion. Want to try it for Quad integers? That's a 64-bit number. So 64/10 = 6, then 6 x 3 = 18. And you have 4 bits left over which is 2^4=16, and the final number is about 16,000,000,000,000,000,000.
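The exact form of that rule of thumb is digits ≈ bits × log10(2) ≈ 0.301 × bits, which you can check directly:

SELECT 24 * LOG10(2.0) AS SingleDigits,  -- ≈ 7.2
       53 * LOG10(2.0) AS DoubleDigits,  -- ≈ 15.95
       32 * LOG10(2.0) AS Int32Digits,   -- ≈ 9.6
       64 * LOG10(2.0) AS Int64Digits;   -- ≈ 19.3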

It gets dizzying - but once you wrap your head around it, it becomes useful for quick-and-dirty approximations.
 
I struggle with basic addition of two numbers
OK. So, before this thread goes completely crazy, perhaps we should look at your actual issue?
  • What numbers do you have?
  • What calculations should be performed?
  • Examples that fail (cause the overflow)
 
As the calculations go through several layers of high-precision multiplication/division operations, the precision requirements are huge.

That is ANOTHER red flag that I missed from your earlier post where you originally made this comment.

In scientific studies, precision requirements DO NOT GROW. They are ALWAYS limited to the precision of the least precise number in your sequence of computations. Precision is actually a factor for study in formal error analysis, which tells you the limits of how many digits you may expect in a number. Quality-control officers learn this fairly quickly, as do research-oriented chemical analysts. (Which was my specialty in college.)

You can NEVER gain precision in math. You can only lose it or keep what you have. It is an offshoot derived from the laws of thermodynamics, the basic premise of which is that in the real world, nothing is perfect. You are conflating the precision of the computation with the precision of the thing being measured. The computation HAS NO MEANINGFUL PRECISION because you can arbitrarily CHOOSE how many digits to keep. Only the object in question has a required precision. Simple case in point. When you go to a restaurant and you have to compute the tip, you stop your computation at pennies because nobody tracks money more precisely than pennies.

There is an old phrase, "Good enough for government work." Though it raises the hackles of some folks, including one of my contract-company bosses, it is a sign that, even for government nit-pickers, there are limits to be considered when doing something. Let's try another old phrase, "You can't make a silk purse out of a sow's ear." You can't make those numbers something they are not.

Here is one of my favorite analogies relating to computations that don't need to be performed. I don't know if you have ever run across linear regression. It is a method by which you compute the formula for a particular line, to obtain a slope and an intercept point. Imagine this experiment: Take some big graph paper with a numbered grid with 0 in the middle. Stand away from it and fire a small gauge shotgun at it. Now take the X & Y coordinates of each pellet hole. Assuming this was a cylinder-bore shotgun that had not been crushed or otherwise distorted, what pattern would you see? (Hint: Random radial distribution, Gaussian distribution of holes from the center of the target.) Now using the linear least-squares method, determine the slope and intercept of the best straight line through that pattern of pellet holes. Guess what? The linear least-squares method WILL compute that line. But the more important question is, SHOULD you have bothered to compute that line? To which the answer is NO. It was an inappropriate application of math, which - being generally inanimate and non-sentient - doesn't CARE about applicability. Applying that level of computation (linear regression on something that isn't expected to be a straight line anyway) is just WRONG.

I think your fascination with precision is another example of non-applicable math principles. You NEVER need more precision than that of the object / entity / population being measured in the first place. I have mentioned this before but apparently it hasn't clicked.
 
