Like almost everything, this is not really as simple as it may first appear.
• It is impossible to represent an infinitely repeating decimal exactly on a computer. There is not enough memory to store an infinite number of digits, so the value will be rounded or truncated somewhere, because storing the precise result is impossible.
• Because the stored value is not precise, each system may handle the imprecision differently.
• There are different rules you might use to round a number. Ask your AI about 'Banker's Rounding,' which is what the VBA.Round() function does.
• In banker's rounding, numbers are sometimes rounded up and sometimes rounded down. This prevents a bias in pennies that would accumulate if $0.005 always rounded up to $0.01.
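The difference between the two rules can be sketched in Python's decimal module (used here purely as an illustration of the rounding modes, not of VBA itself):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Compare classic 4/5 rounding (half-up) with banker's rounding (half-even)
# on the half-penny cases that cause the bias described above.
for cents in ('0.005', '0.015', '0.025', '0.035'):
    d = Decimal(cents)
    half_up = d.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
    bankers = d.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN)
    print(f'{cents}: half-up -> {half_up}, bankers -> {bankers}')
```

Half-up always pushes the half-penny to the next penny; half-even rounds it to the nearest even penny (0.005 → 0.00, 0.015 → 0.02, 0.025 → 0.02), so the upward bias cancels out over many transactions.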
Thanks Mark, I'm a little familiar with BR, but I thought the whole point of using decimal is that its precision is guaranteed. I thought SQL Server would use BR, but this suggests it does not. Obviously you'll know a lot more than me. There must be a way to disregard differences between client computers... because by that logic you couldn't have stock exchanges, online banking, or Excel...
A separate topic - I'm avoiding overflow by specifying the rounding, and I'm also replicating the method in Excel by explicitly specifying the decimal places of accuracy in calculations, to avoid the truncation caused by floating-point arithmetic.
Adding to the above, I'm surprised you didn't test in both Access and the Windows Calculator.
FWIW, both give the same result as the online calculator.
Does that mean those are the 'correct' answers? See post #2
The perfect reason for using a decimal data type in your DBMS, and one of the reasons Access needs a native decimal data type: you will always get a consistent answer (even if it isn't perfect).
VBA gives me the same as Excel, though I won't be using Access a great deal, as most of the work will be done in SQL Server. Access will just be displaying data (to fewer decimal places and with less accuracy), but that's fine; the BE is doing the operations.
Code:
Sub blah()
    Dim dec As Variant
    ' CDec forces the rounded result into the Decimal subtype of Variant
    dec = CDec(Round(1 / 1.95, 16))
    Debug.Print dec
End Sub
FWIW, both give the same result as the online calculator.
Does that mean those are the 'correct' answers? See post #2

Well, it's not relating to BR, because Excel does not use BR. In my feeble mind that seems worse, as it contradicts BR. I don't know what the right answer is; I'd assume the online calculator would be the most reliable, as we're not seeing great consistency elsewhere, but no doubt I'm wrong.
I must be missing something here, as I have chosen decimal because of its 'guaranteed' precision, and one would think sticking with the same data type would be more robust. Single is wonderful in this instance, but I'll be working with plenty of numbers bigger and more precise than Single can handle; nor should I have to convert data types. Decimal is the most accurate, and it should be able to handle this.
Like I say, to avoid it in Excel you need to explicitly round to the decimal places required in every area. That's fine, and it works in Excel, but it doesn't seem to affect SQL Server.
I need the math to be correct. Imagine stocks and shares; they'll be accurate to many, many decimal places. This rounding is costing me 4p on a primary calculation, and with compounding that will add up to a lot. More importantly, it needs to be correct. I've chosen a high 16-dp precision to mitigate this.
I must be missing something here, as I have chosen decimal because of its 'guaranteed' precision, and one would think sticking with the same data type would be more robust. Single is wonderful in this instance, but I'll be working with plenty of numbers bigger and more precise than Single can handle; nor should I have to convert data types. Decimal is the most accurate, and it should be able to handle this.
I converted 1.95 to single as an example to get the same result as you are seeing in SQL Server. My guess is that SQL Server is doing the calculation in the lower precision because the 2 terms can be represented as Integer and single. By casting 1.95 to double, you force SQL Server to do the calculation as a double.
The damage is done quickly. Look at 1/1.95 (as this appears in your initial post) and look at GPGeorge's contribution in post #9.
That CAST statement is SQL-Server, right? I believe 1/1.95 will compute as SINGLE(1)/SINGLE(1.95) because that is the smallest number size that will fit the two operands on either side of the "/" operator. The "1" could be an INTEGER, but 1.95 must be at least a SINGLE, so LET-coercion makes the "1" into a SINGLE as well. When you get 0.5128200000000000, you got that because SINGLE division gives you six or seven digits at most i.e. 0.512820. The other computations show that the 6th digit is legitimately a 0 and the seventh digit must have been lost to SINGLE mantissa-truncation.
Your "CAST ROUND(...) AS DECIMAL" is done after the SINGLE division has already occurred and the inevitable truncation has ALSO already occurred. Note also that starting with a longer possible mantissa could occur ONLY if your constant expression started out with more digits or contained the implicit type-marker (in VBA, it is "#", the octothorpe) that would signal it to be a DOUBLE. As far as I recall, LET-coercion cannot promote beyond DOUBLE. You MUST do an explicit typecast or function-based conversion to get to DECIMAL.
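The single-precision truncation described above can be sketched in Python (an illustration only, not SQL Server's actual code path): round-tripping the operands and the quotient through 32-bit IEEE-754 single precision with the struct module shows why only six or seven significant digits survive.

```python
import struct

def as_single(x):
    """Round-trip a Python float (a double) through 32-bit IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

full_double = 1 / 1.95                                     # double: ~0.512820512820513
as_if_single = as_single(as_single(1.0) / as_single(1.95)) # operands and result as single
print(full_double)
print(as_if_single)  # agrees with the double only to about 7 significant digits
```

The two values agree to roughly the 7th significant digit and then diverge, which matches the 0.512820... pattern seen in the SQL Server result.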
Code:
Value = 1 / 1.95
? Value
 0.512820512820513
? RoundMid(Value, 16)
 0.512820512820513
Value = 1 / CDec(1.95)
? Value
 0.5128205128205128205128205128
? RoundMid(Value, 16)
 0.5128205128205128
RoundMid, which performs true 4/5 rounding, is hosted in my VBA.Round repository on GitHub.
The link is attached because, as a novice, I'm not allowed to post links to my own repositories.
I believe 1/1.95 will compute as SINGLE(1)/SINGLE(1.95) because that is the smallest number size that will fit the two operands on either side of the "/" operator.
Exactly! IIRC this is also officially documented in the T-SQL Documentation.
Applying this to the original statement:
SQL:
BEGIN
DECLARE @One AS DECIMAL (28,16) = 1,
@OneDotNineFive AS DECIMAL (28,16) = 1.95;
SELECT CAST(ROUND(@One/@OneDotNineFive,16) AS DECIMAL (28,16)) AS LessWTF;
END;
=> LessWTF: 0.5128205128000000
You might argue that the result's scale of 10 is less than your defined Scale of 16. - Correct.
SQL Server must define the precision and scale of the result before running the actual calculation. It does so by a set of rules considering the precision and scale of both arguments. As the possible maximum precision of the result is limited, you might get a result with a greater scale by reducing the precision of the arguments.
SQL:
BEGIN
DECLARE @One AS DECIMAL (18,16) = 1,
@OneDotNineFive AS DECIMAL (18,16) = 1.95;
SELECT CAST(ROUND(@One/@OneDotNineFive,16) AS DECIMAL (28,16)) AS NoMoreWTF;
END;
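The scale reduction described above can be sketched as a small Python function (my reading of the documented T-SQL rules for DECIMAL division, not Microsoft's code; the function name is mine): when the raw result precision exceeds 38, precision is capped and scale is cut to protect the integral digits, but never below 6.

```python
# Sketch of T-SQL's result-type rule for DECIMAL division (e1 / e2),
# per the documented formulas. Assumed helper name, not a SQL Server API.
def tsql_divide_type(p1, s1, p2, s2, max_p=38, min_s=6):
    precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)
    scale = max(6, s1 + p2 + 1)
    if precision > max_p:
        # Reduce scale to preserve the integral digits, but never below 6.
        scale = max(min_s, max_p - (precision - scale))
        precision = max_p
    return precision, scale

print(tsql_divide_type(28, 16, 28, 16))  # LessWTF inputs: scale collapses to 10
print(tsql_divide_type(18, 16, 18, 16))  # NoMoreWTF inputs: scale stays >= 16
```

With DECIMAL(28,16) arguments the result type works out to DECIMAL(38,10), which is why only ten decimal places survive; with DECIMAL(18,16) arguments the result keeps enough scale for ROUND(...,16) to be meaningful.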
@RonPaii apologies - I've made things more complicated, where I'm struggling to grasp a basic same-type Decimal truncation, by mixing types. I see how silly I was here now - I'm an idiot, apologies!
I believe 1/1.95 will compute as SINGLE(1)/SINGLE(1.95) because that is the smallest number size that will fit the two operands on either side of the "/" operator. The "1" could be an INTEGER, but 1.95 must be at least a SINGLE, so LET-coercion makes the "1" into a SINGLE as well. When you get 0.5128200000000000, you got that because SINGLE division gives you six or seven digits at most i.e. 0.512820. The other computations show that the 6th digit is legitimately a 0 and the seventh digit must have been lost to SINGLE mantissa-truncation.
Thanks Doc. Well, I'm stunned that SQL Server converts into data types which are not registered SQL data types. And here is their link for Data Type Precedence (relating to LET coercion).
BEGIN
DECLARE @One AS DECIMAL (28,16) = 1,
@OneDotNineFive AS DECIMAL (28,16) = 1.95;
SELECT CAST(ROUND(@One/@OneDotNineFive,16) AS DECIMAL (28,16)) AS LessWTF;
END;
=> LessWTF: 0.5128205128000000
You might argue that the result's scale of 10 is less than your defined Scale of 16. - Correct.
SQL Server must define the precision and scale of the result before running the actual calculation. It does so by a set of rules ...
Sonic & Ron, thanks for the articles. I've read them countless times, amongst many others, but I'm struggling to see my errors. Basing this on the first example, DECIMAL(28,16) LessWTF, but renaming the variables to @e1 & @e2 for formula references:
Code:
BEGIN
DECLARE
@e1 AS DECIMAL (28,16) = 1,
@e2 AS DECIMAL (28,16) = 1.95;
SELECT CAST(ROUND(@e1/@e2,16) AS DECIMAL(28,16)) AS Dec28p16s;
END
Abiding by the operation rules for e1 / e2:
ResultPrecision = p1 - s1 + s2 + max(6, s1 + p2 + 1)
ResultPrecision = 28 - 16 + 16 + max(6, 16 + 28 + 1)
ResultPrecision = 28 + 45
ResultPrecision = 73 (but forget all that, because we spec'd 28)
ResultScale = max(6, s1 + p2 + 1)
ResultScale = max(6, 16 + 28 + 1)
ResultScale = 45 (but forget all that, because we spec'd 16)
Had we not spec'd the return precision and scale, we would fall into Rule 3, DECIMAL(38,6), I believe. BUT we spec'd DECIMAL(28,16).
So: Integral = 28(p) - 16(s) = 12 spaces to store the integral (the number left of the decimal point).
We had 12 positions to store the value 1 in e1 & e2, so it must be stored as 000 000 000 001.
Scale
16 positions to store the fractional part in s1 & s2:
s1 must be stored as 0 000 000 000 000 000
s2 must be stored as 9 500 000 000 000 000
So e1 must have been stored as 000 000 000 001 . 0 000 000 000 000 000
and e2 as 000 000 000 001 . 9 500 000 000 000 000
Calculation
So, assuming we spec'd a column to hold the result as DECIMAL(28,16), the same as our calcs:
Spaces for the integral - we had plenty of room for overflow on i1 & i2.
Spaces for precision - we had 16 decimal places. As the Windows Calculator yields 1/1.95 = 0.51282051282051282051282051282051 (to 32 dp), we should truncate or banker's-round (who knows?) at digit 16, as pictured?
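The 16-digit cut asked about above can be checked with Python's decimal module (an illustration, not SQL Server): compute 1/1.95 at high working precision, then quantize to 16 decimal places. The 17th digit happens to be 2, so banker's rounding and 4/5 rounding agree here and the tail simply drops.

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 34                       # plenty of working digits
q = Decimal(1) / Decimal('1.95')             # 0.5128205128205128205128205128...
cut = q.quantize(Decimal('1e-16'), rounding=ROUND_HALF_EVEN)
print(cut)  # 0.5128205128205128 -- the 17th digit (2) rounds away cleanly
```

This matches the 1 / CDec(1.95) result from the VBA immediate-window example earlier in the thread.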
When dealing with high-precision math on a computer, you need to be very careful with all calculations. Defining all constants and variables using the required precision helps the compiler choose the correct method. You will find that languages handle these calculations very differently.
Look up numerical computational methods for more information.
I may have "glossed it over." Looking at the chart (per your link), ALL of the DECIMAL conversions start with NUMERIC/DECIMAL or DECIMAL/NUMERIC, which requires the explicit cast. After things are numeric, automatic conversions occur - but that step that crosses between NUMERIC and DECIMAL is not automatic. And that means you lose control if the initial conversion flips you into something smaller than you actually needed for those digits to reach the desired precision.