Question Big Number Rounding

Privateer

Registered User.
Local time
Today, 02:45
Joined
Aug 16, 2011
Messages
193
When I enter a number equal to or over ten million, the last two digits get rounded and I don't want them to be. My field is set up as a Single, standard format, two decimals. When I tested it I typed in 123456789, and what I got was 123,456,800.00.

enter 9999999 get 9,999,999.00
enter 10000199 get 10,000,200.00
enter 10000001.75 get 10,000,000.00

What the hell is going on? Single is supposed to handle up to 10^38

I have checked my "Region and Language" settings for the OS (W7) and there is nothing in there about rounding or maximum number size. I would like to leave the data type at single for the space considerations, and because it should work as single. Any help would be appreciated.
 
What the hell is going on? Single is supposed to handle up to 10^38
And it does. But the precision is relative to the size of the number, because of the way floating-point numbers are represented internally. You seem to be expecting that 10^37 <> 10^37 + 1, but the machine will not see any difference. For the Single data type, 6-7 significant digits is all you can expect. Anything beyond that is same-same to the machinery.
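You can see the same effect outside Access. Here is a quick sketch in Python, using the standard struct module to round-trip a number through a 32-bit IEEE float (the same representation Access uses for Single):

```python
import struct

def as_single(x):
    """Round-trip a number through a 32-bit IEEE float,
    the same representation Access uses for Single."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(as_single(9999999))      # 9999999.0   - still exact
print(as_single(123456789))    # 123456792.0 - last digits lost
print(as_single(10000001.75))  # 10000002.0  - the .75 is gone
```

Note the stored value isn't even the rounded 123,456,800 Access displays; the float itself can only hold 123456792, and Access then formats it to 7 significant digits.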

People are often surprised that the machine can also decide something like 0.01 + 0.01 + 0.01 + 0.01 <> 0.04 - again an issue of precision.
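The classic demonstration of this, here in Python (whose plain floats are Doubles, so the drift is smaller but still there):

```python
# 0.1 has no exact binary representation, so repeated
# addition drifts away from the exact decimal answer.
total = 0.1 + 0.1 + 0.1
print(total)          # 0.30000000000000004
print(total == 0.3)   # False
```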
 
Yeah, I am surprised, I really do not understand the concept of precision. To get this to work I have changed my data types for large numbers to double and that seems to work. Decimal also worked, but I went with double. Thanks for the feedback.
 
I would use Currency, if you only need 2 decimal places.

Other real number types can give round-off errors.
 
Type "data type" into Access 2010 help for a wealth of information regarding data storage.
 
It would appear that you need a little help on the concept of representation.

Computers represent a number in a fixed-size container. You choose the size of the container based on calling it a BYTE, WORD, LONG, QUAD, SINGLE, DOUBLE, etc. (There are other keywords than these, I'm just picking a few.)

Picking the keyword does TWO things. FIRST - it sets the maximum number of digits you can display. SECOND - it defines the place where fractions go. I'd call it the decimal point except you are in binary for these cases. So forgive the mixed metaphor when I use common language to make the next key point.

When you use any of the integer types BYTE, WORD, LONG, QUAD - you set the decimal point all the way to the right, leaving no room for an actual fraction. When you use any of the scientific types, you set the decimal point all the way to the left and then use a scaling mechanism to dynamically redefine the current location of that decimal point. But again, you have a limited number of digits.

Here is a rule of thumb to remember: 2^10 is approximately equal to 10^3 - for quick and dirty scaling purposes. That is, 10 bits gives you 3 decimal digits. So...

BYTE - has 8 bits. 2^8 is less than 2^10 so the limit will be less than 10^3. Therefore, you have less than 3 full digits.

WORD - has 16 bits. 2^16 is more than 2^10 but less than 2^20, so the limit will be more than 10^3 but less than 10^6. Therefore you have > 3 digits but less than 6.

LONG - 32 bits. More than 9 digits but less than 12.

QUAD - 64 bits. More than 18 digits but less than 21.
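The rule of thumb above can be checked directly: the exact number of decimal digits in n bits is n × log10(2) ≈ n × 0.301, which is where "10 bits ≈ 3 digits" comes from. A quick Python check of the integer sizes:

```python
import math

# n bits hold n * log10(2) decimal digits of information
for name, bits in [('BYTE', 8), ('WORD', 16), ('LONG', 32), ('QUAD', 64)]:
    digits = bits * math.log10(2)
    print(f'{name:5} {bits:3} bits -> about {digits:.1f} digits')
```

This gives roughly 2.4, 4.8, 9.6, and 19.3 digits, consistent with the bracketing ranges above.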

When we get to SINGLE and DOUBLE, you need to know that 1 bit is used for the sign, some bits are used for the scaling (the exponent: 8 bits for SINGLE, 11 for DOUBLE), and the rest of the number is used for digit representation (a.k.a. the mantissa).

For SINGLE, you have 32 bits minus the 9 bits of overhead leaving you 23 bits. So you have > 2^20 but less than 2^30, which gives you > 10^6 (6 digits) but < 10^9 (9 digits).

For DOUBLE, the overhead is 1 + 11 = 12 bits, so you have 64 - 12 = 52 bits. More than 15 digits but less than 18.
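Applying the same n × log10(2) estimate to the mantissa widths (23 bits plus 1 implicit leading bit for SINGLE, 52 plus 1 for DOUBLE - the implicit bit is a wrinkle the quick-and-dirty method above ignores):

```python
import math

# effective mantissa bits: 24 for Single, 53 for Double
print(24 * math.log10(2))   # ~7.2 digits - why Single dies past 7 digits
print(53 * math.log10(2))   # ~15.9 digits for Double
```

That ~7.2 figure matches exactly what Privateer saw: 9,999,999 (7 digits) was fine, 123456789 (9 digits) was not.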

Like I said, this is quick and dirty, but it is the easy way to pick representation sizes. There are various wrinkles to add to this, including the harsh reality that decimal fractions are sometimes impossible to represent in binary computers. Just as 1/3 in decimal is 0.33333...3333 to infinity, 1/10 in binary is 0.0001100110011...00110011 to infinity. (I think it is something like that.) So when you truncate a binary fraction to however many bits you had, you perform binary rounding. When you convert that fraction back to where you wanted it, it is no longer exact.
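You can see exactly what that truncated binary fraction turns back into, using Python's decimal module to print the full value actually stored for 0.1:

```python
from decimal import Decimal

# Decimal(0.1) shows the exact value of the 64-bit float nearest to 0.1
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```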
 
Great explanation, Doc.
One thing I would add is that you can use an integer type and do your own scaling. On a project for Clairol (hair care products), I had to use formulas to calculate materials required for various batch sizes. Some materials were used in such large volumes that we only needed two decimal places of precision but others such as dyes and fragrances were used in minute amounts and needed eight decimal places. Rather than use two fields (SAP isn't exactly user friendly), we used a single integer field and scaled depending on the material type.
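That scheme can be sketched like this (Python; the material types and scale factors are made up for illustration, not the actual Clairol data):

```python
# Hypothetical decimal places per material type
SCALE = {'bulk': 2, 'dye': 8, 'fragrance': 8}

def store(material_type, amount):
    """Store an amount as a scaled integer in a single field."""
    return round(amount * 10 ** SCALE[material_type])

def fetch(material_type, stored):
    """Recover the decimal amount from the stored integer."""
    return stored / 10 ** SCALE[material_type]

print(store('bulk', 1234.56))   # 123456
print(fetch('dye', 125))        # 1.25e-06
```

The scale lives in the material-type lookup, so one integer field serves both the tonnage materials and the trace ingredients.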
 
Thanks, Pat.

Privateer, I might add that technically, CURRENCY is a pretty big integer that uses scaling to achieve a decimal point at some position other than at the far right. However, it is treated like an integer internally. CURRENCY might be the preferred way to go but it takes up a lot of space in your data records.
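A sketch of that idea in Python, assuming Currency's actual layout (VBA's Currency is a 64-bit integer holding a count of ten-thousandths, i.e. 4 fixed decimal places):

```python
def to_currency(x):
    """Store as an integer count of ten-thousandths (4 fixed decimals)."""
    return round(x * 10_000)

def from_currency(n):
    return n / 10_000

# Privateer's problem value survives intact - there is no
# floating-point mantissa to run out of digits.
n = to_currency(10000001.75)
print(n)                 # 100000017500
print(from_currency(n))  # 10000001.75
```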
 
Currency is also limited to 4 decimal places but it is far less trouble than working with single or double so I use it whenever I can despite the extra space.
 
