It would appear that you need a little help on the concept of representation.
Computers represent a number in a fixed-size container. You choose the size of the container by the keyword you declare it with: BYTE, WORD, LONG, QUAD, SINGLE, DOUBLE, etc. (There are other keywords than these; I'm just picking a few.)
Picking the keyword does TWO things. FIRST - it sets the maximum number of digits you can store. SECOND - it defines the place where fractions go. I'd call it the decimal point, except you are in binary for these cases. So forgive the mixed metaphor when I use common language to make the next key point.
When you use any of the integer types BYTE, WORD, LONG, QUAD, you set the decimal point all the way to the right, leaving no room for an actual fraction. When you use either of the scientific (floating-point) types SINGLE or DOUBLE, you set the decimal point all the way to the left and then use a scaling mechanism (the exponent) to dynamically redefine the current location of that decimal point. But either way, you have a limited number of digits.
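To make that concrete, here is a tiny sketch in C. I'm using C's fixed-width types as stand-ins for the keywords above, since your language may spell them differently: integer division throws the fraction away, while the floating-point version keeps it.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Integer container: the "decimal point" is pinned at the far
           right, so the fractional part of the result is thrown away. */
        int32_t i = 7 / 2;        /* 3, not 3.5 */

        /* Floating-point container: the exponent slides the point
           around, so the fraction survives. */
        double d = 7.0 / 2.0;     /* 3.5 */

        printf("7 / 2 as a 32-bit integer = %d\n", (int)i);
        printf("7 / 2 as a double         = %g\n", d);
        return 0;
    }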
Here is a rule of thumb to remember: 2^10 is approximately equal to 10^3 - for quick and dirty scaling purposes. That is, 10 bits gives you 3 decimal digits. So...
BYTE - has 8 bits. 2^8 is less than 2^10 so the limit will be less than 10^3. Therefore, you have less than 3 full digits.
WORD - has 16 bits. 2^16 is more than 2^10 but less than 2^20, so the limit will be more than 10^3 but less than 10^6. Therefore you have > 3 digits but less than 6.
LONG - 32 bits. More than 9 digits but less than 12.
QUAD - 64 bits. More than 18 digits but less than 21.
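If you want to check those digit counts against a real machine, here's a quick C sketch (again, C's unsigned fixed-width maximums standing in for the keywords above). Count the digits in each maximum value and they line up with the rule of thumb.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Largest value each unsigned container can hold; the digit
           count shows the 2^10 ~ 10^3 rule of thumb in action. */
        printf("BYTE (8 bits):  %llu\n", (unsigned long long)UINT8_MAX);  /* 255                  ->  3 digits */
        printf("WORD (16 bits): %llu\n", (unsigned long long)UINT16_MAX); /* 65535                ->  5 digits */
        printf("LONG (32 bits): %llu\n", (unsigned long long)UINT32_MAX); /* 4294967295           -> 10 digits */
        printf("QUAD (64 bits): %llu\n", (unsigned long long)UINT64_MAX); /* 18446744073709551615 -> 20 digits */
        return 0;
    }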
When we get to SINGLE and DOUBLE, you need to know that 1 bit is used for the sign and some bits are used for the scaling (the exponent): 8 of them in a SINGLE and 11 in a DOUBLE. The rest of the bits are used for digit representation (a.k.a. the mantissa).
For SINGLE, you have 32 bits minus the 9 bits of overhead leaving you 23 bits. So you have > 2^20 but less than 2^30, which gives you > 10^6 (6 digits) but < 10^9 (9 digits).
For DOUBLE the overhead is 1 + 11 = 12 bits, so you have 64 - 12 = 52 bits. More than 15 digits but less than 18.
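You don't have to take my arithmetic on faith; in C the compiler publishes these numbers in <float.h>. (The mantissa counts it reports include the implied leading 1 bit, so they read 24 and 53 rather than the stored 23 and 52.)

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* The compiler's own answer to "how many digits do I really get?" */
        printf("SINGLE: %d mantissa bits, %d reliable decimal digits\n",
               FLT_MANT_DIG, FLT_DIG);   /* typically 24 and 6 */
        printf("DOUBLE: %d mantissa bits, %d reliable decimal digits\n",
               DBL_MANT_DIG, DBL_DIG);   /* typically 53 and 15 */
        return 0;
    }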
Like I said, this is quick and dirty, but it is the easy way to pick representation sizes. There are various wrinkles to add to this, including the harsh reality that decimal fractions are sometimes impossible to represent in binary computers. Just as 1/3 in decimal is 0.33333... to infinity, 1/10 in binary is 0.000110011001100110011... with the 0011 pattern repeating to infinity. So when you truncate that binary fraction to however many bits you have, you are doing binary rounding, and when you convert the result back to decimal, it is no longer exactly the value you started with.
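Here's a short C demonstration of that last point: print 0.1 with enough digits and you can see the rounding, and add ten of them and you don't quite land on 1.0.

    #include <stdio.h>

    int main(void)
    {
        /* 1/10 has no finite binary representation, so the stored 0.1
           is already rounded.  Ask for enough digits and it shows. */
        printf("0.1 stored as a double: %.20f\n", 0.1);

        /* Add ten of them and the accumulated rounding error keeps
           you from landing exactly on 1.0. */
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;
        printf("ten 0.1s summed:        %.20f\n", sum);
        return 0;
    }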