Refers to a type of
floating-point number that has more
precision (that is, more significant digits) than a
single-precision number. The term
double precision is something of a misnomer because the precision is not really double. The word
double derives from the fact that a double-precision number uses twice as many
bits as a regular floating-point number. For example, if a single-precision number requires 32 bits, its double-precision counterpart will be 64 bits long.
The extra bits increase not only the precision but also the range of magnitudes that can be represented. How much each increases depends on the format the program uses to represent floating-point values. Most computers use a standard format known as the IEEE 754 floating-point format.
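In the IEEE 754 formats, for example, the 32 bits of a single-precision value split into 1 sign bit, 8 exponent bits, and 23 fraction bits, while the 64 bits of a double split into 1 sign bit, 11 exponent bits, and 52 fraction bits; the wider exponent is what extends the range, and the wider fraction is what extends the precision. A short C sketch, assuming this standard binary64 layout, pulls a double apart into those three fields:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* Assuming the common IEEE 754 binary64 layout:
       1 sign bit, 11 exponent bits, 52 fraction bits. */
    double d = -6.25;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);          /* reinterpret the bytes */

    uint64_t sign     = bits >> 63;
    uint64_t exponent = (bits >> 52) & 0x7FF;        /* 11 bits, biased */
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;   /* 52 bits */

    printf("sign=%llu  exponent=%llu (biased)  fraction=0x%013llx\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (unsigned long long)fraction);
    return 0;
}
```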