
double precision

Refers to a type of floating-point number that has more precision (that is, more significant digits) than a single-precision number. The term double precision is something of a misnomer because the precision is not exactly doubled. The word double derives from the fact that a double-precision number uses twice as many bits as a regular (single-precision) floating-point number. For example, if a single-precision number requires 32 bits, its double-precision counterpart will be 64 bits long.
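
As a rough illustration, the following C sketch simply prints the storage size of each type. It assumes a typical modern platform where float is the 32-bit single-precision type and double is the 64-bit double-precision type; the C standard leaves the exact sizes implementation-defined.

    #include <stdio.h>

    int main(void) {
        /* On most modern platforms, float is the single-precision type
           and double is the double-precision type. */
        printf("float:  %zu bits\n", sizeof(float) * 8);   /* typically 32 */
        printf("double: %zu bits\n", sizeof(double) * 8);  /* typically 64 */
        return 0;
    }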

The extra bits increase not only the precision but also the range of magnitudes that can be represented. The exact increase in precision and range depends on the format the program uses to represent floating-point values. Most computers use the standard IEEE 754 floating-point format.
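
The sketch below, again assuming a platform that follows IEEE 754, uses the constants defined in the standard C header <float.h> to show how both the number of reliable decimal digits and the largest representable magnitude grow when moving from single to double precision (typically about 6 digits and 3.4e38 for float versus 15 digits and 1.8e308 for double).

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* Decimal digits that can be stored reliably in each type
           (typically 6 for float and 15 for double under IEEE 754). */
        printf("float digits:  %d\n", FLT_DIG);
        printf("double digits: %d\n", DBL_DIG);

        /* Largest finite magnitude each type can represent
           (roughly 3.4e38 for float and 1.8e308 for double under IEEE 754). */
        printf("float max:  %e\n", FLT_MAX);
        printf("double max: %e\n", DBL_MAX);
        return 0;
    }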