double precision

Refers to a type of floating-point number that has more precision (that is, more significant digits) than a single-precision number. The term double precision is something of a misnomer because the precision is not really doubled. The word double derives from the fact that a double-precision number uses twice as many bits as a regular floating-point number. For example, if a single-precision number requires 32 bits, its double-precision counterpart will be 64 bits long.
The extra bits increase not only the precision but also the range of magnitudes that can be represented. The exact amount by which the precision and range are increased depends on the format the program uses to represent floating-point values. Most computers use a standard format known as the IEEE 754 floating-point format, in which a single-precision number carries about 7 significant decimal digits and a double-precision number about 15 to 16.
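The difference is easy to observe in practice. The sketch below (in Python, using only the standard library) simulates single precision by packing a value into a 32-bit IEEE 754 encoding and unpacking it again; the helper name `to_single` is our own, introduced for illustration:

```python
import struct
import sys

def to_single(x: float) -> float:
    """Round a Python float (a 64-bit IEEE 754 double) to the nearest
    single-precision value by packing it into 32 bits and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 0.1 has no exact binary representation; single precision keeps
# roughly 7 significant decimal digits, double roughly 15-16.
print(f"{to_single(0.1):.20f}")  # 0.10000000149011611938
print(f"{0.1:.20f}")             # 0.10000000000000000555

# The wider exponent also extends the range of magnitudes:
# single precision tops out near 3.4e38, double near 1.8e308.
print(to_single(3.4e38))   # still finite: near the single-precision maximum
print(sys.float_info.max)  # 1.7976931348623157e+308
```

Note that values a power of two (such as 0.5) survive the round trip exactly; it is only values that need more significand bits than single precision provides that lose digits.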