double precision

Refers to a type of floating-point number that has more precision (that is, more significant digits) than a single-precision number. The term double precision is something of a misnomer because the precision is not exactly doubled. The word double derives from the fact that a double-precision number uses twice as many bits as a regular (single-precision) floating-point number. For example, if a single-precision number requires 32 bits, its double-precision counterpart will be 64 bits long.
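A minimal sketch of this size difference, assuming a platform where float is a 32-bit single-precision type and double is a 64-bit double-precision type (true of most modern systems); the digit counts come from the standard float.h macros:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* Storage size and approximate decimal precision of each type. */
    printf("float:  %zu bits, ~%d decimal digits\n",
           sizeof(float) * 8, FLT_DIG);   /* typically 32 bits, 6 digits  */
    printf("double: %zu bits, ~%d decimal digits\n",
           sizeof(double) * 8, DBL_DIG);  /* typically 64 bits, 15 digits */

    /* The same value stored at each precision: the double keeps
       many more correct digits of 1/3 than the float does. */
    float  f = 1.0f / 3.0f;
    double d = 1.0  / 3.0;
    printf("float : %.20f\n", (double)f);
    printf("double: %.20f\n", d);
    return 0;
}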
The extra bits increase not only the precision but also the range of magnitudes that can be represented. How much the precision and range increase depends on the format the program uses to represent floating-point values. Most computers use the standard IEEE 754 floating-point format.
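A short sketch of the wider range, again assuming the common IEEE 754 binary32/binary64 formats; the limits are reported by the float.h macros:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* Smallest normalized and largest finite value of each type. */
    printf("float  range: %e to %e\n", FLT_MIN, FLT_MAX);  /* ~1.2e-38  to ~3.4e+38  */
    printf("double range: %e to %e\n", DBL_MIN, DBL_MAX);  /* ~2.2e-308 to ~1.8e+308 */
    return 0;
}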