Double Precision

Refers to a type of floating-point number that has more precision (that is, more significant digits) than a single-precision number. The term double precision is something of a misnomer, because the precision is not exactly doubled; the word double refers instead to the fact that a double-precision number uses twice as many bits as a regular (single-precision) floating-point number. For example, if a single-precision number requires 32 bits, its double-precision counterpart will be 64 bits long.
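
As a rough illustration, here is a small C sketch, assuming the common case where C's float is the single-precision type (32 bits) and double is the double-precision type (64 bits); the exact sizes and digit counts can vary by platform:

```c
#include <stdio.h>

int main(void) {
    /* On most platforms, float is single precision (32 bits) and
       double is double precision (64 bits). */
    printf("float:  %zu bits\n", sizeof(float) * 8);   /* typically 32 */
    printf("double: %zu bits\n", sizeof(double) * 8);  /* typically 64 */

    /* The extra bits show up as extra significant digits. */
    float  f = 1.0f / 3.0f;
    double d = 1.0 / 3.0;
    printf("float : %.20f\n", f);   /* accurate to roughly 7 digits  */
    printf("double: %.20f\n", d);   /* accurate to roughly 16 digits */
    return 0;
}
```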

The extra bits increase not only the precision but also the range of magnitudes that can be represented. Exactly how much each improves depends on the format the program uses to represent floating-point values; most computers follow the IEEE 754 floating-point standard.
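
To make the difference in precision and range concrete, the following C sketch prints the limits reported in the standard <float.h> header; the exact values depend on the platform, but the figures shown in the comments are the usual IEEE 754 ones:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* Significant decimal digits each format can reliably represent. */
    printf("float  digits: %d\n", FLT_DIG);   /* usually 6  */
    printf("double digits: %d\n", DBL_DIG);   /* usually 15 */

    /* Largest finite magnitude each format can hold. */
    printf("float  max: %e\n", FLT_MAX);      /* about 3.4e+38  */
    printf("double max: %e\n", DBL_MAX);      /* about 1.8e+308 */
    return 0;
}
```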
