Definition
1. A floating-point number with more precision than a single-precision number. A double-precision number uses twice as many bits as a single-precision number: if a single-precision number occupies 32 bits, its double-precision counterpart occupies 64 bits. The extra bits increase not only the precision but also the range of magnitudes that can be represented. Most computers follow the IEEE 754 floating-point standard (see the code sketch after the definitions).
2. Double-precision floating-point format is a computer number format that occupies 8 bytes (64 bits) in computer memory and represents a wide dynamic range of values by using a floating radix point. It usually refers to binary64, as specified by the IEEE 754 standard, not to the 64-bit decimal format decimal64.
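A minimal C sketch illustrating the definitions above, assuming a typical platform where float is IEEE 754 binary32 (single precision) and double is IEEE 754 binary64 (double precision); the exact sizes and limits are implementation-defined and come from <float.h>:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* Size: double commonly uses twice as many bytes as float (8 vs. 4). */
    printf("sizeof(float)  = %zu bytes\n", sizeof(float));
    printf("sizeof(double) = %zu bytes\n", sizeof(double));

    /* Precision: decimal digits of precision and machine epsilon. */
    printf("FLT_DIG = %d, FLT_EPSILON = %e\n", FLT_DIG, FLT_EPSILON);
    printf("DBL_DIG = %d, DBL_EPSILON = %e\n", DBL_DIG, DBL_EPSILON);

    /* Range: double also represents a much wider range of magnitudes. */
    printf("FLT_MAX = %e\n", FLT_MAX);
    printf("DBL_MAX = %e\n", DBL_MAX);
    return 0;
}

On a binary32/binary64 platform this typically reports 4 and 8 bytes, about 6 versus 15 decimal digits of precision, and maximum magnitudes near 3.4e38 versus 1.8e308.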
Abbreviation
Synonyms
Superterms
floating-point format
Subterms
Sources
1. http://www.webopedia.com/TERM/D/double_precision.html
2. https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Author: Laura Saupe