Very popular with engineers, scientists, universities, and high school students, floating point numbers have been around for a while, ever since IBM installed the IBM 704 mainframe in 1954 employing a 36 bit word, and then in 1957 released the high level Fortran programming language to run on it. Called a REAL or FLOAT variable, as opposed to an INTEGER, it was defined by the IEEE in 1985 as a 32 bit or single precision floating point variable; its 8 bit "exponent" allows magnitudes up to 10^{38}. It can thus be a very large number, or a very tiny one, with up to 37 zeroes following the decimal point. However, note that beside those leading / trailing zeroes it contains only 6~7 significant digits (known as its "mantissa").

In the 1960s IBM increased its size to what became known as a 64 bit or LONG REAL variable, now with 15~16 significant digits. It was introduced on the IBM 7094 in 1962, initially as a 62 bit number, becoming 64 bits on the System/360 Model 44 launched in 1965, and was defined as Double Precision in later Fortran manuals. In 1964 a competitor, Control Data, released one of the early "supercomputers", the CDC 6600, a 60 bit computer that used an 11 bit exponent, allowing magnitudes up to 10^{308}. However, for IBM and most hardware manufacturers, the 8 bit exponent remained the norm.

In historic terms of expense, the CDC 6600 cost US$1 million to purchase in 1964 currency, as opposed to $360,000 in 1965 for the IBM 360/44. In terms of power, the CDC's CPU apparently ran at 40 MHz with a benchmark of 3 million FLOPS (Floating Point Operations Per Second). In contrast, today a typical laptop's CPU performs at perhaps about 100 gigaflops (billion), and a typical GPU (graphics card) can handle 1-10 teraflops (trillion)...
In April 1965, with transistor density on an integrated circuit at 50 components and the distance between transistors at one twentieth of a millimetre, Gordon Moore at Fairchild Semiconductor said that, with development causing that density to double yearly, by 1975 chip density could be at 65,000. In 1971 his newly formed company Intel released the 4004 CPU chip, which contained 2,300 components; "Moore's Law" it was being called. But almost immediately the acceleration halted (10,000 was the figure in 1975), and Gordon Moore revised his expectation to a doubling "every two years". Then it sped up again, so he reworded it to "every 18 months". In 2020, with the distance between components approaching the size of atoms and a transistor density of 100 million per square millimetre, yes, what comes next?

Back to 1980. A major advance utilizing the 11 bit exponent and a 53 bit mantissa was the Intel 8087 math coprocessor launched in 1980, which also had a proprietary 80 bit format used for temporary intermediate values. The unit remained optional on IBM-compatible PCs until 1989 and the release of the i486 chip, and has been integrated ever since. It was employed with:

- dBASE II databases in 1979, with values ranging from 1.8e+63 to 1e-63, and accuracy up to 10 digits.
- Lotus 1-2-3 Spreadsheets in 1983
- dBASE III Plus in 1986, and DBXL, a dBASE compatible. Section 5.5 of the DBXL manual specifies that numeric values may range from 1e+308 (i.e. 10^{308}) to 1e-307, and up to 19 significant digits may be stored in a numeric field, with one digit reserved for a decimal point. The maximum length of the integer portion is 16 digits, of the decimal portion 15 digits. If the number exceeds 16 digits, just 13 significant digits are retained.
- Microsoft Excel in 1987 and Google Sheets in 2006
- Microsoft SQL Server databases in 1989 and MySQL in 1995. Current SQL Server databases allow for 38 significant digits.
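The 15~16 digit ceiling shared by these products is easy to see in practice. A minimal Python sketch (not product-specific; it simply exercises the IEEE 754 doubles these tools use):

```python
# IEEE 754 doubles, as used by the spreadsheet and database products above,
# keep only about 15-17 significant decimal digits.
x = 1.2345678901234567890   # 20 digits supplied
print(repr(x))              # only the first ~17 survive

# Beyond 2**53 (about 9.0e15), consecutive integers are no longer distinct:
big = 1e16
print(big + 1 == big)       # True
```

This is why, for example, a spreadsheet silently truncates a 20-digit number typed into a cell.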
Extract from an article on stackoverflow.com:

What's the difference between a single precision (32-bit) and double precision (64-bit) floating point operation?

I read a lot of answers but none seems to correctly explain where the word double comes from. I remember a very good explanation given by a university professor I had some years ago. A single precision floating point representation uses a word of 32 bits:

- 1 bit for the **sign**, S
- 8 bits for the **exponent**, 'E' (*including its exponent bias of 127*)
- 24 bits for the **fraction**, also called mantissa or coefficient (even though just 23 are represented). Let's call it 'M' (for mantissa; I prefer this name as "fraction" can be misunderstood).
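The layout above can be inspected directly. A sketch using only the Python standard library, unpacking a 32-bit float into the S / E / M fields (0.15625 is chosen because it is exactly representable):

```python
import struct

# Reinterpret the 4 bytes of a float32 as an unsigned integer.
bits = struct.unpack('>I', struct.pack('>f', 0.15625))[0]

sign     = bits >> 31              # 1 bit
exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF         # 23 stored bits (leading 1 is implicit)

print(sign, exponent - 127, hex(mantissa))   # 0 -3 0x200000

# Reconstruct: value = (-1)^S * (1 + M/2^23) * 2^(E-127)
value = (-1)**sign * (1 + mantissa / 2**23) * 2**(exponent - 127)
print(value)                                 # 0.15625
```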

A double precision floating point representation uses a word of 64 bits:

- 1 bit for the **sign**, S
- 11 bits for the **exponent**, 'E' (*including its exponent bias of 1023*)
- 53 bits for the **fraction / mantissa / coefficient** (even though only 52 are represented), 'M'
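The same unpacking works for the 64-bit layout, here applied to 237.141 (a value used in the examples below); a standard-library sketch:

```python
import struct

# Reinterpret the 8 bytes of a float64 as an unsigned integer.
bits = struct.unpack('>Q', struct.pack('>d', -237.141))[0]

sign     = bits >> 63                # 1 bit
exponent = (bits >> 52) & 0x7FF      # 11 bits, biased by 1023
mantissa = bits & ((1 << 52) - 1)    # 52 stored bits (leading 1 implicit)

print(sign, exponent - 1023)         # 1 7, i.e. -1.85... * 2^7

# Reconstruct: value = (-1)^S * (1 + M/2^52) * 2^(E-1023)
value = (-1)**sign * (1 + mantissa / 2**52) * 2**(exponent - 1023)
print(value)                         # -237.141
```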

The mantissa is stored in normalized form; in base 10, for example:

- 0.000124 becomes 0.124 x 10^{-3}
- 237.141 becomes 0.237141 x 10^{3}
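The examples above normalize in base 10, but the hardware normalizes in base 2. Python's `math.frexp` exposes the same idea: it splits a float into a mantissa m with 0.5 <= m < 1 and an exponent e, such that the value equals m * 2**e.

```python
import math

# Same two example values, normalized in base 2 instead of base 10.
m1, e1 = math.frexp(0.000124)   # m1 * 2**e1 == 0.000124, e1 == -12
m2, e2 = math.frexp(237.141)    # m2 * 2**e2 == 237.141,  e2 == 8

print(m1, e1)
print(m2, e2)
```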

The number of significant decimal digits follows from the mantissa size:

- **single precision**: log_{10}(2^{23}), which is about 6~7 decimal digits; numeric range ±10^{-38} to 10^{38}. (*Noting that the original IBM mainframe in 1954, with its 36 bit word, enabled the mantissa to be log*_{10}(2^{27}), *a full 8 decimal digits.*)
- **double precision**: log_{10}(2^{52}), which is about 15~16 decimal digits; numeric range ±10^{-308} to 10^{308}
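A quick check of those digit counts: a b-bit binary mantissa resolves about b * log10(2) decimal digits.

```python
import math

# Decimal digits of precision for each mantissa width mentioned above.
for name, bits in [("single (23-bit mantissa)", 23),
                   ("IBM 704 (27-bit mantissa)", 27),
                   ("double (52-bit mantissa)", 52)]:
    print(f"{name}: {bits * math.log10(2):.2f} decimal digits")
# single: 6.92, IBM 704: 8.13, double: 15.65
```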