Very popular with engineers, scientists, universities and high school students, floating point numbers have been around for a while, ever since IBM installed the IBM 704 mainframe in 1954 with its 36 bit word, and then in 1957 released the high level Fortran programming language to run on it. In Fortran they are called REAL or FLOAT variables, as opposed to INTEGERs.

The IEEE defined it in 1985 as a 32 bit or single precision floating point variable; its 8 bit "exponent" allows the number to be as large as about 10^38. It can thus be a very large number, or a very tiny one, with up to 37 zeroes following the decimal point. However, note that beside those leading / trailing zeroes, it holds only 6 to 7 significant digits, stored in its "mantissa" (see the short code sketch after the list below).

In the 1960s IBM increased its size to what became known as a 64 bit or LONG REAL variable, now with 15 to 16 significant digits. It was introduced on the IBM 7094 in 1962, initially as a 62 bit number, becoming 64 bits on the System/360 Model 44 launched in 1965, and was defined as Double Precision in later Fortran manuals.

Extract from an article on stackoverflow.com: "What's the difference between a single precision (32-bit) and double precision (64-bit) floating point operation? I read a lot of answers but none seems to correctly explain where the word double comes from. I remember a very good explanation given by a University professor I had some years ago. A single precision floating point representation uses a word of 32 bit."

In 1964 a competitor, Control Data, released one of the early "super computers", the CDC 6600, a 60 bit computer that used an 11 bit exponent, allowing the number to be as large as about 10^308. For IBM and most other hardware manufacturers, however, the 8 bit exponent remained the norm.

For the historical record, in terms of expense the CDC 6600 cost US$1 million to purchase in 1964 currency, as opposed to $360,000 in 1965 for the IBM 360/44. In terms of power, the CDC's CPU apparently ran at 40 MHz with a benchmark of 3 million FLOPS (Floating Point Operations per second). In contrast, today a typical laptop's CPU performs at perhaps about 100 Gigaflops (billion), and a typical GPU (graphics card) can handle 1 to 10 Teraflops (trillion).

...

In April 1965, with transistor density on an integrated circuit at 50 components and the distance between transistors at one twentieth of a millimetre, Gordon Moore at Fairchild Semiconductor said that, with development causing that density to double yearly, by 1975 chip density could reach 65,000. In 1971 his newly formed company Intel released the 4004 CPU chip, containing 2,300 components, and the trend was by now being called Moore's Law. But almost immediately the acceleration halted: 10,000 was the figure in 1975, and Gordon Moore revised his expectation to a doubling "every two years". Then it sped up again, and the figure was reworded to "every 18 months". In 2020, as the distance between components approaches the size of atoms and transistor density reaches 100 million per square millimetre, yes, what comes next?

Back to 1980. A major advance utilizing the 11 bit exponent and a 53 bit mantissa was the Intel 8087 math coprocessor, launched in 1980, which also had a proprietary 80 bit format used for temporary intermediate values. The unit remained optional on IBM-compatible PCs until the release of the i486 chip in 1989, and has been integrated into the CPU ever since.

Floating point went on to be employed with:
- dBASE II databases in 1979, with values ranging from 1.8e+63 to 1e-63, and accuracy up to 10 digits.
- Lotus 1-2-3 Spreadsheets in 1983
- dBASE III Plus in 1986, and DBXL, a dBASE compatible. Section 5.5 of the DBXL manual specifies that numeric values may range from 1e+308 (i.e. 10^308) to 1e-307, and that up to 19 significant digits may be stored in a numeric field, with one digit reserved for a decimal point. The maximum length of the integer portion is 16 digits, and of the decimal portion 15 digits. If the number exceeds 16 digits, just 13 significant digits are retained.
- Microsoft Excel in 1987 and Google Sheets in 2006
- Microsoft SQL Server in 1989 and MySQL in 1995 databases. Current SQL Server databases allow for up to 38 significant digits.
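The single and double precision limits quoted above are easy to check on a modern machine. Below is a minimal sketch in C, using the standard float.h limits; the values shown in the comments are typical of IEEE 754 hardware, and the exact output formatting will vary by compiler and platform.

```c
/* Minimal sketch: single vs double precision limits via <float.h>. */
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* 32-bit single precision: 8-bit exponent, 24-bit significand */
    printf("float : %d significant digits, max %e, min %e\n",
           FLT_DIG, FLT_MAX, FLT_MIN);   /* 6 digits, ~3.4e+38, ~1.2e-38 */

    /* 64-bit double precision: 11-bit exponent, 53-bit significand */
    printf("double: %d significant digits, max %e, min %e\n",
           DBL_DIG, DBL_MAX, DBL_MIN);   /* 15 digits, ~1.8e+308, ~2.2e-308 */

    /* The 8087's 80-bit extended format survives as long double on many
     * x86 compilers (not all: e.g. MSVC maps long double to double). */
    printf("long double: %d significant digits\n", LDBL_DIG);

    /* Only 6-7 of these digits survive a round trip through a float */
    float f = 123456789.0f;
    printf("123456789 stored as float: %.1f\n", f);  /* 123456792.0 */
    return 0;
}
```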
It is worth noting that the original IBM mainframe of 1954, with its 36 bit word, gave the mantissa a precision of log10(2^27), which is a full 8 decimal digits.
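As a quick cross-check of that figure, the usual rule of thumb is that a mantissa of n bits carries roughly log10(2^n) = n x log10(2) decimal digits. The small C sketch below applies it to the 24 bit IEEE single, 27 bit IBM 704 and 53 bit IEEE double significands mentioned above.

```c
/* Rough decimal precision of a binary mantissa:
 * digits ~= log10(2^bits) = bits * log10(2).
 * Compile with: cc digits.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    int bits[] = {24, 27, 53};  /* IEEE single, IBM 704, IEEE double */
    for (int i = 0; i < 3; i++)
        printf("%2d mantissa bits -> %5.2f decimal digits\n",
               bits[i], bits[i] * log10(2.0));
    /* prints roughly 7.22, 8.13 and 15.95 */
    return 0;
}
```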