v1.0.5 / 01 may 03 / greg goebel / public domain
* One of the common misunderstandings among computer users is a certain
faith in the infallibility of numerical computations. That is, if you
multiply, say:
   3 * (1/3)

-- you expect to get exactly 1 as the answer.  On a computer, however, the
answer may come back as something like 0.99999999.  This seems to clearly
indicate a bug in the system, and it's a bigger shock to find out that, no,
that's the way it's supposed to work.  This document explains such issues
in detail.
* Nearly all computer users understand the concept of a "bit", or in
computer terms, a 1 or 0 value encoded by the setting of a switch. It's not
much more difficult to see that if you take two bits, you can use them to
represent four unique states:
00 01 10 11
Similarly, three bits can be used to represent eight unique states:

   000 001 010 011 100 101 110 111

-- and so on:  every bit you add doubles the number of states, so "N" bits
can represent 2^N unique states.
Most computers operate on information 8 bits, or some multiple of 8 bits, like 16, 32, or 64 bits, at a time. This collection of bits is more-or-less fundamental, and has been given the odd name of the "byte". A computer's processor accepts data a byte or multiples of a byte at a time. Memories are organized to store data a byte or multiples of a byte per each addressable location.
Actually, in some cases 4 bits is a convenient number of bits to deal with, and this collection of bits is called, somewhat painfully, the "nybble". In this document, we will refer to "nybbles" often, but please remember that in reality the term "byte" is common, while the term "nybble" is not.
A nybble can encode 16 different values, such as the numbers 0 to 15. Any
arbitrary sequence of bits could be used in principle, but in practice the
most natural way is as follows:
0000 = decimal 0 1000 = decimal 8
0001 = decimal 1 1001 = decimal 9
0010 = decimal 2 1010 = decimal 10
0011 = decimal 3 1011 = decimal 11
0100 = decimal 4 1100 = decimal 12
0101 = decimal 5 1101 = decimal 13
0110 = decimal 6 1110 = decimal 14
0111 = decimal 7 1111 = decimal 15
To see why this is the most natural encoding, consider how our familiar
decimal numbers work.  When you write a decimal number such as:

   7531

what that really means is:

   7 * 1000 + 5 * 100 + 3 * 10 + 1 * 1

-- or, using powers of ten:

   7 * 10^3 + 5 * 10^2 + 3 * 10^1 + 1 * 10^0
Each digit in the number represents a value from 0 to 9, which is ten different possible values, and that's why it's called a decimal or "base-10" number. Each digit also has a "weight" of a power of ten proportional to its position. This sounds complicated, but it's not. It's exactly what you take for granted when you look at a number. You know it without even having to think about it.
Similarly, in the binary number encoding scheme explained above, the value
13 is encoded as:
   1101

   1101  =  1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0
         =  1 x 8   + 1 x 4   + 0 x 2   + 1 x 1     =  13 decimal
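If you have Python handy (used here purely as a convenient way to
experiment, not as part of the scheme itself), you can check this
positional arithmetic with a little sketch; the function name
bits_to_decimal is made up for the example:

   def bits_to_decimal(bits):
       # Add up the positional weights, rightmost bit first:
       # each position is worth 2 raised to that position.
       total = 0
       for position, digit in enumerate(reversed(bits)):
           total += int(digit) * (2 ** position)
       return total

   print(bits_to_decimal("1101"))    # prints 13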
In dealing with binary numbers, it helps to know the powers of 2:

   2^0 =   1      2^8  =    256
   2^1 =   2      2^9  =    512
   2^2 =   4      2^10 =  1,024
   2^3 =   8      2^11 =  2,048
   2^4 =  16      2^12 =  4,096
   2^5 =  32      2^13 =  8,192
   2^6 =  64      2^14 = 16,384
   2^7 = 128      2^15 = 32,768

   2^16 = 65,536
Computer mechanics often use the letter "K" ("kilo") as shorthand for a
factor of 1,024 -- that is, 2^10, the power of 2 closest to a thousand:

   2^10 =  1 K = 1,024
   2^11 =  2 K = 2,048
   2^12 =  4 K = 4,096
   2^13 =  8 K = 8,192
   2^14 = 16 K = 16,384
   2^15 = 32 K = 32,768
   2^16 = 64 K = 65,536

Similarly, "M" ("mega") is shorthand for 1,024 K, or 2^20 = 1,048,576:

   2^20 = 1 M = 1,048,576
   2^21 = 2 M = 2,097,152
   2^22 = 4 M = 4,194,304
There is a subtlety in this discussion. If we use 16 bits, we can have 65,536 different values, but the values are from 0 to 65,535. People start counting at one, machines start counting from zero, since it's simpler from their point of view. This small and mildly confusing fact even trips up computer mechanics every now and then.
* Anyway, this defines a simple way to count with bits, but it has a few
restrictions:  such numbers can only represent whole values (no fractions),
they can only represent positive values (no negative numbers), and their
range is limited by the number of bits available -- 16 bits, for example,
can't count any higher than 65,535.

Despite these limitations, such "unsigned integer" numbers are very useful
in computers, mostly for simply counting things up one-by-one.  They're
very easy for the computer to manipulate.  Generally computers use 16-bit
and 32-bit unsigned integers, normally referred to as "integer" and "long
integer" (or just "long") numbers.  An "integer" allows for values from 0
to 65,535, while a "long" allows for values from 0 to 4,294,967,295.
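As a quick illustration (a rough Python sketch, just to show the
arithmetic, not anything the computer itself needs), the number of values
and the maximum value follow directly from the number of bits:

   # N bits give 2^N distinct values, running from 0 up to 2^N - 1.
   for bits in (8, 16, 32):
       count = 2 ** bits
       print(bits, "bits:", count, "values, 0 to", count - 1)

   # 16 bits: 65536 values, 0 to 65535
   # 32 bits: 4294967296 values, 0 to 4294967295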
* Let's take a side-trip to discuss representation of binary numbers.
Computer mechanics often need to write out binary quantities, but in
practice writing out a binary number like:
1001 0011 0101 0001
-- gets tedious and error-prone in a hurry.  For this reason, computer
mechanics normally write out binary quantities in a more compact shorthand,
using base-8 ("octal") or, much more commonly, base-16 ("hexadecimal", or
just "hex") notation.

This is another thing that sounds tricky but is actually simple -- if it
wasn't, there wouldn't be any point in doing it.  In our normal decimal
system, we have 10 digits (0 through 9) and count up as follows:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 ...
In an octal system, we only have 8 digits (0 through 7), so we count up as
follows:

   0 1 2 3 4 5 6 7 10 11 12 13 14 15 16 17 20 21 22 23 24 25 26 ...
In a hex system, we have 16 digits (0 through 9, followed, by convention,
by the letters "a" through "f"), and we count up through the sequence as
follows:
0 1 2 3 4 5 6 7 8 9 a b c d e f 10 11 12 13 14 15 16 ...
Each of these number systems is a positional system, but while decimal
weights are powers of 10, the octal weights are powers of 8 and the hex
weights are powers of 16.  For example:
octal 756 = 7 * 8^2 + 5 * 8^1 + 6 * 8^0
= 7 * 64 + 5 * 8 + 6 * 1
= 448 + 40 + 6 = decimal 494
hex 3b2 = 3 * 16^2 + 11 * 16^1 + 2 * 16^0
= 3 * 256 + 11 * 16 + 2 * 1
= 768 + 176 + 2 = decimal 946
The reason computer mechanics like octal and hex is that each octal digit
corresponds to exactly three bits, and each hex digit to exactly four bits,
so either one makes a convenient shorthand for binary:

   000 = octal 0
001 = octal 1
010 = octal 2
011 = octal 3
100 = octal 4
101 = octal 5
110 = octal 6
111 = octal 7
0000 = hex 0 1000 = hex 8
0001 = hex 1 1001 = hex 9
0010 = hex 2 1010 = hex a
0011 = hex 3 1011 = hex b
0100 = hex 4 1100 = hex c
0101 = hex 5 1101 = hex d
0110 = hex 6 1110 = hex e
0111 = hex 7 1111 = hex f
So converting a long binary number to octal or hex is just a matter of
chopping it into groups of three or four bits and translating each group:

   1 001 001 101 010 001 binary = 111521 octal

   1001 0011 0101 0001 binary = 9351 hex
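Here's a rough Python sketch of the same conversions, just for
illustration; Python's int(), oct(), and hex() do the digit-grouping work
for us:

   value = int("1001001101010001", 2)   # read the string as binary digits

   print(oct(value))    # 0o111521  -- octal
   print(hex(value))    # 0x9351    -- hex
   print(value)         # 37713     -- plain decimal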
* After defining unsigned binary numbers, the next step is to modify this scheme to provide negative numbers, or "signed integers". The simplest thing to do would be to reserve one bit to indicate the sign of a number. This "sign bit" would probably be the leftmost bit, though it could be the rightmost for all it matters. If the sign bit is 0, the number is positive, and if the sign bit is 1 the number is negative.
This works, and it is used in a few obscure applications, but although it's the obvious solution from a human point of view it makes life hard for machines. For example, this scheme gives a positive and negative value for zero! A human might shrug at that, but it gives a machine headaches.
The more natural way, from the machine point of view, is to split the range
of binary numbers for a given number of bits in half and use the top half to
represent negative numbers.  For example, 4-bit values give eight
non-negative values (0 through 7) and eight negative values (-8 through -1):
0000 = decimal 0
0001 = decimal 1
0010 = decimal 2
0011 = decimal 3
0100 = decimal 4
0101 = decimal 5
0110 = decimal 6
0111 = decimal 7
1000 = decimal -8
1001 = decimal -7
1010 = decimal -6
1011 = decimal -5
1100 = decimal -4
1101 = decimal -3
1110 = decimal -2
1111 = decimal -1
This has some similarities to the sign-bit scheme in that a negative number
has its topmost bit set to "1", but the two concepts are different. In
sign-magnitude numbers, a "-5" is:
1101
-- while in the scheme we just described, which is known as "two's
complement", a -5 is:

   1011
* So now we can represent unsigned and signed integers as binary quantities.
Remember that these are just two ways of interpreting a pattern of bits. If
a computer has a binary value in memory of, say:
   1101

then whether that pattern means "13" (unsigned) or "-3" (two's complement
signed) depends entirely on how the program using it chooses to interpret
it.  The bits are the same; only the interpretation changes.
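A little Python sketch (purely illustrative) makes the point -- the same
4-bit pattern yields two different numbers depending on which rule we
apply:

   bits = "1101"
   unsigned = int(bits, 2)     # plain unsigned interpretation

   # 4-bit two's complement: patterns 1000 and up are the negative half.
   signed = unsigned - 16 if unsigned >= 8 else unsigned

   print(unsigned, signed)     # 13 -3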
* While both unsigned and signed integers are used in digital systems, even
a 32-bit integer is not enough to handle the full range of numbers an
ordinary calculator can deal with, and we still haven't addressed the
problem of fractions.  To handle fractions and obtain greater range, we
have to abandon signed integers and go to a "floating-point" format.
In the decimal system, we are familiar with floating-point numbers of the
form:
1.1030402 x 10^5 = 1.1030402 x 100000 = 110304.02
The number before the "x" is known as the "mantissa", and the power of ten
it is multiplied by is known as the "exponent".  On computers, such numbers
are usually written in a more compact "E" notation:

   1.1030402E5
If we have a negative exponent, that means the decimal point is shifted
that many places to the left, making the number that much smaller.  For
example:
2.3434E-6 = 2.3434 x 10^-6 = 2.3434 x 0.000001 = 0.0000023434
Similar binary floating-point formats can be defined for computers.  There
are a number of such schemes, but one of the most popular has been defined
by the IEEE (Institute of Electrical and Electronics Engineers, a US
professional and standards organization).  The IEEE 754 specification
defines a 64-bit floating-point format with:

   -- a 1-bit sign.
   -- an 11-bit exponent, stored in "excess-1023" format.
   -- a 52-bit mantissa, with an implied leading "1" bit that isn't stored.
Let's see what this format looks like by showing how such a number would be
stored in 8 bytes of memory:
byte 0: S x10 x9 x8 x7 x6 x5 x4
byte 1: x3 x2 x1 x0 m51 m50 m49 m48
byte 2: m47 m46 m45 m44 m43 m42 m41 m40
byte 3: m39 m38 m37 m36 m35 m34 m33 m32
byte 4: m31 m30 m29 m28 m27 m26 m25 m24
byte 5: m23 m22 m21 m20 m19 m18 m17 m16
byte 6: m15 m14 m13 m12 m11 m10 m9 m8
byte 7: m7 m6 m5 m4 m3 m2 m1 m0
where "S" is the sign bit, the "x" bits give the exponent, and the "m" bits
give the mantissa.  The value of such a number is then:

   <sign> * (1 + <fractional_mantissa>) * 2^(<exponent> - 1023)
This gives numbers in the following ranges, with about 15 to 16 significant
decimal digits of precision:

   ____________________________________________________________

               maximum                   minimum
   ____________________________________________________________

   positive    1.797693134862231E+308    4.940656458412465E-324
   negative   -4.940656458412465E-324   -1.797693134862231E+308
   ____________________________________________________________
Some programs also use 32-bit floating-point numbers. The most common
scheme uses a 23-bit mantissa with a sign bit, plus an 8-bit exponent in
"excess-127" format, giving 7 valid decimal digits.
byte 0: S x7 x6 x5 x4 x3 x2 x1
byte 1: x0 m22 m21 m20 m19 m18 m17 m16
byte 2: m15 m14 m13 m12 m11 m10 m9 m8
byte 3: m7 m6 m5 m4 m3 m2 m1 m0
<sign> * (1 + <fractional_mantissa>) * 2^(<exponent> - 127)
-- giving numbers in the following ranges:

   ________________________________________

               maximum           minimum
   ________________________________________

   positive    3.402823E+38      2.802597E-45
   negative   -2.802597E-45     -3.402823E+38
   ________________________________________
The term "real" without any elaboration generally means a 64-bit value, while the term "float" similarly generally means a 32-bit value.
* Once again, remember that bits are bits.  If you have 8 bytes stored in
computer memory, it might be a 64-bit real, two 32-bit reals, four 16-bit
signed or unsigned integers, or some other kind of data that fits into 8
bytes.
The only difference is how the computer interprets them. If the computer stored four unsigned integers and then read them back from memory as a 64-bit real, it almost always would be a perfectly valid real number, though it would be junk data.
* So now our computer can handle positive and negative numbers with fractional parts. However, even with floating-point numbers you run into some of the same problems that you did with integers, and then some:
For example, there's still a limit to how big a number can get.  If you
multiply big values together enough times, you'll eventually get a number
with an exponent too big for the real value to hold, and have a "numeric
overflow".  Similarly, if you keep dividing, you'll eventually get a number
with a negative exponent too big for the real value to hold, and have a
"numeric underflow".  Remember that a negative exponent gives the number of
places to the right of the decimal point and means a really small number.
The maximum real value is sometimes called "machine infinity", since that's the biggest value the computer can wrap its little silicon brain around.
Another subtle problem is that a real value only has a limited
"resolution":  a 64-bit real can only hold about 15 or 16 significant
decimal digits.  This means that if you add a very small number to a very
large one, the result is just the large one.  The small number was too
small to even show up in 15 or 16 digits of resolution, and the computer
effectively discards it.  If you are performing computations and you start
getting really insane answers from things that normally work, you may need
to check the range of your data.  It's possible to "scale" the values to
get more accurate results.
It also means that if you do floating-point computations, there's likely to be a small error in the result since some lower digits have been dropped. This effect is unnoticeable in most cases, but if you do some math analysis that requires lots of computations, the errors tend to build up and can throw off the results.
People who use computers for serious mathematical work understand these
errors very well, and have methods for minimizing the effects of such
errors, as well as for estimating how big the errors are.
By the way, this "precision" problem is not the same as the "range" problem at the top of this list. The range issue deals with the maximum size of the exponent, while the resolution issue deals with the number of digits that can fit into the mantissa.
There's one more nasty little detail:  while the digits of a decimal
fraction correspond to a sum of tenths, hundredths, thousandths, and so on,
the bits of a binary fraction correspond to a sum of halves, quarters,
eighths, and so on -- "reciprocal powers of two".  That is, if you want to
do a computation on a decimal fraction that is a neat sum of reciprocal
powers of two, such as 0.75, the binary number that represents this
fraction will be 0.11, or 1/2 + 1/4, and all will be fine.
Unfortunately, in many cases you can't get a sum of these "reciprocal powers of 2" that precisely matches a specific decimal fraction, and the results of computations will be very slightly off, way down in the very small parts of a fraction. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011 ...
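You can see this effect for yourself with ordinary 64-bit reals; here's a
short Python sketch (just an illustration of the rounding, nothing special
about Python):

   # 0.1 has no exact binary representation, so what's stored is the
   # nearest 64-bit real, and small errors leak into simple arithmetic.
   print(f"{0.1:.20f}")        # 0.10000000000000000555
   print(0.1 + 0.2)            # 0.30000000000000004
   print(0.1 + 0.2 == 0.3)     # False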
If you don't follow all of this, don't worry too much. The point here is that a computer isn't magic, it's a machine and is subject to certain rules and constraints. Although many people place a childlike faith in the answers computers give, even under the best of circumstances these machines have a certain unavoidable inexactness built into them.
* So now we have several means for using bits to encode numbers. But what about text? How can a computer store names, addresses, letters to your folks?
Well, if you remember that bits are bits, there's no reason that a set of
bits can't be used to represent a character like "A" or "?" or "z" or
whatever. Since most computers work on data a byte at a time, it is
convenient to use a single byte to represent a single character. For
example, we could assign the bit pattern:
   0100 0110   (hex 46)

to the character "F".  As it happens, that's exactly the code that the
standard scheme described below assigns to "F".
There is a standard binary encoding for western text characters, known as
the "American Standard Code for Information Interchange (ASCII)". The
following table shows the characters represented by ASCII, with each
character followed by its value in decimal ("d"), hex ("h"), and octal
("o"):
ASCII Table
______________________________________________________________________
ch ctl d h o ch d h o ch d h o ch d h o
______________________________________________________________________
NUL ^@ 0 0 0 sp 32 20 40 @ 64 40 100 ` 96 60 140
SOH ^A 1 1 1 ! 33 21 41 A 65 41 101 a 97 61 141
STX ^B 2 2 2 " 34 22 42 B 66 42 102 b 98 62 142
ETX ^C 3 3 3 # 35 23 43 C 67 43 103 c 99 63 143
EOT ^D 4 4 4 $ 36 24 44 D 68 44 104 d 100 64 144
ENQ ^E 5 5 5 % 37 25 45 E 69 45 105 e 101 65 145
ACK ^F 6 6 6 & 38 26 46 F 70 46 106 f 102 66 146
BEL ^G 7 7 7 ' 39 27 47 G 71 47 107 g 103 67 147
BS ^H 8 8 10 ( 40 28 50 H 72 48 110 h 104 68 150
HT ^I 9 9 11 ) 41 29 51 I 73 49 111 i 105 69 151
LF ^J 10 a 12 * 42 2a 52 J 74 4a 112 j 106 6a 152
VT ^K 11 b 13 + 43 2b 53 K 75 4b 113 k 107 6b 153
FF ^L 12 c 14 , 44 2c 54 L 76 4c 114 l 108 6c 154
CR ^M 13 d 15 - 45 2d 55 M 77 4d 115 m 109 6d 155
SO ^N 14 e 16 . 46 2e 56 N 78 4e 116 n 110 6e 156
SI ^O 15 f 17 / 47 2f 57 O 79 4f 117 o 111 6f 157
DLE ^P 16 10 20 0 48 30 60 P 80 50 120 p 112 70 160
DC1 ^Q 17 11 21 1 49 31 61 Q 81 51 121 q 113 71 161
DC2 ^R 18 12 22 2 50 32 62 R 82 52 122 r 114 72 162
DC3 ^S 19 13 23 3 51 33 63 S 83 53 123 s 115 73 163
DC4 ^T 20 14 24 4 52 34 64 T 84 54 124 t 116 74 164
NAK ^U 21 15 25 5 53 35 65 U 85 55 125 u 117 75 165
SYN ^V 22 16 26 6 54 36 66 V 86 56 126 v 118 76 166
ETB ^W 23 17 27 7 55 37 67 W 87 57 127 w 119 77 167
CAN ^X 24 18 30 8 56 38 70 X 88 58 130 x 120 78 170
EM ^Y 25 19 31 9 57 39 71 Y 89 59 131 y 121 79 171
SUB ^Z 26 1a 32 : 58 3a 72 Z 90 5a 132 z 122 7a 172
ESC ^[ 27 1b 33 ; 59 3b 73 [ 91 5b 133 { 123 7b 173
FS ^\ 28 1c 34 < 60 3c 74 \ 92 5c 134 | 124 7c 174
GS ^] 29 1d 35 = 61 3d 75 ] 93 5d 135 } 125 7d 175
RS ^^ 30 1e 36 > 62 3e 76 ^ 94 5e 136 ~ 126 7e 176
US ^_ 31 1f 37 ? 63 3f 77 _ 95 5f 137 DEL 127 7f 177
______________________________________________________________________
The first 32 characters, along with DEL, are "control characters":  they
don't print as visible characters, and instead are used to control devices
and data communications.  In a text editor, they'll just be shown as a
little block or a blank space or (in some cases) little smiling faces,
musical notes, and other bizarre items.  To type them in, in many
applications you can hold down the CTRL key and press an appropriate code.
For example, pressing CTRL and entering "G" gives CTRL-G, or "^G" in the
table above, the BEL character.
The ASCII table above only defines 128 characters, which implies that ASCII characters only need 7 bits. However, since most computers store information in terms of bytes, normally there will be one character stored to a byte. This extra bit allows a second set of 128 characters, an "extended" character set, to be defined beyond the 128 defined by ASCII.
In practice, there are a number of different extended character sets, providing such features as math symbols, cute little line-pattern building block characters for building forms, and extension characters for non-English languages. The extensions are not highly standardized and tend to lead to confusion.
This table serves to emphasize one of the main ideas of this document: bits are bits. In this case, you have bits representing characters. You can describe the particular code for a particular character in decimal, octal, or hexadecimal, but it's still the same code. The value that is expressed, whether it is in decimal, octal, or hex, is simply the same pattern of bits.
Of course, you normally want to use many characters at once to display
sentences and the like, such as:
Tiger, tiger burning bright!
This would be stored in memory as the corresponding sequence of ASCII
codes, shown here in hex:

   54 69 67 65 72 2c 20 74 69 67 65 72 20 62 75 ...
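A quick Python sketch (illustrative only) produces the same list of codes:

   text = "Tiger, tiger burning bright!"
   print(" ".join(f"{ord(ch):02x}" for ch in text))
   # 54 69 67 65 72 2c 20 74 69 67 65 72 20 62 75 ...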
* Now let's consider a particularly confusing issue for the newcomer: the
fact that you can represent a number in ASCII as a string, for example:
1.537E3
-- which would be stored as the ASCII codes for the seven characters "1",
".", "5", "3", "7", "E", and "3" (shown here in hex):

   31 2e 35 33 37 45 33

This is completely different from storing the same number as an actual
32-bit floating-point value, which would take up only 4 bytes and whose
bits would look nothing like the text string.
But, now to get really confusing, suppose you wanted to view the bits of a
32-bit real directly, bypassing conversion to the ASCII string value. Then
the computer would display something like:
10110011 10100000 00110110 11011111
-- but to actually show you those 1s and 0s, the computer has to convert
each of them into an ASCII character, so what finally goes to the display
is the ASCII codes for the characters "1", "0", and the space (in hex):

   31 30 31 31 30 30 31 31 20 31 30 31 30 30 ...
Confused? Don't feel too bad, even experienced people get subtly confused with this issue sometimes. The essential point is that the values the computer works on are just sets of bits. For you to actually see the values, you have to get an ASCII representation of them. Or to put it simply: machines work with bits and bytes, humans work with ASCII, and there has to be translation to allow the two to communicate.
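Here's a small Python sketch (again, just for illustration) that shows the
two representations of the same number side by side -- seven bytes of ASCII
text versus the four bytes of a 32-bit real:

   import struct

   as_text  = "1.537E3".encode("ascii")    # the characters "1", ".", "5", ...
   as_float = struct.pack(">f", 1.537e3)   # the bits of a 32-bit real

   print(" ".join(f"{b:02x}" for b in as_text))    # 31 2e 35 33 37 45 33
   print(" ".join(f"{b:02x}" for b in as_float))   # 44 c0 20 00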
* 8 bits is clearly not enough to allow representation of, say, Japanese characters, since their basic set is a little over 2,000 different characters. As a result, to encode Asian languages such as Japanese or Chinese, computers use a 16-bit code for characters. There are a variety of specs for encoding non-Western characters, the most widely used being "Unicode", which provides character codes for Western, Asian, Indic, Hebrew, and other character sets, including even Egyptian hieroglyphics.
* If you've managed to get this far without becoming bewildered and want a more complete understanding, there are a few more details to be considered.
The first detail is the translation of binary numbers into decimal numbers
and the reverse.  There are formal arithmetic methods, "algorithms", for
performing such conversions, but most people who do this a lot have a fancy
calculator to do the work for them, and people who don't do it a lot won't
remember anything but the brute-force method.  The brute-force method isn't
too hard if you have a pocket calculator handy.

In almost all cases where someone wants to do this, they're converting an
unsigned integer decimal value to a hex value that corresponds to a binary
number; they almost never convert signed or floating-point values.  The
main reason to convert to binary is just to get a specific pattern of bits,
often for interfacing to computer hardware, since computer mechanics may
need to send various patterns of "on" and "off" bits to control a certain
device.
Usually the binary value is not more than 16 bits long, meaning the unsigned binary equivalent of the bit pattern is no larger than 65,535, and if you remember your powers of 2, or just have them written down handy, you can perform a conversion very easily.
Suppose you have a decimal number like, say, 46,535, that you want to convert to a 16-bit binary pattern. All you have to do is work your way down the list of powers of two and follow a few simple rules.
First, take the highest power of 2 in the table, or 32,768, and check to
see if it is larger than the decimal number or not.  It isn't, so write
down a "1" and subtract 32,768 from 46,535, leaving 13,767:

   1

Next, compare 13,767 against the next power of 2 down the list, 16,384.
That is larger than 13,767, so write down a "0" and don't subtract
anything:

   10

Then compare against 8,192.  That is smaller than 13,767, so write down a
"1" and subtract 8,192, leaving 5,575:

   101

Continuing down through all the remaining powers of 2 gives the complete
conversion:
46,535 greater than or equal to 32,768? Yes, subtract, write: 1
13,767 greater than or equal to 16,384? No, write: 0
13,767 greater than or equal to 8,192? Yes, subtract, write: 1
5,575 greater than or equal to 4,096? Yes, subtract, write: 1
1,479 greater than or equal to 2,048? No, write: 0
1,479 greater than or equal to 1,024? Yes, subtract, write: 1
455 greater than or equal to 512? No, write: 0
455 greater than or equal to 256? Yes, subtract, write: 1
199 greater than or equal to 128? Yes, subtract, write: 1
71 greater than or equal to 64? Yes, subtract, write: 1
7 greater than or equal to 32? No, write: 0
7 greater than or equal to 16? No, write: 0
7 greater than or equal to 8? No, write: 0
7 greater than or equal to 4? Yes, subtract, write: 1
3 greater than or equal to 2? Yes, subtract, write: 1
1 greater than or equal to 1? Yes, subtract, write: 1
46,535 decimal = 1011 0101 1100 0111 binary = b5c7 hex
To check the result, just convert back, expanding the hex digits in powers
of 16:

   b5c7 hex = 11 x 16^3 + 5 x 16^2 + 12 x 16 + 7 x 1
= 11 x 4096 + 5 x 256 + 12 x 16 + 7
= 45,056 + 1,280 + 192 + 7 = 46,535
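The brute-force procedure above translates almost directly into a few lines
of Python (the function name to_binary_16 is made up for this sketch):

   def to_binary_16(value):
       # Walk down the powers of 2 from 32,768 to 1, writing a "1" and
       # subtracting whenever the power fits, a "0" otherwise.
       bits = ""
       for power in range(15, -1, -1):
           weight = 2 ** power
           if value >= weight:
               bits += "1"
               value -= weight
           else:
               bits += "0"
       return bits

   result = to_binary_16(46535)
   print(result)                  # 1011010111000111
   print(hex(int(result, 2)))     # 0xb5c7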
* The next interesting topic is the operations that can be performed on binary numbers. We'll consider signed and unsigned integers first.
The most basic operations on binary values are the simple logical operations
AND, OR, XOR ("exclusive OR"), and NOT. Performing them on some sample byte
data gives:
      1111 0000        1111 0000        1111 0000
   OR 1010 1010    AND 1010 1010    XOR 1010 1010    NOT 1010 1010
   ____________     ____________     ____________     ____________
      1111 1010        1010 0000        0101 1010        0101 0101
AND: The result is 1 if both values are 1.
OR: The result is 1 if either value is 1.
XOR: The result is 1 if only one of the two values is 1.
NOT: The result is 1 if the value is 0 (this is also called "inverting").
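Most programming languages provide these operations directly; here's the
same example worked out in a short Python sketch (illustrative only):

   a = 0b11110000
   b = 0b10101010

   print(f"{a | b:08b}")       # 11111010   (OR)
   print(f"{a & b:08b}")       # 10100000   (AND)
   print(f"{a ^ b:08b}")       # 01011010   (XOR)
   print(f"{~b & 0xff:08b}")   # 01010101   (NOT, masked back to 8 bits)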
* Binary addition is a more interesting operation. If you remember the
formal rules for adding a decimal number, you add the numbers one digit at a
time, and if the result is ten or more, you perform a "carry" to add to the
next digit. For example, if you perform the addition:
374 + 452
-- you add it up one digit at a time, from right to left:

   4 + 2 = 6
   7 + 5 = 12   ( write down the 2, carry the 1 )
   ( 1 + ) 3 + 4 = 8

-- giving the result:  826.
Performing additions in binary is essentially the same, except that the
number of possible results of adding two bits is minimal:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10
For example, adding the binary equivalents of 58 and 29:

     0011 1010      58
   + 0001 1101    + 29
   ___________    ____
     0101 0111      87
       CC C

-- where each "C" marks a column that generated a carry into the next
column to the left.
Assuming that we are adding unsigned integers, notice what happens if we
add:

       1000 1001        137
     + 0111 1011      + 123
     ___________       ____
   (1) 0000 0100    ( 256 + ) 4   ?
       CCCC C CC

The true sum, 260, needs more than 8 bits to represent.  The top carry has
nowhere to go, so if the result has to fit in 8 bits, it comes out as 4 --
an incorrect answer.  This is an "overflow", and computers generally
provide some means of detecting the lost carry so a program can tell that
the result is wrong.
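A one-line Python sketch (just to illustrate the effect) simulates an 8-bit
unsigned addition by throwing away everything above the bottom 8 bits:

   print(137 + 123)            # 260  -- the true sum
   print((137 + 123) & 0xff)   # 4    -- what's left after the carry is lost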
* OK, now to get really tricky. Remember how we defined signed integers as
two's complement values? That is, we chop the range of binary integer
values in half and assign the high half to negative values. In the case of
4-bit values:
   0000  0001  ...  0110  0111  1000  1001  ...  1110  1111
      0     1  ...     6     7    -8    -7  ...    -2    -1
To see how this works, pretend that you have the binary values above written on a strip of stiff paper taped at the ends into a loop, with each binary value written along the loop, and the loop joined together between the "1111" and "0000" values.
Now further consider that you have a little slider on the loop that you can move in the direction of increasing values, but not backwards, sort of like a slide rule that looks like a ring, with 16 binary values on it.
If you wished to add, say, 2 to 4, you would just move the slider up two values from 4, and you would get 6. Now let's see what happens if you want to subtract 1 from 3. This is the same as adding -1 to 3, and since a -1 in two's complement is 1111, or the same as decimal 15, you move the slider up 15 values from 3. This takes you all the way around the ring to ... 2.
This is a bizarre way to subtract 1 from a value, or so it seems, but you have to admit from the machine point of view it's just dead simple. Try a few other simple additions and subtractions to see how this works.
The big problem is that overflow conditions have become much trickier.
Consider the following examples:
In the first, we add two positive numbers and get a result with the top bit
set -- which reads as a negative number:

     0111 0100     116
   + 0001 0101    + 21
   ___________    ____
     1000 1001    -119 ( = 137 ) ?

In the second, we add two negative numbers, the carry falls off the top,
and the result reads as a small positive number:

       1000 1110    -114
     + 1001 0001    -111
     ___________    ____
   (1) 0001 1111    ( 256 + ) 31 ?

In both cases the true result is outside the range that 8 bits can
represent in two's complement (-128 to 127), so the answers are wrong --
and in the first case there isn't even a lost carry to warn us.  Signed
overflow has to be detected differently, by noticing that the sign of the
result makes no sense for the signs of the inputs.
* As an aside, if you want to convert a positive binary value to a two's
complement value, all you have to do is invert (NOT) all the bits and add 1.
For example, to convert a binary 7 to -7:
0000 0111 -> 1111 1000 + 1 -> 1111 1001
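Here's a little Python sketch of that rule (the function name negate_8bit
is made up for the example), along with a check that the resulting pattern
really does behave like -7 on the 8-bit "ring":

   def negate_8bit(value):
       # Two's complement negation: invert all the bits, add 1, and keep
       # only the bottom 8 bits.
       return (~value + 1) & 0xff

   print(f"{negate_8bit(7):08b}")        # 11111001, the pattern for -7
   print((9 + negate_8bit(7)) & 0xff)    # 2, i.e. 9 - 7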
* This covers addition and subtraction, but what about multiplication and
division?  Well, take the binary value:

   0010 1001 = 41

If you shift all the bits one place to the left, adding a 0 on the right
end, you get a value twice as big:

   0101 0010 = 82

If instead you shift all the bits one place to the right, you get a value
half as big, rounded down:

   0001 0100 = 20   (losing a 1 that was shifted off the right side)
This implies that you can implement multiplication by performing a
combination of shifts and adds.  For example, to multiply a binary value by
5, you would shift the value left by two bits, multiplying it by 4, and
then add the original value.  Similarly, division can be implemented by a
shift-and-subtract procedure.  This scheme allows you to use simple
hardware to perform these operations.
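As a rough Python sketch of the idea (real hardware does this with
dedicated circuits, of course):

   x = 41

   print(x << 1)         # 82  -- shift left one bit: times 2
   print(x >> 1)         # 20  -- shift right one bit: half, rounded down
   print((x << 2) + x)   # 205 -- shift left two bits (times 4) plus x: times 5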
One interesting feature of this scheme is the complication that two's
complement introduces.  Suppose we have a two's complement number such as:

   1001 1010 = -102

If we shift this right one bit in the simple way, bringing a 0 into the top
bit, we get:

   0100 1101 = 77

-- which is certainly not -102 divided by 2.  To divide a signed value by
shifting, the shift has to copy the old sign bit back into the top bit --
an "arithmetic shift" -- which gives the right answer:

   1100 1101 = -51
There is also a "rotate" operation that is a variation on a "shift", in which the bit shifted out of one end of the binary value is recirculated into the other. This is of no practical concern in this discussion, it's just mentioned for the sake of completeness.
* So that covers the basic operations for signed and unsigned integers. What about floating-point values?
The operations are basically the same. The only difference in addition and
subtraction is that you have two quantities that have both a mantissa and an
exponent, and they both have to have the same exponent to allow you to add
or subtract. For example, using decimal math:
   3.13E2 + 2.73E3  =  0.313E3 + 2.73E3  =  3.043E3
A sharp reader will realize this is why performing calculations between sets of floating-point quantities with greatly different ranges of values is tricky and dangerous. If the two sets of values are adjusted to have the same exponents, a process known as "normalization", the mantissa of the much smaller number may be shifted so far down that it doesn't even show up in the result of an addition or subtraction.
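You can watch a small value disappear this way with a couple of lines of
Python (an illustration with 64-bit reals, which hold roughly 15 to 16
decimal digits):

   big   = 1.0e16
   small = 1.0

   print(big + small == big)    # True  -- the small value was absorbed
   print((big + small) - big)   # 0.0, not 1.0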
Also recalling high-school math, to multiply two floating-point values, you
simply multiply the mantissas and add the exponents:
   3.13E2 * 2.7E3  =  ( 3.13 * 2.7 )E( 2 + 3 )  =  8.451E5
* Finally, what about higher functions, like square roots and sines and logs and so on?
There are two approaches. First, there are standard algorithms that mathematicians have long known that use the basic add-subtract-multiply-divide operations in some combination and sequence to generate as good an approximation to such values as you would like. For example, sines and cosines can be approximated with a "power series", which is a sum of specified powers of the value to be converted divided by specific constants that follow a certain rule.
Second, you can use something equivalent to the "look-up tables" once used in math texts that give values of sine, cosine, and so on, where you would obtain the nearest values for, say, sine for a given value and then "interpolate" between them to get a good approximation of the value you wanted. (Younger readers may not be familiar with this technique, as calculators have made such tables generally obsolete, but such tables can be found in musty old textbooks.) In the case of the computer, it can store a table of sines or the like, look up values, and then interpolate between them to give the value you want.
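For example, here is a crude Python sketch of the power-series approach for
the sine function; the term count of 10 is an arbitrary choice that is
plenty for small angles (in radians):

   def sine(x, terms=10):
       # Power series:  sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
       total = 0.0
       term = x
       for n in range(terms):
           total += term
           # Each new term is the previous one times -x^2 / ((2n+2)(2n+3)).
           term *= -x * x / ((2 * n + 2) * (2 * n + 3))
       return total

   print(sine(1.0))    # about 0.8414709848, matching math.sin(1.0)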
* That covers the basics of bits and bytes and math for a computer. However, just to confuse things a little bit, while computers normally use binary floating-point numbers, calculators normally don't.
Recall that there is a slight error in translating from decimal to binary and back again. While this problem is easily handled by someone familiar with it, calculators in general have to be simple to operate and it would be better if this particular problem didn't arise, particularly in financial calculations where minor discrepancies might lead to a tiresome audit.
So calculators generally really perform their computations in decimal, using
a scheme known as "binary-coded decimal (BCD)". This scheme uses groups of
four bits to encode the individual digits in a decimal number:
0000 = decimal 0
0001 = decimal 1
...
1000 = decimal 8
1001 = decimal 9
1010 = ILLEGAL
1011 = ILLEGAL
...
1110 = ILLEGAL
1111 = ILLEGAL
0001 0000 = decimal 10
0001 0001 = decimal 11
0001 0010 = decimal 12
0001 0011 = decimal 13
...
BCD can be used to implement integer or floating-point number schemes. Of course -- it's just decimal digits represented by bits. It can be used in computer programs as well, but except for calculators you won't normally come in contact with it. It makes the arithmetic substantially slower and so is generally only used in applications, such as financial calculations, where minor errors are unacceptable.
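Encoding a number in BCD is simple enough to sketch in a couple of lines of
Python (the function name to_bcd is made up for this example):

   def to_bcd(number):
       # Encode each decimal digit as its own group of 4 bits.
       return " ".join(f"{int(digit):04b}" for digit in str(number))

   print(to_bcd(13))     # 0001 0011
   print(to_bcd(409))    # 0100 0000 1001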
* One final comment on computer math: there are math software packages that offer "indefinite precision" math, or that is, math taken out to a precision defined by the user. What these packages do is define ways of encoding numbers that can change in the number of bits they use to store values, as well as ways of performing math on them. They are very slow compared to normal computer computations, and most applications actually do fine with 64-bit floating-point math.
* I wrote this article because floating-point issues are a common source of confusion. I can sympathize with this, because it took a long time for me to understand the issue as well as I do, and in fact writing this article helped me clear up my own thinking on the matter. I'm afraid I can't cite sources on this document, however, since I picked up the information in bits and pieces from various manuals, documents, and my own background in computing.
* Revision history:
v1.0 / 12 jan 97 / gvg / Derived from older documents.
v1.1 / 24 jul 98 / gvg / Minor tweak update.
v1.2 / 04 dec 98 / gvg / Minor cosmetic update.
v1.0.3 / 01 dec 01 / gvg / Minor cosmetic update.
v1.0.4 / 01 dec 02 / gvg / Minor update, fixed a few typos.
v1.0.5 / 01 may 03 / gvg / Minor update, fixed typos.