## 1.5 Scientific Notation

Numbers encountered in many science and engineering problems can become quite large or quite small, depending upon the units of measure being used. Writing numbers in what is called *scientific notation* makes keeping track of decimal points and orders of magnitude (factors of ten) simpler, as well as making computations using logarithms more straightforward. With a choice of 10 as our base, we can write any positive number as a number between 1 and 10, times 10 raised to an integer power. A couple of examples help:

\[ 5827 = 5.827 \times 10^3; \qquad 0.0365 = 3.65 \times 10^{-2}. \]

If we take the logarithm of the first number and apply the rule for the logarithm of a product, we see that
\[ \log 5827 = \log( 5.827 \times 10^3) = \log 5.827 + \log 10^3 = \log 5.827 + 3. \]
Likewise,
\[ \log 0.0365 = \log 3.65 + \log 10^{-2} = \log 3.65 - 2. \]
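The decomposition above can be sketched in a few lines of Python; the numbers 5827 and 0.0365 are the two examples from the text, and the splitting into a mantissa in \([1, 10)\) and an integer exponent is done with `math.log10` and `math.floor`.

```python
import math

# Split a positive number into mantissa * 10**exponent,
# with the mantissa in [1, 10) and the exponent an integer.
def sci_notation(x):
    exponent = math.floor(math.log10(x))
    mantissa = x / 10 ** exponent
    return mantissa, exponent

m1, e1 = sci_notation(5827.0)    # 5.827, 3
m2, e2 = sci_notation(0.0365)    # 3.65, -2

# The log of the number equals the log of the mantissa plus the exponent:
assert math.isclose(math.log10(5827.0), math.log10(m1) + e1)
assert math.isclose(math.log10(0.0365), math.log10(m2) + e2)
```

Note that for 0.0365 the exponent is \(-2\), so the logarithm is \(\log 3.65 - 2\), a negative number, as expected for a value less than 1.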

Remember that \(10^0\) is simply \(1\), and that the logarithm of zero does not exist: there is no finite power to which 10 can be raised to produce zero exactly. Zero is just zero, and anything times zero is also zero.

Scientific notation became a popular way of expressing the numbers used in calculations because it fits naturally with the logarithmic approach. When a long string of numbers had to be multiplied and divided, it was easier to write each number in scientific notation, add or subtract the logarithms of the mantissas, and keep track of the “factors of ten” to determine the final placement of the decimal point.
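This log-based procedure can be illustrated with a short sketch; the particular product \(5827 \times 0.0365 / 2.4\) is an invented example, not one from the text, chosen only to show the sum-and-difference of logarithms at work.

```python
import math

# Compute 5827 * 0.0365 / 2.4 the "logarithm table" way:
# add the logs of the factors, subtract the log of the divisor,
# then undo the logarithm at the end.
log_result = math.log10(5827.0) + math.log10(0.0365) - math.log10(2.4)
result = 10 ** log_result

# The integer part of log_result gives the power of ten (the
# placement of the decimal point); the fractional part gives the digits.
assert math.isclose(result, 5827.0 * 0.0365 / 2.4)
```

Before electronic calculators, the additions and subtractions were done by hand with published tables of logarithms, and the final power of ten told the computer (a person, in those days) where the decimal point belonged.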