Columnist David Ball discusses the history of the units system used by scientists and several of the base units themselves.
As a branch of science, spectroscopy is inherently quantitative. Quantities have two parts: numerical values and units. I will assume that everyone knows what numbers are (although a future column might devote space to the conventions for expressing numbers ...). The units system that science (including spectroscopy) uses, however, might not be as obvious. All scientific units are based upon seven fundamental, so-called base units. In this first part, we will review the history of these units and several of the base units themselves. We will complete our review in the next installment of "The Baseline."
The units system currently used in almost all science has been, in most cases, rigidly defined. However, this wasn't always the case.
David W. Ball
Historically, units systems sprang up all over the world as commerce developed. Some became obsolete, while vestiges of others remain. One of the oldest units of length, for example, is the cubit, for which there is evidence of use in Egypt in 2700 BCE. Many civilizations — Egyptian, Roman, Hebrew, Babylonian, Arabic — used the cubit, but its defined length varied from just over 300 mm (in today's units) to over 650 mm. In part because of the imprecision of its definition, the cubit is not used as a unit today except in historical discussions.
There is evidence from ancient Sumer that the length of a human foot was adopted as a unit of measure: the foot. The ultimate question, however, was "Whose foot?" The length of a foot eventually was standardized in England, and finally was supplanted by the SI system. Units for other types of quantities — volumes, energies, even currencies — were developed and ultimately were based upon some physical object or activity. An acre, for example, is apparently the area of earth that can be tilled by a single ox in a single day. Although most systems of measurement have been standardized, units of currency remain stubbornly self-defined by individual governments (the introduction of the euro [€] being a well-known exception of governments working together for a common unit of exchange).
As part of the exchange of new ideas during the European Renaissance, scientists of the time began to understand the need for a unified system of units. After the decimal system for representing fractional parts gained acceptance in the late 1500s and early 1600s, some scientists — like Leibniz in the 1670s — saw an obvious basis for a system of units built on factors of 10.
It took the French Revolution to finally bring a codified system of units, based upon powers of 10, into existence. In 1791, the French National Assembly requested that the French Academy of Sciences establish a logical system of units. A Commission appointed by the Academy (including the famed mathematician–scientists Lagrange, Coulomb, and Laplace) spent months considering how such a system would be defined, mostly based upon physically measurable references like the earth and the day. They then spent years making measurements and constructing prototypes, in the midst of almost constant political turmoil. A "metric system" was adopted by France in April of 1795. In 1798, the French invited representatives from some neighboring countries (not including Great Britain or the fledgling United States) to hear presentations of the results and to consider adoption of a standard system. In 1840, the system was made compulsory in France, and other countries slowly adopted the new system. (Currently, almost all countries have adopted such a system for everyday use, not just for commerce. Holdouts include Liberia, Burma [Myanmar], and the United States.)
In 1875, an international treaty called the Convention du Mètre (the Meter Convention, or the Treaty of the Meter) established three organizations to oversee an international system of units; to date the treaty has been signed by 51 countries, with several dozen others having associate status. One of the organizations, the Bureau International des Poids et Mesures (BIPM; the International Bureau of Weights and Measures), is charged with ensuring consistency of the units around the world, so that any one country's meter is the same as any other country's meter. Another organization, the Conférence Générale des Poids et Mesures (CGPM; the General Conference on Weights and Measures), meets every four to six years to review and update the standards. At its 11th meeting in 1960, the Conference dropped the phrase "metric system" and adopted the name "Système International d'Unités," or "International System of Units" (abbreviated SI), for the worldwide system of units. At the same time, the Conference also updated the list of approved numerical prefixes. The Conference also rules on seemingly trivial but nonetheless important conventions. For example, in 2003, the Conference passed a resolution that the decimal point could be represented by a comma or a period, but that commas used to separate left-of-decimal digits into groups of three were not appropriate.
To date, the CGPM has recommended the adoption of seven fundamental, or base, units. With the exception of one unit, all are based upon physical phenomena that can be measured in the laboratory, allowing them to be replicated around the world. Units for all other quantities are derived units, formed as products and quotients of the various base units.
The base units describe the following quantities: length, mass, time, electric current, temperature, amount of substance, and luminous intensity. Table I summarizes the base units for the various quantities. Interestingly for readers of this publication, several of the units have a spectroscopy connection.
Table I: The base units of fundamental quantities

Quantity                Base unit    Abbreviation
Length                  meter        m
Mass                    kilogram     kg
Time                    second       s
Electric current        ampere       A
Temperature             kelvin       K
Amount of substance     mole         mol
Luminous intensity      candela      cd
The base unit of length is the meter (also spelled metre), whose abbreviation is "m". A 1-m length is a little more than a yard. Its original definition was one ten-millionth of the distance between the Earth's equator and the North Pole along a meridian that passed through Paris, France. Once this distance was determined (after a multiyear surveying project), it was represented by the spacing of two marks on a 90:10 platinum–iridium bar kept in Paris. Because the bar's length varied with temperature, it was specified that the bar be at the melting point of ice to yield the correct distance between the marks.
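As a quick aside, this choice means the Earth's dimensions come out in round numbers (my own back-of-the-envelope arithmetic, not part of the original Commission's documents):

\[
\frac{\text{equator-to-pole distance}}{1\ \text{m}} = 10\,000\,000 \quad\Rightarrow\quad \text{circumference of the Earth} \approx 4\times10^{7}\ \text{m} = 40\,000\ \text{km}
\]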
In 1960, the 11th CGPM redefined the meter as 1 650 763.73 wavelengths of the orange-red light (λ ≈ 605.8 nm) emitted by krypton-86 in a vacuum (Figure 1). This tied the definition of the meter to a laboratory measurement that could be reproduced worldwide. In 1983, the 17th CGPM redefined the meter as the distance that light travels in a vacuum in 1/299 792 458 of a second, and in 2002, a codicil was added restricting this definition to lengths for which relativistic effects are negligible. This is its current definition.
Figure 1: A portion of the emission spectrum of Kr. The arrows indicate the particular emission line that, for 23 years, served as the basis of the definition of the meter.
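The two later definitions are easy to cross-check (my arithmetic; the CGPM resolutions state only the defined numbers). Dividing the meter by the number of krypton wavelengths recovers the quoted wavelength, and the 1983 definition simply fixes the speed of light exactly:

\[
\lambda = \frac{1\ \text{m}}{1\,650\,763.73} \approx 605.78\ \text{nm}, \qquad
c = \frac{1\ \text{m}}{(1/299\,792\,458)\ \text{s}} = 299\,792\,458\ \text{m/s (exactly)}
\]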
Note that area and volume are not part of the base units. Instead, they are derived units because they are the products of two lengths or three lengths, respectively. A short discussion of derived units, including a common unit of volume, will be found in the next column.
The base unit of mass is the kilogram. It is the only base unit that is not itself the fundamental unit of its type; instead, it is a combination of the fundamental unit and a numerical prefix (to be discussed in the next column). The kilogram was defined originally, in 1795, as the mass of one cubic decimeter (a cube 0.1 m on a side) of water at 0 °C; the reference temperature was changed to 4 °C in 1799 because water reaches its greatest density at that temperature. Later in 1799, a platinum cylinder was machined that, given the abilities of the time, was as close as possible to the mass of 1 cubic decimeter of water at 4 °C.
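The logic of the water-based definition is a one-line calculation (a sketch using the rounded modern density of water at 4 °C, about 1000 kg/m³):

\[
m = \rho V \approx \left(1000\ \text{kg/m}^{3}\right)\left(0.1\ \text{m}\right)^{3} = 1\ \text{kg}
\]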
In 1879, a new cylinder was machined. Made this time of a 90% platinum–10% iridium alloy, it is a right-circular cylinder 39.17 mm in both height and diameter. This cylinder, called the international prototype kilogram (or IPK), was declared the official kilogram by the first CGPM in 1889. Six copies also were constructed; currently, all of the prototype kilograms reside in an environmentally controlled vault at the International Bureau of Weights and Measures outside of Paris. Other copies of the IPK exist and reside in countries around the world.
The kilogram is the only base unit that is based upon a physical object, rather than some laboratory-reproducible phenomenon. Its definition as such is subject to criticism. For example, in periodic reevaluations of the masses of the IPK and its replicas, the exact masses of the cylinders change very slightly over time — on the order of a few dozen micrograms. This is likely due to the absorption and desorption of contaminants on the surfaces of the cylinders, as well as the inevitable production of scratches and other imperfections upon handling. Because of this, there are some modern attempts to redefine the kilogram in terms of other quantities. Suggestions to date include defining a kilogram as a specific number of carbon or silicon atoms; accumulating a certain number of heavy metal ions and measuring the current they generate; using special electronic balances (so-called watt balances) that relate the mass of an object to a measurable electrical power; or defining the kilogram through the force needed to impart a given acceleration. Some of these methods do not currently have the precision needed to define a kilogram any better than the current artifact does. Ultimately, a CGPM conference will have to consider and approve any new definition.
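To get a feel for the atom-counting proposal, here is an illustrative estimate (my own rounded numbers, not a value from any official proposal) for silicon-28, whose molar mass is about 27.977 g/mol:

\[
N = \frac{1000\ \text{g}}{27.977\ \text{g/mol}} \times 6.022\times10^{23}\ \text{mol}^{-1} \approx 2.15\times10^{25}\ \text{atoms}
\]

Fixing such a count exactly would, in effect, define the kilogram by fixing the value of the Avogadro constant.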
The base unit of time is the second (which has the occasionally troublesome abbreviation "s", not "sec"). It was originally defined as 1/86 400 of a day (with the denominator coming from the product 60 × 60 × 24). However, it was quickly recognized that the length of the day is not constant and, in fact, is slowly increasing. It is estimated that the day lengthens by about 2.3 s every 100 000 years, or by about 23 μs per year. This might not seem like much, but it is enough time for light to travel over 6000 m — which means, by the way, that if your GPS receiver's clock drifted by that much without correction, your predicted position could be off by more than 6 km! Clearly, a definition of the second based on something more stable than the day is necessary.
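The arithmetic behind those numbers is straightforward (my own rough check, taking c ≈ 3.0 × 10⁸ m/s):

\[
\frac{2.3\ \text{s}}{100\,000\ \text{yr}} = 23\ \mu\text{s/yr}, \qquad
d = ct \approx \left(3.0\times10^{8}\ \text{m/s}\right)\left(2.3\times10^{-5}\ \text{s}\right) \approx 6.9\ \text{km}
\]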
At the 11th CGPM in 1960, the second was redefined in terms of a fraction of a particular year; specifically, 1/31 556 925.974 7 of the tropical year starting at 1900 January 0 at 12 h ephemeris time. (In calendar-speak, January 0 is the day before January 1. There are some logical reasons for defining a "day 0" for a month, but don't ask me the reasons . . .) This definition lasted only seven years, as the accuracy of atomic clocks (for a review, see a previous column [1]) improved. In 1967, at the 13th CGPM, the second was redefined as the duration of exactly 9 192 631 770 periods of the radiation involved in the transition between the two hyperfine levels of the ground state of the cesium-133 atom, provided that the atoms are at an altitude equal to the earth's average ocean surface (this part being added in 1980) and at 0 K (this part added in 1997). Hence, both the meter and the second are intimately tied to spectroscopic principles.
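For the spectroscopically inclined, converting the defined number of periods into a frequency and wavelength (my conversion from the defined values) places the transition squarely in the microwave region:

\[
\nu = 9\,192\,631\,770\ \text{Hz} \approx 9.19\ \text{GHz}, \qquad \lambda = \frac{c}{\nu} \approx 3.26\ \text{cm}
\]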
Temperature is a property of a sample of matter that is a measure of the average kinetic energy of the particles (atoms, molecules) in the sample. The first temperature measurement device apparently was constructed in about 1000 CE, when Avicenna, a Persian–Arabic philosopher, built an air thermometer whose gas expanded and contracted with changing temperature. In 1592, Galileo Galilei built what is now called a Galilean thermometer (Figure 2). Acting on the principle of buoyancy, the thermometer was composed of a large-diameter glass tube containing weighted objects that sank or floated as the density of the water varied with temperature. Galilean thermometers can be purchased today as novelty items in stores.
Figure 2: A Galilean thermometer. Photo by Alistair Riddell.
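The operating principle can be written in one line (a simplified sketch; real instruments use a series of weighted glass bulbs, each calibrated to a slightly different density). A bulb of fixed density floats or sinks according to the temperature-dependent density of the surrounding liquid:

\[
\rho_{\text{bulb}} < \rho_{\text{liquid}}(T) \;\Rightarrow\; \text{bulb floats}, \qquad
\rho_{\text{bulb}} > \rho_{\text{liquid}}(T) \;\Rightarrow\; \text{bulb sinks}
\]

Because the liquid's density decreases as the temperature rises, which bulbs float and which sink indicates the temperature.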
The first liquid (alcohol) thermometer was constructed in 1654. In 1724, Daniel Fahrenheit published a numerical scale for expressing temperature. Originally, the temperature of a mixture of salt and ice was set arbitrarily at 0, the freezing point of water was set at 32, and core body temperature was set at 96. Later, the interval between the freezing and boiling points of water was set to 180 increments, slightly moving average human body temperature to 98.6. This scale defined the Fahrenheit temperature scale, symbolized as "°F". Currently, the United States is the only major country that continues to use the Fahrenheit scale for everyday purposes.
In 1742, Anders Celsius, a Swedish astronomer, published a scale based solely upon the freezing and boiling points of water. Originally called the centigrade scale, it put 0 at the boiling point of water and 100 at the freezing point! (This was reversed by others three years later, after Celsius's untimely death in 1744.) The centigrade scale was more logical than Fahrenheit's, and likely appealed more to scientists who were developing a unit system based upon powers of 10. The ninth CGPM officially renamed it the Celsius scale in 1948.
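The two relative scales are related by the familiar linear conversion (a standard formula, quoted here for convenience), which reproduces the benchmarks mentioned above:

\[
t_{\text{F}} = \tfrac{9}{5}\,t_{\text{C}} + 32, \qquad \text{so } 0\ ^{\circ}\text{C} \rightarrow 32\ ^{\circ}\text{F}, \quad 100\ ^{\circ}\text{C} \rightarrow 212\ ^{\circ}\text{F}, \quad 37\ ^{\circ}\text{C} \rightarrow 98.6\ ^{\circ}\text{F}
\]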
As useful as these early scales were, they were both relative — zero and the other benchmarks were based upon arbitrary chemical or physical processes. As early as 1702, however, Guillaume Amontons (of Amontons' gas law fame), a French physicist, noted that there should be some sort of temperature scale with an absolute minimum — absolute zero, as it were. In 1787 and 1802, respectively, French scientists Jacques Charles and Joseph Louis Gay-Lussac noted how the volume of a gas varied with temperature under constant pressure; Gay-Lussac was the first to note how a factor of 273 was involved. Finally, in 1848, William Thomson, Lord Kelvin, published a paper titled "On an Absolute Thermometric Scale" (2), in which he argued for an absolute scale having the same degree size as the Celsius degree and having zero degrees at the minimum possible temperature, which he argued was about 273 degrees below 0 °C. The Kelvin absolute temperature scale is named in Lord Kelvin's honor.*
In 1954, the 10th CGPM formally defined the temperature of the triple point of water (the point in the phase diagram of water at which all three phases can exist in equilibrium) as 273.16 degrees on the Kelvin scale. In 1968, the 13th CGPM defined the size of the unit as 1/273.16 of the thermodynamic temperature of the triple point of water, and recommended that the unit be called the "kelvin," with the abbreviation K, rather than "degrees Kelvin" or "°K". Note that the temperature of the triple point is a defined temperature, not an experimental one, and the size of the kelvin unit is based upon it.
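In equation form (standard relations, quoted for convenience), the definition fixes the triple point exactly, and because the triple point sits at 0.01 °C, everyday Celsius temperatures follow from a simple offset:

\[
T_{\text{triple}} \equiv 273.16\ \text{K}, \qquad T/\text{K} = t/^{\circ}\text{C} + 273.15
\]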
Electrical current is the flow of electric charges, either electrons or ions. The unit of current is the ampere (sometimes shortened to "amp," but whose sanctioned abbreviation is A), named after André-Marie Ampère, a French physicist who proposed some fundamental ideas in electromagnetism. In 1893, the ampere was defined as the current needed to deposit 0.001 118 g of silver per second from an aqueous silver nitrate solution. However, at the ninth CGPM in 1948, a new definition was adopted: 1 A is the current that, flowing in two straight parallel conductors of infinite length placed 1 m apart in vacuum, would produce a force between them of 2 × 10⁻⁷ newtons per meter of length. This changed the value of the coulomb by only about 0.01%.
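The value 2 × 10⁻⁷ is not arbitrary: it follows from the standard expression for the magnetic force per unit length between two parallel currents, and choosing it fixed the magnetic constant at μ₀ = 4π × 10⁻⁷ N/A² exactly (a textbook result, shown here as a check):

\[
\frac{F}{L} = \frac{\mu_{0} I_{1} I_{2}}{2\pi d} = \frac{\left(4\pi\times10^{-7}\ \text{N/A}^{2}\right)(1\ \text{A})(1\ \text{A})}{2\pi\,(1\ \text{m})} = 2\times10^{-7}\ \text{N/m}
\]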
In the next column, we will consider the rest of the base units and a few issues that concern us about all of these units.
* Although William Thomson was knighted in 1866 for his plethora of scientific achievements — including work in thermodynamics, laying the first transatlantic cable, oceanography, electrodynamics, and the age of the Earth — he was not given his baronage until 1892. The title Lord Kelvin derives from the River Kelvin, a river that passes by the University of Glasgow, Scotland, UK, where Thomson was on the faculty. (See Figure 3.) Thus, the name of a river is now permanently ensconced in the fundamental units of science. Kelvin is also rather famous for some spectacularly wrong predictions about future technology. He is said to have stated: "Heavier-than-air flying machines are impossible" and "Radio has no future." Such blunders are perfect examples of Clarke's first law (after science fiction author Arthur C. Clarke): "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
Figure 3: The River Kelvin in Glasgow, Scotland, UK. The river's name is now irreversibly attached to the standard unit of temperature. Photo by Finlay McWalter.
Thomson died in 1907; his remains were interred in the nave of Westminster Abbey, right next to Isaac Newton. He had no heirs or close relatives, so the title of Lord Kelvin died with him and reverted to the Crown. Although in theory it is possible that the Crown might bestow the title of Lord Kelvin on someone else, those who understand the peerage system better than I think that the chances are very remote. Hence, we might have seen the first and last Lord Kelvin.
Thanks to my colleague Stephen Benn of the Royal Society of Chemistry for details concerning hereditary baronies in the UK peerage system.
David W. Ball is a professor of chemistry at Cleveland State University in Ohio. Many of his "Baseline" columns have been reprinted in book form by SPIE Press as The Basics of Spectroscopy, available through the SPIE Web Bookstore at www.spie.org. His most recent book, Field Guide to Spectroscopy (published in May 2006), is available from SPIE Press. He can be reached at d.ball@csuohio.edu; his website is academic.csuohio.edu/ball.
(1) D.W. Ball, Spectroscopy 22(1), 14–20 (2007).
(2) W. Thomson, Phil. Mag. 1, 100–106 (1882).