It probably says more about me than anything else, but I've been asked the following question at least half-a-dozen times. In the interests of having a place to point future questioners I offer the following answer to the (apparently common) question:
if the temperature today is 0°C, and it will be twice as cold tomorrow, what will the temperature be tomorrow?
And the instant answer is:
-136.575°C.
Which is, in human terms, rather cold. The lowest confirmed temperature recorded on Earth is -89.2°C, measured on 21st July 1983 at Vostok, a Russian research station in Antarctica.
It's also a rather dramatic drop in temperature. A lot colder than most people guess when they ask this or similar questions.
So, how did I arrive at this apparently drastic figure, and how do you divide zero by two and get a number other than zero in the first place?
The answers to both questions derive from the same, usually overlooked, point: the Celsius temperature scale, like the Fahrenheit scale (and many other now-obsolete temperature scales such as the Newton, Rømer, Delisle, Leyden, Dalton, Wedgwood, Hales, Ducrest, Edinburgh and Florentine scales), is a relative scale.
0°C isn't the same as 0mm of width or 0kg of mass. Both of the latter are absolute measures: you can't get narrower than 0mm and you can't have less mass than 0kg.
You can get colder than 0°C, however, since 0°C is just the freezing point of water.
So, to arrive at the frigid forecast above I simply converted the first figure to an absolute temperature scale (the Kelvin scale), halved it and then converted it back to Celsius.
Makes sense now? I didn't think so.
Let's step back a bit and take a look at the relative temperature scales, starting with the oldest temperature scale still in regular use.
The Fahrenheit scale, developed in 1724 by Daniel Gabriel Fahrenheit, used mercury to measure changes in temperature. Mercury expands and contracts as the temperature changes, and this volume change is both uniform across a wide range of temperatures and large enough to measure accurately.
In addition, mercury is cohesive rather than adhesive, so it doesn't stick to the only transparent substance Fahrenheit had access to: glass. Finally, mercury is bright silver, making it easy to visually distinguish changes in liquid volume in a narrow tube.
Fahrenheit began by placing his mercury thermometer in a mixture of salt, ice and water. The point the mercury settled to on his thermometer was considered zero.
He then placed the thermometer in a mixture of ice and water. The point the mercury settled to this time was set as 30. Finally, 'if the thermometer is placed in the mouth so as to acquire the heat of a healthy man', the point the mercury reached was set to 96.
Using this scale, water boils at 212 and it freezes at 32. This latter number is an adjusted figure on Fahrenheit's part: it made the difference between boiling and freezing a relatively clean 180.
[NB, the above chronology isn't the only possible process Fahrenheit undertook. The Wikipedia article on the Fahrenheit temperature scale notes several other mooted explanations. Cecil Adams's The Straight Dope site also covers the origins of the Fahrenheit scale, focusing on the more amusing (or bemusing) possibilities.]
Less than twenty years after Fahrenheit's scale was developed, the Celsius scale was created, in 1742, by the Swedish astronomer Anders Celsius. His scale used the freezing and boiling points of water as the two key markers and put 100 degrees between the two temperatures.
Unlike today, however, in Celsius's original scale, water's boiling point was 0 and the freezing point 100.
In the years after his death in 1744, the numbering scheme was reversed. This change is routinely credited to another great Swede, Carl Linnaeus (also known as Carolus Linnaeus) but the evidence for this is circumstantial and not particularly convincing.
Numbering scheme aside, the modern Celsius scale (used pretty much everywhere on earth except the United States) is different from the one Celsius developed.
It doesn't make much difference in day-to-day use, but the basis of the modern Celsius scale is the triple-point of water. The triple-point of a substance is the temperature and pressure at which the solid, liquid and gaseous states of said substance can all co-exist in equilibrium. And the triple-point of water is defined as 0.01°C.
As well, each degree Celsius is now defined abstractly. In Celsius's original scale, a one-degree change in temperature was defined as 1/100th of the interval between two externally referenced points (ie the freezing and boiling points of water).
Today, a degree Celsius is defined as the temperature change equivalent to a single degree change on the ideal gas scale.
The ideal gas scale brings us almost to the point (finally, I hear you cry). As noted at the beginning of all this, the temperature scales above are relative scales: they give you a useful number to describe the thermodynamic energy of a system but they do so by creating a scale which is relative to some physical standard (whether that be the triple-point of water or the 'heat of a healthy man').
Back in 1787, however, Jacques Charles was able to show that, for any given increase in temperature, all gases held at constant pressure undergo the same proportional increase in volume.
Rather handily, this allows us to predict gaseous behaviour without reference to the particular gas being examined. It's as if gases were fulfilling some Platonic conceit, all acting in a fashion essentially identical to an imagined ideal gas. Hence the 'ideal gas scale' which describes the behaviour of gases under changing pressure without reference to any particular gas.
The Platonic ideal falls apart at very high pressures because of simple physical and chemical interactions. For the sort of pressures needed to use a gas as a thermometric medium (ie measurer of temperature) on earth, however, all gases exhibit the same, very simple behaviour described by the following equation:
pV = [constant]T
or in words:
pressure multiplied by volume = [a derived constant] multiplied by temperature
Which means if you keep the pressure constant, as the temperature changes so does the volume. Or, if you change the temperature and keep the volume constant, the pressure goes up or down in direct relation to the temperature's rise or fall.
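To see the relationship in action, here's a minimal Python sketch. The constants are standard values; the scenario (one mole of an imagined ideal gas at one atmosphere) is just my illustration:

```python
R = 8.314      # J/(mol*K), the molar gas constant
n = 1.0        # amount of gas, in moles
p = 101_325.0  # pressure in pascals, held constant (1 standard atmosphere)

# pV = nRT, so V = nRT/p: doubling the absolute temperature doubles the volume.
for t in (273.15, 546.30):
    v = n * R * t / p
    print(f"T = {t:.2f}K -> V = {v * 1000:.1f} litres")
```

Run it and the volume goes from roughly 22.4 litres to roughly 44.8 litres: double the absolute temperature, double the volume.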
One very nifty thing about this is the way it makes it possible to create a temperature scale which is independent of the medium used to delineate the scale.
Back in 1887, Pierre Chappuis conducted studies at the International Bureau of Weights and Measures (BIPM) using gas thermometers filled with hydrogen, nitrogen, and carbon dioxide as the thermometric media. Regardless of the gas he used, he found very little difference in the temperature scale generated: if the temperature of the gas changed by some amount T while the pressure, P, was held constant, the increase in volume, V, was the same regardless of the gas being used to set the scale.
This behaviour has been recognised and accepted as the fundamental measure of temperature, since it is derived from measures of pressure and volume that aren't dependent on the substance being measured.
One of the most important consequences of this discovery is the recognition that there is a naturally defined absolute zero. Extrapolate the pressure exerted by a gas (held at constant volume) down to zero and you arrive at the lowest possible temperature. It is impossible to get 'colder' than this, since at this temperature essentially all thermal motion has ceased. (And, before anyone asks: yes, I know what negative temperature is, and it isn't a temperature 'below absolute zero.' Systems with negative temperature are actually hotter than they are when they have positive temperature.)
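You can see where that zero comes from with a small sketch. The readings below are made up (though physically plausible) constant-volume gas thermometer values; the code fits the straight line they sit on and asks where the pressure would reach zero:

```python
# Hypothetical readings from a constant-volume gas thermometer:
# (temperature in deg C, pressure in kPa). Illustrative values only.
t1, p1 = 0.0, 100.0
t2, p2 = 100.0, 136.6

slope = (p2 - p1) / (t2 - t1)    # kPa per degree Celsius
absolute_zero = t1 - p1 / slope  # where the line crosses zero pressure

print(f"Extrapolated absolute zero: {absolute_zero:.1f} deg C")  # ~ -273.2
```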
In 1933 the International Committee of Weights and Measures adopted a scale based on absolute temperature: the Kelvin scale. It uses the same size of degree as the modern Celsius scale, so a one-degree change measured on the Kelvin scale represents the same change in temperature as a one-degree change measured on the Celsius scale.
The zero-point for the Kelvin scale, however, isn't an arbitrary one (eg the freezing point of water) but the absolute one.
Absolute zero is, as it happens, equivalent to -273.15°C, so converting between K and C is a simple matter of addition or subtraction:
C = K - 273.15
K = C + 273.15
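In code, the conversions are one-liners. A quick Python sketch (the function names are my own, not any standard library's):

```python
def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

print(celsius_to_kelvin(0.0))     # 273.15
print(kelvin_to_celsius(273.15))  # 0.0
```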
So 0 degrees Celsius is 273.15 Kelvin. Using standard notation for each scale we can re-state this sentence thus:
0°C = 273.15K
Note there is no degree symbol used when denoting a temperature in Kelvin. And, just as there is no degree symbol, the word isn't used either: the phrase 'degrees Kelvin' is incorrect. Just say 'kelvin' (as a unit name it is written in lower case, like other SI units named after people).
Which brings us, finally, to explaining how I arrived at the temperature I listed at the beginning of this article. As I noted above, the Celsius and Fahrenheit scales are relative scales, so you can't compare two different temperatures measured using these scales absolutely.
20°C is not twice as warm as 10°C, since both are measures relative to the triple-point of water, not to an absolute zero.
The Kelvin scale, however, is an absolute scale. Different values measured using this scale are related in absolute comparative terms. 20K is twice as warm as 10K (although both values are pretty damned cold relative to what you or I are comfortable with).
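To make that concrete, here's a quick check of the 10°C versus 20°C comparison on the absolute scale:

```python
t10_k = 10 + 273.15   # 283.15K
t20_k = 20 + 273.15   # 293.15K
print(t20_k / t10_k)  # ~1.035: 20 deg C is only about 3.5% warmer than 10 deg C
```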
So, to find out what temperature (in degrees Celsius) would be 'twice as cold' as 0°C (ie half its absolute temperature), I simply converted the value to Kelvin:
0 + 273.15 = 273.15K
Divided this value by 2:
273.15/2 = 136.575K
and converted it back to degrees Celsius:
136.575 - 273.15 = -136.575°C
Working in the other direction, twice as hot as 0°C is easy to calculate. It's 273.15°C. Which is rather hotter than any human can handle.
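The whole calculation, in both directions, fits in a few lines of Python (a sketch of the arithmetic above, nothing more):

```python
temp_k = 0.0 + 273.15  # 0 deg C in Kelvin: 273.15K

twice_as_cold = temp_k / 2 - 273.15  # halve, convert back: -136.575 deg C
twice_as_hot = temp_k * 2 - 273.15   # double, convert back: 273.15 deg C

print(f"Twice as cold as 0 deg C: {twice_as_cold:.3f} deg C")
print(f"Twice as hot as 0 deg C: {twice_as_hot:.2f} deg C")
```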
If nothing else, this demonstrates how narrow a range of temperatures suit human beings. Let's presume -10°C – 50°C is a useful range of liveable temperatures for human beings.
I'm being generous with this range. The low is, in human terms, well below the freezing point of water. And the high is, again in human terms, a long way above blood temperature. This range is only acceptable as a liveable range if we assume 1) the range refers to measured temperatures and 2) we have technology capable of keeping experienced temperatures (eg, in a dwelling or next to human skin) from reaching these extremes of heat and cold.
Converting this to Kelvin, we have a range of 263K – 323K. (I'm leaving the 0.15 off: it doesn't change the arithmetic, other than to needlessly complicate things.)
The lowest temperature in this range is 81% of the highest temperature in this range: 323K (50°C) is only 23% warmer than 263K (-10°C).
Change the liveable range to 0°C – 40°C (a range more genuinely liveable, especially if we assume only basic available technology) and the hottest we can reasonably handle is only 15% warmer than the coldest we can live with.
Be even more conservative, and restrict the range so it runs roughly through the human comfort zone: 10°C – 35°C (a range that goes from 'cold if you don't have warm clothing' through to 'hot in the sun but bearable if there's almost any sort of breeze') and the hottest weather we can comfortably manage is only 9% warmer than the coldest most of us are willing to deal with.
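Here's the same arithmetic for all three ranges in one place (percentages rounded to whole numbers, as above):

```python
ranges_c = [(-10, 50), (0, 40), (10, 35)]  # the three liveable ranges, in deg C

for low_c, high_c in ranges_c:
    low_k, high_k = low_c + 273, high_c + 273  # dropping the 0.15, as above
    warmer = (high_k - low_k) / low_k * 100
    print(f"{low_c} to {high_c} deg C: the top is only {warmer:.0f}% warmer")
```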
No wonder folk are concerned about a 0.6°C increase in global surface temperatures over the last 100 years.