Generating Extremes: Temperatures

By Kenneth Freeman

‘Extreme’ is a relative word: an Antarctic -30°C may seem extremely cold to you or me, but it’s fairly standard for an emperor penguin. In the laboratory, extremes can be far more dramatic: we approach the absolute limits of physical properties like temperature, reaching levels that can’t easily be comprehended. At these farthest limits we can explore exciting science which broadens our understanding of the fundamental forces in nature and provides insight that can lead to new technologies.

There are lots of ways to take a physical object to the extreme: examples include extremes of temperature, extremes of size (very big or very small), and extremes of pressure. These three properties – temperature, size, and pressure – are quite familiar from our everyday lives. Hot and cold, big and small are simple enough; pressure may be less intuitive at first, but its effects don’t take much thought to understand.

In the sciences, we can consider lots of other extremes too: for example concentration, magnetic field and time (e.g. looking at incredibly fast processes). We can also combine these different extremes to create a multitude of extreme environments. Here we’ll look at the methods used to produce these conditions in the lab, and how they compare to the extremes we see in nature.

Temperature scale in kelvin (K), showing some examples of the temperatures we’re looking at.

We feel and react to temperatures in the range of around -10°C to 30°C every day, depending on season and climate. We encounter higher and lower temperatures when cooking and storing food (say, -20°C to 200°C), and higher still in things like open flames (around 1000°C for a candle flame). Anything outside this range lies in the realm of industry or scientific research.

In the sciences, temperature is described in terms of the Kelvin scale. On this scale ice melts at about 273 K, the temperature on a warm summer’s day is about 298 K, and water boils at about 373 K. The lowest temperature on the scale, called ‘absolute zero’, is zero kelvin (written 0 K). But there’s a catch: the closer you get to absolute zero, the more cooling power is required to get closer still, and as a result absolute zero can never actually be reached. Let’s see how close we can get.
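
Since we’ll be jumping between the two scales, here is a minimal sketch of the conversion (the offset is 273.15; Python is simply the language chosen for illustration here):

```python
def celsius_to_kelvin(t_celsius: float) -> float:
    """Convert a temperature from degrees Celsius to kelvin."""
    return t_celsius + 273.15

# The landmarks mentioned above:
for label, t_c in [("Ice melts", 0.0), ("Warm summer's day", 25.0), ("Water boils", 100.0)]:
    print(f"{label}: {t_c:.0f} °C is about {celsius_to_kelvin(t_c):.0f} K")
```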

Low Temperatures – cryogenics

Why would we want to cool a material down? For scientists, the main reason is usually to get rid of some of the unwanted vibrations of the atoms that occur at higher temperatures. Removing that ‘background noise’ can allow subtle and delicate effects to emerge that are masked at higher temperatures by all the jiggling around that’s going on. Some of these effects are due to the quantum nature of the atoms or electrons in the material, and can be quite counterintuitive.

When we get to temperatures below around 100 K, we usually say that we have entered the ‘cryogenic’ range. (This word comes from the Greek roots ‘cryo’, meaning cold, and ‘genic’, related to the idea of production.) Getting below 100 K typically requires liquid nitrogen, which can cool things down to 77 K, or liquid helium, which can cool things down to 4 K. To get below 4 K requires a dilution refrigerator, which can reach the millikelvin (mK) range or even below. And if we need to get really cold, an adiabatic nuclear demagnetisation refrigerator (NDR for short) can cool samples down to the microkelvin (µK) range.

Chilly: at only 1 K, the Boomerang Nebula is the coldest known natural environment – but it’s not the coldest place in the universe. That’s much closer to home!

Let’s start with liquid nitrogen. Ordinary air is made of around 78% nitrogen and 21% oxygen, with the rest made up of other gases, so obtaining gaseous nitrogen is easy. It can be separated out from the air using several methods: for example, we can use passive materials that filter out everything but the nitrogen through adsorption or diffusion. The nitrogen gas can then be liquefied through the Joule-Thomson effect, where a gas is cooled by expansion when we force it through a thermally insulated valve. (Think of spraying deodorant onto your hand from a can – it’s cold, right? That’s the Joule-Thomson effect.) If the temperature of the gas after this throttling process is below the boiling point, the gas will condense into a liquid.
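
To get a feel for the size of the effect, the temperature drop across the valve is roughly the Joule-Thomson coefficient times the pressure drop. The sketch below uses an illustrative coefficient of about 0.25 K per bar for nitrogen near room temperature – an approximate, textbook-style figure rather than a value from this article – and in reality the coefficient varies with both temperature and pressure:

```python
# Toy estimate of Joule-Thomson cooling: dT ≈ mu_JT * dP for a throttled gas.
# The coefficient below is an approximate, illustrative value for nitrogen near
# room temperature; in reality mu_JT varies with both temperature and pressure.

MU_JT_NITROGEN = 0.25  # K per bar, approximate near 300 K

def joule_thomson_temperature_drop(pressure_drop_bar: float,
                                   mu_jt_k_per_bar: float = MU_JT_NITROGEN) -> float:
    """Rough temperature drop (in K) for a gas forced through an insulated valve."""
    return mu_jt_k_per_bar * pressure_drop_bar

# Throttling nitrogen from 200 bar down to 1 bar:
drop = joule_thomson_temperature_drop(200 - 1)
print(f"Estimated cooling: about {drop:.0f} K")
```

A single pass like this is nowhere near enough to reach 77 K, which is why practical liquefiers pre-cool the incoming gas with the cold outgoing gas and run this compress-cool-expand cycle repeatedly, as described a little further on.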

Alternatively, we can first liquefy the air – with both the oxygen and the nitrogen still present – and then separate it into its constituents through cryogenic air separation. This technique makes use of the different boiling points of the condensed gases, and is similar to fractional distillation.

Other gases can be liquefied in basically the same way. First the gas is compressed, which causes it to warm up. The warm, compressed gas is then cooled by conventional refrigeration methods, and then expanded in a Joule-Thomson process, causing it to cool further. Repetitions of this cycle can be used to liquefy helium. Since the boiling point of liquid helium is around 4 K, immersing an object in liquid helium will cool it to 4 K (provided there’s enough helium that it doesn’t all boil off first!).

But what if we want to get even colder than 4 K? We can do this using a clever technique called dilution refrigeration. Dilution refrigerators use two helium isotopes: the highly stable and common ⁴He, and the much rarer ³He. At low temperatures, a mixture of these two isotopes will spontaneously separate into two phases – a ³He-rich phase in which almost all of the atoms are ³He, and a ³He-poor phase in which only about 7% of them are, with the rest ⁴He. The boundary between these two phases acts a bit like the nozzle in the Joule-Thomson effect: as ³He crosses that boundary from the ³He-rich phase to the ³He-poor phase, it absorbs heat from its surroundings, and cools them down.

There are good animations online that explain how dilution refrigerators work in detail. The technique may seem complex, but it is also powerful: it can cool the mixing chamber to temperatures of a few millikelvin, i.e. a few thousandths of a degree above absolute zero.
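
For those who like numbers, a commonly quoted rule of thumb for the cooling power available at the mixing chamber – an approximation from the cryogenics literature, not a figure from this article – is Q̇ ≈ 84 ṅ₃T² watts, where ṅ₃ is the ³He circulation rate in moles per second and T is the mixing-chamber temperature in kelvin. A minimal sketch, with an illustrative circulation rate:

```python
# Rule-of-thumb cooling power of a dilution refrigerator:
#     Q_dot ≈ 84 * n3_dot * T**2   (watts)
# where n3_dot is the 3He circulation rate in mol/s and T is the mixing-chamber
# temperature in kelvin. Treat this as an approximation, not an exact formula.

def dilution_cooling_power(n3_dot_mol_per_s: float, temperature_k: float) -> float:
    """Approximate cooling power (W) at the mixing chamber."""
    return 84.0 * n3_dot_mol_per_s * temperature_k ** 2

# An illustrative circulation rate of ~100 micromol/s at 10 mK:
q = dilution_cooling_power(100e-6, 10e-3)
print(f"Cooling power: about {q * 1e6:.2f} microwatts")
```

The T² dependence is why dilution refrigerators lose cooling power so quickly as they approach their base temperature.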

To go lower than that requires a complete change in technique. We must switch to a device that is nothing like the kind of refrigerator we’re used to in everyday life: the adiabatic nuclear demagnetisation refrigerator, or ‘NDR’. Here’s how it works:

  1. An isolated piece of metal is placed in a strong magnetic field. This causes the magnetic dipoles of its atomic nuclei to align with the applied field, which reduces the metal’s magnetic entropy. The lost magnetic entropy reappears as thermal entropy, and as a result the metal heats up.
  2. This added heat is then removed by placing the metal in contact with a heat sink (liquid ⁴He, say) while the applied magnetic field is kept constant.
  3. The heat sink is then removed, and the applied field is slowly reduced. With the field no longer there to align them, the magnetic dipoles will re-orient themselves randomly. This increases their magnetic entropy, which they get from the thermal entropy of the material: as a result, the material cools down! (This is the ‘adiabatic demagnetisation’ bit.)
  4. The cold metal is placed in contact with the sample we wish to cool, absorbing heat from it and thus cooling it down.

We can keep going round this loop, progressively extracting more and more heat from our sample, and taking it to lower and lower temperatures: down to a microkelvin, i.e. one millionth of a degree above absolute zero. These temperatures represent the current state of the art in cryogenics.
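
The physics behind step 3 can be captured in one relation: for an idealised adiabatic demagnetisation, the nuclear spins’ entropy depends only on the ratio B/T, so the temperature falls in proportion to the applied field, T_final ≈ T_initial × (B_final/B_initial), until the nuclei’s tiny internal field sets a floor. A minimal sketch of that idealised scaling, with illustrative numbers:

```python
# Idealised adiabatic demagnetisation: the nuclear-spin entropy depends on the
# ratio B/T, so holding it constant while the field is ramped down gives
#     T_final ≈ T_initial * (B_final / B_initial)
# (valid as long as B_final stays well above the nuclei's tiny internal field).

def demagnetisation_final_temperature(t_initial_k: float,
                                      b_initial_tesla: float,
                                      b_final_tesla: float) -> float:
    """Final temperature after an ideal adiabatic field ramp-down."""
    return t_initial_k * (b_final_tesla / b_initial_tesla)

# Illustrative numbers only: start at 10 mK (pre-cooled by a dilution
# refrigerator) in an 8 T field, then ramp the field down to 8 mT:
t_final = demagnetisation_final_temperature(10e-3, 8.0, 8e-3)
print(f"Final temperature: about {t_final * 1e6:.0f} microkelvin")
```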

Since the cosmic microwave background (the ‘afterglow’ of the big bang) gives an average background temperature of 2.7 K in outer space, the coldest places in the universe are actually in laboratories here on Earth!

High Temperatures – nuclear fusion

Unlike cryogenics, high-temperature physics is not a single specialised field: we encounter high temperatures across astrophysics, particle physics and nuclear physics, for example. High temperature means high energy, and in principle there’s no limit to how high we can go, so the high end of the temperature scale stretches much further away from 300 K than absolute zero does.

Discussing the highest temperatures in the universe can be difficult, because most of the universe is too far away to measure directly: we have to infer its temperature from other kinds of measurement.

Inside the JET vacuum vessel. This shows the inside of the torus, where nuclear fusion takes place. The overlaid colour image was taken with a normal, visible-light camera. There is plenty of plasma, but we can only see its cooler edges; the plasma at the centre is so hot that it radiates mainly at the ultraviolet end of the spectrum.

The most energetic process ever to have occurred was the beginning of the universe itself, with enormous accompanying temperatures: along the lines of 10²⁶ K, i.e. a ‘1’ followed by twenty-six zeros! The most energetic processes in the universe today include the formation of neutron stars. The supernova that forms them is extremely energetic – the most powerful explosion in the modern universe – and the core temperature of a newly formed neutron star (within the first minute or so of its life) can reach around 10¹¹ K.

In terms of creating extreme temperatures on Earth, one key motivation is to develop controlled nuclear fusion for power generation. What exactly does that involve, and how difficult is it?

Nuclear fusion occurs naturally in stars, where the fusion of light elements produces the energy required to counteract the gravitational forces that would otherwise collapse the star. Stellar fusion is also responsible for building up most of the chemical elements heavier than helium, and it releases a great deal of radiated energy along the way.

Human nuclear power-generation technology today uses nuclear fission, which generates energy not by fusing small nuclei, but by splitting large ones. Fusion, however, offers a better alternative: it has a higher energy yield, produces less radioactive waste, and uses a fuel based on hydrogen isotopes that can be readily extracted from seawater.

Our attempts to achieve nuclear fusion so far, while increasingly successful, still fall short of the key aim for energy production: starting a self-sustaining fusion reaction that yields more energy than was put in to start it. This goal – ‘breaking even’ – is thus the current focus of fusion research.
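
This break-even goal is usually quantified through the fusion gain factor Q, the ratio of fusion power produced to heating power supplied: Q = 1 is break-even, and a power plant needs Q well above 1. A minimal sketch, with purely illustrative numbers:

```python
def fusion_gain(fusion_power_mw: float, heating_power_mw: float) -> float:
    """Fusion gain factor Q: fusion power out divided by external heating power in."""
    return fusion_power_mw / heating_power_mw

# Purely illustrative numbers, not measurements from any particular experiment:
print(f"Q = {fusion_gain(16.0, 24.0):.2f}  (below break-even)")
print(f"Q = {fusion_gain(500.0, 50.0):.0f}    (the sort of gain a power plant would need)")
```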

To achieve useful fusion, a fuel of light nuclei (typically the hydrogen isotopes deuterium and tritium) must be held at high temperature and pressure for long enough to ‘ignite’, at which point the process becomes self-sustaining. Once this has been achieved, the hot plasma must be confined safely while fusion continues, which is the main technical challenge. It is being addressed through two main avenues: inertial confinement and magnetic confinement.
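
Before looking at those two approaches, it’s worth quantifying what ‘hot enough, dense enough, for long enough’ means. The three requirements are often rolled into the ‘triple product’ of plasma density n, temperature T and energy confinement time τ; for deuterium–tritium fuel, ignition roughly requires n·T·τ ≳ 3×10²¹ keV·s·m⁻³ (a commonly quoted approximate threshold, not a figure from this article). A minimal sketch:

```python
# Lawson-style "triple product" check for deuterium-tritium fusion.
# Ignition roughly requires n * T * tau above ~3e21 keV*s/m^3 (approximate threshold).

IGNITION_THRESHOLD = 3e21  # keV * s / m^3, approximate

def triple_product(density_per_m3: float, temperature_kev: float,
                   confinement_time_s: float) -> float:
    """Product n * T * tau used to gauge how close a plasma is to ignition."""
    return density_per_m3 * temperature_kev * confinement_time_s

# Illustrative tokamak-like numbers: n ~ 1e20 m^-3, T ~ 10 keV, tau ~ 1 s
value = triple_product(1e20, 10.0, 1.0)
status = "above" if value >= IGNITION_THRESHOLD else "below"
print(f"n*T*tau = {value:.1e} keV s / m^3, {status} the rough ignition threshold")
```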

Inertial confinement uses a spherical pellet of fuel covered in an ablator – a layer of material, such as plastic, that is suddenly heated by X-rays. The sudden heating causes the ablator to explode outwards, away from the centre of the pellet, which in turn drives a high-temperature, high-pressure shock wave that compresses the pellet from all sides. This approach, used at the National Ignition Facility in the USA, creates the conditions required for fusion and confines the fuel through the symmetrical implosion of the pellet.

At the National Ignition Facility, the 2 mm fuel pellet is held inside a gold cylinder called a ‘hohlraum’ (a German word meaning a hollow space or cavity). This cylinder is heated by 192 powerful laser beams, which enter the hohlraum through the open ends and cause it to emit intense X-rays. These near-uniform X-rays impinge on the pellet and cause the plastic ablator to explode and the fuel pellet to implode at over 350 km/s. This heats the centre of the pellet to a temperature of over 10⁷ K (ten million kelvin).
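
Those numbers also tell us how quickly this all happens: the imploding shell only has about a millimetre to travel, so at over 350 km/s the compression is over in a few nanoseconds. A back-of-the-envelope sketch (order of magnitude only):

```python
# Order-of-magnitude implosion time for the fuel pellet:
# the shell has roughly the pellet radius (~1 mm) to travel at ~350 km/s.

pellet_radius_m = 1e-3        # half of the ~2 mm pellet diameter
implosion_speed_m_s = 350e3   # "over 350 km/s" from the text

implosion_time_s = pellet_radius_m / implosion_speed_m_s
print(f"Implosion takes on the order of {implosion_time_s * 1e9:.1f} nanoseconds")
```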

Magnetic confinement uses strong magnetic fields to guide the fusing plasma into a confinable shape. In a ‘tokamak,’ for example, the plasma is confined within a torus, i.e. a doughnut shape. (The word ‘tokamak’ comes from a Russian acronym for ‘toroidal chamber with magnetic coils.’) The largest example, currently under construction, is the International Thermonuclear Experimental Reactor (ITER) in France, a collaboration between the European Union, India, Japan, China, Russia, South Korea, and the United States.

In a tokamak reactor, the interior of the torus is evacuated and a gaseous hydrogen-based fuel is introduced. This breaks down into a high-energy plasma as a large current is passed through it. The plasma is heated to around 10⁸ K (one hundred million kelvin) by that current, by applied radio-frequency and microwave heating, and by the injection of high-energy beams of deuterium. At these temperatures – around ten times the temperature at the core of the Sun! – the plasma particles begin to fuse. This is considered to be the more promising avenue for nuclear fusion, with the ITER facility due to undergo commissioning after 2021.
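
One way to make sense of such enormous temperatures is to convert them into the typical kinetic energy of a plasma particle, E ≈ k_B·T: at 10⁸ K that comes to roughly 10 keV, which is the energy scale nuclei need to overcome their mutual electrical repulsion and fuse. A minimal sketch of the conversion:

```python
# Convert a plasma temperature to a characteristic particle energy, E ≈ k_B * T,
# expressed in kilo-electronvolts (the unit plasma physicists usually quote).

BOLTZMANN_J_PER_K = 1.380649e-23   # J/K
JOULES_PER_KEV = 1.602176634e-16   # J per keV

def temperature_to_kev(temperature_k: float) -> float:
    """Thermal energy k_B*T in keV for a given temperature in kelvin."""
    return BOLTZMANN_J_PER_K * temperature_k / JOULES_PER_KEV

for t in (1.5e7, 1e8):  # roughly the Sun's core, and a tokamak plasma
    print(f"{t:.1e} K  ->  about {temperature_to_kev(t):.1f} keV")
```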

Where next?

Exploring materials at the extremes of the temperature scale has provided us with promising new technologies and has given us the chance to test our current theories of matter and the fundamental forces of the universe. Ever lower temperatures will continue to be reached in research labs across the world, perhaps even moving into space to take advantage of the low ambient temperatures there. High temperatures will be produced in particle colliders and in the race towards successful nuclear fusion. The current limits of achievable temperature will, of course, keep being pushed as the answers we find lead to further questions, and as exciting new technologies emerge from the exotic states and properties of matter we discover.

Image credits:

JET vessel and plasma – Copyright EUROfusion, link: http://www.ccfe.ac.uk/images_detail.aspx?id=15

Boomerang Nebula – ESA/NASA, link: https://www.spacetelescope.org/images/heic0301a/