

What is a Nuclear Power Plant?

A nuclear power plant is a facility that uses nuclear fission to generate electricity.


The cooling towers of Doel Nuclear Power Station on the river Scheldt near Antwerp as seen from Fort Liefkenshoek.

Nuclear fission is a process where atoms are split apart. This releases energy in the form of heat and radiation. The heat is used to produce steam, which turns a turbine to generate electricity.

Nuclear power plants have been used to generate electricity since the 1950s. The first was built at Obninsk, Russia, in 1954. Today, more than 400 nuclear power plants operate in over 30 countries around the world.

Nuclear power plants have some advantages over other types of power plants. They don’t produce air pollution or greenhouse gases while operating. And they can run for a year or more between refuelings.

However, nuclear power plants also have some disadvantages. They require a lot of expensive safety measures to prevent accidents, and the radioactive waste from nuclear reactors can be very dangerous. The radiation produced by nuclear fission can cause cancer and other health problems, which is why nuclear power plants are built with safety in mind – they have thick walls and concrete containment structures to keep the radiation from escaping.

Nuclear power plants are generally safe, but accidents can happen. The most serious nuclear accident in history occurred at the Chernobyl nuclear power plant in Ukraine on April 26, 1986. A sudden surge of power caused an explosion in one of the reactors and a fire that released large amounts of radioactive material into the environment. The accident resulted in the deaths of 31 people and forced the evacuation of more than 135,000 people from the surrounding area.

Nuclear power is an important part of the world’s energy mix. It produces electricity without emitting greenhouse gases, so it can help us meet our growing energy needs without contributing to climate change. The greenhouse gas emissions that nuclear power plants avoid each year are equivalent to taking roughly 80 to 100 million cars off the road – which is why we need to continue to invest in nuclear power.

Let’s Get Critical!

In some of my other posts, I’ve explained what’s meant by “criticality” with regards to nuclear reactors and that it’s not necessarily a bad thing. But what I didn’t really get into are some of the nuances of criticality – what’s meant by the terms critical mass and critical geometry, the difference between criticality in a nuclear reactor versus a nuclear weapon, and so forth. So this seems like a good time to take a deeper look at criticality – starting with a quick refresher on the basics.

To start with, the term “criticality” is, I guess you could say, non-judgmental. By that, I mean that the term is neither inherently good nor bad – it’s simply descriptive. When a uranium atom fissions it emits two or three neutrons; if, on average, exactly one of those neutrons goes on to cause a second fission then the reaction is said to be “critical.” If, on average, more than one of those neutrons causes another fission then the reaction is supercritical; fewer than one means the reaction is subcritical.


A critical chain reaction
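
To put some numbers on that bookkeeping, here’s a minimal sketch in Python. Here k is the average number of subsequent fissions each fission causes, and the starting population of 1,000 fissions per generation is just an arbitrary illustration:

```python
# A minimal sketch of the bookkeeping behind "critical": k is the average number
# of fissions each fission goes on to cause. k = 1 holds the population steady,
# k > 1 grows it, k < 1 shrinks it. The starting value of 1,000 fissions per
# generation is arbitrary - it's only there to make the trend visible.

def fissions_after(generations, k, start=1000):
    """Fissions per generation after the given number of neutron generations."""
    return start * k ** generations

for k in (0.99, 1.00, 1.01):
    print(k, [round(fissions_after(g, k)) for g in (0, 10, 100)])
# k = 0.99 (subcritical):   1000 -> 904 -> 366
# k = 1.00 (critical):      1000 -> 1000 -> 1000
# k = 1.01 (supercritical): 1000 -> 1105 -> 2705
```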

In a nuclear reactor, supercriticality means the reactor is increasing power (as during a startup), subcriticality means it’s reducing power or shutting down, and criticality is its normal operating state. A reactor can be critical while producing barely enough energy to heat a cup of coffee…or enough to light up a city, or anything in between. When we were starting up the reactor on the submarine I was stationed on, the announcement “the reactor is critical” was met with a yawn (if it was an early-morning startup) and a notation in our logs.

Now – say, to pick a number, that a neutron has to travel an average distance of one inch before it can be absorbed by a second uranium atom, causing it to fission. If that’s the case, is it possible for a ball of uranium an inch in diameter to sustain a chain reaction? Well…no – because most of the neutrons will be formed less than an inch from the surface, and many of them will escape from the ball of uranium first. For a ball of uranium to achieve criticality, it has to be large enough that most of the neutrons emitted remain within the uranium, where they can go on to cause a fission. For weapons-grade U-235 that turns out to be a sphere a little bigger than a softball, weighing a little over 100 pounds – this is the critical mass for a bare sphere of U-235 metal. Changing the density, the chemical composition, or the enrichment will change the critical mass, as will surrounding the sphere with materials that reflect escaping neutrons back into it.


If the fissionable material is spread out in a flat plane, most of the neutrons will escape without causing a fission; in a more compact shape (a sphere, for example) like the one shown above, those same neutrons can be absorbed by another uranium atom, allowing the fission reaction to continue.
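
As a rough check on the numbers above, here’s a back-of-the-envelope sketch that turns the commonly quoted bare-sphere figure of roughly 52 kg (a little over 100 pounds) of weapons-grade U-235 metal, at a density of about 19 g/cm^3, into the size of the sphere:

```python
import math

# A back-of-the-envelope sketch of the bare-sphere geometry described above.
# Assumptions: roughly 52 kg of weapons-grade U-235 metal at about 19 g/cm^3 -
# rounded, commonly quoted values, not a weapons-design calculation.

critical_mass_g = 52_000.0
density_g_cm3 = 19.0

volume_cm3 = critical_mass_g / density_g_cm3
radius_cm = (3.0 * volume_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)

print(round(volume_cm3), "cubic centimeters")   # ~2,700 cm^3
print(round(2 * radius_cm, 1), "cm across")     # ~17 cm in diameter
```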

OK – so now picture that sphere of uranium melted down into a paper-thin flat sheet. Will this achieve criticality? Well…no – because the only way a neutron can find another uranium atom in which to cause a fission is to be emitted in the plane of the sheet – the great majority will escape. But it’s also easy to visualize slowly crumpling this thin sheet of uranium until, eventually, most of the neutrons will remain within the uranium – this is a critical geometry. We need both a critical mass and a critical geometry to achieve criticality. In fact, an important aspect of nuclear criticality safety is making sure that one cannot achieve both of these at the same time.


So the first shape would weigh less than a critical mass and the second would weigh more.

And this brings up an interesting couple of questions – why is it that criticality in a reactor is not very thrilling, but criticality in a nuclear weapon is bad? And why is it that we need to add water to a nuclear reactor to achieve criticality, but not in nuclear weapons – especially if neutrons need to be moving slowly to cause a fission?

The answer to both of these questions begins with the amount of U-235 – the enrichment – of the uranium in question. In a nuclear reactor, there is far less U-235 and far more U-238 than there is in a nuclear weapon. This means there are fewer “target atoms” in a given volume of reactor fuel (where 3-6% of the uranium atoms are U-235) than in the same volume of weapons-grade uranium (in which over 90% of the atoms are U-235). It turns out that U-235 will fission if it’s struck by a fast neutron – it’s just not as likely as when the neutrons are moving at a more sedate pace. On top of that, fission caused by fast neutrons produces up to twice as many neutrons as fission caused by slower-moving neutrons, which also helps to make up for the lower efficiency of fast neutrons in causing fission.

In a reactor, this means that we need to give the neutrons the best chance possible to cause a fission, and that means slowing them down with a moderator like water. In weapons-grade uranium, on the other hand, there are so many U-235 atoms that even fast neutrons are likely to cause a fission. And since the neutrons in a nuclear weapon don’t need to be moderated and don’t travel as far, they can cause fission more quickly than in a reactor. This – along with the control rods in a reactor that absorb excess neutrons, and one other thing I’ll get to in just a moment – is why the chain reaction in a nuclear reactor is controlled and the one in a nuclear weapon is not.

The final piece of the criticality puzzle has to do with the two types of neutrons emitted during fission – prompt and delayed neutrons. Prompt neutrons are emitted immediately when the atom is split and they go on to cause fission quickly as well (within nano- or microseconds), but some neutrons don’t emerge for seconds or minutes – these are the delayed neutrons. In a nuclear reactor, which operates for days, weeks, or even months at a time, delayed neutrons contribute to the neutron population in the core, so they’re factored in along with the prompt neutrons when designing a reactor. It makes sense – a reactor startup can take several hours, so delayed neutrons play a role in controlling the reactor from the time the first control rods are pulled until they’re inserted to shut it down. In fact, were it not for these delayed neutrons it would be impossible for a person to control a reactor – and very difficult even for electronic systems; without them we would likely not have nuclear reactors at all. In a nuclear weapon, by comparison, there’s not enough time for delayed neutrons to make an appearance – nuclear weapons are critical (actually supercritical) on prompt neutrons alone.
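
To get a feel for how much difference the delayed neutrons make, here’s a rough sketch using textbook-style round numbers – a prompt generation time of about 0.0001 seconds, a delayed-neutron fraction of about 0.65%, and an average delay of roughly 10 seconds, all approximate – to compare how quickly power would double in a slightly supercritical reactor with and without them:

```python
# A rough sketch of why delayed neutrons make reactors controllable. With prompt
# neutrons alone, the time between fission generations in a thermal reactor is
# on the order of 0.0001 s; folding in the delayed fraction (about 0.65% of the
# neutrons, appearing roughly 10 s later on average) stretches the effective
# generation time enormously. Textbook-style numbers, for illustration only.

PROMPT_LIFETIME_S = 1e-4     # typical prompt-neutron generation time
DELAYED_FRACTION = 0.0065    # beta, the delayed-neutron fraction for U-235
DELAYED_DELAY_S = 10.0       # rough average delay of the delayed neutrons

def doubling_time_s(k, generation_time_s):
    """Approximate power-doubling time for a slightly supercritical reactor."""
    period = generation_time_s / (k - 1.0)   # e-folding time
    return period * 0.693                    # convert e-folding to doubling time

k = 1.001  # just barely supercritical, as during a slow power increase

# Prompt neutrons only: power would double in a small fraction of a second.
print(doubling_time_s(k, PROMPT_LIFETIME_S), "seconds")

# With delayed neutrons, the effective generation time is dominated by beta * delay,
# and the doubling time stretches to tens of seconds - slow enough to control.
effective = (1 - DELAYED_FRACTION) * PROMPT_LIFETIME_S + DELAYED_FRACTION * DELAYED_DELAY_S
print(doubling_time_s(k, effective), "seconds")
```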

There are a lot more aspects of nuclear criticality – criticality safety comes to mind, for example – but they start to get a bit more complicated so this seems like a good place to stop for the moment. So let’s do a quick rehash and then we can call it a day!

  • Criticality simply means that the number of fissions is staying constant over time – that one neutron from a fission goes on to cause another fission.
  • There is a minimum amount of fissile material that will sustain a critical chain reaction – this is called the critical mass.
  • It’s also important for the mass of material to be in a configuration in which most of the neutrons have a chance of causing a fission – this is the critical geometry – without both a critical mass and a critical geometry there will be no criticality.
  • In a nuclear reactor, the fissions occur at a controllable pace due to a number of factors, including the dependence on delayed neutrons to achieve criticality.
  • In a nuclear weapon, all of the fissions come from prompt neutrons, so the reaction proceeds much more rapidly.

Radiation: By The Numbers

So there’s this rock that’s sitting about a meter from where I’m sitting right now. It’s got a beautiful deep green color and it’s a mass of flat squarish crystals that are maybe 5 mm on a side and about 1 mm thick. It’s also radioactive, which is why I bought it at the Columbus (Ohio) rock and mineral show a few decades ago – it’s a uranium mineral called torbernite; this particular piece came from Morocco.


Figure 1: my torbernite-encrusted rock

Knowing that it’s radioactive I was eager to take some measurements when I got it home – and, as a radiation safety professional, I’ve got my own instruments. I can’t remember the readings when I first bought the rock, so give me a minute to grab my instruments and I can get some readings now. Ready for some numbers?

At a distance of 1 cm my Geiger probe gives me a count rate of about 250,000 counts per minute – a respectable count rate. But I’ve got additional detectors – let me see what they say. My “baby” sodium iodide detector (the crystal on this one is 1 inch tall and an inch in diameter) gives me a reading of 140,000 cpm – less than the Geiger counter.


Figure 2: My GM (in the clip on top of the meter) and my two sodium iodide detectors

I’ve got a larger sodium iodide as well – 2”x2”, or about 8 times the volume of the smaller crystal. That one gives me nearly a half-million counts per minute. The background count rates on each of these (that is, the count rate when the probes are away from any radioactive materials) are about 75 cpm for the Geiger counter, about 3,500 cpm with the baby sodium iodide, and about 10,000 cpm with the larger crystal. Or, to summarize them in a table:

Detector                 Reading (torbernite)    Background
Geiger counter           ~250,000 cpm            ~75 cpm
1”x1” sodium iodide      ~140,000 cpm            ~3,500 cpm
2”x2” sodium iodide      ~500,000 cpm            ~10,000 cpm

Interestingly, the meter that I’ve connected these detectors to has a faceplate that reads out in mR/hr as well as in CPM. I’ll explain the difference in a minute; for now, let it suffice to say that the dose rate is more important than the count rate if I’m trying to figure out how dangerous this rock might be.


Figure 3: the faceplate of my radiation detector. CPM is on the top and the other two scales are for dose rate.

And – bonus! – I’ve also got a different meter that measures dose rate as well. So let’s see what readings I get from all of these:


Both of these tables show an awful lot of variability – enough to make a lot of people wonder how we can ever know which numbers to use and what they mean.

Let’s look at the count rates – the first table – first.

In particular, take a look at the background count rates – a paltry 75 cpm for the Geiger counter and a whopping 10,000 cpm for the large sodium iodide detector, with the smaller sodium iodide in between. The reason for the difference here is that a Geiger tube is very sensitive to beta radiation and it’s not very sensitive at all to gamma rays; sodium iodide, on the other hand, does a good job of measuring gammas but not so much with beta particles. So looking at these readings tells us that background radiation mostly consists of gamma rays – which makes sense because beta particles can’t travel more than 20 feet through the air at best, and mostly not even 5 feet. So our two gamma detectors are picking up the background gamma rays, which mostly pass through the Geiger tube without registering. And, of course, the 2”x2” sodium iodide detector has a higher count rate because it has four times as much cross-sectional area as the smaller one.

Now let’s look at the readings from the rock itself – and this one surprised me, to be honest. The Geiger tube had a higher count rate than did the small sodium iodide…but why? One thing that comes to mind is that the Geiger counter is sensitive to beta radiation and the sodium iodide isn’t, which suggests that the torbernite is giving off both beta and gamma radiation – the Geiger tube measures almost all of the betas and some of the gammas, while the sodium iodide only sees the gammas. In fact, there’s probably some alpha radiation being emitted as well, but I don’t have a working alpha detector at the moment (mine’s out for repair) so I can’t check to see how much. As for the larger sodium iodide detector – we see the same factor-of-four difference between the baby detector and the larger one, so this is just due to the size of the detectors again.

When we look at the dose rates, though, the numbers are all over the place – none of them are dangerous, but since radiation dose affects our risk of getting cancer or radiation sickness, it would be nice to know which (if any) of these numbers we can trust. Part of the key here is to look at the ratios of the readings between the Geiger counter and the two sodium iodides. See anything familiar? Once again the dose rates are proportional to the count rates – the GM dose rate is twice that of the small sodium iodide and the large sodium iodide dose rate is four times that of the smaller detector. In other words, the meter is just taking the count rate and converting it to a dose rate. It’s easy to do – the problem is that it’s the wrong way to measure dose rate.

Here’s why.

Say I throw a piece of gravel at you – you’re upset so you throw a rock back at me. “Not fair!” I say. “Your rock’s a lot bigger than the gravel I threw at you.”

You reply “You threw one thing at me and I threw one thing back at you – we’re even.”

“But yours hurt me more than mine hurt you.”

And that’s the thing – the meter that I have these different probes connected to was calibrated with Cs-137, with a gamma energy of 662,000 electron volts (or 662 keV). In effect, my meter has been “told” that every time it sees a count, the radiation causing that count has an energy of 662 keV. But the radiation coming from my rock has a slew of energies – some have more energy than Cs-137 and most have less. And that’s why the readings are so varied – my meter has no idea how much energy is passing through it – it only sees the number of counts. And since radiation dose (and dose rate) are related to the amount of energy deposited in a material, this particular meter can only accurately measure radiation dose rate from Cs-137 and it’s going to be wrong for everything else. Or, to put it another way, I don’t trust any of the first three readings.
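
Here’s a minimal sketch of that “every count is 662 keV” problem. The calibration factor and the 200 keV average energy below are made-up, illustrative numbers – not the actual calibration of my meter – chosen only to show how the assumed energy skews the displayed dose rate:

```python
# A minimal sketch (not any real meter's firmware) of why a count-rate-to-dose
# conversion calibrated for Cs-137 misreads other gamma energies.
# Simplification: dose is taken as proportional to energy per count; detector
# efficiency and absorption-coefficient differences are ignored.

CAL_ENERGY_KEV = 662.0           # the meter was calibrated with Cs-137
CAL_FACTOR_MR_PER_CPM = 1.0 / 175_000   # hypothetical: 175,000 cpm of Cs-137 = 1 mR/hr

def displayed_dose_rate(cpm):
    """What the meter shows: every count is assumed to be a 662 keV photon."""
    return cpm * CAL_FACTOR_MR_PER_CPM

def rough_true_dose_rate(cpm, actual_mean_energy_kev):
    """Crude correction: scale the display by the actual mean photon energy."""
    return displayed_dose_rate(cpm) * actual_mean_energy_kev / CAL_ENERGY_KEV

cpm = 140_000   # the small sodium iodide reading from the rock
print(displayed_dose_rate(cpm), "mR/hr shown on the faceplate")
print(rough_true_dose_rate(cpm, 200.0), "mR/hr if the rock's gammas average ~200 keV")
```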

But the ion chamber – that’s different. An ion chamber responds to the amount of energy deposited in it rather than just counting interactions, so its readings account for the higher and lower gamma energies coming from my rock. And – even better – there’s a piece of plastic on the bottom of the meter!


Figure 4: my ion chamber; the case (left) shows the brown beta shield that can be slid down to measure beta dose; the beta radiation passes through the foil window on the bottom of the black ionization chamber on the right.

I know – exciting, right? But here’s the thing – when that piece of plastic is covering the thin metal window on the bottom of the meter it screens out the beta radiation that we know is coming from the rock; when the window is open, the betas can enter the chamber as well. The dose rate with the beta window open is 40 times as high as when it’s closed – which confirms what we concluded from the count rate measurements: there’s more beta radiation being given off than gamma. Cool, right?

It’s also good to know what these readings mean – what’s normal, what should be investigated, and what might hurt us. Let’s start with the easy one – count rate.

If I’m measuring count rate it’s usually because I’m looking for contamination – and contamination is only very rarely a health risk; it mostly affects whether or not we need to wear protective clothing or decontaminate ourselves, our equipment, or an area. So as long as the count rate I’m measuring isn’t high enough to call for decontamination, I don’t worry about it all that much. And unless the count rate is really high – in the hundreds of thousands of counts per minute – the contamination doesn’t pose much (if any) risk.

Most of the time, in a non-emergency setting, we want to keep contamination levels to a minimum. So anytime I find more than a few hundred cpm above background with a GM, or more than about 1,000 cpm above background with a 1”x1” sodium iodide, I’ll stop to clean it up (I don’t do contamination surveys with the larger detector because it’s too hard to see low levels above its background). But in an emergency – a nuclear reactor accident or a dirty bomb, for example – we can let people go with as much as 100,000 cpm on a Geiger counter before we need to start cleaning things up. And after the Fukushima accident, there were so many people who were contaminated that the Japanese raised their limits from about 10,000 cpm to over 100,000 cpm without adding any risk to the public.

With dose rate, I normally measure less than 0.1 mR/hr with my ion chamber – and more like 0.01 mR/hr with a suitably sensitive instrument. When dose rates get to about 2 mR/hr the public isn’t allowed to have unrestricted access – but nobody’s going to be harmed by this level of radiation. In fact, it’s not until the dose rates get into the tens of thousands of mR/hr that they start to pose a risk. Here’s a subjective summary:


The last thing to mention here is the units of radioactivity, what they mean, and when they start to become a concern.

The biggest thing to understand is that when we’re talking about radioactivity we’re talking about the rate at which the atoms in a source are decaying. By definition, one curie (the traditional American unit) of radioactivity corresponds to 37 billion decays every second; the SI unit, the becquerel, is one decay per second. This has nothing to do with the size of the source, by the way. One gram of radium (Ra-226) has one curie of activity – the same amount of radioactivity as three tons of the much longer-lived depleted uranium, the same as 100 micrograms of tritium, and the same as about one milligram of cobalt (Co-60). This means you can’t judge the risk a radioactive source poses just by looking at its size – one gram of Co-60 can give you a fatal dose of radiation in less than an hour, while a ton of depleted uranium is radiologically safe. The only way to judge the risk a gamma source poses is by making radiation measurements with your trusty ion chamber.
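
If you want to check those comparisons yourself, the standard specific-activity relation does the job. Here’s a quick sketch using rounded textbook half-lives (small differences from the figures above just reflect the rounding):

```python
import math

# A quick check on the curie comparisons above, using the standard specific-
# activity relation A = lambda * N = (ln 2 / half-life) * (mass * N_A / M).
# Half-lives and atomic masses are rounded textbook values.

AVOGADRO = 6.022e23
DECAYS_PER_S_PER_CI = 3.7e10
SECONDS_PER_YEAR = 3.156e7

def grams_per_curie(half_life_years, atomic_mass):
    """Mass of a pure radionuclide that holds one curie of activity."""
    decay_const = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    atoms_per_ci = DECAYS_PER_S_PER_CI / decay_const
    return atoms_per_ci * atomic_mass / AVOGADRO

print(grams_per_curie(1600, 226))     # Ra-226: ~1 gram
print(grams_per_curie(5.27, 60))      # Co-60:  ~0.0009 g (about a milligram)
print(grams_per_curie(12.3, 3))       # H-3:    ~0.0001 g (about 100 micrograms)
print(grams_per_curie(4.47e9, 238))   # U-238:  ~3,000,000 g (about three tons)
```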

Of course, you might be able to find a label that gives the activity of a radioactive source – or there might be a sign on the door to a room, shipping papers, or something like that. If you can find out the source activity, here’s what some of the numbers mean:


Putting it all together

So…let’s put this information to use!

  1. Say you’re doing a contamination survey using your trusty pancake GM and you get a reading of 2000 cpm. What should you do?

    This is clearly higher than background (remember, with a GM, background is normally around 50-100 cpm), but it’s not a danger to anybody. As a radiation safety professional, I’d be inclined to clean up the contamination if I could unless it were a large-scale emergency with other more pressing problems.

  2. During a routine radiation survey you notice radiation dose rates are around 1.5 mR/hr in a waiting room. Is this a concern?

    This dose rate is clearly elevated, but not enough to pose a risk to anybody. It’s also lower than the 2 mR/hr level that would call for restricting access for members of the public. On the other hand, radiation levels are higher than they ought to be – this warrants investigation to find out what’s causing the elevated rad levels. They should be reduced if possible.

  3. A technician tells you that an incontinent nuclear medicine patient urinated on the floor a half-hour after being injected with 10 mCi of Tc-99m. He’s measuring radiation dose rates of about 10 mR/hr with a pancake GM. What do you need to do?

    Tc-99m emits a gamma with much less energy than Cs-137, so the readings we measure with a Geiger counter are going to be higher than the actual dose rates. You need to bring an ionization chamber or an energy-compensated GM to the scene to find out what the actual dose rate is before you know what actions you need to take. Oh – and clean up the radioactive urine!

  4. You see a source lying on the ground and you’re able to find the instrument it fell out of. The label tells you that the source is 75 Ci of Ir-192. What should you do?

    Seventy-five curies is a fairly high amount of activity – that much Ir-192 will produce a dose rate of about 32 R/hr a meter away, which can cause radiation sickness in 2-3 hours. This source isn’t deadly over a short period of time, but it’s got to be treated with care. You’ll need to fall back from the source until dose rates drop to 2 mR/hr and establish a radiological boundary, evacuating everyone from within that boundary (don’t forget to survey on floors above and below the source if appropriate). If you have training in recovering radiography sources then you can attempt to retrieve it. If you don’t have such training, you’ll need to contact your regulators and the manufacturer so that they can retrieve the source.
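
As a quick check on the numbers in scenario 4, here’s a sketch that starts from the ~32 R/hr-at-one-meter figure above, takes roughly 100 R as the point where radiation sickness begins (an approximate, commonly used threshold), and applies the inverse-square law – ignoring shielding, scatter, and the size of the source – to estimate both the time to get sick up close and the distance to the 2 mR/hr boundary:

```python
import math

# A back-of-the-envelope check on scenario 4, using only the figure given above
# (about 32 R/hr at 1 meter from 75 Ci of Ir-192) and the inverse-square law.
# Shielding, scatter, and source size are ignored - this is a sketch only.

dose_rate_at_1m_R_hr = 32.0

def dose_rate_R_hr(distance_m):
    """Inverse-square falloff from the 1-meter reference reading."""
    return dose_rate_at_1m_R_hr / distance_m ** 2

# Roughly how long to accumulate ~100 R (about where radiation sickness starts)
# while standing 1 meter away:
print(100.0 / dose_rate_R_hr(1.0), "hours to ~100 R at 1 meter")   # ~3 hours

# Distance at which the dose rate falls to the 2 mR/hr (0.002 R/hr) boundary:
boundary_m = math.sqrt(dose_rate_at_1m_R_hr / 0.002)
print(round(boundary_m), "meters to the 2 mR/hr boundary")          # ~125 m
```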

Radioactive Gems and Jewelry

Radiation and radioactivity were discovered more than a century ago, and it didn’t take long to realize that they could burn the skin. From there it wasn’t too great a leap to realize that perhaps they could also be used to deliberately burn tissues – tumors – and the field of radiation oncology was born. A century ago, however, physicists had not yet created the artificial radionuclides that are the backbone of today’s nuclear medicine and radiation oncology – Cs-137, Co-60, Ir-192, and so on – they had only the naturally occurring radionuclides, primarily radium (Ra-226) and the nuclides into which it decays. Radium itself was hideously expensive, but it decayed into radon (Rn-222), which decayed further into isotopes of lead, polonium, and bismuth.

Radon being a gas, it could easily be extracted from the radium ore and then loaded into tiny gold capsules that were sealed – these “seeds” were then inserted into tumors in order to treat cancer. And to make sure that they had enough (in those very early days of radiation medicine) the doctors tended to order more seeds than they ended up using. The extras often ended up being sold to gold buyers, melted down, and frequently sold on to jewelry manufacturers…along with the radioactive lead, polonium, and bismuth they contained.

Fast-forward nearly a half-century to the 1960s when doctors began reporting patients with odd skin conditions that were eventually identified as radiation dermatitis – not necessarily radiation burns (those take a higher dose over a shorter period of time), but skin damage, pigmentation changes, and damage to the underlying cells; one patient even died of skin cancer that was likely caused by the radiation. While it took some time, physicians and public health officials came to realize that the skin damage was due to radioactivity in the jewelry they were wearing; gamma spectroscopy showed the radionuclides to be radon decay products and more detective work revealed the origin to be the decades-old gold capsules.

Once the source of the contamination was known public health officials let the public know what was going on and offered to survey any gold jewelry brought to them – especially antique gold. Of about 160,000 pieces of jewelry examined, 155 were found to contain radioactivity; 133 of these were turned over to the government for disposal and the other 22 were kept by their owners. Of the pieces collected, the majority dated back to the 1930s and 1940s, although one ring was engraved with the year 1910.

Two factors seem to have made the difference between those who developed radiation dermatitis and those who did not, as well as the varying degrees of severity among those afflicted: the amount of radioactivity that was present in the gold and the amount of time that it was worn; the type of jewelry played a role as well, albeit not as important a one as the other factors. A wedding ring, for example, would likely be worn continually for years or decades and, as a ring, would be in closer contact with the skin than, say, a brooch or a pendant – so a given amount of radioactivity in a wedding ring would be expected to produce more serious skin damage than the same amount in a brooch or a pair of earrings.

There haven’t been any cases of radiation injury from radioactive jewelry in several decades. That doesn’t mean every bit of contaminated gold has been accounted for; most likely any contaminated jewelry that remains is only lightly contaminated or is being kept as a family heirloom rather than worn frequently. In any event, whatever is left of this contaminated gold appears to pose little risk.

————–

A decade ago I became aware of another area in which radiation and jewelry cross paths – it turns out that radiation can cause some gemstones to change color, and some of these changes are for the better. Topaz, for example, can change from a fairly ordinary straw color to a much more attractive blue; diamonds can turn green and bluish-green (they can also turn yellow or brown, but that turned out to be due to heating of the gems placed in the beam of a particle accelerator), and other gemstones can turn still more colors. The way it works is that the color of a gemstone (or anything else, for that matter) is a function of the way light interacts with it – in gemstones it has to do with what are called “color centers,” and these color centers can be affected by exposure to ionizing radiation. Not only that, but different types of radiation and different irradiation periods can cause different kinds of color changes!

One example of this is blue topaz, which is typically exposed either to neutrons or to high-energy electrons. Neutrons weigh about 1,800 times as much as electrons and they cause more ionization within the crystal; they are also massive enough to jostle atoms around within the crystal structure, or even to be captured by an atom, causing it to become radioactive and decay into an atom of a different element. The much lighter electrons, on the other hand, don’t do nearly as much when they interact with the topaz crystal – they can cause ionization and minor changes, but that’s about it. So topaz that’s irradiated with neutrons ends up being a much darker blue than electron-irradiated topaz. Or, to put it another way, topaz that’s placed in a nuclear reactor core (a great source of neutrons) will be a deeper blue than topaz that’s placed in the beam of an electron accelerator.

Here’s the thing, though – slamming neutrons into atoms can make them become radioactive and, if the electron energy is high enough, so can electrons. So irradiating gems makes them more attractive, but it can also make them radioactive – the question is whether or not they become radioactive enough to pose a threat to the wearer.

This was enough of a concern that a number of studies were done to try to evaluate the threat (if any) these gems posed, including a few studies performed or funded by the Nuclear Regulatory Commission, as well as by some gemological organizations. And they all found the same thing – that the gems are radioactive, but not to the point of causing problems. And some of the reasons for this are different for different types of irradiation.

One reason is that the elements of which most gemstones are made don’t lend themselves well to becoming activated, so not much radioactivity is produced to begin with; on top of that, most activation products decay away fairly quickly, so it’s not too big a deal to store the gems until most of the induced activity is gone, and the traces of longer-lived radionuclides that remain are present in quantities too low to cause a problem. And there’s an additional factor that comes into play with electron-irradiated gems – unless the electrons have a lot of energy they won’t strike any nuclei hard enough to knock out a neutron or proton. Or, put another way, electrons can’t induce radioactivity unless they’re very high-energy – higher than what most accelerators used for gem irradiation produce.

OK – so I mentioned that there might be traces of radioactivity left in some of the gems, but that it’s not dangerous…and you might wonder how I can say that so confidently. Well, it turns out that there are traces of radioactivity in a lot of things – including the food we eat and the water we drink. In most cases, our food and water contain more of this natural radioactivity than irradiated gems do. I’ve made measurements on irradiated gemstones as well as on bananas and salt substitute (both of which contain naturally radioactive potassium-40), and it turns out that a bunch of bananas gives off more radiation than even a pound of irradiated blue topaz that’s been cut, mounted, and readied for sale.

————–

In addition to materials that have been made radioactive by people there are also some gems with natural radioactivity – primarily uranium and thorium and their decay products. But these, too, pose no risk to the wearers. If you’re interested in reading more about this topic, here are some links that might be useful. Some of these reports are a bit technical, but they all contain a great deal of useful information:

The Radioactive Decay Patterns of Blue Topaz Treated by Neutron Irradiation

Health Risk Assessment of Irradiated Blue Topaz (NUREG 5883) – Nuclear Regulatory Commission, 1992.

A History of Diamond Treatments – Overton and Shigley, 2008

Gemstone Irradiation and Radioactivity – Ashbaugh, 1998

Are You Afraid of Your Phone?

I got my first cell phone about 30 years ago – one generation ago in human time and five generations in transmission technology. And it seems that every few years – especially whenever a new transmission technology is developed – there’s a flurry of concerns about the harmful effects of cell phone radiation. So maybe this is a good time to take a look at the science behind cell phones and cancer.

The electromagnetic spectrum (Source: US EPA https://www.epa.gov/sites/default/files/2017-05/electromagnetic-spectrum_0.png)

Here’s where some of the misunderstandings begin – to a scientist or an engineer, “radiation” has a different meaning than it does to an ordinary person. When someone with a scientific background uses the term they’re speaking about energy that’s given off by one object and transmitted through space to another. So scientists and engineers talk about thermal radiation, radio and microwave radiation, visible light radiation, and even gravitational radiation – as well as x-ray, gamma-ray, cosmic, and alpha and beta radiation. The thing is, only the second group has enough energy to remove an electron from an atom – to cause an ionization – and it’s the ionization that’s the first step in causing cancer. This means that the first group of radiations (thermal and so on) cannot cause cancer, while the second group can.

The reason for this is actually fairly simple – it takes a minimum amount of energy to cause an ionization and, absent ionization, radiation can’t initiate the sequence of events that might lead one day to cancer. Anything with less energy can’t do the trick.

Think about trying to throw a ball onto the roof of, say, a school. If the roof is 40 feet off the ground then you have to throw the ball hard enough to reach 40 feet in the air. If you can’t throw the ball with enough energy to get 40 feet in the air it’s never going to land on the roof, no matter how many times you throw it. Electrons around atoms are the same – it takes a minimum amount of energy to strip an electron from an atom and nothing with less energy is going to cause an ionization. Radiation with less energy than ultraviolet isn’t energetic enough to cause an ionization – and cell phone radiation (including 5G) lacks the energy to ionize an atom.
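
To put numbers on the ball-and-roof analogy, here’s a short sketch that computes the energy of a single photon (E = h × f) for a couple of representative cell phone frequencies – the frequencies are typical values, not the specs of any particular phone – and compares them with the roughly 10 eV it takes to strip an electron from an atom:

```python
# A small sketch of the "throwing the ball onto the roof" argument in numbers.
# Photon energy E = h * f; the frequencies below are typical, not exact specs.

PLANCK_J_S = 6.626e-34
EV_PER_J = 1.0 / 1.602e-19

def photon_energy_ev(frequency_hz):
    """Energy of a single photon at the given frequency, in electron volts."""
    return PLANCK_J_S * frequency_hz * EV_PER_J

print(photon_energy_ev(2.4e9))    # ~1e-5 eV: a typical cell phone / Wi-Fi band
print(photon_energy_ev(28e9))     # ~1e-4 eV: a millimeter-wave 5G band
print(photon_energy_ev(2.5e15))   # ~10 eV: far ultraviolet, where ionization begins
# It takes on the order of 10 eV to strip an electron from an atom, so even
# the highest cell phone frequencies fall short by a factor of roughly 100,000.
```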

OK – so what about all the studies showing that, in spite of this, cell phones do cause cancer? And this is somewhat personal for me – not only do my wife and I, our kids, parents, and…well…our entire extended families use cell phones on a regular basis, but one of my relatives developed parotid gland cancer on the side of her neck that she normally holds her cell phone on. And then there’s the occasional paper in the medical literature as well – what about them?

The thing is, not a single one of these papers is conclusive. What I mean by that is that, for every paper that shows a slight increase in cancer risk among cell phone users, there are others that show no change at all. And the changes they do show are tiny – smaller than the normal variability in the data – which makes them not very convincing. Consider – say there’s a group of six people. On average, about half the people in the world are men and the other half are women. But say this group has four women and two men – 33% of the group is male and 67% is female. Is this significant? Should we start looking for a reason for this big shift in population statistics? Well…no. The change of a single person in a small group can cause what looks like a dramatic change in the statistics – one that might not be borne out as the group grows in size. Think of a larger group – 30 people, say – with one extra woman. Now we’ve got 16 women and 14 men (53% and 47% respectively). In an even larger group of 100, that one extra person brings the numbers to 51% and 49% – even less impressive. The moral of the story is that, in a small study population, a very small actual change can look impressive – more impressive than is warranted.
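
Here’s a tiny sketch of that small-numbers effect – how much the percentages shift when a single member of an otherwise even split lands on the other side, for groups of different sizes:

```python
# A tiny illustration of the small-numbers point above: one person moved from an
# even split barely matters in a large group but looks dramatic in a small one.

def split_with_one_swapped(group_size):
    """Percent split when one member of an even split is swapped to the other side."""
    majority = group_size // 2 + 1
    minority = group_size - majority
    return round(100 * majority / group_size), round(100 * minority / group_size)

for n in (6, 30, 100, 10_000):
    print(n, split_with_one_swapped(n))
# 6 -> (67, 33), 30 -> (53, 47), 100 -> (51, 49), 10000 -> (50, 50)
```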

Now think about a cancer that affects only, say, one person in 10,000. This means that if you’re looking at 10,000 people you’d expect to see a single case. So what if you see two such cancers in a group of people who drink orange juice – does that mean that orange juice causes cancer? Or could it be just a fluke – that one extra case, even of a rare cancer, isn’t that big a deal? In fact, it’s the latter; a single extra case – even if it doubles the expected number of cases – simply isn’t enough to show cause and effect.

One of the more recent studies seemed to show an increase in some very rare cancers among rats that were exposed to cell phone “radiation” – but this study suffers from the problems of small-number statistics; these were pointed out by internal reviewers when the study was first done. And, of course, there’s also the fact that cell phone “radiation” can’t ionize atoms, so it can’t initiate the process that leads to cancer – and the report’s author doesn’t provide any plausible explanation as to how it might. But there’s more than this.

One factor is that this study exposed the rats to very, very high exposure levels – far higher than any phone user would ever experience. And dose rate makes a difference – just as the rate at which you add water to a bathtub makes a difference. At a lower dose rate, the body can repair damage much more effectively than it can when its repair mechanisms are overwhelmed. And, of course, there’s the minor point that rats aren’t people and might not respond the same way we do to any exposure, which makes it hard to apply these results to humans as well.

But, then, the study also exposed the rats to the same dose across their entire bodies, which is something that doesn’t happen when we’re talking on the phone. If you’re holding the phone in your right hand then your right ear, part of the right side of your head, and maybe the right side of your neck are closest to the phone and will get the highest dose. The heart (where some of the rats developed tumors) is at least 10 times as far from the phone and will receive less than 1% of the dose of the head and neck – not to mention that the intervening tissue will reduce exposure even more by absorbing still more of the radiation. So for this study to be at all relevant to humans, our hearts would have to be amazingly sensitive to the effects of radiofrequency radiation – something that has never been noted (or even postulated).

The bottom line is that this report adds nothing compelling to the arguments that cell phones might cause cancer. Or, to put it another way, I’m still using my phone and I’m not worried about my kids using theirs. As long as they don’t text while driving, that is.

NORM Who?

My very first radiological consulting project was for a glass-making company; they were re-bricking one of their furnaces and a load of replaced refractory brick they were disposing of ended up tripping a radiation detector at the landfill. One of my colleagues nodded knowingly and said “Norm.”

I’d been with the company for more than a year and there was nobody by that name in our office so I was confused. “Norm? Norm who? Is he in one of our other offices?” My colleague started laughing.

“Not ‘Norm who’ – you should be asking ‘NORM what?’”

Turns out that NORM stands for Naturally Occurring Radioactive Materials, and that’s what the refractory brick contained – one of the components was a zirconium mineral and, because they have similar geochemistries, anywhere we find zirconium we’re going to find some uranium as well. And there’s not just the uranium – as uranium decays to stable lead it goes through over a dozen intermediate steps, so the refractory brick also contained traces of radioactive thorium, radon, radium, polonium, and more. This is what the landfill’s portal monitor was picking up. And since we’re so good at detecting radiation, the amount that gave a clear signal – strong enough to set off the alarm – was nowhere close to what it took to be harmful.

NORM was in the news not long ago, although it was easy to miss the reference (https://news.wfsu.org/state-news/2021-04-13/state-of-florida-plans-cleanup-of-old-piney-point-phosphate-plant), combined with information published by the EPA (https://www.epa.gov/radtown/radioactive-material-fertilizer-production).

Here, too, the geochemistry of uranium comes into play – phosphate rock tends to have elevated levels of the stuff. Interestingly, I saw evidence of this in aerial radiation surveys over my home state of Ohio, and again when I was doing radiation surveys at the racetrack in Indianapolis – the greenish-yellow area in the image here shows the higher radiation levels we detected.


Aerial Radiation Survey

It turns out that phosphate rock is used to make some types of fertilizer, and anywhere that this fertilizer is used has enough radioactivity to show up in radiation surveys. As with the refractory brick, this isn’t nearly enough radioactivity to pose a risk.

I’ve run across NORM in a lot of places – in Iranian hot springs, in coal seams, in a former mineral processing facility in NYC, in the North Dakota oil fields, and many more. In some cases – primarily fossil fuel deposits – the NORM is there because uranium is insoluble in water that lacks oxygen. Decaying organic material removes oxygen from water, so swamps and other places where stagnant water collects leaves and plants – deposits that turn into fossil fuels over millions of years – tend to collect higher levels of uranium over time. In other cases, a mineral is made of elements that are similar to uranium – the atoms are close to the same size and have similar chemical properties – so it’s easy for uranium to slip into the crystal structure of any of a number of minerals, including ores of niobium, vanadium, titanium, and the rare earth elements. This happens in our bodies too, by the way – many actinide elements, as well as radium, will slip into the crystal structure of our bones. And – getting back to fossil fuels briefly – radium and other radionuclides that come from the decay of uranium become trapped in the scale lining the pipes that carry oil and natural gas (as well as the brines that tend to accompany hydrocarbon deposits) from the depths to the surface, and in the pipes and tanks used to process and store these fuels.

The thorium decay chain (reference: https://upload.wikimedia.org/wikipedia/commons/thumb/2/25/Decay_Chain_Thorium.svg/1310px-Decay_Chain_Thorium.svg.png)

Now – consider a chunk of monazite (monazite is a rare earth element ore) that’s contaminated with thorium. Over time, the thorium decays to stability through a series of a dozen progeny radionuclides, every one of which is radioactive. So the monazite rock also contains isotopes of radium, actinium, polonium, bismuth, and more. In addition, monazite contains cerium, lanthanum, and other rare earth elements, along with phosphorus and oxygen. About 50% of the monazite consists of the rare earth elements for which it’s mined and the rest is phosphate, thorium, uranium, and oxygen (the oxygen combines with all of the metals in the rock).
So – when monazite is processed to remove the rare earths, the residue is only about half the weight of the original rock, but it contains all of the original radioactivity. Just by removing the rare earths, the remaining waste (called mill tailings) ends up with about twice the radioactivity concentration of the original rock. The NORM concentrations in the tailings were enhanced by the processing; what we now have is called TENORM (technologically enhanced naturally occurring radioactive material).

We can also find NORM in ordinary rocks. I’ve already mentioned why uranium tends to end up in coal and hydrocarbon deposits – for the same reason we also tend to see it in dark (organic-rich) sedimentary rocks. During the Second World War, the Swedes mined a black shale deposit to produce shale oil and used the residual shale to make cement for houses for the poor. Unbeknownst to them, the shale was emitting high levels of radon from the decay of NORM uranium and thorium, causing excessive radon exposure to those living in the homes. We tend to see the same thing – along with slightly elevated radiation levels at the surface – in many places where the bedrock consists of dark, organic-rich sedimentary rocks.

With igneous rocks, it’s just the opposite. The granite countertops in my apartment are a light gray, and I’ve been able to measure uranium, thorium, and potassium (along with traces of radium) in my kitchen. It turns out that, with igneous rocks, it’s the lighter-colored ones that have higher levels of radioactivity. That’s because uranium, thorium, and potassium atoms are all on the large side, so they go into the last minerals to crystallize – the minerals that form gray, pink, and red granites and other light-colored igneous rocks. Dark igneous rocks come from the mantle and are almost devoid of radioactivity. As an aside, this is why natural radiation levels are fairly low in Hawaii, Japan, Iceland, and other islands made of dark rocks that come from the Earth’s mantle. Oh – I should also mention that if an interior decorator recommends “black granite” to you…well…it’s not really granite. But, then, your decorator isn’t a geologist, and they know that it’s easier to sell “black granite” than diorite or basalt, which is what the rock really ought to be called.

Unfortunately, regulation of NORM and TENORM is a mess in the US, primarily because the Nuclear Regulatory Commission isn’t permitted to regulate the stuff. This means that it’s left up to the states to regulate – this, in turn, means that every state has its own take on the matter. Some states have regulations that are remarkably good, some have no NORM/TENORM regulations at all, and some have regulations that are just out-and-out bad – and there are places where you can find all of these within just a few hours’ drive of each other.

We can’t get away from NORM – it’s been here since before Earth was even a fully formed planet. The fact is, we live on a planet that’s just a little bit radioactive. But that’s not a bad thing – at the very least it means that every organism that’s ever lived on Earth has been exposed to radiation from the NORM that’s always been part of our planet – and, in the past, levels were higher than they are today. Our bodies – our cells – evolved to deal with this level of radiation damage, which is why NORM so rarely poses us a risk. So enjoy your granite countertops – don’t be scared of them.

Cosmic Rays

I’m a geek. I’ve gotten used to this fact, as have my children, my wife, and other family and friends. And it’s why, when I fly, I sometimes monitor radiation levels. This can make the person sitting next to me a little nervous, so I’ve learned not to talk about it much. Luckily my radiation detector has a cell phone app, so I can leave it in my carry-on bag and monitor it via Bluetooth from my seat…I just need to remember to turn off the alarm so my bag doesn’t beep annoyingly in the overhead storage bin.

What’s interesting is that, as we ascend after takeoff, radiation levels drop steadily. This part isn’t very surprising: on the ground most of the radiation I’m measuring comes from radioactivity in the rocks and soils, and as we gain altitude we get farther and farther from the surface, reducing the radiation coming from the rocks and soil. But at about 10,000 feet the readings stop dropping – interesting. And by 12,000 feet or so they start rising again – even more interesting. By the time we reach cruising altitude they’re several times higher than they were at ground level – not only that, but I’m also seeing many more neutrons than I saw at the surface. Curious.

What I was seeing was nothing new – except to me, the first few times I made these measurements. It was first noticed over a century ago, in 1911 and 1912, when Austrian physicist Victor Hess made a series of balloon flights with his own radiation instruments (albeit without Bluetooth or a phone app) and noticed the same thing I was to see a century later. Hess was more curious than I was, because he was the first person to see or hear of this effect, while I had the benefit of knowing about his work. Hess recognized the physics behind the falling dose rates early in the ascent, but the rising dose rates as he climbed still higher had him puzzled. What he finally realized was that he was seeing radiation coming in from outer space – cosmic rays. Then he just needed to figure out what they were.

Hess first thought that the Sun might be the source of this cosmic radiation. But when he arranged to make measurements during a solar eclipse he saw the same effect – while he still didn’t know their source, he could rule out the Sun – cosmic rays apparently originated from somewhere outside of our Solar System. Hess went on to be awarded the 1936 Nobel Prize in Physics (https://www.nobelprize.org/prizes/physics/1936/summary/) for this discovery.

(Victor Hess’s balloon flights of 1911-1912 – image: https://phys.org/newman/gfx/news/hires/2012/1_1911_1912_hess_ballon.jpg)

It turns out that I know more about cosmic rays than Hess did. That’s not because I’m smarter than he was (and goodness knows, I’m not!). Rather, it’s because I’ve been able to read over a century of research on the subject, much of it using instruments that Hess never even dreamed of. Not only that, but cosmic rays have been studied by physicists and astronomers – even by geologists – who have teased out details far beyond the science of Hess’ day.

One of the things we’ve discovered, for example, is that most of the cosmic radiation we’re exposed to originates elsewhere in our galaxy, in the form of high-energy particles blasted into space by exploding stars. These are the nuclei of atoms that have had all of their electrons stripped away and have sailed through space until they encounter our planet – they have so much energy that they punch through the Earth’s magnetic field and stream into our atmosphere. There, they’re likely to smash into an atom in the atmosphere, initiating a cascade of gamma rays and particles called a cosmic ray air shower (https://scied.ucar.edu/image/cosmic-ray-air-shower), some of which can create radioactive tritium (H-3) and carbon-14. Other particles can smash into airplane fuselages (where, among other things, they show up on my radiation detector) and still more filter down to sea level, giving us a radiation dose of 20-30 mrem annually.

Something else we’ve learned over the decades is that cosmic radiation accounts for about 10% of our annual exposure to natural background radiation – and that this remains relatively constant throughout the solar cycle (the decadal waxing and waning of solar activity). The reason is that, when solar activity is low, the relatively anemic charged particles from the Sun can’t penetrate to sea level, but the much higher-energy galactic cosmic rays can. When the Sun’s activity is high, on the other hand, the solar cosmic rays are more energetic and more of them penetrate deeply into the atmosphere, exposing us to more radiation from the Sun; at the same time, the stronger solar wind helps sweep the galactic cosmic rays out of the inner solar system, reducing our exposure to them. These two factors tend to cancel each other out, with the net result that our exposure to cosmic radiation remains relatively constant throughout the solar cycle.

We’ve also found out that cosmic rays can induce radioactivity in rocks that are exposed at the surface – cosmic ray air showers produce neutrons and when neutrons are captured by stable atoms they can become radioactive. My MS advisor made use of this fact – it turns out that you can tell how long ago a rock first became exposed to cosmic rays by studying the induced radioactivity and this can be used to determine the rate at which glaciers are retreating. By analyzing rocks from Antarctica, for example, my advisor was able to tell when rocks were last covered by glaciers; using similar analyses a friend of mine was able to tell when the glaciers melted back from his front yard in Kansas.

Another Kansan, astronomer Adrian Melott, has other speculations about cosmic rays – he thinks they might be linked to the way life has evolved on Earth. He notes that our Solar System bobs up and down through the galactic disk every 40-50 million years or so, and postulates that when we’re on “top” of the disk (the side facing the direction of our galaxy’s travel through intergalactic space) we might be exposed to more cosmic radiation than when we’re on the “bottom,” and that this extra smidgeon of radiation might be enough to trigger faster rates of evolution. And there does seem to be some correlation between the fossil record and the timing of our excursions above the plane of the galaxy…so he might have a point.


But then there’s the stuff we’re still trying to figure out….

On October 15, 1991, a cosmic ray observatory (the Fly’s Eye camera – https://en.wikipedia.org/wiki/High_Resolution_Fly%27s_Eye_Cosmic_Ray_Detector) in the Utah desert recorded the debris of a cosmic ray with a staggering amount of energy – in physics terms it was about 300 billion billion (3×10^20) electron volts; in more prosaic terms this was a single atom that packed the same wallop as a Little League pitcher’s fastball. Not only was this unprecedented, but it was also beyond anything science could explain.
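
Just to check that comparison, here’s a quick sketch converting the particle’s energy into joules and comparing it with the kinetic energy of a baseball thrown at Little League speed (the 58 mph pitch speed is my assumption, not a figure from the original report):

```python
# A quick check on the "Little League fastball" comparison - a sketch only;
# the pitch speed is an assumption (about 58 mph, or roughly 26 m/s).

EV_TO_J = 1.602e-19
cosmic_ray_energy_j = 3e20 * EV_TO_J           # the 1991 Fly's Eye event
print(cosmic_ray_energy_j, "joules")            # ~48 J

baseball_mass_kg = 0.145
pitch_speed_m_s = 26.0                          # roughly 58 mph
kinetic_energy_j = 0.5 * baseball_mass_kg * pitch_speed_m_s ** 2
print(kinetic_energy_j, "joules")               # ~49 J - about the same wallop
```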

The first thing that science was unable to explain was how an atom could have been endowed with so much energy. To begin with, even exploding stars – supernovae – lack the power to impart so much energy to a single atom, and scientists had trouble thinking of any other phenomenon that could do the trick. But then there was another problem – even if they could find a mechanism to accelerate an atom to so high an energy, that energy should have been whittled away while the atom was in transit across the galaxy, through collisions with the rare atoms of hydrogen or even by running into photons – for the most energetic particles in the universe, anything they encounter has a lower energy and will slow them down ever so slightly. As far as scientists could tell, there was no way particles with so high an energy could exist…and yet they did.

As things stand now there are a few candidates that might be able to boost particles to such energies – the super-massive black holes in the cores of quasars are one, as are the huge lobes of energetic gas near some galaxies that emit powerful beams of radio-frequency radiation. Another hypothesis reaches back to the earliest days of the universe – speculating that cosmic-scale shock waves from the epoch of galaxy formation might have accelerated particles to these energies. That last idea is the most ancient, but there are some that are even more exotic, including the decay products of supermassive particles formed by what are called topological defects in the fabric of space itself (and no, I don’t really know exactly what they mean by that either, but it sounds pretty cool). But we still have to figure out how these particles can hold on to their energies for so long – especially when we consider that they seem to originate from outside our own galaxy.

(A radio galaxy – the jets that form the large lobes might be one source of ultra-high-energy cosmic rays. Image: https://www.nasa.gov/images/content/709514main_hs-2012-47-a-print.jpg)

Interestingly, these highest-energy particles provide a degree of comfort to the designers of our highest-energy particle accelerators. Every now and again when a new accelerator pushes to higher energy levels there’s a concern that it might produce new types of particles (https://phys.org/news/2014-02-chances-particle-collider-strangelets-earth.html) or miniature black holes or that it might even damage the structure of space – any of which could be dangerous. So physicists examine the matter – to date they’ve always decided that the new accelerator was safe to operate. Of course, they might have made a mistake, or there might be gaps in our understanding of the physics. But then they remember the ultra-high energy cosmic rays – boosted by mechanisms we still don’t understand to energies a million times greater than any accelerator we can yet make. The fact that the Earth still exists tells us that even these highest-energy particles can’t do extensive harm, which tells us that our accelerators are most likely safe to operate.

Sometimes when I’m flying and looking at my radiation instruments I think about a star that exploded somewhere in space – maybe in our galaxy, maybe in a galaxy halfway across the universe – and that spat out a flurry of atoms at incredible velocities. I think about these atoms speeding through space for eons – maybe since before our planet was even born – deflected time after time by the magnetic fields of stars, of clouds of interstellar gas, and of the fields woven into the fabric of our galaxy and the spaces between the galaxies. And after so long a journey, to come to rest in my radiation detector…just seems too mundane an end for so long and exotic a trip.

Oklo’s Natural Nuclear Fission Reactors

A bit over six billion years ago a star exploded somewhere in our galaxy – we can’t be sure where the star was so long ago, but we know that it was within shouting distance (on a galactic scale) of a cloud of gas and dust. A shock wave from the supernova slammed into the cloud, compressing it and setting in motion a series of events that would, millions of years later, lead to the birth of a star and the formation of a number of planets – we’re currently all sitting on the surface of one of those planets.

When stars are active they produce energy by fusion – first of hydrogen, then of helium, and working up to iron. But fusing iron doesn’t produce energy, it sucks it up – when a star starts trying to burn iron it collapses, forming a neutron star or a black hole while the outer layers rebound, blowing the rest of the star apart into space. During this explosion, every element heavier than iron is formed, including gold, lead, and uranium. This debris emerges as a shock wave that can trigger the collapse of existing clouds of gas and dust; whatever is left will eventually become part of such a cloud itself. Thus, the cloud from which the Solar System formed contained uranium, and the shock wave that caused its collapse contained still more.

Over time, as the Earth cooled it began to solidify, the first rocks forming a thin skin that floated like pond scum atop the underlying magma as the magma itself circulated, driven by convection. As the world churned, the various elements began sorting themselves out, with the large atoms (including uranium) partitioning into the solid rocks of the nascent crust, iron and nickel sinking to form the core, and the mantle lying in between.

Over the eons, the rocks that contained the uranium began to weather and the grains bounced their way down the stream beds, collecting in areas where the speed of the water slowed, just as grains of gold do. But the early atmosphere lacked oxygen, as did the water, and since uranium is insoluble in anoxic water the grains just sat there. And then, about two billion years ago, that changed – photosynthesis produced oxygen that flooded the Earth’s atmosphere and saturated the water, and the uranium began to dissolve, precipitating out again in places where chemical reactions had again stripped oxygen from the water.

One such location was in a part of the planet that would one day become part of the nation of Gabon. Here, the water was percolating through sandstone and depositing nodules of uranium in the vicinity of some hydrocarbon deposits, bathed with groundwater. And this is where things start to get interesting!

Nuclear reactors consist of clumps of uranium surrounded by water – when a uranium atom fissions it emits (among other things) neutrons. The neutrons bounce off the hydrogen atoms in the water molecules and slow down, just as a cue ball slows down as it bounces off other balls of the same size – this is important because slow neutrons are more likely to be absorbed by a uranium atom and to cause it to fission. If one of those neutrons is absorbed by another uranium atom and causes a second fission, the reactor is said to be critical. In our sandstone, the uranium nodules were surrounded by water…just as in a nuclear reactor. And, just as in a nuclear reactor, the water slowed down the fission neutrons to the point where they could be absorbed by another uranium atom.
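If it helps to see what that second-fission condition means in numbers, here’s a toy sketch – the starting neutron population and the multiplication factors are made up purely for illustration:

    # Toy model: each "generation," the neutron population is multiplied by k,
    # the average number of new fissions caused by each fission. Numbers are arbitrary.
    def neutron_population(k, generations=10, start=1000):
        history = [start]
        for _ in range(generations):
            history.append(round(history[-1] * k))
        return history

    print("subcritical   (k=0.9):", neutron_population(0.9))   # dies away
    print("critical      (k=1.0):", neutron_population(1.0))   # holds steady
    print("supercritical (k=1.1):", neutron_population(1.1))   # grows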

Uranium found in nature is made up of two main isotopes – one has a mass of 235 atomic mass units (AMU) and the other is slightly heavier with a mass of 238 AMU. The lighter atoms of U-235 fission fairly easily, but they account for less than 1% of the uranium atoms found in nature – too few to sustain a criticality. That’s why we have to enrich uranium – to increase the amount of U-235 to somewhere around 3-6% of the uranium atoms. And this is the last piece of the puzzle.

U-235 and U-238 have astonishingly long half-lives – about 4.5 billion years for U-238 and a mere 700 million years for U-235. This means that the fissile U-235 decays more rapidly than does U-238; calculating the amount of each that was present 2 billion years ago reveals that natural uranium at that time was around 3.5% U-235…at the lower end of the band of concentrations we have in reactor fuel today. Thus, in one place there were lumps of uranium containing enough U-235 to sustain a chain reaction, sitting in a water-saturated sandstone formation. And about 1.8 billion years or so ago, enough uranium had precipitated that the occasional spontaneous fission sparked a chain reaction – the uranium ore deposit had become a reactor.
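That figure is easy to check yourself – just run today’s isotope mix backward in time using the two half-lives. The present-day abundance and half-life values below are the commonly quoted ones, rounded a bit:

    import math

    # Run today's natural uranium isotope mix backward in time using the half-lives.
    U235_HALF_LIFE = 0.704e9    # years
    U238_HALF_LIFE = 4.468e9    # years
    U235_TODAY = 0.0072         # ~0.72% of natural uranium atoms today are U-235

    def u235_fraction(years_ago):
        """Fraction of uranium atoms that were U-235, 'years_ago' years before present."""
        u235 = U235_TODAY * math.exp(math.log(2) * years_ago / U235_HALF_LIFE)
        u238 = (1 - U235_TODAY) * math.exp(math.log(2) * years_ago / U238_HALF_LIFE)
        return u235 / (u235 + u238)

    for t in (2.0e9, 1.8e9, 1.5e9):
        print(f"{t/1e9:.1f} billion years ago: {u235_fraction(t)*100:.1f}% U-235")

Running this gives a bit under 4% at two billion years ago, dropping to around 2.5% by 1.5 billion years ago – which is why the window for a natural reactor eventually closed.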

There was only a single window of time in which this could have happened. Before about 2.2 billion years ago there wasn’t enough oxygen in the environment to mobilize the uranium, and after about 1.5 billion years ago there was too little U-235 to sustain a chain reaction. But for about 700 million years the Earth could make it work.

Fast-forwarding to the 1970s, French geologists located a rich uranium ore deposit in Gabon that they started to extract at the Oklo uranium mine. As the uranium was extracted and enriched the radiochemists carefully tested the uranium enrichment at various stages of the process and were surprised to see that there was less U-235 than they expected. Through some impressive investigatory work they came to realize that what they were mining was the remnants of a natural nuclear reactor – the first (and, to date, only) ever found. And in the 40-odd years since it was discovered further study has helped to tease out some of the details of its operation.

As the reactor operated the fissions heated up the water percolating through the sandstone; as it warmed up it became less dense – it might even have boiled off entirely from time to time. When this happened, the neutron moderation ground to a halt and the reactor shut down. When the rocks and the water cooled the chain reaction restarted and the reactor would fission some more. It seems to have continued in this vein for about 100,000 years. And that leads us to another intriguing detail – with some implications for our storage of nuclear waste.

The natural nuclear fission reactors of Oklo: (1) Nuclear reactor zones (2) Sandstone (3) Uranium ore layer (4) Granite

We have a very good understanding of how fission works and what’s produced when uranium atoms split. And when physicists examined the Oklo deposit in detail, they realized that virtually all of the fission products formed during the millennia of operation are still present in the rock – in spite of being located in porous and fractured rocks that were saturated with water for the better part of 2 billion years. Such a location would never be approved for radioactive waste disposal today, yet, in spite of the rocks’ unsuitability, the fission products are still there. With no planning, no engineering, and a lousy (by our standards) location, Nature managed to store the waste safely for eons – this bodes well for our ability to store radioactive waste in a well-designed and well-constructed disposal site located in an impermeable rock formation.

And, interestingly, the conditions under which the reactor formed and the rock formation that hosted it are hardly rare…what’s rare is the preservation of such a formation for so long. Earth might once have been littered with natural reactors and we might never know. And think about it – uranium forged in a dying star flew through space, riding the shock wave that caused the collapse of the pre-solar nebula (itself the detritus of other dying stars), and became part of the Sun and the Earth. There it sat, gradually moving into the continental crust, eroding and collecting in streams…and waiting for oxygen levels to increase to the point where it could form an ore, and waiting until there was enough uranium in a suitable rock formation to achieve criticality. We don’t know how many times it might have happened on Earth. But we do know that it happened once – and that’s pretty cool.

China’s Taishan Nuclear Power Plant

Why we should care about noble gases

So a friend sent me an email on Monday (June 14, 2021) that linked to a news story reporting that China’s Taishan nuclear plant seemed to have elevated levels of noble gases in the reactor coolant. This caused some mental alarms to sound – muted at this point, but there nevertheless. But the reason for these alarms isn’t very obvious to the vast majority of the population – and why noble gases should cause any sort of reaction calls for a little explanation. So let’s see if we can figure out where these noble gases come from and what they might portend. But to do that we’ve got to get into things like how fission works, how reactor fuel is made, and how we can use this knowledge. So…let’s get started!

Where noble gases come from

When uranium atoms fission they split into two radioactive atoms that are called fission products (also called fission fragments). These aren’t equal in size – most are clustered around masses of about 95 and 135 atomic mass units (AMU). And as it turns out, two noble gases (krypton and xenon) fall into these peaks. This means that fission produces (among other things) radioactive isotopes of these noble gases, which accumulate in the fuel as the reactor operates. Most have relatively short half-lives, but some (such as Kr-85) stick around for a while. Luckily they’re normally trapped in the fuel matrix – we’ll talk about that next.

How reactor fuel is made

Reactor fuel is made of uranium oxide that’s compressed into a pellet roughly a centimeter – less than half an inch – in diameter and about the same length:


These fuel pellets are loaded into fuel rods a bit over a dozen feet in length, clad with an alloy of zirconium, a tough and corrosion-resistant metal. These fuel rods are then assembled into fuel bundles that are arranged in the reactor core in a pattern that will sustain a critical reaction while permitting the fuel to be cooled, and that leaves room for neutron-absorbing control rods to be inserted to help control the chain reaction. As long as the cladding remains intact the fission products are contained safely in the fuel pellets. But if the cladding cracks or otherwise becomes compromised then these fission products can be released into the reactor coolant. And the cladding isn’t made of pure zirconium – the process of making it and loading it with fuel leaves traces of uranium (called “tramp uranium”) in the cladding.

What it all means

Radioactive krypton and xenon are not found in nature, and they’re certainly not found in the ultra-purified water circulating through the reactor plant. So if we find evidence of these in the reactor coolant, it tells us that there might be a crack of some sort that’s letting fission products out of the fuel and into the coolant.

But here’s the thing – the tramp uranium is also fissioning, and since it’s in the cladding some of the fission products can shoot out of the cladding and into the coolant. This means that there are always traces of fission product noble gas in the reactor coolant. So simply finding them in the coolant doesn’t necessarily mean that there’s a problem with the fuel – before we can determine that, we need to know how much noble gas is in the coolant and how the levels we’re measuring differ from what we normally see.

Something else we have to be aware of is that the amount of this noble gas in the coolant depends on the reactor power – if reactor power increases then we see more in the coolant than we do at low powers. But even more than that – since these radionuclides don’t decay immediately, they can stick around for hours after reactor power drops. So we need to calculate a “power-corrected” fission product activity to see how what’s measured compares to what we expect to see. The power-correction calculations aren’t necessarily highly complex; at the same time, it’s easy to make a mistake, especially if power is changing frequently. On the nuclear submarine I was on power changed frequently – every time we changed speed – and commercial reactor plants change power as well, responding to changing electrical demand during the day.
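Just to make the idea concrete, here’s a deliberately over-simplified sketch of what a power correction might look like: treat production of a fission product noble gas as proportional to reactor power, let the activity decay with its half-life, and compare the model’s prediction against the lab result. The power history, the production constant, and the “measured” value below are all invented for illustration – real plant procedures are more involved than this.

    import math

    # Over-simplified power-correction sketch. Production of a noble gas (Xe-133 here)
    # is assumed proportional to reactor power; the activity decays with its half-life.
    # The production constant, power history, and "measured" value are all invented.
    XE133_HALF_LIFE_H = 125.9                     # Xe-133 half-life in hours (~5.25 days)
    DECAY_PER_HOUR = math.exp(-math.log(2) / XE133_HALF_LIFE_H)
    PRODUCTION_PER_PCT_POWER = 0.5                # arbitrary activity units per hour at 1% power

    def expected_activity(hourly_power_pct, activity=0.0):
        """Step an hourly power history through the production/decay model."""
        for power in hourly_power_pct:
            activity = activity * DECAY_PER_HOUR + PRODUCTION_PER_PCT_POWER * power
        return activity

    history = [100.0] * 48 + [30.0] * 12    # two days at full power, then a down-power
    predicted = expected_activity(history)
    measured = 2600.0                       # pretend lab result, same arbitrary units
    print(f"Predicted: {predicted:.0f}  Measured: {measured:.0f}  Ratio: {measured/predicted:.2f}")

The point isn’t the specific numbers – it’s that you can’t interpret a single coolant measurement without knowing the power history that produced it.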

What’s the situation at Taishan?

And that brings us to Taishan. The Taishan reactors were built by a consortium that includes the French utility EDF (Électricité de France), Framatome (a French design and construction firm), and the Chinese government. Framatome got word that the concentrations of fission product noble gases were higher than expected and asked the US government for help in interpreting the laboratory results.

The problem is that very little information has been released to date. The Chinese government notes that fission product noble gases are normally found in reactor coolant, which is true, and that the levels that have been found are more or less normal. The problem is that we don’t know what the normal levels are for the Taishan reactors, we don’t know the levels that have been found, and we don’t know the power history, so we can’t do the power corrections. So the information that’s available isn’t sufficient to know if the noble gases are really elevated or if they just seem that way. And without knowing that, we don’t know how significant these lab results might be.

What comes next?

This is another question that we can’t really answer at this time, but there are a few possibilities.

If there really is a defective or compromised fuel rod then we’ll likely continue to see noble gas concentrations rise for a while, after which they should stabilize. We might also start to see other radionuclides show up, especially if the crack or defect gets worse over time. And the radionuclides we see ought to follow a predictable pattern – first we can expect to see volatile elements such as iodine and cesium, which are in the fission product peak close to 135 AMU. This is what I saw when I was in the Fukushima area after the accident there – I-131, Cs-134, and Cs-137 were all present, and they were exactly what I expected to see (the device I was using wasn’t the correct instrument to measure noble gases).

At the moment we can be reasonably confident that the reactor isn’t melting down – if that were the case then we’d see a wider variety of radionuclides, including some that have higher melting and vaporization temperatures such as strontium, molybdenum, and so forth. The fact that these have not been reported suggests that the reactor is not suffering a meltdown.

I should also note that fuel element defects are not unheard-of – they don’t happen frequently, but they do happen and they’re not necessarily a disaster. In fact, they happen often enough that the International Atomic Energy Agency has even written a document titled Review of Fuel Element Failures in Water-Cooled Reactors (https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1445_web.pdf). On the other hand, if there’s a problem with a fuel element then the utility will have to do its best to limit further damage to the fuel, and it might need to try to locate the exact fuel rod that’s damaged so that it can be removed and replaced. This can be a long and expensive process – if the defect isn’t too bad then the best course of action might simply be to operate the reactor carefully until the next refueling, when the damaged fuel can be removed and replaced with fresh fuel.

The bottom line

At the moment we just don’t know very much, which means that there’s far more speculation than fact. It is entirely possible that the levels of fission product noble gases that so concerned Framatome are actually no more than what’s to be expected for the Taishan reactors. It’s also possible that there is a fuel element defect similar to what a number of other reactors have experienced. Or this could be the prelude to something more serious – there’s just no way to know. But it seems fairly reasonable to assume that, as of now, the reactor is not melting down, or we’d be seeing a lot more than noble gas in the coolant.

So at the moment, I’m interested – but I’m not worried. At the same time, I’m going to keep looking for more information to see what else I can learn.

My Day With Radiation

So…I’m a radiation safety professional, which means that I have a bunch of radiation detectors at my home. And every now and again I turn on my meters to see what they read – sometimes I’m teaching a class via Zoom, sometimes I’m checking to make sure the instruments are working properly, I might be checking my own radioactive materials that I use for teaching, or sometimes I’m just curious. The other day I was making some measurements and noticed they were a little higher than I’m used to seeing, and it got me thinking about all the ways I encounter radiation on a regular basis. And, being a writer, it occurred to me that it might be worthwhile to share with you the sorts of things I run across.

We can start with natural background radiation – every minute of every day we are all exposed to radiation from nature. Potassium, for example, is vital to the proper operation of our bodies (including our hearts and other muscles)…and about one potassium atom in 10,000 is radioactive, exposing us to radiation from within our own bodies. Not only that, but we also have small amounts of radioactive carbon and hydrogen in our bodies – these are formed by cosmic ray interactions in the upper atmosphere and they filter on down to sea level where we breathe them, drink them, and eat them. Incidentally, a colleague of mine once calculated the radiation he received from potassium in his wife’s body due to, as he put it, “spending about 25% of his time at a distance of less than 1 meter from her” (when I asked him about installing lead shielding, he pointed out that the toxicity of lead would be more dangerous than that extra radiation exposure).
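If you’re curious roughly how much radioactivity that in-body potassium amounts to, the estimate is a quick one – the 140 grams of potassium used below is a commonly quoted round number for an adult, not a measurement of anyone in particular:

    import math

    # Rough estimate of K-40 activity in an adult body. The 140 g of total potassium
    # is a commonly quoted round figure, not a measurement of any particular person.
    AVOGADRO = 6.022e23
    K40_ABUNDANCE = 1.17e-4                  # ~0.0117% of potassium atoms are K-40
    K40_HALF_LIFE_S = 1.25e9 * 3.156e7       # ~1.25 billion years, in seconds
    K_MOLAR_MASS_G = 39.1                    # grams per mole of natural potassium

    potassium_g = 140.0
    k40_atoms = (potassium_g / K_MOLAR_MASS_G) * AVOGADRO * K40_ABUNDANCE
    activity_bq = k40_atoms * math.log(2) / K40_HALF_LIFE_S

    print(f"K-40 activity in the body: about {activity_bq:.0f} decays per second")

That works out to a few thousand potassium atoms decaying inside each of us every second – and it’s been that way for our entire lives.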

The potassium in bananas, salt substitutes, and other high-potassium foods gives us a little radiation as well – as do the traces of radium found in Brazil nuts. Add to that scant amounts of uranium, thorium, radium, and a few other natural heavy elements that lodge in our bones (mostly from breathing and ingesting dust) and we get about 40 mrem every year from radioactivity that’s a part of our bodies.

There’s also cosmic radiation – some from our Sun, but most of it originating in exploding stars elsewhere in our galaxy. In fact, every time I fly I can see cosmic radiation exposure increase as we climb to cruising altitude – in 2019 I flew from NYC to Seoul, South Korea on a flight that took us over the North Pole, and I saw cosmic radiation levels climb even higher as we flew increasingly northward.

Then there’s still more radiation from the rocks and soils as well as from things made from rocks and soil (granite countertops, bricks, concrete, and so forth) – these each account for just under 30 mrem annually. And the radon emanating from the ground exposes us to another 200 mrem a year, although this is variable, depending on the amount of uranium in the soil and the underlying bedrock. All told, we get about 300 mrem (more or less) from natural sources and from things that are built or made of natural materials. And that’s just the start!

In my apartment I’ve got a lot of radiation sources – some of these are pretty common, some are not, but I don’t need to have a radioactive materials license for any of them. There’s the granite countertop that my landlord installed, for example (granite contains potassium as well as uranium and thorium), as well as the brick my building is made of (brick is made of clays that often contain potassium). I’ve got my collection of radioactive rocks and minerals as well – I picked up most of these at rock and mineral shows or shopping online – and I also have a bunch of consumer products, most of which I bought online. Thoriated welding electrodes, “Vaseline glass” and Fiestaware plates colored with uranium, a stainless-steel soap dispenser contaminated with radioactive cobalt, and a few other things.

There used to be even more radioactive consumer products than this – I recently got a “Revigator” that’s almost 75 years old. The Revigator is a ceramic crock that’s lined with what looks like concrete…except that the concrete is impregnated with radium-bearing rocks. The premise was that people would fill it with water and, overnight, the radium would “invigorate” the water with energy that, when drunk the next day, would improve one’s health. I’ve only made a few measurements on my new acquisition, but it looks like the most radioactive thing in my collection. Having said that, it still gives off too little radiation to pose a risk to me – especially since it sits about six feet from my desk. Other “back in the day” sources of radiation were the old cathode-ray television sets and computer monitors – they never gave off enough radiation to cause problems, but they did give off radiation.

Interestingly, a few months ago I turned on one of my radiation detectors and noticed that dose rates were higher than I’m used to seeing. At the time I was borrowing a gamma spectroscopy device from a colleague – I identified the nuclide as I-131, which is commonly used for treating thyroid cancer and other thyroid diseases. My guess is that one of my neighbors was having thyroid problems – I probably could have figured out which one by checking the walls, floor, and ceiling…but decided to leave my neighbors with a bit of privacy, especially since the dose rate wasn’t at all high enough to be a concern (for me or for them). Of course, nuclear medicine is hardly rare – when I was working for the police (as a civilian scientist) I made hundreds of radiation surveys, both on the ground and from the helicopter, and we picked up nuclear medicine patients on our instruments all the time.

This is a photo of my radiation detector display when we were flying circles over the Brooklyn Bridge and the East River. The high readings showed up when we were over the bridge.

Getting back to building materials, granite’s a big one – due to the geochemistry of uranium, thorium, and potassium (and due to the way that minerals crystallize in magma chambers – https://en.wikipedia.org/wiki/Bowen%27s_reaction_series) many light-colored igneous rocks, including the gray, pink, and red granites, have more radioactivity than many other types of rock. Enough, in fact, to sometimes set off radiation alarms for ground-based surveys and to show elevated dose rates from the air. Flying over the granite Brooklyn Bridge, we always saw higher readings than when we flew over the East River; flying over cemeteries gave us higher readings due to all of the granite headstones.

I’ve also been called on to respond to other sources of radiation – loads of ceramic tiles, for example, that were coated with a glaze containing uranium for the bright colors (mostly yellow and orange) it could produce. And then there were the times we detected radiation from industrial radiography – using radioactive sources to take images of pipes, welds, structural steel, and the like. Anyplace with a lot of construction and a lot of welding is likely to have radiography taking place on a regular basis. Here, too, the radiation levels aren’t nearly high enough to cause problems, provided the radiographer is doing their job properly – in the US and Europe that’s likely to be the case, but there have been radiography accidents in a number of nations over the years.

I teach a lot of classes on radiation safety – many of my students work in industries that use radioactive sources to gauge the levels of tanks or to control various manufacturing processes. If you put anything between a radiation source and a detector, the radiation levels drop – the more material, the more the levels go down. So radiation levels from a source at the top of one side of, say, a tank filled with caustic chemicals will suddenly drop when the tank fills too high, letting operators (or automatic systems) know it’s time to stop filling the tank; a source at the bottom of the tank will keep the tank from emptying out and possibly ruining a pump. Similarly, I’ve seen radioactive sources used to check the levels of beer bottles on an assembly line, to control the thickness of paper or steel, to check the density of soil, even to detect clogged conveyor belts at a gold mine in Nevada. While none of these expose me on a daily basis, they’re examples of how radiation and radioactivity are used on a daily basis in the nooks and crannies of industry.
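The physics behind those level and thickness gauges is simple exponential attenuation – the more material between the source and the detector, the lower the count rate. Here’s a minimal sketch; the source strength, attenuation coefficient, and alarm threshold are made-up numbers chosen only to show the behavior, not values from any real gauge:

    import math

    # Minimal level-gauge sketch: the detector count rate drops exponentially as
    # liquid rises into the beam. All numbers here are invented for illustration.
    OPEN_BEAM_CPS = 10_000.0     # counts per second with nothing in the beam
    MU_PER_CM = 0.08             # effective attenuation coefficient of the liquid
    ALARM_FRACTION = 0.20        # call the tank "full" below 20% of the open-beam rate

    def count_rate(liquid_cm):
        """Count rate with a given thickness of liquid in the beam path."""
        return OPEN_BEAM_CPS * math.exp(-MU_PER_CM * liquid_cm)

    for thickness in (0, 5, 10, 20, 40):
        rate = count_rate(thickness)
        status = "FULL - stop filling" if rate < ALARM_FRACTION * OPEN_BEAM_CPS else "ok"
        print(f"{thickness:3d} cm of liquid in the beam: {rate:7.0f} cps  {status}")

The same exponential fall-off is what makes thickness gauges for paper and steel work – just run the calculation in reverse to infer thickness from the count rate.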

In fact, radiation is present in all sorts of society’s nooks and crannies, whether it’s the natural radiation we’re exposed to in the mountains, in our basements, at high altitudes, or from the bunch of bananas we stash in our kitchens; in our workplaces and in the workplaces of others; or in the hospitals and clinics and contained within the patients who have visited them.

What’s interesting is that people are exposed to more or less radiation depending on where they live, where they work, what they buy, what they do for a living, and so forth…but these differences don’t seem to affect cancer rates. This suggests that the radiation we run into on a daily basis isn’t likely to hurt us.