Radiation Safety & Health Physics Blog

Let’s Get Critical!

In some of my other posts, I’ve explained what’s meant by “criticality” with regard to nuclear reactors and that it’s not necessarily a bad thing. But what I didn’t really get into were some of the nuances of criticality – what’s meant by the terms critical mass and critical geometry, the difference between criticality in a nuclear reactor versus a nuclear weapon, and so forth. So this seems like a good time to take a deeper look at criticality – starting with a quick refresher on the basics.

To start with, the term “criticality” is, I guess you could say, non-judgmental. By that, I mean that the term is neither inherently good nor bad – it’s simply descriptive. When a uranium atom fissions it emits two or three neutrons; if exactly one of those neutrons (on average) goes on to cause a second fission then the reaction is said to be “critical.” If, on average, more than one goes on to cause a fission then the reaction is supercritical; fewer than one means that the reaction is subcritical.
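This bookkeeping is usually expressed as a multiplication factor, k – the average number of new fissions caused by each fission. A toy sketch of what the three regimes look like (the starting population and number of generations here are arbitrary, purely for illustration):

```python
def fission_generations(k, start=1000, generations=10):
    """Track the fission population generation by generation
    for a given multiplication factor k."""
    population = start
    history = [population]
    for _ in range(generations):
        population = population * k   # each fission spawns k new fissions on average
        history.append(population)
    return history

# k < 1: subcritical   -- the chain reaction dies out
# k = 1: critical      -- the population holds steady
# k > 1: supercritical -- the population grows
for k in (0.9, 1.0, 1.1):
    final = fission_generations(k)[-1]
    print(f"k = {k}: 1,000 fissions -> {final:,.0f} after 10 generations")
```

Even a k only slightly above or below 1 compounds quickly over many generations, which is why reactor operators talk about criticality so carefully.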

A critical chain reaction

In the setting of a nuclear reactor, supercriticality means that the reactor is starting up (increasing power), subcriticality means that the reactor’s shutting down (or reducing power), and criticality is its normal operating state. A reactor can be critical and producing barely enough energy to heat a cup of coffee…or to light up a city, or anything in between. When we were starting up the reactor on the submarine I was stationed on, the announcement “the reactor is critical” was met with a yawn (if it was an early-morning startup) and a notation in our logs.

Now – say, to pick a number, that a neutron has to travel an average distance of one inch before it can be absorbed by a second uranium atom, causing it to fission. If that’s the case, is it possible for a ball of uranium an inch in diameter to sustain a chain reaction? Well…no – because most of the neutrons will be formed less than an inch from the surface and many of those neutrons will escape from the ball of uranium first. In order for a ball of uranium to achieve criticality, it has to be large enough that most of the neutrons emitted will remain within the uranium so they can go on to cause a fission. For weapons-grade U-235 that turns out to be a sphere a little bigger than a softball, weighing a little over 100 pounds – this is the critical mass for U-235 as a bare sphere of uranium metal. Changing the density, the chemical composition, or the uranium enrichment will change the critical mass, as will surrounding the sphere with a material that reflects escaping neutrons back into it.

If the fissionable material is in a flat plane, most of the neutrons will escape without causing a fission, while in a more compact shape (a sphere, for example) like the one shown above these same neutrons could be absorbed by another uranium atom, allowing the fission reaction to continue.

OK – so now picture that sphere of uranium melted down into a paper-thin flat sheet. Will this achieve criticality? Well…no – because the only way a neutron can find another uranium atom in which to cause a fission is to be emitted in the plane of the sheet – the great majority will escape. But it’s also easy to visualize slowly crumpling this thin sheet of uranium until, eventually, most of the neutrons will remain within the uranium – this is a critical geometry. We need both a critical mass and a critical geometry to achieve criticality. In fact, an important aspect of nuclear criticality safety is making sure that one cannot achieve both of these at the same time.
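The intuition behind critical geometry is surface-to-volume ratio: neutrons leak out through the surface, so the more surface a given volume of uranium has, the more neutrons escape before they can cause a fission. A rough sketch, comparing a sphere about the size described above with a hypothetical 1 mm-thick square sheet of the same volume (the dimensions are illustrative, not a real weapons calculation):

```python
import math

def sphere_surface_to_volume(radius):
    """Surface area per unit volume of a sphere; simplifies to 3/r."""
    return (4 * math.pi * radius**2) / ((4 / 3) * math.pi * radius**3)

def sheet_surface_to_volume(side, thickness):
    """Surface area per unit volume of a thin square sheet (all six faces)."""
    area = 2 * side * side + 4 * side * thickness
    volume = side * side * thickness
    return area / volume

r = 8.5                                # cm -- roughly softball-and-a-bit sized
volume = (4 / 3) * math.pi * r**3      # same volume for both shapes
thickness = 0.1                        # cm -- a 1 mm sheet
side = math.sqrt(volume / thickness)   # side length that preserves the volume

print(f"sphere: {sphere_surface_to_volume(r):.2f} cm^2 of surface per cm^3")
print(f"sheet:  {sheet_surface_to_volume(side, thickness):.2f} cm^2 of surface per cm^3")
```

The sheet has dozens of times more escape surface per unit volume than the sphere, which is exactly why the same mass can be critical in one shape and harmlessly subcritical in another – and why criticality safety leans so heavily on controlling geometry.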

So the first shape would weigh less than a critical mass and the second would weigh more.

And this brings up an interesting couple of questions – why is it that criticality in a reactor is not very thrilling, but criticality in a nuclear weapon is bad? And why is it that we need to add water to a nuclear reactor to achieve criticality, but not in nuclear weapons – especially if neutrons need to be moving slowly to cause a fission?

The answer to both of these questions begins with the amount of U-235 – the enrichment – of the uranium in question. In a nuclear reactor, there is far less U-235 and far more U-238 than there is in a nuclear weapon. This means that there are fewer “target atoms” in a given volume of reactor fuel (where 3-6% of the atoms are U-235) than there are in the same volume of weapons-grade uranium (in which over 90% of the atoms are U-235). It turns out that U-235 will fission if it’s struck with a fast neutron – it’s just not as likely as when the neutrons are moving at a more sedate pace. On top of that, fission caused by fast neutrons produces up to twice as many neutrons as fission caused by slower-moving neutrons, which helps to make up for the lower efficiency of fast neutrons in causing fission to occur.

In a reactor, this means that we need to give the neutrons the best chance possible to cause a fission, and this means slowing them down with a moderator like water. In weapons-grade uranium, on the other hand, there are so many U-235 atoms that even fast neutrons are likely to cause a fission. And since the neutrons in a nuclear weapon don’t need to be moderated and don’t travel as far, they can cause fission more quickly than in a reactor. This – along with the control rods that absorb excess neutrons in a reactor, and one other thing I’ll get to in just a moment – is why the chain reaction in a nuclear reactor is controlled and the one in a nuclear weapon is not.

The final piece of the criticality puzzle has to do with the two types of neutrons emitted during fission – prompt and delayed neutrons. Prompt neutrons are emitted immediately when the atom is split and they go on to cause fission fairly quickly as well (within nano- or microseconds), but there are neutrons that don’t emerge for seconds or minutes – these are the delayed neutrons. In a nuclear reactor, which operates for days, weeks, or even months at a time, delayed neutrons contribute to the neutron population in the reactor core, so they’re factored in with the prompt neutrons when designing a nuclear reactor core. It makes sense – a reactor startup can take several hours, so delayed neutrons can even play a role in controlling the reactor from the time the first control rods are pulled until they are inserted to shut it down. And, in fact, were it not for these delayed neutrons it would be impossible for a person to control the reactor – and would be very difficult even for electronic systems. Were it not for delayed neutrons we would likely not have nuclear reactors. In a nuclear weapon, by comparison, there’s not enough time for delayed neutrons to make an appearance – nuclear weapons are critical (actually supercritical) on prompt neutrons alone.
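A back-of-the-envelope calculation shows just how much the delayed neutrons slow things down. The numbers below are representative textbook values, not figures from any particular reactor: a prompt-neutron generation time of roughly 0.1 millisecond in a thermal reactor, a delayed-neutron fraction of about 0.65% for U-235, and an average precursor delay on the order of 13 seconds:

```python
# Representative textbook values (approximate, for illustration):
prompt_lifetime = 1e-4   # seconds -- prompt generation time in a thermal reactor
beta = 0.0065            # delayed-neutron fraction for U-235
mean_delay = 13.0        # seconds -- average delayed-neutron precursor delay

# Average time between fission generations: most neutrons are prompt,
# but the small delayed fraction is weighted by its very long delay.
effective_lifetime = (1 - beta) * prompt_lifetime + beta * (prompt_lifetime + mean_delay)

print(f"prompt-only generation time: {prompt_lifetime * 1e3:.2f} ms")
print(f"with delayed neutrons:       {effective_lifetime * 1e3:.1f} ms")
```

That tiny delayed fraction stretches the average generation time from a tenth of a millisecond to nearly a tenth of a second – a factor of close to a thousand – which is what puts a reactor on a timescale that people (and control systems) can actually keep up with.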

There are a lot more aspects of nuclear criticality – criticality safety comes to mind, for example – but they start to get a bit more complicated so this seems like a good place to stop for the moment. So let’s do a quick rehash and then we can call it a day!

• Criticality simply means that the number of fissions is staying constant over time – that one neutron from a fission goes on to cause another fission.
• There is a minimum amount of fissile material that will sustain a critical chain reaction – this is called the critical mass.
• It’s also important for the mass of material to be in a configuration in which most of the neutrons have a chance of causing a fission – this is the critical geometry – without both a critical mass and a critical geometry there will be no criticality.
• In a nuclear reactor, the fissions occur at a controllable pace due to a number of factors, including the dependence on delayed neutrons to achieve criticality.
• In a nuclear weapon, by contrast, all of the fissions come from prompt neutrons, so the reaction proceeds much more rapidly.

So there’s this rock that’s sitting about a meter from where I’m sitting right now. It’s got a beautiful deep green color and it’s a mass of flat squarish crystals that are maybe 5 mm on a side and about 1 mm thick. It’s also radioactive, which is why I bought it at the Columbus (Ohio) rock and mineral show a few decades ago – it’s a uranium mineral called torbernite; this particular piece came from Morocco.

Figure 1: my torbernite-encrusted rock

Knowing that it’s radioactive I was eager to take some measurements when I got it home – and, as a radiation safety professional, I’ve got my own instruments. I can’t remember the readings when I first bought the rock, so give me a minute to grab my instruments and I can get some readings now. Ready for some numbers?

At a distance of 1 cm my Geiger probe gives me a count rate of about 250,000 counts per minute – a respectable count rate. But I’ve got additional detectors – let me see what they say. My “baby” sodium iodide detector (the crystal on this one is 1 inch tall and an inch in diameter) gives me a reading of 140,000 cpm – less than the Geiger counter.

Figure 2: My GM (in the clip on top of the meter) and my two sodium iodide detectors

I’ve got a larger sodium iodide as well – 2”x2”, or about 8 times the volume of the smaller crystal. That one gives me nearly a half-million counts per minute. The background count rates on each of these (that is, the count rate when the probes are away from any radioactive materials) are about 75 cpm for the Geiger counter, about 3500 cpm with the baby sodium iodide, and about 10,000 cpm with the larger crystal. Or, to summarize them in a table:

Detector              Rock (cpm)    Background (cpm)
Geiger counter        ~250,000      ~75
1”x1” sodium iodide   ~140,000      ~3,500
2”x2” sodium iodide   ~500,000      ~10,000

Interestingly, the meter that I’ve connected these detectors to has a faceplate that reads out in mR/hr as well as in CPM. I’ll explain the difference in a minute; for now, let it suffice to say that the dose rate is more important than the count rate if I’m trying to figure out how dangerous this rock might be.

Figure 3: the faceplate of my radiation detector. CPM is on the top and the other two scales are for dose rate.

And – bonus! – I’ve also got a different meter that measures dose rate as well. So let’s see what readings I get from all of these:

Both of these tables show an awful lot of variability – it can make a lot of people wonder how we can ever know exactly what numbers to use and what these numbers mean.

Let’s look at the count rates – the first table – first.

In particular, take a look at the background count rates – a paltry 75 cpm for the Geiger counter and a whopping 10,000 cpm for the large sodium iodide detector, with the smaller sodium iodide in between. The reason for the difference here is that a Geiger tube is very sensitive to beta radiation and it’s not very sensitive at all to gamma rays; sodium iodide, on the other hand, does a good job of measuring gammas but not so much with beta particles. So looking at these readings tells us that background radiation mostly consists of gamma rays – which makes sense because beta particles can’t travel more than 20 feet through the air at best, and mostly not even 5 feet. So our two gamma detectors are picking up the background gamma rays, which mostly pass through the Geiger tube without registering. And, of course, the 2”x2” sodium iodide detector has a higher count rate because it has four times as much cross-sectional area as the smaller one.

Now let’s look at the meter readings – and this one surprised me, to be honest. The Geiger tube had a higher count rate than did the sodium iodide…but why? One thing that comes to mind is that the Geiger counter is sensitive to beta radiation and the sodium iodide isn’t. This suggests that the torbernite is giving off both beta and gamma radiation – the Geiger tube will measure almost all of the betas and some of the gammas, while the sodium iodide is only seeing the gammas. In fact, there’s probably some alpha radiation being emitted as well, but I don’t have a working alpha detector at the moment (mine’s out for repair) so I can’t check to see how much. As for the larger sodium iodide detector – we see the same factor-of-four difference between the baby detector and the larger one, so this is just due to the size of the detectors again.

When we look at the dose rate, though, the numbers are all over the place – none of these numbers are dangerous, but since radiation dose affects our risk of getting cancer or radiation sickness it’d be sort of nice to know which (if any) of these numbers we can trust. And part of the key here is to look at the ratios of the readings between the Geiger counter and the two sodium iodides. See anything familiar here? Once again the dose rates are proportional to the count rates – the GM dose rate is twice that of the small sodium iodide and the large sodium iodide dose rate is four times that of the smaller detector. In other words, the meter is just looking at the count rate and converting that to dose rate. It’s easy to do – the problem is that it’s the wrong way to measure dose rate.

Here’s why.

Say I throw a piece of gravel at you – you’re upset so you throw a rock back at me. “Not fair!” I say. “Your rock’s a lot bigger than the gravel I threw at you.”

You reply “You threw one thing at me and I threw one thing back at you – we’re even.”

“But yours hurt me more than mine hurt you.”

And that’s the thing – the meter that I have these different probes connected to was calibrated with Cs-137, with a gamma energy of 662,000 electron volts (or 662 keV). In effect, my meter has been “told” that every time it sees a count, the radiation causing that count has an energy of 662 keV. But the radiation coming from my rock has a slew of energies – some have more energy than Cs-137 and most have less. And that’s why the readings are so varied – my meter has no idea how much energy is passing through it – it only sees the number of counts. And since radiation dose (and dose rate) are related to the amount of energy deposited in a material, this particular meter can only accurately measure radiation dose rate from Cs-137 and it’s going to be wrong for everything else. Or, to put it another way, I don’t trust any of the first three readings.
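In effect, the meter is doing a one-line conversion from counts to dose. Here’s a sketch of that logic – the calibration factor below is made up purely for illustration, and real instrument response is more complicated than a simple energy scaling, but it shows why a Cs-137-calibrated meter misreads everything else:

```python
CAL_ENERGY_KEV = 662.0        # Cs-137: the gamma energy the meter was calibrated with
CAL_FACTOR = 1.0 / 120_000    # mR/hr per cpm -- illustrative, not a real calibration

def meter_dose_rate(cpm):
    """What the meter displays: every count assumed to be a 662 keV photon."""
    return cpm * CAL_FACTOR

def energy_corrected_dose_rate(cpm, actual_energy_kev):
    """Dose scales (roughly) with energy deposited, so correct for the
    actual photon energy -- a deliberate simplification for illustration."""
    return cpm * CAL_FACTOR * (actual_energy_kev / CAL_ENERGY_KEV)

cpm = 250_000
for energy in (140, 662, 1760):   # low-energy gamma, Cs-137, high-energy gamma
    print(f"{energy:>5} keV: meter says {meter_dose_rate(cpm):.2f} mR/hr, "
          f"energy-corrected ~{energy_corrected_dose_rate(cpm, energy):.2f} mR/hr")
```

At 662 keV the meter is right on; at lower energies it over-reads and at higher energies it under-reads – which is exactly the problem with pointing it at a rock that emits a slew of energies.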

But the ion chamber – that’s different. The ion chamber responds to the amount of energy actually deposited in it, so its readings take into account the higher and lower gamma energies from my rock. And – even better – there’s a piece of plastic on the bottom of the meter!

Figure 4: my ion chamber; the case (left) shows the brown beta shield that can be slid down to measure beta dose; the beta radiation passes through the foil window on the bottom of the black ionization chamber on the right.

I know – exciting, right? But here’s the thing – when that piece of plastic is covering a thin metal window on the bottom of the meter it screens out the beta radiation that we know is coming from the rock; when the window is open the betas can enter the chamber as well. The dose rate with the beta window open is 40 times as high as when it’s closed – that confirms what we concluded from the count rate measurements: there’s more beta radiation being given off than there is gamma. Cool, right?

It’s also good to know what these readings mean – what’s normal, what should be investigated, and what might hurt us. Let’s start with the easy one – count rate.

If I’m measuring count rate it’s usually because I’m looking for contamination – contamination is only very rarely a health risk, it just affects whether or not we need to wear protective clothing or decontaminate ourselves, equipment, or areas. So as long as the count rate I’m measuring isn’t high enough to call for decontamination then I don’t worry about it all that much. And unless the count rate is really high – in the hundreds of thousands of counts per minute – the contamination doesn’t pose much (if any) risk.

Most of the time, in a non-emergency setting, we want to try to keep contamination levels to a minimum. So anytime I have more than about a few hundred cpm above background with a GM or more than about 1000 cpm above background with a 1”x1” sodium iodide I’ll stop to clean it up (I don’t do contamination surveys with the larger detector because it’s too hard to see low levels above background). But in an emergency – a nuclear reactor accident or a dirty bomb, for example – we can actually let people have as much as 100,000 cpm with a Geiger counter before we need to start cleaning things up. And after the Fukushima accident, there were so many people who were contaminated that the Japanese changed their limits from about 10,000 cpm to over 100,000 cpm without causing any added risk to the public.

With dose rate, I normally measure less than 0.1 mR/hr with my ion chamber – and more like 0.01 mR/hr with a suitably sensitive instrument. When dose rates get to about 2 mR/hr the public isn’t allowed to have unrestricted access – but nobody’s going to be harmed by this level of radiation. In fact, it’s not until the dose rates get into the tens of thousands of mR/hr that they start to pose a risk. Here’s a subjective summary:

The last thing to mention here is the units of radioactivity, what they mean, and when they start to become a concern.

Of course, you might be able to find a label that gives the activity of a radioactive source, there might be a sign on the door to a room, shipping papers, or something like that. If you can find out the source activity, here’s what some of the numbers mean:

Putting it all together

So…let’s put this information to use!

1. Say you’re doing a contamination survey using your trusty pancake GM and you get a reading of 2000 cpm. What should you do?

This is clearly higher than background (remember, with a GM, background is normally around 50-100 cpm), but it’s not a danger to anybody. As a radiation safety professional, I’d be inclined to clean up the contamination if I could unless it were a large-scale emergency with other more pressing problems.

2. During a routine radiation survey you notice radiation dose rates are around 1.5 mR/hr in a waiting room. Is this a concern?

This dose rate is clearly elevated, but not enough to pose a risk to anybody. It’s also lower than the 2 mR/hr level that would call for restricting access for members of the public. On the other hand, radiation levels are higher than they ought to be – this warrants investigation to find out what’s causing the elevated rad levels. They should be reduced if possible.

3. A technician tells you that an incontinent nuclear medicine patient urinated on the floor a half-hour after being injected with 10 mCi of Tc-99m. He’s measuring radiation dose rates of about 10 mR/hr with a pancake GM. What do you need to do?

Tc-99m emits a gamma with much less energy than Cs-137, so the readings we measure with a Geiger counter are going to be higher than the actual dose rates. You need to bring an ionization chamber or an energy-compensated GM to the scene to find out what the actual dose rate is before you know what actions you need to take. Oh – and clean up the radioactive urine!

4. You see a source lying on the ground and you’re able to find the instrument it fell out of. The label tells you that the source is 75 Ci of Ir-192. What should you do?

Seventy-five curies is a fairly high amount of activity – that much Ir-192 will produce a dose rate of about 32 R/hr a meter away, which can cause radiation sickness in 2-3 hours. This source isn’t deadly over a short period of time, but it’s got to be treated with care. You’ll need to fall back from the source until dose rates drop to 2 mR/hr and establish a radiological boundary, evacuating everyone from within that boundary (don’t forget to survey on floors above and below the source if appropriate). If you have training in recovering radiography sources then you can attempt to retrieve it. If you don’t have such training, you’ll need to contact your regulators and the manufacturer so that they can retrieve the source.
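Taking the figure above of about 32 R/hr at one meter, the inverse-square law gives a rough idea of where that 2 mR/hr boundary ends up. This sketch ignores shielding, scatter, and anything between you and the source, so treat it strictly as an estimate:

```python
import math

dose_at_1m_mR = 32_000.0   # 32 R/hr at 1 meter, from the 75 Ci Ir-192 figure above
limit_mR = 2.0             # public-access boundary dose rate, mR/hr

def dose_at(distance_m):
    """Inverse-square falloff from a point source (no shielding or scatter)."""
    return dose_at_1m_mR / distance_m**2

boundary_m = math.sqrt(dose_at_1m_mR / limit_mR)
print(f"dose rate at 10 m: {dose_at(10):.0f} mR/hr")
print(f"2 mR/hr boundary:  ~{boundary_m:.0f} m from the source")
```

Even at 10 meters you’re still well above the limit, and the boundary lands well over 100 meters out – which is why the first action with a source like this is to fall back a long way and worry about recovery afterward.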

Radon being a gas, it could easily be collected from radium and was then loaded into tiny gold capsules that were sealed – these “seeds” were then inserted into tumors in order to treat cancer. And to make sure that they had enough (in those very early days of radiation medicine) the doctors tended to order more seeds than they ended up using. The extras often ended up being sold to gold buyers, melted down, and frequently sold to jewelry manufacturers…along with the radioactive lead, polonium, and bismuth the capsules contained.

Fast-forward nearly a half-century to the 1960s when doctors began reporting patients with odd skin conditions that were eventually identified as radiation dermatitis – not necessarily radiation burns (those take a higher dose over a shorter period of time), but skin damage, pigmentation changes, and damage to the underlying cells; one patient even died of skin cancer that was likely caused by the radiation. While it took some time, physicians and public health officials came to realize that the skin damage was due to radioactivity in the jewelry they were wearing; gamma spectroscopy showed the radionuclides to be radon decay products and more detective work revealed the origin to be the decades-old gold capsules.

Once the source of the contamination was known public health officials let the public know what was going on and offered to survey any gold jewelry brought to them – especially antique gold. Of about 160,000 pieces of jewelry examined, 155 were found to contain radioactivity; 133 of these were turned over to the government for disposal and the other 22 were kept by their owners. Of the pieces collected, the majority dated back to the 1930s and 1940s, although one ring was engraved with the year 1910.

Two factors seem to have made the difference between those who developed radiation dermatitis and those who did not, as well as the varying degrees of severity among those afflicted: the amount of radioactivity that was present in the gold and the amount of time that it was worn; the type of jewelry played a role as well, albeit a less important one. A wedding ring, for example, would likely be worn continually for years or decades and, as a ring, would be in closer contact with the skin compared to, say, a brooch or a pendant – a given amount of radioactivity in a wedding ring, then, would be expected to produce more serious skin damage than the same amount in a brooch or a pair of earrings.

There haven’t been any cases of radiation injury from radioactive jewelry in several decades; this doesn’t mean that every bit of contaminated gold has been accounted for; most likely any contaminated jewelry that remains is only lightly contaminated or is being held as a family heirloom rather than something that is worn frequently. In any event, whatever is left of this contaminated gold appears to pose little risk.

————–

A decade ago I became aware of another area in which radiation and jewelry cross paths – it turns out that radiation can cause some gemstones to change color, and some of these changes are for the better. Topaz, for example, can change from a fairly ordinary straw color to a much more attractive blue; diamonds can turn green and bluish-green (they can turn yellow or brown, but that was found to be due to thermal heating of the gems placed in the beam of a particle accelerator), and other gemstones can turn still more colors. The way it works is that the color of a gemstone (or anything else for that matter) is a function of the manner in which light interacts with it – in gemstones it has to do with what are called “color centers” and these color centers can be affected by exposure to ionizing radiation. Not only that, but different types of radiation and different irradiation periods can cause different kinds of color changes!

One example of this is with blue topaz, which is typically exposed to either neutrons or to high-energy electrons. Neutrons weigh about 2000 times as much as electrons and they cause more ionization within the crystal; at the same time, they are also large enough to jostle atoms around within the crystal structure or even to be captured by an atom, causing it to become radioactive and to decay to form an atom of a different element. On the other hand, the much lighter electrons don’t do nearly as much when they interact with the topaz crystal – they can cause ionization and minor changes, but that’s about it. So topaz that’s irradiated with neutrons ends up being a much darker blue than electron-irradiated topaz. Or, to put it another way, topaz that’s placed in a nuclear reactor core (which is a great source of neutrons) will be a deeper blue than the topaz that’s placed in the beam of an electron accelerator.

Here’s the thing, though – slamming neutrons into atoms can make them radioactive and, if the electron energy is high enough, so can electrons. So irradiating gems can make them more attractive, but it can also make them radioactive – the question is whether or not they become radioactive enough to pose a threat to the wearer.

This was enough of a concern that a number of studies were done to try to evaluate the threat (if any) these gems posed, including a few studies performed or funded by the Nuclear Regulatory Commission, as well as by some gemological organizations. And they all found the same thing – that the gems are radioactive, but not to the point of causing problems. And some of the reasons for this are different for different types of irradiation.

One reason is that the elements of which most gemstones are comprised don’t lend themselves well to becoming activated, so not much radioactivity is produced to begin with; on top of that, most activation products decay away fairly quickly, so it’s not too big a deal to store the gems until most of the activity that is induced is gone, and the traces of longer-lived radionuclides that remain are present in quantities too low to cause a problem. And there’s an additional factor that comes into play with electron-irradiated gems – unless the electrons have a lot of energy they won’t strike any atoms hard enough to eject them from the nucleus. Or, put another way, electrons can’t induce radioactivity unless they’re very high-energy – higher than what most accelerators are capable of producing.

OK – so I mentioned that there might be traces of radioactivity left in some of the gems, but that it’s not dangerous…and you might wonder how I can say that so confidently. Well, it turns out that there are traces of radioactivity in a lot of things – including all of the food that we eat and in the water that we drink. In most cases, our food and water contains more of this natural radioactivity than do irradiated gems. I’ve made measurements on both irradiated gemstones and bananas and salt substitute (both of which contain naturally radioactive potassium-40), and it turns out that a bunch of bananas gives off more radiation than even a pound of irradiated blue topaz that’s been cut, mounted, and ready to sell.

————–

In addition to materials that have been made radioactive by people there are also some gems with natural radioactivity – primarily uranium and thorium and their decay products. But these, too, pose no risk to the wearers. If you’re interested in reading more about this topic, here are some links that might be useful. Some of these reports are a bit technical, but they all contain a great deal of useful information:

Health Risk Assessment of Irradiated Blue Topaz (NUREG 5883) – Nuclear Regulatory Commission, 1992.

A History of Diamond Treatments – Overton and Shigley, 2008

Are You Afraid of Your Phone?

I got my first cell phone about 30 years ago – one generation ago in human time and five generations in transmission technology. And it seems that every few years – especially whenever a new transmission technology is developed – there’s a flurry of concerns about the harmful effects of cell phone radiation. So maybe this is a good time to take a look at the science behind cell phones and cancer.

(Source: US EPA https://www.epa.gov/sites/default/files/2017-05/electromagnetic-spectrum_0.png)

Here’s where some of the misunderstandings begin – to a scientist or an engineer “radiation” has a different meaning than it does to an ordinary person. When someone with a scientific background uses the term they’re speaking about the energy that’s given off by one object and transmitted through space to another. So scientists and engineers talk about thermal radiation, radio and microwave radiation, visible light radiation, and even gravitational radiation – as well as x-ray, gamma ray, cosmic, and alpha, beta, and gamma radiation. The thing is, only the second group of these has enough energy to remove an electron from an atom – to cause an ionization – and it’s the ionization that’s the first step in causing cancer. This means that the first types of radiation (thermal, etc) cannot cause cancer and the second group can.

The reason for this is actually fairly simple – it takes a minimum amount of energy to cause an ionization and, absent ionization, radiation can’t initiate the sequence of events that might lead one day to cancer. Anything with less energy can’t do the trick.

Think about trying to throw a ball onto the roof of, say, a school. If the roof is 40 feet off the ground then you have to throw the ball hard enough to reach 40 feet in the air. If you can’t throw the ball with enough energy to get 40 feet in the air it’s never going to land on the roof, no matter how many times you throw it. Electrons around atoms are the same – it takes a minimum amount of energy to strip an electron from an atom and nothing with less energy is going to cause an ionization. Radiation with less energy than ultraviolet isn’t energetic enough to cause an ionization – and cell phone radiation (including 5G) lacks the energy to ionize an atom.
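The ball-on-the-roof analogy can be put into numbers with Planck’s relation, E = hf – the energy of a single photon is proportional to its frequency. Comparing a few photon energies against the roughly 10 eV it typically takes to ionize an atom (hydrogen, for example, needs about 13.6 eV):

```python
PLANCK_EV_S = 4.1357e-15   # Planck's constant in eV*s

def photon_energy_ev(frequency_hz):
    """Energy of a single photon, E = h*f, in electron volts."""
    return PLANCK_EV_S * frequency_hz

ionization_ev = 10.0   # rough order-of-magnitude ionization energy for an atom

for name, freq in [("cell phone (~2 GHz)", 2e9),
                   ("5G mmWave (~30 GHz)", 3e10),
                   ("visible light (~500 THz)", 5e14)]:
    e = photon_energy_ev(freq)
    print(f"{name}: {e:.2e} eV per photon -- "
          f"about {ionization_ev / e:,.0f}x too weak to ionize")
```

A cell phone photon falls short of the ionization threshold by a factor of about a million – and, like the ball that can’t reach the roof, throwing more of them (more power, more calls) doesn’t change that.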

OK – so what about all the studies showing that, in spite of this, cell phones do cause cancer? And this is somewhat personal for me – not only do my wife and I, our kids, parents, and…well…our entire extended families use cell phones on a regular basis, but one of my relatives developed parotid gland cancer on the side of her neck that she normally holds her cell phone on. And then there’s the occasional paper in the medical literature as well – what about them?

The thing is, there’s not a single one of these papers that’s conclusive. What I mean by that is that, for every paper that shows a slight increase in cancer risk among cell phone users there are others that show no change at all. And the changes that they do show are tiny – smaller than the normal variability in the data, which means that they’re not very convincing. Consider – say there’s a group of six people. On average, about half the people in the world are men and the other half are women. But say that this group has four women and two men – 33% of the group is male and 67% is female. Is this significant? Should we start looking for a reason for this big shift in population statistics? Well…no. The change of a single person in a small group might cause what appears to be a dramatic change in the statistics – one that might not be borne out as the group grows in size. Think of a larger group – 30 people, say – with one extra woman. Now we’ve got 16 women and 14 men (53% and 47% respectively). In an even larger group of 100, that one extra person brings the numbers to 51% and 49% – even less impressive. The moral of the story is that, in a small study population, a very small actual change can look impressive – more impressive than is warranted.

Now think about a cancer that affects only, say, one person in 10,000. This means that if you’re looking at 10,000 people then you’d expect to see a single of these cancers. So what if you see two such cancers in a group of people who drink orange juice – does that mean that orange juice causes cancer? Or could it be that it’s just a fluke – that just one extra case, even of a rare cancer, isn’t that big a deal? And in fact, it’s the latter; a single extra case – even if it causes the expected number of cases to double – simply isn’t enough to show cause and effect.
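That “one extra case” intuition can be checked with Poisson statistics – the standard model for counts of rare events. If one case is expected, the chance of seeing two or more purely by luck turns out to be surprisingly large:

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing exactly k events when lam are expected."""
    return math.exp(-lam) * lam**k / math.factorial(k)

expected = 1.0   # one case expected in a group of 10,000 people
p_two_or_more = 1 - poisson_pmf(0, expected) - poisson_pmf(1, expected)
print(f"P(2 or more cases when 1 is expected): {p_two_or_more:.1%}")
```

That works out to about 26% – in other words, an apparent “doubling” of a rare cancer in a small study shows up by pure chance roughly a quarter of the time, which is why one extra case can’t demonstrate cause and effect.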

One of the more recent studies seemed to show an increase in some very rare cancers among rats that were exposed to cell phone “radiation” – but this study suffers from the problems of small-number statistics; these were pointed out by internal reviewers when the study was first done. And, of course, there’s also the fact that cell phone “radiation” can’t ionize atoms, so it can’t initiate the process that leads to cancer – and the report’s author doesn’t provide any plausible explanation as to how it might. But there’s more than this.

One factor is that this study exposed the rats to very, very high doses – far higher than any phone user would ever receive. And dose rate makes a difference – just as the rate at which you add water to a bathtub makes a difference. At a lower dose rate, the body can repair damage much more effectively than it can when its repair mechanisms are overwhelmed. And, of course, there’s the minor point that rats aren’t people and they might not respond the same way that we do to any exposure. That makes it hard to apply these results to humans as well.

But, then, the study also exposed the rats to the same dose across their entire bodies, which is something that doesn’t happen when we’re talking on the phone. If you’re holding the phone in your right hand then your right ear, part of the right side of your head, and maybe the right side of your neck are closest to the phone and will get the highest dose. The heart (where some of the rats developed tumors) is at least 10 times as far from the phone and will receive less than 1% of the dose to the head and neck – not to mention that the intervening tissue will reduce exposure even more by absorbing still more of the radiation. So for this study to be at all relevant to humans, our hearts would have to be amazingly sensitive to the effects of radiofrequency radiation – something that has never been noted (or even postulated).
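The distance argument is just the inverse-square law: intensity from a small source falls off with the square of the distance. A minimal sketch (illustrative numbers only – this ignores the additional absorption by intervening tissue mentioned above, which reduces the dose even further):

```python
def relative_dose(distance_ratio):
    """Dose relative to the reference point, assuming intensity falls
    off as 1/distance**2 (inverse-square law for a small source)."""
    return 1.0 / distance_ratio**2

# If the heart is ~10x farther from the phone than the ear is:
print(f"{relative_dose(10):.0%} of the dose at the ear")  # 1%
```

Ten times the distance means one-hundredth the dose – before tissue absorption is even taken into account.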

The bottom line is that this report adds nothing compelling to the arguments that cell phones might cause cancer. Or, to put it another way, I’m still using my phone and I’m not worried about my kids using theirs. As long as they don’t text while driving, that is.

NORM Who?

My very first radiological consulting project was for a glass-making company; they were re-bricking one of their furnaces and a load of replaced refractory brick they were disposing of ended up tripping a radiation detector at the landfill. One of my colleagues nodded knowingly and said “Norm.”

I’d been with the company for more than a year and there was nobody by that name in our office so I was confused. “Norm? Norm who? Is he in one of our other offices?” My colleague started laughing.

“Not ‘Norm who’ – you should be asking ‘NORM what?’”

Turns out that NORM stands for Naturally Occurring Radioactive Materials, and that’s what the refractory brick contained – one of the components was a zirconium mineral and, because they have similar geochemistries, anywhere we find zirconium we’re going to find some uranium as well. And there’s not just the uranium – as uranium decays to stable lead it goes through over a dozen intermediate steps, so the refractory brick also contained traces of radioactive thorium, radon, radium, polonium, and more. This is what the landfill’s portal monitor was picking up. And because we’re so good at detecting radiation, the amount that gave a clear signal – strong enough to set off the alarm – was nowhere close to a level that could be harmful.

NORM was in the news not long ago, although it was easy to miss the reference (https://news.wfsu.org/state-news/2021-04-13/state-of-florida-plans-cleanup-of-old-piney-point-phosphate-plant), combined with information published by the EPA.

Here too, due to the geochemistry of uranium, phosphate rock also tends to have elevated levels of the stuff. Interestingly, I saw evidence of this in aerial radiation surveys over my home state of Ohio – I also saw it when I was doing radiation surveys at the racetrack in Indianapolis – the greenish-yellow area in the image here shows the higher radiation levels we detected.

It turns out that phosphate rock is used to make some types of fertilizer, and anywhere that this fertilizer is used has enough radioactivity to show up in radiation surveys. As with the refractory brick, this isn’t nearly enough radioactivity to pose a risk.

I’ve run across NORM in a lot of places – in Iranian hot springs, in coal seams, in a former mineral processing facility in NYC, in the North Dakota oil fields, and many more. In some cases – primarily fossil fuel deposits – the NORM is there because uranium is insoluble in water that lacks oxygen. Since decaying organic material removes oxygen from water, swamps and other places where stagnant water collects leaves and plants – which turn into fossil fuel deposits over millions of years – tend to collect higher levels of uranium over time. In other cases, a mineral will be made of elements that are similar to uranium – the atoms are close to the same size and they have similar chemical properties – so it’s easy for uranium to slip into the crystal structure of any of a number of minerals, including ores of niobium, vanadium, titanium, and any of the rare earth elements. This happens in our bodies too, by the way – many actinide elements, as well as radium, will also slip into the crystal structure of our bones. And – getting back to fossil fuels briefly – radium and other radionuclides that come from the decay of uranium will become trapped inside the scale lining the pipes that carry oil and natural gas (as well as the brines that tend to accompany hydrocarbon deposits) from the depths to the surface, and inside the pipes and tanks used to process and store these fuels.