Author Archives: Dr. Zoomie

The Little Neutral One

A century ago was a heady time to be a physicist – new phenomena, new particles, and new laws of nature were being discovered, it seemed, daily, as physicists appeared determined to refute an earlier contention that “the future truths of physical science are to be looked for in the sixth place of decimals.” And, of course, every discovery brought with it additional questions to be answered – but for a few decades, one of the most significant of these questions involved beta radiation. To explain why, though, we need to back up a little bit and get into some of the background.

One of the bedrock principles of physics is the conservation of energy – often stated along the lines of “Energy is neither created nor destroyed, but only changes in form.” We see this in a pendulum – at the top of its arc (maximum potential energy) the pendulum is momentarily motionless (zero kinetic energy); as it begins its downward swing it loses potential energy (it’s closer to the ground) but gains kinetic energy (it’s moving faster). If we add the potential and kinetic energies at any point in the pendulum’s cycle the sum will be the same – the total energy is the same at all times.
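The pendulum's energy bookkeeping is easy to check numerically. Here's a quick sketch in Python (my own illustration, with made-up pendulum parameters, not anything from the original experiment) that integrates the swing and confirms the kinetic-plus-potential total barely budges:

```python
import math

# Illustrative pendulum: 1 kg bob on a 2 m arm, released from 30 degrees.
# These numbers are assumptions for the sketch.
g, L, m = 9.81, 2.0, 1.0
theta = math.radians(30)   # top of the arc...
omega = 0.0                # ...momentarily motionless
dt = 1e-4                  # time step, seconds

def total_energy(theta, omega):
    pe = m * g * L * (1 - math.cos(theta))   # potential: height above the low point
    ke = 0.5 * m * (L * omega) ** 2          # kinetic: speed of the bob
    return pe + ke

e0 = total_energy(theta, omega)
for _ in range(50_000):                        # five seconds of swinging
    omega += -(g / L) * math.sin(theta) * dt   # semi-implicit Euler step
    theta += omega * dt

drift = abs(total_energy(theta, omega) - e0) / e0
print(f"relative change in total energy after 5 s: {drift:.1e}")
```

The tiny residual drift comes from the numerical integration, not the physics – the exact solution conserves the sum perfectly.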

Two spectra I collected, Cs-137 on the top and Co-60 on the bottom. Note the distinct peaks that are unique to each of these radionuclides.

Conservation of energy works at the atomic level as well. A radioactive atom has a specific amount of energy contained within, as does the atom to which it decays. Thus, every radioactive decay involves an atom losing a very specific amount of energy – energy that goes into the radiation the atom has just emitted. So when early researchers analyzed the energy of the alpha and gamma radiation given off by the radioactive atoms they were studying, they were pleased to note that the radiation had distinct energies – so distinct, in fact, that they can be used to identify the radionuclides that emitted them. If I see gamma radiation with an energy of 662,000 electron volts (abbreviated as 662 keV) I know there is Cs-137 in the area; 1460 keV is the signature of potassium (specifically, the naturally radioactive isotope K-40), and so forth. Alpha radiation is the same – an alpha with about 4784 keV of energy was almost certainly emitted by a Ra-226 atom.
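This identification trick amounts to a table lookup: compare a measured peak energy against a library of known lines. Here's a toy version in Python (my own sketch – real spectroscopy software uses far larger line libraries and does actual peak-fitting, but the idea is the same):

```python
# A few signature gamma lines, in keV, from standard decay tables.
SIGNATURE_LINES_KEV = {
    661.7: "Cs-137",
    1460.8: "K-40",
    1173.2: "Co-60",
    1332.5: "Co-60",
}

def identify(peak_kev, tolerance_kev=2.0):
    """Return the nuclides that have a known line within tolerance of the peak."""
    return sorted({
        nuclide
        for line_kev, nuclide in SIGNATURE_LINES_KEV.items()
        if abs(line_kev - peak_kev) <= tolerance_kev
    })

print(identify(662))    # ['Cs-137']
print(identify(1461))   # ['K-40']
```

A peak that matches nothing in the library simply comes back empty – which, in the field, is a prompt to go look up more obscure nuclides.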

So imagine the surprise of physicists a century ago when they realized that beta decays have a continuous spectrum and not the sharp peaks they expected to see. This left them in a quandary. One possibility was that conservation of energy was not as rigid a rule as physicists thought – perhaps it was only accurate when averaged across a large number of atoms, but not with every individual atom; that was the suggestion of the great Niels Bohr. Or, as suggested by another great physicist, Wolfgang Pauli, maybe beta decays were associated with the ghost of a particle that we just couldn’t find a way to detect – and, together, both particles carried the total energy difference between parent and progeny, divvying it up between them and conserving energy.

Bohr, a future Nobel laureate (as was Pauli), was highly respected and any suggestions he made had to be taken seriously. On the other hand, the thought that energy might not be conserved in every single interaction – that some radioactive decays might simply involve more or less energy than other identical decays – was anathema. But by 1934, as beta spectra became more accurately determined, physicists noticed that they were unique to each beta emitter – they did not behave the way that Bohr’s hypothesis required. It turned out to be easier to believe in a ghost particle that we hadn’t yet learned to detect than to believe that the conservation of energy might sometimes be violated. So, however reluctantly, physicists began to try to puzzle out what this particle might be like.

Beta energy spectra – note that, unlike the earlier gamma spectrum, there is no peak energy. Source: https://ocw.mit.edu/courses/nuclear-engineering/22-02-introduction-to-applied-nuclear-physics-spring-2012/lecture-notes/MIT22_02S12_lec_ch1.pdf

One thing seemed likely – since it was invisible to all of the detectors of the day, it almost certainly carried no electrical charge; it was electrically neutral. For this reason, Pauli suggested naming it a neutron (following the convention set by the names electron and proton). The problem was that, around the same time, British physicist James Chadwick wanted to use “neutron” to name the heavy neutral particle in the atomic nucleus he’d just discovered. To resolve this small impasse, Italian physicist Edoardo Amaldi joked that they should name Pauli’s smaller particle “neutrino,” the “-ino” an Italian diminutive making the word mean “little neutral one” (in comparison to Chadwick’s much larger particle). The name was adopted – now physicists just had to find the thing. And that raised an interesting question – how do you find a particle that doesn’t want to interact with your detectors?

Consider how we detect ionizing radiation, for example. Gas-filled detectors work because incoming radiation slams into an atom and strips off an electron; this sets into motion the process of gas amplification that leads to a pulse of electrons passing through an electrical circuit that includes an analyzer capable of extracting information about the radiation. Or consider a scintillation detector, in which the radiation interacts with the detector medium (usually a crystal, but sometimes liquid or plastic) – this interaction deposits energy into the scintillator and leads to the emission of photons that, passing through a photomultiplier tube, produce a signal that can be amplified and studied. In fact, even our eyes work this way with non-ionizing radiation (light) – the photons interact with the rods and cones of our retinas, which lets them be detected, sending signals to our brains. But something that doesn’t seem to interact with anything? That’s a bit more difficult.

In 1942, Chinese physicist Wang Ganchang suggested that neutrinos might be revealed through a form of radioactive decay called electron capture (or K-capture), in which a nucleus absorbs one of its own orbital electrons and emits only a neutrino – the recoil of the nucleus would betray the otherwise-undetectable particle. Fourteen years later a team of American physicists announced their detection of neutrinos in the journal Science, work recognized with the 1995 Nobel Prize in Physics.

We’ve learned a lot in the decades since neutrinos were first confirmed to exist, and the ghostly little particles turn out to give us some huge insights about the universe – in fact, for some time it was thought that the fate of the universe hinged on whether neutrinos weighed nothing…or almost nothing. Neutrino observations from the Sun have raised questions about the inner workings of the solar furnace and, in 1987, a burst of neutrinos that appeared in the world’s neutrino telescopes was associated with a supernova in a satellite galaxy to our own Milky Way. In fact, some astrophysicists have even suggested (erroneously – https://what-if.xkcd.com/73/) that neutrinos from gamma ray bursts and supernovae might be responsible for mass extinctions throughout the galaxy. We’ve also discovered that there is an entire family of neutrinos out there, associated not only with electrons (the beta particles of beta decay) but also with the more exotic muons and tau particles – along with their respective anti-neutrinos. These are all still quite ghostly – but they are no longer invisible or unknown…and they’re fascinating.

A neutrino observatory in Antarctica. Neutrinos are so elusive that they pass completely through the Earth before they’re detected here. Source: https://research.msu.edu/23m-nsf-grant-funds-to-upgrade-antarctic-icecube-neutrino-observatory/

About That Tritium…

I noticed recently a flurry of articles about Japan’s recent announcement that it planned to release millions of gallons of water from the site of the Fukushima reactor into the Pacific Ocean. Most of the “flurry” was about the response to this announcement – from environmental groups, some of Japan’s neighbors, and other concerned parties. And it made me wonder why so many were so exercised about so little a risk. But before getting into that, let’s take a look at this tritium stuff.

The first time I came across tritium was when I was in the Navy – we had a tritium monitor in the Torpedo Room of my submarine and I asked why. “It’s for the special weapons” was the reply. Which makes sense – tritium is used in some forms of nuclear weapon and if any of it leaked out then we needed to know. I read up on it and realized it wasn’t much of a threat, so I stopped giving it much thought.

Fast-forward to after I got out of the Navy and was working for an academic radiation safety program to help pay for my college. I noticed that a lot of our research labs used tritium so I learned a little more about it. I found out it emitted a very low-energy beta particle – so low-energy that we couldn’t even detect it with our normal Geiger-Mueller detectors; so low-energy that the tritium beta would barely (if ever) even penetrate through the outer layers of skin to reach the living cells underneath. In fact, I remember holding a small plastic vial that contained (according to the label) over 20 curies of tritium – a large amount of radioactivity – and I couldn’t get a single peep on my Geiger counter. And even if someone ingested or inhaled some of it, this same low-energy beta particle made it about the most innocuous type of radioactivity any of us could take into our bodies. It just wasn’t a big deal.

A few years later I was working for the state government and I learned a little more about tritium – mostly that the same low-energy beta that made it such a minor threat also made it hard to clean up since nobody could survey for it directly. Not only that, but tritium, as a nuclide of hydrogen, would simply dissolve into water and would go wherever water or water vapor traveled. My boss, for example, had needed to deal with a concrete wall that was contaminated with tritium; concrete is somewhat porous and the tritium had permeated the entire thickness of the wall and the only way to “decontaminate” the wall was to tear it down entirely. Not because the tritium posed a risk to anybody – the beta particles couldn’t even escape from the concrete to expose anybody (and, one can hope, nobody was nibbling on the concrete itself) – but because the amount of tritium contamination exceeded regulatory contamination limits. So the wall was ripped down, loaded into waste containers, and shipped off to be buried in a waste disposal site. A lot of money was spent to mitigate a non-risk.

Let’s jump ahead a little more – to another job I had that included oversight of the radiation safety at a laser fusion research facility. They had more tritium than I’d ever seen at a single location – enough to kill anyone who ingested or inhaled any of it – and even there we didn’t get any readings from their stores. On the other hand, if they took a tiny (1 mm) plastic sphere, filled it with frozen tritium, and slammed it with a powerful laser a tiny fraction of the tritium would fuse, releasing enough energy and enough radiation to give a fatal dose to anyone unlucky enough to be in the target chamber at the time. Enough of anything can be dangerous – we just have to know where the boundary between “most likely safe” and “possibly unsafe” lies.

Somewhere along the line I also came to understand where tritium comes from – in nature it’s formed when cosmic rays collide with atoms in the atmosphere. The Earth actually has a lot of natural tritium; every glass of water we drink, every lake or ocean or pool we wade into, every tub or hot spring in which we soak has tritium dissolved in it, to the tune of a few tens of picocuries (pCi) per liter of natural water. Globally, cosmic rays produce over one and a quarter million curies of tritium every year; couple that production rate with the rate at which it decays and we find that our planet has a total inventory of tritium of about 26 million curies, including the tritium that’s in our own bodies from eating, drinking, and breathing.
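Those two numbers – the production rate and the global inventory – are linked by tritium's decay rate: at steady state, the inventory is the production rate divided by the decay constant. Here's a back-of-the-envelope check in Python (the production rate below is my own assumed round figure, not a measured value):

```python
import math

HALF_LIFE_YR = 12.32            # tritium's half-life, years
PRODUCTION_CI_PER_YR = 1.4e6    # assumed cosmic-ray production rate, curies/year

decay_constant = math.log(2) / HALF_LIFE_YR           # fraction decaying per year
inventory_ci = PRODUCTION_CI_PER_YR / decay_constant  # steady-state inventory

print(f"equilibrium inventory: about {inventory_ci / 1e6:.0f} million curies")
```

That lands in the same ballpark as the roughly 26 million curies quoted above – production and decay have long since come into balance.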

Tritium is also, of course, produced in nuclear reactors – the core of a reactor is a neutron-rich environment and as water passes through, it’s exposed to those neutrons. When a neutron strikes a hydrogen atom, sometimes it will stick, forming deuterium (a stable isotope of hydrogen with one neutron and a proton). If another neutron hits a deuterium atom and is captured, it will form tritium (one proton and two neutrons) – this is where the tritium at the Fukushima site originated.

And that brings us to the water to be discharged from Fukushima into the ocean. It turns out that there are about 20,540 curies of tritium contained in the 820,000 cubic meters of water stored at the Fukushima site. Most of this is groundwater from the area, contaminated with reactor coolant that leaked from the broken reactors. But the Pacific Ocean is huge – it contains about 400 billion times as much water as is being stored on the Fukushima site. So adding the water from Fukushima into the Pacific Ocean increases the tritium in the ocean by only a minuscule amount – by less than a trillionth of a curie (less than 1 pCi) for every cubic meter of water in the ocean. And because tritium emits so low-energy a beta particle, this increases radiation exposure by so little (less than 1 microrem annually) that it poses no risk to anybody – or to any creature – in that water. It’s like adding a few more grains of sugar to a pitcher of Kool-Aid. This means that seafood lovers can continue eating their sushi or their fish-and-chips without worrying that it’s going to hurt them.
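The dilution arithmetic is simple enough to check yourself. A quick Python sketch (the Pacific's volume here is simply the "400 billion times" figure applied to the stored volume, not an oceanographic measurement):

```python
TRITIUM_CI = 20_540              # tritium in the stored water, curies
STORED_M3 = 820_000              # stored water volume, cubic meters
PACIFIC_M3 = 400e9 * STORED_M3   # ~3.3e17 cubic meters of seawater

added_ci_per_m3 = TRITIUM_CI / PACIFIC_M3
added_pci_per_m3 = added_ci_per_m3 * 1e12   # 1 curie = 1e12 picocuries

print(f"added tritium: about {added_pci_per_m3:.2f} pCi per cubic meter")
```

The result comes out well under a picocurie per cubic meter – a tiny addition on top of the natural tritium already dissolved in every liter of water on Earth.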

Something else to keep in mind is that seawater contains many other natural radionuclides that swamp any radiation dose from the waters of Fukushima. Uranium, rubidium, and potassium are also dissolved in this water, and each of these contributes far more radioactivity (and produces far more radiation dose) than the tritium from Fukushima. The bottom line is that, while discharging this water might be something that we can measure and calculate, it’s just not worth worrying about – in fact, the stress from worrying is going to be more dangerous than the cause of that worry.

Reference: https://www.whoi.edu/multimedia/source-of-radioactivity-in-the-ocean/

Nuclear Medicine

This month marks the 80th anniversary of doctors injecting radioactivity into their patients – not as part of evil experiments, but to help to diagnose and treat disease. And in spite of being only 80 years old, the field of nuclear medicine has become impressively versatile and valuable over the decades, giving physicians the ability to, in effect, “see” within their patients’ bodies to learn what was ailing them. Here’s how it works.

Tracking tagged molecules

Every organ, every tissue in our bodies has a different biochemistry and uses different molecules and elements. The thyroid, for example, uses iodine to function and to produce the hormones that help to regulate our metabolism; the brain thrives on an exclusive diet of glucose; our bones incorporate calcium, phosphorus, and a handful of other elements. So if we want to learn about how the thyroid is functioning – or to learn where in the body metastatic thyroid tissue has migrated – injecting radioactive iodine will cause the thyroid tissue to become radioactive, making it easy to find. Similarly, adding radioactive atoms to glucose molecules can help us to see what’s going on in a patient’s brain while adding radioactivity to molecules that are used by growing bone can tell a physician where there is a small fracture that has begun to knit together.

Physicians have known for decades that different organs and tissues have different biochemical characteristics. But until the middle of the last century this knowledge, while interesting and important, was of no help whatsoever in diagnosing illness – with the advent of nuclear technology (specifically, nuclear reactors and accelerators) this began to change. Radioactive atoms emit radiation (of course!), and that includes gamma rays that can penetrate through the overlying tissues to be detected from outside the body.

So say a patient has metastatic thyroid cancer – when I-131 is injected into the patient the iodine will seek out thyroid tissue, including any metastases where cancerous thyroid cells have spread through the body; when the patient is scanned with a gamma camera (more on this shortly) these clumps of now-radioactive thyroid tissue will show up as bright spots on the scan. Give the patient a higher dose and, instead of revealing their locations, the radioactive iodine will deliver a high enough dose of radiation to destroy the cancerous tissue.

In 1934, Frédéric and Irène Joliot-Curie formed radioactive P-30 by bombarding stable aluminum with alpha particles. Not long after, Enrico Fermi, speculating that neutrons’ lack of an electrical charge might make them even more effective, began slamming neutrons into as many elements as he could get his hands on to see what would happen. Four years later Fermi was awarded the Nobel Prize for this work.

The Second World War slowed non-war-related research; at the same time, it provided wonderful new tools for producing artificial radionuclides – nuclear reactors and particle accelerators – as well as new instruments with which to measure and characterize the radiation they gave off. After the war, the stage was set for putting these to use in medicine.

While there had been some early work with nuclear medicine in the 1930s, the first bona fide treatment came about in 1946 with the use of I-131 to stop the growth of a thyroid tumor and the subsequent realization that lower doses could be used for imaging rather than destroying the diseased tissue. Over the ensuing years and decades, nuclear medicine physicians learned to insert radioactive atoms into molecules that would be taken up by other organs, by cancerous tumors, by healing bone, and more – the current list of nuclear medicine procedures published by the Nuclear Medicine Technologist Certification Board lists 78 procedures using radiopharmaceuticals not even dreamed of in 1946 (https://www.nmtcb.org/exam/procedureslist.php). And the most recent additions – positron-emitting radionuclides (often used in conjunction with CT scanners) – make use of a particle of antimatter, most commonly encountered in science fiction.

Collecting images

It’s possible to find out where the radioactivity is collecting without creating an image at all. If I give someone I-131, for example, I can start scanning their body with my Geiger counter – as I survey I’ll notice I get a low count rate at their feet, higher when surveying their pelvis and abdomen, and highest when my detector is over their throat. This would tell me that there’s not much in the feet or legs that absorbs iodine, that it collects in the bladder and colon while waiting for the person to urinate or defecate, and that the thyroid collects iodine more effectively than any other part of the body. On the other hand, if I got a high count rate from, say, the underarm, groin, or liver then I might suspect a metastatic tumor that had spread to lymph nodes or to the liver.

The thing is, while many scientists and engineers are comfortable with numbers, they’re a little unusual in that; most people can get more information from a picture than from a table of numbers. Finding a way to produce images of where the radioactivity was collecting was a way to make this technique accessible to a much wider range of physicians.

The Moritz Orthodiagraph – devised by Friedrich Moritz, of Munich (1861-1938)

The problem is that gamma rays are given off in all directions – so if I’m holding my radiation detector over, say, an I-131 patient’s nose, it’s going to show a high count rate. This isn’t because some iodine ended up in the person’s nose – it’s because the iodine in the patient’s thyroid emits gamma radiation in all directions, including into my radiation detector. The count rate can only tell me where the iodine actually is if I can find a way to screen out all of the gammas except for those coming from directly beneath my detector.

One way to do this is to take a piece of lead and drill an array of holes through it, putting an array of radiation detectors on the other side of the lead. If a particular detector registers a “hit” then I can be sure that the gamma was emitted directly beneath that hole in the lead. Doing this with the entire array of holes and detectors gives us an image of the part of the body beneath the array; scanning the array over a part (or all) of the body can tell us where the radionuclide has collected. The former (parking the array over, say, the neck) can help us to locate small nodules within the thyroid while the latter (assembling an image of the entire body) can show us where metastatic thyroid tissue has landed.

Newer methods

One of the problems with nuclear medicine is that it shows where the radionuclide ends up, but not necessarily what structures it ends up in. For example, a “hot spot” in the abdomen might be the result of radioactivity in the liver, the gall bladder, one of the ducts between the liver, gall bladder, and digestive tract, the connective tissue in the area, or maybe even in the muscles that line the abdominal cavity. Taking “pictures” from different angles can help to narrow down the possible location – a hot spot halfway between the front and back walls of the abdomen, for example, is more likely to be in the liver than in the abdominal muscles. But even the best gamma camera has limits to its ability to precisely resolve the location – precise localization requires a higher-resolution image, and we can get that sort of precise three-dimensional imaging from a CT scan. Thus, the PET-CT – a device that can show not only metabolic or biochemical activity (where the radiopharmaceutical collects) but also the structures in which the hot spots lie.

Images from a Ga-68 PSMA PET-CT in a man with prostate cancer shows tumors in lymph nodes in the chest and abdomen. Credit: Adapted from Int J Mol Sci. July 2013. doi: 10.3390/ijms140713842. CC BY 3.0.

Radiation safety

Injecting a patient with radioactivity raises some safety concerns – for the patient, for the medical staff, for their family, and for others with whom they might come in contact.

Consider the I-131 patient we discussed earlier. I-131 gives off both beta and gamma radiation; the beta radiation is not much of a concern because it’s trapped within the body, but gamma rays can (and do) expose others. The key is to limit that exposure.

When a patient is in the hospital, workers can use the principles of Time, Distance, and Shielding to reduce their exposure and that of others. Reducing the amount of time anyone is near a patient reduces their exposure, as does increasing the distance to the patient. Nurses, for example, can stand at the foot of the bed instead of at the head to be further from the thyroid; if they must be at the head of the bed, they can stand back a step instead of standing at the bedside. Simply taking one step away from the patient can reduce radiation exposure by a factor of four or greater. In addition to this, many hospitals install lead shielding in the walls surrounding the rooms where their I-131 patients remain during their stay; this reduces radiation exposure to those in adjacent rooms.
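The "one step back" rule of thumb follows from the inverse-square law – dose rate from a compact source falls off as 1/r². A minimal sketch (the distances below are illustrative assumptions, not real dosimetry):

```python
def relative_dose(r_near_m: float, r_far_m: float) -> float:
    """Dose rate at the farther distance, relative to the nearer one (inverse-square)."""
    return (r_near_m / r_far_m) ** 2

# Standing at the bedside (~0.5 m from the patient) vs. one step back (~1 m):
print(relative_dose(0.5, 1.0))   # 0.25 – exposure drops to one quarter
```

Doubling the distance always cuts the dose rate to a quarter, which is why a single step back is such an effective habit for staff.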

Radiation safety for the patient primarily takes the form of carefully calculating the amount of I-131 that’s administered – a high enough dose to obtain a good image or to ablate the thyroid tissue, but not much more than that to avoid excessive radiation exposure to healthy tissues. In addition, nuclear medicine patients are usually given instructions on how to reduce radiation exposure to their family and friends when they return home.

These instructions usually include cautions to use a separate bathroom than other family members (if possible), not to share cups or cutlery with others, not to hold babies and young children on their lap, to sleep in a separate bed from spouse and children, and so forth – the idea is to not only reduce radiation exposure from the gamma radiation emitted from their bodies but also to reduce the chance for family members to come in contact with radioactive contamination. In cities with widely used mass transportation (or patients who rely on mass transit) there is also a common admonition to try not to sit or stand too close to other passengers if possible. These precautions might last for only a few hours (with most PET radionuclides), for a few days (in the case of Tc-99m), or for a few weeks (in the case of I-131).

One other consideration in recent years is that many cities have established networks of radiation detectors as part of their counterterrorism efforts, and nuclear medicine patients cause the majority of radiation alarms that come in. In the decade or so that I’ve worked with law enforcement on radiological interdiction, I’ve seen far more alarms from nuclear medicine patients than from all other sources combined. For this reason, many nuclear medicine physicians will give notes or cards to their patients so that, if they do trigger a radiation alarm, they can show the cards to the police.

After 80 years nuclear medicine is a mature methodology with a solid technological and scientific foundation. Over the years it’s been developed to a high level, and in recent years there have been about 15 million procedures annually – both diagnostic and therapeutic. For what it’s worth, I account for some of those procedures – I’ve had a number of scans over the last decade or so, so I can personally confirm that it doesn’t hurt, it doesn’t make one dangerously radioactive, and it doesn’t put others at risk. And the information that it provides can either be used to help guide treatment – or to put one’s mind at ease. Either way, it’s useful to both patients and physicians – probably one of the reasons it’s still in use.

The Demon Core

On May 21, 1946 physicist Louis Slotin was conducting a dangerous experiment with a plutonium sphere – designed and constructed to the exacting standards required to achieve criticality and to explode with the force of tens of thousands of tons of TNT. The experiment involved lowering a hemispherical beryllium reflector over the core – close enough for neutrons reflected off the beryllium back into the plutonium to start causing atoms to fission, but far enough away to keep those fissions from increasing uncontrollably. To do this, Slotin rested the upper beryllium hemisphere on the blade of a screwdriver, holding it a safe distance above the lower one. To cause the fissions to start he’d move the screwdriver slightly to lower the top half ever so slightly, noting the increase in neutron levels; moving the screwdriver in the other direction increased the separation and shut the reaction down.

And then his hand slipped.

The top hemisphere dropped too close to the bottom and everyone in the room saw a blue flash and felt a pulse of heat. Realizing what had happened, Slotin quickly flipped the top hemisphere off of the assembly, stopping the reaction.

Slotin died nine days later; his estimated radiation exposure was well over the 1000 rem (10 Sv) that is invariably fatal. Of the other seven people in the room at the time of the accident, the highest radiation exposure was 182 rem and nobody else exceeded a dose of 62 rem. Two later died of cancer – both of different forms of leukemia – but their doses were low enough and the cancers appeared late enough (19 and 29 years after the accident) that it’s unlikely that radiation from this accident played a role in their illness.

The image above is a re-creation of the 1946 experiment: the beryllium hemisphere, held up by a screwdriver, is visible, but the plutonium core inside is not. Source: “A Review of Criticality Accidents,” LA-13638, Figure 42, page 75, Los Alamos National Laboratory.

The thing is, this same plutonium core had killed before – in August 1945. That time its victim had been a physicist named Haroutune (Harry) Daghlian.

Daghlian was performing a different experiment than Slotin, but he, too, was working by hand – manually stacking tungsten carbide bricks around the core to change the number of neutrons reflected back into the plutonium. Like Slotin’s, Daghlian’s hand slipped, dropping a brick onto the assembly and causing the core to become critical. Also like Slotin, Daghlian was able to disassemble the critical configuration, shutting down the reaction and receiving a fatal dose of radiation in the process. Having received a dose of about 310 rem (3.1 Sv), Daghlian died 25 days later.

I had known about the accidents and the deaths; I learned about them both in the 1990s during a class I took on nuclear criticality safety. But it turns out that the history of this particular core – re-nicknamed the “Demon Core” in the aftermath of Slotin’s death (its original nickname had been “Rufus”) – is far more interesting than this, and extends over a longer period of time.

Weighing a mere 14 pounds and measuring 3 ½ inches in diameter, this particular bit of plutonium had actually been manufactured in 1945 and it was originally intended to be placed in the third weapon to be dropped on Japan in the closing days of the Second World War – tentatively scheduled to be used on August 19, 1945…or on the first day after August 18 with good weather over the target area. As it was being prepared for shipment, Japan surrendered and it remained in the US, at the Los Alamos Laboratory site in New Mexico – the same site at which both Daghlian and Slotin worked.

In the aftermath of Slotin’s accident, the core received its nickname. But, more importantly, the lab also implemented a number of new safety measures that were intended to reduce the risk of such an accident occurring as well as moving the scientists far enough away to keep them safe even if a criticality did occur. The criticality safety training I attended in the mid-1990s was a continuation of these changes, as was a series of criticality safety inspections and audits I conducted at a uranium enrichment facility several years later.

But the history of the Demon Core didn’t end with Slotin’s death. As the US moved into the post-war years it realized that it needed to better understand the effects of these powerful new weapons (a need that would only grow more urgent when the Soviet Union set off its first nuclear weapon in 1949). One early series of tests, dubbed Operation Crossroads, was planned for 1946 and was intended to learn about the impact of nuclear weapons on naval ships – the Demon Core was to be shipped to a balmy tropical island where it would be placed into a nuclear weapon and detonated.

The second accident put an end to that plan. The reason for this is that, when plutonium and uranium atoms fission they split roughly in half, and the two fission fragments that are formed are radioactive. Not only that, but some of these fission fragments absorb neutrons quite efficiently – and in those early days of nuclear weapons design and fission research, nobody was quite sure whether these fission-product “poisons” would affect the test explosions. Why, the thinking went, use an uncertain weapon core when they could use a new one with no such issues instead? So instead of a working vacation on a tropical island, the Demon Core remained locked up in New Mexico.

Unable to be used in Operation Crossroads, the Demon Core (or, if you prefer, Rufus) was eventually melted down and mixed in with other plutonium, eventually ending up in a number of nuclear weapons cores in the nation’s nascent, but growing, nuclear stockpile.

References:

https://web.archive.org/web/20160524122659/

https://www.newyorker.com/tech/annals-of-technology/demon-core-the-strange-death-of-louis-slotin

Nuclear Death in the Desert: the SL-1 Accident

On my very last day at Naval Nuclear Power School – in addition to doing a lot of paperwork, receiving our orders, and collecting still more paperwork to bring to our next duty stations – we were sent to the only room at Nuke School large enough for our entire class. There, we heard about the loss of the USS Thresher (SSN 593) and what happened to cause it to sink. And then we were shown a movie about a nuclear reactor accident in the desert of southern Idaho. The discussion of the Thresher got my attention – I had already volunteered for submarine duty (and, as it turned out, I ended up on one of Thresher’s sister ships – but that’s a story for another time) – but there was something about the reactor accident that riveted me: since all three operators died, nobody was ever quite able to figure out exactly why the accident happened. Even today, in spite of the engineering investigations, scientific publications, and popular accounts, we’re still not quite sure what caused it – the accident came down to the actions of a single person, and that person is dead and can’t tell us why he did what he did. It was this human factor that caught my attention.

But I’m getting ahead of myself – let me go back to the start.

In the 1950s it was clear that the Navy’s nuclear power program was going to be a success, and other branches of the military became interested in what nuclear energy might do for them. The Air Force began experimenting with, among other things, nuclear-powered aircraft, while the Army looked into developing small land-based nuclear power plants that could be easily transported by truck or airplane and used to power remote bases – the program that produced the SL-1. As an aside, there were also some non-military programs aimed at developing spaceships powered by nuclear reactors or driven by nuclear explosions – which sounds like a good topic for a future blog posting!

The SL-1 reactor was the Army’s first design of a “stationary, low-power” nuclear reactor. The prototype of this reactor was built at a government facility in the desert of southern Idaho, not far from the Navy’s prototype submarine and carrier reactors, the Air Force’s nuclear airplane project, and a few other military or military-adjacent nuclear projects.

US Atomic Energy Commission image of SL-1 reactor core being removed from National Reactor Testing Station facility in Idaho

Along with its other specifications, the Army wanted a reactor that could be operated by a small number of people – the design that was constructed in Idaho was fueled with uranium enriched to the same level as the uranium used in nuclear weapons, arranged in an array that included nine control rods. In a nuclear reactor, neutrons released by one fission fly out of the fuel rod and into the water in which the core is immersed; while in the water the neutrons lose energy and slow down, becoming more effective at causing fission. Neutron-absorbing control rods can be inserted into the reactor core to soak up these neutrons, shutting down the reactor; pulling the rods from the core will allow the reactor to start up. Normally a reactor’s control rods are moved by a motor, hydraulics, or some other sort of mechanism. In the case of the SL-1, the reactor sat in a pool on the main floor of the containment structure, beneath some radiation shielding, and the operators could actually walk out on top of the reactor when it was shut down and radiation levels were low – which, as we’ll see, put them in a position to handle the control rods directly.
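The control-rod arithmetic above can be sketched with a toy model (the numbers here are purely illustrative, not SL-1 values): each neutron “generation” multiplies reactor power by the effective multiplication factor k, and control rods lower k by absorbing neutrons.

```python
# Toy model of reactor criticality -- numbers are illustrative only.
# Each neutron generation multiplies power by k, the effective
# multiplication factor; control rods absorb neutrons and lower k.

def relative_power(generations: int, k: float, p0: float = 1.0) -> float:
    """Relative reactor power after some number of neutron generations."""
    return p0 * k ** generations

print(relative_power(100, k=0.95))  # rods in, k < 1: power dies away
print(relative_power(100, k=1.00))  # critical, k = 1: power holds steady
print(relative_power(100, k=1.05))  # rod pulled too far, k > 1: power soars
```

With neutron generation times of well under a millisecond, even a k only slightly above 1 can drive power up faster than any mechanism – or operator – can respond.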

So…with that as a prelude, here’s what we know about the events of January 3, 1961.

Following a holiday shut-down and maintenance period, the three operators – Army Specialists John A. Byrnes (age 22) and Richard Leroy McKinley (age 27), and Navy Seabee Construction Electrician First Class (CE1) Richard C. Legg (age 26) – were starting the reactor. At 9:01 PM Byrnes abruptly pulled one of the control rods, causing a sudden and dramatic increase in power. In the next few seconds much of the fuel either melted or was vaporized; the sudden input of thermal energy caused some of the water in the core to flash to steam, and the steam explosion shot jets of water out of the reactor core, caused the entire reactor to jump upwards by about nine or ten feet, and sprayed the inside of the containment structure with radioactive water. The pressure also shot the #7 shield plug out of the top of the reactor vessel; standing above it was Richard Legg, who was impaled by the plug and pinned to the ceiling of the containment structure.

The other two operators were apparently standing nearby at the time and were also killed – possibly by the shock of the reactor jumping upwards so violently and by their impact with it. Byrnes and Legg appear to have died instantly; McKinley seems to have survived for an hour or two before succumbing to his injuries. Although all three men died of physical trauma from the explosion, they received a dose of radiation that would have been fatal even in the absence of their injuries.

The aftermath of the accident unfolded over the next several years and included an extensive investigation as well as decontamination and demolition of the reactor plant along with radiation surveys and decontamination of areas surrounding the reactor site. But – and this is the part that fascinated me – there is no definitive way to know why Byrnes was standing over the reactor, manually pulling the control rod.

The story I heard was anecdotal; one of my classmates said he’d heard that Byrnes had learned that his wife was having an affair with one of the other two operators – that he had deliberately caused the explosion to either commit suicide, to try to kill his wife’s lover, or both. That was the conclusion of one popular account of the accident as well. The problem is that there’s no way to prove – or to disprove – this hypothesis. Not only that, but there are plausible explanations other than murder and/or suicide.

As one example, the control rods sometimes stuck in their channels; it’s possible that Byrnes was trying to free a stuck rod and, when it finally came free, he was pulling so strongly that the rod traveled far past the point at which the reactor became critical, putting it into a dangerous condition known as “prompt criticality,” in which power increases uncontrollably.

The rod might have been pulled inadvertently, perhaps by an operator expecting some resistance and finding none.

Or the operators, noticing the rod having a tendency to stick when moved, might have been trying to move it up and down by hand until it was able to move smoothly, accidentally jerking it upwards.

Unfortunately, the operators left no notes in their operating logs indicating that they were having problems with any of the control rods or that they planned to work on them.

Thus, my fascination with SL-1. At the very start of my stint in the Navy’s nuclear power program, it was sobering to hear of an accident that had cost the lives of three military operators, even as the technical details fascinated me. Hearing how the accident was re-created and what the investigators were able to learn sparked the part of my mind that had so enjoyed reading Sherlock Holmes stories when I was younger. And adding in the fact that we might never know exactly why it happened – plus a little bit of spice with the suggestion that it might have been the result of a love triangle…that made it irresistible.

Feel the Burn

Roentgen’s discovery of x-rays was easily the most exciting scientific discovery of its time. Here were invisible rays that were fairly easy to generate and that could pass through solid matter – not only that, but they could even show you what was inside the matter they’d passed through! In the case of the human body (one of the first and most frequently x-rayed objects), these rays would show bones – both broken and whole – as well as internal organs and any foreign objects that might have ended up within the body.

Wilhelm Conrad Röntgen (March 27, 1845 – February 10, 1923) was a German mechanical engineer and physicist who, on November 8, 1895, produced and detected electromagnetic radiation in a wavelength range known as x-rays or Röntgen rays, an achievement that earned him the inaugural Nobel Prize in Physics in 1901.

Reference: https://en.wikipedia.org/wiki/Wilhelm_R%C3%B6ntgen#/media/File:Roentgen2.jpg

Now take a moment to consider what this would mean to a doctor. Say, for example, a mother brings in a child she thinks has swallowed a key. The child appears to be in pain, which would seem to confirm the key-swallowing, so what do you do?

You could simply watch and wait to see if the key passes through – but what if it’s hung up in a bend of the intestine and tears its way through into the abdominal cavity?

You could perform exploratory surgery – but where do you cut? And what if you search and search and don’t find the key? Was the mother wrong and the key was simply lost (in which case the surgery wasn’t necessary), or did you miss it somehow?

And now you hear of this device that takes out all the guesswork – you don’t have to wait and hope, you don’t have to cut into an otherwise healthy child, wondering if you’re doing the right thing. One quick picture and you know exactly what’s going on, and likely what you need to do. This has to have seemed nearly miraculous to doctors at that time.

With all of the x-rays that were being taken, physicians noticed they were starting to see a new kind of burn on their patients, sort of like a sunburn, but a little more painful. With time, they noticed a correlation between these burns and patients who’d had x-rays and it was easy to assume that the x-rays were causing the burns. But that was an assumption, not proof.

Elihu Thomson (March 29, 1853 – March 13, 1937) was an English-born American engineer and inventor who was instrumental in the founding of major electrical companies in the United States, the United Kingdom and France.

Enter Elihu Thomson, a physicist living in Boston. Hearing speculation after speculation on both sides of the matter, he decided to take matters into his own hands (so to speak). Over a period of time in 1896 (the year after Roentgen’s discovery), Thomson stuck his little finger, partly protected from the radiation with a small amount of lead, into the x-ray beam. It didn’t take long for him to build up enough of a burn on the unshielded part of the finger to convince himself that the x-rays were causing the skin burns.

These earliest days of x-ray use saw the development of what are now fairly standard radiation safety practices – the use of lead for shielding, the use of photographic film (actually, glass plates at the start) for personal dosimetry, limiting exposure by reducing the amount of time spent in a radiation field, and more. In fact, many of the radiation safety practices that I use today originated during these years.

The reason that safety practices were developed so quickly is that radiation injuries began piling up. To quote EF Lang in a column titled From Earlier Pages… in the October 1978 edition of the American Journal of Roentgenology, “Percy Brown, in his book American Martyrs to Science through the Roentgen Rays, records a litany of sorrow, and describes numberless unsuccessful skin grafts and gradual dismemberment.” (https://www.ajronline.org/doi/pdfplus/10.2214/ajr.131.4.734) Once radiation had been proven to be dangerous, there was every incentive to learn how to work with it safely.

At this same time, medical professionals were taking their explorations of radiation even further. It’s not a great stretch, for example, to wonder whether radiation can burn a tumor as easily as a finger – and that wondering led to the beginnings of radiation therapy: inserting small capsules containing radioactivity into tumors, as well as exposing tumors to radiation from x-ray machines and other sources outside the body.

While oncologists were learning to use radiation for cancer treatment, others were seeing what else it could treat. Few diseases are more important to treat than cancer and, as a result, many of these other experiments seem frivolous or even silly today. Radiation was known to cause hair loss, for example, and one early experiment involved using radiation to remove women’s unwanted facial hair. Less frivolously, there were also experiments in treating lupus and various diseases of the nervous system with radiation – the latter hampered by the fact that nerve tissue is the least sensitive in the body to radiation damage.

For a few decades, radium was fairly widely used in medicine, applied as a source inserted into diseased tissues, administered as a paste or ointment, even drunk as a medicine. As a personal aside, as a pre-teen living in an apartment above his uncle’s medical practice in the 1940s, my father often signed for packages of radium that my great-uncle used to treat his patients.

Sadly, many of these treatments were worthless or even dangerous. A radium-based “medicine,” Radithor (https://en.wikipedia.org/wiki/Radithor), consisted of distilled water into which at least 1 µCi of radium had been added; it was advertised as helping with any number of health issues and, such was the luster of radium at this point in history, these claims were assumed to be correct. Thus, when an aging Eben Byers – a former amateur golf champion and perennial ladies’ man – injured his arm in 1927 and a physician recommended Radithor, Byers accepted the advice without question. He continued taking it regularly until 1930, when his teeth started falling out and much of his jaw had to be removed. In 1931 the company that made Radithor was ordered to stop making the compound; Byers died the following year.

As the relative ignorance of the early years was replaced by increased scientific and medical understanding of radiation health effects, and as the public began to hear stories of the effects of these patent medicines as well as the cases of the radium watch dial painters, the market for patent medicines shrank and the public’s initial fascination with radium and radiation turned into a more appropriate caution. At the same time, the medical community, chastened by its accumulated experience, backed away from its initial enthusiasm for radiation, using it primarily for cancer therapy – the field of diagnostic nuclear medicine was still a decade in the future.

Jumping forward to the 1990s, I spent some time working for the Ohio Department of Health as a radiation safety professional. One of my tasks at one point was to try to locate some of the sites in which radioactivity might have been used in the past – virtually all fell into one of two categories: former manufacturers of self-luminous paints and products, and former medical facilities. We found a good number of both; most didn’t appear to be contaminated and, of the ones that were, the majority were not too bad. In addition, the state funded a few “radium round-ups” in which anybody could turn in any radium source, no questions asked – the idea being simply to collect as much as possible for disposal. But there’s still radium out there. In 2002, as Radiation Safety Officer at a university in upstate NY, I was asked to take custody of a small foil containing several mCi of radium – it had turned up in a railroad gondola car carrying scrap metal to a plant in Indiana. While working for New York City in 2010 we found a storage area that contained a large number of radium-dial compasses, several of which were leaking contamination; even more recently, a friend of mine found a whole curie of radium in a garbage truck. Along with the rest of the radium that’s found in public places, it was shipped for disposal – in spite of its early promise, we just don’t use radium anymore.

The Golden Age of Radium

Marie Curie was perhaps the first “rock star” scientist – she was smart, hard-working, attractive, had survived tragedy, and made history as the first woman to win the Nobel Prize in an era in which women’s role in science was more likely to be that of cleaning up the lab than organizing it. And when the world heard about this petite Polish woman, laboring in obscurity in a makeshift laboratory in Paris, who had discovered not one but two new elements (radium and polonium) – both of them radioactive, at a time when radioactivity itself was novel and exciting – the world was enchanted with Madame Curie, her story, and the radium she had discovered.

Within a few years of its discovery, radium’s ability to cause skin burns had been noted, along with appropriate protective measures. By the end of the first decade of the 20th century, people knew how to work safely with radium and were beginning to tease out some uses for this novel material.

In industry, for example, radium was quickly found to cause a suspension of zinc sulfide to emit light; mixing radium into such a suspension produced a paint that glowed in the dark. Not only that, but mixing zinc sulfide and radium into plastic created a plastic that glowed in the dark. Between the paint and the plastic, virtually anything could be made to glow in the dark.
Some of these products were made for the military; self-luminous dials for airplane instruments, self-luminous nautical navigational instruments, glow-in-the-dark patches soldiers put on their shoulders to help the person behind them follow more easily (these were great to put next to doors, light switches, and the like at home as well), and compasses that could be read in the dark were among the most common.

Bausch and Lomb AN5854-1 aircraft bubble sextant. These bubble sextants were used by large planes (mostly bombers) during WWII to navigate by the stars over the Atlantic or Pacific Oceans, where the natural horizon was not visible and was replaced by the bubble. They had electric lights for standard illumination and radium for low-intensity illumination.

A few years ago, I was asked to survey a sextant that had radium-based paint – it didn’t give off high radiation levels, but they were clearly higher than background. And a few years before that I’d responded with the NYPD to a storage area that held a collection of antique military compasses – also with radium-painted numbers and hashmarks. Radium let people see things in the dark, and the public loved it as much as the soldiers, fliers, and sailors.

What’s interesting is that companies started using “radium” in product names, even when the product contained nary a trace of the element. In the BBC documentary Nuclear Nightmares (which was about radiation phobia) health physicist Paul Frame commented on this:

“The word radium meant ‘quality.’ It’s the equivalent today of ‘gold’ or ‘platinum.’ I have a platinum credit card; back then you’d have had a radium credit card. So they had radium condoms, radium cigarettes, radioactive toothpaste, radium shoe polish – not radioactive, but using that word ‘radium’ to connote high quality.”

I mentioned earlier that radium was used to paint glow-in-the-dark dials for watches, aircraft instruments, compasses, and so forth. What I didn’t mention was that these dials were painted by hand, and the vast majority of those hands belonged to young women working for one of the companies making these dials. The girls doing this work were trained to twirl the brush briefly between their lips or against the tongue to assure a good point for painting the fine lines; when they did this they ingested tiny amounts of radium. Over time, this radium was deposited in their bones and exposed them to high levels of radiation; in 1922 the first “Radium Girl” died from radiation exposure, followed by an increasing number of others in subsequent years. For many, this was the first indication that radium might not be purely good.

Reference:

https://en.wikipedia.org/wiki/Radithor#/media/File:Radithor_bottle_(25799475341).jpg

Radium was also a huge hit in the medical arena. When it was discovered that radium could cause skin burns, physicians made the leap to realizing it might also be used to burn out cancers or other unwanted tissues. But then the quacks got their hands on radium, mixing it into patent medicines such as Radithor. One problem with Radithor was that it killed its most prominent user, amateur athlete and industrialist Eben Byers, in 1932, after he drank an estimated 1400 bottles over the previous five years. Radium is a bone-seeking element (it replaces calcium in bone) and Byers’ teeth, jaws, and bones were rotting away.

The Radium Girls, Eben Byers, and others who over-indulged in radium patent medicines were the first radiation injuries to come to the attention of the general public. And they were sympathetic figures – young women and a famous socialite, athlete, and industrialist. By the time Byers died, dozens, if not hundreds, of Radium Girls had also become ill and many of them had died.

Within a decade of Byers’ death, radiation research would be plugging along at full speed in the US, a part of the Manhattan Project. In the 1950s, during the era of atmospheric nuclear weapons testing, the health effects of radiation manifested themselves more widely, culminating in the Atmospheric Test Ban Treaty.

A few decades more and environmental regulatory standards began to change, requiring the cleanup of the places where radium watch dials used to be produced. I was involved in one of these remediation projects, in central Illinois, overseeing radiation safety for the work. In addition to excavating thousands of cubic yards of contaminated soil, our radiation safety technicians found a radium source inside the wall of a house in town – a source that had been there for over a half-century. More interestingly, we also found a flagpole that had been painted with radium paint – apparently it stood in front of what had once been the governor’s home and, in the 1920s, it was felt that the governor’s flagpole should stand out (this particular vanity ended up producing over 10 cubic yards of contaminated soil that had to be painstakingly excavated around a number of buried utility lines).

Since the golden age of radium attitudes towards radiation have changed dramatically. I began working with radiation in 1981 and, frankly, it’s hard for me to picture an era in which the public was enthusiastic about radiation and viewed it with hope and excitement rather than fear. But it’s awfully nice to fantasize about.

The Radiation Propeller

When I was in the Navy, at one point I stood watch for four hours every day for nearly two months, sitting directly in front of a cabinet marked with the radiation symbol (or, as we called it, the radiation propeller) – which meant I got to spend those four hours staring at the sign. Three magenta blades and a magenta dot in the center, all on a yellow background. Thanks to our training I could tell you the size of the blades, the size of the dot in the middle, and the spacing of the “propeller” blades, and I knew this was all specified by regulations – but I didn’t have a clue as to who chose this particular color scheme or why they decided to go with the three-bladed design. And, yeah, I know that in the giant overall scheme of things this isn’t that big a deal – but when you’re staring at the thing for half a shift every day for two months, you just start to wonder about that sort of thing.

Later I was able to find out where the radiation symbol is described in the regulations (in case you’re wondering, it’s in 10 CFR 20.1901 https://www.nrc.gov/reading-rm/doc-collections/cfr/part020/part020-1901.html), but not much more than that. I was starting to resign myself to the possibility that I might go to my grave without ever learning the origin of the radiation symbol…and I’d pretty much decided I’d be OK with that. So imagine my surprise when, sometime in the 1990s, I stumbled across a paper in the scientific journal Health Physics – “A Brief History of a 20th Century Danger Sign” by Stephens and Barrett, Health Physics Vol. 36 (May), pp. 565-571 – describing where the design came from. While I can’t say that this made my life complete, it was sort of neat!

Without getting into all the nitty-gritty detail, I thought it was interesting to read that the design was (somehow) supposed to represent energy radiating out from a radioactive source. More importantly, it was also interesting to find out that there’s more to the radiation symbol than the propeller shape – if it doesn’t have a yellow background and a black or magenta symbol then it’s not compliant with the regulations and it doesn’t “count.” But instead of retracing the history of the symbol, let me instead tell you a few stories about people getting it wrong.

Early radiation warning tags used at the University of California Radiation Laboratory.

At one university where I worked in the early 1990s, I was walking down a corridor and saw a radiation sign on the door of a lab that I knew didn’t use radioactivity. When I asked, I found out that that particular lab had lost a lot of analytical balances – they were being stolen by drug dealers – so the lab had started putting a radiation sign on the door. The drug dealers, scared of the radiation they thought was inside, stopped stealing equipment. The only problem was that it wasn’t quite legal – we were cited for misuse of the radiation symbol. A few years later, when I was working as a regulator, we cited an environmental consulting company for the same thing – they were hanging radiation symbols on their equipment trailers when they had to leave them overnight at a site, again to discourage theft. Good in theory – expensive in practice (at least for the environmental consulting firm – they were fined).

Several years later I was working at a different university and we were putting up a new research building. I went to a meeting that included our interior designers, and they were eager to show me their idea for a less-garish radiation sign – apparently the yellow-and-magenta clashed with their color scheme. I agreed with them that pastel green-and-cream looked much nicer, but then had to break the news that, no matter how nice it looked, it wasn’t legal. They ignored me and put up the more soothing signs all over the new building. And they were disappointed when they came back for a walkthrough as the building began to fill with labs, seeing their tasteful and subdued color scheme ruined by the garish (but regulatorily compliant) signs. But I do have to admit that their signs were a lot nicer than the legal ones….

Around this same time, I was going through some of the old records at my university and I found some old radioactive materials tags – apparently ones that pre-dated the official colors. These had a green background and a red radiation propeller. I’m guessing that these dated back to the mid or late 1940s when the university had done a lot of research on radiation health effects as part of the Manhattan Project. I was tempted to use these in the new building…but we really did have to follow the regs. So I just put them in a folder instead (well…except for a few I kept for myself).

An early version of the radiation symbol used on a gummed tape label for chemical ware

Coming back to my time in the Navy, I remember one day that I was sitting at my watch station, bored out of my mind and staring at the radiation sign. I let my eyes unfocus and was just staring at the dot in the center of the symbol, letting my mind go sort-of blank. And I noticed that one of the blades on the propeller was starting to dissolve and vanish. The only thing I could figure was that it was at the edge of my blind spot. So I shifted my gaze a little and the blade returned, but the next blade began to dissolve into my blind spot. Then I did it again with the third blade. At the end of the watch, I mentioned this to one of the other mechanics with whom I shared the watch. His reply? “You too, huh?” It’s probably the most fun I’ve ever had with the radiation symbol.

Atoms for…pace?

At 2 PM local time on January 19, 2006, an Atlas V rocket lifted off from Cape Canaveral’s Pad 41 carrying a craft bound for the outermost part of the Solar System – the New Horizons spacecraft. The payload included an RTG – a radioisotope thermoelectric generator – powered by 7.8 kg of plutonium (Pu-238, to be precise). The plutonium gives off a great deal of heat as it radioactively decays – enough to make it glow red-hot – and this heat is turned into electricity by devices called thermocouples. There are plutonium-powered RTGs scattered throughout the Solar System – in orbit around Saturn and Jupiter, gathering scientific information on the Moon, orbiting the Sun, roaming Mars, and entering interstellar space aboard the Voyager probes. And, in the early 1970s, this same technology was being implanted into the chests of heart patients to help keep their hearts beating steadily.

At Kennedy Space Center’s Payload Hazardous Servicing Facility, technicians prepare the New Horizons spacecraft for a media event (2005).

RTGs are actually pretty interesting devices. The whole thing starts off with a heat source – a chunk of radioactive material that gives off enough energy to produce serious amounts of heat. For reasons that go beyond the scope of this piece (which is another way of saying that I don’t quite understand the physics…), when you join two different metals and keep one junction much hotter than the other, an electrical current starts to flow. If the temperature difference is large enough and the radioactive source is producing enough heat, you can use that current to power a meteorological station, a lighthouse…or a spaceship.

Diagram of an RTG used on the Cassini probe.

The USSR used to make RTGs – they powered the meteorological stations and lighthouses I noted earlier, as well as some Soviet space probes. The US sent most of its own RTGs into space and, with an 88-year half-life, the Pu-238 is able to power a craft for decades (the Voyager RTGs have lost a fair amount of their output after 40+ years but are still putting out enough energy to keep their radio transmissions reaching Earth). On the more negative side, a number of the Soviet RTGs were abandoned in the former Soviet republics when the Soviet Union collapsed at the end of the Cold War; one of these caused severe radiation burns in a number of woodcutters who found it in the woods in 2001.
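The decay math behind those numbers is simple enough to sketch. As a rough back-of-the-envelope estimate (the constants below are approximate figures, not mission specifications): Pu-238 produces roughly 0.56 watts of heat per gram, output falls by half every ~88 years, and the thermocouples convert perhaps 6% of that heat to electricity.

```python
# Back-of-the-envelope RTG output over time. All constants are
# approximate: Pu-238 specific thermal power ~0.56 W/g, half-life
# ~87.7 years, thermocouple conversion efficiency assumed to be ~6%.

HALF_LIFE_Y = 87.7
SPECIFIC_POWER_W_PER_G = 0.56
EFFICIENCY = 0.06

def rtg_output(pu_grams: float, years: float) -> tuple[float, float]:
    """Return (thermal watts, electrical watts) after `years` of decay."""
    remaining = 0.5 ** (years / HALF_LIFE_Y)
    thermal = pu_grams * SPECIFIC_POWER_W_PER_G * remaining
    return thermal, thermal * EFFICIENCY

# A New Horizons-sized load of 7.8 kg of Pu-238 at launch...
thermal0, electric0 = rtg_output(7800, 0)
print(f"At launch: ~{thermal0:,.0f} W thermal, ~{electric0:,.0f} W electric")

# ...and a Voyager-aged RTG, roughly 45 years after fueling
thermal45, _ = rtg_output(7800, 45)
print(f"Fraction of original heat after 45 years: {thermal45 / thermal0:.2f}")
```

Note that the thermocouples themselves also degrade over the decades, so a real RTG’s electrical output falls somewhat faster than the bare decay curve suggests.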

The pacemaker is a device with a much longer history than most would guess. As early as 1889, British physician John Alexander MacWilliam reported that applying electricity to the heart of a person with a serious arrhythmia could shock the heart back into a normal rhythm. This idea was refined over the next few decades and, in 1928, the American physician Albert Hyman invented an electro-mechanical pacemaker (even coining that name), although the idea didn’t really start to take off until after the Second World War.

The next decade or so saw continued improvements, although some of the earlier devices delivered their current (painfully) through the skin and had to be plugged into the wall. Batteries helped, although the batteries of the day didn’t last very long, and while cardiologists appreciated what they had, they couldn’t help but hope for something better. In the 1970s, that “something better” turned out to be plutonium “batteries” in a device compact enough to be implanted in the patient’s body. And for the next decade – until lithium-based batteries proved sufficiently reliable and long-lived – plutonium-powered pacemakers were regularly implanted into heart patients.

Before and during their use, the radiation dose and dose rate from these devices were evaluated and turned out not to be too bad. While the dose rate on the surface of the pacemaker could be as high as 15 mrem/hr, the radiation was quickly attenuated by the body, so the dose to the patient was only about 100 mrem annually, with even lower doses to their spouse. On the other hand, the devices contained as much as 4 Ci of radioactivity – enough to require a radioactive materials license. I wasn’t able to find out whether the patient had to be a licensee (although I doubt it), but the plutonium itself had to be turned over to the government upon the patient’s death.

There were concerns, of course – especially about potential radiation dose to the patient. Luckily, Pu-238 emits alpha radiation almost exclusively, and its gamma emissions are so rare and so low in energy as to be insignificant. On the other hand, plutonium is highly toxic, requiring careful encapsulation to ensure the patient could not be accidentally poisoned. Thermal shielding was another concern – a large enough Pu-238 source will glow red-hot – making these pacemakers a bit larger and heavier than most. On the plus side, the plutonium “battery” would last for the patient’s entire lifetime, so the patient would never need another operation to change the batteries. The biggest downside, however, was the plutonium itself – when the patient died, the pacemaker had to be removed and sent off for proper disposal. Burying a patient with their pacemaker wasn’t too bad, but cremation was out of the question.

Eventually, the plutonium pacemaker was superseded by non-radiological versions, and the remaining plutonium-powered devices are being turned over to the government so they can be disposed of properly.

What strikes me about the plutonium pacemakers is their similarity to other RTG-powered devices. And while their time has come and gone, I think it’s neat to think of a device that can be used to steady a heartbeat, to illuminate a lighthouse on the Arctic Ocean, or to power craft on Mars, circling Saturn and Jupiter, and making humanity’s first push into interstellar space.

Radioluminescence

I’ve been working in radiation safety and related fields my entire adult life (well…except for a short stint during which I worked the night shift running copier machines on campus). I got used to hearing the standard jokes about glowing in the dark (for example, “Do you need a night light?”), and I have to admit that I came up with some myself. For that matter, one of the nicknames for Navy Nukes (those of us who worked in Naval Nuclear Power) was glow-worms. The thing is – things (including people) that are exposed to radiation don’t glow in the dark. Don’t get me wrong – radioactivity has been used in things that glow in the dark for over a century – but the glow doesn’t come from the radiation directly; it’s a little more interesting than that.

What’s happening is that some materials give off light when they’re hit by the radiation. Looking at the science behind it can get fairly complicated – the short version is that the radiation deposits energy into a substance and that energy goes into (among other things) bumping electrons up to higher energy levels. They only stay there for a short period of time before they drop back down to what’s called the “ground state” – when the electrons drop back down, they give off a greenish or bluish photon. That’s where the glow comes from.
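To put some rough numbers on that glow (my own back-of-the-envelope figures, assuming a ~530 nm green emission of the sort a zinc sulfide phosphor produces), each emitted photon carries only a couple of electron volts, while a single alpha particle deposits millions – which is why one alpha can excite enough electrons to make a visible flash:

```python
# Energy of a ~530 nm (green) photon, E = hc/lambda, expressed in eV.
PLANCK = 6.626e-34      # Planck constant, J*s
LIGHT_SPEED = 2.998e8   # speed of light, m/s
EV = 1.602e-19          # joules per electron volt

wavelength_m = 530e-9   # assumed green emission wavelength
photon_eV = PLANCK * LIGHT_SPEED / wavelength_m / EV  # about 2.3 eV

# A 4871 keV alpha (like the Ra-226 alpha mentioned earlier) carries
# roughly two million times the energy of a single visible photon.
ratio = 4_871_000 / photon_eV
print(f"{photon_eV:.1f} eV per photon, ~{ratio:.1e} photons' worth per alpha")
```

Only a fraction of the deposited energy actually ends up as light, but even a small fraction of a few million electron volts is a lot of photons.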

This phenomenon – called radioluminescence – was discovered over a century ago and it was put to use almost immediately. We’re all familiar with watches with luminous dials, making it possible to tell time at night. Radioluminescent paint was also used for aircraft instruments, compasses, gunsights…during the First World War soldiers would even put a small radioluminescent patch on their uniform to help the person behind them follow a little more easily when marching at night. This spread to the private sector as well – luminous paints, clock dials, even fishing tackle and golf balls were painted with radium compounds to make them easier to see in the dark.

Interestingly, this phenomenon had been used even before the war by Ernest Rutherford, a New Zealander whose graduate students spent many long hours with their attention glued to a screen made of zinc sulfide. They were looking for tiny flashes of light caused by alpha particles scattering off of atoms in a piece of gold foil; by measuring the scattering angles, Rutherford hoped to figure out how atoms were structured. To Rutherford’s great surprise, they noticed that every now and again an alpha particle would bounce directly back from the foil – with what was known about atoms at the time, this was as unexpected “as if you fired a 15” naval shell at a piece of tissue paper and it bounced back.” (https://cerncourier.com/a/rutherford-transmutation-and-the-proton/)

So by the time WWI rolled around, radioluminescence was well-known to scientists, the public, and to any number of manufacturers and their customers. The problem is that the people making these things (and those buying them) didn’t know that there were dangers.

One of those dangers, of course, was the minor matter that these self-luminous products were radioactive. And – yes – they knew that radium was radioactive, but a century ago they didn’t know that radiation caused cancer, or how much radioactivity it took to give someone radiation sickness. In fact, so little was known about the effects of ingested radioactivity that people were prescribed radium-containing medicines that they’d drink for their “health” – to enhance their libido, for example. It wasn’t until an industrialist and amateur golfer by the name of Eben Byers (https://www.cultofweird.com/medical/eben-byers-radithor-poisoning/) died as a result of taking a radium-based patent medicine that the public began taking this sort of thing seriously; the deaths of the radium watch-dial painters (https://www.atomicheritage.org/history/radium-girls) added to the public’s concerns about radiation and radioactivity. The fact that it was young women – chosen for their steady hands and attention to detail – who were getting sick and (in some cases) dying made for a tragic story that captured the public’s attention.

So…radioluminescence can be a good thing – making it possible to read a watch or instrument dials or to remain in marching formation in the dark. But it can be bad as well, as attested to by the Radium Girls: in the days before automation, all of those watch dials were painted by hand, and the great majority of those hands belonged to the young women employed in hand-painting watches, aircraft instruments, compasses, and the like.

Young women were chosen for this role because of their steady hands and attention to detail, and they did a great job, as anybody who’s owned one of their watches or other instruments can attest. But much of what they were painting was tiny, and making it look good required a fine point on the brushes. To get that fine point, the ladies would “point” the brushes – rolling the tip of the brush quickly on the tongue. In doing so, they ingested a small amount of radium each time and, over the weeks and months, that radium accumulated to deadly levels. There are accounts of some of these women having so much radium in their bodies that they could cause luminescent paint to sparkle just by exhaling on it (due to the presence of radon, which forms from the decay of radium).

A photograph of the “radium girls” who worked at the U.S. Radium Corporation in Orange, New Jersey. This historical event was also recently popularized in the 2018 movie, Radium Girls, directed by Lydia Dean Pilcher and Ginny Mohler.

In the 1920s and 1930s, many of these women became ill and some died from various radiation-related illnesses. This, naturally, garnered media attention and eventually led to some regulatory changes – including contributing to the passage of laws regarding worker compensation for job-related illness and injury. Radium continued to be used for watches into the 1960s (later in some nations) before being phased out.

For the last few decades, the focus has shifted to cleaning up the sites where these watches and other self-luminous products were once made; in fact, I spent a few years managing aspects of a radium remediation in central Illinois. While much of the work was fairly routine, we did find a radium source inside the wall of one home, found some areas where contamination had spread further than expected, and even had to characterize the amount of radium paint used on a flag pole that was about 10 meters high. It was an interesting project; it was also neat to be working on a project that had such an interesting historical aspect.

Radioluminescence has an interesting history – but it’s also useful even today. In fact, I have a few examples in my radiation instrument case – in what are called scintillation detectors. One of my detectors, the one I use for measuring alpha radiation, uses a zinc sulfide crystal – the crystal emits photons of visible light each time it’s struck by an alpha particle. And sometimes when I’m performing an alpha survey, it strikes me that the detector I’m using is based on the same principles as the scintillation screen used by Ernest Rutherford when he was teasing out the structure of the atom – and as the phosphor that the Radium Girls were meticulously painting onto watch, compass, and instrument dials at such great cost to so many.