Ballet flats are back. Everyone’s saying it—Vogue, the TikTok girlies, The New York Times, Instagram’s foremost fashion narcs, the whole gang. Shoes from trendsetting brands such as Alaïa and Miu Miu line store shelves, and hundreds of cheap alternatives are available online at fast-fashion juggernauts such as Shein and Temu. You can run from the return of the ballet flat, but you can’t hide. And, depending on how much time your feet spent in the shoes the last time they were trendy, maybe you can’t run either.

The ballet flat—a slipperlike, largely unstructured shoe style meant to evoke a ballerina’s pointe shoes—never disappears from the fashion landscape entirely, but its previous period of decided coolness was during the mid-to-late 2000s. Back then, teens were swathing themselves in Juicy Couture and Abercrombie & Fitch, Lauren Conrad was ruining her life by turning down a trip to Paris on The Hills, and fashion magazines were full of Lanvin and Chloé and Tory Burch flats. The style was paired with every kind of outfit you could think of—the chunky white sneaker of its day, if you will.

How you feel about the shoes’ revival likely has a lot to do with your age. If you’re young enough to be witnessing ballet flats’ popularity for the first time, then maybe they seem like a pleasantly retro and feminine departure from lug soles and sneakers. If, like me, you’ve made it past 30(ish), the whole thing might make you feel a little old. Physically, ballet flats are a nightmare for your back, your knees, your arches; when it comes to support, most offer little more than you’d get from a pair of socks. Spiritually, the injury might be even worse. Twenty years is a normal amount of time to have passed for a trend to be revived as retro, but it’s also a rude interval at which to contemplate being punted out of the zeitgeist in favor of those who see your youth as something to be mined for inspiration—and therefore as something definitively in the past.

Trends are a funny thing. Especially in fashion, people see trends as the province of the very young, but tracing their paths is often less straightforward. Take normcore’s dad sneakers: In the mid-2010s, the shoes became popular among Millennials, who were then hitting their 30s, precisely because they were the sneakers of choice for retired Boomers. But in order for a trend to reach the rare heights of population-level relevance, very young people do eventually need to sign on. In the case of dad sneakers, it took years for Zoomers to come around en masse, but their seal of approval has helped keep bulky New Balances popular for nearly a decade—far past the point when most trends fizzle.

The return of ballet flats is a signal of this new cohort of fashion consumers asserting itself even more widely in the marketplace. The trends young people endorse tend to swing between extremes. The durable popularity of dad shoes all but guaranteed that some young people would eventually start to look for something sleeker and less substantial. The ballet flat fits perfectly within the turn-of-the-millennium fashion tropes—overplucked eyebrows, low-rise jeans, tiny sunglasses—that Zoomers have been tinkering with for several years.

Ballet flats are an all-the-more-appropriate sign of a generational shift, in fact, because they are the folly of youth made manifest. Wearing them is an act of violence against podiatry, yes, but their drawbacks go further. Many ballet flats are so flimsy that they look trashed after only a few wears. They’re difficult to pair with socks, so they stink like feet almost as quickly. Ballet flats are impractical shoes that sneak into closets under the guise of practicality—hey, they’re not high heels!—and prey on people who do not yet know better.

What does that mean, then, for the people who do know better? For one, it means that the extended adolescence that some Millennials experienced following the Great Recession is finally, inarguably over. We’re old, at least relatively speaking. Every generation eventually ages out of the particular cultural power of youth and then watches as younger people make mistakes that seem obvious in hindsight, and the ballet flat is a reminder that people my age are no longer the default main characters in culture that we once were. When I was a middle schooler begging for a pair of wooden-soled Candie’s platform sandals in the mid-’90s, I remember my mother, in a fit of exasperation, telling me that I couldn’t have them because she saw too many people fall off their platforms in the ’70s. This is the first time I remember contemplating my mom as a human being who existed long before I was conscious of her: someone who bought cool but ill-advised clothes and uncomfortable shoes, who went to parties where people sometimes had a hard time remaining upright.

Even the cool girls with the coolest shoes at some point grow to regard parts of their past selves as a bit silly, and they become the people trying to save the kids from their own fashion hubris. This sensation is undoubtedly acute for Millennials, because this hubris is displayed most prominently in an arena they used to rule: the internet. On TikTok, the world’s hottest trend machine, the over-30 crowd is more onlooker than participant, and the youth are using the platform to encourage one another to dress like they’re going to a party at the Delt house in 2007. Someone has to warn them.

If you’re realizing that this someone is you, my advice would be to not let the generational responsibilities of aging weigh too heavily on you. The upside of losing your spot at culture’s center stage, after all, is freedom. You can look around at what’s fashionable, pick the things that work for you, and write off the rest as the folly of youth. (The Zoomers are right: The lug-soled combat boots that I wore in high school actually are very cool.) In place of chasing trends, you can cultivate taste. When you fail at taste, at least you can be aware of your own questionable decisions. In the process of writing this article, I realized that French Sole still makes the exact same prim little flats that I must have bought three or four times over during the course of my first post-college job, in the late 2000s. They’re as flimsy as ever, but whatever made me love them 15 years ago is still there, buried under all of my better judgment. I haven’t closed the tab quite yet.

Here is a straightforward, clinical description of a migraine: intense throbbing headache, nausea, vomiting, and sensitivity to light and noise, lasting for hours or days.

And here is a fuller, more honest picture: an intense, throbbing sense of annoyance as the pain around my eye blooms. Wondering what the trigger was this time. Popping my beloved Excedrin—a combination of acetaminophen, aspirin, and caffeine—and hoping it has a chance to percolate in my system before I start vomiting. There’s the drawing of the curtains, the curling up in bed, the dash to the toilet to puke my guts out. I am not a religious person, but during my worst migraines, I have whimpered at the universe, my hands jammed into the side of my skull, and begged it for relief.

That probably sounds melodramatic, but listen: Migraines are miserable. They’re miserable for about 40 million Americans, most of them women, though the precise symptoms and their severity vary across sufferers. For about a quarter, myself included, the onset is sometimes preceded by an aura, a short-lived phase that can include blind spots, tingling, numbness, and language problems. (These can resemble stroke symptoms, and you should seek immediate medical care if you experience them and don’t have a history of migraines.) Many experience a final phase known as the “migraine hangover,” which consists of fatigue, trouble concentrating, and dizziness after the worst pain has passed.

These days, migraine sufferers are caught in a bit of a paradox. In some ways, their situation looks bright (but, please, not too bright): More treatments are available now than ever before—though still no cure—and researchers are learning more about what triggers a migraine, with occasionally surprising results. “It’s a really exciting time in headache medicine,” Mia Minen, a neurologist and the chief of headache research at NYU Langone, told me.

And yet the enthusiasm within the medical community doesn’t seem to align with conditions on the ground (which, by the way, is a nice, cool place to press your cheek during an attack). Migraine sufferers cancel plans and feel guilty about it. They struggle to parent. They call in sick, and if they can’t, they move through the work day like zombies. In a 2019 survey, about 30 percent of participants with episodic migraines—attacks that occur on fewer than 15 days a month—said that the disorder had negatively affected their careers. About 58 percent with chronic migraines—attacks that occur more often than that—said the same.

Migraines are still misunderstood, including by the people who deal with them. “We still don’t have a full understanding of exactly what causes migraine, and why some people suffer more than others do,” Elizabeth Loder, a headache clinician at Brigham and Women’s Hospital in Boston and a neurology professor at Harvard Medical School, told me. Despite scientific progress, awareness campaigns, and frequent reminders that migraines are a neurological disorder and not “just headaches,” too often, they’re not treated with the medical care they require. Yes, it’s the best time in history to have migraines. It just doesn’t feel that way.

Humans have had migraines probably for as long as we’ve had brains. As the historian Katherine Foxhall argues in her 2019 book, Migraine: A History, “much evidence suggests migraine had been taken seriously in both medical and lay literature throughout the classical, medieval, and early modern periods as a serious disorder requiring prompt and sustained treatment.” It was only in the 18th century, when medical professionals lumped migraines in with other “nervous disorders” such as hysteria, that they “came to be seen as characteristic of sensitivity, femininity, overwork, and moral and personal failure.” The association persisted, Stephen Silberstein, the director of the headache center at Thomas Jefferson University, told me. When Silberstein began his training in the 1960s, “nobody talked about migraine in medical school,” he told me. Physicians still believed that migraines were “the disorder of neurotic women.”

The first drug treatments for migraines appeared in the 1920s, and they were discovered somewhat by accident: Doctors found that ergotamine, a drug used to stimulate contractions in childbirth and control postpartum bleeding, also sometimes relieved migraines. (It could also cause pain, muscle weakness, and, in high enough doses, gangrene; some later studies have found that it’s little better than placebo.) The drug constricted blood vessels in the brain, so doctors assumed that migraine was a vascular disorder, the symptoms brought on by changes in blood flow and inflamed vessels. In the 1960s, a physician studying the effectiveness of a heart medication noticed that one of his participants experienced migraine attacks less frequently than he used to; a decade later, the FDA approved that class of drug, called beta-blockers, as a preventative treatment. (In the decades since their approval, studies have found that beta-blockers helped about a quarter of participants reduce their monthly migraine days by half, compared with 4 percent of people taking a placebo.)

Things changed in the 1990s, when triptans, a new class of drugs made specifically for migraines, became available. Triptans were often more effective and faster at easing migraine pain than earlier drugs, though the effects didn’t last as long. Around the same time, genetic studies revealed that migraines are often hereditary. Meanwhile, new brain-imaging technology allowed researchers to observe migraines in real time. It showed that, although blood vessels could become inflamed during an attack and contribute to pain, migraine isn’t strictly a vascular disorder. The chaos comes from within the nervous system: Scientists’ best understanding is that the trigeminal nerve, which provides sensation in the face, becomes stimulated, which triggers cells in the brain to release neurotransmitters that produce headache pain. How exactly the nerve gets perturbed remains unclear.

[Read: Doctors suddenly got way better at treating eczema]

The past few years of migraine medicine have felt like the ’90s all over again. In 2018, the FDA approved a monthly injection that prevents migraines by regulating CGRP, a neurotransmitter that’s known to spike during attacks. For 40 percent of people with chronic migraines participating in one clinical trial, the treatment cut their monthly migraine days in half. Similar remedies followed; Lady Gaga, a longtime migraine sufferer, appeared in a commercial this summer to endorse Pfizer’s CGRP-blocking pill, and the company’s CEO launched a migraine-awareness campaign earlier this month. Solid evidence has emerged that cognitive behavioral therapy and relaxation techniques tailored to migraine can be helpful as part of a larger treatment plan. The FDA has cleared several wearable devices designed to curb migraines by delivering mild electric stimulation. Last year, the agency decided to speed up the development of a device that deploys gentle puffs of air into a user’s ears.

Researchers are still making progress on identifying migraine triggers. Experts agree on many common triggers, such as skipping meals, getting too little sleep, getting too much sleep, stress, the comedown from stress, and hormone changes linked to menstruation or menopause. They’re also realizing that some long-held beliefs about triggers might be entirely wrong. MSG, for example, probably doesn’t induce migraines; changes in air pressure don’t do so as often as many people who have migraines seem to think.

Some supposed triggers might actually be signs of an oncoming migraine. The majority of migraine sufferers experience something called the premonitory phase, which can last for several hours or days before headache pain sets in and has its own set of symptoms, including food cravings. We migraine sufferers are frequently advised to steer clear of chocolate, but if you’re craving a Snickers bar, the migraine may already be coming whether or not you eat it. “When you get a headache, you blame it on the chocolate—even though the migraine made you eat the chocolate,” Silberstein said. “I always tell people, if they think they’re getting a migraine, eat a bar of chocolate … It’s more likely to do good than harm.”

Silberstein’s advice sounded like absolute blasphemy to me. Virtually every migraine FAQ page in existence had led me to believe that chocolate is a ruthless trigger. Maybe I shouldn’t have been relying on general guidelines on the internet, even though they came from reputable medical institutions. But I had turned to the internet because I didn’t think my migraines necessitated a visit to a specialist. According to the American Migraine Foundation, the majority of people who have migraines never consult a doctor to receive proper diagnosis and treatment.

Recent surveys have shown that people are reluctant to see a professional for a variety of reasons: They think their migraine isn’t bad enough, they worry that their symptoms won’t be taken seriously, or they can’t afford the care. The hot new preventative medications in particular “are extremely expensive, putting them out of reach of some of the people who might benefit the most,” Loder said. In 2018, when the much-heralded CGRP blocker hit the market, the journalist Libby Watson, a longtime migraine patient herself, interviewed migraine sufferers who described themselves as low-income, and found that most of them hadn’t heard of the new drug at all.

Even if you can get them, the treatments don’t guarantee relief. One recent study showed that triptans might not relieve pain—or might not be tolerable—for up to 40 percent of migraine patients. Experts are still trying to figure out why the same treatment might work wonderfully for one person, and not at all for another, Minen said. Some patients find that drugs eventually stop working for them, or that they come with side effects bad enough to discourage continued use, such as dizziness and still more nausea.

[Read: Why has a useless cold medication been allowed on shelves for years?]

These problems remain unsolved in part because of a dearth of research. Like other conditions that mostly afflict women, migraines receive “much less funding in proportion to the burden they exert on the U.S. population,” Nature’s Kerri Smith reported in May. And many doctors are unaware of the research that exists: A 2021 study of non-migraine physicians found that 43 percent had “poor knowledge” of the condition’s symptoms and management, and just 21 percent were aware of targeted treatments. Specialists tend to have a much better knowledge base, but good luck seeing one: America has too few headache doctors, and there are significantly fewer of them in rural areas.

Many migraine sufferers rely on over-the-counter pain relievers, myself included. Years ago, my primary-care physician prescribed me a triptan nasal spray. It produced a terrible aftertaste and worsened the throbbing in my head, and I gave up on it after only a couple of uses. Back to Excedrin I went, not realizing—until reporting this story—that nonprescription medications can cause even more attacks if you overuse them. Some people get by on home remedies that the journalist Katy Schneider, who battles migraines herself, has described as a “medicine cabinet of curiosities”; one person she interviewed shotguns an ice-cold Coke when she feels the symptoms coming on.

When triptans and tricks fail, some people try to prevent migraines by avoiding triggers. Don’t stay up too late or sleep in. Don’t drink red wine. Put down that Snickers. This strategy of avoidance “interferes with the quality of their life in many cases,” Loder said, and probably doesn’t stop the attacks. And drawing associations is a futile exercise because most migraines are brought on by more than one trigger, Minen said. People can end up internalizing the 18th-century idea that migraines are a personal failure rather than a disease—and migraine FAQs perpetuate that myth by advising patients to live an ascetic life.

[Read: Adult ADHD is the Wild West of psychiatry]

The misconceptions surrounding migraine, combined with its invisibility, make the disorder easy to stigmatize. The authors of a 2021 review found that, compared with epilepsy, a neurological disorder with a physical manifestation, “people with chronic migraine are viewed as less trustworthy, less likely to try their hardest, and more likely to malinger.” Perhaps as a result, many feel pressure to grind through it. Migraines are estimated to account for 16 percent of presenteeism—being on the job but not operating at full capacity—in the American workforce.

Before reporting this story, I had never thought to call my migraines a neurological disorder, let alone a “debilitating” one, as Minen and other experts do. Migraines were just this thing that I’ve lived with for more than a decade, and had accepted as an unfortunate part of my existence. Just my Excedrin and me, together forever, barreling through the wasted days. The attacks began in my late teens, around the same time that my childhood epilepsy mysteriously vanished. I never got an explanation for my seizures, despite years of daily medication and countless EEGs. A neurologist once told me that the two might be related, but he couldn’t say for sure; research has shown that people who have epilepsy are more likely to experience migraines. And so I assumed that I just had a slightly broken brain, prone to electrochemical misfiring.

All of the experts I spoke with were politely horrified when I told them about my migraines and how I manage them. I promised them that I’d make an appointment with a specialist. Before we got off the phone, Silberstein gave me a tip. “Put a cold pack on your neck and then a heating pad, 15 minutes alternating,” he said. “It’ll take the migraine away.” He told me that researchers are developing a device that does this, but the old-fashioned way can be effective too. At this point, my cabinet of curiosities is falling apart, its hinges squeaking from overuse. I’m already rethinking my entire migraine life, so I may as well try this too.

Productivity is a sore subject for a lot of people. Philosophically, the concept is a nightmare. Americans invest personal productivity with moral weight, as though human worth can be divined through careful examination of work product, both professional and personal. The more practical questions of productivity are no less freighted with anxiety. Are you doing enough to hold on to your job? To improve your marriage? To raise well-adjusted kids? To maintain your health? What can you change in order to do more?

Anxiety breeds products, and the tech industry’s obsession with personal optimization in particular has yielded a bounty of them in the past decade or two: digital calendars that send you push notifications about your daily schedule. Platforms that reimagine your life as a series of project-management issues. Planners as thick as encyclopedias that encourage you to set daily intentions and monthly priorities. Self-help books that cobble together specious principles of behavioral psychology to teach you the secrets of actually using all of the stuff you’ve bought in order to optimize your waking hours (and maybe your sleeping ones too).

Underneath all of the tiresome discourse about enhancing human productivity or rejecting it as a concept, there is a bedrock truth that tends to get lost. There probably is a bunch of stuff that you need or want to get done, for reasons that have no discernible moral or political valence—making a long-delayed dentist appointment, picking up groceries, returning a few nagging emails, hanging curtains in your new apartment. For that, I come bearing but one life hack: the humble to-do list, written out on actual paper, with actual pen.

First, cards on the table: I’m not an organized person. Much of the advice on these topics is given by people with a natural capacity for organization and focus—the people who, as kids, kept meticulous records of assignments and impending tests in their school-issued planners. Now they send out calendar invites to their friends once next weekend’s dinner plans are settled and have never killed a plant by forgetting to water it. They were, in my opinion, largely born on third base and think they hit a triple. I, by contrast, have what a psychiatrist once called a “really classic case” of ADHD. My executive function is never coming back from war. I have tried the tips, the tricks, the hacks, the apps, and the methods. I have abandoned countless planners three weeks into January. Years ago, I bought a box with a timed lock so that I could put my phone in it and force myself to write emails. Perhaps counterintuitively, that makes me something of an amateur expert in the tactics that are often recommended for getting your life (or at least your day) in order.

It took me an embarrassingly long time to try putting pen to paper. By the time I was in the working world, smartphones were beginning to proliferate, and suddenly, there was an app for that. In the late 2000s, optimism abounded about the capacity for consumer technology to help people overcome personal foibles and make everyday life more efficient. Didn’t a calendar app seem much neater and tidier than a paper planner? Wouldn’t a list of tasks that need your attention be that much more effective if it could zap you with a little vibration to remind you it exists? If all of your schedules and documents and contacts and to-do lists could live in one place, wouldn’t that be better?

Fifteen years later, the answer to those questions seems to be “not really.” People habituate to the constant beeps and buzzes of their phone, which makes rote push-notification task reminders less likely to break through the noise. If you make a to-do list in your notes app, it disappears into the ether when you finally lock your phone in an effort to get something—anything!—done. Shareable digital calendars do hold certain practical advantages over their paper predecessors, and services such as Slack and Google Docs, which let people work together at a distance, provide obvious efficiencies over mailing paperwork back and forth. But those services’ unexpected downsides have also become clear. Trivial meetings stack up. Work bleeds into your personal time, which isn’t actually efficient. Above all, these apps and tactics tend to be designed with a very specific kind of productivity in mind: that which is expected of the average office worker, whose days tend to involve a lot of computer tasks and be schedulable and predictable. If your work is more siloed or scattered or unpredictable—like, say, a reporter’s—then bending those tools to your will is a task all its own. Which is to say nothing of the difficulty of bending those tools to the necessities of life outside of work.

My personal collision with the shortcomings of digital productivity hacks came during the first year of the pandemic, when many people were feeling particularly isolated and feral. Without the benefit of the routines that I’d constructed for myself in day-to-day life in the outside world, time passed without notice, and I had trouble remembering what I was supposed to be doing at any given time. I set reminders for myself, opened accounts on task-management platforms, tried different kinds of note-taking software. It was all a wash. At the end of my rope, I pulled out a notebook and pen, and flipped to a clean page. I made a list of all the things I could remember that I’d left hanging, broken down into their simplest component parts—not clean the apartment, but vacuum, take out the trash, and change your sheets.

It worked. When I made a list, all of the clutter from my mind was transferred to the page, and things started getting done. It has kept working, years later, any time I get a little overwhelmed. A few months after my list-making breakthrough, I tried to translate this tactic to regular use of a planner, but that tanked the whole thing. I just need a regular notebook and a pen. There’s no use in getting cute with it. Don’t make your to-do list a task of its own.

All of this might sound preposterously simple and obvious. If you were born with this knowledge or learned it long ago, then I’m happy for you. But for people like me for whom this behavior doesn’t come naturally, that obvious simplicity is exactly the genius of cultivating it. Your list lives with you on the physical plane, a tactile representation of tasks that might otherwise be out of sight and out of mind (or, worse, buried in the depths of your laptop). It contains only things that you can actually accomplish in a day or two, and then you turn the page forever and start again. If you think of more things that need to be on the list after you think you’re done making it, just add them. If you get to the last few things on the list and realize they’re not that important, don’t do them. This type of to-do list doesn’t take any work to assemble. It isn’t aesthetically pleasing. It doesn’t need to be organized in any particular way, or at all. It’s not a plan. It’s just a list.

If you’d feel more convinced by some psychological evidence instead of the personal recommendation of a stranger with an aversion to calendars, a modest amount of research has amassed over the years to suggest that I’m on the right track. List-making seems to be a boon to working memory, and writing longhand instead of typing on a keyboard seems to aid in certain types of cognition, including learning and memory. My own experience is in line with the basic findings of that research: Writing down a list forces me to recall all of the things that are swimming around in my head and occasionally breaking through to steal my attention, and then it moves the tasks from my head onto the paper. My head is then free to do other things. Like, you know, the stuff on the list. There are no branded tools you have to buy, and no subscriptions. It cannot be monetized. Write on the back of your water bill, for all I care. Just remember to pay your water bill.

You wake up with a stuffy nose, so you head to the pharmacy, where a plethora of options awaits in the cold-and-flu aisle. Ah, how lucky you are to live in 21st-century America. There’s Sudafed PE, which promises “maximum-strength sinus pressure and nasal congestion relief.” Sounds great. Or why not grab DayQuil in case other symptoms show up, or Tylenol Cold + Flu Severe should whatever it is get really bad? Could you have allergies instead? Good thing you can get Benadryl Allergy Plus Congestion, too.

Unfortunately for you and me and everyone else in this country, the decongestant in all of these pills and syrups is entirely ineffective. The brand names might be different, but the active ingredient aimed at congestion is the same: phenylephrine. Roughly two decades ago, oral phenylephrine began proliferating on pharmacy shelves despite mounting—and now damning—evidence that the drug simply does not work.

“It has been an open secret among pharmacists,” says Randy Hatton, a pharmacy professor at the University of Florida, who filed a citizen petition in 2007 and again in 2015 asking the FDA to reevaluate phenylephrine. This week, an advisory panel to the FDA voted 16–0 that the drug is ineffective orally, which could pave the way for the agency to finally pull the drug.

If so, the impact would be huge. Phenylephrine is combined with fever reducers, cough suppressants, or antihistamines in many popular multidrug products such as the aforementioned DayQuil. Americans collectively shell out $1.763 billion a year for cold and allergy meds with phenylephrine, according to the FDA, which also calls the number a likely underestimate. That’s a lot of money for a decongestant that, again, does not work.

Over-the-counter oral decongestants weren’t always this bad. But in the early 2000s, states began restricting access to pseudoephedrine—a different drug that actually is effective against congestion—because it could be used to make meth; the Combat Methamphetamine Epidemic Act, signed in 2006, took the restrictions national. You can still buy real-deal Sudafed containing pseudoephedrine, but you have to show an ID and sign a logbook. Meanwhile, manufacturers filled over-the-counter shelves with phenylephrine replacements such as Sudafed PE. The PE is for phenylephrine, but you would be forgiven for not noticing the different name.

“That switch from pseudoephedrine to phenylephrine was a big mistake,” says Ronald Eccles, who ran the Common Cold Unit at Cardiff University until his retirement. Eccles was critical of the switch back in 2006. The evidence, he wrote at the time, was already pointing to phenylephrine as a lousy oral drug.

Problems started showing up quickly. Hatton, who was then a co-director of the University of Florida Drug Information Center, started getting a flurry of questions about phenylephrine: Does it work? What’s the right dose? Because my patients are complaining that it’s not doing anything. He decided to investigate, and he went deep. Hatton filed a Freedom of Information Act request for the data behind the FDA’s initial evaluation of the drug in 1976. He soon found himself searching through a banker’s box of records, looking for studies whose raw data he and a postdoctoral resident typed up by hand to reanalyze. The 14 studies the FDA had considered at the time had mixed results. Five of the positive ones were all conducted at the same research center, whose results looked better than everyone else’s. Hatton’s team thought that was suspicious. If you excluded those studies, the drug no longer looked effective at its usual dose.

All told, the case for phenylephrine was not great, but the case against was no slam dunk either. When Hatton and colleagues at the University of Florida, including Leslie Hendeles, filed a citizen petition, they asked the agency to increase the maximum dose to something that could be more effective. They did not ask to pull the drug entirely.

There was more damning evidence to come, though. The petition led to a first FDA advisory committee meeting, in 2007, where scientists from a pharmaceutical company named Schering-Plough, which later became Merck, presented brand-new data. The company had begun studying the drug, Hatton and Hendeles recalled, because it was interested in replacing the pseudoephedrine in its allergy drug Claritin-D. But these industry scientists did not come to defend phenylephrine. Instead, they dismantled the very foundation of the drug’s supposed efficacy.

They showed that almost no phenylephrine reaches the nasal passages, where it theoretically could reduce congestion and swelling by causing blood vessels to constrict. When taken orally, most of it gets destroyed in the gut; only 1 percent is active in the bloodstream. This seemed to be borne out by what people experienced when they took the drug—which was nothing. The scientists presented two more studies that found phenylephrine to be no better than placebo in people congested because of pollen allergies.

These studies, the FDA later wrote, were “remarkable,” changing the way the agency thought about how oral phenylephrine works in the body. But experts still weren’t ready to write the drug off entirely. The 2007 meeting ended with the advisory committee asking for data from higher doses.

The story for phenylephrine only got worse from there. In hopes of making an effective product, Merck went on to study higher doses in two randomized clinical trials published in 2015 and 2016. “We went double, triple, quadruple—showed no benefit,” Eli Meltzer, an allergist who helped conduct the trials for Merck, said at the FDA-advisory-panel meeting this week. In other words, not only is phenylephrine ineffective at the labeled dosage of 10 milligrams every four hours, it is not even effective at four times that dose. These data prompted Hatton and Hendeles to file a second citizen petition and helped prompt this week’s advisory meeting. This time, the panel didn’t need any more data. “We’re kind of beating a dead horse … This is a done deal as far as I’m concerned. It doesn’t work,” one committee member, Paul Pisarik, said at the meeting. The advisory committee’s 16–0 vote is not binding, though, so it’s still up to the FDA to decide what to do about phenylephrine.

In any case, phenylephrine is not the only cold-and-flu drug with questionable effectiveness in its approved form. The common cough drugs guaifenesin and dextromethorphan have both come under fire. But we lack the robust clinical-trial data to draw a definitive conclusion on those one way or the other. “What really helped our case is the fact that Merck funded those studies,” Hatton says. And that Merck let its scientists publish them. Failed studies from drug companies usually don’t see the light of day because they present few incentives for publication. Changing the consensus on phenylephrine took an extraordinary set of circumstances.

It also required two dogged guys who have now been at this work for nearly two decades. “We’re just a couple of older professors from the University of Florida trying to do what’s best for society,” Hatton told me. When I asked whether they would be tackling other cold medications, he demurred: “I don’t know if either one of us has another 20 years in us.” He would instead like to see public funding for trials like Merck’s to reevaluate other over-the-counter drugs.

There are other effective decongestants on pharmacy shelves. Even though phenylephrine does not work in pill form, “phenylephrine is very effective if you spray it into the nose,” Hendeles says. Neo-Synephrine is one such phenylephrine spray. Other nasal sprays containing other decongestants, such as Afrin, are also effective. But the only other common oral decongestant is pseudoephedrine, which requires that extra step of asking the pharmacist.
Restricting pseudoephedrine has not curbed the meth epidemic, either. Meth-related overdoses are skyrocketing after Mexican drug rings perfected a newer, cheaper way to make methamphetamine without using pseudoephedrine at all. This actually effective drug remains behind the counter, while ineffective ones fill the shelves.

Paul Offit is not an anti-vaxxer. His résumé alone would tell you that: A pediatrician at Children’s Hospital of Philadelphia, he is the co-inventor of a rotavirus vaccine for infants that has been credited with saving “hundreds of lives every day”; he is the author of roughly a dozen books on immunization that repeatedly debunk anti-vaccine claims. And from the earliest days of COVID-19 vaccines, he’s stressed the importance of getting the shots. At least, up to a certain point.

Like most of his public-health colleagues, Offit strongly advocates annual COVID shots for those at highest risk. But regularly reimmunizing young and healthy Americans is a waste of resources, he told me, and invites unnecessary exposure to the shots’ rare but nontrivial side effects. If they’ve already received two or three doses of a COVID vaccine, as is the case for most, they can stop—and should be told as much.

His view cuts directly against the CDC’s new COVID-vaccine guidelines, announced Tuesday following an advisory committee’s 13–1 vote: Every American six months and older should get at least one dose of this autumn’s updated shot. For his less-than-full-throated support for annual vaccination, Offit has become a lightning rod. Peers in medicine and public health have called his opinions “preposterous.” He’s also been made into an unlikely star in anti-vaccine circles. Public figures with prominent shot-skeptical stances have approvingly parroted his quotes. Right-leaning news outlets that have featured vaccine misinformation have called him up for quotes and sound bites—a sign, he told me, that as a public-health expert “you screwed up somehow.”

Offit stands by his opinion, the core of which is certainly scientifically sound: Some sectors of the population are at much higher risk for COVID than the rest of us. But the crux of the controversy around his view is not about facts alone. At this point in the pandemic, in a country where seasonal vaccine uptake is worryingly low and direly inequitable, where health care is privatized and piecemeal, where anti-vaccine activists will pull at any single loose thread, many experts now argue that policies riddled with ifs, ands, or buts—factually sound though they may be—are not the path toward maximizing uptake. “The nuanced, totally correct way can also be the garbled-message way,” Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases, told me.

For the past two years, the United States’ biggest COVID-vaccine problem hasn’t been that too many young and healthy people are clamoring for shots and crowding out more vulnerable groups. It’s been that no one, really—including those who most need additional doses—is opting for additional injections at all. America’s vaccination pipeline is already so riddled with obstacles that plenty of public-health experts have become deeply hesitant to add more. They’re opting instead for a simple, proactive message—one that is broadly inclusive—in the hope that a concerted push for all will nudge at least some fraction of the public to actually get a shot this year.  

Listen to Katherine J. Wu on Radio Atlantic:


On several key vaccination points, experts do largely agree. The people who bear a disproportionate share of COVID’s risk should receive a disproportionate share of immunization outreach, says Saad Omer, the dean of UT Southwestern’s O’Donnell School of Public Health.

Choosing which groups to prioritize, however, is tricky. Offit told me he sees four groups as being at highest risk: people who are pregnant, immunocompromised, over the age of 70, or dealing with multiple chronic health conditions. Céline Gounder, an infectious-disease specialist and epidemiologist at NYC Health + Hospitals/Bellevue, who mostly aligns with Offit’s stance, would add other groups based on exposure risk: people living in shelters, jails, or other group settings, for instance, and potentially people who work in health care. (Both Gounder and Offit also emphasize that unvaccinated people, especially infants, should get their shots this year, period.) But there are other vulnerable groups to consider. Risk of severe COVID still stratifies by factors such as socioeconomic status and race, concentrating among groups who are already disproportionately disconnected from health care.

That’s a potentially lengthy list—and messy messaging has hampered pandemic responses before. As Gretchen Chapman, a vaccine-behavior expert at Carnegie Mellon University, told me last month, a key part of improving uptake is “making it easy, making it convenient, making it the automatic thing.” Fauci agrees. Offit, had he been at the CDC’s helm, would have strongly recommended the vaccine for only his four high-risk groups, and merely allowed everyone else to get it if they wanted to—drawing a stark line between those who should and those who may. Fauci, meanwhile, approves of the CDC’s decision. If it were entirely up to him, “I would recommend it for everyone” for the sheer sake of clarity, he told me.

[Read: Fall’s vaccine routine didn’t have to be this hard]

The benefit-risk ratio for the young and healthy, Fauci told me, is lower than it is for older or sicker people, but “it’s not zero.” Anyone can end up developing a severe case of COVID. That means that shoring up immunity, especially with a shot that targets a recent coronavirus variant, will still bolster protection against the worst outcomes. Secondarily, the doses will lower the likelihood of infection and transmission for at least several weeks. Amid the current rise in cases, that protection could soften short-term symptoms and reduce people’s chances of developing long COVID; it could minimize absences from workplaces and classrooms; it could curb spread within highly immunized communities. For Fauci, those perks are all enough to tip the scales.

Offit did tell me that he’s frustrated at the way his views have frequently been framed. Some people, for instance, are inaccurately portraying him as actively dissuading people from signing up for shots. “I’m not opposed to offering the vaccine for anyone who wants it,” he told me. In the case of the young and healthy, “I just don’t think they need another dose.” He often uses himself as an example: At 72 years old, Offit didn’t get the bivalent shot last fall, because he says he’s in good health; he also won’t be getting this year’s XBB.1-targeting brew. Three original-recipe shots, plus a bout of COVID, are protection enough for him. He gave similar advice to his two adult children, he told me, and he’d say the same to a healthy thrice-dosed teen: More vaccine is “low risk, low reward.”

The vax-for-all guideline isn’t incompatible, exactly, with a more targeted approach. Even with a universal recommendation in place, government resources could be funneled toward promoting higher uptake among essential-to-protect groups. But in a country where people, especially adults, are already disinclined to vaccinate, other experts argue that the slight difference between these two tactics could compound into a chasm between public-health outcomes. A strong recommendation for all, followed by targeted implementation, they argue, is more likely to result in higher vaccination rates all around, including in more vulnerable populations. Narrow recommendations, meanwhile, could inadvertently exclude people who really need the shot, while inviting scrutiny over a vaccine’s downsides—cratering uptake in high- and low-risk groups alike. Among Americans, avoiding a strong recommendation for certain populations could be functionally synonymous with explicitly discouraging those people from getting a shot at all.

Offit pointed out to me that several other countries, including the United Kingdom, have issued recommendations that target COVID vaccines to high-risk groups, as he’d hoped the U.S. would. “What I’ve said is really nothing that other countries haven’t said,” Offit told me. But the situation in the U.S. is arguably different. Our health care is privatized and far more difficult to access and navigate. People who are unable to, or decide not to, access a shot have a weaker, more porous safety net—especially if they lack insurance. (Plus, in the U.K., cost was reportedly a major policy impetus.) A broad recommendation cuts against these forces, especially because it makes it harder for insurance companies to deny coverage.

[Read: The big COVID question for hospitals this fall]

A weaker call for COVID shots would also make that recommendation incongruous with the CDC’s message on flu shots—another universal call for all Americans six months and older to dose up each year. Offit actually does endorse annual shots for the flu: Immunity to flu viruses erodes faster, he argues, and flu vaccines are “safer” than COVID ones.

It’s true that COVID and the flu aren’t identical—not least because SARS-CoV-2 continues to kill and chronically sicken more people each year. But other experts noted that the cadence of vaccination isn’t just about immunity. Recent studies suggest that, at least for now, the coronavirus is shape-shifting far faster than seasonal flu viruses are—a point in favor of immunizing more regularly, says Vijay Dhanasekaran, a viral-evolution researcher at the University of Hong Kong. The coronavirus is also, for now, simply around for more of the year, which makes infections more likely and frequent—and regular vaccination perhaps more prudent. Besides, scientifically and logistically, “flu is the closest template we have,” Ali Ellebedy, an immunologist at Washington University in St. Louis, told me. Syncing the two shots’ schedules could have its own rewards: The regularity and predictability of flu vaccination, which is typically higher among the elderly, could buoy uptake of COVID shots—especially if manufacturers are able to bundle the immunizations into the same syringe.

That flu touchstone may be especially important this fall. With the newly updated shots arriving late in the season, and COVID deaths still at a relative low, experts are predicting that uptake may be worse than it was last year, when less than 20 percent of people opted in to the bivalent dose. A recommendation from the CDC “is just the beginning” of reversing that trend, Omer, of UT Southwestern, told me. Getting the shots also needs to be straightforward and routine. That could mean actively promoting them in health-care settings, making it easier for providers to check if their patients are up to date, guaranteeing availability for the uninsured, and conducting outreach to the broader community—especially to vulnerable groups.

Offit hasn’t changed his mind on who most needs these new COVID vaccines. But he is rethinking how he talks about it: “I will stop putting myself in a position where I’m going to be misinterpreted,” he told me. After the past week, he more clearly sees the merits of focusing on who should be signing up rather than who doesn’t need another dose. Better to emphasize the importance of the shot for the people he worries most about and recommend it to them, without reservation, to whatever extent we can.

I started to sew for a simple, selfish reason: I just wanted cool clothes that actually fit my body. I was a very tall teenage girl in an era long before online shopping was popular, living in a small town where the mall options were limited at best. (Our mall did not even have The Limited.) And I was lucky enough to have a crafty midwestern mom who had a sewing machine set up in our basement. One day, I started using it.

I did not think then that I was forever altering my relationship to buying clothes. If anything, I was just following a teenage whim. I rode my bike to the Goodwill up the street, bought some floral bedsheets, and turned them into pajama pants. (This was not couture. I remember mismatching the crotch seams and having to re-sew them with my mom’s help.) Soon after, like any good grunge girl of the mid-’90s, I made a skirt out of neckties. And then I was hooked.

My skills improved as finding clothes that almost fit and adapting them became a hobby, then a habit. By college, I was making whole garments. The era of fast fashion was dawning, but Forever 21 and H&M had yet to make inroads into my town—and didn’t carry pants with my lengthy inseam anyway. In order to have an aesthetic I loved at a price I could afford, I had to make most things myself.

Having a basic understanding of how to make and alter clothes has fundamentally shaped the way I dress myself. But if I’d grown up in the age of $10 Shein tops and $15 PrettyLittleThing dresses, I’m not sure I would have found my way to a sewing machine. This is doubly true because fast-fashion brands are now the ones that tend to cater to extended sizes. I probably would have ordered those pajama pants with just a few clicks, then tried not to think about the garment worker who made them, or how many times I’d wear them before the seams unraveled and I threw them in the trash.

Fast-fashion behemoths know their customers are aware of the many reports that detail the hazardous materials and labor violations underlying the mountains of landfill-bound garments. They apply the word sustainable to select items made with recycled polyester and nylon; meanwhile, the bloated market for disposable clothes just keeps expanding. For shoppers, fast fashion is cheap and easy; truly sustainable clothing consumption appears expensive and confusing. Many small-batch or eco-friendly brands have limited size options, and even with the rise of secondhand-shopping apps, sifting through the inventory can be time-consuming. Impulse clicking “Confirm order” in several sizes and then going through a returns process later seems so much easier by comparison.

Learning to sew will not only help you avoid the environmental horrors of modern retail; it will show you the thrill of wearing clothes that actually fit. This is not an argument for a cottage-core lifestyle in which you hand-make every raw-linen garment that touches your body. I’m more for an incremental approach: Acquiring a few basic sewing skills, little by little, will change how you get dressed. Even if you never make a whole garment from scratch, knowing how to adjust a seam will make secondhand shopping easier and more accessible. And when you’re looking for new clothes, knowing your measurements will help you order only items that are likely to fit. The goal is not to become a master tailor. It’s to become fluent in how clothes fit your body.

When you sew for yourself, you really learn your body. You also relearn how to think about your body. Even a beginner-level sewing project makes clear that it is impossible to reduce your complex contours and spans to a single number or letter on a tag. And you learn how you like things to fit you: where you prefer your waistband to hit on your belly, what inseam works for a crop length versus ankle, how low you like a neckline to go. Once you know these things, you will never acquire clothes the same way again.

Sewing skills open up the possibilities of secondhand shopping. Instead of hoping to strike gold with the perfect fit, you can see garments for their possibilities. That dress would be perfect if I took off the sleeves, you’ll catch yourself thinking. Or, I could hem those trousers in about five minutes. And the same goes for your own rarely worn items. The ritual of a closet clean-out takes on a new twist when you can alter things to match your current shape and style. I’ve turned a shift dress into a skirt and boxy top, an old bedsheet into the backing material for a quilt, and cropped too many T-shirts to count. Instead of ending up in the trash or a giveaway pile, these items have gotten a second spin through my wardrobe.

Learning to sew has also profoundly affected how I buy new clothes. Knowing my body and my measurements means I can check the actual dimensions of an item before I buy it. Few retailers list those numbers, so in many cases I have to email customer-support representatives to ask for actual inches instead of the meaningless designations of S, M, or L. This might sound annoying, but it’s way more efficient than scrolling through dozens of comments, hoping someone with extra-long legs has noted where the pants hit them. No more guesswork! Measurements help me feel confident that an item will fit, which means I don’t have to order multiple sizes or fret about two-week return windows.

I have simply become a more discerning shopper. Knowing a bit about how a garment is constructed means I know what a quality seam looks like, and working with various fabrics means I know how various materials feel between my fingers. The difference between polyester and modal and linen is immediately apparent. Paying attention to these details means that, when I do buy new clothes, I tend to save up for better-quality ones. And I have a bit more money to do so because the rest of my closet is secondhand or handmade.

Ready to join me in sewing eco-bliss? Evangelists who tout their head-to-toe “me made” looks have always been a little alienating to me; I would lower the stakes by finding a few YouTube or TikTok accounts devoted to repurposing thrifted materials, and then experimenting with tweaks to a garment you’d otherwise throw away.

A few other dos and don’ts:

Don’t use that 1970s sewing machine you inherited from your great-aunt. It will take ages to thread and be bulky to store. Do spend less than $100 on a basic new machine that will be easier to thread and move to and from a table or desk. Get a fresh pair of scissors (ones that you use only to cut fabric) and some straight pins. That’s all you need.

Don’t feel like you need to throw out half your closet and fill it with homemade items. Do take stock of your wardrobe and body. Take your measurements from top to bottom and write them down. I keep mine in a notes app so they’re always handy. Measure the garments you own that fit you well. (You just might learn that all your favorite pants have the same rise and waist size! Who knew?) Look closely at an item in your closet and examine how it’s constructed. Where are the seams? This is how you start to learn the anatomy of a garment. Don’t feel like you need to do anything to these clothes—it’s just about noticing what’s already working for you.

Don’t rush to a fabric store and buy a bolt of new material. The linens and housewares sections of your local thrift store are great sources of decent-quality fabrics. Cotton bedsheets are the cheapest and easiest sewing material for beginners. But any fabric that feels good in your hand—and isn’t too thick or too stretchy—will do. Wash it, dry it, and iron it before you start.

Don’t try to make a wedding gown right off the bat. Try a beginner project like a boxy top, an A-line skirt, or a tote bag. Or take one of the clothing items that fits you well (here, too, avoid stretchy fabric) and use it as the pattern to make something new. The goal is not to win a CFDA emerging-designer award but to develop a basic understanding of how clothes are made. Play. Experiment. Pay attention.

You will mess up. You will sew the butt seam to the side seam and create an unwearable pair of “pants” with no leg opening. You will accidentally snip the center of a huge piece of fabric, destroying hours of work. You will get big tangled knots in the thread of your machine. You will curse and scream and tear your hair out. You will occasionally destroy an item you were hoping to rescue.

In these moments, it can help to remember that you have a higher purpose. You are not filling every corner of the Earth with nonbiodegradable tube dresses and puff-sleeve tops, and you won’t have to remember to return the sizes that didn’t fit. Best of all, when you do succeed in finishing a garment, you will receive compliments about your clothes. And you will respond, in the humblest tone you can bring yourself to adopt (which is really much closer to a brag), “Thanks. I made it.”

Trust me, it never gets old.

This story is part of the Atlantic Planet series supported by HHMI’s Science and Educational Media Group.

This article originally appeared in Undark Magazine.

When Kevin E. Taylor became a pastor 22 years ago, he didn’t expect how often he’d have to help families make gut-wrenching decisions for a loved one who was very ill or about to die. The families in his predominantly Black church in New Jersey generally didn’t have any written instructions, or conversations to recall, to help them know if their relative wanted—or didn’t want—certain types of medical treatment.

So Taylor started encouraging church members to ask their elders questions, such as whether they would want to be kept on life support if they became sick and were unable to make decisions for themselves.

“Each time you have the conversation, you destigmatize it,” says Taylor, now the senior pastor at Unity Fellowship Church NewArk, a Christian church with about 120 regular members.

Taylor is part of an initiative led by Compassion & Choices, a nonprofit advocacy group that encourages more Black Americans to consider and document their medical wishes for the end of their life.

End-of-life planning—also known as advance care planning, or ACP—usually requires a person to fill out legal documents that indicate the care they would want if they were to become unable to speak for themselves because of injury or illness. There are options to specify whether they would want life-sustaining care, even if it were unlikely to cure or improve their condition, or comfort care to manage pain, even if it hastened death. Medical groups have supported ACP, and proposed public-awareness campaigns aim to promote the practice.

Yet research has found that many Americans—particularly Black Americans—have not bought into the promise of ACP. Advocates say that such plans are especially important for Black Americans, who are more likely to experience racial discrimination and lower-quality care throughout the health-care system. Advance care planning, they say, could help patients understand their options and document their wishes, as well as reduce anxiety for family members.

However, the practice has also come under scrutiny in recent years: Some research suggests that it might not actually help patients get the kind of care they want at the end of life. It’s unclear whether those results are due to research methods or to a failure of ACP itself; comparing the care that individuals said they want in the future with the care they actually received while dying is exceedingly difficult. And many studies that show the shortcomings of ACP look predominantly at white patients.

Still, researchers maintain that encouraging discussions about end-of-life care is important, while also acknowledging that ACP needs either improvement or an overhaul. “We should be looking for, okay, what else can we do other than advance care planning?” says Karen Bullock, a social-work professor at Boston College, who researches decision-making and acceptance around ACP in Black communities. “Or can we do something different with advance care planning?”

[Read: The new old age]

Advance care planning was first proposed in the U.S. in 1967, when a lawyer for the now-defunct Euthanasia Society of America advocated for the idea of a living will—a document that would allow a person to indicate whether to withhold or withdraw life-sustaining treatment if they were no longer capable of making health-care decisions. By 1986, most states had adopted living-will laws that established standardized documents for patients, as well as protections for physicians who complied with patients’ wishes.

Over the past four decades, ACP has expanded to include a range of legal documents, called advance directives, for detailing one’s wishes for end-of-life care. In addition to do-not-resuscitate, or DNR, orders, patients can list treatments they would want and under which scenarios, as well as appoint a surrogate to make health-care decisions for them. Health-care facilities that receive Medicare or Medicaid reimbursement are required to ask whether patients have advance directives, and to provide them with relevant information. And in most states, doctors can record a patient’s end-of-life wishes in a form called a Provider Order for Life-Sustaining Treatment. Unlike advance directives, which patients usually fill out on their own without discussing them directly with their doctor, these forms are completed in conversation with a physician and then added to the patient’s chart.

As for who actually makes those plans, research has shown a racial disparity: A 2016 study of more than 2,000 adults, all of whom were over the age of 50, showed that 44 percent of white participants had completed an advance directive, compared with 24 percent of Black participants. Many people simply aren’t aware of ACP or don’t fully understand it. And for Black individuals, that knowledge may be especially hard to come by—one study found that clinicians tend to avoid discussions with Black and other nonwhite patients about the care they want at the end of life, because they feel uncomfortable broaching these conversations or are unsure of whether patients want to have them.

Other research has found that Black Americans may be more hesitant to fill out documents in part because of a mistrust in the health-care system, rooted in a long history of racist treatment. “It’s a direct, in my opinion, outcome from segregated health-care systems,” Bullock says. “When we forced integration, integration didn’t mean equitable care.”

Religion can also be a major barrier to ACP. A large proportion of Black Americans are religious, and some say they are hesitant to engage in ACP because of the belief that God, rather than clinicians, should decide their fate. That’s one reason programs such as Compassion & Choices have looked to churches to make ACP more accessible. Several studies support the effectiveness of sharing health messages, including about smoking cessation and heart health, in church communities. “Black people tend to trust their faith leaders, and so if the church is saying this is a good thing to do, then we will be willing to try it,” Bullock says.

But in 2021, an article by palliative-care doctors laid bare the growing evidence that ACP may be failing to get patients the end-of-life care they want, also known as goal-concordant care. The paper summarized the findings of numerous studies investigating the effectiveness of the practice, and concluded that “despite the intrinsic logic of ACP, the evidence suggests it does not have the desired effect.”

For example, although some studies identified benefits such as increased likelihood of a patient dying in the place they desired or avoiding unwanted resuscitation, others found the opposite. One study found that seriously ill patients who prioritized comfort care in their advance directive spent nearly as many days in the hospital as patients who prioritized life-extending care. The authors of the 2021 summary paper suggested several reasons that goal-concordant care might not occur: Patients may request treatments that are not available; clinicians may not have access to the documentation; surrogates may override patients’ requests.

A pair of older studies suggested that these issues might be especially pronounced for Black patients; they found that Black patients with cancer who had signed DNR orders were more likely to be resuscitated, for example. These studies have been held up as evidence that Black Americans receive less goal-concordant care. But Holly Prigerson, a researcher at Cornell University who oversaw the studies, notes that her team investigated the care of Black participants who were resuscitated against their wishes, and in those cases, clinicians did not have access to their records because the patients had been transferred from another hospital.

One issue facing research on advance care planning is that so many studies focus on white patients, giving little insight into whether ACP helps Black patients. For example, in two recent studies on the subject, more than 90 percent of patients were white.

Many experts, including Prigerson, agree that it’s important to devise new approaches to assessing goal-concordant care, which generally relies on what patients indicated in advance directives or what they told family members months or years before dying. But patients change their minds, and relatives may not understand or accept their wishes.

[Read: My mom will email me after she dies]

“It’s a very problematic thing to assess,” Prigerson says. “It’s not impossible, but there are so many issues with it.”

As for whether ACP can manage to improve end-of-life care specifically in areas where Black patients receive worse care, such as pain management, experts such as Bullock note that studies have not really explored that question. But addressing other racial disparities—including correcting physicians’ false beliefs about Black patients being less sensitive to pain, improving how physicians communicate with Black patients, and strengthening social supports for patients who want to enroll in hospice—is likely more crucial than expanding ACP.

ACP “may be part of the solution, but it is not going to be sufficient,” says Robert M. Arnold, a University of Pittsburgh professor of palliative care and medical ethics, and one of the authors of the 2021 article that questioned the benefits of ACP.

Many of the shortcomings of ACP, including the low engagement rate and the unclear benefits, have prompted researchers and clinicians to think about how to overhaul the practice.

Efforts to make ACP more accessible have included easy-to-read versions stripped of legalese, as well as short, simple videos. A 2023 study found that one program that incorporated these elements, called PREPARE for Your Care, helped both white and Black adults with chronic medical conditions get goal-concordant care. The study stood out because it asked patients who were still able to communicate whether they were getting the medical care they wanted, rather than waiting until after they died to evaluate goal-concordant care.

“That, to me, is incredibly important,” says Rebecca Sudore, a geriatrician and researcher at UC San Francisco, who was the senior author of the study and helped develop PREPARE for Your Care. Sudore and her colleagues have proposed “real-time assessment from patients and their caregivers” to more accurately measure goal-concordant care.

In the past few years, clinicians have become more aware that ACP should involve ongoing conversations and shared decision-making among patients, clinicians, and surrogates, rather than just legal documents, says Ramona Rhodes, a geriatrician affiliated with the University of Arkansas for Medical Sciences.

Rhodes and her colleagues are leading a study to address whether certain types of ACP can promote engagement and improve care for Black patients. A group of older patients—half are Black, and half are white—with serious illnesses at clinics across the South are receiving materials either for Respecting Choices, an ACP guide that focuses on conversations with patients and families, or Five Wishes, a short patient questionnaire and the most widely used advance directive in the United States. The team hypothesizes that Respecting Choices will lead to greater participation among Black patients and possibly more goal-concordant care, if it prepares patients and families to talk with clinicians about their wishes, Rhodes says.

Taylor, the pastor, notes that when he talks with church members about planning for end-of-life care, they often see the importance of it for the first time. And it usually persuades them to take action. “Sometimes it’s awkward,” he says. “But it’s now awkward and informed.”

In the 1970s, they tried lithium. Then it was zinc and THC. Anti-anxiety drugs had their turn. So did Prozac and SSRIs and atypical antidepressants. Nothing worked. Patients with anorexia were still unable to bring themselves to eat, still stuck in rigid thought patterns, still chillingly underweight.

A few years ago, a group led by Evelyn Attia, the director of the Center for Eating Disorders at New York Presbyterian Hospital and the New York State Psychiatric Institute, tried giving patients an antipsychotic drug called olanzapine, normally used to treat schizophrenia and bipolar disorder, and known to cause weight gain as a side effect. The patients in her study who were on olanzapine increased their BMI a bit more than those who were taking a placebo, but the two groups showed no difference in their cognitive and psychological symptoms. It remains the only medication trial for anorexia to have shown any positive effect at all, Attia told me, and even then, the effects were “very modest.”

Despite nearly half a century of attempts, no pill or shot has been shown to treat anorexia nervosa effectively. Anorexia is well known to be the deadliest eating disorder; the only psychiatric diagnosis with a higher death rate is opioid-use disorder. A 2020 review found that people who have been hospitalized for the disease are more than five times likelier to die than their peers without it. The National Institutes of Health has devoted more than $100 million over the past decade to studying anorexia, yet researchers have not found a single compound that reliably helps people with the disorder.

Other eating disorders aren’t nearly so resistant to treatment. The FDA has approved fluoxetine (a.k.a. Prozac) to treat bulimia nervosa and binge-eating disorder (BED); doctors prescribe additional SSRIs off-label to treat both conditions, with a fair rate of success. An ADHD drug, Vyvanse, was approved for BED within two years of the disorder’s official recognition. But when it comes to anorexia, “we’ve tried, I don’t know, eight or 10 fundamentally different kinds of approaches without much in the way of success,” says Scott Crow, an adjunct psychology professor at the University of Minnesota and the vice president of psychiatry for Accanto Health.

The discrepancy is puzzling to anorexia specialists and researchers. “We don’t fully understand why medications work so differently in this group, and boy, do they ever work differently,” Attia told me. Still, experts have some ideas. Over the past few decades, they have been learning about the changes in brain activity that accompany anorexia. For example, Walter Kaye, the founder and executive director of the Eating Disorders Program at UC San Diego, told me that the neurotransmitters serotonin and dopamine, both of which are involved in the brain’s reward system, seem to act differently in anorexia patients.

Perhaps some underlying differences in brain chemistry and function play a role in anorexia patients’ extreme aversion to eating. Or perhaps, the experts I spoke with suggested, these brain changes are at least in part a result of patients’ malnourishment. People with anorexia suffer from many effects of malnutrition: Their bones are more brittle; their brain is smaller; their heart beats slower; their breath comes shorter; their wounds fail to heal. Maybe their neurons respond differently to psychoactive drugs too.

[Read: The challenge of treating anorexia in adults]

Psychiatrists have found that many patients with anorexia don’t improve with treatment even when medicines are prescribed for conditions other than their eating disorder. If an anorexia patient also has anxiety, for example, taking an anti-anxiety drug would likely fail to relieve either set of symptoms, Attia told me. “Time and again, investigators have found very little or no difference between active medication and placebo in randomized controlled trials,” she said. The fact that fluoxetine seems to help anorexia patients avoid relapse—but only when it’s given after they’ve regained a healthy weight—also supports the notion that malnourished brains don’t respond so well to psychoactive medication. (In that case, the effect might be especially acute for people with anorexia nervosa, because they tend to have lower BMIs than people with other eating disorders.)

Why exactly this would be true remains a mystery. Attia noted that proteins and certain fats have been shown to be crucial for brain function; get too little of either, and the brain might not metabolize drugs in expected ways. Both she and Kaye suggested a possible role for tryptophan, an amino acid that humans get only from food. Tryptophan is converted into serotonin (among other things) when we release insulin after a meal, Kaye said, but in anorexia patients, whose insulin levels tend to be low, that process could end up off-kilter. “We suspect that that might be the reason why [SSRIs] don’t work very well,” he said, though he emphasized that the theory is very speculative.

In the absence of meaningful pharmacologic intervention, doctors who treat anorexia rely on methods such as nutrition counseling and psychotherapy. But even non-pharmaceutical interventions, such as cognitive behavioral therapy, are more effective at treating bulimia and binge-eating disorder than anorexia. Studies from around the world have shown that as many as half of people with anorexia relapse.

Colleen Clarkin Schreyer, a clinical psychologist at Johns Hopkins University, sees both patients with anorexia nervosa and those with bulimia nervosa, and told me that the former can be more difficult to treat—“but not just because of the fact that we don’t have any medication to help us along. I often find that patients with anorexia nervosa are more ambivalent about making behavior change.” Bulimia patients, she said, tend to feel shame about their condition, because binge eating is stigmatized and, well, no one likes vomit. But anorexia patients might be praised for skipping meals or rapidly losing weight, despite the fact that their behaviors can be just as dangerous over the long term as binging and vomiting.

[Read: Raising a daughter with a body like mine]

Researchers are still trying to find substances that can help anorexia patients. Crow told me that case studies testing a synthetic version of leptin, a naturally occurring human hormone, have produced interesting data. Meanwhile, some early research into using psychedelics, including ketamine, psilocybin, and ayahuasca, suggests that they may relieve some symptoms in some cases. But until randomized, controlled trials are conducted, we won’t know whether or how well any psychedelic really works. Kaye is currently recruiting participants for such a study of psilocybin, which is planned to have multiple sites in the U.S. and Europe.

Pharmaceutical companies just don’t seem that enthusiastic about testing treatments for anorexia, Crow said. “I think that drug makers have taken to heart the message that the mortality is high” among anorexia patients, he told me, and thus avoid the risk of having deaths occur during their clinical trials. And drug development isn’t the only area where the study of anorexia has fallen short. Research on eating disorders tends to be underfunded on the whole, Crow said. That stems, in part, from “a widely prevailing belief that this is something that people could or should just stop … I wish that were how it works, frankly. But it’s not.”

Back in the spring, around the end of the COVID-19 public-health emergency, hospitals around the country underwent a change in dress code. The masks that staff had been wearing at work for more than three years vanished, in some places overnight. At UChicago Medicine, where masking policies softened at the end of May, Emily Landon, the executive medical director of infection prevention and control, fielded hate mail from colleagues, some chiding her for waiting too long to lift the requirement, others accusing her of imperiling the immunocompromised. At Vanderbilt University Medical Center, which did away with masking in April, ahead of many institutions, Tom Talbot, the chief hospital epidemiologist, was inundated with thank-yous. “People were ready; they were tired,” he told me. “They’d been asking for several months before that, ‘Can we not stop?’”

But across hospitals and policies, infection-prevention experts shared one sentiment: They felt almost certain that the masks would need to return, likely by the end of the calendar year. The big question was exactly when.

For some hospitals, the answer is now. In recent weeks, as COVID-19 hospitalizations have been rising nationwide, stricter masking requirements have returned to a smattering of hospitals in Massachusetts, California, and New York. But what’s happening around the country is hardly uniform. The coming respiratory-virus season will be the country’s first after the end of the public-health emergency—its first, since the arrival of COVID, without crisis-caliber funding set aside, routine tracking of community spread, and health-care precautions already in place. After years of fighting COVID in concert, hospitals are back to going it alone.

A return to masking has a clear logic in hospitals. Sick patients come into close contact; medical procedures produce aerosols. “It’s a perfect storm for potential transmission of microbes,” Costi David Sifri, the director of hospital epidemiology at UVA Health, told me. Hospitals are on the front lines of disease response: They, more than nearly any other place, must prioritize protecting society’s vulnerable. And with one more deadly respiratory virus now in winter’s repertoire, precautions should logically increase in lockstep. But “there is no clear answer on how to do this right,” says Cameron Wolfe, an infectious-disease physician at Duke. Americans have already staked out their stances on masks, and now hospitals have to operate within those confines.

When hospitals moved away from masking this spring, they each did so at their own pace—and settled on very different baselines. Like many other hospitals in Massachusetts, Brigham and Women’s Hospital dropped its mask mandate on May 12, the day the public-health emergency expired; “it was a noticeable difference, just walking around the hospital” that day, Meghan Baker, a hospital epidemiologist for both Brigham and Women’s Hospital and Dana-Farber Cancer Institute, told me. UVA Health, meanwhile, weaned staff off of universal masking over the course of about 10 weeks.

Most masks at the Brigham are now donned on only a case-by-case basis: when a patient has active respiratory symptoms, say, or when a health-care worker has been recently sick or exposed to the coronavirus. Staff also still mask around the same subset of vulnerable patients that received extra protection before the pandemic, including bone-marrow-transplant patients and others who are highly immunocompromised, says Chanu Rhee, an associate hospital epidemiologist at Brigham and Women’s Hospital. UVA Health, meanwhile, is requiring masks for everyone in the hospital’s highest-risk areas—among them, certain intensive-care units, as well as cancer, transplant, and infusion wards. And although Brigham patients can always request that their providers mask, at UVA, all patients are asked upon admission whether they’d like hospital staff to mask.

Nearly every expert I spoke with told me they expected that masks would at some point come back. But unlike the early days of the pandemic, “there is basically no guidance from the top now,” Saskia Popescu, an epidemiologist and infection-prevention expert at the University of Maryland School of Medicine, said. The CDC still has a webpage with advice on when to mask. Those recommendations are tailored to the general public, though—and don’t advise covering up until COVID hospital admissions go “way high, when the horse has well and truly left the barn,” Landon, at UChicago, told me. “In health care, we need to do something before that”—tamping down transmission prior to wards filling up.

More specific advice could still emerge from the CDC, or individual state health departments. But going forward, the assumption is that “each hospital is supposed to have its own general plan,” Rhee told me. (I reached out to the CDC repeatedly about whether it might update its infection-prevention-guidance webpage for COVID—last retooled in May—but didn’t receive a response.)

Which leaves hospitals with one of two possible paths. They could schedule a start to masking season, based on when they estimate cases might rise—or they could react to data as they come in, tying masking policies to transmission bumps. With SARS-CoV-2 still so unpredictable, many hospitals are opting for the latter. That also means defining a true case rise—“what I think everybody is struggling with right now,” Rhee said. There is no universal definition, still, for what constitutes a surge. And with more immunity layered over the population, fewer infections are resulting in severe disease and death—even, to a limited extent, long COVID—making numbers that might have triggered mitigations just a year or two ago now less urgent catalysts.

[Read: The future of long COVID]

Further clouding the forecast is the fact that much of the data that experts once relied on to monitor COVID in the community have faded away. In most parts of the country, COVID cases are no longer regularly tallied; people are either not testing, or testing only at home. Wastewater surveillance and systems that track all influenza-like illnesses could provide some support. But that’s not a whole lot to go on, especially in parts of the country such as Tennessee, where sewage isn’t as closely tracked, Tom Talbot, of Vanderbilt, told me.

Some hospitals have turned instead to in-house stats. At Duke—which has adopted a mitigation policy that’s very similar to UVA’s—Wolfe has mulled pulling the more-masking lever when respiratory viruses account for 2 to 4 percent of emergency and urgent-care visits; at UVA, Sifri has considered taking action once 1 or 2 percent of employees call out sick, with the aim of staunching sickness and preserving staff. “It really doesn’t take much to have an impact on our ability to maintain operations,” Sifri told me. But “I don’t know if those are the right numbers.” Plus, internal metrics are now tricky for the same reasons they’ve gotten shaky elsewhere, says Xiaoyan Song, the chief infection-control officer at Children’s National Hospital, in Washington, D.C. Screening is no longer routine for patients, skewing positivity stats; even sniffly health-care workers, several experts told me, are now less eager to test and report.

[Read: What COVID hospitalization numbers are missing]

For hospitals that have maintained a more masky baseline, scenarios in which universal masking returns are a little easier to envision and enact. At UChicago Medicine, Landon and her colleagues have developed a color-coded system that begins at teal—masking for high-risk patients, patients who request masked care, and anyone with symptoms, plus masking in high-risk areas—and goes through everyone-mask-up-everywhere red; their team plans to meet weekly to assess the situation, based on a variety of community and internal metrics, and march their masking up or down. Wolfe, of Duke, told me that his hospital “wanted to reserve a little bit of extra masking quite intentionally,” so that any shift back toward stricter standards would feel like less of a shock: Habits are hard to break and then reform.

Other hospitals that have been living mostly maskless for months, though, have a longer road back to universal masking, and staff members who might not be game for the trek. Should masks need to return at the Brigham or Dana-Farber, for instance, “I suspect the reaction will be mixed,” Baker told me. “So we really are trying to be judicious.” The hospital might try to preserve some maskless zones in offices and waiting rooms, for instance, or lower-risk rooms. And at Children’s National, which has also largely done away with masks, Song plans to follow the local health department’s lead. “Once D.C. Health requires hospitals to reimplement the universal-masking policy,” she told me, “we will be implementing it too.”

Other mitigations are on the table. Several hospital epidemiologists told me they expected to reimplement some degree of asymptomatic screening for various viruses around the same time they reinstate masks. But measures such as visiting restrictions are a tougher call. Wolfe is reluctant to pull that lever before he absolutely has to: Going through a hospital stay alone is one of the “harder things for patients to endure.”

A bespoke approach to hospital masking isn’t impractical. COVID waves won’t happen synchronously across communities, and so perhaps neither should policies. But hospitals that lack the resources to keep tabs on viral spread will likely be at a disadvantage, and Popescu told me she worries that “we’re going to see significant transmission” in the very institutions least equipped to handle such an influx. Even the best-resourced places may hit stumbling blocks: Many are still reeling from three-plus years of crisis and are dealing with nursing shortages and worker burnout.

Coordination hasn’t entirely gone away. In North Carolina, Duke is working with the University of North Carolina at Chapel Hill and North Carolina State University to shift policies in tandem; in Washington State, several regional health-care organizations have pledged to align their masking policies. And the Veterans Health Administration—where masking remains required in high-risk units—has developed a playbook for augmenting mitigations across its many facilities, which together make up the country’s largest integrated health-care system, says Shereef Elnahal, the undersecretary of Veterans Affairs for health. Still, institutions can struggle to move in sync: Attitudes on masking aren’t exactly universal across health-care providers, even within a hospital.

The country’s experience with COVID has made hospitals that much more attuned to the impacts of infectious disease. Before the pandemic began, Talbot said, masking was a rarity in his hospital, even around high-risk patients; many employees would go on shifts sick. “We were pretty complacent about influenza,” he told me. “People could come to work and spread it.” Now hospital workers hold themselves to a stricter standard. At the same time, they have become intimately attuned to the drawbacks of constant masking: Some have complained that masks interfere with communication, especially for patients who are young or hard of hearing, or who have a language barrier. “I do think you lose a little bit of that personal bonding,” Talbot said. And prior to the lifting of universal masking at Vanderbilt, he said, some staff were telling him that one out of 10 times they’d ask a patient or family to mask, the exchange would “get antagonistic.”

[Read: The pandemic’s legacy is already clear]

When lifting mandates, many of the hospital epidemiologists I spoke with were careful to message to colleagues that the situation was fluid: “We’re suspending universal masking temporarily,” as Landon put it to her colleagues. Still, she admits that she felt uncomfortable returning to a low-mask norm at all. (When she informally polled nearly two dozen other hospital epidemiologists around the country in the spring, most of them told her that they felt the same.) Health-care settings aren’t meant to look like the rest of the world; they are places where precautions are expected to go above and beyond. COVID’s arrival had cemented masks’ ability to stop respiratory spread in close quarters; removing them felt to Landon like pushing those data aside, and putting the onus on patients—particularly those already less likely to advocate for themselves—to account for their own protection.

She can still imagine a United States in which a pandemic-era response solidified, as it has in several other countries, into a peacetime norm: where wearing masks would have remained as routine as donning gloves while drawing blood, a tangible symbol of pandemic lessons learned. Instead, many American hospitals will be entering their fourth COVID winter looking a lot like they did in early 2020—when the virus surprised us, when our defenses were down.

The numbers were climbing on a radiation dosimeter as the minibus carried me deeper into the complex. Biohazard suits are no longer required in most parts of Japan’s Fukushima Daiichi power plant, but still, I’d been given a helmet, eyewear, an N95 mask, gloves, two pairs of socks, and rubber boots. At the site of the world’s worst nuclear disaster since Chernobyl, you can never be too safe.

The road to the plant passes abandoned houses, convenience stores, and gas stations where forests of weeds sprout in the asphalt cracks. Inside, ironic signs posted after the disaster warn of tsunami risk. In March 2011, a 9.0-magnitude earthquake struck off Japan’s Pacific coast, triggering a tsunami that flooded the plant, knocking out its emergency diesel generators and initiating the failure of cooling systems that led to a deadly triple-reactor meltdown.

Now, looking down from a high platform, I could see a crumpled roof where a hydrogen explosion had ripped through the Unit 1 reactor the day after the tsunami hit. The eerie stillness of the place was punctuated by the rattle of heavy machinery and the cries of gulls down by the water, where an immense metal containment tank has been mangled like a dog’s chew toy. Great waves dashing against the distant breakwater shook the metal decks by the shore. Gazing out across this scene, I felt like I was standing at the vestibule of hell.

A dozen years after the roughly 50-foot waves crashed over Fukushima Daiichi, water remains its biggest problem. The nuclear fuel left over from the meltdown has a tendency to overheat, so it must be continuously cooled with water. That water becomes radioactive in the process, and so does any groundwater and rain that happens to enter the reactor buildings; all of it must be kept away from people and the environment to prevent contamination. To that end, about 1,000 dirty-water storage vats of various sizes blanket the complex. In all, they currently store 343 million gallons, and another 26,000 gallons are added to the total every day. But the power plant, its operator claims, is running out of room.

On August 24, that operator—the Tokyo Electric Power Company, or TEPCO—began letting the water go. The radioactive wastewater is first being run through a system of chemical filters in an effort to strip it of dangerous constituents, and then flushed into the ocean, where local fisheries operate. Although this plan has official backing from the Japanese government and the International Atomic Energy Agency, many in the region—including local fishermen and their potential customers—are frightened by its implications.

“The IAEA has said this will have a negligible impact on people and the environment,” Junichi Matsumoto, a TEPCO official in charge of water treatment, told reporters during a briefing at Daiichi during my visit in July. Only water that meets certain purity standards would be released into the ocean, he explained. The rest would be run through the filters and pumps again as needed. But no matter how many chances it gets, TEPCO’s Advanced Liquid Processing System cannot cleanse the water of tritium, a radioactive form of hydrogen that is produced by nuclear-power plants even during normal operations, or of carbon-14. These lingering contaminants are a source of continuing anxiety.

Last month, China, the biggest importer of Japanese seafood, imposed a blanket ban on fisheries’ products from Japan, and Japanese news media have reported domestic seafood chains receiving numerous harassing phone calls originating in China. The issue has exacerbated tensions between the two countries. (The Japanese public broadcaster NHK responded by reporting that each of 13 nuclear-power plants in China released more tritium in 2021 than Daiichi will release in one year.) In South Korea, the government tried to allay fears after thousands of people protested in Seoul over the water release.

Opposition within Japan has coalesced around potential harms to local fishermen. In Fukushima, where the season for trawl fishing has just begun, workers are worried that seafood consumers in Japan and overseas will view their products as tainted and boycott them. “We have to appeal to people that they’re safe and secure, and do our best as we go forward despite falling prices and harmful rumors,” one elderly fisherman told Fukushima Broadcasting as he brought in his catch.

Government officials are doing what they can to protect that brand. Representatives from Japan’s environmental agency and Fukushima prefecture announced last week that separate tests showed no detectable levels of tritium in local seawater after the water release began. But even if its presence were observed, many experts say the environmental risks of the release are negligible. According to the IAEA, tritium is a radiation hazard to humans only if ingested in large quantities. Jukka Lehto, a professor emeritus of radiochemistry at the University of Helsinki, co-authored a detailed study of TEPCO’s purification system that found it works efficiently to remove certain radionuclides. (Lehto’s earlier research played a role in the development of the system.) Tritium is “not completely harmless,” he told me, but the threat is “very minor.” The release of purified wastewater into the sea will not, practically speaking, “cause any radiological problem to any living organism.” As for carbon-14, the Japanese government says its concentration in even the untreated wastewater is, at most, just one-tenth the country’s regulatory standards.

[From 1976: Richard Rhodes on the benefits, costs, and risks of nuclear energy]

Opponents point to other potential problems. Greenpeace Japan says the biological impacts of releasing different radionuclides into the water, including strontium-90 and iodine-129, have been ignored. (When asked about these radionuclides, a spokesperson for the utility told me that the dirty water is “treated with cesium/strontium-filtering equipment to remove most of the contamination” and then subsequently processed to remove “most of the remaining nuclides except for tritium.”) Last December, the Virginia-based National Association of Marine Laboratories put out a position paper arguing that neither TEPCO nor the Japanese government has provided “adequate and accurate scientific data” to demonstrate the project’s safety, and alleged that there are “flaws in sampling protocols, statistical design, sample analyses, and assumptions.” (TEPCO did not respond to a request for comment on these claims.)

If, as these groups worry, the water from Fukushima does end up contaminating the ocean, scientific proof could be hard to find. In 2019, for example, scientists reported the results of a study that had begun eight years earlier, to monitor water near San Diego for iodine-129 released by the Fukushima meltdown. None was found, in spite of expectations based on ocean currents. When the scientists checked elsewhere on the West Coast, they found high levels of iodine-129 in the Columbia River in Washington—but Fukushima was not to blame. The source of that contamination was the nearby site where plutonium had been produced for the nuclear bomb that the U.S. dropped on Nagasaki.

Concerns about the safety of the water release persist in part because of TEPCO’s history of wavering transparency. In 2016, for instance, a commission tasked with investigating the utility’s actions during the 2011 disaster found that its leader at the time told staff not to use the term core meltdown. Even now, the company has put out analyses of the contents of only three-fifths of the dirty-water storage tanks on-site, Ken Buesseler, the director of the Center for Marine and Environmental Radioactivity at the Woods Hole Oceanographic Institution, told me earlier this summer. Japan’s environmental ministry maintains that 62 radionuclides other than tritium can be sufficiently removed from the wastewater using TEPCO’s filtration system, but Buesseler believes that not enough is known about the levels of those contaminants in all of the tanks to make this claim. Instead of flushing the water now, he said, it should first be completely analyzed, and then alternatives to dumping, such as longer on-site storage or using the water to make concrete for tsunami barriers, should be considered.

It looks like that radioactive ship has sailed, however. The release that began in August is expected to continue for as long as the plant decommissioning lasts, which means that contaminated water will continue to flow into the Pacific Ocean at least until the 2050s. In this case, the argument over relative risks—and whether Fukushima's dirty water will ever be made clean enough for dumping to proceed—has already been decided. But parallel, and unresolved, debates attend nuclear power as a whole. Leaving aside the wisdom of building nuclear reactors in an archipelago prone to earthquakes and tsunamis, plants such as Daiichi provide cleaner energy than fossil-fuel facilities, and proponents say they're vital to the process of decarbonizing the economy.

Some 60 nuclear reactors are under construction around the world and will join the hundreds of others that now deliver about 10 percent of global electricity, according to the World Nuclear Association. Meltdowns like the one that happened in Fukushima in 2011, or at Chernobyl in 1986, are very rare. The WNA says that these are the only major accidents to have occurred in 18,500 cumulative reactor-years of commercial operations, and that reactor design is always improving. But the possibility of disaster, remote as it may be in any given year, is ever-present. For instance, the Zaporizhzhia Nuclear Power Station, Europe’s largest, has been threatened by military strikes and loss of electricity during the war in Ukraine, increasing the chances of meltdown. It took just 25 years for an accident at the scale of Chernobyl’s to be repeated.


“We are faced with a difficult choice, either to continue using nuclear power while accepting that a major accident is likely to occur somewhere every 20 or 30 years, or to forgo its possible role in helping slow climate change that will make large swaths of the globe uninhabitable in coming decades,” says Azby Brown, the lead researcher at Safecast, a nonprofit environmental-monitoring group that began tracking radiation from Fukushima in 2011.

The Fukushima water release underscores the fact that the risks associated with nuclear energy are never zero, and that dealing with nuclear waste is a dangerous, long-term undertaking in which mistakes can be extremely costly. TEPCO and the Japanese government made a difficult, unpopular decision to flush the water. In the coming decades, they will have to show that it was the right one.
