
The lab-leak theory of COVID-19’s origins comes in many forms. Here is Donald Trump’s: A scientist in Wuhan walked outside to have lunch, maybe with a girlfriend or something. “That’s how it leaked out in my opinion, and I’ve never changed that opinion,” the president said earlier this month at a press event. Whether something like this really happened was, until this year, a subject of lively debate. These days, it’s being presented as official history. Yes, COVID did come out of a Chinese lab, White House Press Secretary Karoline Leavitt told reporters shortly after Trump’s inauguration. “We now know that to be the confirmable truth.”

Of course, we don’t really know that, and they don’t know it either. Director of National Intelligence Tulsi Gabbard, who has convened yet another lab-leak investigation at Trump’s behest (after many other intelligence assessments led to split results), could only dance around the matter in an interview with Megyn Kelly earlier this month. Has some new and final proof been found? Kelly asked. Gabbard responded: “We are working on that with Jay Bhattacharya,” the director of the National Institutes of Health, “and look forward to being able to share that hopefully very soon.” (Gabbard’s office did not reply to a request for comment.)

Any hedging on the matter of pandemic origins represents a standard view among the experts: We simply aren’t sure. In reporting on this question for the past few years, I’ve spoken with some scientists and pandemic-origins investigators who are confident the coronavirus came out of a Wuhan lab, and with some who say they’re nearly certain that the virus spread to humans from a market stall. I’ve also heard from many others whose appraisals of the odds fall somewhere in between. Their only common ground may be the single plain acknowledgment that the evidence we have is incomplete.

But, despite the well-established data gaps—and in willful disregard of them—the lab-leak theory has become a MAGA theorem. Adherence to it is now a central tenet of the Trump administration: a shibboleth for loyalists, an animating grievance, and, in recent weeks, a stated rationale for punitive reforms. Earlier this month, when the White House proposed an $18 billion cut to the nation’s budget for biomedical research, the lab-leak theory—described as “now confirmed”—was given as a pretext.

There are many reasons to regret this shift toward artificial certainty, starting with the fact that whatever nuance is now attached to the topic of pandemic origins has been hard-won. For much of 2020, a different bullheadedness prevailed: Invocations of the lab-leak theory were often tarred as right-wing propaganda, or even racist lies. At the start of Joe Biden’s presidency, “there was a clear and almost overwhelming leaning towards natural origin,” David Relman, a Stanford microbiologist and former member of the National Science Advisory Board for Biosecurity who has long maintained that a laboratory origin is more likely, told me. This bias weakened over time, as the theory came to have more distance from the Trump administration, and more suggestive bits of circumstantial evidence accrued. In the spring of 2023, the COVID-19 Origin Act, which demanded the declassification of all lab-leak-related intelligence, passed without a wisp of opposition, and in 2024, Relman himself was detailed to the White House as a senior adviser working on pandemic preparedness. “There was a palpable shift to the middle,” he said.

But this equanimity has proved to be short-lived. According to the new administration and its supporters, the laboratory origin is presumptively correct. On covid.gov, which until last month offered only basic patient information (“If you test positive for COVID-19, talk to a doctor as soon as possible”), LAB LEAK now appears in jumbo font across the top—with Trump himself emerging from the gap between the B and L, as if he’d just leaked out himself. “The true origins of COVID-19,” the government website says, beside his foot.

Declaring fealty to this point of view has now become a sacred rite within the GOP, not unlike endorsement of the claim that the 2020 election was a fraud. Plenty of Trump’s most senior appointees have averred that COVID started in a lab. Secretary of Homeland Security Kristi Noem described it as “the truth.” FDA Commissioner Marty Makary has claimed that a laboratory origin is a “no-brainer,” and described it falsely as “now the leading theory among scientists.” Bhattacharya said at an NIH town hall on Monday that he believes the coronavirus was released from a lab, and that it derived from U.S.-funded research. The DHS, FDA, and NIH did not reply to requests for comment.

Health and Human Services Secretary Robert F. Kennedy Jr. has staked out the most extreme position of the bunch, publicly declaring that “SARS CoV-2 is certainly the product of bioweapons research.” As of January, the entire U.S. intelligence community disagreed with this assessment. In an email, an HHS spokesperson told me that Americans “will no longer accept silence, censorship, or scientific groupthink” and “deserve the truth.”

In the background, too, the administration has looked to bring other hard-liners on the lab-leak theory into the fold. Robert Kadlec, for instance, has been nominated for a role at the Department of Defense. A veteran of the first Trump administration who was instrumental in the management of Operation Warp Speed, he is also the author of a report that argues SARS-CoV-2 might have been developed by the Chinese military as a bioweapon that could lower American IQs by fogging up our brains with long COVID. (Kadlec told me that he doesn’t think COVID would be a major part of his portfolio, if he were confirmed—but “it will have relevance with the biosurveillance work that may be done,” he said.)

A former senior scientist at NIH told me about two others whose potential roles in government have not previously been reported. The first is Alina Chan, a molecular biologist at the Broad Institute of MIT and Harvard, the author of Viral: The Search for the Origin of COVID-19, and a dogged advocate for more vigorous investigations of the lab-leak theory and tighter restrictions on virology research. Chan confirmed to me that she is in discussions for a role at the NIH. “I haven’t committed to anything,” she told me, “but I do feel like now that we’ve reached this point, I feel that this is probably the most important thing that I should be doing in my life—doing as much as I can to help the U.S. government prevent future catastrophic lab leaks.”

The former NIH scientist, who requested anonymity in order to preserve professional relationships, also said a contract was under consideration for Bryce Nickels, a Rutgers geneticist and Bhattacharya’s friend and former podcast co-host. Nickels has been notably aggressive on the lab-leak theory, and as an advocate for better oversight of research that could lead to the production of more dangerous pathogens. In his posts on social media, Nickels has called Anthony Fauci a “monster” and maintained that the U.S. is in the business of developing “bioweapon agents.” (Nickels did not reply to questions for this article.)

In principle, the arrival of this lab-leak coterie in Washington could have marked a useful shift in the study of pandemic origins. If the old guard in public health was at times inclined to paper over uncomfortable debates, this new one might be zealously transparent. Chan, for instance, told me that she’d like to see investigators take a closer look at documents and correspondence from EcoHealth Alliance, the NIH-funded nonprofit that was working with the Wuhan Institute of Virology, and spend more effort trying to nail down the very first cases of disease in China. She also thinks the government should release more details of the intelligence community’s assessments, which might explain why different agencies and offices have come to different answers as to what is most likely to have occurred. (The FBI, CIA, and Department of Energy lean toward a laboratory accident of some kind. Five others, including the National Intelligence Council and the Defense Intelligence Agency, are inclined the other way.)

But this administration seems unlikely to make much progress on this front. If anything, its policies and proclamations have only made the subject more intractable. Even before Trump took office, many scientists were reluctant to engage with the topic, for fear of being drawn into what has been a very public and vituperative debate. Now that worry must be multiplied a hundred times. In recent months, the NIH has terminated grants that run afoul of the government’s positions on diversity and gender, and shut off funding to entire research universities. It will soon end the system that U.S. researchers use to share grant funding with foreign collaborators, and has begun suspending collaborations overseas. The risks of stepping out of line have never been so salient.

In the meantime, new government restrictions inspired by the lab-leak theory could serve to make it even harder to fill in the remaining details of what happened in Wuhan. Michael Worobey, an evolutionary biologist at the University of Arizona who has published a string of papers laying out an aggressive case for the market origin, told me that he’d like to see more sampled DNA from wild populations of civets, raccoon dogs, and bamboo rats throughout China. But this sort of work would require close collaboration with Chinese researchers, at just the time when those collaborations are being scrutinized or canceled.

“The administration is developing a very adversarial relationship with the scientific and technical communities,” Filippa Lentzos, a biosecurity researcher and professor at King’s College London, told me. “It’s not a facts-based discussion. There are facts from one side, but not from the other side.” This climate will tend to undermine the work of encouraging more prudence in the labs of those who study risky pathogens, she said. As for the COVID-origins debate itself, she does not expect a satisfying answer. “I think it’s kind of a lost cause.”

Either way, by tying budget cuts and other new restrictions to the lab-leak theory, the administration seems intent on punishing an enormous swath of biomedical researchers for the actions of the tiny handful whose work could even theoretically be tied to the pandemic. “This is the most enormous case of baby and bathwater that I have ever seen,” Relman told me. “The baby is just being shoved down the drain.”

This article was featured in the One Story to Read Today newsletter. Sign up for it here.

It took just a few hours for devotees of the “Make America healthy again” movement to question former President Joe Biden’s prostate-cancer diagnosis. Tumors of the prostate are the most common serious malignancy identified in men: Even aggressive ones like Biden’s are diagnosed roughly 25,000 times a year in the United States. Although Biden’s condition is conventional, a certain segment of the public has been beguiled into blaming mainstream medicine for every unexpected death or health-related tragedy it comes across. The anti-vaccine community, including the group formerly led by Health Secretary Robert F. Kennedy Jr., has spent years promoting the idea that mRNA vaccines for COVID regularly push tumors into overdrive. (Rare anecdotes aside, there is no evidence to support this fear.) Now, predictably, the claim is cropping up again on social media. “Prostate cancer takes years to metastasize to bone unless super aggressive or turbo cancer,” the Kennedy-endorsed physician Craig Wax suggested.

That an 82-year-old man who had aged out of prostate-cancer-screening tests has been found to have an advanced malignancy should not be surprising. In my experience as a doctor who diagnoses cancer, many tumors are discovered out of the blue. Prostate cancer in particular may not become apparent until an individual goes to his doctor with a minor complaint—in Biden’s case, urinary symptoms, according to the announcement—only to have further testing discover the worst. (Biden’s cancer isn’t curable; people with Stage 4 disease like his live for about three years on average—although the outlook is worse for men who are more than 80 years old.)

Cancer is an enigmatic disease, one that is simultaneously influenced by genetics, environment, personal habits, the aging process, and—not to be discounted—bad luck. But its muddled nature can be uncomfortable for those who share the view that nearly all sickness is preventable with virtuous behavior and a clean environment. According to Kennedy, the current leader of the U.S. health-care system, tumors are a product of not only the vaccines in our arms, but also the fluoride in our water, the toxins in our school lunches, the signals from our phones, and surely many other ubiquitous aspects of modernity. Indeed, in MAHA land, cancer is not just a misfortune, but a cover-up. Before he became health secretary, Kennedy ominously suggested that doctors might find its cause in the “places they dare not look.”


It’s not just Kennedy. Trump’s health-care team routinely draws from the logic of this wellness-paranoia complex. Last year, Marty Makary, who has since become the FDA commissioner, told a group of MAHA wellness influencers convened by Senator Ron Johnson that cancer is a consequence of “low-grade chronic inflammation” induced by a poisoned food supply. (Years ago, he also speciously declared that undetected medical errors were a leading cause of death.) Casey Means, Trump’s new nominee for surgeon general, has claimed that “the biggest lie in healthcare” is that high blood sugar, malignant tumors, and clogged arteries “are totally different diseases requiring separate doctors and pills for life.” The truth is “simpler than we are told,” she said. (Buy her book to find out what it is.) And Mehmet Oz, the former lifestyle guru and current Medicare administrator, recently informed Americans, “It’s your patriotic duty to be as healthy as you can. It’s our job to help you get there, make it easy to do the right things.” Never mind that you can do everything right and still get sick. (For now, none of the administration’s major health officials has weighed in on Biden’s diagnosis.)


Joe Biden is no stranger to tough luck. His son Beau died of a brain tumor at age 46 in 2015, leading to Biden’s participation in a government-funded “cancer moonshot” to combat the condition. The moonshot initiative was an old-fashioned approach to medicine, one that sought to ameliorate illness through advances in science and technology. RFK Jr. and his MAHA acolytes are naturally suspicious of this approach. Now their weird discomfort with disease—and their outré views on cancer in particular—is being refracted through a sea of false, indecent speculations. No, Biden’s cancer was not “courtesy of the mRNA shot.” One can only hope that the government’s bevy of vaccine skeptics will be able to resist the siren’s call to join in saying otherwise.

Robert F. Kennedy Jr.’s anti-vaccine activism is not what you’d call subtle. For decades, he has questioned the safety and effectiveness of various childhood vaccines, insisting that some of them cause autism, lying about their ingredients, and dismissing troves of evidence that counter his views. However much he might deny it, Kennedy is “an old-school anti-vaxxer,” Dorit Reiss, an expert in vaccine law at UC Law San Francisco, told me.

When he became the United States’ health secretary, Kennedy brought few of his staunchest and oldest allies in the anti-vaccine movement with him. Instead, the Department of Health and Human Services is filling with political appointees whose views of vaccines run less obviously counter to evidence than Kennedy’s. But these officials, too, question the safety and usefulness of at least some vaccines, and seek to slow or stop their use.

Among those officials are Marty Makary, the new FDA commissioner, and Tracy Beth Høeg, his new special assistant; Vinay Prasad, the new director of the FDA center that oversees the regulation of vaccines; and Jay Bhattacharya, the new director of the National Institutes of Health. Unlike Kennedy, they hold advanced degrees in science, medicine, or public health, and have published scientific papers—often in direct collaboration with one another. And they have each endorsed at least some vaccines for children, or even pushed back on some of Kennedy’s most flagrant vaccine misinformation—criticizing, for instance, his false claims that MMR shots cause autism. When reached for comment by email, Emily Hilliard, HHS’s deputy press secretary, described the cohort to me as “credentialed physicians and researchers with long-standing commitments to evidence-based medicine” who “were brought into HHS to restore scientific rigor, transparency, and public trust—not to blindly affirm the status quo.” (Emails to the FDA and the NIH requesting interviews with each of these four officials either went unanswered or redirected me to HHS.)

These new appointees can also be described, more succinctly, as COVID contrarians who have questioned the worth of vaccines. Their approach to immunization policy is less extreme, more engaged with evidence, and more academic than Kennedy’s. And precisely because these officials’ perspectives carry a sheen of legitimacy that most of the secretary’s usual allies lack, they could be more effective than Kennedy at undermining America’s protections against disease.


In sharp contrast to Kennedy, this new cohort—you could call them the neo-anti-vaxxers—are generally established in their respective scientific fields. Makary, for instance, has been hailed for pioneering several surgical procedures; in the 2010s, Prasad, a hematologist-oncologist, gained recognition for his rigorous—albeit acerbic—takes on precision medicine and cancer drugs. And each has acknowledged, in at least some capacity, the lifesaving powers of immunization. When they’ve argued about vaccines, they’ve often done so in respected scientific venues, and performed their own analyses of the evidence.

No medical intervention is without risk, and on the broadest level, what these officials are asking for appears to fit the essential tenets of public health: thorough testing of vaccines before they’re debuted, and careful scrutiny of each immunization’s relative pros and cons. But these officials’ past actions show that they haven’t always weighed those scales fairly or objectively.

All four of these officials began to publicly coalesce in their view of vaccination in the early months of COVID. None of them had trained as infectious-disease specialists or vaccinologists. But in their public comments, and in several publications, they contended that the virus was far less dangerous than most public-health officials thought, and that the measures that the U.S. was taking against it were far too extreme. They argued against mandates and boosters, especially for children and for young and healthy adults; they exaggerated the side effects of the shots, extolled the benefits of acquiring immunity through infection, and dismissed the notion that people who’d already had COVID should still get shots later on. In October 2020, Bhattacharya and a group of colleagues advocated for reopening society before vaccines had debuted; Makary, although initially supportive of COVID vaccines, went on to praise the Omicron variant of the virus—which at one point killed an average of 2,200 Americans each day—as “nature’s vaccine.” Prasad, meanwhile, has said that COVID-vaccine makers should be sued for the rare side effects caught and disclosed with standard monitoring. And Høeg, who’d previously worked with Florida Surgeon General Joseph Ladapo, influenced his decision to recommend against the COVID vaccine for healthy children.

Plenty of Americans were reasonably nervous about taking a vaccine developed at record speed, with new technology, under conditions of crisis. But Bhattacharya, Makary, Prasad, and Høeg went further than simply recommending caution; they questioned the legitimacy of the data supporting repeat immunization and at times actively advised against it.

Their criticism of vaccination has transcended COVID. Prasad allows that some vaccines are important but has also questioned the value of RSV vaccines during pregnancy; he’s argued that the evidence for annual flu shots is “extremely poor,” and disparaged doctors who consider all vaccines lifesaving. He has suggested that Kennedy randomize different parts of the U.S. to different childhood vaccine schedules, to determine an optimal dosing strategy—an experiment that could keep kids from accessing safe and effective shots in a timely fashion. Høeg, too, has called for an overhaul of how vaccines are tested, approved, and regulated in this country. And she has sharply criticized the American pediatric immunization schedule for including more vaccines than the one in Denmark, where she holds citizenship. Makary, while more tempered in his public comments, has still declined at times to urge parents to vaccinate their children against measles, and downplayed the virus’s risks.

As a group, these officials have generally been more sanguine about Kennedy’s false claims about vaccines and autism than other researchers have. Bhattacharya, for example, said at his confirmation hearing that he is convinced that vaccines don’t cause autism, but added that he wouldn’t reject more studies on the issue.

Hilliard, at HHS, wrote that, by interrogating vaccines, these officials are doing only what science requires: “Questioning the quality of data, highlighting the limits of past decisions, or advocating for better trials is not anti-science—it is the gold standard of science.” But truly rigorous science also rests on the foundations of previous data—and a willingness to accept those data, even if they conflict with one’s priors. Many of the questions these officials are asking have already been repeatedly asked and answered—and the four of them have been criticized by public-health experts for their tendency to, like Kennedy, ignore reams of evidence that do not support their views. Some of their suggestions for revamping vaccines would also put Americans at unnecessary risk: Asking certain American jurisdictions to delay childhood vaccinations, or perhaps even skip certain shots, could leave entire communities more vulnerable to disease.

Fundamentally, they, like Kennedy, believe that vaccines should generally be more optional for more Americans—a perspective that elides the population-level benefits of widespread immunity against disease. And fundamentally, they, like Kennedy, have argued that vaccines that have passed rigorous tests of safety and efficacy, been successfully administered to hundreds of millions of people, and saved lives around the world are not safe or necessary. If those stances are further codified into policy, they could waste the country’s resources on unnecessary testing, produce misleading data, and erode confidence in public health as a whole.


Already, these officials have turned their new powers on COVID vaccines, some of which are still authorized only for emergency use. The FDA has delayed full approval of the Novavax COVID vaccine and is reportedly asking for a new—and very costly and laborious—randomized controlled trial on the shot’s effectiveness, even though the shot has already been through those sorts of studies and been safely administered to people for years. The agency could also require all COVID-vaccine makers to submit new effectiveness data for shots updated to include new variants of the virus—essentially treating them as brand-new vaccines and potentially making it nearly impossible, logistically, to produce new formulations of the COVID vaccine each fall. (Experts worry that the agency will apply the same logic to flu vaccines, with the same result.) The FDA could also go as far as revoking emergency-use authorizations, such as the one for pediatric COVID vaccines, which Prasad has said should be stricken from the childhood immunization schedule.

These officials’ powers have limits: The CDC (which still doesn’t have a permanent director), not the FDA, recommends the childhood immunization schedule. At a meeting last month of the CDC’s advisory council on immunization practices, though, Høeg came as the FDA’s liaison—an unconventional choice, Jason Schwartz, a vaccine-policy expert at Yale, told me, for a role historically filled by a career scientist from the Center for Biologics Evaluation and Research, the FDA center Prasad runs. (After deferring to HHS, the FDA responded to a request for comment by pointing out that Høeg holds the title of “senior clinical science adviser” at CBER—a title she was apparently given after the meeting.) Grace Lee, who previously chaired the committee, told me that the FDA liaison is “not usually an active participant.” And yet, Høeg pointedly questioned the safety and effectiveness of multiple vaccines, including COVID shots—the sort of contribution that could influence the discussion, the ultimate vote, and, potentially, the eventual CDC director’s decision to accept the panel’s advice, Lee and Schwartz said.

Bhattacharya’s sway, too, is likely to expand far past his own agency. Under this administration, the NIH has already canceled grants for hundreds of infectious-disease-focused studies, including dozens that look at vaccine uptake and hesitancy. Now, with Bhattacharya in charge, the agency is leading a $50 million study into the causes of autism, as directed by Kennedy—who already seems set on the answers to that question. When asked in a recent interview with Politico whether mRNA-focused science might be defunded, Bhattacharya said that “many, many people now think that mRNA is a bad platform.” If the U.S. ignores vaccine hesitancy—or if researchers have fewer resources to develop new vaccines—immunization in this country will stall, regardless of who runs the FDA or the CDC.  

Modern American politics does still consider some positions to be too anti-vaccine: Trump’s original pick for CDC director, Dave Weldon, who has repeatedly promoted the debunked idea of a connection between vaccines and autism, had his nomination withdrawn by the White House in March. Kennedy’s own confirmation hearing was contentious, and heavily focused on vaccines; in official press statements and in interviews since then, he has softened some of his stances—acknowledging the protective powers of the MMR shot, for instance—to the point where he has angered his extremist base. Bhattacharya and Makary faced less resistance during their own hearings, during which they both praised the importance of vaccines. The vaccine distortions they’ve pushed are less blatant than Kennedy’s, but also more difficult to combat.

When Kennedy began his new position, some feared he would immediately take a sledgehammer to American vaccines. The moves he and his new team are making have stopped short of obliterating access to shots; they’re more about creating new roadblocks, Luciana Borio, a former acting chief scientist at the FDA, told me. But even seemingly minor hurdles can mark a substantial philosophical shift: Where HHS once treated the U.S.’s vaccines as well-vetted, lifesaving technologies, it’s now casting them as dubious tools with a murky track record, pushed onto the public by companies rife with corruption. By sowing doubt that vaccines can safely protect people, HHS’s lesser skeptics will help legitimize Kennedy—until all of their views, fringe as they may have begun, start to feel entirely reasonable.

Photographs by Sarah Blesener

The Toyota pickup hit the tree that May morning with enough explosive force to leave a gash that is still visible on its trunk 39 years later. Inside the truck, the bodies of three teenage boys hurtled forward, each with terrible velocity.


One boy died instantly; a second was found alive outside the car. The third boy, Ian Berg, remained pinned in the driver’s seat, a bruise blooming on the right side of his forehead. He had smacked it hard—much harder than one might have guessed from the bruise alone—which caused the soft mass of his brain to slam against the rigid confines of his skull. Where brain met bone, brain gave way. The matter of his mind stretched and twisted, tore and burst.

When the jaws of life freed him from the wreckage, Ian was still alive, but unconscious. “Please don’t die. Please don’t die. Please don’t die,” his mother, Eve Baer, pleaded over him at the hospital. She imagined throwing a golden lasso around his foot to keep him from floating away.

And Ian didn’t die. After 17 days in a coma, he finally opened his eyes, but they flicked wildly around the room, unable to sync or track. He could not speak. He could not control his limbs. The severe brain injury he’d suffered, doctors said, had put him in a vegetative state. He was alive, but assumed to be cognitively gone—devoid of thought, of feeling, of consciousness.

Eve hated that term, vegetative—an “unhuman-type classification,” she thought. If you had asked her then, in 1986, she would have said she expected her 17-year-old son to fully recover. Ian had been handsome, popular, in love with a new girlfriend—the kind of golden boy upon whom fortune smiles. At school, he was known as the kid who greeted everyone, teachers included, with a hug. He and his two friends in the car belonged to a tight-knit group of seniors. But on the day he would have graduated that June, Ian was still lying in a hospital bed, his big achievement being that he’d finally made a bowel movement.

“What kind of life is that?” Ian’s brother Geoff remembers thinking. When he first arrived at the hospital, he had looked around the room for a plug to pull. The two brothers had talked about scenarios like this before, Geoff told me: “If anything ever happens to me and I can’t wipe my ass, make sure you kill me.” Angry that their mother was keeping his brother alive, Geoff fled, moving for a time to St. Thomas.

Three months after the accident, when doctors at the hospital could do no more for Ian, Eve took him home. She was adamant that he live with family, rather than under the impersonal care of a nursing home. That she had ample space for Ian and all of his specialized equipment was fortuitous. A few weeks before the accident, Eve’s husband, Marshall, had stumbled upon the Rainbow Lodge, an old hotel for hunters and fishers, for sale near Woodstock, New York. He loved the idea of a compound for their big blended family—his two grown children plus nieces and nephews, as well as Eve’s four kids, of whom Ian is the youngest. The sale was finalized while Ian was in the hospital.

At the lodge, Eve and a rotating cast of caretakers kept Ian alive: bathing him, pureeing home-cooked meals for his feeding tube, changing the urine bag that drained his catheter. She also devised a busy schedule of therapies, anchored by up to six hours a day of psychomotor “patterning”—an exercise program she’d read about in which a team of volunteers took each of Ian’s limbs and moved them in a pattern that mimicked an infant learning to crawl. Friends and acquaintances came to help with patterning; some started living in the lodge’s guest rooms, staying for months or even years. They formed a kind of unconventional extended family, with Ian at the center. Every Sunday, Eve cooked big dinners for the crowd.

Sarah Blesener for The Atlantic
The tree Ian struck with a pickup truck in 1986 still bears a scar from the accident.

The patterning exercises, which are not based on science, ultimately did not really help Ian. But his mother didn’t dwell on this. She made regular calls to the National Institutes of Health to inquire about the latest brain-injury research. And where mainstream medicine failed, Eve—who had moved to Woodstock in the ’60s as a “wannabe bohemian slash beatnik”—turned enthusiastically to alternatives. Ian was treated by the spiritual guru Ram Dass; a “magic man” with a pendulum; a craniosacral therapist; a Buddhist monk; Filipino “psychic surgeons”; and a healer in Chandigarh, India. Eve and Marshall took him on the 7,000-mile journey to India themselves, pushing him in a rented collapsible wheelchair. When, after all of this, Ian’s condition still did not improve, Eve became angry. It was one of the rare times that she allowed disappointment to puncture her relentless optimism.

Still, like so many other family members of vegetative patients, she held on to a mother’s belief that Ian could understand everything around him. She took care, when shaving him, to leave the wispy mustache he had been trying to grow. When his high-school friends went to see the Grateful Dead, she brought him along in his wheelchair and a tie-dyed shirt. She kept believing for herself as much as for Ian: If her son was aware, it would mean her gestures of love were not unseen, her words not unheard.

Science would take decades to catch up with Eve, but she turned out to be right in one crucial respect: Ian is still aware. Doctors now agree that he can see, he can hear, and he can understand, at least in some ways, the people around him.

Over the past 20 years, the science of consciousness has undergone a reckoning as researchers have used new tools to peer inside the brains of people once thought to lack any cognitive function. Ian is part of a landmark study published in The New England Journal of Medicine last year, which found that 25 percent of unresponsive brain-injury patients show signs of awareness, based on their brain activity. The finding suggests that there could be tens of thousands of people like Ian in the United States—many in nursing homes where caretakers might have no clue that their patients silently understand and think and feel. These patients live in a profound isolation, their conscious minds trapped inside unresponsive bodies. Doctors are just beginning to grasp what it might take to help them.

photo of young woman smiling with hands on top of young man's head, his eyes open
photo of person with yellow scarf wrapped around head, cradling the head of young man lying under blanket
photo of white-haired woman in ponytail cradling bandaged head of man with breathing mask and monitors in her arms
Sarah Blesener for The Atlantic; Courtesy of the Baer family
After Ian was discharged from the hospital, Eve and a rotating cast of caretakers and alternative healers tried to help him recover. Throughout, Eve held on to a mother’s belief that Ian could understand everything around him.

For Ian, the signs were there, if not right at the beginning, at least early on. Three years after the accident, he began to laugh.

Eve was in the kitchen with him, idly singing the Jeopardy theme song in a silly falsetto when she heard it: “Ha!” Laughter? Laughter! “Other than a cough, it was the first sound I heard from him in three years,” she told me. In time, Ian started laughing at other things too: stories Eve made up about a cantankerous Russian named Boris, the word debris, pots clanging, keys jangling. Fart and poop jokes were a perennial favorite; his brain seemed to have preserved a 17-year-old’s sense of humor. His friends and family took that to mean the Ian they knew was still in there. What else might he be thinking?

At the time, Ian was not regularly seeing a neurologist. But even if he had been, most neurologists in the ’80s would not have known what to make of his laughter; it flew in the face of conventional wisdom.

Doctors first defined the condition of the persistent vegetative state in 1972, less than a decade and a half before Ian’s accident. Fred Plum and Bryan Jennett coined the term to describe a perplexing new class of patients—people who, thanks to advances in medical care, were surviving brain injuries that used to be fatal, but were still left stranded somewhere short of consciousness. This condition is distinct from coma, a temporary state in which the eyes are closed. Vegetative patients are awake; their eyes are open, and they may be neither silent nor still. They can moan and move their limbs, just without purpose or control. And while their bodies continue to breathe, sleep, wake, and digest, they seem to have no connection to the outside world. Today, experts sometimes refer to the vegetative state as “unresponsive wakefulness syndrome.”

Back then, the two doctors also distinguished it from locked-in syndrome, which Plum had helped name a few years prior. Locked-in patients are fully conscious though immobile, typically except for their eyes. (Jean-Dominique Bauby wrote his famous 1997 memoir about locked-in syndrome, The Diving Bell and the Butterfly, by blinking out one letter at a time.) In contrast, Plum and Jennett considered the vegetative state “mindless,” with no cognitive function intact.

What, then, could the laughter mean? By the ’90s, some of the most prominent experts on consciousness—including Plum and Jennett themselves—had begun to realize that they had perhaps too categorically or hastily dismissed patients diagnosed as vegetative. Researchers were documenting flickers of potential consciousness in some supposedly vegetative patients. These patients could utter occasional words, grasp for an object every now and then, or seem to answer the odd question with a gesture—suggesting that they were at least sometimes aware of their surroundings. They seemed to be neither vegetative nor fully conscious, but fluctuating on a continuum.

This in-between space became formally recognized in 2002 as the “minimally conscious state,” in an effort led by Joseph Giacino, a neuropsychologist who specializes in rehabilitation after brain injury. (Coma, vegetative, and minimally conscious are sometimes collectively called “disorders of consciousness.”)

photo of hand-written card with words 'LOVE IS LOVE' and 'NOT FADE AWAY' alongside heart with arrow on left page and photo of people with flower on right
Sarah Blesener for The Atlantic

One day in spring 2007, Marshall, Ian’s stepfather, slipped on a mossy stone and fractured his hip. As he and Eve waited for an ambulance, the phone rang. Giacino had heard about Eve’s NIH inquiries, and he was interested in meeting Ian—he wondered if the minimally conscious diagnosis might apply to him. If so, Ian could qualify for a new experimental trial.

Giacino didn’t make any promises. Still, after all those years, Eve told me, “he was the first voice of positive possibility that I heard.” So even as Marshall lay next to her with his broken hip, neither of them dared hang up the phone.

A year earlier, in 2006, an astonishing case report had come out from researchers led by Adrian Owen, a cognitive neuroscientist at the University of Cambridge; it suggested that even vegetative patients could retain some awareness. Owen found a 23-year-old woman who had been in a car accident. Months later, she still had no response on behavioral exams. But in an fMRI machine, her brain looked surprisingly active: When she was asked to imagine playing tennis, blood flowed to her brain’s supplementary motor area, a region that helps coordinate movement. When she was asked to imagine visiting the rooms of her house, blood flowed to different parts of her brain, including the parahippocampal gyrus, a strip of cortex crucial for spatial navigation. And when she was told to rest, these patterns of brain activity ceased. Based on the limited window of an fMRI scan, at least, she seemed to understand everything she was being asked to do.

“Unsettling and disturbing” is how one neurologist described the implications of the study to me. Also: controversial. Another doctor recounted a scientific meeting soon after where the speakers were split 50–50 on whether to accept the results. Was the fMRI finding just a fluke? Owen did not inform the woman’s family of what he found, because the study’s ethical protocol was ambiguous about how much information he could share. He wishes he could have. The woman died in 2011, without her family ever being told that she might have been aware.


Over time, Owen and his group identified more patients with what they came to call “covert awareness.” Some were vegetative, while others were considered minimally conscious, based on behaviors such as eye tracking and command following. The researchers found that outward response and inner awareness were not always correlated: The most physically responsive patients were not necessarily the ones with the clearest signs of brain activity when asked to imagine the tasks. Covert awareness, then, can be detected only using tools that peer at a brain’s inner workings, such as fMRI.

In 2010, one of Owen’s collaborators, the Belgian neurologist Steven Laureys, asked a minimally conscious patient, a 22-year-old man, a series of five yes-or-no questions while he was in an fMRI machine, covering topics such as his father’s name and the last vacation he took prior to his motorcycle accident. To answer yes, the patient would imagine playing tennis for 30 seconds; to answer no, he would imagine walking through his house. The researchers ran through the questions only once, but he got them all right, the appropriate region of his brain lighting up each time.

It is hard to say what experience of human consciousness some colored pixels on a brain scan really depict. To answer intentionally, the patient would have had to understand language. He would also have needed to store the questions in his working memory and retrieve the answers from his long-term memory. In my conversations with neurologists, this was the study they cited again and again as the most compelling evidence of covert awareness.

A few years later, using the same yes-or-no method, Owen found a vegetative patient who seemed to know about his niece, born after his brain injury. To Owen, this suggested that the man was laying down new memories, that life was not simply passing him by. In yet another case, Owen used fMRI not just to quiz a 38-year-old vegetative man, but to actually ask about the quality of his life 12 years post-injury: Was he in pain right now? No. Did he still enjoy watching hockey on TV, as he had before his accident? Yes.

Most researchers I spoke with were reluctant to speculate about the inner life of these brain-injury patients, because the answer lies beyond any known science. The brains of minimally conscious patients do activate in response to pain or music, Laureys told me, but their experience of pain or music is likely different from yours or mine. Their state of consciousness may resemble the twilight zone of drifting in and out of sleep; it almost certainly differs from person to person. Owen believes that some of his vegetative patients may actually be “completely conscious,” akin to a locked-in person who is fully aware, but cannot move even their eyes. Until that is proved otherwise, he sees no reason not to extend them the benefit of the doubt.

Several months after the phone call from Giacino’s office, Ian’s family made the trip to New Jersey to meet the researcher. In the exam room, Giacino put Ian through an intense battery of tests. He found that Ian could intermittently reach on command for a red ball. He laughed at loud noises, such as keys jangling, which Giacino said could be a simple response to the sound. But Ian also laughed appropriately at jokes, especially adolescent ones, as if he understood humor and intent. These behaviors were enough to qualify Ian for a brand-new diagnosis two decades after his accident: not vegetative, but minimally conscious.

Giacino’s collaborators were eager to put Ian in an fMRI machine, to see what might be happening inside his brain. On a separate trip, this time to an fMRI facility in New York City, his family met Nicholas Schiff, a neurologist at Weill Cornell and a protégé of Fred Plum’s. Schiff, too, was intrigued by Ian’s laughter, and the possibility that he understood more than he could physically let on. Schiff’s team showed Ian pictures and played voices—to see whether his brain could process faces and speech—and asked him to imagine tasks such as walking around his house.

Ian’s brother Geoff was also at this scan, having by then returned to New York. Crammed into the small fMRI control room with all the scientists peering at Ian’s brain, he remembers being incredulous at the things they wanted his brother to imagine. “You really think he can understand you?” he asked.

photo of man in suit and tie
image of brain scan with brain outlined in purple and two regions glowing bright yellow alongside printed notes
Sarah Blesener for The Atlantic
Left: Nicholas Schiff, a neurologist at Weill Cornell, was intrigued by the possibility that Ian understood more than he could physically let on. Right: A brain scan of Ian’s.

The scientists did. They believed Ian still retained some kind of consciousness. They also thought there was a chance, with luck and the right tools, of unlocking more. This had happened before. In some extraordinary patients, the line between conscious and unconscious is more permeable than one might expect.

In 2003, Terry Wallis, in Arkansas, suddenly uttered “Mom!” after 19 years as a vegetative patient in a nursing home. Then he said “Pepsi”—his favorite soft drink. After that, his mother took him home. Wallis couldn’t move below his neck and he struggled with his memory and impulse control, but he began to speak in short sentences, recognized his family, and continued to request Pepsis. In retrospect, he probably had not been vegetative at all, but minimally conscious during those first 19 years. His mom had seen signs that others at the nursing home had not: Wallis occasionally tracked objects with his eyes, and he became agitated after witnessing the death of his roommate with dementia.


Slowly, over time, Wallis’s brain had recovered to the point of regaining speech. When Schiff and his colleagues later scanned him, they found changes that suggested neuronal connections were being formed and pruned decades after his injury. “Terry changed what we thought about what might be possible,” Schiff told Ian’s family.

There was also Louis Viljoen, in South Africa, who in 1999 began speaking when put on zolpidem, better known as Ambien, a sedative that was, ironically, supposed to put him to sleep. He, too, had been declared vegetative—a “cabbage,” according to one doctor—after being hit by a truck. Within 25 minutes of taking zolpidem, his mother recalled, he started making his first sounds, and when she spoke, he responded, “Hello, Mummy.” Then the effects of the drug faded as rapidly as they’d come on.

Viljoen would continue taking zolpidem every day; he eventually recovered enough to be conscious even without the drug, but a daily dose reanimated him further. “After nine minutes the grey pallor disappears and his face flushes. He starts smiling and laughing. After 10 minutes he begins asking questions,” a reporter who met him in 2006 wrote. Several other drugs, including amantadine and apomorphine, can have similarly arousing effects, though none has worked in more than a tiny sliver of patients. In certain people, for reasons still not understood, they might activate a damaged brain just enough to kick it into gear, “like catching a ride on a wave,” Schiff, who has studied patients on Ambien, told me.

Greg Pearson, in New Jersey, had electrodes implanted in his thalamus in 2005 as part of a study by Schiff and Giacino. The thalamus is a walnut-size region of the brain that sits above the opening at the bottom of the skull, where the spinal cord meets the brain, a position that makes it particularly vulnerable during injury: When a bruised brain swells, it has nowhere to go but down, putting tremendous pressure on the thalamus. Because the thalamus usually regulates arousal—Schiff likens it to a pacemaker for the brain—damage to this region can induce disorders of consciousness. Schiff wondered if stimulating the thalamus could restore some of its function. And indeed, when the electrodes were turned on during surgery, Pearson blurted out his first word in many years: “Yup.” He was eventually able to recite the first 16 words of the Pledge of Allegiance and tell his mother, “I love you.”

A damaged brain, in some cases, might be more like a flickering lamp with faulty wiring than a lamp that has had its wiring ripped out. If so, that circuitry can be manipulated. The neurosurgeon Wilder Penfield realized this decades ago, when he discovered that he could make a conscious patient fall unconscious by gently pressing on a certain area of the brain.

That our consciousness might actually be dynamic, that it can be dialed up and down, is not so strange if you consider what happens every day. We become unconscious when we sleep at night, only to reanimate the next day. Could this dialing back up be artificially controlled when the brain is too damaged to do so itself?

After the publication of the study on Pearson, in 2007, Schiff couldn’t keep up with all the calls to his office. He and his colleagues were now looking for more patients, including people who were even less responsive initially than Pearson—people whose condition would test the extent of what deep-brain stimulation using electrodes could do.

Given his limited but still discernible responses, Ian seemed like the perfect candidate. The researchers were careful not to make guarantees. But Eve harbored hope that Ian could one day tell her, “I love you.” His family agreed to join the trial.

I’ll cut to the chase: Ian’s deep-brain stimulation did not work. At one point during the surgery to implant the electrodes, he said the only intelligible word he’s uttered since 1986—“Down,” in response to being asked, “What is the opposite of up?” Then he lapsed into silence once again. In the months that followed, therapists spent hours and hours asking Ian to move his arm or respond to questions, to no avail.

Geoff, who worked in video production at the time, captured the process on film. He had intended to make a documentary about what he hoped would be his brother’s recovery. In addition to filming Ian in the trial, he’d taped interviews with family members, asking what hearing Ian speak again would mean to them.

He never did make the documentary. Without a miraculous recovery, he felt, the story was just too sad. This past winter, Geoff dug up the old camcorder tapes, and we watched the footage together on the living-room TV. He hadn’t seen it since he filmed it nearly 20 years ago. “Tough to watch,” he said more than once.

photo of living room with shelves and a TV displaying video of young man at desk giving presentation
Sarah Blesener for The Atlantic
At the time of his accident, Ian—seen here in a video from a high-school class—was a month away from graduation.

After Ian went home, life at the Rainbow Lodge went on largely as it had before. Something did change, though—specifically for Geoff. Knowing that scientists now believed Ian retained some awareness transformed how he related to his younger brother. He started spending more time with Ian, and the two regained a brotherly intimacy. “Ian, are you conscious or are you a vegetable?” Geoff teased during one of my visits. “I think you’re a vegetable. I think you look like a kumquat.”

Geoff eventually took on more and more of Ian’s care; he is now paid through Medicaid as a part-time caregiver, helping Eve, who is 86. Geoff is the one who puts Ian to bed every evening, smoothing out the sheets to make sure he does not lie on a wrinkle all night long. He tucks an extra pillow on Ian’s left side, as his head has a tendency to droop that way.

For Eve, caregiving came naturally; she told me her ambition in life was always to be a mother. She had married at 18 and had three children in quick succession. When their marriage became strained, she and her first husband decided to try an open relationship. In 1964, Eve got a job waitressing at a Woodstock café whose owners let a singer named Bob Dylan live upstairs. She flirted with men. She flirted with Dylan, who took her to play pool and showed her pages of his book in progress, Tarantula. (“Bob was much cuter,” she says of Timothée Chalamet, who starred in the recent Dylan biopic.) Eventually she got divorced; her second husband was Ian’s father. Her third, Marshall, was an artist with a successful marketing career in New York City. Eve and Marshall planned to spend more time there after Ian graduated. The car crash upended everything.

Afterward, Eve threw herself back into the role of devoted mother. (Marshall helped take care of Ian until his death in 2011.) Even now, with Geoff and two nurses who cover five days a week, Eve has certain tasks she insists on carrying out herself. She trims Ian’s nails and hair, now thinning on top to reveal the faint scars from his deep-brain-stimulation surgery. She shaves him. When she speaks to her son, she leans over close, their matching Roman noses almost touching. In these moments, Ian will vocalize—“Aaaaaahh ahhhhhh”—like he is trying to talk with his mother.

photo of man smiling and woman laughing, standing next to motorized chair with young man with mustache, eyes and mouth open.
Sarah Blesener for The Atlantic; Courtesy of the Baer family
Ian’s stepfather, Marshall, cared for him alongside Eve until his death in 2011.

“I think Ian lived for my mom,” Geoff told me at one point, thinking back to the hospital, where Eve pleaded over his unconscious body, holding on to Ian with her imagined golden lasso. She had promised Ian then that she would do anything for him if he lived—hence the healers, the studies, and her devotion to him for the past 39 years.

While Ian was recovering from the deep-brain-stimulation surgery, Eve came across a poem by E. E. Cummings that affected her so deeply, she took to reading it aloud to him in a morning ritual. The second stanza goes:

(i who have died am alive again today,
and this is the sun’s birthday;this is the birth
day of life and of love and wings:and of the gay
great happening illimitably earth)

Schiff kept probing the outer limits of consciousness in patients with severe brain injuries. Last year, he, along with Owen, Laureys, and other researchers in the field, published the largest and most comprehensive study yet of covert awareness. This is the New England Journal of Medicine study that included Ian, and found one in four vegetative or minimally conscious brain-injury patients to have covert awareness. (Schiff prefers the term cognitive motor dissociation, to highlight the disconnect between the patients’ mental and physical abilities.) “Our experience was Wow, it’s not so hard to find these people,” Schiff told me.

The researchers do not believe that everyone with a disorder of consciousness is somehow cognitively intact—a majority are probably not, according to this study. The most important takeaway, researchers say, is simply this: People with covert awareness exist, and they are not exceedingly rare.


These findings raise profound questions about our ethical obligation to people with severe brain injuries. In his 2015 book, Rights Come to Mind, Joseph Fins, a medical ethicist at Cornell who frequently collaborates with Schiff, argues that such patients deserve better than to be “cast aside by an indifferent health care system,” or left to languish as mere bodies to feed and clean. “For so long, I’d been stripped of any identity,” one brain-injury patient, Julia Tavalaro, wrote in her memoir, Look Up for Yes. “I had begun to think of myself as less than an animal.” She was able to write the book after a particularly observant speech therapist finally noticed, six years after her injury, that she could communicate with her eyes. But too often, Fins told me, patients are shunted into long-term-care homes that cannot provide the attention and rehab that could uncover subtle signs of consciousness.

These patients are also especially vulnerable to abuse. In 2019, staff at a facility in Phoenix called 911 in a panic after a patient—who was reportedly vegetative but may have been minimally conscious—unexpectedly gave birth. No one at the facility, where she had lived for years, even knew she was pregnant until a nurse saw the baby’s head. She had been raped by a male nurse.

In some cases, patients with covert awareness may never make it to long-term care—they simply die when life support is withdrawn at the hospital. “If you went back 15, 20 years, there was a tremendous amount of nihilism” among doctors, says Kevin Sheth, a neurologist at Yale. Even as medicine has become less fatalistic about brain injury, hospitals still rarely look for covert awareness using fMRI. ICU patients may be too fragile to be moved to an fMRI machine, and the technology is too cumbersome and expensive to bring into the ICU.

Varina Boerwinkle, a neurocritical-care specialist now at the University of North Carolina, believes the technology should be routinely used with brain-injury patients. She told me about a 6-year-old boy she treated at a previous job in 2021, who had been in a car crash. Her initial impression was that he would not survive, and his first fMRI scan showed no signs of awareness. Boerwinkle began to wonder if doctors were prolonging his suffering. But the team repeated the test on day 10, in anticipation of discussing withdrawal of care with the boy’s parents. To Boerwinkle’s astonishment, his brain was now active: He could respond when asked to perform specific mental tasks in the fMRI.

At first, Boerwinkle wasn’t sure what to say to the boy’s family about the fMRI. Though it implied that he still had cognitive function, it did not guarantee that he would ever recover enough to respond physically or verbally. Her colleagues have seen families struggle to care for a child with a severe brain injury, Boerwinkle told me, and everyone was wary of providing false hope.

The doctors ultimately did inform the boy’s parents about their findings; his mother told me the fMRI gave them the confidence to agree to another surgery. It worked. Four years later, the boy is back in school. He uses an eye-gaze device to communicate and zoom around in his wheelchair, and his reading and math skills are on par with those of other kids his age.

Scientists are now looking for simpler tools to test for covert awareness. Patients who show signs of awareness early on, it seems, tend to have better recoveries than those who don’t. Owen, now based at the University of Western Ontario, recently published a study using functional near-infrared spectroscopy, which shines a light through the skull. A group at Columbia University, led by Jan Claassen, is experimenting with EEG electrodes that sit on the head.

But even after 20 years of research, little has changed in terms of what doctors can do to help patients found to have covert awareness long after their injury—which is still, in most cases, nothing. On his office wall, Schiff has taped the brain scans of five patients to remind him of the human stakes of his work. He is now exploring brain implants, which are already helping certain paralyzed patients control cursors with their mind or speak via a computer-generated voice. The next several years could prove crucial, as a crop of well-funded companies tests new ways of interfacing with the brain: Elon Musk’s Neuralink, perhaps the best-known of these, uses filaments implanted by a sewing-machine-like robot; Precision Neuroscience’s thin film floats atop the cortex; and Synchron’s implant is threaded up to the brain through the jugular vein.

Getting any of these implants to work in people with severe injuries like Ian’s will be particularly challenging. Ian’s age and the electrodes already implanted in his brain also make him an unlikely early candidate. This technology—if it ever works for people like him—may arrive too late for Ian.

Even in 1972, when Plum and Jennett first described the vegetative state, the doctors foresaw that they were barreling toward a “problem with humanitarian and socioeconomic implications.” The vegetative patients they described could now be kept alive indefinitely—but should they be? At what cost? Who’s to decide? Soon enough, Plum himself was asked to weigh in on the life of a 21-year-old woman.

In 1975, Plum became the lead witness in the case of Karen Ann Quinlan, who’d recently fallen into a vegetative state. She had collapsed after taking Valium mixed with alcohol, which temporarily starved her brain of oxygen. Her parents wanted her ventilator removed. Her doctors refused. In the ensuing legal battle, Quinlan’s family and friends testified that she had said, in conversations about people with cancer, that she wouldn’t want to be “kept alive by machines.” But there was no way to know what Quinlan wanted in her current condition. Plum categorically pronounced that she “no longer has any cognitive function”; another doctor likened her, in his court testimony, to an “anencephalic monster.”

In the end, a court granted her parents’ request to remove Quinlan’s ventilator. The controversy surrounding her case fueled interest in then-novel advance directives, which allow people to spell out if and at what point they want to die in the event of future incapacitation. In recognizing that life might not always be worth living, the court’s ruling also inspired a nascent “right to die” movement in the U.S.

By the time Terri Schiavo, in Florida, made national news in the early 2000s, resurfacing many of the same legal and ethical questions, the science had become more complicated. Schiavo had also been diagnosed as vegetative after she collapsed—from cardiac arrest, in her case. When her condition did not improve after eight years, her husband sought to have her feeding tube removed. Her parents fought back, fiercely. Although most experts found her to be vegetative, those aligned with her parents seized on the newly defined minimally conscious state to argue that Schiavo was still aware. The family released video clips purporting to show her responding to her mother’s voice or tracking a Mickey Mouse balloon with her eyes. If she was still conscious, they argued, she should not be made to die.

Schiavo became a cause célèbre for the religious right, and opinions hardened. Where one side saw parents honoring their daughter’s life, the other saw them clinging to illusory hope. Giacino told me that because of his key role in defining the minimally conscious state, he was asked to examine Schiavo by the office of Jeb Bush, then Florida’s governor. The behavioral exam he planned to perform, Giacino said, could have helped discern whether Schiavo’s responses were real or random. He never did go to Florida, though, because a court proceeding made another exam moot.

Schiavo eventually died when her feeding tube was removed in 2005. The general consensus now holds that she likely was vegetative—an autopsy later found that her brain had atrophied to half its normal size—but Giacino still wonders how that correlated with her level of consciousness. Because he never examined her himself, he personally reserved judgment.

If Schiavo—or let’s say a hypothetical patient diagnosed as vegetative, like her—were in fact minimally conscious or covertly aware, would that tip the calculus of keeping her alive one way or the other? Which way? On one hand is the horrifying proposition of snuffing out a human consciousness. On the other hand is what some might consider a fate worse than death, of living imprisoned in a body entirely without choice, without freedom. In memoirs and interviews, brain-injury patients who regained communication—Tavalaro among them—speak of despair, of abuse, and of sheer, uninterrupted boredom. They could not even turn their head to stare at a different patch of wall paint. One young man described the particular agony of being placed carelessly in a wheelchair and forced to sit for hours atop his testicles. Some have tried to end their life by holding their breath, which turns out to be physically impossible. The classical notion of a totally mindless vegetative state offered at least meager solace: a person devoid of consciousness would not experience pain or suffering.

One-third of locked-in patients, who can communicate only using their eyes, have thought of suicide often or occasionally, according to a survey of 65 people conducted by Laureys, the Belgian neurologist. But a majority of these patients have never contemplated suicide. They say they are happy, and those who have been locked in longer report being happier, which squares with other research showing that people with disabilities are in fact quite adaptable in the long term. Of course, those who responded to the survey are not entirely representative of everyone with a brain injury; for one thing, they could still communicate, albeit with difficulty.

What about covertly aware patients, with total loss of communication—are they happy to be alive? As far as I know, only one such person has ever had the opportunity to answer this question. In the 2010 study, after the 22-year-old man answered five consecutive yes-or-no questions correctly, Laureys decided to pose a last question, one to which he did not already know the answer: Do you want to die?

Where the man’s previous responses were clear, this one was ambiguous. The scan suggested that he was imagining neither tennis nor his house. He seemed to be thinking neither yes nor no, but something more complicated—exactly what, we will never know.

I posed a version of this question to the researchers who have devoted their careers to understanding disorders of consciousness. Would you choose to live? “If no one was coming to the rescue, if help was not on the way, I wouldn’t want to be in any of these situations,” said Schiff, who has a practical eye toward brain-implant research that could one day help these patients.

Owen was more philosophical. He told me that when people learn about his research, many say they would prefer to die; even his wife says that. But he is less certain. He does not have an advance directive. Perhaps the only thing worse than wanting to die and being forced to live, he said, is to watch everyone let you die when you have decided, in the moment of truth, that you actually want to live.

On one of my trips to the Rainbow Lodge this past winter, Geoff rigged up Ian’s foot switch—one of countless assistive devices his family has tried—to play a prerecorded message for me. “Hey, Sarah, thanks for coming!” it went in Geoff’s singsong voice. “I’m glad to see ya.” His family had hoped, at one point, that Ian’s left foot, which waves back and forth, unlike his permanently fixed right one, could become a mode of communication. But Ian has never been able to push the switch reliably on command. Still, occasionally, he hits the big green button just hard enough to set it off.

[Photo: Sarah Blesener for The Atlantic] Ian’s brother Geoff has become one of his caregivers, despite his earlier misgivings about their mother’s decision to keep Ian alive.

I cannot know to what extent, if any, this movement is voluntary. But Ian’s foot is certainly more active at some times than others. While his family and I chatted over lunch at the kitchen table one day, it went tap, tap. “Hey, Sarah, thanks for coming!” Was he trying to join the conversation? “Hey, Sarah, thanks for coming!” If so, what did he want to say?

There was one other instance when I saw his foot moving that much—during a previous visit, when we spoke in detail about Ian’s car crash for the first time. The crash took place in the early morning, after the boys had been together all night. Ian was driving. When Eve was asked to identify the body of the boy who died, Sam, she recognized the white shell necklace Ian had brought back for him from a recent trip to Florida. The third boy—the one who survived—eventually stopped keeping in touch with high-school friends, a disappearance they attributed to survivor’s guilt.

I wondered if our conversation would distress Ian, if we should be replaying these events in front of him. To me, it seemed as though his face had turned especially tense. His foot was going tap, tap, tap. Or was I projecting my own thoughts, as it is so easy to do with someone who cannot respond? “Ian knows he killed his best friend,” Geoff said at one point that night. “By accident.”

The next day, Ian was grinding his teeth. It happens sometimes, Eve told me. Perhaps something hurt. Or his stomach was upset. Or an eyelash was stuck in his eye. They tried to rule out causes one by one, but it’s always a guessing game. I thought back to our conversation the night before, and wondered whether the presence of a stranger probing the traumatic events of his life might have agitated him.

Ian could not walk away from a conversation he did not want to have, nor could he correct the record of what we got wrong. If his memories and cognition are more intact than not, then he has had time—so much time—to live inside his own thoughts. Has he come to his own reckoning over his friend’s death? Does he feel his own survivor’s guilt? Does he ever wish for the fate of one of his friends in the car over the one he was actually dealt? Perhaps being incapable of these thoughts would be a mercy in itself.

At one point, Geoff decided to reprogram Ian’s foot switch, in part to cheer up Molly Holm, one of Ian’s nurses since 2008, who had bruised her ribs slipping on ice. Molly had known Ian back in high school; he was friends with her older brother. She started coming to patterning sessions at the Rainbow Lodge after the accident, taking a position at Ian’s right hand. She later became a nurse. Her first job was at a head-trauma center, where she looked after young men with injuries like Ian’s. In some of the vegetative patients, she would see flashes of what seemed like awareness. But who was she, a very green nurse, to question a doctor’s diagnosis? Some of the men at this facility rarely had visitors, Molly says, their isolation so unlike the warmth of Ian’s home.

[From the April 2024 issue: Sarah Zhang on the cystic-fibrosis breakthrough that changed everything]

That’s what originally drew her, a deeply unhappy 14-year-old, to the Rainbow Lodge all those years ago. (Okay, she admits, she’d also had a huge crush on Ian before the crash.) It drew other people too, including those who temporarily moved into the lodge’s guest rooms during the patterning days: Ian’s girlfriend, Valerie Cashen; a friend of Geoff’s, Karen McKenna, who was 21 and pregnant, and had recently split from her boyfriend; and, perhaps most unexpectedly, the mother of the boy killed in the car crash, Renee Montana. Eve had overheard her primal scream of grief in the hospital, and when they later met, the mothers felt connected rather than divided by their respective tragedies.

[Photo: Sarah Blesener for The Atlantic] Ian, Eve, Geoff, and Geoff’s partner, Molly—also one of Ian’s nurses—gather for cards after dinner at the Rainbow Lodge.

Valerie, Karen, and Renee all arrived at the Rainbow Lodge overwhelmed by their own life circumstances. The two younger women stayed for a year or two and became close friends. Karen hadn’t known Ian at all before his injury. She first came to the hospital as a friend of the family; she offered to watch over Ian for Eve because, well, she didn’t have much else to do. She gave birth to her baby while living at the lodge, Eve by her side as her Lamaze coach. Karen’s time caring for Ian helped inspire her to enroll in nursing school, and she eventually became a nurse at the very ICU where she first met Ian.

Renee stayed for a few years. She did not blame Ian for Sam’s death, though she knew that others did. When I asked her if she ever thought about what might have happened if their fates had been switched, she had an immediate answer: “My poor boy would have been institutionalized.”

She didn’t have the means to care for him at home; she didn’t have the Rainbow Lodge. She was a single mom, living with a boyfriend in a disintegrating relationship. Eve and Marshall’s welcoming her into their community kept her from going adrift. “They just saved my life,” she said. Her life took an unexpected turn there too: Renee ended up having another child—her daughter, Morganne—born in 1988, after Renee had a brief affair with Eve’s brother.

Out of these chaotic circumstances, Eve and Renee found their bond as new friends cemented into that of family. Eve was present at this birth as well; she cut Morganne’s umbilical cord. Back at the lodge, they put the newborn girl in Ian’s lap, letting him hold a new life that would not exist had his own not been thrown off course. Morganne, now 37, told me that her earliest memories are of curling up at Ian’s feet to watch TV.

Reflecting on life after Ian’s accident, Eve prefers to speak not of loss but of gains: a new niece, lifelong friends, the entire Rainbow Lodge community. She decided long ago that she could carry others forward—Ian most of all—on her brute optimism. And in our hours of conversation, I never heard her linger on a negative note.

In this respect, Geoff does not take after his mother. “Geoff’s more like, I see your suffering, brother,” Molly told me. He and Ian have a different kind of bond, she added, “because Geoff recognizes that, sometimes, this sucks.”

“No, I mean, it definitely sucks, right?” Geoff said. “Not to be able to communicate sucks.”

Geoff’s coping mechanism is humor, at times dark, at times juvenile. It helps that Ian’s most reliable response is laughter. When he really gets going, his chuckle turns into a full chest shake. Geoff still dreams about the technology that might help his brother communicate. For now, they have the foot switch.

The message Geoff recorded after Molly’s fall was meant to make her, and everyone else, laugh: He blew a fart noise, scattered objects on the ground, and shouted, “Oh my God! What happened there?” Then he slipped the switch under Ian’s left foot.

[Photo: Sarah Blesener for The Atlantic] Molly and Geoff care for Ian together, and will continue to do so after Eve is gone.

Geoff was so keen to lift Molly’s spirits because they are a couple, together since 2000. Over the course of their relationship, Geoff had grown close to another of her patients, a spunky boy who eventually died of epidermolysis bullosa, also known as butterfly-skin syndrome, in his 20s. They don’t have children of their own, but they became a caretaking unit, their relationship deepening over their shared love for the boy. Now they care for Ian together, and they will continue to care for him when Eve is gone.

When I was leaving the Rainbow Lodge for the last time, Eve impressed upon me what she hoped people would take away from Ian’s life: “It’s not a sad story.” On this, Molly concurred. Yes, it sucks sometimes. But Ian has been continuously surrounded by people who love him, people who took that love and made something of it.

As if on cue, Ian’s foot switch went off. Fart noise. Objects scattering. “Oh my God! What happened there?” Maybe it was just a random movement of his foot. Maybe he wanted to disagree with his mother’s assessment. Or maybe he agreed that his is not a sad story. If only he could tell us in his own words.


This article appears in the June 2025 print edition with the headline “Is Ian Still In There?”

Ever since the pharmaceutical company Novo Nordisk realized that GLP-1 drugs were useful for more than diabetes, doctors and researchers have struggled to answer a deceptively simple question: Who should be taking them? The medications are highly effective at inducing weight loss, and most Americans are overweight or have obesity. But GLP-1s are also expensive, not covered by most insurance, and designed to be taken for life—not to mention that they frequently give rise to nausea and a loss of appetite. Giving them to every overweight American clearly isn’t appropriate.

Take President Donald Trump. During his first term, a scan showed signs of plaque buildup in his coronary arteries, which put him at risk of a heart attack. In 2020, his body mass index was just over the threshold for obesity. That combination would have made him a candidate for a GLP-1 drug, and indeed, throughout his 2024 campaign, people speculated that he was taking one. Then, last month, Trump’s latest physical showed that he had dropped 20 pounds, moving him from obese to overweight. (Trump has never publicly said that he is on a GLP-1, and when reached for comment, the White House did not address questions about how the president had lost the weight. Trump is “in peak physical and mental condition,” White House Press Secretary Karoline Leavitt told The Atlantic in an emailed statement.)

The most revealing aspect of the president’s medical report was the list of drugs he is taking, which includes a combination that amounts to what doctors call “intensive lipid-lowering therapy”—a treatment usually reserved for patients who are at significant risk of cardiac disease. As far as the president’s health is concerned, his weight is no more important than the fact that he is on that drug regimen and that it seems to be working: His LDL (the “bad” cholesterol) has dropped dramatically in recent years.

Trump’s example shows that doctors’ and patients’ primary goal should not be changes in weight alone, but changes in health. GLP-1 drugs can help a wide spectrum of people lose weight, but their risks are likely justified for only a smaller subset of Americans. To say whether the health benefits a person might gain from taking the drugs are worth the expense and likely gastrointestinal distress, physicians cannot rely on weight alone. The calculus can be life-and-death; nearly 1,000 deaths a day are linked to diet-related disease in the United States. To save lives and improve health, doctors, researchers, and politicians need to reckon with the true killer: not weight or size, but a particularly toxic kind of fat.

When humans eat too many calories—especially too many of the highly processed, rapidly absorbed carbohydrates that are so common in the modern diet—fat accumulates around the waist, surrounding and invading the liver, heart, and pancreas. Doctors call it visceral, central, or abdominal fat. It’s more dangerous to health than fat that accumulates in places such as the arms and thighs because it leaks free fatty acids and other molecules into the body, generating inflammation, upending the metabolism, and wreaking havoc on our organs. Visceral fat is linked to cardiovascular disease, stroke, diabetes, 13 types of cancer, and likely some forms of dementia, among other major chronic illnesses. Reduce visceral fat, and these conditions can be prevented or even, in certain cases, treated.

[Read: The science behind Ozempic was wrong]

Visceral fat is closely tied to two hallmarks of metabolic disease: high insulin levels and insulin resistance. Scientists haven’t yet determined which comes first, visceral fat or elevated insulin, but they know that high insulin levels are part of a vicious cycle that promotes fat storage, visceral fat, and disease. As elevated insulin has become dramatically more common—by 2018, more than 40 percent of Americans had high insulin—so too has chronic disease. Six in 10 Americans have at least one chronic disease, and four in 10 have more than one.

GLP-1 drugs are remarkably effective at reducing visceral fat. In fact, that may be a large part of why GLP-1s so markedly improve the metabolic health of people who take them. The strongest case for use of GLP-1s, then, is in people with excess visceral fat who have begun to suffer its consequences. The crucial problem for physicians is how to identify those people. BMI is a poor measure, but waist size is a good predictor of visceral fat, type 2 diabetes, and atherosclerosis. Certain abnormalities in blood-lipid patterns can indicate the beginning of organ dysfunction.

And yet, the primary metric by which anti-obesity drugs are judged and distributed is weight. Originally, the FDA approved these medications for people with a BMI of 30 or above, or with a BMI of at least 27 and at least one weight-related ailment. But the agency has since quietly removed its references to BMI from the drugs’ labels, which now simply state that the medications are for patients “with obesity” or those who are “overweight in the presence of at least one weight-related comorbid condition.” Without explicitly saying so, this change recognizes that BMI is not a good measure of body fat, nor of the visceral fat that causes the most harm. Yet the agency still requires that clinical trials of obesity drugs use BMI as a criterion for enrolling patients. When I go to obesity-medicine meetings, many of the physicians I speak with still use BMI as a guideline.

[Read: BMI won’t die]

Over the past decade or so, awareness has grown among doctors and patients alike that BMI has limited utility as a health metric. It doesn’t distinguish between muscle and fat. It doesn’t account for how fat tends to be distributed differently on male and female bodies. These shortcomings are important when considering what a patient has to gain from a GLP-1 drug. People of South Asian heritage, for example, can develop insulin resistance at much lower BMIs than other populations. According to the American College of Cardiology, in terms of insulin resistance, a white person with a BMI of 30 can be metabolically equivalent to a South Asian person with a BMI of 23.9. Unfortunately, doctors do not have easy and reliable ways to measure insulin resistance directly. Developing a diagnostic test would go a long way in helping determine who should be treated with anti-obesity medications.

The United States is still deciding how exactly to approach GLP-1s. The Trump administration scrapped a Biden-administration proposal to cover anti-obesity medications under Medicare’s Part D drug benefit, but it hasn’t ruled out future coverage. Within the past year, the FDA has both expanded its eligibility guidelines for the drugs and declared that the drugs are no longer in shortage. That means that compounding pharmacies can no longer produce replicas of Novo Nordisk’s Wegovy and Eli Lilly’s Zepbound, which will reduce the availability of cheaper options but might also curb the risks associated with copycats. Plus, Novo Nordisk and Eli Lilly have recently introduced new discount programs. Early data suggest that the drugs may be useful in treating fatty liver disease, heart failure, and possibly neurodegenerative diseases, which, I suspect, will lead even more people to take them.

If GLP-1s really do become more common in America, everyone who goes on them needs to understand that they are doing so without an endgame. GLP-1 drugs were approved under the premise that patients will stay on them for life, but so far, most people take them for less than a year, in large part because of their side effects, typically high cost, and lack of insurance coverage. Scientists do not have good data on whether and how to get off the drugs without regaining weight, whether they can be used safely and effectively on an intermittent basis, or how to adjust doses downward over the long term. The best way to find those answers is for the FDA to require pharmaceutical companies to gather the data. Letting the companies off the hook by assuming that people are going to be on these drugs forever would be a grave mistake.

[Read: The Ozempic flip-flop]

All of these unanswered questions only add to the urgency of determining who is most likely to benefit from GLP-1s, and who would be safer or healthier by sticking with lifestyle changes and other medications. GLP-1 drugs are not a panacea. They are one powerful tool to help control America’s crisis of metabolic disease—one that we need to get right.

The surgeon general, America’s doctor, is the public face of medicine in the United States. The job is more educational than it is technical. Vivek Murthy, who was appointed as surgeon general during both the Obama and Biden administrations, went on Sesame Street to stress the importance of vaccinations and put out a guidebook to hosting dinner parties as a cure for loneliness.

In many ways, Casey Means is the perfect person for that job. Donald Trump’s new nominee for surgeon general, announced yesterday, is a Stanford-trained doctor who is well-spoken and telegenic. Most important, she clearly knows how to draw attention to health issues. Good Energy, the book she published last year with her brother, Calley (who, by the way, is a special adviser in the Trump administration), is Amazon’s No. 1 best seller in its “nutrition” and “aging” categories. She regularly posts on Instagram, where she has more than 700,000 followers.

In many other ways, however, Means is far from perfect. A leading voice in Robert F. Kennedy Jr.’s “Make America healthy again” movement, she has a habit of trafficking in pseudoscience and at times can be hyperbolic, to put it lightly. Means has said that America’s diet-related health issues could lead to a “genocidal-level health collapse” and that “all of us are a little bit dead while we are alive” because of what she calls “metabolic dysfunction.” She has also written about taking part in full-moon ceremonies and about how talking to trees helped her find love—though she admitted that the rituals were “out there.” And Means (who didn’t respond to a request for comment) has used her platform to promote “mitochondrial health” gummies, algae-laden “energy bits,” and vitamins she described as her “immunity stack.”

Means was not Trump’s top choice for surgeon general. His first nominee, Janette Nesheiwat, was pulled out of contention yesterday amid allegations that she had misrepresented her medical training. Presuming the Senate confirms Means as the next surgeon general, she will be another one of RFK Jr.’s ideological compatriots who have joined him in the Trump administration. National Institutes of Health Director Jay Bhattacharya and FDA Commissioner Marty Makary are both also skeptics of the public-health establishment. Earlier this week, Vinay Prasad, another prominent medical contrarian, assumed a top job at the FDA. Now the “MAHA” takeover of the federal health agencies is all but complete. Earlier today, Trump told reporters that he tapped Means “because Bobby thought she was fantastic.”

Means fits right in with the Trump administration’s approach to health. She dropped out of her medical residency, citing her frustrations with the myopic focus of modern medicine. By her telling in Good Energy, she left her program in ear, nose, and throat surgery because “not once” was she taught what caused the inflammation in her patients’ sinuses. In the third chapter of her book, titled “Trust Yourself, Not Your Doctor,” Means writes that you should not trust physicians, because the medical establishment makes more money when you are sick and does not understand how to treat the root causes of chronic disease.

Alleviating chronic disease is also a passion of Kennedy’s, and the similarities between them run deep. Like the health secretary, Means believes that you should avoid seed oils and ultraprocessed foods. She is prone to musings about the crisis of American health care that lean more Goop than C. Everett Koop. She has proclaimed that Americans have “totally lost respect for the miraculousness of life.” She has said that the birth-control pill disrespects life because it is “shutting down the hormones in the female body that create this cyclical life-giving nature of women.” One of the latest editions of her weekly email newsletter was dedicated to the children’s movie Moana, which she called “a forgotten blueprint for how we lead, heal, and regenerate.” (For the record, Koop, America’s surgeon general during Ronald Reagan’s presidency, never implied that he’d done mushrooms to find love.)  

Tucker Carlson, Joe Rogan, and Andrew Huberman have all hosted Means on their podcasts. Means’s rise is, in many ways, emblematic of modern internet wellness culture writ large: If you’re articulate and confident and can convincingly recite what seems like academic evidence, you can become famous—and perhaps even be named surgeon general. Her most dangerous inclination is to toe the line of her new boss, Kennedy, on the issue of vaccines. On Rogan’s show in October, she questioned whether the barrage of shots kids receive as infants might cause autism. And on Carlson’s podcast, she argued that perhaps certain shots given to infants should be given later in life to avoid overexposure to neurotoxins. There is no scientific evidence to back up those claims.

But at the same time, much of Means’s philosophy toward health doesn’t seem that objectionable. Whereas the books that RFK Jr. has written are crammed full of conspiracy theories, hers focuses on how America’s ills can be treated with whole foods, exercise, and good sleep. It even includes a recipe guide. (Her fennel-and-apple salad with lemon-dijon dressing and smoked salmon is delicious, I must admit.) If her book is any indication, her first move as surgeon general will be to urge parents to cut down on their kids’ sugar consumption. “If the surgeon general, the dean of Stanford Medical School, and the head of the NIH gave a press conference on the steps of Congress tomorrow saying we should have an urgent national effort to cut sugar consumption among children, I believe sugar consumption would go down,” she wrote.

If Means sticks to these issues—encouraging Americans to eat organic, go on a walk, and get some shut-eye—she could be a force for positive change in American health care. If she urges women to forgo birth control, plugs unproven supplements, or uses her bully pulpit to question the safety of childhood vaccines, she will go down as one of the most dangerous surgeons general in modern history. In this way, she is much like Kennedy and the rest of the MAHA universe. Their big-picture concerns sound reasonable and are resonating with lots of people. America does have a chronic-disease problem; food companies are selling junk that makes us sick; the public-health establishment hasn’t gotten everything right. But for every reasonable idea they proffer, there is a pseudoscientific belief that strains their credibility.

When you think of food poisoning, perhaps what first comes to mind is undercooked chicken, spoiled milk, or oysters. Personally, I remember the time I devoured a sushi boat as a high-school senior and found myself calling for my mommy in the early hours of the morning.

But don’t overlook your vegetable crisper. In terms of foodborne illness, leafy greens stand alone. In 2022, they were identified as the cause of five separate multistate foodborne-illness outbreaks, more than any other food. Romaine lettuce has a particularly bad reputation, and for good reason. In 2018, tainted romaine killed five people and induced kidney failure in another 27. Last year, an E. coli outbreak tied to—you guessed it—romaine sent 36 people to the hospital across 15 states. Perhaps ironically, the bags of shredded lettuce that promise to be pre-washed and ready to eat are riskier than whole heads of romaine.

Eating romaine lettuce is especially a gamble right now. Although America’s system for tracking and responding to foodborne illnesses has been woefully neglected for decades, it has recently been further undermined. The Biden administration cut funding for food inspections, and the Trump White House’s attempts to ruthlessly thin the federal workforce have made the future of food safety even murkier. The system faces so many stressors, food-safety experts told me, that regulators may miss cases of foodborne illness, giving Americans a false sense of security. If there’s one thing you can do right now to help protect yourself, it’s this: Swear off bagged, prechopped lettuce.

[Read: The onion problem]

Americans aren’t suddenly falling sick en masse from romaine lettuce, or anything else. “There’s just millions of these bags that go out with no problem,” David Acheson, a former FDA food-safety official who now advises food companies (including lettuce producers), told me. But what’s most disturbing of late is the government’s lackadaisical approach to alerting the public of potential threats. Consider the romaine-lettuce outbreak last year. Americans became aware of the outbreak only last month, when NBC News obtained an internal report from the FDA. The agency reportedly did not publicize the outbreak or release the names of the companies that produced the lettuce because the threat was over by the time the FDA determined the cause. The rationale almost seems reasonable—until you realize that Americans can’t determine what foods are, or aren’t, safe without knowing just how often they make people sick. (A spokesperson for the FDA didn’t respond to a request for comment.)

In that information void, forgoing bagged lettuce is a bit like wearing a seat belt. In the same way that you likely don’t entirely avoid riding in a car because of the risk of an accident, it’s unnecessary to swear off all romaine because it could one day make you sick. Lettuce and other leafy greens are full of nutrients, and abandoning them is not a win for your health. That doesn’t mean, however, that you shouldn’t practice harm reduction. Buying whole heads of lettuce might just be the life hack that keeps you from hacking up your Caesar salad.

Bagged lettuce ups the odds of getting a tainted product. When you buy a single head of lettuce, you’re making a bet that that exact crop hasn’t been infected. But the process of making prechopped lettuce essentially entails putting whole heads through a wood chipper. Once a single infected head enters that machine, the pieces of the infected lettuce stick around, and it’s likely that subsequent heads will become infected. “Buying a head of romaine lettuce is like taking a bath with your significant other; buying a bag of romaine lettuce is like swimming in a swimming pool in Las Vegas,” Bill Marler, a food-safety lawyer, told me.

There’s also some evidence that chopping romaine makes the lettuce more susceptible to pathogens. One study that tested the growth of E. coli on purposefully infected romaine found that within four hours of cutting the lettuce into large chunks, E. coli on the cut leaves had increased more than twice as much as on the uncut lettuce. Shredding the lettuce was even worse: The E. coli on that plant increased elevenfold over the same period. The leading theory for why this occurs is similar to the reason cuts make people more susceptible to infection: Cutting romaine breaks the lettuce’s outer protective layer, making it easier for bacteria to proliferate. (This experiment was done in relatively hot temperatures, so your chopped lettuce is likely safer if you keep it refrigerated. But the convenience of pre-shredded lettuce still comes with an additional risk.)


And no, washing your bagged lettuce rigorously is not the answer. If it’s infected, only a thorough cooking is going to kill the bacteria and protect you from getting sick. Rinsing your vegetables is “a mitigation step that’s reducing risk, but it is not a guarantee,” Benjamin Chapman, a food-safety expert at North Carolina State University, told me. Buying whole heads of lettuce is an imperfect solution to a major problem, but it’s the best thing consumers can do, given that regulators have continued to drop the ball on food safety. A lot of lettuce is contaminated by irrigation water that comes from nearby feedlots, and yet it has taken the FDA a decade to enforce water-quality standards for most crops. The FDA has also continually fallen behind on its own inspection goals. A January report from the Government Accountability Office, Congress’s watchdog agency, found that the FDA has consistently missed its targets for conducting routine food inspections since 2018.

Politicians of both parties have seemed content to make cuts to an already overstressed system. Late last year, the Biden administration announced that it was cutting $34 million in funding to states to carry out routine inspections of farms and factories on behalf of the FDA, reportedly because the agency’s budget hadn’t kept pace with inflation. And under Health and Human Services Secretary Robert F. Kennedy Jr., the FDA is now making steep funding and staff cuts. Although the Trump administration has claimed that no actual food inspectors will be laid off as a result of government downsizing, there’s already evidence that the moves will, in fact, make it harder for the government to respond when illnesses strike. Spending freezes and cuts to administrative staff have reportedly made it more difficult for FDA inspectors to travel to farms, and for them to purchase sample products in grocery stores for testing. A committee tasked with exploring a range of food-safety questions, including probing what strains of E. coli cause bloody diarrhea and kidney failure, has been shut down, and a key food-safety lab in San Francisco has been hit with wide-scale layoffs, according to The New York Times. (Employees at the San Francisco lab told me that they are now being hired back.)

Skipping prechopped bagged lettuce might sound like neurotic advice, but a leafy-green outbreak is almost guaranteed to occur in the coming months. One seems to happen every fall, and it’ll be up to RFK Jr. to respond. Although Kennedy has promised to foster a culture of radical transparency at the federal health agencies, his first months on the job haven’t been reassuring. The staff at the FDA’s main communications department—employees typically tasked with briefing national news outlets during outbreaks—have been fired. So have staff at public-record offices. Government updates on the ongoing bird-flu outbreak have virtually stopped. It’s reasonable to assume that the Trump administration will take a similar “see no evil, hear no evil, speak no evil” approach to foods that can make us sick.

“I’m really worried that we are going to see the number of outbreaks, and the number of illnesses, go down—and it has nothing to do with the safety of the food supply,” Barbara Kowalcyk, the director of the Institute for Food Safety and Nutrition Security at George Washington University, told me. “It just means if you don’t look for something, you don’t find it.” With so much uncertainty about food safety, busting out a knife and chopping some lettuce beats a trip to the hospital, or a night hugging the toilet.

Matthew J. Memoli has had an exceptionally good year.

At the beginning of January, Memoli was a relatively little-known flu researcher running a small lab at the National Institute of Allergy and Infectious Diseases (NIAID) at the National Institutes of Health. Then the Trump administration handpicked him to be the acting director of the $48 billion federal agency, a role in which he oversaw pauses in award payments, the mass cancellation of grants, the defunding of clinical trials, and the firing of thousands of employees. Now the NIH’s principal deputy director, Memoli will soon see his own research thrive as it never has before: He and a close collaborator, Jeffery Taubenberger, also at the NIH, recently approached Health Secretary Robert F. Kennedy Jr. to pitch their research, three current and former NIH officials familiar with the matter told me. And as The Wall Street Journal reported on Thursday, the pair are now set to be awarded up to $500 million for their in-house vaccine research. (All of the current and former NIH officials I spoke with for this story requested anonymity out of fear of professional retribution from the federal government.)

In a press release last week, the Department of Health and Human Services described the award’s goal as developing universal vaccines against flu viruses, coronaviruses, and other “pandemic-prone viruses”—at face value, a worthwhile investment. Universal vaccines are designed to guard against multiple strains of a virus at once, including, ideally, versions of a pathogen that haven’t yet caused outbreaks.

But this particular course toward pandemic prevention is shortsighted and suspect, several vaccine researchers and immunologists told me, especially when the administration has been gutting HHS staff and stripping funds away from hundreds of other infectious-disease-focused projects. As described in the press release, this new project, dubbed Generation Gold Standard, appears to rely on only one vaccination strategy, and not a particularly novel one, several researchers told me. And the way the award was granted represents a stark departure from the government’s traditional model of assembling panels of independent scientific experts to consider an array of research strategies, and simultaneously funding several projects at separate institutions, in the hopes that at least one might succeed. Memoli’s involvement in this latest award “is clearly someone taking advantage of the system,” one official told me.

When I reached out to HHS and Memoli for comment, they gave conflicting accounts of Generation Gold Standard. An HHS spokesperson confirmed to me that the sum of the award was $500 million and referred only to Memoli and Taubenberger’s vaccine technology when discussing the initiative, describing it as “developed entirely by government scientists.” Memoli, in contrast, wrote to me in an email that the $500 million sum would “support more than one project,” including partners within NIH and outside the agency, and described Generation Gold Standard as “a large-scale investment in a host of research.” When I asked HHS for clarification, the spokesperson told me that the funding “will support multiple projects,” adding that “the first initiative focuses on influenza.” The spokesperson and Memoli did not respond to questions about the criteria for other projects to be included in this initiative or the timeline on which they will be solicited or funded.

Neither Memoli nor Taubenberger’s work has ever received this level of financial attention. Both have spent much of their careers running small labs at NIAID. Taubenberger, who did not respond to a request for comment, has long been respected in the field of virology; a few years ago, he received widespread recognition for uncovering and sequencing the flu virus that caused the 1918 flu pandemic. Last month, he was also named the acting director of NIAID, after its previous director, Jeanne Marrazzo, was ousted by the Trump administration. He has frequently collaborated with Memoli, whose work has flown more under the radar.

Memoli’s appointment to acting director was also unorthodox: Prior to January, he had no experience overseeing grants or running a large federal agency. He had, though, criticized COVID-vaccine mandates as “extraordinarily problematic” in an email to Anthony Fauci in 2021; Jay Bhattacharya, now the head of NIH, praised Memoli on social media for the scuffle, calling him “a brave man who stood up when it was hard.” And last year, during an internal NIH review, Memoli described the term DEI—another Trump-administration bugaboo—as “offensive and demeaning.”

Memoli and Taubenberger’s vaccine technology could end up yielding an effective product. It relies on a type of vaccine composed of whole viruses that have been chemically inactivated; at least one of the vaccines under development has undergone safety testing, and has some encouraging preliminary data behind it. But flu viruses mutate often, hop frequently across species, and are tricky to durably vaccinate against; although scientists have been trying to concoct a universal-flu-vaccine recipe for decades, none have succeeded. When the goal is this lofty, and the path there this difficult, the smartest and most efficient way to succeed is to “fund as broadly as you can,” Deepta Bhattacharya, an immunologist at the University of Arizona (who is unrelated to Jay Bhattacharya), told me. That strategy has long been core to the mission of the NIH, which spends the majority of its budget powering research outside the agency itself.  

Memoli and Taubenberger’s whole-inactivated-virus strategy is also “not exactly cutting-edge,” Bhattacharya said. The technology is decades old and has been tried before by many other scientists—and has since mostly fallen out of favor. Newer technologies tend to be more effective, faster to produce, and less likely to cause side effects. And the pair’s vaccine candidates have yet to clear the point at which many immunizations fail in clinical trials; usually, funding of this magnitude is reserved for projects that already have strong data to suggest that they’re effective at reducing disease or infection, Bhattacharya said. Already, though, HHS seems confident in how the project will play out, according to its press release: The department is targeting FDA approval for at least one of the vaccines in 2029, and claims that the vaccines will be adaptable for other respiratory viruses (such as RSV and parainfluenza). But no published evidence supports the technology’s compatibility with those other viruses.

Multiple vaccine experts told me that Memoli and Taubenberger’s work is not, on its own, a $500 million initiative; half a billion dollars would be “a truly absurd amount of money” for any single research initiative, one NIH official told me. NIH labs are usually funded by the agency institutes they’re based in, and given much smaller budgets: For fiscal year 2025, NIAID sought just $879 million of its total $6.6 billion budget for its roughly 130 internal research groups. At a recent meeting of NIAID leadership, even Taubenberger admitted that he was shocked by the sheer dollar amount that the initial HHS announcement had tied to his platform, an official who attended that meeting told me.

In their responses to me, both Memoli and HHS claimed that the $500 million would eventually fund multiple projects. But neither would respond to questions about how that other research would be identified or how much money would be directed to Memoli and Taubenberger’s work, which was the only research mentioned in HHS’s announcement of the initiative. Memoli and Taubenberger’s vaccine does appear to be Generation Gold Standard’s linchpin: Memoli and the HHS spokesperson both said that their project would be the initiative’s main starting point. That still puts “a lot of eggs in one basket,” Marion Pepper, an immunologist at the University of Washington, told me. If Memoli and Taubenberger’s vaccine technology fails, without clear alternatives, the country may be especially vulnerable when the next big outbreak hits.

At the start of the coronavirus pandemic, one NIH official pointed out to me, the first Trump administration did pour billions into developing mRNA-based vaccines—a new technology that was, at the time, unproven. The government invested especially heavily into the pharmaceutical company Moderna, which has continued to receive substantial federal grants for its mRNA vaccine work. (HHS, however, is now reportedly considering pulling funds from one of Moderna’s contracts, worth nearly $600 million, awarded to develop vaccines against flu viruses that could cause pandemics, such as the H5N1 bird flu.) But the early data on mRNA vaccines, and the speedy manufacturing timeline they promised, made them “a smart bet,” the official said. “I’m not sure Memoli’s is.”

While funding Moderna, the government also distributed its resources elsewhere—including to several other types of immunizations, made by several other companies, all of them with massive research teams and a long history of scaling up vaccine technology and running enormous clinical trials. The new initiative, meanwhile, appears to come at the expense of other vaccine-related work that was already in motion. The money for Generation Gold Standard, one NIH official told me, comes from HHS’s Biomedical Advanced Research and Development Authority (BARDA), and was reallocated from funds originally set aside for Project NextGen, a $5 billion Biden-administration initiative to develop new COVID-19 vaccines and therapeutics. The HHS spokesperson told me that the shuffling of funds “realigns BARDA with its core mission: preparing for all flu viral pathogens, not just COVID-19,” and called Project NextGen “wasteful.” (SARS-CoV-2, the coronavirus that causes COVID-19, is not a flu virus.)

NIH leaders are well within their rights to funnel money toward favored scientific pursuits. Francis Collins, who served as director until 2021, wasn’t shy about pushing through the NIH’s neuroscience-focused BRAIN Initiative or the All of Us precision-medicine program. Monica Bertagnolli, who until January directed the NIH, kick-started the health-equity-focused CARE for Health program and advanced a Biden White House initiative on women’s health. But those programs funded a wide array of projects—and none concentrated resources of this scale on any single NIH leader’s own work. Taubenberger is also listed as an inventor on a patent on the vaccine technology, which isn’t unusual in vaccine research, but it means that he could be set up to directly benefit from HHS’s huge investment. (When I asked Memoli if he and Taubenberger might both receive royalties from a commercialization of their vaccine technology, he noted that he was not listed as an inventor and had “no right to royalties on that particular patent.”)

Heavily funding in-house vaccine research does align, in one way, with the apparent priorities of Kennedy, who has railed against the influence of private companies on medicine. The press release about this “gold standard” vaccine project brags that the technology is “fully government-owned and NIH-developed,” which “ensures radical transparency, public accountability, and freedom from commercial conflicts of interest.” The statement also notes that one of the vaccine technology’s assets is its “traditional” approach—a potential appeal to Kennedy’s skepticism of newer vaccine technologies, one NIH official told me. (Kennedy has been critical of COVID-19 vaccines and recently falsely claimed that vaccines that target only one part of a respiratory pathogen—so-called single-antigen vaccines—don’t work.)

Kennedy, a longtime anti-vaccine activist, does not appear to have sought out vaccine research to fund, though. Memoli “is really the one who has pushed this ahead,” one NIH official told me: A few weeks ago, he dispatched Taubenberger to brief Kennedy on the pair’s work. (Memoli did not respond to questions about this briefing or about how he had solicited so much of Kennedy’s support.) No matter the instigator, though, the outcome sends an unsettling message to the rest of the American research community—“the only way to overcome HHS priorities is to be part of the inner circle,” the University of Arizona’s Bhattacharya told me. One NIH official put it more bluntly: “It’s very clear it’s all cronyism going forward.”

During the crucial early weeks of pregnancy, when fetal cells knit themselves into a brain and organs and fingers and lips, a steady flow of man-made chemicals pulses through the umbilical cord. Scientists once believed that the placenta filtered out most of these pollutants, but now they know that is not the case. Along with nutrients and oxygen, numerous synthetic substances travel to the womb, permeating the fetus’s blood and tissues. This is why, from their very first moments of life, every American newborn carries a slew of synthetic chemicals in their body.

Crucially, many of these chemicals have never been tested for safety. Of those that have, some are known to cause cancer or impede fetal development. Others alter the levels of hormones in the womb, causing subtle changes to a baby’s brain and organs that may not be apparent at birth but can lead to a wide variety of ailments, including cancer, heart disease, infertility, early puberty, reduced IQ, and neurological disorders such as ADHD. How did we end up in this situation, where every child is born pre-polluted? The answer lies in America’s fervor for the synthetic materials that, beginning in the mid-20th century, reshaped our entire society—and in the cunning methods that chemical makers used to ensure their untrammeled spread.

It began in 1934, when the munitions company DuPont was struggling to rescue its reputation. A new blockbuster book, Merchants of Death, argued that the company had unduly influenced America’s decision to enter World War I, then reaped exorbitant profits by supplying its products to America’s enemies and Allied forces alike. Meanwhile, a congressional probe had uncovered a bizarre plot—allegedly funded by DuPont and other companies that opposed the New Deal—to overthrow the U.S. government and install a Mussolini-style dictatorship. Almost overnight, DuPont became a national pariah.

In response, the company hired a legendary PR consultant who concluded that there was only one way DuPont could escape the controversy: by transforming itself in the public’s mind from a maker of deadly munitions into a source of marvelous inventions that benefited the general public. In 1938, the company debuted the first of these revolutionary materials: nylon, which could be spun into fibers “as strong as steel, as fine as the spider’s web,” a DuPont executive declared at the unveiling. The company’s wildly popular exhibit at the 1939 New York World’s Fair featured a shapely Miss Chemistry rising out of a test tube in a nylon evening gown and stockings. When nylon stockings went on sale in 1940, they sold out almost immediately.  

But it wasn’t until World War II that synthetics really took off. Faced with shortages of natural materials such as steel and rubber, the U.S. government spent huge sums developing synthetic materials and expanding the assembly lines of chemical companies so that they could produce the quantities needed for global warfare. After the conflict, industry transformed these substances into a cornucopia of household goods. The plastic polyethylene, used to coat radar cable during the war, became Tupperware, Hula-Hoops, and grocery bags. An exotic new family of chemicals developed through the top-secret Manhattan Project showed up in products such as Scotchgard fabric protector. These substances, known to scientists as perfluoroalkyl and polyfluoroalkyl substances, or PFAS, gave ordinary goods uncanny resistance to grease, stains, water, and heat. They soon found their way into thousands of household items.

With the world suddenly awash in synthetics, people had access to a huge variety of low-cost goods—and this brought thousands of new chemicals into American homes. Most people didn’t give much thought to the implications. But manufacturers sponsored research on the health effects of the new substances they were using, much of it performed in the laboratory of Robert Kehoe, a toxicologist with a quasi-religious faith in the power of technological progress to solve society’s problems.

When I visited Kehoe’s archives at the University of Cincinnati, they were brimming with unpublished reports linking synthetic chemicals to a wide variety of health problems. Kehoe believed that the secrecy was justified. These chemicals, he argued in a 1963 essay that I found among his papers, would be desperately needed to “feed, clothe and house those who will populate this bountiful land in succeeding generations.” Given that the science was still developing, he wrote, focusing the public’s attention on the chemicals’ toxicity would be “neither wise nor kind.”

But by the 1950s, the emerging scientific consensus was that many man-made chemicals could disrupt key bodily functions, making them harmful at lower doses than ordinary poisons. A small but vocal group of activists began raising concerns about the lack of testing for chemicals in the food supply. They found an advocate in James Delaney, a Democratic congressman from New York, who formed a committee to investigate the issue. One of his lead witnesses was Wilhelm Hueper, a former DuPont pathologist who, according to his unpublished autobiography, had warned his employer of the link between synthetic chemicals and cancer as early as the ’30s. During his testimony, Hueper argued that because synthetic compounds could be damaging in minuscule doses and the effects were cumulative, no level of exposure to them could be presumed safe. He advised the lawmakers to require that chemicals in food be “tested for toxic and possibly carcinogenic properties,” and to ban those that cause cancer.

The titans of American industry had other ideas. Aided by the PR firm that would later pioneer Big Tobacco’s campaign to discredit the science on the harms of smoking, chemical companies lobbied lawmakers, hosted all-expenses-paid conferences for journalists, and placed pro-industry science materials in public-school classrooms, according to meeting minutes from the chemical industry’s main trade association. These efforts paid off. In 1958, when Congress passed a law requiring safety testing for chemicals that wound up in food, the thousands of substances already in use were presumed to be safe and grandfathered in.

One of those substances was Teflon, which is made with PFAS, or forever chemicals, as they are now known. According to correspondence in Kehoe’s files, DuPont had previously avoided marketing it for use in most consumer goods because of toxicity concerns. Workers who inhaled Teflon fumes developed flu-like symptoms. When scientists in Kehoe’s lab exposed dogs, guinea pigs, rabbits, and mice to the gases Teflon emitted when heated, many died within minutes, according to an unpublished 1954 report. But because Teflon’s ingredients had been grandfathered in, the company no longer needed to prove its safety to the government—only its benefits to customers. In 1959, it invited a reporter from Popular Science to its Wilmington, Delaware, headquarters for a pancake demonstration using a prototype Teflon pan. According to the magazine, the cakes came out nicely brown and left no crusty residue, “because the pan was lined with Teflon, a remarkable fluorocarbon plastic” that was “as slippery as ice on ice.” By 1962, DuPont-branded Happy Pans were flying off store shelves.

That same year, the naturalist Rachel Carson published Silent Spring, introducing the public to the disquieting idea that man-made chemicals were inundating people’s bodies. Most of the research Carson had drawn on wasn’t new. It was the same data that scientists such as Hueper—whom Carson cited at length—had developed decades earlier, but Carson was the first to pull it all together for a broad audience. The grassroots environmental movement ignited by Silent Spring led to the creation of the EPA in 1970 and, six years later, the passage of the Toxic Substances Control Act, which gave the agency power to regulate chemicals. Thanks to aggressive industry lobbying, the law was appallingly lax. Manufacturers weren’t required to proactively test new chemicals for safety except in rare cases, and once again, existing chemicals were grandfathered in.

By the time of the bill’s passage, DuPont and another manufacturer, the Minnesota-based 3M, had discovered that PFAS were accumulating in the blood of people around the country. Internal industry studies from this period showed that the chemicals refused to break down in the environment—meaning that every molecule the companies produced would linger on the planet for millennia. The chemicals were also found to build up rapidly in the food chain and lead to devastating health effects in lab animals. One 1978 study of PFAS in monkeys had to be aborted two months early because all of the monkeys died.

When DuPont and 3M began investigating the chemicals’ effect on workers, the results were even more troubling. A 1981 study of “pregnancy outcomes” among women in DuPont’s Teflon factory, which was later revealed through litigation, found that two of seven pregnant workers gave birth to babies with serious facial deformities, a “statistically significant excess” over the birth-defects rate in the general population. But rather than alerting employees or the public, the company simply abandoned the research.

A spokesperson for DuPont, which in 2015 spun off the division that made PFAS as part of a major restructuring, told me that he was “not in a position to speak to products that were or are a part of businesses that are owned by other independent, publicly traded companies.” A spokesperson for 3M said, “Over the decades, 3M has shared significant information about PFAS, including by publishing many of its findings regarding PFAS in publicly available journals dating back to the 1970s,” and added that 3M is on target to remove PFAS from its manufacturing globally by the end of 2025.

Limiting the use of PFAS now, however, doesn’t change how far the chemicals, and their damages, have already spread. A large body of research by independent scientists has linked forever chemicals to serious health problems, including obesity, infertility, testicular cancer, thyroid disease, neurological problems, immune suppression, and life-threatening pregnancy complications. Researchers tracking the spread of PFAS have found that they suffuse the blood of polar bears in the Arctic, eagles in the American wilderness, and fish in the depths of the ocean. They permeate snow on Mount Everest and breast milk in rural Ghana. A 2022 study of rainwater around the world found that levels of the two best-known PFAS alone were high enough to endanger the health of people and ecosystems everywhere. Less than a century after these chemicals entered the world, nowhere is pristine.


This article has been adapted from Mariah Blake’s forthcoming book, They Poisoned the World: Life and Death in the Age of Forever Chemicals.

In the morning weekday rush, any breakfast will suffice. A bowl of cereal, buttered toast, yogurt with granola—maybe avocado toast, if you’re feeling fancy. But when there’s time for something heartier, nothing satisfies like the classic American breakfast plate, soothing for both stomach and soul. No matter where you get the meal—at home, a diner, a local brunch spot—it’s pleasingly consistent in form and price: eggs, toast, potatoes, and some kind of salty, reddish meat, with orange juice and coffee on the side. Pancakes, if you’re really hungry. If you’re craving a filling, greasy, and relatively cheap meal, look no further than an all-American breakfast.

The classic breakfast hasn’t changed in roughly a century. A Los Angeles breakfast menu from the 1930s closely resembles that of my neighborhood greasy spoon in New York; diners from Pittsburgh to Portland offer up pretty much the same plate. The meal’s long-lived uniformity—so rare as food habits have moved from meatloaf and Jell-O cake to banh mi and panettone—was made possible by abundance: Each of its ingredients has long been accessible and affordable in the United States.

But lately, breakfast diehards like me have noticed a troubling change. At my neighborhood diner, a breakfast plate that cost $11.50 in 2020 now costs $14—and it isn’t just because of inflation. Although all kinds of food have gotten more expensive in recent years, traditional breakfast has had a particularly rough go of it. The cost of eggs has soared; supply shortages have driven coffee and orange-juice prices to historic highs. And that’s not even taking President Donald Trump’s tariffs into account. “Milk, sausage, certainly not coffee—these things are not going to get cheaper,” Jason Miller, a supply-chain-management professor at Michigan State University who researches the impact of tariffs, told me. The stream of staples that have made American breakfast so cheap for so long is now starting to sputter.

Breakfast can symbolize an entire nation: the full English, the French omelet, Belgian waffles. In many ways, America’s plate chronicles the nation’s history. Reverence for bacon and eggs was partly inherited from the English; a vigorous public-relations campaign later cemented its popularity. In the 18th century, the Boston Tea Party helped tip the nation permanently toward coffee, and Scotch-Irish settlers kick-started American potato growing in New Hampshire. With the Industrial Revolution, access to these and other breakfast foods exploded: Bacon was packed onto trains carrying mass-produced eggs, milk, and potatoes across the country. In 1945, the invention of frozen concentrated orange juice gave all Americans a taste of Florida.

But if breakfast was once a story of American innovation and plenty, it is now something different. No food captures the changes better than eggs. Since 2023, bird flu has wiped out henhouses, leading to egg shortages that have intermittently made buying a carton eye-wateringly expensive. Profiteering in the egg industry may also be keeping prices high: “When there are these horrible bird-flu outbreaks, the producers are actually making a lot more profit,” Miller said. After peaking at more than $8 for a dozen in February, the wholesale cost of eggs has come down, but a carton still costs double what it did at the start of 2020.

Ordering eggs at a restaurant will put even more of a dent in your wallet. Earlier this year, the breakfast chain Waffle House imposed a temporary 50-cent “egg surcharge,” and Denny’s followed suit with a surcharge that varies by region. (Denny’s and Waffle House did not respond to a request for comment.) At restaurants, the price of eggs probably won’t return to pre-bird-flu levels anytime soon, even when outbreaks subside. “In general, stuff tends to not get cheaper,” Miller said. And any reprieve from egg shortages is likely to be short-lived: Scientists predict that bird-flu outbreaks will return year after year, unless the virus is brought under control. Until that changes, the tradition of centering eggs in the morning meal will be costly to uphold.

Another factor endangering the classic breakfast is climate change. The global coffee supply has fallen precipitously because of extreme weather in Brazil and Vietnam, which together produce more than half the world’s beans. Since January 2020, the shortages have driven up the retail price of ground coffee by 75 percent. So far, coffee importers have shouldered most of the rising costs to shield consumers, but “eventually something has to give,” Miller said. Orange juice is likewise drying up. As I wrote in February, all-American orange juice barely exists anymore because Florida’s citrus production has plummeted 92 percent in the past two decades. The spread of an incurable disease and a spate of grove-destroying hurricanes have forced juice companies to rely heavily on oranges imported from Brazil and Mexico. Climate change has also messed with the supply of non-breakfast food, such as chocolate, but it has particularly hammered our morning routines. Even add-ons to the classic breakfast, such as bananas and blueberries, have been in short supply because of extreme weather.

And now the syrup on the pancake: Trump’s trade war is poised to make matters worse. The current 10 percent tariff on most imported goods is just a preview of what could come this summer, if the president’s wider reciprocal tariffs take effect. You can’t exactly grow coffee in Iowa; most of America’s supply is imported from Latin America, and the rest from Vietnam, which could face a 46 percent tariff. Eggs and orange juice are easy to think of as all-American products, but imports have shored up our supply. The Trump administration has turned to Turkey and South Korea to help keep eggs in stock at your grocery store, but those imported cartons might soon be subject to steep tariffs.

Even potatoes aren’t immune. Though spuds are the most widely produced vegetable in the U.S., Americans love them so much that the country has become a net importer of them: Canada alone provided $375 million worth of potatoes in 2024. All of those potatoes need to be cooked somehow—often, in canola oil also produced in Canada. Most Canadian foods are exempt from tariffs for now, but considering Trump’s ongoing feud with our northern neighbor, taxes seem like only a matter of time. Even if you don’t eat the classic American breakfast, tariffs are likely coming for your morning meal: Bananas, avocados, berries, maple syrup, and lox, among other foods, are at risk of price increases from tariffs.

Some elements of the breakfast plate are safe—for now. America is a grain-producing powerhouse, so foods such as toast, pancakes, and waffles aren’t expected to become wildly pricey. Bacon and sausage will probably be fine too; if China stops importing U.S. pork as a result of the trade war, there will be an even bigger supply at home, Miller said. A tariff-ridden future could shift more homegrown foods onto the breakfast plate: sausage and pancakes, ham and toast, with a glass of milk to wash it down. Of course, people eat plenty of other foods for breakfast, and these alternatives may just become more popular: Greek yogurt, oatmeal, cereal. Still, a crucial part of breakfast that can’t be overlooked is the cookware used to make it. The majority of America’s toasters, microwaves, coffee makers, juicers, and pans come from China, which currently faces a 145 percent tariff.

Yes, seemingly everything has become more expensive in recent years, and tariffs risk raising the cost of many goods. But it hurts most when higher prices affect the things we count on to be inexpensive. The defining characteristic of the American breakfast is not bacon and eggs, or toast or coffee, but its affordability. Diners proliferated near factories because working-class people knew they could fill up on a classic plate after an overnight shift without fretting about the cost. Now stepping out for a diner breakfast can require a level of budgeting once reserved for fancy brunch.

Whether or not a trade war escalates, the notion of the classic American breakfast is in peril—as is the vision of the nation it once symbolized. The forces affecting orange juice, coffee, and eggs are far harder to control than economic hostility. For the time being, eggs, bacon, and all of the other foods that make up the American breakfast are still available. But if the plate is no longer cheap, it just won’t be the same.
