The human head louse has a ghostly quality. It tends to glimmer in and out of view, leaving only subtle signs and omens of its presence. Is that oblong speck an egg sac or a flake of dandruff? Was that a prickle on your scalp? Is it normal that your son is scratching just behind his ear? Maybe you have lice and he has lice, and you’ve all had lice for weeks. The possibility is frightening. The uncertainty leads to madness.

The louse evolved to be intrepid and sneaky. Its behavioral imperative is simple and relentless: “They are naturally negatively geotropic,” Ian Burgess, a medical entomologist who runs a company that tests insect-control products, told me. “They will always climb upwards towards the head.” He recalled a day when one must have fallen on his shoe during a comb-out in his lab. He was driving home that afternoon and noticed that the bug was sitting on his knee, apparently confused. “It had climbed to the highest point it could get, and it didn’t know where to go from there.”

Once the bugs ascend, they suck your blood and attach their eggs to the roots of your hair. Within a month or two, your skin might start reacting to the parasites’ saliva, and some degree of itching will ensue. But otherwise, a case of head lice has no ill effects. “To say the truth, head lice are not a real medical problem,” Kosta Mumcuoglu, a lice entomologist at the Hebrew University of Jerusalem, told me. Still, their presence is unwelcome, to say the very least. Emergency scalp checks, precautionary treatments, instant-onset symptoms of delusional parasitosis: “It is definitely a psychological, emotional problem,” Mumcuoglu said.

Mumcuoglu is an expert, but this was not a useful insight. I have two kids in elementary school. I am quite familiar with the psychological, emotional problem that is lice.

For decades now, experts have been trying to convince Americans that the plague of Pediculus humanus capitis is very mild, and that it doesn’t really merit drastic measures; for decades, too, parents have heard the opposite from schools, in the form of urgent letters saying lice have been detected and countermeasures are essential. Along the way, we’ve heard claims and rumors that the head-lice situation is deteriorating—that outbreaks are increasing, that drug-resistant superbugs are taking over. Yet the basic facts of lice remain the same. The parasites are programmed to get up to our heads. We are programmed to let them get inside our minds.


The modern lice wars got their start in Newton, Massachusetts. Deborah Altschuler’s son had lice, and his school implied that this was her fault—that a family like hers would not be welcome anymore. It seemed to her just then, in the early 1980s, that the school itself should have taken more responsibility, that its policies on lice should have been clearer. For that matter, shouldn’t everyone, from parents up to politicians, have been more informed about the problem?

The group she formed out of her home, first called Parents Against Lice and later the National Pediculosis Association, would advocate for aggressive lice-check protocols and more systematic rules for expelling kids from class. If this was not a movement to abolish lice, then it was at least an all-out bid for taking head lice seriously as a public-health concern. With the help of several entomologists at Harvard and the University of Massachusetts, Altschuler pressed the case. At one point, she proposed that head lice were a vector for the virus causing AIDS, spreading it from scalp to scalp. “I felt that AIDS was a wake-up call,” she told me. “We got lucky that it wasn’t insect borne, but it could’ve been.”

[From the September 1987 issue: AIDS and insects]

Altschuler also worried that the common treatments of the time—various insecticide shampoos either used alone or, more distressingly, in combination—were causing harm to kids. In this and other ways, she was an early incarnation of the MAHA mom: incensed about the failings of the public-health establishment, inclined to do her own research, worried about toxic products and the companies that manufacture them. And in the ’90s, her approach to lice caught on. A new industry of lice consultants and lice-removal salons began to form. Altschuler herself became a known expert in the field. (Her proudest moment, she said, was speaking to the Armed Forces Epidemiological Board in 1992, not long after Operation Desert Storm.) Eventually she’d have a dozen people working for her association, taking about 100 calls a day and selling T-shirts with messages such as Keep your wits, not your nits and Don’t let your child become an egghead.

But even as this wave was gaining strength, a sort of countercurrent formed beneath it. One of Altschuler’s Harvard contacts, a public-health entomologist named Richard Pollack, had begun to worry that the newly fostered panic was unfounded. He knew that the bugs were harmless in themselves; more than that, he believed that they might be far less prevalent than many people suspected, especially the ones who had been reaching out to him with horror stories: “They were telling me that lice had become so common, so abundant, that I could go into any elementary school and collect thousands of them in a morning.” But when he followed up on this idea, and started doing field research on grade-schoolers’ heads, the results weren’t really that alarming. Misdiagnosis was indeed a rampant problem, and the people who were the most upset—the ones who’d put their families through repeated treatments—were the least likely, as he saw it, to actually have lice. “They were stuck in this vicious cycle,” he told me.

A folk entomology of head lice had taken hold, and it entailed a range of false beliefs. Pollack sometimes heard it said that lice could jump from one child’s head to another, and that they were infesting kids from classroom desks and rugs. None of this was true, he said; in general, contagion happens only through direct head-to-head contact—and a louse that falls off a kid at school will soon dry out, infesting no one else. (Mumcuoglu told me that parents needn’t worry about lice-infested furniture or bedding either.)

Even worse, for many schools and lots of parents, the mere presence of a nit, or even a nitlike object in a child’s hair, was prima facie evidence of a dangerous contagion. But a louse’s empty egg sac can remain adhered to growing hair for months, like a shell casing on a dormant battlefield. This is one reason why studies find little benefit from screening kids in class: False positives abound. In 2006, Mumcuoglu estimated that two-thirds of all lice treatments in the U.S. are given to children who don’t have an active infestation. And because many schools send kids home if they’re suspected of having lice, those false positives may add up to as many as 24 million lost days of school. (No one has tallied the time and tears wasted on unnecessary pillow quarantines and the bagging of stuffed animals.)

If Altschuler and her group hoped to raise awareness and concern, then Pollack sought, in part, to tone things down. The two of them had worked together in the ’90s, but inevitably they had a falling out. “Early on, I saw him as a wonderful ally and a helpful person to the cause,” Altschuler told me, “but then he started trying to become us, in his own way, with information that was inconsistent with ours.”

In particular, Pollack and the other head-lice moderates took aim at the unforgiving “no-nit” policies in schools, which might exclude a child from the classroom on the basis of a single empty egg sac, and they were winning some important hearts and minds. In 2002, when the American Academy of Pediatrics put out its first clinical report on treating head lice, its authors called no-nit policies “detrimental” and “a response to infestations that is out of proportion to their medical significance.” Two years later, the nation’s school nurses—who have been focused on the scourge of head lice since their profession’s founding in 1902—followed suit. Eventually, some school systems would ease up too. In New York City, where I live, the public schools that serve a million students had done away with no-nit policies by 2008.

These were salutary changes, to be sure. Fewer kids would be ejected from the classroom, and presumably some degree of lice-related learning loss was curbed. But the new guidelines did not eliminate the broader set of problems that the parasites create for children and their families. Even though some schools now tolerate nits on children’s heads during the school day, they still communicate aggressively with parents on lice-related matters, encouraging frequent checks at home and better-safe-than-sorry interventions.

No one ever articulates the rationale for treating the mere possibility of lice with more concern than a cold or even COVID. (No one ever gets a letter home from school saying A case of rhinovirus has been detected in your classroom.) But the thinking surely boils down to this: It’s icky to imagine that your child’s hair—and yours!—might at this very moment be aswarm with bloodsucking bugs. Once the lice have gotten to your head, and in your head, no amount of “Just calm down” can make them go away.


If the fear of lice can be recalcitrant, the lice themselves are even more defiant. “It’s an insect that is abnormally subjected to stresses on its survival,” Burgess, the entomologist who tests insect-control products, told me. A louse’s clawlike feet hold fast in both a shower and a swimming pool. Its physiology weathers perfumes and soaps and the bristles on our hairbrushes. And given time enough, its genome may adapt to shrug off almost any poison we apply.

You don’t need to hire experts to dislodge an infestation—humans have managed to delouse themselves since deep antiquity—but the process takes some work, along with proper information. I’m sorry to say that the latter isn’t always near at hand. The New York City schools, for instance, suggest that parents give their kids chemical treatments, naming two kinds in particular—permethrin and pyrethrins—that insect evolution rendered obsolete long ago. America’s head-lice problem will never improve, Mumcuoglu told me with some annoyance in his voice, so long as our institutions insist on making this mistake, particularly about permethrin. (In an emailed statement, the New York City Department of Health said that over-the-counter, permethrin-based products are “an accessible starting point for many families,” but other products may be necessary if “there is treatment failure or suspected resistance.”)

In fact, we do have many other ways of getting rid of lice, even so-called drug-resistant superlice. Ivermectin may not be an anti-cancer, anti-COVID wonder drug, but it’s miraculous at treating parasites: A topical ivermectin cream can likely cure your child of an infestation, and may be purchased over the counter. Ivermectin pills—which could be even more effective—can be prescribed off-label. (Dawn Nolt, the lead author of the American Academy of Pediatrics’ head-lice report, told me that ivermectin may be upgraded to a first-line treatment in the next update to the guidelines.) I happen to be partial to dimethicone, a lubricant that will coat a louse’s body and seal up the holes it uses to get rid of excess water. Since dimethicone’s effect on lice is physical instead of chemical, it may be less likely to provoke resistance over the long term. But Burgess, who first identified dimethicone as a lice-killer, warned me that he’s seeing signs of its waning potency. (Perhaps the louse’s body has been changing shape, he said, and its excretion holes are now harder to plug up.)

[Read: How ivermectin became right-wing aspirin]

And then, of course, there is the fine-toothed comb—a medical device that seems to have been in use by human populations for at least 5,000 years. (Archaeologists have found lice on human mummies, and lice combs in mummies’ tombs.) Combing, when performed with diligence, can sometimes be effective on its own, even if it’s also highly, highly inconvenient. Proper combing technique involves sectioning out and clipping up the hair in strips, then combing out each section repeatedly while inspecting for lice and nits that may be the size of sesame seeds. After that, one might need to repeat the entire procedure as soon as two or three days later. Strangely, the AAP guidelines say this process might be beneficial, head lice notwithstanding, in the sense that it allows “a caregiver and child or adolescent to have some close, extended time together.” Extended? Yes. Close? Please be serious. I asked Nolt why the AAP was spreading this absurd misinformation. “We were trying to have a silver lining,” she said.

But there is no silver lining, I’m afraid. In the end, the lice wars have only brought us back to where we started, and also where we’ve always been: worried, inconvenienced, and confused. Pollack says he’s proud of what his 40 years of advocacy accomplished—“an awful lot of kids stayed in school, rather than being sent home,” he said. But he also knows that certain head-lice myths have never gone away, and maybe never will.

When I spoke with Altschuler, she lamented the idea, sometimes floated by today’s head-lice experts, that head lice aren’t so important. “They are important for the people who have them,” she said. This, at least, is a scientific fact.

When he was interviewed onstage at the Conservative Political Action Conference on Saturday, Health and Human Services Secretary Robert F. Kennedy Jr. was asked a question unlikely to be on anyone’s mind in the midst of upheaval in the department he oversees and a conflict in the Middle East: “Who’s stronger—you or Secretary of War Pete Hegseth?”

The exchange was emblematic of the role that Kennedy and other HHS officials played during the four-day conference. As some MAGA attendees grumbled over the war in Iran, they were met with a whole lot of MAHA. Kennedy went after Froot Loops and bemoaned how Americans don’t know how to cook anymore. Mehmet Oz, the Centers for Medicare & Medicaid Services chief, warned about hospice fraud in California. And Jay Bhattacharya, the director of the National Institutes of Health and the acting director (of a sort) of the CDC, explained the value of repurposing already-approved drugs for new diseases. “This sounds geeky,” he told the crowd, “but it is really, really important.”

Perhaps the administration thought that leaning into health would distract from what’s going on in Iran—a sore subject for Republicans who want President Trump to focus on domestic issues. Kennedy did say, unprompted, that he believed that his war-averse uncle and his father would have approved of the military action. But he also was the most prominent member of the Trump administration to take the stage at CPAC. Notably absent was anyone named Trump or any official involved in the decision to bomb Iran.

If that was indeed the administration’s strategy, it didn’t seem to work among MAGA world’s staunchest opponents of Trump’s recent military action. Although a poll of this year’s CPAC attendees found that 89 percent approve of the administration’s actions in Iran, others I spoke with worried about the specter of another “forever war” in the Middle East. As Madeline Elizabeth, a Republican strategist who attended CPAC, told me, “I think that the MAHA movement is almost the only thing that’s ‘America First’ about this administration.”

As he has in his other recent public appearances, Kennedy mostly stuck to his talking points. He touted what he considers to be his wins at HHS—notably, the flipping of the food pyramid to emphasize protein consumption—and insisted that the president was “on my side on virtually every issue” when Kennedy decided to endorse Trump in 2024. But Kennedy didn’t mention his remaking of the CDC’s vaccine-advisory committee or the changes to the recommended childhood-immunization schedule, which are easily among the most consequential policies of his first year in office. Perhaps he avoided doing so because, as The Washington Post has reported, the White House has instructed him to stop taking action on vaccines for fear of Republicans losing the midterms. (The HHS spokesperson Andrew Nixon told me, “We remain focused on the priorities Americans consistently say matter most to them, including tackling chronic disease, improving nutrition and food quality, and lowering the cost of care and prescription drugs.” The White House did not respond to a request for comment.)

The closest Kennedy came to any talk of immunization was when he mentioned that, growing up, he didn’t know any kids who had autism—a seemingly veiled reference to his long-standing belief that vaccines have contributed to the rise in autism since the 1990s. That rise, according to experts, is largely due to better surveillance and broader diagnostic criteria. When Bhattacharya did talk about vaccines—to praise the shingles shot and to tease research on whether it might reduce the risk of Alzheimer’s—the crowd was unmoved. Bhattacharya’s only real applause line came when he said, “It’s no longer Tony Fauci’s NIH.”

[Read: RFK Jr. is losing his grip on the CDC]

Most of the CPAC-goers I spoke with told me they loved Kennedy. Usually in those words: I love him. They weren’t always sure about the specifics of his agenda, but they liked that he was in favor of being healthy. A CPAC attendee named Michael Smith, who was promoting the posting of the Ten Commandments in public schools and dressed as Moses—complete with a staff and a stick-on beard—told me that Kennedy was “getting us back to the Garden of Eden diet.” Several people, including a woman who said she had lost 50 pounds by eating only meat, told me their personal health stories. (Kennedy recently acknowledged that he’s on the carnivore diet.) Everyone, it seemed, had read Kennedy’s best seller, The Real Anthony Fauci, and many volunteered to me that they had not gotten the COVID vaccine.

The Kennedy adoration created an odd sense of tension at the meeting. It’s true that some GOP strategists have argued that the MAHA coalition is the key to Republicans winning the midterms later this year. For the most part, they’re talking about health-conscious voters who might be persuaded to back Republican candidates. Perhaps it should be no surprise that, at CPAC, the fans are MAGA first, with MAHA as a kind of bonus.

But at the same time, the MAGA loyalties clearly supersede the MAHA hype. It was striking how no one seemed to be there solely, or even primarily, to support MAHA. At one point during Kennedy’s fireside chat, the interviewer, Mercedes Schlapp, asked attendees whether they were “MAHA moms.” In a crowd of several hundred, maybe a dozen hands shot up. Scanning the crowd, I saw no shortage of Trump-themed apparel, but no one wearing MAHA T-shirts or hats. Talking about MAHA priorities “doesn’t electrify anybody, and quite honestly, it’s not the conversation, like, broadly, that’s being had,” Vish Burra, a Republican strategist and MAGA provocateur, told me. (Last fall, Burra was fired from his job as a producer for One America News Network after he posted an anti-Semitic AI-generated video on his personal X account. He later deleted the post.)

[Read: The meme-washing of RFK Jr.]

Kennedy, who recently underwent surgery for a rotator-cuff injury, ended up telling Schlapp that Hegseth might have the edge in a test of strength: “He’s got a couple of pounds on me.” But even if Hegseth can bench more than the HHS secretary, at the country’s best-known conservative gathering, Kennedy was the one the Trump administration seems to have tasked with lifting up its disappointed fans.

Gurpreet Dhaliwal sat onstage in a hotel ballroom in Minneapolis. The gray curtains behind him were illuminated by bright blue lights, giving the slightest hint of performance at an otherwise typical medical conference. The presentation was among the most anticipated at the Society to Improve Diagnosis in Medicine’s 2022 meeting. The attendees were there to watch a kind of showcase: a complex diagnosis in action.

Dhaliwal, a professor of medicine at UC San Francisco, was given the details of a patient he had never seen before. As another physician slowly revealed pieces of the case, Dhaliwal narrated his thinking out loud: why he was considering one possibility and rejecting another, and what each new clue revealed for him. Eventually, he decided that the patient was likely suffering from a dangerous buildup of pressure in her abdomen. Left untreated, she could experience organ failure. It was the correct diagnosis, and the audience responded with applause.

Dhaliwal is regarded as one of the country’s most gifted diagnosticians. Colleagues have praised not only his command of physiology but also his ability to make his reasoning legible—to turn clinical uncertainty into something teachable. “To observe him at work is like watching Steven Spielberg tackle a script or Rory McIlroy a golf course,” a New York Times reporter wrote in 2012.

“I appreciate the designation but sort of reject it, only because of my own philosophical stance, which is that it’s very hard to master the diagnostic process,” Dhaliwal told me when I talked with him for my book about diagnosis. He considers himself a student of diagnosis, committed to getting better. “To me, the concept of the master diagnostician is that you’re never good enough.”

That belief puts Dhaliwal on one side of a core question of medicine: Are some doctors inherently better diagnosticians than others, or is diagnostic excellence a skill that any clinician can achieve? Doctors usually get it right—some estimates suggest about 90 percent of the time. But with roughly 1 billion physician-office visits each year in America, even a low error rate can still affect a large number of people. A 2023 study estimated that 371,000 people die a year and 424,000 are disabled following a misdiagnosis.
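To see the scale, consider a back-of-the-envelope illustration (assuming, purely for arithmetic’s sake, that a 10 percent error rate applied uniformly to every visit; this is not a figure from the study):

$$10^9 \text{ visits per year} \times 0.10 \approx 10^8 \text{ potential diagnostic errors per year}$$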

In 2015, the National Academies of Sciences, Engineering, and Medicine published a seminal report on diagnostic error with a startling finding: Most people will experience at least one (such as a delayed, wrong, or missed diagnosis) in their lifetime, “sometimes with devastating consequences.” That report prompted a small but vocal group of physicians and other health providers to look inward. They argue that the number of diagnostic errors is unacceptably high and must be reduced. Dhaliwal has been part of the movement to figure out how.


Some research suggests that many, if not most, diagnostic errors arise from failures in thinking—cognitive bias, premature closure, insufficient reflection. Accordingly, some researchers frame diagnostic error as largely a problem in clinical judgment: the ability to reason through uncertainty and weigh competing explanations in order to reach the right diagnosis and make decisions about care. “Regrettably, how to think in medicine has been a much-neglected area for medical educators, who stalled somewhere in the Middle Ages, or a century or two earlier,” Pat Croskerry, a retired professor of emergency medicine at Dalhousie University in Canada who’s known for his work on cognitive errors in diagnosis, told me.

Dhaliwal credits his abilities to paying close attention to his own thinking. “I do think you can train yourself to be a better diagnostician,” he said. Early in his training, he closely observed the physicians he most admired. Some of them had a knack for identifying rare diseases that evaded their peers. Others mastered the diagnosis of common conditions so thoroughly that they could recognize every permutation of pneumonia. Dhaliwal wanted to excel at both.

But when he asked physicians how to become that kind of doctor, their advice was usually the same: See a lot. Read a lot. It felt unsatisfying. Every physician sees patients. Every physician reads. What, he wondered, truly separates an exceptional diagnostician from a competent one?

He hung on to this question, and about two years after finishing residency in 2003, during a yearlong faculty-development course for medical educators, he encountered a session on clinical reasoning—an emerging field at the time. The physician and medical historian Adam Rodman has described clinical reasoning as “the study of the ability for expert physicians to see what others don’t.” Researchers were beginning to investigate what actually happens in doctors’ minds when they make diagnoses: how they organize their knowledge and put it into practice. Dhaliwal quickly recognized this as the quality he had seen in his role models, even though “they didn’t have a term for it, and neither did I.” The idea of clinical reasoning helped clarify the process; the next question was how to get better at it.

Dhaliwal laid out the key steps of a doctor’s reasoning process: collecting data from a patient; synthesizing that information; accessing “files” in the mind, including the details about diseases and how they present; listing possible diagnoses; and choosing one over others. He also began studying the science of expertise and how people—whether Nobel laureates, Olympic swimmers, or mechanics—become exceptional in their field. “They seek out challenges, whereas most of us instinctively try to minimize challenges after we’re competent,” he said.

They also learn from their mistakes. In a 2017 paper, Dhaliwal wrote that ordinary people develop “extraordinary judgment by extracting as much wisdom as possible from their inevitable errors,” a lesson he drew from Philip Tetlock and Dan Gardner’s book, Superforecasting: The Art and Science of Prediction. But medicine doesn’t make that easy for doctors, who may treat a patient once and never see them again. If the patient’s condition worsens, or they receive a different diagnosis later on from someone else, that information may never make its way back to the first doctor. With these ideas in mind, Dhaliwal set out to sharpen his skills. Today, he works in the San Francisco VA Medical Center’s emergency room, where he sees a variety of illnesses and necessarily follows that early advice to see a lot of patients. But, crucially, he also started keeping track of his own cases so that he could follow up on what happened. When he discovers he was wrong, he tries to figure out why. Did he miss something important? Was he exhausted at the end of a long shift? Did he anchor himself to a particular conclusion too quickly?

“I started to get kind of addicted to it,” he said. He explained that the mind wants closure; without knowing the outcome, people tend to assume that things turned out well. His habit of tracking down a patient’s outcome echoes advice delivered more than a century ago by William Osler, one of modern medicine’s founding figures: “Learn to play the game fair, no self-deception, no shrinking from the truth; mercy and consideration for the other man, but none for yourself, upon whom you have to keep an incessant watch.” Diagnostic mastery, Dhaliwal illustrates, is not a mysterious gift bestowed on a talented few. It is the result of examining one’s own thinking and practice without mercy.


But the reasoning that goes into diagnosis may start to look very different. Since his third year of medical school, Dhaliwal has read The New England Journal of Medicine’s Clinicopathological Conference, or CPC. The CPC is a teaching exercise in which doctors are presented with a real patient’s case and asked to reason aloud toward a diagnosis, similar to Dhaliwal’s Minneapolis presentation. Last fall, Dhaliwal participated in a CPC that put him in competition with an AI agent called Dr. CaBot, a medical-education tool developed by researchers at Harvard Medical School.

Both Dhaliwal and Dr. CaBot reached the correct diagnosis and explained their reasoning step by step. They correctly concluded that the patient had a problem in the upper part of his digestive system, which caused a bacterial infection to trigger sepsis, among other complications. Dr. CaBot didn’t identify the cause of the problem, whereas Dhaliwal deduced, correctly, that the man had swallowed a toothpick, which poked through his gut and caused the infection. He had seen that kind of case before.

That Dr. CaBot’s problem-solving came as close as it did to Dhaliwal’s is both promising and disconcerting: It suggests that machines may be able to match the performance of elite diagnosticians. More formal evidence also indicates that large language models may be able to approximate the kind of clinical reasoning expected of physicians. One study published in July 2024 found that when OpenAI’s GPT-4 examined the medical information of 100 patients in an emergency room, the AI was able to diagnose them with 97 percent accuracy, outperforming resident physicians. (OpenAI’s models have advanced since then.) Another study found that ChatGPT scored higher on a clinical-reasoning measure than internal-medicine residents and attending physicians at two academic medical centers. Other studies have been more mixed.

Serious concerns about reliability, sycophancy, and hallucinations remain. But in some ways, what a diagnostician does is not so different from what AI claims to do. Both use enormous amounts of information to recognize patterns in symptoms and diagnoses that tend to appear together. A doctor does this through medical education and personal experience; AI does it by predicting plausible explanations based on statistical patterns it has learned from its training materials.

“This is an electric moment in medicine,” Mark Graber, a physician and co-founder of the nonprofit Community Improving Diagnosis in Medicine, told me. “If you can come up with an AI agent that’s as good as Gurpreet Dhaliwal, that is an amazing accomplishment that will surpass the abilities of 99.9 percent of doctors.”

How medicine embraces any of this is an open question. Perhaps AI will strengthen clinicians’ reasoning and close the gap between the Dhaliwals and everyone else. Or it could become a crutch for clinicians, and lead them to lose skills. A 2025 study found that after just three months of using an AI tool to find precancerous growths during colonoscopies, doctors were less likely to identify the growths on their own.

For his part, Dhaliwal is equanimous. “I think AI is going to transform health care radically. I don’t think it’s going to change doctoring radically,” he said. He believes that AI is likely to perform best at the extremes of diagnosis: the very simple cases (such as a poison-ivy rash) and the very complex ones (rare or novel diseases). In the not-so-distant future, people may be able to get answers to routine medical questions at home—What’s this spot? Is my cough concerning? How’s my blood pressure?—without ever needing to see a physician. That may be entirely appropriate, because attending to these everyday concerns usually does not require sophisticated clinical judgment or nuanced decision making.

AI could also prove valuable in identifying conditions that a physician may never encounter in their career, or in helping diagnose patients who have stumped multiple clinicians. These cases tend to hinge on how encyclopedic a doctor’s knowledge of the medical literature is; AI can recognize obscure patterns across millions of cases and publications, and surface possibilities that may lie outside any single physician’s experience.

“What I think is less likely to change is sort of the muddy middle, which is what I think the vast majority of medical practice is,” Dhaliwal said. Much of medicine involves choosing between possibilities: Does a person have an infection, an allergic reaction, or an autoimmune disease? Is it a psychiatric or medical issue? AI could certainly help parse through the options. But medical judgment goes beyond identifying what’s most likely; it involves deciding what the diagnosis means for a particular patient. Two people diagnosed with the same cancer may desire different futures. One may want the most aggressive treatment available, whereas the other may decline interventions that would trade quality of life for longevity. These are value-laden decisions that, at least for now, still require something irreducibly human to navigate. An LLM can recite treatment options and survival rates, but it cannot share responsibility for the choices that follow.

Relying on AI for certain aspects of diagnosis could help free doctors to focus on those more human parts of the job. In the United States, more than 100 million people don’t have a primary-care provider, and the profession itself is dwindling. “If in some form AI is able to beat us, or help us improve our ability to do clinical reasoning, you don’t have to be the smartest person in the room to be a physician, which I think is better for the community,” Jeffrey Goddard, a medical student at the University of Iowa who uses chatbots in his training, told me. A diagnosis, most simply, is an answer to the question What is making me ill? But it can offer much more than that—reassurance, coherence, and, ultimately, relief. Not all of that can be outsourced.


This essay was adapted from Alexandra Sifferlin’s book, The Elusive Body: Patients, Doctors, and the Diagnosis Crisis, published today.


“What do you mean, you just take the stomach out?” Karyn Paringatai wondered, when doctors first said her stomach had to be surgically removed. Could she still eat? Yes, but differently. What would replace it? Nothing. She would have to live the rest of her life missing a major organ.  

Paringatai was not actually sick, not yet. Her stomach was fine. But her cousin, just a few years older, had recently died of an aggressive stomach cancer at age 33, leaving behind three children. In a video recorded after her diagnosis turned terminal, the cousin told her little kids to be good for their father. “Please don’t be too mean to the lady that he meets,” she added, anticipating how the void left by her death might be filled. But she must have known that this void could not be filled, not ever. The cousin’s own mother had died young of stomach cancer. So had her grandmother. So had her sister.  

To the doctors who saw Paringatai’s cousin in Tauranga, New Zealand, this pattern was hauntingly familiar. Her cancer was an unusual and distinct kind called diffuse gastric cancer, in which cancerous cells percolate undetected through the stomach, forming obvious masses only in advanced stages—usually too late to treat. The doctors had witnessed the same rare cancer run through a large Māori family near Tauranga. In that family, one woman lost six of her siblings to stomach cancer; a boy had died at 14. The family now reached out to Paringatai’s. It’s genetic, they said. You have to get tested.

Paringatai, whose father was also Māori, got tested. And indeed, she carried a mutation in the same gene, known as CDH1, as the other family. This gave her a 70 percent lifetime risk of developing advanced diffuse gastric cancer. Because this form of cancer can metastasize so quickly and unpredictably, the only surefire method of prevention is a complete removal of the stomach, or total gastrectomy. It’s analogous to a preventive mastectomy for breast cancer—but far more physically taxing. A number of women with a CDH1 mutation have actually had both their stomachs and their breasts removed, because this mutation can also confer a 40 percent risk of breast cancer. One of them told me, about her gastrectomy, “If you can do that, you can do anything.”

Paringatai’s surgeon could not answer all of her questions about living without a stomach—her total gastrectomy was the first he would ever perform in a healthy person. But she went through with the procedure in 2010, and she credits it with saving her life. In the operating room, her surgeon made a long incision down her abdomen, cut out the fist-size pouch of her stomach, and stitched her esophagus to her small intestine. She was the first in her family to have her stomach removed prophylactically. Others followed. On a recent trip to visit her father’s family, Paringatai found herself sitting on a porch with her aunties and cousins. Of the eight people there, she realized, only one still had a stomach: her partner. “You’re the odd one out,” she teased.

Mutations in CDH1 seem to be unusually prevalent in Māori families, where they arose multiple times, possibly because they once conferred some evolutionary advantage. But mutated versions of CDH1 have been found around the world too, and thousands of patients have likely now had gastrectomies to head off cancer.

In New Zealand, “we’re coming up to nearly 30 years of people living with no stomachs,” says Paringatai, who is now a Māori-studies professor at the University of Otago. For the past several years, she has been documenting the experiences of Māori with CDH1. That people can live this long without a stomach is a testament to the adaptability and resilience of the human body. That doctors resort to such radical measures exposes the limits of what modern medicine can offer.

The first Māori to undergo prophylactic gastrectomies were members of the family that warned Paringatai’s about the cancer gene. They knew all about the gene because they had helped find it. Back in 1994, Maybelle McLeod contacted a genetics lab at the University of Otago about the premature cancer deaths stalking her relatives. Among themselves, she told me, “nobody even talked about it.” The family believed they lived under a curse for letting their land be sold for a quarry. McLeod grew up listening to the quarry’s warning sirens, learning to take cover indoors before the blasting began. She watched as the hill near her home was stripped bare.

McLeod eventually moved away, became a nurse, and learned of the then-nascent field of cancer genetics. This, she thought, explained the so-called curse. The geneticist she contacted, Parry Guilford, agreed to take the case. But her family still had to be persuaded to trust this pakeha, this white man, with their DNA. Over a series of meetings—attended by as many as 100 members—Guilford explained that his motive was the same as McLeod’s: to find the cause of so much death. They ultimately agreed to a contract under which only the family, not Guilford, could directly approach members and gather their DNA samples. From there, the gene mapping went quickly, and scientists homed in on CDH1. The gene encodes a protein that normally orients and aligns cells in the stomach; without it, the cells become lopsided, rogue, and possibly cancerous. Any parent with a mutation in the gene has a 50 percent chance of passing it on to their children.

This breakthrough meant that a genetic test could now reveal who was at risk of diffuse gastric cancer; the family would no longer have to live in fear of where cancer would strike next. McLeod herself tested negative for the mutation. She was in the clear.

But those who tested positive for the mutation now faced an agonizing new dilemma. Doctors could not guarantee that endoscopies, even annual ones, would reliably catch such an aggressive cancer in time. Total gastrectomies had been performed before, in patients whose stomachs were already riddled with tumors—but never routinely in healthy people who did not have cancer and might never develop it at all. So now they had to choose: a 70 percent chance of deadly cancer, or surgery with a 100 percent chance of significant side effects?

Rangi McLeod, who worked alongside Maybelle in urging relatives to join the study, was the first of the family to test positive for the CDH1 mutation. Not long after, doctors found a tumor in his stomach. It’s not all bad news, Guilford recalls Rangi saying. “I can lead my family to the next stage.” His gastrectomy would no longer be strictly prophylactic, but he’d have his stomach out, he’d recover, and he’d show anyone who still feared the surgery that it was safe. Rangi did not recover, though. He fell into a coma after the new connection between his esophagus and intestine grew weak. He died a few weeks later. “The whole project almost fell apart immediately on the spot,” Guilford told me.

In time, the family decided that Rangi would not want them to stop—he would not want for their children and grandchildren to continue to succumb to cancer. The family found a more experienced surgeon in a bigger hospital in Auckland; the next 10 gastrectomies were a success. Since then, stomach cancers in the family have plummeted. And the hill where the quarry once operated is green again. The land, it turned out, had not been sold but taken by the government, and was returned to the family. In any case, the curse, some in the family said, had finally lifted.

A successful gastrectomy looks like this: For at least the first six months, your life revolves completely around food, and not in a fun way. You eat tiny portions 10 times a day. You have to chew, chew, chew food like gum to make up for the lack of a stomach. Your digestive system spews from both ends. Your blood sugar rises and crashes unpredictably; you faint at the worst times. You are tired all the time. You lose a lot of weight, which might feel welcome at first but then feels scary. You are unable to work for a couple of months—longer if your job is physical or your recovery is difficult. About one in 10 patients has complications serious enough to warrant hospitalization, according to studies done in U.S. hospitals.

Gradually, the upper part of the intestine adapts into a sort of stomach. You start eating larger portions, less often. You gain weight. You still need regular shots of B12, which you cannot absorb without a stomach. But several people more than 10 years out from surgery told me they eat almost normally—with only small, lingering quirks. Plain water, for example, can be strangely hard to drink, possibly because of its surface tension, while flavored water goes down fine. Young women who have had their stomach out routinely go on to get pregnant and have healthy children.

Still, those who ultimately recovered well knew of family members who continue to struggle years later with nausea or reflux or fatigue. One of Paringatai’s cousins left her teaching position because she could no longer physically keep up in the classroom. Another cousin, Isaia Piho, was a firefighter. He, too, switched to a less demanding job afterward. Isaia and his younger brother, James, told me they had watched their mother die of stomach cancer. They are fathers themselves now, and they did not want their children to experience the same.

But not everyone who weighs the odds decides on surgery. Guilford knows another guy, also a firefighter, who decided to keep his stomach. “I run into burning buildings every second day,” he told Guilford. “I’m good with risk.” He’s chosen to have regular endoscopies instead. Still others might prefer not to know that they carry a CDH1 mutation. In the McLeod family, a young man in his 30s who went untested recently died of cancer, Guilford said. Diffuse gastric cancer has remained stubbornly difficult to treat over the three decades he’s been studying it. At the stage when it can be easily detected, the survival rate is just 20 percent.

Doctors are still trying to fully grasp the long-term consequences of losing a stomach, which makes it more difficult for the body to take up nutrients. “We’re learning that gastrectomy severely impacts bone health in the long run,” Daniel Coit, a surgical oncologist who recently retired from Memorial Sloan Kettering Cancer Center, told me. The loss of stomach acid may make calcium harder to absorb. As these patients age and continue to lose bone density, they will be particularly vulnerable to fractures.

Coit, who performed numerous prophylactic gastrectomies in his career, thinks the social and psychological consequences of losing a stomach deserve more attention too. He recalled one family in which multiple people died prematurely, from suicide or alcohol use, after their surgeries. Did the procedure lead directly to their struggles, or unmask an existing predisposition? His example is only anecdotal, he said, but the issue should be studied.

This is anecdotal as well, but alcohol also came up repeatedly in conversations I had with people who have been through gastrectomies—without me asking. Either they themselves started drinking too much, or they had family members who started drinking too much.

James Piho told me he drank to numb his fear of cancer and then he drank to numb his depression after his surgery, when he was unable to provide for his daughter. James actually works in a drug-and-alcohol rehab center, and his experience had him wondering about a link between total gastrectomy and alcohol. Could it be psychological or biological, or both? For people who find drinking plain water uncomfortable, alcohol seems to literally go down easier. And bariatric surgery, in which the stomach is shrunk but not entirely removed, is correlated with a well-documented increase in alcohol-use disorder. Losing even part of a stomach may make patients’ bodies more sensitive to alcohol—two drinks, according to one study, might feel like four. Haupiua Steventon, a member of McLeod’s family who had her stomach removed at 18, got a job at a bar after the long recovery derailed her university studies. “I fell into alcoholism very easily,” she told me. She eventually found her footing and has two kids now, but looking back, she wishes she had been warned about alcohol post-surgery. She wouldn’t have chosen to work in a bar.

In interviews that Paringatai conducted with different generations of McLeod’s family, she observed that some of the younger members struggled more, mentally, post-surgery. The older generation, she told me, had witnessed the deaths of so many “mothers, sisters, fathers, first cousins, children, grandchildren.” Today, young people in the family have not experienced those tragedies firsthand—a sign of progress that nevertheless made the sacrifice of a stomach harder to bear.

“I think we’ll look back one day and we’ll go, ‘Man, I can’t believe that we did such draconian surgery on those people,’” Guilford said. His lab continues to investigate diffuse gastric cancer, with the hope of developing a treatment or drug that makes a total gastrectomy obsolete. In recent years, doctors have become more open to patients choosing surveillance over surgery, especially as it’s become clear that CDH1 carriers with no family history are at lower risk for diffuse gastric cancer—perhaps a lifetime risk of about 10 to 40 percent rather than 70 percent. But even Coit, who is among the more skeptical of surgery, recommends prophylactic gastrectomies for people with a strong family history. Their risk is high. Surgery is the best solution we’ve got. But of course, Guilford said, “people would love to keep their stomachs.”  

Paringatai found that some of the Māori took their stomachs home, rather than allow the hospital to discard the organ as medical waste. In Māori culture, she explained, the body is sacrosanct. They wanted to honor the stomach, thanking it for its service. Several buried theirs on their family land. In a way, this is how they can, for now, keep their stomachs.

Last week, the Department of Health and Human Services finally followed through on a plan it first outlined for several of its top officials nearly a year ago: It reassigned them to positions in the Indian Health Service.

Many of the officials who were sent the reassignments—a group that includes at least half a dozen top-ranking employees at the National Institutes of Health, the CDC, and other agencies—have been on administrative leave since last spring, when they were abruptly ousted from their roles without explanation, or any indication of how long their hiatus might last. So they were shocked last week when, with no preamble, they received phone calls, then a letter, informing them of their new roles and giving them an April 8 deadline to accept or decline.

In most or all cases, accepting these new roles would represent a major career shake-up and force a move across the country: Many senior HHS officials are based in Maryland—where the FDA and the NIH are located—or near Atlanta, where the CDC is headquartered; the recent letters lay out reassignments to places such as Arizona, New Mexico, Oklahoma, North Dakota, and South Dakota. If the officials accept the reassignments, they’ll be expected to report for their new jobs no later than May 26. If they decline, the officials expect to be removed from federal service entirely.

I spoke with two of the letter recipients, along with several former HHS officials who were also placed on leave by the administration last spring; all of them requested anonymity to avoid professional repercussions. For several of the reassigned officials, April 1 will mark the one-year anniversary of when they were put on administrative leave, shortly after HHS initially proposed via email to reassign them to IHS. The two officials who recently received reassignments also told me that last week was the first time they’d heard from HHS since May or June 2025, when they were asked to provide their CVs. After being left for so long in limbo, then given so little time to make this choice, some officials feel like HHS is pretending it didn’t ghost some of its highest-ranking, highest-paid employees for the better part of 12 months. “Honestly, it’s hilarious,” one official told me: HHS did do what it said it would. It just took a year to do it.

When reached for comment, Emily G. Hilliard, HHS’s press secretary, emphasized in an email that HHS was dedicated to improving the IHS and that “each executive who joins IHS will strengthen leadership capacity and support mission delivery.”

IHS is, unquestionably, in need of more staff, especially in its more rural and remote locations. For years, the agency’s vacancy rate has hovered around 30 percent (and, for certain roles, has climbed higher in some regions). Last spring, when dozens of HHS officials were initially put on administrative leave, Thomas J. Nagy Jr., HHS’s deputy assistant secretary for human resources, wrote to them in an email that American Indian and Alaskan Native communities deserve “the highest quality of service, and HHS needs individuals like you to deliver that service.” In January, the IHS also announced what it described as the “largest hiring initiative” in its history to address staffing shortfalls, noting that the effort had the full support of HHS Secretary Robert F. Kennedy Jr., who has described tribal health as a priority.

But the reassigned officials and the tribal-health experts I spoke with both questioned how well the new reassignments fit current IHS needs. The primary feature of the reassignees, as a group, is that they were high-ranking officials with extensive experience in administrative leadership; many were running departments of hundreds of employees or more. Among those who received the proposed reassignment last spring were the directors of several NIH institutes, leaders of several CDC centers, a top-ranking official from the FDA tobacco-products center, a bioethicist, a human-resources manager, a communications director, and a technology-information officer. Meanwhile, IHS’s greatest need is for “hands-on clinical people,” such as physicians and nurses, David Simmons, the director of government affairs and advocacy at the National Indian Child Welfare Association, told me. “People in communications, HR, researchers? Those are not going to be the people who are going to be helpful on a daily basis,” Simmons said. “On some level, I have to ask the question: Why are they sending these kinds of people?”

Last week’s letters, also signed by Nagy, described new IHS positions, many of them located at small hospitals in some of the country’s most rural and remote regions, several officials told me. The roles come with titles such as “Chief of Staff” and “Senior Advisor,” but the letters don’t describe the specific responsibilities attached to those positions. I asked one official whether their credentials lined up in any way with their reassigned role. “Zero,” they told me. If senior-executive officials accept the reassignment, the letters say, they will keep their current salaries—a minimum of about $150,000, though many high-level reassigned officials make far more, two officials told me. The IHS will likely be responsible for the salaries of reassigned officials, one NIH official told me, even though its budget is a small fraction of the NIH’s; the official told me that, as far as they could tell, they would be making about as much as their new supervisors.

To build trust and effectively deliver care, health officials need to be deeply familiar with tribal communities’ needs and should have an understanding of the local culture, Simmons told me. In 2023, American Indians and Alaskan Natives had lower life expectancy at birth than any other racial and ethnic group in the United States; Native people are especially vulnerable to conditions such as asthma, diabetes, and substance-use disorder. Tribes also have a long history of being severely mistreated by the federal government. But the officials I spoke with told me that they were not aware of any reassigned individuals who identified as Native or had extensive background in working with such communities. Last year, Deb Haaland, a member of the Pueblo of Laguna and a Democratic candidate for governor of New Mexico, criticized the reassignment proposals as “shameful” and “disrespectful.” The experts I spoke with also weren’t aware of any attempts HHS had made since to thoroughly consult tribal leaders about these reassignments; in at least one case, when a reassigned official tried making contact with their new hospital, their new supervisor expressed confusion about who the official was or why they were reaching out at all, three current and former HHS officials told me. (Hilliard did not address my questions about whether the IHS or tribal leaders had been consulted about the reassignments, how qualified the reassigned officials were to meet the agency’s needs, or why HHS made the reassignments now.)

Meanwhile, health experts across the country have felt the loss of these officials from top tiers of HHS, especially agencies that focus on public health. “At the local health department level, we depend on their expertise,” Philip Huang, the director of Dallas’s health department, told me.

What prompted HHS to finally end these officials’ administrative leave is unclear; many officials had wondered if their hiatus might stretch on indefinitely, until they themselves chose to resign, as many of their colleagues have. The action may have been triggered by guidance from the Office of Personnel Management, released after the officials were first put on leave and newly effective in 2026, that limits administrative leave connected to workforce reassignment to 12 weeks. The end of March coincides with that limit.

No matter the trigger, the officials I spoke with told me they feel roughly the same as they did a year ago: “They obviously don’t want us to take these jobs, and want us to leave on our own,” one official said. Firing federal officials is difficult, especially without clear cause, and none of the officials I spoke with could identify a valid reason that they or their colleagues had been in federal limbo since last spring. The officials I spoke with uniformly emphasized that filling IHS with qualified people is essential, but added that they didn’t fit the bill. And several officials told me they worry that, should many of the reassigned officials reject the government’s offer, IHS will have a harder time attracting the personnel it needs. HHS’s “goal is to get people out, and I think that has been the goal from the beginning,” another official told me. “It’s cruel and unkind and unprofessional.”

Some of the letter recipients still feel extreme pressure to accept their reassignment. One told me that they’re just weeks away from full retirement eligibility but can’t run out the clock before the acceptance deadline passes. “I might have to move,” the official said. And, under federal policy, if HHS pays for any part of their relocation, they’ll have to remain in a federal job for at least a year. (Early-retirement options do exist, with fewer benefits; another official told me they’re taking this option, and accepting another job elsewhere.) Still, even as officials weigh their decision, they feel a new sense of finality: Their administrative leave is ending, and whatever hope they might have had of returning to the agencies they once worked at is extinguished.

Raw Farm does not sell your typical cheddar. A one-pound block of the aged, GMO-free cheese retails for $16. (Naturally, it’s for sale at Erewhon, the high-end grocery chain.) Some people are willing to pay that kind of premium because the cheese is made exclusively from unpasteurized milk. So is almost everything else that’s sold by Raw Farm, a 400-acre dairy farm in Fresno, California, that is commonly cited as the country’s biggest purveyor of raw milk and cheese. When Robert F. Kennedy Jr. ran for president in 2024, his running mate, Nicole Shanahan, visited Raw Farm and filmed herself on a tour of the milking operations. Raw Farm has carved a very successful niche out of the unfounded belief that raw milk and cheese are more nutritious than the regular stuff.

Pasteurization exists for a reason: It is a time-tested way to make sure that dairy products don’t contain bacteria that can make you sick. And now Raw Farm has found itself in the middle of an E. coli outbreak. The FDA is pointing to Raw Farm’s cheddar cheese as the reason nine people—including multiple children under the age of 5—have fallen ill. Three of the individuals have been hospitalized, and one has developed a serious kidney condition. Regulators have asked Raw Farm to recall the product. Yet the company has refused to do so. Grocery stores are still carrying the cheddar.

A maxim of food safety is that when the government says your product is making people sick, you stop selling it. Sometimes companies are in a “state of shock and disbelief,” Frank Yiannas, a former deputy FDA commissioner who was previously the vice president of food safety at Walmart, told me. “They can’t imagine that it’s truly their product.” In the history of the modern FDA, essentially every company implicated in a foodborne outbreak has agreed to a recall—until now. Democrats in Congress have even tried to place pressure on Raw Farm to pull the product off shelves, but the company is not backing down.

[Read: The real appeal of raw milk]

In this cheddar chaos, Raw Farm has painted itself as the victim. When I spoke with Aaron McAfee, the company’s president, he was quick to note that he takes safety very seriously. Raw Farm has voluntarily recalled its products at the FDA’s urging more than a dozen times. In 2024, Raw Farm immediately pulled its cheddar cheese after it was linked to E. coli. (The company subsequently wrote on its website that the recall was “UNFOUNDED.”) This time, however, the request “just felt wrong,” McAfee told me. He insists that the government’s case is based on “circumstantial evidence” and that his company isn’t at fault. None of Raw Farm’s cheddar has actually tested positive for E. coli.

Food-safety investigations are messy. Regulators need to move quickly to prevent more people from getting sick. Companies are often asked to voluntarily initiate a recall before the government can actually prove that a product is unsafe. At times, the FDA shifts its focus to other foods as the evidence develops: In 2008, the agency warned consumers not to eat tomatoes suspected of being contaminated with Salmonella, but it later identified serrano peppers as the likely cause of the illnesses.

Still, food-safety experts I spoke with were emphatic that the FDA is probably correct about Raw Farm’s cheddar. Despite the lack of a positive test showing that the cheese is contaminated, the agency has two facts to rely on. First, the E. coli strains from all of the patients are closely related, suggesting that they came from the same product. Second, of the eight people whom investigators have been able to interview, seven confirmed that they consumed Raw Farm’s dairy products. “The statistical likelihood of that just being pure chance is almost zero,” Yiannas said.

The agency does have the power to force Raw Farm’s cheese off the market through a legal maneuver known as a mandatory recall. Such a move has little precedent. In 2018, the FDA forced a mandatory recall of a brand’s kratom supplement, which had been contaminated with Salmonella. But the FDA would likely be in a tougher situation this time around. The kratom seller didn’t fight the mandatory recall; Raw Farm would. McAfee told me that he had asked the FDA to pursue a mandatory recall because it would give him the opportunity to appeal. “I was not granted due process,” he said. (Companies can request an “informal hearing” to discuss the order.)

Exactly why the FDA hasn’t moved forward with a mandatory recall is unclear. (I asked a spokesperson at the Department of Health and Human Services, which oversees the FDA; the spokesperson referred me only to what the FDA has already said publicly about the case.) The agency could still be gathering the evidence necessary to justify such a step. Or perhaps Kennedy doesn’t want to order a mandatory recall: He said shortly before his nomination as HHS secretary that the FDA’s alleged “war” on raw milk must end. McAfee claimed that Kennedy is a Raw Farm customer but that he has “not heard anything from D.C.”

[Read: America’s real ‘secretary of war’]

Even if the FDA eventually pushes Raw Farm to pull its products off the shelves—whether voluntarily or through force—that may not be the end of this saga. The reality is that we might never know with 100 percent certainty what caused those nine people to get sick. And doubt about the dangers of unpasteurized products is a reason they are so popular in the first place. The business of raw milk is based on convincing people that the milk is worth consuming despite the FDA’s warnings that it has no proven benefits over conventional pasteurized products and that it comes with an outsize risk of making you sick. If people found the FDA credible, a company like Raw Farm wouldn’t exist.

During my conversation with McAfee, it was easy to see why people might believe him more than they would a nameless bureaucrat. He talked about trusting his cheese so much that he feeds it to his daughter, and he cited FDA regulations like a trained lawyer. When we spoke, he was quick to emphasize all of the tests his company had done to ensure that the cheese was safe, and he referenced the company’s food-safety plan, which spans five binders.

People experiment with all kinds of products because they trust unproven anecdotes over government warnings. But the fact that a company is willing to risk more people falling sick from E. coli because of a belief that the FDA can’t be trusted should be a much bigger wake-up call for the agency. By McAfee’s telling, Raw Farm is the subject of a “witch hunt.” The FDA has the power to regulate the food supply with an iron fist, but its job has historically been much easier because companies have faith that the agency is doing what it can to stop an outbreak. That is no longer a guarantee.

[From the September 1987 issue: AIDS and insects]

Altschuler also worried that the common treatments of the time—various insecticide shampoos either used alone or, more distressingly, in combination—were causing harm to kids. In this and other ways, she was an early incarnation of the MAHA mom: incensed about the failings of the public-health establishment, inclined to do her own research, worried about toxic products and the companies that manufacture them. And in the ’90s, her approach to lice caught on. A new industry of lice consultants and lice-removal salons began to form. Altschuler herself became a known expert in the field. (Her proudest moment, she said, was speaking to the Armed Forces Epidemiological Board in 1992, not long after Operation Desert Storm.) Eventually she’d have a dozen people working for her association, taking about 100 calls a day and selling T-shirts with messages such as Keep your wits, not your nits and Don’t let your child become an egghead.

But even as this wave was gaining strength, a sort of countercurrent formed beneath it. One of Altschuler’s Harvard contacts, a public-health entomologist named Richard Pollack, had begun to worry that the newly fostered panic was unfounded. He knew that the bugs were harmless in themselves; more than that, he believed that they might be far less prevalent than many people suspected, especially the ones who had been reaching out to him with horror stories: “They were telling me that lice had become so common, so abundant, that I could go into any elementary school and collect thousands of them in a morning.” But when he followed up on this idea, and started doing field research on grade-schoolers’ heads, the results weren’t really that alarming. Misdiagnosis was indeed a rampant problem, and the people who were the most upset—the ones who’d put their families through repeated treatments—were the least likely, as he saw it, to actually have lice. “They were stuck in this vicious cycle,” he told me.

A folk entomology of head lice had taken hold, and it entailed a range of false beliefs. Pollack sometimes heard it said that lice could jump from one child’s head to another, and that they were infesting kids from classroom desks and rugs. None of this was true, he said; in general, contagion happens only through direct head-to-head contact—and a louse that falls off a kid at school will soon dry out, infesting no one else. (Mumcuoglu told me that parents needn’t worry about lice-infested furniture or bedding either.)

Even worse, for many schools and lots of parents, the mere presence of a nit, or even a nitlike object in a child’s hair, was prima facie evidence of a dangerous contagion. But a louse’s empty egg sac can remain adhered to growing hair for months, like a shell casing on a dormant battlefield. This is one reason why studies find little benefit from screening kids in class: False positives abound. In 2006, Mumcuoglu estimated that two-thirds of all lice treatments in the U.S. are given to children who don’t have an active infestation. And because many schools send kids home if they’re suspected of having lice, those false positives may add up to as many as 24 million lost days of school. (No one has tallied the time and tears wasted on unnecessary pillow quarantines and the bagging of stuffed animals.)

If Altschuler and her group hoped to raise awareness and concern, then Pollack sought, in part, to tone things down. The two of them had worked together in the ’90s, but inevitably they had a falling out. “Early on, I saw him as a wonderful ally and a helpful person to the cause,” Altschuler told me, “but then he started trying to become us, in his own way, with information that was inconsistent with ours.”

In particular, Pollack and the other head-lice moderates took aim at the unforgiving “no nit” policies in schools, which might exclude a child from the classroom on the basis of a single empty egg sac, and they were winning some important hearts and minds. In 2002, when the American Academy of Pediatrics put out its first clinical report on treating head lice, its authors called no-nit policies “detrimental” and “a response to infestations that is out of proportion to their medical significance.” Two years later, the nation’s school nurses—who have been focused on the scourge of head lice since their profession’s founding in 1902—followed suit. Eventually, some school systems would ease up too. In New York City, where I live, the public schools that serve a million students had done away with no-nit policies by 2008.

These were salutary changes, to be sure. Fewer kids would be ejected from the classroom, and presumably some degree of lice-related learning loss was curbed. But the new guidelines did not eliminate the broader set of problems that the parasites create for children and their families. Even though some schools now tolerate nits on children’s heads during the school day, they still communicate aggressively with parents on lice-related matters, encouraging frequent checks at home and better-safe-than-sorry interventions.

No one ever articulates the rationale for treating the mere possibility of lice with more concern than a cold or even COVID. (No one ever gets a letter home from school saying A case of rhinovirus has been detected in your classroom.) But the thinking surely boils down to this: It’s icky to imagine that your child’s hair—and yours!—might at this very moment be aswarm with bloodsucking bugs. Once the lice have gotten to your head, and in your head, no amount of “Just calm down” can make them go away.


If the fear of lice can be recalcitrant, the lice themselves are even more defiant. “It’s an insect that is abnormally subjected to stresses on its survival,” Burgess, the entomologist who tests insect-control products, told me. A louse’s clawlike feet hold fast in both a shower and a swimming pool. Its physiology weathers perfumes and soaps and the bristles on our hairbrushes. And given time enough, its genome may adapt to shrug off almost any poison we apply.

You don’t need to hire experts to dislodge an infestation—humans have managed to delouse themselves since deep antiquity—but the process takes some work, along with proper information. I’m sorry to say that the latter isn’t always near at hand. The New York City schools, for instance, suggest that parents give their kids chemical treatments, naming two kinds in particular—permethrins and pyrethrins—that insect evolution rendered obsolete long ago. America’s head-lice problem will never improve, Mumcuoglu told me with some annoyance in his voice, so long as our institutions insist on making this mistake, particularly about permethrin. (In an emailed statement, the New York City Department of Health said that over-the-counter, permethrin-based products are “an accessible starting point for many families,” but other products may be necessary if “there is treatment failure or suspected resistance.”)

In fact, we do have many other ways of getting rid of lice, even so-called drug-resistant superlice. Ivermectin may not be an anti-cancer, anti-COVID wonder drug, but it’s miraculous at treating parasites: A topical ivermectin cream can likely cure your child of an infestation, and may be purchased over the counter. Ivermectin pills—which could be even more effective—can be prescribed off-label. (Dawn Nolt, the lead author of the American Academy of Pediatrics’ head-lice report, told me that ivermectin may be upgraded to a first-line treatment in the next update to the guidelines.) I happen to be partial to dimethicone, a lubricant that will coat a louse’s body and seal up the holes it uses to get rid of excess water. Since dimethicone’s effect on lice is physical instead of chemical, it may be less likely to provoke resistance over the long term. But Burgess, who first identified dimethicone as a lice-killer, warned me that he’s seeing signs of its waning potency. (Perhaps the louse’s body has been changing shape, he said, and its excretion holes are now harder to plug up.)

[Read: How ivermectin became right-wing aspirin]

And then, of course, there is the fine-toothed comb—a medical device that seems to have been in use by human populations for at least 5,000 years. (Archaeologists have found lice on human mummies, and lice combs in mummies’ tombs.) Combing, when performed with diligence, can sometimes be effective on its own, even if it’s also highly, highly inconvenient. Proper combing technique involves sectioning out and clipping up the hair in strips, then combing out each section repeatedly while inspecting for lice and nits that may be the size of sesame seeds. After that, one might need to repeat the entire procedure as soon as two or three days later. Strangely, the AAP guidelines say this process might be beneficial, head lice notwithstanding, in the sense that it allows “a caregiver and child or adolescent to have some close, extended time together.” Extended? Yes. Close? Please be serious. I asked Nolt why the AAP was spreading this absurd misinformation. “We were trying to have a silver lining,” she said.

But there is no silver lining, I’m afraid. In the end, the lice wars have only brought us back to where we started, and also where we’ve always been: worried, inconvenienced, and confused. Pollack says he’s proud of what his 40 years of advocacy accomplished—“an awful lot of kids stayed in school, rather than being sent home,” he said. But he also knows that certain head-lice myths have never gone away, and maybe never will.

When I spoke with Altschuler, she lamented the idea, sometimes floated by today’s head-lice experts, that head lice aren’t so important. “They are important for the people who have them,” she said. This, at least, is a scientific fact.

When he was interviewed onstage at the Conservative Political Action Conference on Saturday, Health and Human Services Secretary Robert F. Kennedy Jr. was asked a question unlikely to be on anyone’s mind in the midst of upheaval in the department he oversees and a conflict in the Middle East: “Who’s stronger—you or Secretary of War Pete Hegseth?”

The exchange was emblematic of the role that Kennedy and other HHS officials played during the four-day conference. As some MAGA attendees grumbled over the war in Iran, they were met with a whole lot of MAHA. Kennedy went after Froot Loops and bemoaned how Americans don’t know how to cook anymore. Mehmet Oz, the Centers for Medicare & Medicaid Services chief, warned about hospice fraud in California. And Jay Bhattacharya, the director of the National Institutes of Health and the acting director (of a sort) of the CDC, explained the value of repurposing already-approved drugs for new diseases. “This sounds geeky,” he told the crowd, “but it is really, really important.”

Perhaps the administration thought that leaning into health would distract from what’s going on in Iran—a sore subject for Republicans who want President Trump to focus on domestic issues. Kennedy did say, unprompted, that he believed that his war-averse uncle and his father would have approved of the military action. But he also was the most prominent member of the Trump administration to take the stage at CPAC. Notably absent was anyone named Trump or any official involved in the decision to bomb Iran.

If that was indeed the administration’s strategy, it didn’t seem to work among MAGA world’s staunchest opponents of Trump’s recent military action. Although a poll of this year’s CPAC attendees found that 89 percent approve of the administration’s actions in Iran, others I spoke with worried about the specter of another “forever war” in the Middle East. As Madeline Elizabeth, a Republican strategist who attended CPAC, told me, “I think that the MAHA movement is almost the only thing that’s ‘America First’ about this administration.”

As he has in his other recent public appearances, Kennedy mostly stuck to his talking points. He touted what he considers to be his wins at HHS—notably, the flipping of the food pyramid to emphasize protein consumption—and insisted that the president was “on my side on virtually every issue” when Kennedy decided to endorse Trump in 2024. But Kennedy didn’t mention his remaking of the CDC’s vaccine-advisory committee or the changes to the recommended childhood-immunization schedule, which are easily among the most consequential policies of his first year in office. Perhaps he avoided doing so because, as The Washington Post has reported, the White House has instructed him to stop taking action on vaccines for fear of Republicans losing the midterms. (The HHS spokesperson Andrew Nixon told me, “We remain focused on the priorities Americans consistently say matter most to them, including tackling chronic disease, improving nutrition and food quality, and lowering the cost of care and prescription drugs.” The White House did not respond to a request for comment.)

The closest Kennedy came to any talk of immunization was when he mentioned that, growing up, he didn’t know any kids who had autism—a seemingly veiled reference to his long-standing belief that vaccines have contributed to the rise in autism since the 1990s. That rise, according to experts, is largely due to better surveillance and broader diagnostic criteria. When Bhattacharya did talk about vaccines—to praise the shingles shot and to tease research on whether it might reduce the risk of Alzheimer’s—the crowd was unmoved. Bhattacharya’s only real applause line came when he said, “It’s no longer Tony Fauci’s NIH.”

[Read: RFK Jr. is losing his grip on the CDC]

Most of the CPAC-goers I spoke with told me they loved Kennedy. Usually in those words: I love him. They weren’t always sure about the specifics of his agenda, but they liked that he was in favor of being healthy. A CPAC attendee named Michael Smith, who was promoting the posting of the Ten Commandments in public schools and dressed as Moses—complete with a staff and a stick-on beard—told me that Kennedy was “getting us back to the Garden of Eden diet.” Several people, including a woman who said she had lost 50 pounds by eating nothing but meat, told me their personal health stories. (Kennedy recently acknowledged that he’s on the carnivore diet.) Everyone, it seemed, had read Kennedy’s best seller, The Real Anthony Fauci, and many volunteered to me that they had not gotten the COVID vaccine.

The Kennedy adoration created an odd sense of tension at the meeting. It’s true that some GOP strategists have argued that the MAHA coalition is the key to Republicans winning the midterms later this year. For the most part, they’re talking about health-conscious voters who might be persuaded to back Republican candidates. Perhaps it should be no surprise that, at CPAC, the fans are MAGA first, with MAHA as a kind of bonus.

But at the same time, the MAGA loyalties clearly supersede the MAHA hype. It was striking that no one seemed to be there solely, or even primarily, to support MAHA. At one point during Kennedy’s fireside chat, the interviewer, Mercedes Schlapp, asked attendees whether they were “MAHA moms.” In a crowd of several hundred, maybe a dozen hands shot up. Scanning the crowd, I saw no shortage of Trump-themed apparel, but no one wearing MAHA T-shirts or hats. Talking about MAHA priorities “doesn’t electrify anybody, and quite honestly, it’s not the conversation, like, broadly, that’s being had,” Vish Burra, a Republican strategist and MAGA provocateur, told me. (Last fall, Burra was fired from his job as a producer for One America News Network after he posted an anti-Semitic AI-generated video on his personal X account. He later deleted the post.)

[Read: The meme-washing of RFK Jr.]

Kennedy, who recently underwent surgery for a rotator-cuff injury, ended up telling Schlapp that Hegseth might have the edge in a test of strength: “He’s got a couple of pounds on me.” But even if Hegseth can bench more than the HHS secretary, at the country’s best-known conservative gathering, Kennedy was the one the Trump administration seems to have tasked with lifting up its disappointed fans.

Gurpreet Dhaliwal sat onstage in a hotel ballroom in Minneapolis. The gray curtains behind him were illuminated by bright blue lights, giving the slightest hint of performance at an otherwise typical medical conference. The presentation was among the most anticipated at the Society to Improve Diagnosis in Medicine’s 2022 meeting. The attendees were there to watch a kind of showcase: a complex diagnosis in action.

Dhaliwal, a professor of medicine at UC San Francisco, was given the details of a patient he had never seen before. As another physician slowly revealed pieces of the case, Dhaliwal narrated his thinking out loud: why he was considering one possibility and rejecting another, and what each new clue revealed for him. Eventually, he decided that the patient was likely suffering from a dangerous buildup of pressure in her abdomen. Left untreated, she could experience organ failure. It was the correct diagnosis, and the audience responded with applause.

Dhaliwal is regarded as one of the country’s most gifted diagnosticians. Colleagues have praised not only his command of physiology but also his ability to make his reasoning legible—to turn clinical uncertainty into something teachable. “To observe him at work is like watching Steven Spielberg tackle a script or Rory McIlroy a golf course,” a New York Times reporter wrote in 2012.

“I appreciate the designation but sort of reject it, only because of my own philosophical stance, which is that it’s very hard to master the diagnostic process,” Dhaliwal told me when I talked with him for my book about diagnosis. He considers himself a student of diagnosis, committed to getting better. “To me, the concept of the master diagnostician is that you’re never good enough.”

That belief puts Dhaliwal on one side of a core question of medicine: Are some doctors inherently better diagnosticians than others, or is diagnostic excellence a skill that any clinician can achieve? Doctors usually get it right—some estimates suggest about 90 percent of the time. But with roughly 1 billion physician-office visits each year in America, even a low error rate can still affect a large number of people: If even a few percent of those visits end in a diagnostic error, that adds up to tens of millions of mistakes a year. A 2023 study estimated that 371,000 people die each year and 424,000 are disabled following a misdiagnosis.

In 2015, the National Academies of Sciences, Engineering, and Medicine published a seminal report on diagnostic error with a startling finding: Most people will experience at least one diagnostic error (a delayed, wrong, or missed diagnosis) in their lifetime, “sometimes with devastating consequences.” That report prompted a small but vocal group of physicians and other health providers to look inward. They argue that the number of diagnostic errors is unacceptably high and must be brought down. Dhaliwal has been part of the movement to figure out how.


Some research suggests that many, if not most, diagnostic errors arise from failures in thinking—cognitive bias, premature closure, insufficient reflection. Accordingly, some researchers frame diagnostic error as largely a problem of clinical judgment: the ability to reason through uncertainty and weigh competing explanations in order to reach the right diagnosis and make decisions about care. “Regrettably, how to think in medicine has been a much-neglected area for medical educators, who stalled somewhere in the Middle Ages, or a century or two earlier,” Pat Croskerry, a retired professor of emergency medicine at Dalhousie University in Canada who’s known for his work on cognitive errors in diagnosis, told me.

Dhaliwal credits his abilities to paying close attention to his own thinking. “I do think you can train yourself to be a better diagnostician,” he said. Early in his training, he closely observed the physicians he most admired. Some of them had a knack for identifying rare diseases that evaded their peers. Others mastered the diagnosis of common conditions so thoroughly that they could recognize every permutation of pneumonia. Dhaliwal wanted to excel at both.

But when he asked physicians how to become that kind of doctor, their advice was usually the same: See a lot. Read a lot. It felt unsatisfying. Every physician sees patients. Every physician reads. What, he wondered, truly separates an exceptional diagnostician from a competent one?

He hung on to this question, and about two years after finishing residency in 2003, during a yearlong faculty-development course for medical educators, he encountered a session on clinical reasoning—an emerging field at the time. The physician and medical historian Adam Rodman has described clinical reasoning as “the study of the ability for expert physicians to see what others don’t.” Researchers were beginning to investigate what actually happens in doctors’ minds when they make diagnoses: how they organize their knowledge and put it into practice. Dhaliwal quickly recognized this as the quality he had seen in his role models, even though “they didn’t have a term for it, and neither did I.” The idea of clinical reasoning helped clarify the process; the next question was how to get better at it.

Dhaliwal laid out the key steps of a doctor’s reasoning process: collecting data from a patient; synthesizing that information; accessing “files” in the mind, including the details about diseases and how they present; listing possible diagnoses; and choosing one over others. He also began studying the science of expertise and how people—whether Nobel laureates, Olympic swimmers, or mechanics—become exceptional in their field. “They seek out challenges, whereas most of us instinctively try to minimize challenges after we’re competent,” he said.

They also learn from their mistakes. In a 2017 paper, Dhaliwal wrote that ordinary people develop “extraordinary judgment by extracting as much wisdom as possible from their inevitable errors,” a lesson he drew from Philip Tetlock and Dan Gardner’s book, Superforecasting: The Art and Science of Prediction. But medicine doesn’t make that easy for doctors, who may treat a patient once and never see them again. If the patient’s condition worsens, or they receive a different diagnosis later on from someone else, that information may never make its way back to the first doctor. With these ideas in mind, Dhaliwal set out to sharpen his skills. Today, he works in the San Francisco VA Medical Center’s emergency room, where he sees a variety of illnesses and necessarily follows that early advice to see a lot of patients. But, crucially, he also started keeping track of his own cases so that he could follow up on what happened. When he discovers he was wrong, he tries to figure out why. Did he miss something important? Was he exhausted at the end of a long shift? Did he anchor himself to a particular conclusion too quickly?

“I started to get kind of addicted to it,” he said. He explained that the mind wants closure; without knowing the outcome, people tend to assume that things turned out well. His habit of tracking down a patient’s outcome echoes advice delivered more than a century ago by William Osler, one of modern medicine’s founding figures: “Learn to play the game fair, no self-deception, no shrinking from the truth; mercy and consideration for the other man, but none for yourself, upon whom you have to keep an incessant watch.” Diagnostic mastery, Dhaliwal illustrates, is not a mysterious gift bestowed on a talented few. It is the result of examining one’s own thinking and practice without mercy.


But the reasoning that goes into diagnosis may start to look very different. Since his third year of medical school, Dhaliwal has read The New England Journal of Medicine’s Clinicopathological Conference, or CPC. The CPC is a teaching exercise in which doctors are presented with a real patient’s case and asked to reason aloud toward a diagnosis, similar to Dhaliwal’s Minneapolis presentation. Last fall, Dhaliwal participated in a CPC that put him in competition with an AI agent called Dr. CaBot, a medical-education tool developed by researchers at Harvard Medical School.

Both Dhaliwal and Dr. CaBot reached the correct diagnosis and explained their reasoning step by step. They correctly concluded that the patient had a problem in the upper part of his digestive system, which caused a bacterial infection to trigger sepsis, among other complications. Dr. CaBot didn’t identify the cause of the problem, whereas Dhaliwal deduced, correctly, that the man had swallowed a toothpick, which poked through his gut and caused the infection. He had seen that kind of case before.

That Dr. CaBot’s problem-solving came as close as it did to Dhaliwal’s is both promising and disconcerting: It suggests that machines may be able to match the performance of elite diagnosticians. More formal evidence also indicates that large language models may be able to approximate the kind of clinical reasoning expected of physicians. One study published in July 2024 found that when OpenAI’s GPT-4 examined the medical information of 100 patients in an emergency room, the AI was able to diagnose them with 97 percent accuracy, outperforming resident physicians. (OpenAI’s models have advanced since then.) Another study found that ChatGPT scored higher on a clinical-reasoning measure than internal-medicine residents and attending physicians at two academic medical centers. Other studies have been more mixed.

Serious concerns about reliability, sycophancy, and hallucinations remain. But in some ways, what a diagnostician does is not so different from what AI claims to do. Both use enormous amounts of information to recognize patterns in symptoms and diagnoses that tend to appear together. A doctor does this through medical education and personal experience; AI does it by predicting plausible explanations based on statistical patterns it has learned from its training materials.

“This is an electric moment in medicine,” Mark Graber, a physician and co-founder of the nonprofit Community Improving Diagnosis in Medicine, told me. “If you can come up with an AI agent that’s as good as Gurpreet Dhaliwal, that is an amazing accomplishment that will surpass the abilities of 99.9 percent of doctors.”

How medicine embraces any of this is an open question. Perhaps AI will strengthen clinicians’ reasoning and close the gap between the Dhaliwals and everyone else. Or it could become a crutch for clinicians, and lead them to lose skills. A 2025 study found that after just three months of using an AI tool to find precancerous growths during colonoscopies, doctors were less likely to identify the growths on their own.

For his part, Dhaliwal is equanimous. “I think AI is going to transform health care radically. I don’t think it’s going to change doctoring radically,” he said. He believes that AI is likely to perform best at the extremes of diagnosis: the very simple cases (such as a poison-ivy rash) and the very complex ones (rare or novel diseases). In the not-so-distant future, people may be able to get answers to routine medical questions at home—What’s this spot? Is my cough concerning? How’s my blood pressure?—without ever needing to see a physician. That may be entirely appropriate, because attending to these everyday concerns usually does not require sophisticated clinical judgment or nuanced decision making.

AI could also prove valuable in identifying conditions that a physician may never encounter in their career, or in helping diagnose patients who have stumped multiple clinicians. These cases tend to hinge on how encyclopedic a doctor’s knowledge of the medical literature is; AI can recognize obscure patterns across millions of cases and publications, and surface possibilities that may lie outside any single physician’s experience.

“What I think is less likely to change is sort of the muddy middle, which is what I think the vast majority of medical practice is,” Dhaliwal said. Much of medicine involves choosing between possibilities: Does a person have an infection, an allergic reaction, or an autoimmune disease? Is it a psychiatric or medical issue? AI could certainly help parse through the options. But medical judgment goes beyond identifying what’s most likely; it involves deciding what the diagnosis means for a particular patient. Two people diagnosed with the same cancer may desire different futures. One may want the most aggressive treatment available, whereas the other may decline interventions that would trade quality of life for longevity. These are value-laden decisions that, at least for now, still require something irreducibly human to navigate. An LLM can recite treatment options and survival rates, but it cannot share responsibility for the choices that follow.

Relying on AI for certain aspects of diagnosis could help free doctors to focus on those more human parts of the job. In the United States, more than 100 million people don’t have a primary-care provider, and the profession itself is dwindling. “If in some form AI is able to beat us, or help us improve our ability to do clinical reasoning, you don’t have to be the smartest person in the room to be a physician, which I think is better for the community,” Jeffrey Goddard, a medical student at the University of Iowa who uses chatbots in his training, told me. A diagnosis, most simply, is an answer to the question What is making me ill? But it can offer much more than that—reassurance, coherence, and, ultimately, relief. Not all of that can be outsourced.


This essay was adapted from Alexandra Sifferlin’s book, The Elusive Body: Patients, Doctors, and the Diagnosis Crisis, published today.


“What do you mean, you just take the stomach out?” Karyn Paringatai wondered, when doctors first said her stomach had to be surgically removed. Could she still eat? Yes, but differently. What would replace it? Nothing. She would have to live the rest of her life missing a major organ.  

Paringatai was not actually sick, not yet. Her stomach was fine. But her cousin, just a few years older, had recently died of an aggressive stomach cancer at age 33, leaving behind three children. In a video recorded after her diagnosis turned terminal, the cousin told her little kids to be good for their father. “Please don’t be too mean to the lady that he meets,” she added, anticipating how the void left by her death might be filled. But she must have known that this void could not be filled, not ever. The cousin’s own mother had died young of stomach cancer. So had her grandmother. So had her sister.  

To the doctors who saw Paringatai’s cousin in Tauranga, New Zealand, this pattern was hauntingly familiar. Her cancer was an unusual and distinct kind called diffuse gastric cancer, in which cancerous cells percolate undetected through the stomach, forming obvious masses only in advanced stages—usually too late to treat. The doctors had witnessed the same rare cancer run through a large Māori family near Tauranga. In that family, one woman lost six of her siblings to stomach cancer; a boy had died at 14. The family now reached out to Paringatai’s. It’s genetic, they said. You have to get tested.

Paringatai, whose father was also Māori, got tested. And indeed, she carried a mutation in the same gene, known as CDH1, as the other family. This gave her a 70 percent lifetime risk of developing advanced diffuse gastric cancer. Because this form of cancer can metastasize so quickly and unpredictably, the only surefire method of prevention is a complete removal of the stomach, or total gastrectomy. It’s analogous to a preventive mastectomy for breast cancer—but far more physically taxing. A number of women with a CDH1 mutation have actually had both their stomach and breasts removed because this mutation can also confer a 40 percent risk of breast cancer. One of them told me, about her gastrectomy, “If you can do that, you can do anything.”

Paringatai’s surgeon could not answer all of her questions about living without a stomach—her total gastrectomy was the first he would ever perform in a healthy person. But she went through with the procedure in 2010, and she credits it with saving her life. In the operating room, her surgeon made a long incision down her abdomen, cut out the fist-size pouch of her stomach, and stitched her esophagus to her small intestine. She was the first in her family to have her stomach removed prophylactically. Others followed. On a recent trip to visit her father’s family, Paringatai found herself sitting on a porch with her aunties and cousins. Of the eight people there, she realized, only one still had a stomach: her partner. “You’re the odd one out,” she teased.

Mutations in CDH1 seem to be unusually prevalent in Māori families, where they arose multiple times, possibly because they once conferred some evolutionary advantage. But mutated versions of CDH1 have been found around the world too, and thousands of patients have likely now had gastrectomies to head off cancer.

In New Zealand, “we’re coming up to nearly 30 years of people living with no stomachs,” says Paringatai, who is now a Māori-studies professor at the University of Otago. For the past several years, she has been documenting the experiences of Māori with CDH1. That people can live this long without a stomach is a testament to the adaptability and resilience of the human body. That doctors resort to such radical measures exposes the limits of what modern medicine can offer.

The first Māori to undergo prophylactic gastrectomies were the family that warned Paringatai’s about the cancer gene. They knew all about the gene because they had helped find it. Back in 1994, Maybelle McLeod contacted a genetics lab at the University of Otago about the premature cancer deaths stalking her relatives. Among themselves, she told me, “nobody even talked about it.” The family believed they lived under a curse for letting their land be sold for a quarry. McLeod grew up listening to the quarry’s warning sirens, learning to take cover indoors before the blasting began. She watched as the hill near her home was stripped bare.

McLeod eventually moved away, became a nurse, and learned of the then-nascent field of cancer genetics. This, she thought, explained the so-called curse. The geneticist she contacted, Parry Guilford, agreed to take the case. But her family still had to be persuaded to trust this pakeha, this white man, with their DNA. Over a series of meetings—attended by as many as 100 members—Guilford explained that his motive was the same as McLeod’s: to find the cause of so much death. They ultimately agreed to a contract under which only the family, not Guilford, could directly approach members and gather their DNA samples. From there, the gene mapping went quickly, and scientists homed in on CDH1. The gene encodes a protein that normally orients and aligns cells in the stomach; without it, the cells become lopsided, rogue, and possibly cancerous. Any parent with a mutation in the gene has a 50 percent chance of passing it on to their children.

This breakthrough meant that a genetic test could now reveal who was at risk of diffuse gastric cancer; the family would no longer have to live in fear of where cancer would strike next. McLeod herself tested negative for the mutation. She was in the clear.

But those who tested positive for the mutation now faced an agonizing new dilemma. Doctors could not guarantee that endoscopies, even annual ones, would reliably catch such an aggressive cancer in time. Total gastrectomies had been performed before, in patients whose stomachs were already riddled with tumors—but never routinely in healthy people who did not have cancer and might never develop cancer at all. So now they had to choose: a 70 percent chance of deadly cancer, or surgery with a 100 percent chance of significant side effects?

Rangi McLeod, who worked alongside Maybelle in urging relatives to join the study, was the first of the family to test positive for the CDH1 mutation. Not long after, doctors found a tumor in his stomach. It’s not all bad news, Guilford recalls Rangi saying. “I can lead my family to the next stage.” His gastrectomy would no longer be strictly prophylactic, but he’d have his stomach out, he’d recover, and he’d show anyone who still feared the surgery that it was safe. Rangi did not recover, though. He fell into a coma after the new connection between his esophagus and intestine grew weak. He died a few weeks later. “The whole project almost fell apart immediately on the spot,” Guilford told me.

In time, the family decided that Rangi would not want them to stop—he would not want their children and grandchildren to continue to succumb to cancer. The family found a more experienced surgeon at a bigger hospital in Auckland; the next 10 gastrectomies were a success. Since then, stomach cancers in the family have plummeted. And the hill where the quarry once operated is green again. The land, it turned out, had not been sold but taken by the government, and was returned to the family. In any case, the curse, some in the family said, had finally lifted.

A successful gastrectomy looks like this: For at least the first six months, your life revolves completely around food, and not in a fun way. You eat tiny portions 10 times a day. You have to chew, chew, chew food like gum to make up for the lack of a stomach. Your digestive system spews from both ends. Your blood sugar rises and crashes unpredictably; you faint at the worst times. You are tired all the time. You lose a lot of weight, which might feel welcome at first but then feels scary. You are unable to work for a couple of months—longer if your job is physical or your recovery is difficult. About one in 10 patients has complications serious enough to warrant hospitalization, according to studies done in U.S. hospitals.

Gradually, the upper part of the intestine adapts into a sort of stomach. You start eating larger portions, less often. You gain weight. You still need regular shots of B12, which you cannot absorb without a stomach. But several people more than 10 years out from surgery told me they eat almost normally—with only small, lingering quirks. Plain water, for example, can be strangely hard to drink, possibly because of its surface tension, while flavored water goes down fine. Young women who have had their stomach out routinely go on to get pregnant and have healthy children.

Still, those who ultimately recovered well knew of family members who continue to struggle years later with nausea or reflux or fatigue. One of Paringatai’s cousins left her teaching position because she could no longer physically keep up in the classroom. Another cousin, Isaia Piho, was a firefighter. He, too, switched to a less demanding job afterward. Isaia and his younger brother, James, told me they had watched their mother die of stomach cancer. They are fathers themselves now, and they did not want their children to experience the same.

But not everyone who weighs the odds decides on surgery. Guilford knows another guy, also a firefighter, who decided to keep his stomach. “I run into burning buildings every second day,” he told Guilford. “I’m good with risk.” He’s chosen to have regular endoscopies instead. Still others might prefer not to know that they carry a CDH1 mutation. In the McLeod family, a young man in his 30s who went untested recently died of cancer, Guilford said. Diffuse gastric cancer has remained stubbornly difficult to treat over the three decades he’s been studying it. At the stage when it can be easily detected, the survival rate is just 20 percent.

Doctors are still trying to fully grasp the long-term consequences of losing a stomach, which makes it more difficult for the body to take up nutrients. “We’re learning that gastrectomy severely impacts bone health in the long run,” Daniel Coit, a surgical oncologist who recently retired from Memorial Sloan Kettering Cancer Center, told me. The loss of stomach acid may make calcium harder to absorb. As these patients age and continue to lose bone density, they will be particularly vulnerable to fractures.

Coit, who performed numerous prophylactic gastrectomies in his career, thinks the social and psychological consequences of losing a stomach deserve more attention too. He treated one family in which multiple people died prematurely, by suicide or from alcohol use, after their surgeries. Did the procedure lead directly to their struggles, or unmask a preexisting predisposition? His example is only anecdotal, he said, but the issue should be studied.

This is anecdotal as well, but alcohol also came up repeatedly in conversations I had with people who have been through gastrectomies—without me asking. Either they themselves started drinking too much, or they had family members who started drinking too much.

James Piho told me he drank to numb his fear of cancer and then he drank to numb his depression after his surgery, when he was unable to provide for his daughter. James actually works in a drug-and-alcohol rehab center, and his experience had him wondering about a link between total gastrectomy and alcohol. Could it be psychological or biological, or both? For people who find drinking plain water uncomfortable, alcohol seems to literally go down easier. And bariatric surgery, in which the stomach is shrunk but not entirely removed, is correlated with a well-documented increase in alcohol-use disorder. Losing even part of a stomach may make patients’ bodies more sensitive to alcohol—two drinks, according to one study, might feel like four. Haupiua Steventon, a member of McLeod’s family who had her stomach removed at 18, got a job at a bar after the long recovery derailed her university studies. “I fell into alcoholism very easily,” she told me. She eventually found her footing and has two kids now, but looking back, she wishes she had been warned about alcohol post-surgery. She wouldn’t have chosen to work in a bar.

In interviews that Paringatai conducted with different generations of McLeod’s family, she observed that some of the younger members struggled more, mentally, post-surgery. The older generation, she told me, had witnessed the deaths of so many “mothers, sisters, fathers, first cousins, children, grandchildren.” Today, young people in the family have not experienced those tragedies firsthand—a sign of progress that nevertheless made the sacrifice of a stomach harder to bear.

“I think we’ll look back one day and we’ll go, ‘Man, I can’t believe that we did such draconian surgery on those people,’” Guilford said. His lab continues to investigate diffuse gastric cancer, with the hope of developing a treatment or drug that makes a total gastrectomy obsolete. In recent years, doctors have become more open to patients choosing surveillance over surgery, especially as it’s become clear that CDH1 carriers with no family history are at lower risk for diffuse gastric cancer—perhaps a lifetime risk of about 10 to 40 percent rather than 70 percent. But even Coit, who is among the more skeptical of surgery, recommends prophylactic gastrectomies for people with a strong family history. Their risk is high. Surgery is the best solution we’ve got. But of course, Guilford said, “people would love to keep their stomachs.”  

Paringatai found that some of the Māori took their stomachs home, rather than allow the hospital to discard the organ as medical waste. In Māori culture, she explained, the body is sacrosanct. They wanted to honor the stomach, thanking it for its service. Several buried theirs on their family land. In a way, this is how they can, for now, keep their stomachs.