Updated at 11:11 a.m. ET on January 13, 2025

In 1900, a former schoolteacher named Carrie Nation walked into a bar in Kiowa, Kansas, proclaimed, “Men, I have come to save you from a drunkard’s fate,” and proceeded to hurl bricks and stones at bottles of liquor. The men, interested less in spiritual salvation and more in physical safety, fled to a corner. Nation destroyed three saloons that day, turning to a billiard ball when she ran out of the bricks and rocks she called “smashers.” She eventually—and famously—switched to a hatchet, using it across years of attacks on what she considered to be the cause of society’s moral failings. She referred to this period of her life as one of “hatchetation.”

By comparison, U.S. Surgeon General Vivek Murthy, an internist of mild disposition perhaps best known for raising alarm about the “loneliness epidemic,” has taken a gentler approach to the obstinate challenge of alcohol. His recent call to add cancer warnings to alcoholic products was made without violence or yelling. But the recommendation, if followed, would be the most significant action taken against alcohol since at least the 1980s, when new laws set the national drinking age at 21 and mandated warning labels concerning, among other things, alcohol’s pregnancy-related risks. Murthy’s proposal is part of ever-grimmer messaging from public-health officials about even moderate drinking, and comes during a notable shift in cultural attitudes toward alcohol, especially among “sober-curious” young people. In 2020, my colleague Olga Khazan asked why no one seemed interested in creating a modern temperance movement. Now that movement has arrived with a distinctly 21st-century twist. Carrie Nation was trying to transform the soul of her country. Today’s temperance is focused on the transformation of self.

The movement of the 19th and early 20th centuries—which eventually brought about Prohibition—went hand in hand with broad religious revivalism and the campaign for women’s rights. It considered alcohol to be unhealthy for women, families, and the general state of humanity. The depth of the problem posed by alcohol in pre-Prohibition America is hard to fathom: In 1830, Americans drank three times the amount of spirits that we do today, the equivalent of 90 bottles of 80-proof booze a year. As distilled liquor became widely available, men were wasting most of their wages on alcohol and staying out all night at saloons, and what we now call domestic abuse was rampant, the food historian Sarah Wassberg Johnson told me. Members of the Woman’s Christian Temperance Union saw themselves as a progressive group helping the disadvantaged. “They were protecting the home, protecting the family, and protecting the nation by getting rid of alcohol,” Dan Malleck, a health-sciences professor at Brock University, told me. In the latter half of the 19th century, young people signaled their moral virtue by taking temperance pledges.

Today’s sober-curious, by contrast, post on Instagram about how Dry January has reduced their inflammation, sharpened their jawline, and improved their sleep score. The sanctity of the home, or the overall moral health of society—not to mention the 37 Americans who die in drunk-driving crashes every day—appears to be less of a concern. (To be fair, this focus on health is partially a response to research on moderate alcohol consumption’s detrimental effects on heart health, cancer risk, and lifespan.) In a 2020 Gallup poll, 86 percent of respondents said that drinking alcohol was morally acceptable, an increase from 78 percent in 2018. Meanwhile, more than half of young adults surveyed in 2023 expressed concerns about the health risks of moderate drinking.

[From the July/August 2021 issue: America has a drinking problem]

Colleen Myles, a professor at Texas State University who studies how alcoholic drinks change cultures, told me that such responses don’t mean that the national conversation about alcohol has abandoned morality—simply that Americans’ ethical center of gravity has drifted. She considers modern teetotaling to be steered by a great moral project of our age: self-optimization. In her book Sober Curious, Ruby Warrington wrote that lower alcohol intake “is the next logical step in the wellness revolution.” Myles said that choosing not to drink in an alcohol-soaked culture is seen as an act of authenticity or self-care; social change, but enacted through the individual. In 2019, a nonalcoholic-spirit producer, who calls her product a “euphoric” instead of a mocktail, almost echoed Carrie Nation when she told The New York Times, “Alcohol is a women’s lib issue, an LGBTTQQIAAP issue, a race issue.” But her vision of temperance was much less socially minded: Sober-curiosity, she said, was about a person’s “freedom to choose.” One can hardly imagine Congress or a radical activist like Nation attempting to restrict that freedom by outlawing the sale of espresso martinis. The proposed warning label, however, with its nod to individual health (and absence of radical social action), is more fitting for our age of wellness. It won’t cure society of all its ills, but it at least has a shot at persuading some people to tone down their drinking.

The original temperance movement’s end result—Prohibition—was more ambitious, and took place at the societal level. Prohibition didn’t make the personal act of drinking illegal, but rather the manufacture, sale, and transport of alcohol. After Congress proposed the Eighteenth Amendment in 1917, it allowed seven years for ratification; thanks to widespread enthusiasm, the states ratified it in only 13 months. Ratification of the amendment and passage of the Volstead Act, the law that enforced it, both came in 1919, and Prohibition officially kicked off in 1920.

In this century, “I don’t think we’re going to have Prohibition again,” Myles said, not least because the sober-curious are not advocating for policy change at this scale. Instead, neo-temperates are shifting social and, yes, moral norms about alcohol by emphasizing its effects on health. They also, crucially, are creating markets for nonalcoholic drinks and spaces. The original temperance movement similarly popularized a number of new beverages, such as sodas and fruit juices. But unlike the modern version, it directly attacked the alcoholic-beverage industry. In 1916, the United States was home to 1,300 breweries that made full-strength beer; 10 years later, they were all gone.

[From the April 1921 issue: Relative values in Prohibition]

Alcohol consumption, and the deaths associated with it, decreased significantly during Prohibition. But many people continued to buy alcohol illegally or make it themselves. Part of the reason the temperance movement didn’t usher in utopia, Malleck said, is that it failed to recognize how drunkenness could be fueled by still other societal problems, such as low wages or 12-hour workdays in factories where you were liable to lose a limb or have to urinate in a corner. These issues persisted even when alcohol was outlawed. In 1933, during the Great Depression, legislators decided the country needed the economic boost from alcohol sales and repealed Prohibition. President Herbert Hoover called Prohibition a noble experiment, but many historians consider it a failure. Today, about 60 percent of Americans drink, and that figure has held steady for more than four decades.

And yet, over the past several years, signs have appeared that fewer young people are drinking. If bricks and hatchets couldn’t convince Americans to transform their relationship to alcohol, perhaps the promise of finding your best self through phony negronis and nonalcoholic IPAs will.


This article originally misstated Colleen Myles’s title and the name of the Woman’s Christian Temperance Union.

Three years ago, when it was trickling into the United States, the bird-flu virus that recently killed a man in Louisiana was, to most Americans, an obscure and distant threat. Now it has spread through all 50 states, affecting more than 100 million birds, most of them domestic poultry; nearly 1,000 herds of dairy cattle have been confirmed to be harboring the virus too. At least 66 Americans, most of them working in close contact with cows, have fallen sick. A full-blown H5N1 pandemic is not guaranteed—the CDC judges the risk of one developing to be “moderate.” But this virus is fundamentally more difficult to manage than even a few months ago and is now poised to become a persistent danger to people.

That didn’t have to be the reality for the United States. “The experiment of whether H5 can ever be successful in human populations is happening before our eyes,” Seema Lakdawala, a flu virologist at Emory University, told me. “And we are doing nothing to stop it.” The story of bird flu in this country could have been shorter. It could have involved far fewer cows. The U.S. has just chosen not to write it that way.

[Read: America’s infectious-disease barometer is off]

The USDA and the CDC have doggedly defended their response to H5N1, arguing that their interventions have been appropriately aggressive and timely. And governments, of course, don’t have complete control over outbreaks. But at least compared with the most prominent infectious threat in recent memory, H5N1 should have been a manageable foe, experts outside of federal agencies told me. When SARS-CoV-2, the virus that sparked the coronavirus pandemic, first spilled into humans, almost nothing stood in its way. It was a brand-new pathogen, entering a population with no preexisting immunity, public awareness, tests, antivirals, or vaccines to fight it.

H5N1, meanwhile, is a flu virus that scientists have been studying since the 1990s, when it was first detected in Chinese fowl. It has spent decades triggering sporadic outbreaks in people. Researchers have tracked its movements in the wild and studied it in the lab; governments have stockpiled vaccines against it and have effective antivirals ready. And although this virus has proved itself capable of infiltrating us, and has continued to evolve, “this virus is still very much a bird virus,” Richard Webby, the director of the World Health Organization Collaborating Centre for Studies on the Ecology of Influenza in Animals and Birds, told me. It does not yet seem capable of moving efficiently between people, and may never develop the ability to. Most human cases in the United States have been linked to a clear animal source, and have not turned severe.

The U.S., in other words, might have routed the virus early on. Instead, agencies tasked with responding to outbreaks and upholding animal and human health held back on mitigation tactics—testing, surveillance, protective equipment, quarantines of potentially infected animals—from the very start. “We are underutilizing the tools available to us,” Carol Cardona, an avian-influenza expert at the University of Minnesota, told me. As the virus ripped through wild-animal populations, devastated the nation’s poultry, spilled into livestock, started infecting farmworkers, and accumulated mutations that signaled better adaptation to mammals, the country largely sat back and watched.

When I asked experts if the outbreak had a clear inflection point—a moment at which it was crucial for U.S. leaders to more concertedly intervene—nearly all of them pointed to the late winter or early spring of last year, when farmers and researchers first confirmed that H5N1 had breached the country’s cattle, in the Texas panhandle. This marked a tipping point. The jump into cattle, most likely from wild birds, is thought to have happened only once. It may have been impossible to prevent. But once a pathogen is in domestic animals, Lakdawala told me, “we as humans have a lot of control.” Officials could have immediately halted cow transport, and organized a careful and concerted cull of infected herds. Perhaps the virus “would never have spread past Texas” and neighboring regions, Lakdawala told me. Dozens of humans might not have been infected.

[Read: America’s bird-flu luck has officially run out]

Those sorts of interventions would have at least bought more of the nation time to provision farmworkers with information and protection, and perhaps develop a plan to strategically deploy vaccines. Government officials could also have purchased animals from the private sector to study how the virus was spreading, Cardona told me. “We could have figured it out,” she said. “By April, by May, we would have known how to control it.” This sliver of opportunity was narrow but clear, Sam Scarpino, an infectious-disease modeler and flu researcher at Northeastern University, whose team has been closely tracking a timeline of the American outbreak, told me. In hindsight, “realistically, that was probably our window,” he said. “We were just too slow.”

The virus, by contrast, picked up speed. By April, a human case had been identified in Texas; by the end of June, H5N1 had infected herds in at least a dozen states and more than 100 dairy farms. Now, less than 10 months after the USDA first announced the dairy outbreak, the number of herds affected is verging on 1,000—and those are just the ones that officials know about.

The USDA has repeatedly disputed that its response has been inadequate, pointing out to The Atlantic and other publications that it quickly initiated studies this past spring to monitor the virus’s movements through dairy herds. “It is patently false, and a significant discredit to the many scientists involved in this work, to say that USDA was slow to respond,” Eric Deeble, the USDA’s deputy undersecretary for marketing and regulatory programs, wrote in an email.

And the agency’s task was not an easy one: Cows had never been a known source of H5N1, and dairy farmers had never had to manage a disease like this. The best mitigation tactics were also commercially formidable. The most efficient ways to milk cows invariably send a plume of milk droplets into the air—and sanitizing equipment is cumbersome. Plus, “the dairy industry has been built around movement” of herds, a surefire way to move infections around too, Cardona told me. The dairy-worker population also includes many undocumented workers who have little incentive to disclose their infections, especially to government officials, or heed their advice. At the start of the outbreak, especially, “there was a dearth of trust,” Nirav Shah, the principal deputy director of the CDC, told me. “You don’t cure that overnight.” Even as, from the CDC’s perspective, that situation has improved, such attitudes have continued to impede efforts to deploy protective equipment on farms and catch infections, Shah acknowledged.

Last month, the USDA did announce a new plan to combat H5N1, which requires farms nationwide to comply with requests for milk testing. But Lakdawala and others still criticized the strategy as too little, too late. Although the USDA has called for farms with infected herds to enhance biosecurity, implementation is left up to the states. And even now, testing of individual cows is largely left up to the discretion of farmers. That leaves too few animals tested, Lakdawala said, and cloaks the virus’s true reach.

The USDA’s plan also aims to eliminate the virus from the nation’s dairy herds—a tall order, when no one knows exactly how many cattle have been affected or even how, exactly, the virus is moving among its hosts. “How do you get rid of something like this that’s now so widespread?” Webby told me. Eliminating the virus from cattle may no longer actually be an option. The virus also shows no signs of exiting bird populations—which have historically been responsible for the more severe cases of avian flu that have been detected among humans, including the lethal Louisiana case. With birds and cows both harboring the pathogen, “we’re really fighting a two-fronted battle,” Cardona told me.

Most of the experts I spoke with also expressed frustration that the CDC is still not offering farmworkers bird-flu-specific vaccines. When I asked Shah about this policy, he defended his agency’s focus on protective gear and antivirals, noting that worker safety remains “top of mind.” In the absence of consistently severe disease and evidence of person-to-person transmission, he told me, “it’s far from clear that vaccines are the right tool for the job.”

[Read: How much worse would a bird-flu pandemic be?]

With flu season well under way, getting farmworkers any flu vaccine is one of the most essential measures the country has to limit H5N1’s threat. The spread of seasonal flu will only complicate health officials’ ability to detect new H5N1 infections. And each time bird flu infects a person who’s already harboring a seasonal flu, the viruses will have the opportunity to swap genetic material, potentially speeding H5N1’s adaptation to us. Aubree Gordon, a flu epidemiologist at the University of Michigan, told me that’s her biggest worry now. Already, Lakdawala worries that some human-to-human transmission may be happening; the United States just hasn’t implemented the infrastructure to know. If and when testing finally confirms it, she told me, “I’m not going to be surprised.”

In the world of nutrition, few words are more contentious than healthy. Experts and influencers alike are perpetually warring over whether fats are dangerous for the heart, whether carbs are good or bad for your waistline, and how much protein a person truly needs. But if identifying healthy food is not always straightforward, actually eating it is an even more monumental feat.

As a reporter covering food and nutrition, I know to limit my salt and sugar consumption. But I still struggle to do it. The short-term euphoria from snacking on Double Stuf Oreos is hard to forgo in favor of the long-term benefit of losing a few pounds. Surveys show that Americans want to eat healthier, but the fact that more than 70 percent of U.S. adults are overweight underscores just how many of us fail.

The challenge of improving the country’s diet was put on stark display late last month, when the FDA released its new guidelines for which foods can be labeled as healthy. The roughly 300-page rule—the government’s first update to its definition of healthy in three decades—lays out in granular detail what does and doesn’t count as healthy. The action could make it much easier to walk down a grocery-store aisle and pick products that are good for you based on the label alone: A cup of yogurt laced with lots of sugar can no longer be branded as “healthy.” Yet the FDA estimates that zero to 0.4 percent of people trying to follow the government’s dietary guidelines will use the new definition “to make meaningful, long-lasting food purchasing decisions.” In other words, virtually no one.

All of this is a bad omen for Donald Trump’s pick to lead the Department of Health and Human Services. As part of his agenda to “make America healthy again,” Robert F. Kennedy Jr. has pledged to improve the country’s eating habits by overthrowing a public-health establishment that he sees as ineffective. He has promised mass firings at the FDA, specifically calling out its food regulators. Indeed, for decades, the agency’s efforts to encourage better eating habits have largely focused on giving consumers more information about the foods they are eating. It hasn’t worked. If confirmed, Kennedy may face the same problem as many of his predecessors: It’s maddeningly hard to get Americans to eat healthier.

[Read: Everyone agrees Americans aren’t healthy]

Giving consumers more information about what they’re eating might seem like a no-brainer, but when these policies are tested in the real world, they often do not lead to healthier eating habits. Since 2018, chain restaurants have had to add calorie counts to their menus; however, researchers have consistently found that doing so doesn’t have a dramatic effect on what foods people eat. Even more stringent policies, such as a law in Chile that requires food companies to include warnings on unhealthy products, have had only a modest effect on improving a country’s health.

The estimate that up to 0.4 percent of people will change their habits as a consequence of the new guidelines was calculated based on previous academic research quantifying the impacts of food labeling, an FDA spokesperson told me. Still, in spite of the underwhelming prediction, the FDA doesn’t expect the new rule to be for naught. Even a tiny fraction of Americans adds up over time: The agency predicts that enough people will eat healthier to result in societal benefits worth $686 million over the next 20 years.

These modest effects underscore that health concerns aren’t the only priority consumers are weighing when they decide whether to purchase foods. “When people are making food choices,” Eric Finkelstein, a health economist at Duke University’s Global Health Institute, told me, “price and taste and convenience weigh much heavier than health.” When I asked experts about better ways to get Americans to eat healthier, some of them talked vaguely about targeting agribusiness and the subsidies it receives from the government, and others mentioned the idea of taxing unhealthy foods, such as soda. But nearly everyone I spoke with struggled to articulate anything close to a silver bullet for fixing America’s diet issues.

RFK Jr. seems to be caught in the same struggle. Most of his ideas for “making America healthy again” revolve around small subsets of foods that he believes, often without evidence, are causing America’s obesity problems. He has warned, for example, about the unproven risks of seed oils and has claimed that if certain food dyes were removed from the food supply, “we’d lose weight.” Kennedy has also called for cutting the subsidies doled out to corn farmers, who grow the crops that make the high-fructose corn syrup with which many unhealthy foods are laden, and has advocated for getting processed foods out of school meals.

There’s a reason previous health secretaries haven’t opted for the kinds of dramatic measures that Kennedy is advocating for. Some of them would be entirely out of his control. As the head of the HHS, he couldn’t cut crop subsidies; Congress decides how much money goes to farmers. He also couldn’t ban ultra-processed foods in school lunches; that would fall to the secretary of agriculture. And although he could, hypothetically, work with the FDA to ban seed oils, it’s unlikely that he would be able to generate enough legitimate scientific evidence about their harms to prevail in an inevitable legal challenge.

The biggest flaw in Kennedy’s plan is the assumption that he can change people’s eating habits by telling them what is and isn’t healthy, and banning a select few controversial ingredients. Changing those habits will require the government to tackle the underlying reasons Americans are so awful at keeping up with healthy eating. Not everyone suffers from an inability to resist Double Stuf Oreos: A survey from the Cleveland Clinic found that 46 percent of Americans see the cost of healthy food as the biggest barrier to improving their diet, and 23 percent said they lack the time to cook healthy meals.

If Kennedy figures out how to actually get people like me to care enough about healthy eating to resist the indulgent foods that give them pleasure, or if he figures out a way to get cash-strapped families on public assistance to turn down cheap, ready-to-eat foods, he will have made significant inroads into actually making America healthy again. But getting there is going to require a lot more than a catchy slogan and some sound bites.

It took my father nearly 70 years to become a social butterfly. After decades of tinkering with Photoshop on a decrepit Macintosh, he upgraded to an iPad and began uploading collages of photos he took on nighttime walks around London to Flickr and then to Instagram. The likes came rolling in. A photographer from Venezuela applauded his composition. A violinist in Italy struck up a conversation about creativity.  

And then, as quickly as he had made his new friends, he lost them. One night in 2020, he had a seizure. Then he began forgetting things that he’d just been told and sleeping most of the day. When he picked up his iPad again, it was incomprehensible to him. A year or so later, he put an electric kettle on the gas stove. Not long after, he was diagnosed with Alzheimer’s.

An estimated 7 million Americans age 65 and older are currently living with Alzheimer’s; by 2050, that number is expected to rise to nearly 13 million. Millions more have another form of dementia or cognitive decline. These diseases can make simple tasks confusing, language hard to understand, and memory fleeting, none of which is conducive to social connection. And because apps and websites constantly update, they pose a particular challenge for patients who cannot learn or remember, which means that people like my father, who rely heavily on social media to stay in touch, may face an even higher barrier to communication.  

When my father turned on his iPad again about a year after his seizure, he couldn’t find the Photoshop app because the logo had changed. Instagram, which now had Reels and a shopping tab, was unnavigable. Some of his followers from Instagram and Flickr had moved on to a new app—TikTok—that he had no hope of operating. Whenever we speak, he asks me where his former life has disappeared to: “Where are all my photos?” “Why did you delete your profile?” “I wrote a reply to a message; where has it gone?” Of all the losses caused by Alzheimer’s, the one that seems to have brought him the most angst is that of the digital world he had once mastered, and the abilities to create and connect that it had afforded him.

[Read: My dad had dementia. He also had Facebook.]

In online support forums, caretakers of Alzheimer’s and dementia patients describe how their loved ones struggle to navigate the platforms they were once familiar with. One member of the r/dementia subreddit, who requested not to be identified out of respect for her father’s privacy, told me that, about a decade ago, her father had been an avid emailer and used a site called Friends Reunited to recall the past and reconnect with old acquaintances. Then he received his dementia diagnosis after back-to-back strokes; his PC now sits unused. Amy Evans, a 62-year-old in Sacramento, told me that her father, who passed away in May at the age of 92, started behaving erratically online at the onset of Alzheimer’s. He posted on Facebook that he was looking for a sex partner. Then he began responding to scam emails and ordering, among other things, Xanax from India. Evans eventually installed child-protection software on his computer and gave him a GrandPad to connect with family and friends. But he kept forgetting how to use it. Nasrin Chowdhury, a former public-school teacher’s aide who lives in New York City, once used Facebook to communicate daily with family and friends, but now, after a stroke and subsequent Alzheimer’s diagnosis at 55, she will sit for hours tapping the screen with her finger—even if nothing is there, her daughter Eshita Nusrat told me. “I’ll come home from work, and she’ll say she texted me and I never replied, but then I’ll look at her phone and she tried to type it out in YouTube and post it as a video,” Chowdhury’s other daughter, Salowa Jessica, said. Now Chowdhury takes calls with the aid of her family, but she told me that, because she can’t use social media, she feels she has no control of her own life.

Many patients with dementia and related cognitive disorders lose the ability to communicate, regardless of whether they use technology to do it. It’s a vicious cycle, Joel Salinas, a clinical assistant professor of neurology at NYU Grossman School of Medicine, told me, because social disconnect can, in turn, hasten the cognitive degeneration caused by Alzheimer’s and dementia. Social media, by its very nature, is an especially acute challenge for people with dementia. The online world is a largely visual medium with a complex array of workflows, and dementia commonly causes visual processing to be interrupted or delayed. And unlike face-to-face conversation, landlines, or even flip phones, social media is always evolving. Every few months on a given platform, buttons might be changed, icons reconfigured, or new features released. Tech companies say that such changes make the user experience more seamless, but those with short-term memory loss can find the user experience downright impossible.

On the whole, social-media companies have not yet found good solutions for users with dementia, JoAnne Juett, Meta’s enterprise product manager for accessibility, told me. “I would say that we’re tackling more the loss of vision, the loss of hearing, mobility issues,” she said. Design changes that address such disabilities might help many dementia patients who, thanks to their advanced age, have limited mobility. But to accommodate the unique needs of an aging or cognitively disabled user, Juett believes that AI might be crucial. “If, let’s say, Windows 7 is gone, AI could identify my patterns of use, and adapt Windows 11 for me,” she said. Juett also told me her 97-year-old mother now uses Siri to make calls. It allows her to maintain social ties even when she can’t keep track of where the Phone app lives on her iPhone’s screen.

[Read: How people with dementia make sense of the world]

The idea of a voice assistant that could reconnect my father to his online world is enticing. I wish he had a tool that would allow him to connect in the ways that once gave him joy. Such solutions will become only more necessary: Americans are, on average, getting both older and more reliant on technology to communicate. The oldest Americans, who are most likely to experience cognitive decline, came to social media later in life—and still, nearly half of the population over 65 uses it. Social media is an inextricable part of how younger generations connect. If the particular loneliness of forgetting how to use social media is already becoming apparent, what will happen when an entire generation of power users comes of age?


A quiet monologue runs through my head at all times. It is this: dinner dinner dinner dinner. The thing about dinner is that you have to deal with it every single night. Figuring out what to eat is a pleasure until it becomes a constant low-grade grind. It’s not just the cooking that wears me down, but the meal planning and the grocery shopping and the soon-to-be-rotting produce sitting in my fridge. It is the time it sucks up during the week. It is the endless mental energy. Huh, I think, at 6 p.m., dicing onions. So we’re still doing this?

I can compromise on breakfast. It is absolutely normal to eat the same breakfast every single day for years, and equally normal to eat nothing. Lunch: Eat it, skip it, have some carrot sticks, who cares. Lunch is a meal of convenience. But dinner is special. Dinner isn’t just the largest meal in the standard American diet; it is the most important, the most nourishing, the most freighted with moral weight. The mythical dream of dinner is that after a hard but wholesome day at school or work, the family unit is reunited over a hot meal, freshly prepared. Even if you’re dining solo, dinner tends to be eaten in a state of relative leisure, signaling a transition into the time of day when you are no longer beholden to your job. “You could eat a full bag of Doritos,” Margot Finn, a food-studies scholar at the University of Michigan, told me, but that doesn’t quite cut it for dinner: “There’s some paucity there. There’s some lack.”

[Read: The people who eat the same meal every day]

The Dinner Problem might be especially acute for working parents like me—children are unrelenting in their demand to eat at regular intervals—but it spares almost no one. Disposable income helps mitigate the issue (disposable income helps mitigate most issues), but short of a paid staff, money does not solve it. I could accept this as the price of being human, if everywhere I looked there was not someone promising a way out. The sheer number of hacks and services and appliances and start-ups suggests that some kind of dinner resolution is forthcoming: How could it not be solvable, with this many options? We are living in what might be the world-historic peak of dinner solutions: A whole canon of cookbooks is devoted to quick-and-easy weeknight dinners for busy families, and entire freezer cases are dedicated to microwavable meals. There is takeout and prepared food and DoorDash and a staggering number of prep guides outlining how to cook in bulk one day a week. And yet, none of it has managed to solve the problem: Dinner exists, daunting and ominous.

As it stands, dinner is a game of trade-offs: You can labor over beautiful and wholesome meals, but it is so much work. You can heat up a Trader Joe’s frozen burrito or grab McDonald’s—there is a reason that as of 2016, the last time the government counted, one-third of American adults ate fast food on any given day—but you don’t have to be a health fanatic to aspire to a more balanced diet. You could get takeout, but it’s notoriously expensive and frequently soggy, more a novelty than a regular occurrence. Delivery apps, at least, offer the promise of extreme convenience, except that they are even more expensive, and the food is often even soggier.

In spite of all these options, if you cannot free yourself from dinner, you’re not alone. The many attempts to make dinner painless have not lived up to their promise. Remember Soylent? One of the bolder possibilities, for a while, was a shake that pledged to make “things a lot less complicated” by replacing conventional food with a deconstructed slurry of nutrients. I do want things to be less complicated, but I also want variety. I want to chew. A lot of other people seemed to want these things too, which is presumably one reason food-based dinner persists and Soylent has mellowed into a “nutritional supplement lifestyle brand.”

[Read: The man who could make food obsolete]

Given the general enthusiasm for eating, most proposed innovations have focused on easing the labor of making dinner. Grocery stores offer pre-chopped produce; Whole Foods briefly experimented with an on-site “produce butcher” who would slice or dice or julienne your vegetables. Meal kits that ship portioned ingredients to your doorstep ought to be an obvious solution, and for a minute, it seemed like maybe they were. In 2015, Blue Apron was valued at $2 billion and, according to TechCrunch, was poised to reach “99 percent of potential home cooks.” It did not, in fact, reach 99 percent of potential home cooks, nor did any of its competitors. “There are still people who really love meal kits,” Jeff Wells, the lead editor of Grocery Dive, a trade publication, told me. “There just aren’t that many of them relative to the overall food-shopping population.” The problem is the cost, or the menu, or the quality, or the lack of leftovers, or the prep time.

When one dinner solution fizzles, there is always another, and another, which will be superseded by still more. Lately, Wells said, grocery stores have been investing in their prepared to-go options, with in-store pizza counters and plastic clamshells of deli salads and ready-to-heat containers of spaghetti. Everywhere I look, I seem to be inundated with new and somehow improved solutions. On Instagram, I learned about a new delivery service that is in the process of expanding to my area. While streaming a movie, I was introduced, repeatedly, to a company that sells healthy meals I could have ready in two minutes. Every time I turn on a podcast, I am informed about a meal-kit company that, if I use the promo code, will give me free dessert for life. They all promise the same thing: that dinner could be painless, if I let it. I could have it all, my dinner and my sanity.

Of course, all of these options still require divesting from the Norman Rockwell dream of home-cooked dinner. The ideal of dinner has made me resentful and occasionally unpleasant, and at the same time, I viscerally do not want to eat a vat of precooked spaghetti. I can make spaghetti, I thought. But then I was back where I began. Most of us have two basic choices: You can make the necessary compromises and accept something less than optimal, or you can surrender to a wholesome trap of your own making. You can buy the pre-chopped onions, or you can suck it up and chop your own onions. Those are the choices. The notion that there is a permanent way out—a hack, a kit, a service that gives you all the benefits of dinner cooked from scratch without the labor—is an illusion. You cannot have a meal that both is and is not homemade: Schrödinger’s salmon over couscous with broccoli rabe.

Dinner resists optimization. It can be creative, and it can be pleasurable. None of this negates the fact that it is a grind. It will always be a grind. You will always have to think about it, unless you have someone else to think about it for you, and it will always require too much time or too much energy or too much money or some combination of the three. It is unrelenting, in the way that breathing is unrelenting. There is freedom in surrendering to this fact: Even in this golden age of technological progress, dinner refuses to be solved.

Germs are in the air again: Indicators show that the winter wave of flu and COVID is finally under way. Are you on the verge of getting sick? Am I? My 5-year-old does feel a little warm to me; his sister seems okay. Maybe I should take their temperature?

Maybe I should not. Here’s my resolution for the year ahead: I will not take their temperature. No parent should be taking temperatures. Because doing so is next to useless. Home thermometers are trash.

The thermometer I have is the kind you point at someone’s head. Clearly it’s a scam. At times, I’ll pull the trigger and the number that I get seems almost right. At other times, the readout is absurd. I know when it’s the latter case because, as a human being, I possess a sensate hand. Evolution has deployed a field of thermo-sensing cells on the glabrous surface of my skin, and I’ve found that when these are laid against the forehead of my child, they may produce the following diagnosis: He is hot. Or else: He seems normal. No further probing is required.

I bought my noncontact fever gun in 2020, during what was, in retrospect, a fever-screening fervor, when thermal bouncers were deployed at concert halls and other venues to test your forehead from however far away. I think we all knew in our hearts that this was silly, even those of us who thought the fever guns might be better used in other circumstances. But to call the practice “silly” may have been too kind.

The published evidence on fever guns is damning. One study from the FDA compared their readings, as produced under ideal conditions, with those from oral thermometers; it found that they were often grossly out of whack. The very best-performing models, according to this research, were able to detect a threshold fever—100.4 degrees Fahrenheit—about two-thirds of the time; the very worst could never make the proper diagnosis. Another study, led by Adrian Haimovich, who is now an assistant professor of emergency medicine at Beth Israel Deaconess Medical Center, identified visits to emergency departments in which patients had received both forehead temperature checks and readings with oral or rectal thermometers. The forehead guns were successful at identifying fevers in fewer than one-third of cases.

But let’s not single out the gun, which was to some extent a product of its COVID moment. The standard infrared tympanic probe—which takes a temperature quickly in the ear—is also, in important ways, a waste of time. “I was an ER doc practicing full time when these tympanic thermometers came out,” Edmond Hooker, a professor of health-services administration at Xavier University, told me. He quickly came to think they didn’t work: “I would have a kid come back who was so hot, I could fry an egg on their forehead, and the tympanic thermometer had said 98.6 or 99.” So he started running tests. A paper from 1993 found that the ear thermometers were missing children’s fevers. Another of his studies, conducted in adults, found that the devices were dangerously miscalibrated. (More recent research has raised similar concerns.)

Oral thermometers are fairly accurate, but they present some challenges for use with small children. Rectal probes are the most precise. As for armpit readings, those are also pretty unreliable, Hooker told me. I began to ask him about another means of checking temperature, the light-up fever strip that my parents used to lay across my forehead, but he wouldn’t even let me finish the question. “Absolutely worthless! Your mother was better,” he declared. “That’s what my other study showed: Mom was pretty damn good.”

His other study: Having demonstrated that ear thermometers were ineffective, Hooker decided to compare them with human touch. A parent’s hand—nature’s thermometer—did pretty well: It correctly flagged some 82 percent of children’s fevers, versus the tympanic probe’s 75 percent. Parents’ hands were more prone to overdiagnosis, though: Among the kids with normal temperatures, nearly one-quarter felt warm to their parents. (The false-positive rate for ear thermometers was much lower.)

Many such experiments have been conducted now, in health-care settings all around the world: so much effort spent to measure our ability to diagnose a fever with nothing more than touch. (In medical lingo, this practice is properly—and ickily—described as “parental palpation.”) As a rule, these studies aren’t large, and they may be subject to some bias. For instance, all of the ones that I reviewed were carried out in health-care settings—Hooker’s took place in an emergency department—so the participants weren’t quite “your average kids who might or might not have a cold.” Rather, it’s likely that those kids would have had a higher baseline rate of being feverish, and their parents might have been unusually prone to thinking that their children were very sick.

Some researchers have tried to look at all the little studies of parental touch in aggregate, and although this can be an iffy practice—pooling weak research won’t make it any stronger—these studies do yield about the same result as Hooker’s when taken on the whole: Parents’ hands have a solid sensitivity to fever, of nearly 90 percent, but their specificity is low, at about 55 percent. Put another way: When a kid does have a fever, his parents can usually detect it with their hands, but when he doesn’t, they might mistakenly believe he does.
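
(For readers who want the arithmetic spelled out, here is a rough illustration using the pooled figures above; the definitions are the standard ones, and the numbers are approximate rather than drawn from any single study.)

\[
\text{sensitivity} = \frac{\text{fevers correctly detected}}{\text{all true fevers}} \approx 0.90,
\qquad
\text{specificity} = \frac{\text{non-fevers correctly ruled out}}{\text{all true non-fevers}} \approx 0.55
\]

In other words, out of 100 genuinely feverish children, a parent’s hand would flag roughly 90; out of 100 children without fevers, it would wrongly flag roughly 45.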

The latter isn’t great, given that a kid with a fever is supposed to stay home from school or day care. In that case, a thermometer could provide a helpful (moderating) second opinion. But taken as a measure of the risk to your child’s health, palpation must be good enough. The very hottest children—the ones whose infection may be most imperiling—are also the least likely to be misdiagnosed by touch: If your kid’s head feels like the side of a convection oven, then you’d almost certainly say he’s sick, and you’d almost certainly be correct. (And you’d be correct to call his doctor.) As for the borderline conditions—a temperature of, say, 101 or 99 or 100.4—your hand won’t name his fever with as much precision as a good thermometer would. But the added benefit that thermometer provides, both to your child’s health and to your peace of mind, is next to nothing.

It’s important to remember that the very definition of a threshold fever is arbitrary and subject to the ancient scientific law of Hey, that sounds like a nice, round number. Converted into Celsius, 100.4 degrees Fahrenheit comes out to an even 38 degrees. The established “normal” temperature of 98.6 degrees Fahrenheit maps onto 37 degrees Celsius. (In truth, the temperatures of healthy, older adults will range from 98.9 to 99.9 degrees throughout the day, as measured with an oral thermometer.) Under normal conditions, a measured fever is nothing more than a single aspect of a broader picture that informs the course of treatment, both Haimovich and Hooker told me. An elderly patient with symptoms of a urinary-tract infection might receive a more comprehensive course of antibiotics if she also has a fever, Haimovich said; the heightened temperature suggests that an infection may have spread. But a kid who has a mild fever and is otherwise okay won’t need any treatment. Some evidence suggests that a light fever may even fortify an immune response; so in principle, slightly elevated temperatures should be left alone unless your child is uncomfortable, in which case, maybe ibuprofen? (Conversely, a kid whose temperature is “only” 99 but who seems listless and confused should probably be seen.)
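
(The conversion behind those round numbers, for anyone who wants to check it, is just the standard Fahrenheit-to-Celsius formula; nothing here is specific to fevers.)

\[
^{\circ}\mathrm{C} = \left(^{\circ}\mathrm{F} - 32\right)\times \tfrac{5}{9},
\qquad
(100.4 - 32)\times \tfrac{5}{9} = 38,
\qquad
(98.6 - 32)\times \tfrac{5}{9} = 37.
\]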

Haimovich said he has small children, so I asked him how he checks their temperature—does he ever feel their head? “Oh, yeah,” he said. He told me that his wife seems better at detecting fever than he is, which fits with known neurophysiology: Some research suggests that women’s hands are more sensitive to warmth than men’s, on average. One study, though, done at a hospital in Canada, found that dads are just as good as moms at detecting fever with their hands. (The moms were much more likely to believe that they possessed this skill.) Other research has examined whether having multiple children—and thus perhaps having more experience feeling heads—might also be a factor. The answer is no. This suggests that sussing out a child’s fever is not so much a practiced art as a basic fact of our perception.


As for Hooker, he said he doesn’t even own a thermometer. He has four kids, and he used to feel their heads all the time. “They’re now all grown adults,” he told me. “They all survived me and my lack of concern for fever.” He advises parents not to waste their money on fancy thermometers that probe the ear or forehead. “Just buy an ice-cream cone for your kid; it’s a lot better,” he said. “And if you really feel you need to know your child’s temperature—if it’s an infant—go up their butt” with a rectal thermometer.

Infants are a special case: Tiny babies with any sort of fever could need treatment right away. But for parents who are beyond that stage, your plan of action will be easy: I will not take their temperature. No parent should be taking temperatures. Just place your hand against their forehead, or use your lips instead. Perhaps your child has a fever. Or maybe he just needs a kiss.

You probably remember when you took your last shower, but if I ask you to examine your routine more closely, you might discover some blank spots. Which hand do you use to pick up the shampoo bottle? Which armpit do you soap up first?

Bathing, brushing your teeth, driving to work, making coffee—these are all core habits. In 1890, the psychologist William James observed that living creatures are nothing if not “bundles of habits.” Habits, according to James’s worldview, are a bargain with the devil. They make life easier by automating behaviors you perform regularly. (I would rather attend to what I read in the news on a given morning, for example, than to the minutiae of how I steep my daily tea.) But once an action becomes a habit, you can lose sight of what prompts it, or if you even like it very much. (Maybe the tea would taste better if I steeped it longer.)

Around the new year, countless people pledge to reform their bad habits and introduce new, better ones. Yet the science of habits reveals that they are not beholden to our desires. “We like to think that we’re doing things for a reason, that everything is driven by a goal,” Wendy Wood, a provost professor emerita who studies habit at the University of Southern California, told me. But goals seem like our primary motivation only because we’re more conscious of them than of how strong our habits are. In fact, becoming aware of your invisible habits can boost your chances of successfully forming new, effective habits or breaking harmful ones this resolution season, so that you can live a life dictated more by what you enjoy and less by what you’re used to.

James was prescient about habits, even though he described them more than 100 years ago. Habitual action “goes on of itself,” he wrote. Indeed, modern researchers have discerned that habits are practically automatic “context-response associations”—they form when people repeat an action cued by some trigger in an environment. After you repeat an action enough times, you’ll do it mindlessly if you encounter the cue and the environment. “That doesn’t mean that people have no recollection of what they did,” David Neal, a psychologist who specializes in behavior change, told me. “It just means that your conscious mind doesn’t need to participate in the initiation or execution of the behavior.”

[Read: Make a to-don’t list]

Our conscious goals might motivate us to repeat a particular behavior, and so serve as the spark that gets the habit engine going. In fact, “people who are best at achieving their goals are the ones who purposefully form habits to automate some of the things that they do,” Benjamin Gardner, a psychologist of habitual behavior at the University of Surrey, told me. He recently built a flossing habit by flossing each day in the same environment (the bathroom), following the same contextual cues (brushing his teeth). “There are days when I think, I can’t remember if I flossed yesterday, but I just trust I definitely did, because it’s such a strong part of my routine,” he said.

But even habits that are deliberately begun are worth reevaluating every so often, because once they solidify, they can break away from the goals that inspired them. If our goals shift, context cues will still trigger habitual behavior. A 1998 meta-analysis found that intentions could predict only actions that are done occasionally, such as getting a flu shot, and not actions that are repeated regularly, such as wearing a seat belt. In one study from 2012, students who often went to a sports stadium raised their voices when they saw an image of that stadium, even if they didn’t intend to. And scientists have shown that habitual behaviors and goal-directed behaviors involve different pathways in the brain. When an action becomes a habit, it becomes more automatic and relies more on the sensorimotor system. When scientists damage the parts of animals’ brains that are related to goal-directed behavior, the animals start behaving more habitually. (There remains some debate, however, about whether any human action can truly be independent from goals.)

And yet, people tend to explain their habitual behavior by appealing to their goals and desires. A 2011 study found that people who said they’d eat when they got emotional weren’t actually more likely to snack in response to negative feelings; eating behaviors were better explained by habit. In a 2022 study, Wood and her colleagues asked people why they drank coffee. The participants said they did so when they were tired, but in fact, when they logged their coffee drinking, it was only weakly correlated with their fatigue. “They didn’t have a desire to drink coffee,” Wood said. “It was just the time when they typically did during the day.”

[Read: The long-held habits you might need to reconsider]

Habits also maintain their independence by not being as sensitive to rewards. If you don’t like something the first time you try it, you probably won’t repeat the experience. But habits can persist even if their outcome stops being pleasing. In one study Wood worked on with Neal and other colleagues, people with a habit of eating popcorn at the movies ate more stale popcorn than those without the habit. Those with a popcorn habit reported later that they could tell the popcorn was gross, but they just kept eating it. “It’s not that they are totally unaware that they don’t like it,” Wood said. “The behavior continues to be triggered by the context that they’re in.” It’s not so terrible to endure some stale popcorn, but consider the consequences if more complex habitual actions—ones related to, say, work-life balance, relationships, or technology—hang around past their expiration date.

In the face of invisible habits, awareness and attention are powerful weapons. In a recent study, Gardner asked people who slept fewer than six hours a night to describe their bedtime routines in detail. Doing so revealed pernicious bedtime habits they weren’t aware of before. James Clear, the author of Atomic Habits, has similarly suggested making a “Habits Scorecard,” a written list of all of your daily habits that includes a rating of how positively, negatively, or neutrally they affect your life.

[Read: You can’t simply decide to be a different person]

Neutral habits, such as the timing of my yoga session, can be hardest to take stock of. And if they’re just humming along making your life easier, identifying them might feel pointless. But because habits won’t always have your latest intentions in mind, it’s worth keeping an eye on them to make sure they don’t start working against you. Like it or not, people are destined to be bundled up with habits. But knowing how they work—simply becoming aware of how unaware of them we can be—can help get you to a life with as little stale popcorn as possible.

In the summer of 2018, 59-year-old David Gould went for his annual checkup, expecting to hear the usual: Everything looks fine. Instead, he was told that he was newly—and oddly—anemic.

Two months later, Gould began to experience a strange cascade of symptoms. His ankles swelled to the width of his calves. The right side of his face became so bloated that he could not open his eye. He developed a full-body rash, joint pain, fever, and drenching night sweats. His anemia worsened, and he began requiring frequent blood transfusions. Gould’s physicians were baffled; he was scared. “I started to get my will and affairs in order,” he told me.

Almost two years into his ordeal, Gould learned of an initiative at the National Institutes of Health that focuses on solving the country’s most puzzling medical cases. He applied for the program, and his file soon reached the desks of Donna Novacic and David Beck, two scientists then at the NIH. The pair had helped identify a still-unnamed disease, which they had tied to a particular gene and to a particular somatic mutation—a genetic change that had not been passed down from a parent and was present only in certain cells. Gould’s symptoms seemed uncannily similar to those of patients known to have this new disease, and a blood test confirmed the scientists’ hunch: Gould had the mutation.

The NIH doctors reached Gould by phone the day he was set to start chemotherapy, which had proved dangerous in another person with the same disease. A bone-marrow transplant, they told him, could be a risky but more effective intervention—one he ultimately chose after extensive discussions with his own physicians. Within weeks, he was no longer anemic, and his once unrelenting symptoms dissipated. A few months after his transplant, Gould felt normal again—and has ever since.

When the NIH team published its findings in 2020, the paper created a sensation in the medical community, not only because it described a new genetic disease (now known as VEXAS) but also because of the role a somatic mutation had played in a condition that appeared in adulthood. For many doctors like me—I practice rheumatology, which focuses on the treatment of autoimmune illnesses—the term genetic disease has always implied an inherited condition, one shared by family members and present at birth. Yet what physicians are only now beginning to realize is that somatic mutations may help explain illnesses that were never considered “genetic” at all.


Somatic mutations occur after conception—after egg meets sperm—and continue over our lives, spurred by exposure to tobacco smoke, ultraviolet light, or other harmful substances. Our bodies are adept at catching these mistakes, but sometimes errors slip through. The result is a state called “somatic mosaicism,” in which two or more groups of cells in the same body possess different genetic compositions. In recent years, the discovery of conditions such as VEXAS has forced scientists to question their assumptions about just how relevant somatic mosaicism might be to human disease, and, in 2023, the NIH launched the Somatic Mosaicism Across Human Tissues (SMaHT) Network, meant to deepen our understanding of genetic variation across the human body’s cells.

Over the past decade, genetic sequencing has become dramatically faster, cheaper, and more detailed, which has made sequencing the genomes of different cells in the same person more practical and has led scientists to understand just how much genetic variation exists in each of us. Tweaks in DNA caused by somatic mutations mean that we have not just one genome, perfectly replicated in every cell of our body. Jake Rubens, the CEO and a co-founder of Quotient Therapeutics, a company that uses somatic genomics to develop novel therapies, has calculated that we each have closer to 30 trillion genomes, dispersed across our many cells. Two adjacent cells, seemingly identical under the microscope, can have about 1,000 differences in their genomes.
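For a rough sense of the scale Rubens describes, a back-of-envelope calculation helps. The sketch below uses standard ballpark figures (roughly 30 trillion cells per body and roughly 6 billion base pairs per diploid genome); these are general estimates for illustration, not numbers taken from Rubens or Quotient Therapeutics.

```python
# Back-of-envelope sketch only: the cell count, genome size, and the
# 1,000-difference figure quoted above are treated as rough inputs,
# not measured values.

CELLS_IN_BODY = 3.0e13        # assumption: roughly 30 trillion cells
DIPLOID_GENOME_BP = 6.0e9     # assumption: roughly 6 billion base pairs per cell

# If every cell carries its own slightly altered copy of the genome, the
# number of distinct genomes is on the order of the number of cells.
print(f"Distinct genomes, to an order of magnitude: {CELLS_IN_BODY:.0e}")

# Two neighboring cells can differ at about 1,000 positions. Relative to
# the whole genome, that is a tiny fraction.
differences_between_neighbors = 1_000
fraction_different = differences_between_neighbors / DIPLOID_GENOME_BP
print(f"Fraction of positions differing between neighbors: {fraction_different:.1e}")
# -> about 1.7e-07, roughly one base in six million
```

The point is not the precise numbers but the orders of magnitude: mutations are rare within any single cell, yet the sheer number of cells guarantees enormous genetic variety within one body.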

One medical specialty has long understood the implications of this variation: oncology. Since the 1990s, doctors have known that most cancers arise from somatic mutations in genes that promote or suppress tumor growth, but discoveries such as VEXAS are convincing more researchers that these mutations could help explain or define other types of illnesses too. “We have the data that says many conditions are genetic, but we don’t understand the machinery that makes this so,” Richard Gibbs, the founding director of the Human Genome Sequencing Center at Baylor College of Medicine, told me. “Maybe somatic mutations are the events that serve as the missing link.” James Bennett, a SMaHT-funded researcher, is confident that the more scientists look at mutations in different cells of the body, the more connections they are likely to find to specific diseases. Until recently, genetic sequencing has been applied almost exclusively to the most accessible type of cells—blood cells—but, as Bennett told me, these cells sometimes have little to do with diseases affecting various organs. The result of SMaHT, he said, will be that “for the first time, we will have an atlas of somatic mutations across the entire body.”

The brain, for instance, is often thought of as our most genetically bland organ, because adult brain cells don’t replicate much, and it has rarely been subject to genetic investigation. But in 2015, scientists in South Korea demonstrated that people with a disease called focal epilepsy can develop seizures because of somatic mutations that create faulty genes in a subset of brain cells. This finding has led researchers such as Christopher Walsh, the chief of the genetics and genomics division at Boston Children’s Hospital, to consider what other brain disorders might arise from somatic mutations. He hypothesized that somatic mutations in different parts of the brain could, for instance, explain the varied ways that autism can affect different people, and, in a series of studies, demonstrated that this is indeed the case for a small proportion of children with autism. Other researchers have published work indicating that somatic mutations in brain cells likely contribute to the development of schizophrenia, Parkinson’s disease, and Alzheimer’s disease (though, these researchers note, mutations are just one of several factors that contribute to these complex conditions).

As much as these mutations might help us better understand disease, some scientists caution that few other examples will be as tidy as cancer, or VEXAS. Yiming Luo, a rheumatologist and genetics expert at Columbia University Irving Medical Center (which I am also affiliated with), told me that finding germ-line mutations, which are changes to DNA that a person inherits from a parent’s egg or sperm cell, is much easier than finding significant somatic mutations. A germ-line mutation looks like a red ball in a sea of white balls—difficult, but not impossible, to spot; a somatic mutation is gray, and more easily blends in. “In genetics, it can be hard to separate sound from noise,” Luo said. And even when a scientist feels confident that they have found a real somatic mutation, the next steps—understanding the biologic and clinical implications of the mutation—can take years.
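To see why a somatic mutation “blends in,” it helps to look at how variants show up in raw sequencing reads. The short sketch below is purely illustrative: the read counts, coverage, and error rate are assumptions made for the sake of the example, not figures from Luo or the NIH team.

```python
# Minimal sketch of the "red ball vs. gray ball" problem in sequencing data.
# A germ-line heterozygous variant is expected in roughly half of the reads
# covering a position; a somatic mutation confined to a small clone of cells
# may appear in only a few percent of reads, close to the sequencing error
# rate. All numbers here are illustrative assumptions.

def variant_allele_fraction(variant_reads: int, total_reads: int) -> float:
    """Fraction of reads at a position that carry the variant base."""
    return variant_reads / total_reads

# Assumed example: 500x coverage at a single genomic position.
germline_vaf = variant_allele_fraction(248, 500)   # ~50%: stands out clearly
somatic_vaf = variant_allele_fraction(12, 500)     # ~2.4%: much harder to call
assumed_error_rate = 0.01                          # assumed ~1% per-base error

for label, vaf in [("germ-line", germline_vaf), ("somatic", somatic_vaf)]:
    # A variant far above the error rate stands out; one near it blends in.
    clearly_above_noise = vaf > 3 * assumed_error_rate
    print(f"{label}: VAF = {vaf:.1%}, clearly above noise: {clearly_above_noise}")
```

Real somatic-variant analysis relies on far more sophisticated statistical models, but the basic difficulty is the same: a low-frequency variant sits uncomfortably close to the noise floor of the measurement.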

Oncologists have had a head start on translating somatic-mutation science into practice, but doing the same in other specialties—including mine—may prove challenging. Dan Kastner, a rheumatologist and one of the lead NIH scientists responsible for the discovery of VEXAS, told me that, although cancer involves mountains of cellular clones that are easily identifiable and begging to be genetically analyzed, pinpointing a single cell that drives, say, a rheumatologic disease is much harder. The story of VEXAS was remarkable because the mutation causing the disease was found in blood cells, which are easy to sample and are the cells most often tested for genetic variation. Finding other disease-causing somatic mutations in rheumatology and related specialties will take skill, cunning, and a willingness to test cells and organs throughout the body.


Yet my colleagues and I can no longer ignore the possibility that somatic mutations may be affecting our adult patients. VEXAS, which was unknown to doctors five years ago, may be present in 15,000 people across the U.S. (making it as common as ALS, also known as Lou Gehrig’s disease); if its global prevalence matches that of this country, it could affect about half a million people worldwide. And if, while seeking diagnoses for patients, we pause to consider diseases already known to be linked to somatic mutations, we could improve our practice.

Recently, I was called to evaluate a man in his 60s whose medical history was littered with unexplained symptoms and signs—swollen lymph nodes, joint pain, abnormal blood-cell counts—that had stumped his team of specialists. I was struck that his skin was riddled with xanthomas—yellowish, waxy-appearing deposits of fatty tissue—even though his cholesterol levels were normal, and I learned through Googling that among their potential causes was Erdheim-Chester disease, a rare blood-cell disorder that arises due to somatic mutations.

I wondered whether I was losing perspective, given my newfound obsession, but because the patient had already had biopsies of a lymph node and his bone marrow, we sent those off for molecular testing. Both samples came back with an identical finding: a somatic mutation associated with Erdheim-Chester. When I emailed a local expert on the disease, I still expected a gentle admonishment for being too eager to invoke an exceedingly uncommon diagnosis. But within minutes, he replied that, yes, this patient likely had Erdheim-Chester and that he would be happy to see the man in his clinic right away.

I sat at my computer staring at this reply. I could not have even contemplated the likely diagnosis for this patient a year ago, yet here it was: an adult-onset condition, masquerading as an autoimmune illness, but actually due to a somatic mutation. The diagnosis felt too perfect to be true, and in some ways, it was. Fewer than 1,500 patients have ever been found to have this particular condition. But, at the same time, it made me wonder: If rethinking genetic disease helped this one person, how many others out there are waiting for a similar answer?  
