For the past year, the United States has gone without its doctor. Ever since Vivek Murthy resigned as surgeon general last January, the role has remained empty despite President Trump’s attempts to fill it. He first nominated the physician Janette Nesheiwat but withdrew her nomination in May after reports that she completed her M.D. not in Arkansas, as she had claimed, but in St. Maarten. In her place, Trump nominated Casey Means, whose background is odd, to say the least.

Means is a Stanford Medicine graduate who dropped out of her surgical residency and has since made a career infusing spiritual beliefs into her wellness company, social-media accounts, and best-selling book. The exact nature of her spirituality is hard to parse: Means adopts an anti-institutionalist, salad-bar approach. She might share Kabbalah or Buddhist teachings, or quote Rumi or the movie Moana. She has written about speaking to trees and participating in full-moon ceremonies, both of which drew ridicule from the conservative activist and unofficial Trump adviser Laura Loomer. Her belief in “the divine feminine” (which she doesn’t quite explain) seems to have led her to renounce hormonal birth-control pills for halting the “cyclical life-giving nature of women.”

Although months have passed since her nomination, Means has still not appeared before Congress—in part because she went into labor with her first child hours before her confirmation hearing was scheduled to begin. (Means did not respond to questions for this story. A spokesperson for Bill Cassidy, who chairs the relevant Senate committee, told me that “the hearing will be rescheduled in the future when Dr. Means is ready” but did not offer a more detailed timeline.) The United States’ year without a surgeon general raises questions about how necessary the role really is. But the surgeon general still serves as the government’s leading spokesperson on public health, and if Means is eventually confirmed, her theology will become rather consequential because it is deeply tied to her beliefs about health. In 2024, she declared in a Senate roundtable on chronic disease that “what we are dealing with here is so much more than a physical health crisis. This is a spiritual crisis.” Part of her solution to both of these crises is to reject experts and institutions in favor of something far more alluring: intuition.

Means wrote in 2024 that she grew up in the Catholic faith but left the Church in college. She grew fascinated by lectures at the Self-Realization Fellowship Lake Shrine, a spiritual center in Pacific Palisades, California. SRF, the religious organization behind it, was founded in 1920 by Paramahansa Yogananda, the “father of yoga in the West,” whose image graced the album cover of Sgt. Pepper’s Lonely Hearts Club Band. It accepts the teachings of Jesus and other spiritual masters and divinities, but seemingly nothing is as important as one’s personal relationship with God. Yogananda’s book The Second Coming of Christ posits that the Second Coming is not necessarily literal but instead entails an awakening of the divine consciousness within ourselves.

SRF’s influence is apparent in Means’s advice that people follow their “heart intelligence” and “divine intuition” and avoid “blindly ‘trusting the science.’” In a newsletter sponsored by a probiotic-supplement company, she wrote that “applying the scientific method to health and disease has immense utility for helping us understand the natural world and live healthy, longer lives, but it feels increasingly like there is a campaign being enacted against our divine gifts of intuition and heart intelligence.” In another newsletter, she wrote about the role of divine intuition in deciding whether to drink raw milk: She wants to be free to look a local farmer in the eye, “pet his cow, and then decide if I feel safe to drink the milk from his farm.” (One could very well have a lovely experience with a farmer, Kevin Klatt, a registered dietitian and research scientist at UC Berkeley, told me, “but it isn’t going to change the fact that raw milk might give you listeria.”)

In the same newsletter championing bovine contact, Means laments a spiritual crisis of connection to nature. She frequently portrays nature as a force with humanity’s best interests at heart, nearly synonymous with God. In her book, she suggests that chronic stress and trauma can be treated by, among other things, spending time in nature and through “plant medicine”—specifically, psilocybin-assisted therapy. (Means has also written that psychedelics helped her be “one with the moon.”) In that sponsored newsletter, she warned of a prophecy she says was put forth by the Indigenous Kogi people of Colombia, in which humanity has only until 2026 to prove we want to right the wrongs we have foisted upon the Earth, or we will all die. “I use the Kogi prophecy metaphorically,” she wrote. “But I do feel we are on a road to disaster. I think we should take these messages seriously.” Natural disasters, she implied, are a “communication from God.”

Nature worship might be especially appealing at a time when trust in experts is declining and technology has become ever more inscrutable and overwhelming, Alan Levinovitz, a professor of religion at James Madison University and the author of Natural: How Faith in Nature’s Goodness Leads to Harmful Fads, Unjust Laws, and Flawed Science, told me. Means’s appeal to nature and intuition, he said, is empowering because it puts expertise back into everyday Americans’ hands.

The ambiguity of Means’s spiritual views strengthens her appeal—they can be interpreted to fit a wide array of belief systems. Her 2024 New York Times best seller, Good Energy, uses terms such as energy and life force, along with scientific-sounding descriptions of metabolic processes, to insinuate that the vibes are off in the American diet and lifestyle. (Means wrote Good Energy with her brother, Calley, who is now a close adviser to Robert F. Kennedy Jr., the secretary of Health and Human Services.) In her newsletter, she encourages her readers to “avoid conventionally grown foods at all costs,” and warns that buying nonorganic food is a vote to “diminish the life force on this planet” while the use of synthetic pesticides “is giving a poor signal to God (Source!) that we want this miracle to continue.” (Source refers to a godlike or all-powerful entity.) “She’s drawing on lots of different ideas very freely and without much rigor in ways that feel good,” Joseph Baker, a sociologist specializing in religion at East Tennessee State University, told me. “That sort of allows her to seem like a visionary without having to specify anything.”

Emily Hilliard, a press secretary for the Department of Health and Human Services, wrote in an email that religious and spiritual beliefs should not be held against anyone who seeks a government job, and that Means’s “credentials, research background, and experience in public life give her the right insights to be the surgeon general who helps make sure America never again becomes the sickest nation on earth.” The surgeon general has little power to enforce policy but can call on Congress to require warnings on products, like those on cigarette packs; release guidelines and reports; and lend support to various initiatives. Means’s belief system—which Baker characterized as a “sacralization of the individual”—suggests that she will use that platform to invite Americans to master their own health. In Good Energy, Means writes of chronic conditions such as depression, anxiety, infertility, insomnia, heart disease, erectile dysfunction, and cancer, “The ability to prevent and reverse these conditions—and feel incredible today—is under your control and simpler than you think.”

That statement is one of many in which Means echoes elements of manifestation: the belief, which Means says is real, that thinking good thoughts and putting in effort begets good things. She advocates “tapping into the abundance that is a sheer law of our universe” and calling on a higher power—“When was the last time you simply sat quietly and asked God/spirit/ancestors/nature to help show you the way and guide you to your highest purpose?” she wrote in her newsletter—but also putting in the hard, hard work.

Means goes beyond intuition and heart intelligence to offer concrete suggestions for labor (and spending) that will be divinely rewarded—essentially, a reimagined prosperity gospel. The nature of that work is detailed in the penultimate section of Good Energy. Means recommends eating minimally processed and mostly organic foods, and taking regular cold plunges or showers. (In her newsletter, she also advises Americans to grow the majority of their food; instead of pets, they could “raise chickens and goats and have abundant eggs and milk.”) She includes checklists upon checklists of habits and tests that “enable Good Energy” (and recommends getting a comprehensive lab panel from Function Health, in which she was an investor). She suggests buying a glucose monitor through her own company, Levels, and also recommends various personal-care apps, water filters, and trackers for sleep, food, and activity. Some of these items are sold by the wellness company True Medicine, which helps customers use their health savings accounts for a wide range of purchases, and in which Means has invested; her brother co-founded it. According to financial disclosures made public in September, Means has also received more than $275,000 from supplement companies. (Means has pledged to divest from True Medicine and other wellness interests if she is confirmed.)

Besides potentially boosting her own bottom line, Means’s embrace of individualism in health is wholly unrealistic. Americans work longer hours than people in many other developed nations, and many don’t have enough time to cook dinner, let alone raise goats. Many of the most important nutrition victories over the past century, such as the fortification of foods and the removal of trans fats, were communal and systemic, Klatt, the dietitian and UC Berkeley researcher, told me—the type of science-backed, population-level interventions that Means hasn’t demonstrated much interest in. A different prospective surgeon general might recommend repeated visits with a dietitian and fight for insurance to cover them, instead of “advocating for this kind of woo-woo stuff that has no data behind it,” Klatt said. Means, though, “is not an individual who seems to be wedded to the scientific process,” Timothy Caulfield, a professor and the research director at the Health Law Institute at the University of Alberta, told me. “This is someone who seems to pull things out of thin air and then look for sciencey-sounding rhetoric” to support them.

Perhaps Means’s eventual confirmation hearing will clarify what, exactly, she intends to do as the face of American public health. But even she may not be sure. “The future of medicine will be about light,” Means wrote to her newsletter subscribers last year, before admitting, “I don’t exactly know how yet.”

Milk is mundane in most contexts, but you can’t help noticing when it is smeared across the upper lips of America’s government officials. An image of Donald Trump sporting a milk mustache and glowering over a glass of milk was just one of many dairy-themed posts shared by government accounts on X during the past week, all of which made clear that the milk was whole. In one video, a seemingly AI-generated Robert F. Kennedy Jr. takes a sip and is transported to a nightclub, suddenly milk-mustachioed; in another, former Housing Secretary Ben Carson raises a glass of full-fat and sports a white ’stache. The upper lips of the former collegiate swimmer Riley Gaines and the former NBA player Enes Kanter Freedom, among other personalities embraced by the right, also got the whole-milk treatment.

The posts were shared to celebrate a big month for whole milk. On January 7, the Department of Agriculture released its updated Dietary Guidelines for Americans, which newly recommend whole dairy over low-fat products and place a carton of whole milk near the top of a revamped, upside-down food pyramid. Then, on Wednesday, President Trump signed into law a bill allowing schools to serve whole milk after more than a decade of being restricted to low-fat.

Medical professionals, who have long advised people to avoid full-fat dairy because it contains high levels of saturated fat, were generally critical of the new dietary guidelines for milk. But Kennedy and Trump, along with other government officials, have framed it as a major win for health. Kennedy recently argued that America’s children have been missing out on key nutrients such as calcium and vitamin D because they don’t want to drink the low-fat milk served in schools. The new law, he said at its signing, embodies the new dietary guidelines’ directive to “eat real food.”

The low-fat-versus-whole controversy is a real, active scientific debate. For roughly the past two decades, reduced-fat milk (2 percent milk fat, by weight) has dominated American refrigerators largely thanks to fears about fat in general, and saturated fat in particular. Copious research has linked saturated-fat intake with health issues including cardiovascular disease and cancer, as well as death from all causes. It also leads to higher LDL (“bad”) cholesterol, which has been shown to cause strokes and heart attacks, Kyla Lara-Breitinger, a cardiologist at the Mayo Clinic, told me.

[Read: The most miraculous (and overlooked) type of milk]

Saturated fat generally isn’t a huge concern for children, so giving them the option to drink whole milk at school is somewhat less fraught, Steven Abrams, a child-nutrition expert and a member of the American Academy of Pediatrics, told me. And some researchers propose that, because whole milk is more satiating, kids who drink it are less likely to reach for other high-calorie foods. “Full-fat dairy is especially important for kids ages 12 months to 10 years to meet energy needs and promote brain development,” a spokesperson for the Department of Agriculture wrote in an email. But the AAP holds that kids should switch to drinking low-fat or skim at age 2.

In contrast to most nutritionists, Kennedy is all in on saturated fat, championing foods such as butter, beef tallow, and red meat. At a press conference to announce the new dietary guidelines, Kennedy proclaimed that the government was “ending the war on saturated fats.” The reality is more confusing. The new dietary guidelines promote more foods that are high in saturated fat, but they retain the old recommendation to limit daily saturated-fat intake to 10 percent of total calories, or about 20 grams a day in a 2,000-calorie diet. A single cup of whole milk has 5 grams. If a person consumes the recommended three daily servings of full-fat dairy, it would be “pretty close to impossible” to stay within the saturated fat limit, Caitlin Dow, a senior nutrition scientist at the Center for Science in the Public Interest, told me. (The White House and the Department of Health and Human Services did not respond to a request for comment.)
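
The arithmetic behind Dow’s point is easy to sketch. The snippet below is my own illustrative back-of-the-envelope calculation, not hers, using only the figures cited above plus the standard 9 calories per gram of fat, and treating each dairy serving as a cup of whole milk:

```python
# Rough saturated-fat budget on a 2,000-calorie diet (illustrative figures from the article).
daily_calories = 2000
limit_grams = 0.10 * daily_calories / 9   # fat has about 9 calories per gram -> ~22 g
per_cup_grams = 5                         # saturated fat in one cup of whole milk
dairy_grams = 3 * per_cup_grams           # three daily servings, treated here as cups of whole milk
left_over = limit_grams - dairy_grams     # what's left for meat, cheese, butter, snacks

print(f"limit ~{limit_grams:.0f} g, dairy alone {dairy_grams} g, "
      f"everything else ~{left_over:.0f} g")
# limit ~22 g, dairy alone 15 g, everything else ~7 g
```

Three cups of whole milk, in other words, would use up most of the day’s allowance before any cheese, butter, or meat is counted.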

A relatively new and controversial school of thought posits that full-fat milk isn’t as harmful as other sources of saturated fat. A 2018 study that involved participants from 21 countries found that dairy consumption—even whole-fat dairy—was associated with lower rates of mortality and major cardiovascular-disease events. Other studies have shown that the consumption of whole-fat dairy is linked to decreased diabetes risk and doesn’t cause weight gain. “There’s no convincing evidence that low-fat dairy is preferable to whole-fat dairy for any health outcome,” Dariush Mozaffarian, a cardiologist at Tufts University who was a co-author on the 2018 study, told me. The broader research community has so far resisted this idea, but has acknowledged that the science on dairy fat has become more complex. “The reason you’re getting so many conflicting opinions is that the evidence is very controversial,” Lara-Breitinger said, noting the lack of randomized clinical trials comparing whole-fat and low-fat dairy.

[Read: Go ahead, try to explain milk]

Ultimately, milk isn’t “going to make or break a diet,” Dow said. Dairy makes up just 10 percent of the average American’s caloric intake, and most of that is cheese. Even for kids, very real concerns, such as obesity and diabetes, will probably not be solved—or meaningfully exacerbated—by a switch to whole milk. “You could probably have either low-fat or whole-fat, and it doesn’t matter,” Mozaffarian said.

As I have written previously, Americans have spent roughly the past 150 years quarreling about various aspects of milk, including its benefits, safety, and chemical composition. That’s partly because dairy is a powerful industry; last year, dairy products in the U.S. had an economic impact of nearly $780 billion. But since 2012, when the USDA under then-President Barack Obama required schools to serve only low-fat milk, student milk consumption has declined; according to the dairy industry, that’s because low-fat milk doesn’t taste as good. The Trump administration’s promotion of whole milk, Dow said, “really, really supports the dairy industry’s bottom line.” In fact, many of the reviewers of the new dietary guidelines were recently found to have ties to the beef and dairy industries. (When I asked the USDA about allegations of industry influence on the push for whole milk, the spokesperson asserted that the evidence “was evaluated based solely on scientific rigor, study design, consistency of findings, and biological plausibility.”)

[Read: Milk has divided Americans for more than 150 years]

Beyond serving as an economic engine, milk is a potent cultural symbol. It has long evoked an idealized past: a simpler time when cows roamed through pastures and produced pure, wholesome milk, and the Americans who tended them thrived in harmony with the natural world. Dairy companies have leaned into that aesthetic, featuring barns, fields, and words such as pure on milk cartons. Milk is also culturally linked to strength, wealth, and beauty, thanks in no small part to the celebrity-studded dairy ads of the late 20th century, including the “Got Milk?” campaign referenced by the Trump administration’s milk mustaches. Such positive associations make milk a powerful metaphor for what America could be—if certain unsavory elements of modernity could be undone or erased.

Perhaps unsurprisingly, this association has also been invoked in racist contexts for more than a century. In a 1923 speech, Herbert Hoover, who was then commerce secretary and would be elected president five years later, framed milk as a means to ensure “the very growth and virility of the white races.” Modern-day white nationalists and alt-right groups hold up dairy milk as a symbol of whiteness and masculinity, in contrast to soy milk, which they associate with the woke, feminist, multiracial left. (Yes, seriously.)

The idealized era of perfectly safe, perfectly wholesome dairy never really existed. “This whole idea that there was a time when we were healthy, and during that time we were eating steak and drinking whole milk, is not rooted in any reality,” Dow said. Nevertheless, it resonates with the MAHA and MAGA agendas, which both center on the belief that America will return to its former glory if it can re-create the past. The Trump administration’s endorsement of whole milk may nominally be about public health. But a recent White House post featuring a retro illustration of the president as an old-fashioned milkman, captioned “Make Whole Milk Great Again,” was all about aspiration—and the purified nation, untainted by modernity, that America could someday become.

Chief among the burdens weighing upon the weary sports parent—worse than the endless commutes, the exorbitant fees, the obnoxious parents on the other team—is the sense that your every decision has the power to make or break your child’s future. Should your 11-year-old show up to her elementary-school holiday concert, even if it means missing a practice with the elite soccer team to which you’ve pledged 100 percent attendance? What if this turns out to be the fork in the road that consigns her to the athletic scrap heap?

These are heavy decisions—at least they are for me, a soccer dad who happens to have spent years writing about the science of athletic success. Making it to the pros, the conventional wisdom says, is a consequence of talent and hard work. Best-selling books have bickered over the precise ratio—whether, say, 10,000 hours of practice trumps having the so-called sports gene. But the bottom line is that you need a sufficient combination of both. If you’re talented enough and do the work, you’ll make it. If not—well, decisions (and holiday concerts) have consequences.

Rationally, stressing out over missing a single practice is ridiculous. Believing that it matters, though, can be strangely reassuring, because of the suggestion that the future is under your control. Forecasting athletic careers is an imperfect science: Not every top draft pick pans out; not every star was a top draft pick. Unexpected injuries aside, the imprecision of our predictions is usually seen as a measurement problem. If we could only figure out which factors mattered most—how to quantify talent, which types of practice best develop it—we would be able to plot athletic trajectories with confidence.

Unless, of course, this tidy relationship between cause and effect is an illusion. What if the real prerequisite for athletic stardom is that you have to get lucky?

Joseph Baker, a scientist at the University of Toronto’s Sport Insight Lab, thinks that the way talent development is usually framed leaves out this crucial ingredient. Baker is a prominent figure in the academic world of “optimal human development,” who moonlights as a consultant for organizations such as the Texas Rangers. He’s also a longtime skeptic of the usual stories we tell ourselves about athletic talent. The most prominent is that early performance is the best predictor of later performance. In reality, many cases of early success just mean that an athlete was born in the first months of the year, went through puberty at a young age, or had rich and highly enthusiastic parents.

This critique of talent is not entirely new. It’s been almost two decades since Malcolm Gladwell’s Outliers spurred a cohort of hyper-ambitious soon-to-be parents to begin plotting January birth dates (or at least to tell people they were considering it). Over time, the debate about what factors actually matter has devolved into a game of whack-a-mole. If physical development isn’t the best predictor of long-term success, then it must be reaction time, or visual acuity, or hours of deliberate practice. The default assumption is that there must be something that reveals the presence of future athletic greatness.

Baker’s perspective changed, he told me, when he read Success and Luck, a 2016 book by the former Cornell University economics professor Robert H. Frank. Frank describes a hypothetical sports tournament whose outcome depends 49 percent on talent, 49 percent on effort, and 2 percent on luck. In mathematical simulations where as many as 100,000 competitors are randomly assigned values for each of these traits, it turns out that the winner is rarely the person with the highest combination of talent and effort. Instead, it is usually someone who ranks relatively highly on those measures and also gets lucky.
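
Frank’s setup is simple enough to re-create. What follows is my own rough sketch of that kind of simulation, not his actual code: each of 100,000 simulated competitors draws random scores for talent, effort, and luck, and the final result is weighted 49-49-2, as he describes.

```python
import random

# Illustrative re-creation of the tournament Frank describes (not his actual model):
# performance is 49 percent talent, 49 percent effort, and only 2 percent luck.
def most_skilled_wins(n_competitors=100_000):
    best_score = best_skill = top_skill = -1.0
    for _ in range(n_competitors):
        talent, effort, luck = random.random(), random.random(), random.random()
        skill = 0.49 * talent + 0.49 * effort   # the "deserved" part of the outcome
        score = skill + 0.02 * luck             # what actually decides the winner
        if score > best_score:
            best_score, best_skill = score, skill
        top_skill = max(top_skill, skill)
    return best_skill == top_skill              # did the most skilled entrant win?

trials = 100
wins = sum(most_skilled_wins() for _ in range(trials))
print(f"The most skilled competitor won {wins} of {trials} tournaments")
```

With that many entrants, the top talent-and-effort scores bunch so tightly together that the 2 percent luck term ends up deciding who finishes first, which is exactly Frank’s point: the winner is almost always very good, but almost never the single best.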

This turns out to be something like a law of nature: It has been replicated and extended by others since Frank’s book came out. Among the most influential models is “Talent Versus Luck,” created by the Italian theoretical physicist Andrea Rapisarda and his colleagues, which simulates career trajectories over dozens of years and reaches the same conclusion. This model earned a 2022 Ig Nobel Prize “for explaining, mathematically, why success most often goes not to the most talented people, but instead to the luckiest.”

To Baker, these models suggest that it’s not just hard to reliably predict athletic futures; it’s impossible. He cites examples including a youth-soccer player for Northampton Town who missed a text message from the team’s manager telling him that he’d been dropped from the roster for an upcoming game. He showed up for the bus, went along for the ride, subbed in when another player got injured, impressed the manager, earned a spot for the rest of the season, and went on to play in the Premier League. Luck takes many forms, such as genetics, family resources, and what sports happen to be popular in a given place at a given time. But sometimes, it’s simply random chance: a gust of wind or an errant bounce or a missed text.

It’s easy to see how luck shapes individual moments in sport—how it changes the course of a game, a series, even an entire season. But what’s harder to accept is that luck might also play a role in longer arcs—not just what happens in games but who appears on the court in the first place. The more you reckon with this, the more disorienting it can be, as things start to feel ever more arbitrary and unfair. As Michael Mauboussin, an investor who wrote about luck in his 2012 book, The Success Equation, put it to me: “Talking about luck really quickly spills into the philosophical stuff.”

You might think that the growing professionalization of youth sports offers an escape from this randomness—that by driving to this many practices and paying for that many coaches, you’re ensuring the cream will rise to the top. But the opposite is actually true, according to Mauboussin. In The Success Equation, he describes what he calls the “paradox of skill.” Now that every soccer hopeful is exhaustively trained from a young age, an army of relatively homogeneous talent is vying for the same prizes. “Everyone’s so good that luck becomes more important in determining outcomes,” Mauboussin said.

Baker and one of his colleagues at the University of Toronto, Kathryn Johnston, recently published a paper on the role of luck in athletic development in the journal Sports Medicine–Open. I felt a curious sense of relief when I read it. My daughters, who are 9 and 11, both play competitive soccer on teams requiring a level of commitment that I had naively thought went out of style with the fall of the Soviet Union. Seeing the evidence that future athletic success is not entirely predictable felt like a license for parents to loosen up a bit—to choose the holiday concert over the soccer practice without worrying about the long-term ramifications.

Linda Flanagan, the author of the 2022 book Take Back the Game and a frequent critic of today’s youth-sports culture, doesn’t share my optimism. She has no trouble believing that luck is involved with athletic success, but she doesn’t think that acknowledging this fact will change parental behavior. “Hell, they might double down on the investment in time and money, thinking that they need to give their child more chances to get lucky and impress the right coach,” she told me.

But that sort of luck—getting a job on your hundredth interview because the interviewer went to the same high school as you did, say—is arguably more about hustle than it is about serendipity. So is showing up to every soccer practice. Mauboussin’s definition of luck is narrower: It’s the factors you can’t control. No matter how much luck you try to “create” for yourself or your kids, some irreducible randomness might still make or break you.

To Baker, the takeaways from recognizing the role of luck are less about individual parents and more about how sports are organized. His advice to teams and governing bodies: “If there’s any way possible for you to avoid a selection, don’t select.” Keep as many athletes as you can in the system for as long as you can, and don’t allocate all of your resources to a chosen (and presumably lucky) few. When real-world constraints eventually and inevitably do require you to select—when you’re anointing these lucky few as your future stars, and casting out those who perhaps sang in one too many holiday concerts—try to leave the door open for future decisions and revisions. After all, Baker says, no matter how carefully you’ve weighed your predictions, “you’re probably wrong.”

Updated at 9:30 a.m. ET on January 9, 2026

Every few weeks I turn up in a hospital gown at a medical exam room in Massachusetts and describe a set of symptoms that I don’t really have. Students listen to my complaints of stomach pain, a bad cough, severe fatigue, rectal bleeding, shortness of breath, a bum knee, HIV infection, even stab wounds; on one occasion I simply shouted incoherently for several minutes, as if I’d had a stroke. Then the students do their best to help.

I have been given nearly 100 ultrasounds in just the past year, and referred to behavioral counseling dozens of times. I have been consoled for my woes, thanked for my forthrightness, congratulated for my efforts to improve my diet. I receive apologies when they need to lower my gown, press on my abdomen, or touch me with a cold stethoscope. Our encounters, which sometimes run as long as 40 minutes, end with the students giving me their diagnoses; detailing every test, treatment, and drug they want me to have; and then answering all of my questions without ever looking at their watches. Before leaving, they commend me for coming in and promise to check back in on me. It’s a shame I have to feign an illness to get that kind of care.

I learned about fake medical care four years ago when my son, an M.D.-Ph.D. student, mentioned that he was being graded on his skill at treating “standardized patients”: people who are paid to role-play illness. I’m fascinated by the practice of medicine, so I found this notion irresistible. I applied for a job in the standardized-patient program at the University of Massachusetts, and after two full days of training, plus a lot of reading and videos, I was ready to get started.

The practice of faking medical encounters for the sake of education dates back to 1963 at the University of Southern California, but UMass developed one of the first formalized programs in 1982 and has been a model since. Such programs are now, well, standard: According to a count published in a 2023 review of the practice, 187 of the 195 accredited medical schools in the U.S. describe the use of standardized patients on their websites.

Each specific case that an SP might inhabit—and there are hundreds—comes with a minimum of two hours of additional training in person or via Zoom, along with more reading. We’re buried in a blizzard of unique details to memorize about the patients we portray. By the time I’m ready for my fake exam, I can rattle off what vaccinations I’ve had, how long I’ve worked at my job, whether I’ve had my tonsils out, when my mother died, how much weight I’ve gained or lost in recent months, which vitamins I take, how much coffee I drink, how chatty I tend to be, and whether I’ve traveled recently (and might have parasites!).

There’s no script for my encounters, because you never know what the students might ask, say, and do. So I improvise most of my responses, in keeping with the facts I’ve been given. What do I usually eat for breakfast? What do they make at the factory where I work? What sexual acts do my partner and I engage in? My ad-libs are acceptable, according to the grades I get from staff members who occasionally observe the encounters via camera. But many of my colleagues are professional actors, and their performances are superb. We sometimes work in pairs, and more than once I’ve found myself deeply moved—even to the verge of tears—by my partner’s fake suffering.

Of course, we SPs are not the only ones faking it in these sessions; the students are playing along, too. We score them on as many as 50 different elements, including their tone of voice (was it friendly but professional?), their body language (did they lean in to show engagement?), and their facility at palpating our spleens (did they dig in firmly in the right spot?). Most important, we are meant to check that they are learning empathy. Numerous studies have shown that more empathetic care is correlated with better clinical outcomes, perhaps because it makes patients more inclined to share their full medical history, and more likely to stick with whatever treatment has been recommended. In one survey, orthopedic-surgery patients reported that a doctor’s empathy was more central to their satisfaction than the time it took to get an appointment, how long they were stuck in the waiting room, or even what sort of treatment they ended up receiving.

It may not even matter if the doctor’s kindness is sincere, as long as it sounds that way to patients. Dave Hatem, an internist and professor emeritus at UMass who has helped oversee the school’s SP curriculum, told me that even just the act of trying to say empathetic things is valuable for students. “If you get the right words to come out of your mouth, and you do it often enough, then you get to the point where you really mean it,” he said.

Most of the medical students who examine me do seem genuine in their concern. I suspect that if it were up to them, they’d practice medicine this way for the whole of their careers. But however much they might want to provide the superb treatment that I experience as a standardized patient, the health-care system won’t let them.


Elaine Thompson is a recent graduate of Emory University’s medical school, where she learned to provide the same sort of long, thoughtful, whole-person interactions that I get from students. For the past three years, she has been an ear, nose, and throat resident at Johns Hopkins Medicine, one of the best medical centers in the world. Her real-life patient encounters now last for an average of 10 minutes.

“You quickly learn as a resident that the job is to move things along,” Thompson told me. “I’m still curious about my patients as people and want to learn about their families, but if it’s not relevant to their current problem, then asking about it opens a door that will add time to the visit.” So much for chatting to put them at ease, soliciting a full narrative of their symptoms, hearing all their concerns, asking about their job, uncovering anxieties, addressing financial and social challenges, and encouraging their questions. (In an emailed statement, a spokesperson for Johns Hopkins Medicine said that it is committed to delivering “patient-centered training” and “whole person care.”)

[Read: Learning empathy from the dead]

The same is true for Emily Chin, who received her medical degree from UMass in 2023 and is now an ob-gyn resident at UC San Francisco. She told me that she got the message about keeping visits short early on from senior residents, who made a point of tracking the length of her encounters. “I’d just have time to check the cervix, do a quick ultrasound, and then make a decision about admitting or discharging the patient,” she said. Another source of pressure is the knowledge that spending any extra time with a patient means that dozens of other patients will be waiting longer to be seen: “You see the patients piling up in the waiting room, and you see the schedule screen going red.” (UCSF’s vice dean for education, Karen Hauer, did not object to this characterization, but noted that the school advises its residents on how to establish patient rapport when time is short.)

Residents also learn that time is money. Hospitals and practices view a doctor’s interactions with a patient in terms of “relative value units.” Reimbursement for seeing a patient whose high cholesterol leads to a prescription for a statin might bring $60 into the hospital or clinic. Reimbursement for extra time spent discussing the patient’s fears of side effects and concerns about affording the drug’s co-pay or making dietary changes brings in $0. “That doesn’t exactly encourage providing the most empathetic, patient-centered care,” a UMass Memorial Health resident named Hans Erickson told me.

The residents I spoke with worried that these time pressures were only going to get worse when they finished residency and became full-fledged doctors. In light of those constraints, does it still make sense to emphasize highly empathetic care for students? I asked that question of Melissa Fischer, the physician who directs the SP program and other simulation training at UMass. Fischer argues that the lessons we impart to students can survive the crush of residency, even if they have to be applied in abbreviated ways. “That interest in building connections to patients stays,” she said. “They just have to find faster ways to build them.”

[Read: How to teach doctors empathy]

Lisa Howley, an educational psychologist who serves as the senior director for transforming medical education at the Association of American Medical Colleges, told me that training up a generation of more empathetic medical students will make the health-care system better. “We think of young medical learners as agents of potential change,” she told me. “They’ll see the gaps and weaknesses, and they’ll look for ways to make improvements.” Besides, what would be the benefit of forcing medical students to learn about patient encounters in the hectic, abbreviated format they’ll confront as residents? “It doesn’t make sense to apply those pressures early in their education,” she said. After all, we don’t teach student pilots how to fly a plane while trying to make up for time lost to flight delays or dealing with unruly passengers.

All of the residents I spoke with said they look for ways to connect with patients despite the harsh realities of the system. “The desire to get to know the patient as a whole person doesn’t go away; it’s just a matter of finding ways to bring it to the surface as a stressed resident,” Erickson said. Chin put it this way: “It’s not that it’s challenging to keep up empathy, it’s that it’s hard to be empathetic all the time.”

At the end of my fake encounters, I try to be encouraging. I tell the students how I, as a patient, felt treated by them, and then I challenge them to give ideas for how they might improve. Sometimes, when one of them has done a bang-up job of making me feel heard, I tell them that I hope they’ll be able to sustain that level of engagement when they’re a practicing doctor—and I always get the sense that the students hope so too.


This article originally described “relative value units” as “revenue value units.”

Eat More Deer

Updated at 5:14 p.m. ET on January 9, 2026

The deer were out there. The crisp tracks in the snow made that clear. Three hours into our hunt through the frigid New Hampshire woods, Ryan Calsbeek, a rangy 51-year-old biology professor at Dartmouth, guessed that 200 animals were hiding in the trees around us. Calsbeek and I were 20 feet up a pignut hickory, crouching on a creaky platform. His friend Max Overstrom-Coleman, a stocky 46-year-old bar owner from Vermont, had climbed a distant tree and strung himself up by a harness, readying his compound bow and swaying in the wind. Shivering in camo jackets and neon-orange beanies, we peered into the darkening forest, daring it to move.

I had joined Calsbeek’s December hunt to try to get my hands on high-quality red meat. Calsbeek had yet to kill a deer that season, but in previous years, he told me, a single animal kept his family of four well fed through the winter. His young daughters especially liked to eat deer heart; apparently, it’s marvelously rich and tender. My mouth watered at the thought. The last time I’d tasted venison was more than a decade ago at a fancy restaurant in Toronto, where it was served as carpaccio, drizzled in oil and so fresh that it may as well have pranced out of the woods and onto my plate.

A bounty of such succulent, free-range meat is currently running through America’s backyards. The continental United States is home to some 30 million white-tailed deer, and in many areas, their numbers are growing too rapidly for comfort. Each year, a white-tailed doe can birth as many as three fawns, which can themselves reproduce as soon as six months later.

Wherever deer are overabundant, they are at best a nuisance and at worst a plague. They trample gardens, destroy farmland, carry ticks that spread Lyme disease, and disrupt forest ecosystems, allowing invasive species to spread. They are involved in tens of thousands of car crashes each year in New York and New Jersey, where state wildlife departments have encouraged hunters to harvest more deer. In especially populated regions, wildlife agencies hire sharpshooters to cull the animals. Last year, New Hampshire legislators expanded the deer-hunting season in an attempt to keep the population under control. By the looks of the forest floor, which was pitted with hoof marks and scattered with marble-shaped droppings, that effort was falling short.

Over the past decade, some states have proposed a simple, if controversial, strategy for bringing deer under control: Couldn’t people like me—who don’t hunt but aren’t opposed to it—eat more venison?

Venison may not be a staple of American cuisine, but it has a place in many people’s diets. Health influencers laud it as a lean, low-calorie, nutrient-dense source of protein. Venison jerky sticks are sold at big-box stores and advertised as snacks for people on Whole30 and keto diets. Higher-end grocery stores, such as Wegmans and Whole Foods, sell ground venison for upwards of $12 a pound, roughly twice the cost of ground beef.

Part of the reason venison is so expensive is that most of it is not homegrown: It’s imported from New Zealand, which has sent more than 5 million pounds of the stuff to the U.S. every year since 2020. Beef, the dominant red meat in the States, has historically been more affordable. But beef prices jumped nearly 15 percent in 2025, and the conventional kind sold in most supermarkets comes from cattle raised in abysmal conditions. If high-quality venison were cheaper and more widely available, it could be an appetizing alternative.

In recent years, a few deer-swamped states, including New Jersey and Maryland, have tried to legalize the sale of hunted venison, which would deliver two key benefits: more deer out of the ecosystem and more venison on people’s plates. Despite the sport’s association with trophies, many deer hunters are motivated by the prospect of obtaining meat, and they can only consume so much. “It’s for your own table,” Overstrom-Coleman said as he fixed climbing sticks onto a tree to form a makeshift ladder. He had already stocked his freezer full of venison this season (“That son of a bitch,” Calsbeek whispered, once we’d left our companion in his tree) and planned, as many hunters do, to donate any excess meat to a food bank.

Hunting is waning in popularity, in part because younger people are less keen on participating than older generations. Efforts to bring in more hunters, such as programs to train women and youth in outdoor skills, are under way in many states. Women are the fastest-growing demographic, and they participate largely to acquire food, Moira Tidball, the executive director at the Cornell Cooperative Extension who leads hunting classes for women, told me. Still, interest is not growing fast enough for the subsistence-and-donation system to keep deer numbers in check.  

[Read: America needs hunting more than it knows]

It’s hard to imagine a better incentive for deer hunting than allowing hunters to sell their venison to stores and restaurants. But the idea is antithetical to a core tenet of American conservation. For more than 100 years, the country’s wild game has flourished under the protection of hunters and their allies, steadfast in their belief that the nation’s animals are not for sale.

The last time this many white-tailed deer roamed America’s woodlands, the country didn’t yet exist. To the English colonists who arrived in the New World, the deer bounding merrily through the forests may as well have been leaping bags of cash. Back home, deer belonged to the Crown, and as such, could be hunted only by the privileged few, Keith Tidball, a hunter and an environmental anthropologist at Cornell (and Moira’s spouse), told me. In the colonies, they were free for the taking.

Colonists founded a robust trans-Atlantic trade for deer hide, a particularly popular leather for making work boots and breeches, which drastically reduced the deer population. In Walden, Henry David Thoreau notes a man who preserved the horns “of the last deer that was killed in this vicinity.” The animals were already close to disappearing from many areas at the beginning of what ecologists have called the “exploitation era” of white-tailed deer, starting in the mid-19th century. Fifty years later, America was home to roughly half a million deer, down 99 percent from precolonial days.

The commerce-driven decimation of the nation’s wildlife—not just deer but birds, elk, bears, and many other animals—unsettled many Americans, especially hunters. In 1900, Representative John Lacey of Iowa, a hunter and close friend of Theodore Roosevelt’s, introduced a bill to ban the trafficking of America’s wildlife. (As Roosevelt, who notoriously hunted to collect trophies, wrote in 1913, “If there is to be any shooting there must be something to shoot.”) The Lacey Act remains one of the most powerful federal conservation laws in existence today.

[From the May 1906 issue: Camping with President Theodore Roosevelt]

The law is partly contingent on state policies, which make exceptions for certain species. Hunters in most states, for example, can legally harvest and sell the pelts of fur-bearing species such as otters, raccoons, and coyotes. But attempts to carve out similar exceptions for hunted venison, including the bills in Maryland and New Jersey, have failed. In 2022, the Mississippi attorney general published a statement that opened up the possibility of legalizing the sale of hunted deer, provoking fierce opposition from hunters and conservationists; today, the option remains open but has not led to any policy changes. Last year, an Indiana state representative introduced a bill that would allow the sale of hunted venison, but so far it has gone nowhere.  

The practical reason such proposals keep failing is that allowing the sale of hunted meat would require huge investments in infrastructure. Systems to process meat according to state and federal laws would have to be developed, as would rapid testing for chronic wasting disease, an illness akin to mad cow that could, theoretically, spread to humans who eat infected meat, though no cases have ever been reported. Such systems could, of course, be implemented. Hunted venison is sold in some mainstream grocery stores in the United Kingdom, such as Waitrose and Aldi. (Notably, chronic wasting disease is not a concern there.)

[Read: Deer are beta-testing a nightmare disease]

Although the sheer abundance of deer makes them easy to imagine as steaks on legs, several experts cautioned that some people’s affection for the animals runs deep. Deer are cute; they’re docile; they’re Bambi. David Drake, a forestry and wildlife professor at the University of Wisconsin at Madison, likens them to America’s “sacred cow.” As Drake and a colleague have outlined in a paper proposing a model for commercialized venison hunting in the U.S., any modern system would be fundamentally different from the colonial-era approach because it would be regulated, mostly by state wildlife agencies. But powerful coalitions of hunters and conservationists remain both faithful to the notion that wild game shouldn’t be sold and fearful that history will repeat itself. As the Congressional Sportsmen’s Foundation, a national hunting association, puts it, “Any effort to recreate markets for game species represents a significant threat to the future of our nation’s sportsmen-led conservation efforts.” Some of the fiercest pushback to the New Jersey law, Drake told me, came from the state wildlife agency.

The only U.S. state with a deer-related exception to the Lacey Act is Vermont. During the open deer-hunting season (which spans roughly from fall to winter in the Northeast) and for 20 days afterward, Vermonters can legally sell any meat that they harvest. This policy was introduced in 1961, and yet, “I am not aware of anyone who actually takes advantage of it,” Nick Fortin, a wildlife biologist at Vermont’s Fish and Wildlife Department, told me. He added that the department, which manages the exasperated homeowners and destabilized forests that deer leave in their path, has been discussing how to raise awareness about the law.

Even after I explained the 1961 law to several Vermont hunters, they were hesitant to sell me any meat. Hunted meat is meant to be shared freely, or at most bartered for other items or goodwill, Greg Boglioli, a Vermont hunter and store owner, told me. I met Boglioli at the rural home of his friend Fred Waite, a lifelong hunter whose front room alone was decorated with 20 deer heads. I had hoped to buy venison from Waite, but he insisted on sharing it for free. After all, he had plenty. His pantry was crammed with mason jars of stewed venison in liver-colored brine. On a table in the living room was the scarlet torso of a deer that his son had accidentally hit with his truck the other day, half-thawed and waiting to be cooked.

During our hunt, I found Overstrom-Coleman to be more open to the idea of selling the venison he hunted. “I guess that would be a pretty excellent way to share it,” he said. Earlier in the season, he’d killed a deer in Vermont, and he was willing to sell me some of the meat the next day. At least, I thought as I stared into the motionless woods, I’d be going home with something.

[From the July/August 2005 issue: Masters of the hunt]

By the time the sun went down, the only deer I’d seen was a teetering doe in a video that Overstrom-Coleman had taken from his tree and sent to Calsbeek. “Too small to kill,” he texted; he’d meet us in the parking lot. The air was glacial as Calsbeek and I trudged empty-handed toward the trailhead, hoofprints glinting mockingly in the light of our headlamps. From the trunk of the car, we took a consolation swig of Wild Turkey from a frosted bottle, and Overstrom-Coleman reminded me to visit the next day.

I found his chest freezer stuffed with paper-wrapped packages stamped with Deer 2025. He handed me three and refused to let me pay. Back home a few days later, I used one to make meatballs. Their sheer depth of flavor—earthy and robust, with a hint of nuttiness—made me wonder why I bothered to eat farmed meat at all.


This article originally misidentified Max Overstrom-Coleman’s hunting weapon.

Every night before bedtime, my daughter tilts back her head so that a pair of metal plates inside her mouth can be cranked apart another quarter of a millimeter. We turn a jackscrew with a wire tip; it spreads the bones within her upper jaw. At times she groans or even cries: She says that she can feel the pressure all the way up into her nose.

This is normal. My daughter is 9 years old. She has a palate expander.

So does her best friend, and, by her count, so does nearly one in four of the kids in her fourth-grade class. On Reddit’s r/braces forum, a practitioner based in Frisco, Texas, said he was surprised by “how many parents ask me, ‘Hey, does my child need an expander? Everyone else seems to have one.’” His colleagues seemed to notice something similar. “Everybody’s being told they have a narrow jaw, and everyone’s being given an expander,” Neal Kravitz, the editor in chief of the Journal of Clinical Orthodontics, told me.

A generation ago, getting braces was a rite of passage into seventh grade. Today, the reshaping of a child’s smile may commence a few years earlier, at 7, 8, or 9 years old. At that point, the two sides of the upper jawbone haven’t yet joined together, a fact that is propitious for a different orthodontic process: instead of straightening, expansion. During this phase of life, when kids still have some baby teeth, a tiny dungeon rack may be wedged between a child’s upper teeth, then used to spread her upper jaw and—proponents say—introduce essential room for sprouting teeth.

The expander is an old device; debates about its use are hardly any younger. What seems to have been the first expander was described in 1860, in the journal The Dental Cosmos, by a San Francisco dentist named Emerson Angell. He wrote of “an apparatus, simple and efficient,” that he’d placed into the mouth of a young patient. Then he’d told her to expand it, day by day, by advancing a central screw—just as my daughter does today. But the journal’s editors were skeptical of Angell’s work. We “must beg leave to differ with the writer in the conclusion arrived at,” they announced in a prefatory note, foreshadowing a long disagreement within the field.

This concerned the merits of expansion versus those of extraction—whether a child’s jaw should be broadened to accommodate her teeth, or whether certain teeth should be pulled to accommodate her jaw. Around the turn of the 20th century, the influential orthodontist Edward Angle favored jaw broadening; he believed that all children should have their teeth intact, nestled in a capacious jaw, as exemplified by a human skull that had been plundered from an Indian burial mound not far from where he practiced, which he called “Old Glory.” A few decades later, though, orthodontic research found that expanded jaws might still “relapse” into a narrow shape. By the 1970s, pulling teeth had become the rule, Daniel Rinchuse, a Seton Hill University professor of orthodontics, told me.

This consensus was itself short-lived, he said—not because the field had come across some new and better mouth-expanding tech but because of fears about the supposed ill effects of doing too many extractions. Some dentists claimed that what was then the standard approach in orthodontics could even lead to painful disorders of the temporomandibular joint, or TMJ. In the face of these concerns, expanders made a comeback.

Eventually, some orthodontists started claiming that expanders had another major benefit—that prying open a child’s palate could improve her breathing and prevent sleep apnea. Some now recommend this airway-focused intervention not just for kids my daughter’s age but for toddlers too.

The basis for the trend was never really scientific, though. “Do expanders prevent obstructive sleep apnea? In capital letters: NO WAY,” Kravitz said. “There are endless research papers on this stuff.” The problem isn’t that expanders have no value, he continued; it’s that they’re clearly overused. According to Rinchuse, who co-edited the book Evidence-Based Clinical Orthodontics, the idea that extracting teeth will lead to joint disorders has never been proved. Indeed, no “high-quality evidence” supports expansion of the upper jaw for any reason, he said, except in cases where a child has been diagnosed with posterior “crossbite.” He said that, overall, orthodontic practice is less constrained by evidence than other fields of health care are, because the ill effects of bad decisions will be slight. As he put it, “In orthodontics, no one dies.”

Steven Siegel, the current president of the American Association of Orthodontists, acknowledged that some practitioners may be inclined to put a rack on every child’s palate: “There are some abuses,” he told me. But he also argued that the recent increase in expander use hasn’t really been dramatic, and that for the most part, the devices are used to positive effect. For people with a narrow jaw and crowded teeth, he said, expanders can prevent the need for extractions down the road; some kids, at least, could see improvements in their breathing. When I noted that I’d heard the opposite on both counts from Kravitz and Rinchuse, he responded that they simply disagreed. “I have great respect for both of them,” he said. “I would say that there is a controversy.”

For the record, my daughter is delighted by the treatment she’s received: In a recent family interview, conducted over breakfast, she described her course of orthodontics as “cool and fun.” Her orthodontist (who happens to be a former high-school classmate) has been thoughtful and communicative, and I’ve recommended her to several other families. Still, despite the fact that no one dies from orthodontics, one might also choose to avoid a treatment that costs several thousand dollars, has disputed benefits, and may cause modest pain—not to mention any moral injury that may accrue from tilting back your daughter’s head and cranking open metal plates to wrench her face apart.

Whatever caused expander mania, its existence can be jarring for a parent who grew up in the prior era of orthodontics. Indeed, the period during which this trend developed—from, say, the late 1980s until the early 2020s—happens to coincide with the stretch between my own entry into middle school and my daughter’s. For my fellow members of this cohort, expansion of the fourth-grade palate appears to be a strange and sudden social norm. During one visit to the orthodontist, my daughter and I found a handful of children about her age seated in a line of dental chairs, with technicians leaning over each of them to turn the screw of their expander. It was like we’d all gathered there for some initiation rite for children of the tribe that dwells on Cobble Hill in Brooklyn—a ritual of widening.

Not long after that, I called up Luke Glowacki, an anthropologist at Boston University who co-directs a research project in Ethiopia’s Omo Valley, where body modifications—and dental modifications in particular—are not uncommon. He told me about social groups there and elsewhere in which a child’s teeth might be filed down to points or a person’s lower lip stretched out with a plate.

Is orthodontics any different? It presents itself as curative and scientific, but many orthodontists’ websites are replete with beauty claims as well: An expander may “protect your child’s facial appearance” or provide “enhancement to the facial profile.” Siegel said that a broadened palate gives “a more aesthetic width of the smile.” Kravitz said that it could help shrink the unattractive gaps inside a person’s cheeks—“dark buccal corridors,” in the language of the field.

In East Africa, dental and other body modifications carry similar ambiguities of purpose. Filing down a person’s teeth, for instance, or removing them altogether “may also be done for ostensible health reasons,” Glowacki said. Some body-modification rituals could be understood to ward off harmful spirits, for example. In other words, they’re prophylactic. Glowacki also told me about a Nyangatom woman he knows who has scars carved into both her shoulder and forehead. The former are purely decorative, but she’d received the latter on account of being sick.

Glowacki is a parent, too, and I asked him whether his training as an anthropologist affected how he thought about expanders or other anatomical procedures, such as ear piercing, that are carried out on children in the United States at industrial scale. “You’re not gonna find any society in the world that doesn’t modify their body in some way in accordance with their ideas of beauty or of health,” he said. “We’re doing what societies all over the world do.” If I’ve now paid an orthodontist to reshape my daughter’s mouth, maybe that’s just human nature.

On a recent Tuesday morning, I was blessed with a miracle in a mini-mart. I had set out to find the protein bar I kept hearing about, only to find a row of empty boxes. But then I spotted the shimmer. Pushed to the back of one carton, gleaming in its gold wrapper, was a single Salted Peanut Butter David Protein Bar. It was mine.

David bars are putty-like rectangles of pure nutritional efficiency: 28 grams of protein stuffed into 150 calories, or roughly the equivalent of eight egg whites cooked without oil. They are booming right now. After all, in this era of protein mania, one must always be optimizing. A Quest bar might get you 20 grams of protein for just under 200 calories, but David—named after Michelangelo’s masterpiece—does more for less. “Humans aren’t perfect,” promises one David tagline, “but David is.” Why, given the possibility of perfection, would you accept eight grams less?

If a food with more protein is better, then it follows that a food with less is worse. After eating my David bar, I couldn’t help but feel a little bit bad about my dinner of brown rice and spicy chickpeas. A cup of Eden Foods organic chickpeas (240 calories) gets you a measly 12 grams. Now that I was living in the world of David, I was newly ambivalent about eating anything that wasn’t chunks of unadulterated protein. I am fueling, I thought, shoving cubes of baked tofu into my mouth. Did you know that green peas have an unusual amount of protein for a vegetable? With unsettling frequency, I began to add frozen peas to my dinners. (They’re not great on cacio e pepe, it turns out.)

I have become quietly obsessed with this one single macronutrient. How could I not be? Everything is protein now: There are protein chips and protein ice creams and cinnamon protein Cheerios. Lemonade is protein, and so is water. Last month, Chipotle introduced a “high protein cup” consisting of four ounces of cubed chicken. Melanie Masarin, the founder and CEO of Ghia, a nonalcoholic-drink brand, recently told me that an investor asked her whether Ghia has plans for a high-protein aperitif. No, but the investor’s logic was obvious: Healthy people, the kind who tend to watch their drinking, only want one thing. This week, the federal government released its latest set of dietary guidelines—including a newly inverted food pyramid. At the top is protein.


In some ways, protein is just the latest all-consuming nutritional fixation. For decades, the goal was to avoid fat, which meant that pretzels were good and peanut butter was bad and fat-free Snackwell’s devil’s-food cookie cakes were a cultural phenomenon. Then Americans rediscovered fat and villainized carbs. But protein is different. Whatever your dreams are, protein seems to be the answer. It supports muscle gain, for those trying to bulk up, but it’s also satiating, which means people trying to lose weight are also advised to eat more protein. It has the power to make you bigger and more jacked, but also smaller and more delicate. People on GLP-1s are supposed to be especially mindful of their protein intake, to prevent muscle loss on extremely low-calorie diets, but so are weight lifters.

It is a nutritional philosophy that encourages not restriction but abundance: as much protein as possible, all the time. You can have your cake and eat it too (as long as it is made with “protein flour”). In a world where the very act of eating feels fraught, layered with a lifetime of rules and fads and judgments about what food is and is not “good,” protein offers absolution: You don’t have to feel bad about this. It has so many grams! What a beautifully straightforward recommendation: Eat more of this one thing that happens to be everywhere, and that frequently tastes good.

The low rattle of protein mania—the protein matchas and protein Pop-Tarts and protein seasonings to sprinkle on your protein chicken cubes—can be as maddening as it is inescapable. Everybody knows that you are supposed to eat a varied diet with many different types of foods that provide many different nutrients. But only protein is endowed with a special kind of redemptive power. Nobody is pretending that tortilla chips are a cornerstone of a balanced diet, but if they’re protein tortilla chips (7 grams), well, then maybe they’re at least fine. This is fantastic news if your goal is to enjoy tortilla chips, but it does have a tendency to recast all food that has not been protein-ified—either by nature or by the addition of whey-protein isolate—as a minor failure. It is depressing to look at a pile of roasted vegetables, arranged elegantly over couscous, and think: I will try harder tomorrow. I know, because I do it.

Protein is supposed to allow people to realize their untapped potential—to make us stronger and sharper. I suspect, though, that I would be stronger and sharper if I could stop ambiently thinking about my protein intake. That the world is now covered in a protein-infused haze provides constant reminders that I am falling short. Lots of protein evangelists will tell you that this is how cavemen ate, and therefore it is good. I think the best part of being a caveman would be not worrying about protein.

As nutritional trends go, there are worse obsessions than protein. Even if there is still significant debate about how much protein one needs, you are unlikely to send yourself into kidney failure because you protein-maxxed too hard. But the fanatical focus on protein as the true answer, the universal key to transforming the body you have into the one you want—7 grams, 28 grams, 11 grams, a chicken smoothie—feels eerily familiar. We counted calories, grams of fat, carbohydrates, trying to distill the messy science of nutrition into one single quantitative metric. Protein, for all its many virtues, is just another thing to count.

The flu situation in the United States right now is, in a word, bad. Infections have skyrocketed in recent weeks, filling hospitals nearly to capacity; viral levels are “high” or “very high” in most of the country. In late December, New York reported the most flu cases the state had ever recorded in a single week. My own 18-month-old brought home influenza six days before Christmas: He spiked a fever above 103 degrees for days, refusing foods and most fluids; I spent the holiday syringing electrolyte water into his mouth, while battling my own fever and chills. This year’s serving of flu already seems set to be more severe than average, Seema Lakdawala, a flu virologist at Emory University, told me. This season could be a reprise of last winter’s, the most severe on record since the start of the coronavirus pandemic—or, perhaps, worse.

At the same time, what the U.S. is experiencing right now “fits within the general spectrum of what we would expect,” Taison Bell, an infectious-disease and critical-care physician at the University of Virginia Health System, told me. This is simply how the flu behaves: The virus is responsible for one of the roughest respiratory illnesses that Americans regularly suffer, routinely causing hundreds of thousands of people to be hospitalized annually in the U.S., tens of thousands of whom die. (So far this season, the flu has killed more than 5,000 people, including at least nine children.) Influenza is capable of even worse—sparking global pandemics, for instance, including some of the deadliest in history. These current tolls, however, are well within the bounds of just how awful the “seasonal” flu can be. “It’s another flu year, and it sucks,” Bell said.

Although flu is a ubiquitous winter illness, it is also one of the least understood. Scientists have been puzzling over the virus for decades, but many aspects of its rapid evolution and transmission patterns, as well as the ways in which our bodies defend against it, remain frustratingly mysterious. Flu seasons, as a rule, differ drastically from one another, and “we don’t have a great understanding of why one ends up being more severe than another,” Samuel Scarpino, an infectious-disease-modeling researcher at Northeastern University, told me. Experts’ flu-dar has also been especially out of whack in recent years, since the arrival of COVID-19 disrupted typical flu-transmission patterns. (An entire lineage of flu, for instance, may have been driven to extinction by pandemic-mitigation measures.) The virus is still finding its new norm.

Even so, a few things about this season’s ongoing torment are clear. Much of the blame rests on the season’s dominant flu variant—subclade K, which belongs to the H3N2 group of influenza. As flus go, H3N2s tend to be more likely to hospitalize and kill people; most of the worst flu seasons of the past decade in the U.S. have been driven by H3N2 surges. Subclade K doesn’t seem to be an unusually virulent variant, which is to say it’s probably no more likely to cause severe disease than a typical version of H3N2. But it does seem to be better at dodging our immune defenses, so the net effect is much the same: More people get sicker than they otherwise would. That’s not a trivial effect for a disease that, even in mild cases, can cause days of high fevers and chills, followed by potentially weeks of that delightful run-over-by-a-truck feeling.

At UVA Health, Bell has seen a major uptick in people testing positive for the virus in recent weeks. Like others, his hospital is close to full, straining its capacity to treat other illnesses, he said. In Michigan, too, where Molly O’Shea cares for children at multiple pediatric practices, “we are seeing a ton of influenza, just a ton,” she told me. “Our schedule is overflowing.” Several of her school-age patients have wound up in the hospital, despite being previously healthy; a few have ended up with serious complications such as pneumonia and brain inflammation. The worst cases, she said, have been among the children who didn’t get their annual flu shot.

Flu vaccines are not among the most impressive immunizations in our roster. Although they’re generally pretty effective at protecting against severe disease, hospitalization, and death, they don’t reliably stave off infection or transmission. And they’re routinely bamboozled by the virus itself, which shape-shifts so frequently throughout the year, as it ping-pongs from hemisphere to hemisphere, that by the time flu vaccines roll out to the public, they’re often at least a little out of sync with what’s currently circulating.

That’s another aggravating factor this year. Researchers first detected subclade K in June, months after experts selected the strains that would go into the fall flu-vaccine formulation. Recent data suggest that vaccination may still elicit some immune defenses that recognize subclade K, and preliminary estimates from the United Kingdom suggest that this year’s formulations may be especially effective at preventing severe disease in children, who, along with the elderly, are highly vulnerable to the flu. (For all the misery my family endured, none of us ended up in the hospital—which suggests that our vaccinations did their job.)

Children also tend to be the biggest drivers of flu’s spread. “They are the source, many times, of explosions of transmission,” Lakdawala told me. In the U.K., for instance, which experienced an unusually early start to the flu season, school-age kids appear to have driven much of the epidemic, Scarpino pointed out. In the U.S., too, case rates among children have been particularly high. Although the vaccine primarily limits severe disease, it can also affect how quickly the virus travels through a community. And yet only about half of American kids get the vaccine each year, despite long-standing universal recommendations for annual immunization. “It’s a vaccine that parents have never really treated as a vaccine that every child should get,” O’Shea said.

Those choices might be influenced by the ways many people underestimate the flu—a term often used to describe any cold-weather ailment that comes with a runny nose, cough, or even gastrointestinal upset. In reality, flu has long ranked as one of the U.S.’s top 10 or top 15 causes of death—a scourge that, through its impact on the health-care system, the workforce, and the economy at large, costs the country billions of dollars each year. Against such a substantial threat, we should be using “everything in our toolbox to protect ourselves,” Lakdawala said.

Yet the Trump administration is actively impeding the process of flu vaccination. Health and Human Services Secretary Robert F. Kennedy Jr. has also said that it may be “a better thing” if fewer people are immunized against the flu—and insisted, incorrectly, that “there is no scientific evidence that the flu vaccine prevents serious illness, hospitalizations, or death in children.” The federal government recommended annual flu vaccines for all children until earlier this month, when HHS pushed through changes that demoted multiple immunizations from its recommended schedule. HHS now says that families should consult with their health-care provider before taking the shot. Such a recommendation suggests that the vaccines’ overall benefits are ambiguous enough to require discussion—and puts an additional burden on both patients and health-care providers, who can administer what was once a routine vaccine only after a conversation that must then be documented.

The nation’s leaders have also compromised one of the country’s best chances to develop more effective, better-matched flu vaccines in the future, by defunding research into mRNA vaccines. The current flu-vaccine manufacturing process takes so long that the included strains for the Northern Hemisphere must be selected by February or so—which provides plenty of time for the virus to evolve before the autumn rollout begins, as happened this year. “We pretty regularly have a bad match for the flu,” Scarpino said. mRNA vaccines promised the possibility of faster development, allowing researchers to stay more closely on the flu’s heels and switch out viral ingredients in as little as two or three months. That degree of flexibility also would have sped the response to the next flu pandemic.

In an email, Andrew Nixon, HHS’s deputy assistant secretary for media relations, disputed the characterization that the department’s new policies impede flu vaccination, writing, “Providers continue to offer flu vaccines, and insurance coverage remains unchanged. The recommendation supports shared clinical decision-making between patients and clinicians and does not prevent timely vaccination. People can continue to receive flu vaccines if they choose to do so.”

For the current season, much of the U.S.’s fate may already be sealed: Fewer than half of Americans have gotten a flu vaccine this season, while the virus continues to spread. “If you find yourself in a place where there are people sick with flu, you’re probably gonna get sick,” Scarpino said. That logic likely holds true for his own family, in Massachusetts, where flu activity has been high for weeks. They’ve so far made it through unscathed, but Scarpino said, “I feel like it’s a matter of time.”


Antiviral drugs for influenza, the best known of which is Tamiflu, are—let’s be honest—not exactly miracle cures. They marginally shorten the course of illness, especially if taken within the first 48 hours. But amid possibly the worst flu season in 25 years, driven by a variant imperfectly matched to the vaccine, these underused drugs can make a bout of flu a little less miserable. So consider an antiviral. And specifically, consider Xofluza, a lesser-known drug that is in fact better than Tamiflu.

The culprit behind this awful flu season is subclade K, a variant of H3N2 discovered too late to be incorporated into this year’s flu vaccine. Early data suggest the shot likely does confer at least some protection against this variant, but the jury is still out on how much that protection has eroded compared with a typical year. What is undeniable, though, is a recent explosion of influenza cases. In New York, which was hit early and hard, the number of people hospitalized for flu broke records. Across the rest of the country, cases have been going up in a “straight line,” nearly everywhere all at once, which is highly unusual, Arnold Monto, an epidemiologist at the University of Michigan who has been studying influenza for some 60 years, told me last week. Cases seem to be finally leveling off now, but much misery still lies ahead.

For flu, antivirals are a second but oft-overlooked line of defense after vaccines. “We are dramatically and drastically underutilizing influenza antivirals,” Janet Englund, a pediatric-infectious-disease specialist at the University of Washington, told me. Even the older, more commonly prescribed drug Tamiflu reaches only a tiny percentage of flu patients every year. Actual numbers are hard to come by, but compare the estimated 1.2 million prescriptions for Tamiflu and its generic form in 2023 with the roughly 40 million people who likely got the flu in the winter of 2023–24. Xofluza is even less popular, and exact prescription numbers are even harder to find; they may be just 1 to 10 percent of Tamiflu’s.

The two antivirals are equally effective at allaying symptoms, both shortening the duration of flu by about a day. But Xofluza, which was approved in 2018, offers some tangible benefits over Tamiflu.

First, Xofluza is simply more convenient: a single dose, compared with Tamiflu’s 10, taken twice a day over five days. It also causes fewer of the gastrointestinal side effects, such as vomiting and nausea, that patients on Tamiflu sometimes experience. All in all, a course of Xofluza might be easier for you—or your kid already queasy from the flu itself—to get down and keep down. (That is, if they are old enough to take it: Xofluza is approved for kids ages 5 and up in the United States, but ages 1 and up in Europe; only Tamiflu is recommended for kids down to newborn age as well as for women who are pregnant or breastfeeding.)

Second, Xofluza makes you less contagious to the rest of your family. It drives down the amount of virus spewed by sick patients more quickly than Tamiflu, possibly because of differences in how the two drugs work. Whereas Xofluza stops the virus from replicating, Tamiflu can only prevent already replicated viruses from exiting infected cells to infect others. In a study that Monto led last year, Xofluza cut household transmission by almost one-third compared with a placebo. Tamiflu might reduce transmission too, according to other studies, but probably to a lesser degree than Xofluza.

Third, Xofluza is better at heading off serious post-flu complications such as pneumonia or myocarditis. Patients on Xofluza needed fewer ER visits and hospitalizations than did those on Tamiflu, according to studies of large real-world data sets from insurance claims and medical records. This means that Xofluza should be the antiviral of choice for high-risk patients, including those over 65, who are most prone to these complications, Frederick Hayden, a flu expert at the University of Virginia who led one of the original Xofluza trials, told me. (Hayden has consulted on an unpaid basis, aside from travel expenses, for the companies behind Xofluza.)

The fourth advantage is less relevant to this season because the dominant subclade belongs to the influenza A family. But Xofluza is noticeably more effective against influenza B than Tamiflu, which tends to falter against this family of viruses.

Despite these benefits, awareness of Xofluza remains low. “It hasn’t been used as much as it should be,” Monto said, for reasons of cost and accessibility. Tamiflu, first approved in 1999, is available as a generic for less than $30 even without insurance. Xofluza is still patented and runs $150 to $200 a person. Because it’s less popular, pharmacies are less likely to stock it, making doctors less eager to prescribe it, and so on. In October, though, the company that markets Xofluza in the U.S. launched a direct-to-consumer program that sells the drug for the comparative bargain of $50 without insurance, along with same-day delivery in some areas. Even the flu-drug experts I spoke with were not all aware of this new, more accessible route. The CDC still lists Tamiflu first and foremost in its recommendations, too.

Wider use of flu antivirals would also require better testing. Both Xofluza and Tamiflu are most effective within the first 48 hours of symptoms, and the earlier the better. Traditionally, a sick person would have to get to a doctor, get a flu test, get a prescription, and finally get to a pharmacy—which can easily put them past the first 48 hours. But COVID popularized at-home rapid testing, and combination COVID-flu tests have recently landed on pharmacy shelves. With telehealth and home delivery, you can get an antiviral without ever leaving the house.

Still, the at-home tests are expensive, Englund pointed out, about $20 a pop here, compared with just a couple of bucks in Europe. The expense can add up for a whole family. In Japan, where antivirals are widely used, nearly everyone with a flu-like illness gets a routine rapid test and, if necessary, antivirals, both largely covered by the public health-care system. (Xofluza was developed by the Japanese company Shionogi, which also makes Xocova, a promising COVID antiviral my colleague Rachel Gutman-Wei has written about that is not available in the U.S.)

If the U.S. were better at using antivirals, especially in high-risk patients, the number of Americans dying of flu—roughly 38,000 last year—would likely drop, Cameron Wolfe, an infectious-diseases expert at Duke, told me. Doctors recommend that people at high risk for flu take antivirals prophylactically, upon exposure to anyone with flu, before symptoms appear. Both Xofluza and Tamiflu as prophylaxis can cut the chances of getting sick by upwards of 80 percent.

For healthy people who fall ill, antivirals can ease the burden of flu, which is nasty even when it is not deadly. “I don’t want you to be out of work longer than you need to be. I don’t want you to not be a caregiver for your kids,” Wolfe said. “Maybe you have business travel coming up, and I don’t want you to be sick still on that plane.” Given the challenges around access to antivirals, he said that “the best drug is the one you can get.” Both Tamiflu and Xofluza can make this historically bad flu season a little more bearable.


This story originally stated that Xocova, not Xofluza, when given as a prophylaxis for flu, cut the chance of illness by 80 percent. Xocova is a COVID antiviral.

Before Adam Sharples became a molecular physiologist studying muscle memory, he played professional rugby. Over his years as an athlete, he noticed that he and his teammates seemed to return to form after the offseason, or even from an injury, faster than expected. Rebuilding muscle mass and strength came easy: It was as if their muscles remembered what to do.

In 2018, Sharples and his research lab, now at the Norwegian School of Sport Sciences in Oslo, were the first to show that exercise could change how our muscle-building genes work over the long term. The genes themselves don’t change, but repeated periods of exertion turn certain genes on, spurring cells to build muscle mass more quickly than before. These epigenetic changes have a lasting effect: Your muscles remember these periods of strength and respond favorably in the future.

Intuitively, this makes sense. Past exercise primes your muscles to respond more robustly to more exercise. Over the past few years, Sharples’s lab has found that muscles have additional molecular mechanisms for remembering exercise; he and other scientists have been building on this research, too, confirming epigenetic muscle memory in young and aged human muscle, after different modes of training, as well as in mice. Now 40 years old, Sharples is still thinking about how our muscles remember but has lately been investigating the inverse trajectory: Do muscles have a similar memory for weakness?

The answer appears to be yes. “Our new data shows that muscle does not just remember growth—it also remembers wasting,” Sharples told me, of a study published in preprint on bioRxiv and currently in peer review for Advanced Science. “The more encounters you have with injury and illness, the more susceptible your muscle is to further atrophy. And, well—that’s what aging is, isn’t it?”

The Norwegian government’s research council has been funding Sharples’s research and has a vested interest in the lab’s discoveries. In the next decade, Norway is expected to become a “super-aged society,” in which more than one in five people are age 65 or older. Japan and Germany have already crossed this threshold, and the United States is expected to reach it by 2030. Age-related muscle weakness is a major risk factor for falls, which are a leading cause of injury and death worldwide among people 65 and older. Better understanding how muscles remember and react to their weakest moments is a crucial step toward knowing what to do about it.

As part of the new study, Sharples’s team studied repeated periods of atrophy in young human muscle, using a knee brace and crutches to immobilize participants’ legs for two weeks at a time. This level of disuse, Sharples said, is comparable to real-world situations in which muscle rapidly loses size and function—limb immobilization after fractures or other injuries, periods of hospitalization or bed rest, reduced weight-bearing during recovery. A couple of years ago, I went to observe this research for my book On Muscle; one study participant, an avid skier and cyclist, told me he was shocked by how significantly the muscles in his leg deteriorated after just a couple of weeks of immobilization. The team also ran a concurrent study in aged rat muscle, in collaboration with Liverpool John Moores University; in both studies, repeated periods of disuse led to epigenetic changes—shifts in the way genes were expressed.

These changes affected the core functions of muscle cells, hampering the genes in mitochondria—the powerhouses of the cell, which generate the energy required to contract and relax muscle fibers. Letting muscles weaken suppressed genes involved in mitochondrial function and energy production in particular, including genes that are essential for muscle endurance and recovery. The researchers also found that a key marker of mitochondrial abundance dropped more drastically after repeated atrophy than after the first episode, indicating that repeated disuse makes muscle more vulnerable. In other words, the evidence suggests that every time you fall down the hole, it becomes more difficult to climb back out.  

Similar changes occurred in both the young human muscle and the aged rat muscle. But the young muscle could adapt and recover. After repeated atrophy, it showed a less exaggerated gene-expression response than the aged muscle did. “There seems to be some resilience and protection with young muscle the second time around,” Sharples said. He likened this to an immune-system response: Young muscle responds better to atrophy the second time because it has encountered it before and knows how to bounce back. By contrast, aged muscle becomes more sensitive after repeated atrophy, showing a worsened response with the second episode.

How long our muscles hold on to any of these memories is still up for debate. “Because of our study periods, we do know with some certainty that epigenetic memories can last at least three to four months, and that protein changes can also be retained,” Sharples said. “How long after that is difficult to say. But we know from our studies of cancer patients that epigenetic changes in muscle were retained even 10 years out from cancer survival.”

This was startling to hear. If an adverse health event is dramatic enough, like cancer, our muscles can carry the effects of that for a decade or more. More typically, though, inactivity, aging, and repeated episodes of disuse may gradually shift the system toward a state in which weakness becomes more entrenched and recovery slower.

Understanding what drives muscle to remember stress—whether beneficial, like exercise, or damaging, like illness—could help us better judge what to do about this, says Kevin Murach, an associate professor at the University of Arkansas who studies aging and skeletal muscle and who was not involved in the new study. Knowing the mechanisms that drive beneficial changes at the molecular level could help researchers develop drugs with similar effects. On the other end of the spectrum, if illness and immobilization have long-term negative effects, Murach told me, the next question to answer is: “Can we use exercise to offset that?”

Both Murach and Sharples said the data are getting only more robust that strength training, paired with endurance or high-intensity interval training, is the best therapy to protect against age-related loss of muscle and function. “Perhaps the key takeaway is that at any point along this continuum, new exercise or loading stimuli can still shift the balance back towards growth and health,” Sharples said. “I don’t think there is a point at which muscle can’t respond at all—it simply becomes less efficient when repeatedly weakened or when older.”

Identifying genes associated with muscle growth, as well as pharmaceutical targets, could mean that drugs or gene therapy may eventually be able to assist with boosting muscle response for people who cannot exercise. Murach and Sharples cautioned, though, that stimulating muscle-cell growth can have unintended consequences, in part because growth pathways are common across cell types—including cancer cells.

What the new work does show is that our muscle mass is not a blank slate. “What we’re finding suggests that our muscles may carry a history of both strength and weakness,” Sharples said. It’s shaped by factors including age, baseline muscle health, previous atrophy events, and previous exercise training. “And that history shapes how our muscles respond in the future.” I came away from our conversation thinking about the battle between positive muscle memory for strength and negative muscle memory for atrophy as a kind of tug-of-war: The two are constantly in tension, but the more experiences you have of one or the other, the more it pulls you into its embrace.