Category: Medical News

Robert F. Kennedy Jr. has a new rallying cry: Eat real food. It’s an intuitive piece of advice—snack on some grapes instead of potato chips, trade that microwaveable mystery meat for a grilled chicken breast. The tagline has accompanied the new Dietary Guidelines for Americans, the government’s official nutrition recommendations, which call for Americans to prioritize whole foods and limit processed ones. “It’s time to start eating real food again,” the health secretary said during a speech in Pennsylvania last week. The Trump administration has even launched a new website, realfood.gov, which welcomes visitors with an animation of a steak, a carton of whole milk, and a head of broccoli.

The path toward Kennedy’s goal runs through an overlooked piece of the food landscape: convenience stores. The purveyors of late-night hot dogs, tins of Zyn, and countless varieties of gummy worms generally don’t sell a lot of “real food.” But in America’s food deserts, convenience stores are more than just places to pick up a snack—they’re grocery stores. The USDA estimates that tens of millions of Americans live in low-income areas with limited access to grocery stores. Mini-marts such as 7-Eleven and Sheetz “have an outsized role in our food system because they are the only food access point in many neighborhoods,” Hilary Seligman, an expert on food insecurity at UC San Francisco, told me.

The Trump administration does have a tool it can use to force the humble mini-mart to stock healthier options. Food stamps are an essential part of convenience stores’ business, but to participate in the program, retailers must carry products across a range of food groups. Those standards are poised to get an overhaul: “We are requiring that every outlet that accepts food stamps double the amount of real food that it sells in its facility,” Kennedy said last week. That decision is really up to Secretary of Agriculture Brooke Rollins, whose agency runs the food-stamp program, and she made similar comments at a press conference earlier this month. When I asked the Department of Agriculture for more details about the planned changes, a spokesperson directed me to a proposal released in draft form last fall. Under those new rules, stores would be required to carry seven varieties of food across four categories: protein, dairy, grains, and vegetables and fruits.

[Read: The new food-stamp rules will make your head spin]

The move could put us one step closer to healthy convenience stores, which have long been a dream of some nutrition reformers. Several experiments in different cities—including in Baltimore, New Orleans, and Denver—have tried to alleviate the problem of food deserts by improving the offerings at local mini-marts. A review of these pilot programs found “consistent improvements across most of the trials in the availability and sale of healthy foods, the purchase and consumption of those foods, and consumer knowledge.”

But the Trump administration’s plan isn’t perfect. Under the draft proposal, corner stores won’t have to follow any specific nutritional guidelines, so long as they meet the food-group standards. The Center for Science in the Public Interest, a consumer-advocacy group, has warned that the new standards mean that stores could stock Fruity Pebbles to fulfill the grain requirement, and sugary canned peaches for the fruit requirement. “It would be very easy for a retailer to meet the proposed rule by stocking foods in each staple food category that are high in added sugar, sodium, saturated fat, and refined grains,” Joelle Johnson, the deputy director of CSPI, told me.

[Read: Why don’t convenience stores sell better food?]

Changing mini-marts for good won’t be easy. Owners stock their stores based on what sells, and available data suggest that “real food” isn’t exactly driving sales. A recent analysis by Lula Commerce, an online-shopping tool for convenience stores, found that alcohol, soda, and energy drinks were the top items ordered by its stores’ customers. And a 2021 report by the National Association of Convenience Stores pointed to packaged beverages, cigarettes, and salty snacks as among the best sellers. Convenience stores are concerned that stocking healthy products will just mean wasted shelf space in their already crowded aisles. They typically don’t have the margins necessary to swap out a section of the drink cooler to stock cauliflower or salmon fillets, especially when these products may rot. NACS supports aspects of the Trump administration’s push to heighten the requirements for what these stores have to stock, Margaret Hardin Mannion, the group’s director of government relations, told me. But, she added, “we have to be able to sell what our customers want to buy.” Though the Trump administration is within its power to mandate that convenience stores participating in the food-stamp program stock as many healthy goods as it deems fit, that could easily backfire if retailers back out of the program entirely.

Until convenience stores actually give customers the option to buy strawberries instead of Slim Jims, it’s hard to know exactly how well-founded their fears really are. The smattering of experiments that stocked mini-marts with fresh produce suggests that people do have some interest in eating healthier when they can. But many of these trials also included efforts meant specifically to drum up sales of healthy items. In one experiment in Baltimore, convenience stores were encouraged to stock certain healthy foods through subsidies from researchers. Retailers hung up posters to promote these new healthy foods, and some gave out coupons to encourage specific purchases. Replicating an experiment like this at a national scale would cost some major cash. (There are more than 150,000 convenience stores in the United States.) And it would require a real leap of faith from an administration that to date has focused much of its effort on restricting the products that can be purchased with food stamps, and on rooting out alleged fraud within the program.

For now, RFK Jr. and other health officials are stuck with a convenience-store conundrum. The mini-mart embodies how hard it is to actually improve the American diet. It’s enough of a challenge to get stores to stock their shelves with broccoli. But that still isn’t a guarantee that people will put down the Doritos.

Polio Was That Bad

In the United States, polio is a memory, and a fading one at that. The last major outbreak here happened in 1952; the virus was declared eliminated in 1979. With all of that behind us, you can see how someone—say, Kirk Milhoan, the chair of the CDC’s vaccine advisory committee—might wonder whether giving the polio vaccine to American kids still makes sense. “We need to not be afraid to consider that we are in a different time now,” Milhoan said on the podcast Why Should I Trust You? last week.

To be fair, Milhoan didn’t endorse yanking the polio vaccine from the CDC’s childhood-immunization schedule, as other vaccines were earlier this month. But he didn’t rule it out. And right now, when it comes to vaccines in America, anything seems possible. With Robert F. Kennedy Jr. at the helm of the Department of Health and Human Services, and with the CDC’s vaccine advisory committee stacked with his allies, every inoculation—no matter how well studied or successful—seems to be under new scrutiny, and at least potentially on the chopping block. Next on the committee’s agenda is looking into the safety of aluminum salts, which are used in numerous vaccines to boost the recipient’s immune response. For the record, a study of more than 1 million Danish children, published last July, found no statistically significant evidence linking aluminum in vaccines to asthma, autoimmune conditions, or neurodevelopmental disorders, including autism.

The polio vaccine, which doesn’t contain aluminum, hasn’t received much attention so far from Kennedy’s HHS; the department did not respond to my questions about whether it is considering changing its recommendations about this vaccine, and my queries to Milhoan went unanswered. But its time may be coming. As anti-vaccine activists are quick to point out, an American kid’s risk of catching polio in 2026 is vanishingly low. So why don’t we drop it from the recommendation list? Or perhaps, as we did with the smallpox vaccine in the early 1970s, just stop offering it altogether?

In anti-vaccine circles, the official story of polio—iron lungs, kids with leg braces, the triumph of Jonas Salk—has long been dismissed as misleading. In a 2020 debate with the lawyer Alan Dershowitz, Kennedy raised doubts about whether the vaccine had really been responsible for ridding the country of polio, crediting instead factors such as sanitation and hygiene. Last June, Aaron Siri, a lawyer who has worked closely with Kennedy, called for the polio vaccine to be struck from the CDC’s recommendations. In his recent book, Vaccines, Amen, Siri argues that the seriousness of polio has been overblown—a sentiment shared by others in the health secretary’s orbit, including Del Bigtree, who served as the communications director for Kennedy’s presidential campaign. Joe Rogan suggested on his podcast last March that the pesticide DDT, rather than the virus, deserved blame for symptoms attributed to polio.

[Read: Here’s how we know RFK Jr. is wrong about vaccines]

That’s all nonsense. Polio, which usually spreads through contact with an infected person’s feces via contaminated hands or water, can be a devastating disease. That 1952 outbreak killed some 3,000 people and left more than 20,000 paralyzed. About one in 200 people who contract polio will experience a form of paralysis. As many as 40 percent of people who recover from the virus, even a mild form of it, develop post-polio syndrome, which can emerge decades after an infection and cause weakened muscles and trouble breathing and swallowing. The vaccines—the original shot that contains inactivated-virus particles, plus an oral solution of weakened live viruses—did indeed lead to the near-elimination of polio globally and have spared millions from the worst outcomes of the disease.

Yet polio has proved stubbornly hard to stamp out. In 1988, as the virus was still endemic to more than 100 countries, the World Health Organization set a goal to eradicate polio by the year 2000. That deadline came and went, as did the ones that followed. A wild strain of the virus remains endemic in Pakistan and Afghanistan, and periodic outbreaks continue across Africa and elsewhere, including one in the Gaza Strip in 2024. One obstacle to eradicating the virus has been vaccine hesitancy, and even violence against health workers. In the early 2000s, five states in northern Nigeria boycotted the polio vaccine in part because of rumors that it was an American plot to spread HIV. Polio workers have been killed in Pakistan and Afghanistan, likely by the Taliban, which has alleged that vaccines are intended to sterilize Muslim children (and has denied responsibility for some of the attacks).

In regions where polio is rampant, the oral vaccine is preferred: It’s both cheaper and more effective at stopping transmission than the inactivated-virus injection. The United States and other countries where the disease is almost nonexistent exclusively use the inactivated version—in part because in rare instances, the oral vaccine can cause polio infection, which may in turn lead to paralysis. Anti-vaccine activists like to point to this unfortunate irony as proof that vaccination is the real villain. Taking the oral vaccine remains far less risky than contracting the wild virus, and it has driven down overall infection rates. But it’s an imperfect tool, and the WHO intends to phase it out by the end of 2029. (Given historical precedent, along with the Trump administration’s dramatic pullback from polio-vaccination efforts as part of its dismantling of foreign aid, this timeline might be optimistic.)

[Read: Polio is exploiting a very human weakness]

Ending polio vaccination altogether, according to the WHO’s plan, will take considerably longer. After polio is declared eradicated worldwide, the organization wants countries to wait 10 years before stopping use of the inactivated-virus shots, to be certain that the virus is no longer circulating, Oliver Rosenbaum, a spokesperson for WHO’s Global Polio Eradication Initiative, told me.

Some polio experts, though, told me that they think ending vaccination at any point is unrealistic. Because in many cases the disease spreads without a person showing symptoms (unlike smallpox, which is not contagious before symptoms develop), large numbers of people can be infected before authorities are even aware of an outbreak. Some experts, such as Konstantin Chumakov, a virologist who began researching polio in 1989, worry that the virus could be used as a biological weapon in an entirely unvaccinated country. “In my opinion, and in the opinion of many respected polio experts, this is absolutely unacceptable because you can never assure that polio is completely eradicated,” Chumakov told me.

Ending or reducing polio vaccination in the U.S. before global eradication, as Milhoan seems to imply the country should consider, would be an even worse idea. A paper published last April in the Journal of the American Medical Association projected the consequences of U.S. vaccination rates declining by half and concluded that the country would see a significant return of paralytic polio. Some paralyzed people would probably require a ventilator, the modern equivalent of the iron lung, and, based on typical fatality rates, about 5 to 10 percent of them would die.

If Americans stopped vaccinating their kids against polio entirely, several years might pass without the U.S. seeing any cases, or with only a few here and there, Kimberly Thompson, a public-health researcher and the president of the research nonprofit Kid Risk, told me. (Even under the current vaccination system, occasional spread of the virus occurs. In 2022, for instance, an unvaccinated young adult in Rockland County, just north of New York City, tested positive for a vaccine-derived strain of polio even though they hadn’t traveled overseas.) But eventually, we could find ourselves back in the same situation as in the early 1950s.

Or it could be even worse. Seventy-five years ago, many American children inherited some polio immunity from their mothers, and so they had at least partial protection against the virus’s worst effects. The introduction of polio to a population that has little to no immunity could cause its mortality rates to exceed those experienced in the first half of the previous century, Chumakov said. A 2021 study, co-authored by Thompson, modeled what might happen in an extreme scenario: if global polio vaccination ended, no one were immune, and the virus had somehow been reintroduced. It estimated that such a scenario could cause tens of millions of cases of paralysis worldwide.

[Read: South Carolina is America’s new measles norm]

Milhoan insisted on the podcast that Americans shouldn’t be afraid to rethink vaccine policy. And he’s right that health authorities should reevaluate risk and offer the most up-to-date medical advice. Still, there’s something to be said for the utility of fear. Most Americans, including me, are too young to have any personal knowledge of polio. It’s all textbook summaries and black-and-white newsreels; we’ve never worried about shaking someone’s hand or going swimming and then ending up in a wheelchair. But vaccines are the only thing stopping us from getting another firsthand look.

Last year, starting in January, the United States experienced its largest documented measles outbreak in more than three decades, when an epidemic centered in West Texas sickened at least 762 people. Now a fast-moving outbreak in South Carolina seems poised to surpass it: Local officials have logged 700 infections, and the virus is still rapidly spreading.

As public-health officials scramble to contain the virus, they’re also trying to figure out whether these two outbreaks are connected—specifically, whether the version of the pathogen that kick-started the West Texas cases has been circulating within the nation’s borders ever since. If the answer is yes, it will mean that measles has once again become a permanent resident of this country, after 26 years of only limited outbreaks imported from abroad. Given that the U.S. clocked more than 2,200 measles cases in 2025—more than it has had in a single year since 1991—the experts I spoke with already consider this the reality that Americans are living in. One of the fastest-spreading viral diseases ever documented has once again become a routine threat.

At this point, researchers are working to find the connective tissue among some of the largest measles outbreaks in the U.S. within the past year, including the ones centered in West Texas, Utah, Arizona, and South Carolina. Technically, the epidemics still could have been caused by separate reintroductions of measles from at least one international source. But “that’s a hard stretch,” Robert Bednarczyk, a global-health researcher and epidemiologist at Emory’s Rollins School of Public Health, told me. The most likely and so far best-supported scenario, he said, is also the simplest one—that the virus spread so fiercely and quickly through these communities that it was able to hitch a ride elsewhere in the country when infected people traveled.

If further evidence proves that scenario true, the Pan American Health Organization could strip the U.S. of its official measles-elimination status—which the country has held since 2000—at a meeting scheduled for April. (A country achieves elimination status when it can show that the virus hasn’t been circulating for 12 consecutive months; it loses the status when researchers show that measles has been spreading for a year straight.) Already, PAHO has publicly confirmed that scientists have found the same strain of measles in Texas, New Mexico, Utah, Arizona, and South Carolina, as well as in Canada, Mexico, and multiple other North American countries. (In response to a request for comment, a PAHO spokesperson clarified that although these detections had been made, the committee was still seeking further evidence.)

Health officials also recently announced that South Carolina’s outbreak has seeded cases elsewhere, including Washington State. Still, the case for measles’ continuous transmission can’t yet be considered a slam dunk. To prove it definitively, researchers will need to show that geographically distant outbreaks in the U.S. are epidemiologically linked and that there is not sufficient evidence suggesting that the virus bounced back and forth between countries.

Two types of information are essential to these investigations. First, researchers look into the travel histories of infected people, who might have brought the virus from one state to another. Second, they compare genetic sequences pulled from the virus across locations. Measles mutates slowly enough that researchers can in many cases search for essentially the same strain when tracking its movements. But the virus does accumulate some changes in its genome, and the further apart two cases are in time, the more distinct their genetic material should be. If measles was being continuously transmitted, scientists might expect to see slightly different iterations of the virus racking up mutations as it traveled, say, from Texas to South Carolina. If measles had been introduced separately to those locations, the sequences pulled from each state might more closely resemble genetic information from an international source, Pavitra Roychoudhury, a pathogen-genomics expert at the University of Washington, told me.

But some of the data that officials need may be lacking. For months, experts have been concerned that the U.S. has been severely undercounting its measles cases and that the virus had been circulating in some communities long before it was officially detected. Cases of the disease can be easy to miss, Helen Chu, an immunologist at the University of Washington, told me. The early days of measles are usually marked by common symptoms such as fever and cough; to the untrained eye, the virus’s rash can look like the reddish, patchy blemishes that many other pathogens cause. The overwhelming majority of measles cases in the U.S. have also been concentrated in communities that have low vaccination rates, which often have less access to medical care and the sort of testing that would also collect viral samples. Many people who deliberately decline vaccination for their families are also skeptical of seeking medical care in general, or of public-health officials investigating outbreaks.

With inconsistent data, researchers may be left sorting through genetic sequences that neither point clearly to one another nor obviously implicate separate sources. “There’s a judgment call in that gray area,” William Moss, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health, told me. “How different is different?” The last time measles spread endemically in the United States, this sort of genomic analysis was not commonplace.

Should PAHO find that measles is spreading continuously in the U.S. again, the nation’s leaders may shrug off the change. At times, top officials at the Department of Health and Human Services appear to have dismissed the notion of continuous spread: In November, Jim O’Neill, Health Secretary Robert F. Kennedy Jr.’s deputy and the CDC’s acting director, posted on social media that “preliminary genomic analysis suggests the Utah and Arizona cases are not directly linked to Texas.” (Neither the Trump administration nor PAHO has released the details about the data it reviewed or how similar any identified strains are.) And last week, the CDC’s new principal deputy director, Ralph Abraham, described the prospect of the U.S. losing its measles-elimination status as “just the cost of doing business with our borders being somewhat porous for global and international travel.” Certain communities, he added, “choose to be unvaccinated. That’s their personal freedom.”

In an email, Emily G. Hilliard, HHS’s press secretary, echoed Abraham’s comments, describing current outbreaks as “largely concentrated in close-knit, under-vaccinated communities with prevalent international travel that raises the risk of measles importation,” and noting that the United States still has a lower measles burden than Canada, Mexico, and much of Europe do.

In practice, the April decision will be a matter of semantics. Whatever the outcome, the U.S. has been weathering a worsening measles situation for years now, as vaccination rates have ticked down and outbreaks have grown larger and more common. In the past year, the Trump administration has made it substantially more difficult for local public-health-response teams to address and contain outbreaks too. HHS reportedly delayed communications from the CDC to officials in West Texas and held back federal funds to fight the outbreak for two months. More recently, HHS pledged to send $1.4 million to address South Carolina’s outbreak, though it began months ago. (One recent analysis suggests that measles outbreaks of this scale can cost upwards of $10 million.) The administration has repeatedly downplayed the benefits of immunization, while exaggerating the importance of nutritional supplementation for combatting measles. Kennedy has also spent decades repeating disproved claims that vaccines such as the measles-mumps-rubella immunization can cause autism. (Hilliard wrote that Kennedy has consistently said that vaccination is the most effective way to prevent measles, but she also emphasized in her email that people should consult with health-care providers about whether vaccination is best for their family.)  

Should immunity erode further—as experts watching the Trump administration’s actions expect it to—measles will find it even easier to move across the country, until epidemics bleed so thoroughly together that their links become irrefutable. Already, the nation’s leaders have made clear where the U.S. stands on measles: It is an acceptable norm.

Three days into 2026, the United States military seized a foreign leader: Nicolás Maduro. Four days after that, the U.S. health department freed a longtime prisoner of war: saturated fats.

At a recent press conference announcing the publication of the government’s new dietary guidelines, Health and Human Services Secretary Robert F. Kennedy Jr. declared two different military operations in the span of less than a minute: The nation would be retreating from its war on fatty steaks and whole milk, he said, and redeploying for another war, this one on added sugars. News about a third campaign arrived a few days later, when the White House shared a dark and menacing photo of Kennedy with the caption “WE ARE ENDING THE WAR ON PROTEIN.”

This appears to be what happens when someone who has spent years fighting mainstream medicine suddenly finds himself at the center of it. Like a revolutionary turned generalissimo, Kennedy has transformed the former palace into a military command center. He has promised to defeat his enemies in Big Pharma and to purge conflicts of interest from the agencies he leads, so as to end what he has referred to as a “war on public health.” Elsewhere he has promised to withdraw from the “war on alternative medicine,” the “war on stem cells,” the “war on chelating drugs,” the “war on peptides,” the “war on vitamins,” and the “war on minerals.” Anything that his administration hopes to do may now be put in terms of martial conflict: Under Kennedy, policy making and saber rattling go hand in hand.

Kennedy’s deputies and chief advisers are culture warriors in their own right, and they seem to share their leader’s bellicosity. Jim O’Neill, the second in command at HHS, has talked about the need to fight back against gender-affirming medical care, which he describes as part of an evil war on biology; Mehmet Oz, now in charge of Medicare and Medicaid, says that he will “wage a war on fraud, waste, and abuse”; and Calley Means, a top Kennedy adviser, points to an admittedly less catchy “war on the American public having transparency,” which the secretary intends to halt.

This repeated phrasing is more than just a rhetorical tic, and it extends far beyond the typical military analogies—like the wars on cancer and smoking—that have long been embedded in health discussions. As Kennedy and his aides press their case in public, they adopt a persistently antagonistic tone not only toward disease but also toward the medical and scientific establishment. It is as if anyone who has disagreed with the administration must be an enemy combatant. The current HHS regime has already taken shots at supposedly corrupt pediatricians, conflicted ob-gyns, “fake news” science journalists, and “sock puppet” regulators. (In response to questions about the department’s aggressive posture, HHS press secretary Emily Hilliard asserted that Americans’ trust in public health has been declining, and that Kennedy is restoring it.)

RFK Jr. has, in this regard, been a rather effective secretary of war. He has quickly put the old medical establishment on the defensive. My colleagues in academia and medicine are worried about what might happen to them if they write one of the government’s newly forbidden words in a scientific grant or provide the wrong sort of medical care to their transgender patients; venerable scientific outlets such as The New England Journal of Medicine have had to deal with letters from a government lawyer accusing them of bias; and earlier this month, an ex-FDA official nervously joked that he hoped that the IRS wouldn’t audit his taxes as punishment for criticizing agency operations.

When it comes to public-health advice, Kennedy’s agenda has proved to be more focused on attacking previous suggestions than promoting new ones. “Prior guidelines were driven not by health interests,” Kennedy said in a Fox News interview this month, “but by mercantile interests of the food industry.” FDA Commissioner Marty Makary frames his agency’s actions as “setting the record straight” after years of dogma, and NIH Director Jay Bhattacharya—an author of the anti-lockdown “Great Barrington Declaration”—remains engaged in a heated struggle against COVID-related restrictions that ended years ago. On the whole, the administration appears to have adopted a Promethean view of science and medicine: New knowledge is not gradually discovered, but rather rescued from the grasp of special interests by an elite squadron of iconoclasts.

The contrarian brigade sometimes seems to be waging a war on irony itself. Despite the secretary’s repeated promises to wipe out conflicts of interest, some of the experts who advised on the new dietary guidelines have financial ties to the meat and dairy industries. (When those ties were first reported, HHS responded with a statement calling it “absurd to suggest that anything other than gold standard science guided our work on this presidential priority.”) Makary, an avid podcast guest, has used his airtime to issue devastating takedowns of nutrition education, only to be interrupted by advertisements for unproven dietary supplements. Vinay Prasad, another top FDA official and medical provocateur, has joined Makary on the agency’s FDA Direct podcast for what they called a “bashing session” of The Wall Street Journal’s opinion desk—which has a long history of publishing Makary, Bhattacharya, and Kennedy.

Rousing citizens with patriotic calls to battle is a tried-and-true political strategy, but the hostility generated by this public-health administration may not be sustainable. Adrenaline surges don’t last forever, and overreliance on extreme rhetoric will flatten important differences between public-health problems. Repeated attempts to discredit trusted medical experts may also backfire. Earlier this month, a federal judge temporarily prevented the government from terminating millions of dollars in public-health grants awarded to the American Academy of Pediatrics. In her ruling, Judge Beryl Howell cited multiple combative social-media posts made by HHS officials and advisers against the physician group. At the very least, the MAHA strategy of picking fights against the nation’s primary care doctors, news outlets, and career officials is unlikely to inspire a resurgence of the public’s trust. Outrage over skim milk and Froot Loops can only go so far, and Americans may soon grow tired of Kennedy’s forever wars.

For the past year, the United States has gone without its doctor. Ever since Vivek Murthy resigned as surgeon general last January, the role has remained empty despite President Trump’s attempts to fill it. He first nominated the physician Janette Nesheiwat but withdrew her nomination in May after reports that she completed her M.D. not in Arkansas, as she had claimed, but in St. Maarten. In her place, Trump nominated Casey Means, whose background is odd, to say the least.

Means is a Stanford Medicine graduate who dropped out of her surgical residency and has since made a career infusing spiritual beliefs into her wellness company, social-media accounts, and best-selling book. The exact nature of her spirituality is hard to parse: Means adopts an anti-institutionalist, salad-bar approach. She might share Kabbalah or Buddhist teachings, or quote Rumi or the movie Moana. She has written about speaking to trees and participating in full-moon ceremonies, both of which drew ridicule from the conservative activist and unofficial Trump adviser Laura Loomer. Her belief in “the divine feminine” (which she doesn’t quite explain) seems to have led her to renounce hormonal birth-control pills for halting the “cyclical life-giving nature of women.”

Although months have passed since her nomination, Means has still not appeared before Congress—in part because she went into labor with her first child hours before her confirmation hearing was scheduled to begin. (Means did not respond to questions for this story. A spokesperson for Bill Cassidy, who chairs the relevant Senate committee, told me that “the hearing will be rescheduled in the future when Dr. Means is ready” but did not offer a more detailed timeline.) The United States’ year without a surgeon general raises questions about how necessary the role really is. But the surgeon general still serves as the government’s leading spokesperson on public health, and if Means is eventually confirmed, her theology will become rather consequential because it is deeply tied to her beliefs about health. In 2024, she declared in a Senate roundtable on chronic disease that “what we are dealing with here is so much more than a physical health crisis. This is a spiritual crisis.” Part of her solution to both of these crises is to reject experts and institutions in favor of something far more alluring: intuition.

Means wrote in 2024 that she grew up in the Catholic faith, but left the Church in college. She grew fascinated by lectures at the Self-Realization Fellowship Lake Shrine, a spiritual center in Pacific Palisades, California. SRF, the religious organization behind it, was founded in 1920 by Paramahansa Yogananda, the “father of yoga in the West,” whose image graced the album cover of Sgt. Pepper’s Lonely Hearts Club Band. It accepts the teachings of Jesus and other spiritual masters and divinities, but nothing is seemingly as important as one’s personal relationship with God. Yogananda’s book, The Second Coming of Christ, posits that the Second Coming is not necessarily literal, but instead entails an awakening of the divine consciousness in ourselves.

SRF’s influence is apparent in Means’s advice that people follow their “heart intelligence” and “divine intuition” and avoid “blindly ‘trusting the science.’” In a newsletter sponsored by a probiotic-supplement company, she wrote that “applying the scientific method to health and disease has immense utility for helping us understand the natural world and live healthy, longer lives, but it feels increasingly like there is a campaign being enacted against our divine gifts of intuition and heart intelligence.” In another newsletter, she wrote about the role of divine intuition in deciding whether to drink raw milk: She wants to be free to look a local farmer in the eye, “pet his cow, and then decide if I feel safe to drink the milk from his farm.” (One could very well have a lovely experience with a farmer, Kevin Klatt, a registered dietitian and research scientist at UC Berkeley, told me, “but it isn’t going to change the fact that raw milk might give you listeria.”)

In the same newsletter championing bovine contact, Means laments a spiritual crisis of connection to nature. She frequently portrays nature as a force with humanity’s best interests at heart, nearly synonymous with God. In her book, she suggests that chronic stress and trauma can be treated by, among other things, spending time in nature and through “plant medicine”—specifically, psilocybin-assisted therapy. (Means has also written that psychedelics helped her be “one with the moon.”) In that sponsored newsletter, she warned of a prophecy she says was put forth by the Indigenous Kogi people of Colombia, in which humanity has only until 2026 to prove we want to right the wrongs we have foisted upon the Earth, or we will all die. “I use the Kogi prophecy metaphorically,” she wrote. “But I do feel we are on a road to disaster. I think we should take these messages seriously.” Natural disasters, she implied, are a “communication from God.”

Nature worship might be especially appealing at a time when trust in experts is declining and technology has become ever more inscrutable and overwhelming, Alan Levinovitz, a professor of religion at James Madison University and the author of Natural: How Faith in Nature’s Goodness Leads to Harmful Fads, Unjust Laws, and Flawed Science, told me. Means’s appeal to nature and intuition, he said, is empowering because it puts expertise back into everyday Americans’ hands.

The ambiguity of Means’s spiritual views strengthens her appeal—they can be interpreted to fit a wide array of belief systems. Her 2024 New York Times best seller, Good Energy, uses terms such as energy and life force, along with scientific-sounding descriptions of metabolic processes, to insinuate that the vibes are off in the American diet and lifestyle. (Means wrote Good Energy with her brother, Calley, who is now a close adviser to Robert F. Kennedy Jr., the secretary of Health and Human Services.) In her newsletter, she encourages her readers to “avoid conventionally grown foods at all costs,” and warns that buying nonorganic food is a vote to “diminish the life force on this planet” while the use of synthetic pesticides “is giving a poor signal to God (Source!) that we want this miracle to continue.” (Source refers to a godlike or all-powerful entity.) “She’s drawing on lots of different ideas very freely and without much rigor in ways that feel good,” Joseph Baker, a sociologist specializing in religion at East Tennessee State University, told me. “That sort of allows her to seem like a visionary without having to specify anything.”

Emily Hilliard, a press secretary for the Department of Health and Human Services, wrote in an email that religious and spiritual beliefs should not be held against anyone who seeks a government job, and that Means’s “credentials, research background, and experience in public life give her the right insights to be the surgeon general who helps make sure America never again becomes the sickest nation on earth.” The surgeon general has little power to enforce policy, but can call on Congress to put warnings on products like the ones seen on cigarette packets, release guidelines and reports, and lend support to various initiatives. Means’s belief system—which Baker characterized as a “sacralization of the individual”—suggests that she will use that platform to invite Americans to master their own health. In Good Energy, Means writes of chronic conditions such as depression, anxiety, infertility, insomnia, heart disease, erectile dysfunction, and cancer, “The ability to prevent and reverse these conditions—and feel incredible today—is under your control and simpler than you think.”

That statement is one of many in which Means echoes elements of manifestation: the belief that thinking good thoughts and putting in effort will beget good things, a phenomenon Means insists is real. She advocates “tapping into the abundance that is a sheer law of our universe” and calling on a higher power—“When was the last time you simply sat quietly and asked God/spirit/ancestors/nature to help show you the way and guide you to your highest purpose?” she wrote in her newsletter—but also putting in the hard, hard work.

Means goes beyond intuition and heart intelligence to offer concrete suggestions for labor (and spending) that will be divinely rewarded—essentially, a reimagined prosperity gospel. The nature of that work is detailed in the penultimate section of Good Energy. Means recommends eating minimally processed and mostly organic foods, and taking regular cold plunges or showers. (In her newsletter, she also advises Americans to grow the majority of their food; instead of pets, they could “raise chickens and goats and have abundant eggs and milk.”) She includes checklists upon checklists of habits and tests that “enable Good Energy” (and recommends getting a comprehensive lab panel from Function Health, in which she was an investor). She suggests buying a glucose monitor through her own company, Levels, and also recommends various personal-care apps, water filters, and trackers for sleep, food, and activity. Some of these items are sold by the wellness company True Medicine, which helps customers use their health savings accounts for a wide range of purchases, and in which Means has invested; her brother co-founded it. According to financial disclosures made public in September, Means has also received more than $275,000 from supplement companies. (Means has pledged to divest from True Medicine and other wellness interests if she is confirmed.)

Means’s embrace of individualism in health may boost her own bottom line, but it is also wholly unrealistic. Americans work longer hours than people in many other developed nations, and many don’t have enough time to cook dinner, let alone raise goats. Many of the most important nutrition victories over the past century, such as the fortification of foods and the removal of trans fats, were communal and systemic, Klatt, the dietitian and UC Berkeley researcher, told me—the type of science-backed, population-level interventions that Means hasn’t demonstrated much interest in. A different prospective surgeon general might recommend repeated visits with a dietitian and fight for insurance to cover them, instead of “advocating for this kind of woo-woo stuff that has no data behind it,” Klatt said. Means, though, “is not an individual who seems to be wedded to the scientific process,” Timothy Caulfield, a professor and the research director at the Health Law Institute at the University of Alberta, told me. “This is someone who seems to pull things out of thin air and then look for sciencey-sounding rhetoric” to support them.

Perhaps Means’s eventual confirmation hearing will clarify what, exactly, she intends to do as the face of American public health. But even she may not be sure. “The future of medicine will be about light,” Means wrote to her newsletter subscribers last year, before admitting, “I don’t exactly know how yet.”

Milk is mundane in most contexts, but you can’t help noticing when it is smeared across the upper lips of America’s government officials. An image of Donald Trump sporting a milk mustache and glowering over a glass of milk was just one of many dairy-themed posts shared by government accounts on X during the past week, all of which made clear that the milk was whole. In one video, a seemingly AI-generated Robert F. Kennedy Jr. takes a sip and is transported to a nightclub, suddenly milk-mustachioed; in another, former Housing Secretary Ben Carson raises a glass of full-fat and sports a white ’stache. The upper lips of the former collegiate swimmer Riley Gaines and the former NBA player Enes Kanter Freedom, among other personalities embraced by the right, also got the whole-milk treatment.

The posts were shared to celebrate a big month for whole milk. On January 7, the Department of Agriculture released its updated Dietary Guidelines for Americans, which newly recommend whole dairy over low-fat products and place a carton of whole milk near the top of a revamped, upside-down food pyramid. Then, on Wednesday, President Trump signed into law a bill allowing schools to serve whole milk after more than a decade of being restricted to low-fat.

Medical professionals, who have long advised people to avoid full-fat dairy because it contains high levels of saturated fat, were generally critical of the new guidance on milk. But Kennedy and Trump, along with other government officials, have framed the change as a major win for health. Kennedy recently argued that America’s children have been missing out on key nutrients such as calcium and vitamin D because they don’t want to drink the low-fat milk served in schools. The new law, he said at its signing, embodies the new dietary guidelines’ directive to “eat real food.”

The low-fat-versus-whole controversy is a real, active scientific debate. For roughly the past two decades, reduced-fat milk (2 percent milk fat, by weight) has dominated American refrigerators largely thanks to fears about fat in general, and saturated fat in particular. Copious research has linked saturated-fat intake with health issues including cardiovascular disease and cancer, as well as death from all causes. It also leads to higher LDL (“bad”) cholesterol, which has been shown to cause strokes and heart attacks, Kyla Lara-Breitinger, a cardiologist at the Mayo Clinic, told me.

[Read: The most miraculous (and overlooked) type of milk]

Saturated fat generally isn’t a huge concern for children, so giving them the option to drink whole milk at school is somewhat less fraught, Steven Abrams, a child-nutrition expert and a member of the American Academy of Pediatrics, told me. And some researchers propose that, because whole milk is more satiating, kids who drink it are less likely to reach for other high-calorie foods. “Full-fat dairy is especially important for kids ages 12 months to 10 years to meet energy needs and promote brain development,” a spokesperson for the Department of Agriculture wrote in an email. But the AAP holds that kids should switch to drinking low-fat or skim at age 2.

In contrast to most nutritionists, Kennedy is all in on saturated fat, championing foods such as butter, beef tallow, and red meat. At a press conference to announce the new dietary guidelines, Kennedy proclaimed that the government was “ending the war on saturated fats.” The reality is more confusing. The new dietary guidelines promote more foods that are high in saturated fat, but they retain the old recommendation to limit daily saturated-fat intake to 10 percent of total calories, or about 20 grams a day in a 2,000-calorie diet. A single cup of whole milk has 5 grams. If a person consumes the recommended three daily servings of full-fat dairy, it would be “pretty close to impossible” to stay within the saturated fat limit, Caitlin Dow, a senior nutrition scientist at the Center for Science in the Public Interest, told me. (The White House and the Department of Health and Human Services did not respond to a request for comment.)

A relatively new and controversial school of thought posits that full-fat milk isn’t as harmful as other sources of saturated fat. A 2018 study that involved participants from 21 countries found that dairy consumption—even whole-fat dairy—was associated with lower rates of mortality and major cardiovascular-disease events. Other studies have shown that the consumption of whole-fat dairy is linked to decreased diabetes risk and doesn’t cause weight gain. “There’s no convincing evidence that low-fat dairy is preferable to whole-fat dairy for any health outcome,” Dariush Mozaffarian, a cardiologist at Tufts University who was a co-author on the 2018 study, told me. The broader research community has so far resisted this idea, but has acknowledged that the science on dairy fat has become more complex. “The reason you’re getting so many conflicting opinions is that the evidence is very controversial,” Lara-Breitinger said, noting the lack of randomized clinical trials comparing whole-fat and low-fat dairy.

[Read: Go ahead, try to explain milk]

Ultimately, milk isn’t “going to make or break a diet,” Dow said. Dairy makes up just 10 percent of the average American’s caloric intake, and most of that is cheese. Even for kids, very real concerns, such as obesity and diabetes, will probably not be solved—or meaningfully exacerbated—by a switch to whole milk. “You could probably have either low-fat or whole-fat, and it doesn’t matter,” Mozaffarian said.

As I have written previously, Americans have spent roughly the past 150 years quarreling about various aspects of milk, including its benefits, safety, and chemical composition. That’s partly because dairy is a powerful industry; last year, dairy products in the U.S. had an economic impact of nearly $780 billion. But since 2012, when the USDA under then-President Barack Obama required schools to serve only low-fat milk, student milk consumption has declined; according to the dairy industry, that’s because low-fat milk doesn’t taste as good. The Trump administration’s promotion of whole milk, Dow said, “really, really supports the dairy industry’s bottom line.” In fact, many of the reviewers of the new dietary guidelines were recently found to have ties to the beef and dairy industries. (When I asked the USDA about allegations of industry influence on the push for whole milk, the spokesperson asserted that the evidence “was evaluated based solely on scientific rigor, study design, consistency of findings, and biological plausibility.”)

[Read: Milk has divided Americans for more than 150 years]

Beyond serving as an economic engine, milk is a potent cultural symbol. It has long evoked an idealized past: a simpler time when cows roamed through pastures and produced pure, wholesome milk, and the Americans who tended them thrived in harmony with the natural world. Dairy companies have leaned into that aesthetic, featuring barns, fields, and words such as pure on milk cartons. Milk is also culturally linked to strength, wealth, and beauty, thanks in no small part to the celebrity-studded dairy ads of the late 20th century, including the “Got Milk?” campaign referenced by the Trump administration’s milk mustaches. Such positive associations make milk a powerful metaphor for what America could be—if certain unsavory elements of modernity could be undone or erased.

Perhaps unsurprisingly, this association has also been invoked in racist contexts for more than a century. In a 1923 speech, Herbert Hoover, who was then commerce secretary and would be elected president five years later, framed milk as a means to ensure “the very growth and virility of the white races.” Modern-day white nationalists and alt-right groups hold up dairy milk as a symbol of whiteness and masculinity, in contrast to soy milk, which they associate with the woke, feminist, multiracial left. (Yes, seriously.)

The idealized era of perfectly safe, perfectly wholesome dairy never really existed. “This whole idea that there was a time when we were healthy, and during that time we were eating steak and drinking whole milk, is not rooted in any reality,” Dow said. Nevertheless, it resonates with the MAHA and MAGA agendas, which both center on the belief that America will return to its former glory if it can re-create the past. The Trump administration’s endorsement of whole milk may nominally be about public health. But a recent White House post featuring a retro illustration of the president as an old-fashioned milkman, captioned “Make Whole Milk Great Again,” was all about aspiration—and the purified nation, untainted by modernity, that America could someday become.

Chief among the burdens weighing upon the weary sports parent—worse than the endless commutes, the exorbitant fees, the obnoxious parents on the other team—is the sense that your every decision has the power to make or break your child’s future. Should your 11-year-old show up to her elementary-school holiday concert, even if it means missing a practice with the elite soccer team to which you’ve pledged 100 percent attendance? What if this turns out to be the fork in the road that consigns her to the athletic scrap heap?

These are heavy decisions—at least they are for me, a soccer dad who happens to have spent years writing about the science of athletic success. Making it to the pros, the conventional wisdom says, is a consequence of talent and hard work. Best-selling books have bickered over the precise ratio—whether, say, 10,000 hours of practice trumps having the so-called sports gene. But the bottom line is that you need a sufficient combination of both. If you’re talented enough and do the work, you’ll make it. If not—well, decisions (and holiday concerts) have consequences.

Rationally, stressing out over missing a single practice is ridiculous. Believing that it matters, though, can be strangely reassuring, because of the suggestion that the future is under your control. Forecasting athletic careers is an imperfect science: Not every top draft pick pans out; not every star was a top draft pick. Unexpected injuries aside, the imprecision of our predictions is usually seen as a measurement problem. If we could only figure out which factors mattered most—how to quantify talent, which types of practice best develop it—we would be able to plot athletic trajectories with confidence.

Unless, of course, this tidy relationship between cause and effect is an illusion. What if the real prerequisite for athletic stardom is that you have to get lucky?

Joseph Baker, a scientist at the University of Toronto’s Sport Insight Lab, thinks that the way talent development is usually framed leaves out this crucial ingredient. Baker is a prominent figure in the academic world of “optimal human development,” who moonlights as a consultant for organizations such as the Texas Rangers. He’s also a longtime skeptic of the usual stories we tell ourselves about athletic talent. The most prominent is that early performance is the best predictor of later performance. In reality, many cases of early success just mean an athlete was born in the first months of the year, went through puberty at a young age, or had rich and highly enthusiastic parents.

This critique of talent is not entirely new. It’s been almost two decades since Malcolm Gladwell’s Outliers spurred a cohort of hyper-ambitious soon-to-be parents to begin plotting January birth dates (or at least to tell people they were considering it). Over time, the debate about what factors actually matter has devolved into a game of whack-a-mole. If physical development isn’t the best predictor of long-term success, then it must be reaction time, or visual acuity, or hours of deliberate practice. The default assumption is that there must be something that reveals the presence of future athletic greatness.

Baker’s perspective changed, he told me, when he read Success and Luck, a 2016 book by the former Cornell University economics professor Robert H. Frank. Frank describes a hypothetical sports tournament whose outcome depends 49 percent on talent, 49 percent on effort, and 2 percent on luck. In mathematical simulations where as many as 100,000 competitors are randomly assigned values for each of these traits, it turns out that the winner is rarely the person with the highest combination of talent and effort. Instead, it will be someone who ranks relatively highly on those measures and also gets lucky.
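A minimal simulation, sketched below in Python, captures the flavor of Frank’s argument. It is not Frank’s actual model: the 49-49-2 weighting and the field of up to 100,000 competitors come from his description, while the uniform random draws, the scoring formula, and the number of simulated tournaments are illustrative assumptions.

```python
# A sketch of a Frank-style tournament: outcomes depend 49 percent on
# talent, 49 percent on effort, and 2 percent on luck. The weights and
# field size follow the description above; everything else here is an
# illustrative assumption rather than Frank's actual code.
import random

def most_skilled_entrant_wins(n_competitors=100_000, luck_weight=0.02):
    skill_weight = (1.0 - luck_weight) / 2  # 0.49 each for talent and effort
    competitors = []
    for _ in range(n_competitors):
        talent, effort, luck = random.random(), random.random(), random.random()
        score = skill_weight * (talent + effort) + luck_weight * luck
        competitors.append((score, talent + effort))
    winners_skill = max(competitors)[1]                # talent + effort of the winner
    top_skill = max(skill for _, skill in competitors)
    return winners_skill == top_skill                  # did the "best" entrant win?

trials = 100
wins = sum(most_skilled_entrant_wins() for _ in range(trials))
print(f"The most talented-and-hardworking entrant won {wins} of {trials} tournaments.")
# With a field this large, the single most skilled competitor prevails only
# a fraction of the time; the winner is usually someone near the top of the
# skill distribution who also drew a good luck value.
```

Shrinking the field or raising the luck share changes how often the most skilled entrant prevails, which is essentially the point Mauboussin makes later with his “paradox of skill.”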

This turns out to be something like a law of nature: It has been replicated and extended by others since Frank’s book came out. Among the most influential models is “Talent Versus Luck,” created by the Italian theoretical physicist Andrea Rapisarda and his colleagues, which simulates career trajectories over dozens of years and reaches the same conclusion. This model earned a 2022 Ig Nobel Prize “for explaining, mathematically, why success most often goes not to the most talented people, but instead to the luckiest.”

To Baker, these models suggest that it’s not just hard to reliably predict athletic futures; it’s impossible. He cites examples including a youth-soccer player for Northampton Town who missed a text message from the team’s manager telling him that he’d been dropped from the roster for an upcoming game. He showed up for the bus, went along for the ride, subbed in when another player got injured, impressed the manager, earned a spot for the rest of the season, and went on to play in the Premier League. Luck takes many forms, such as genetics, family resources, and what sports happen to be popular at a given place at a given time. But sometimes, it’s simply random chance: a gust of wind or an errant bounce or a missed text.

It’s easy to see how luck shapes individual moments in sport—how it changes the course of a game, a series, even an entire season. But what’s harder to accept is that luck might also play a role in longer arcs—not just what happens in games but who appears on the court in the first place. The more you reckon with this, the more disorienting it can be, as things start to feel ever more arbitrary and unfair. As Michael Mauboussin, an investor who writes about luck in his 2012 book, The Success Equation, put it to me: “Talking about luck really quickly spills into the philosophical stuff.”

You might think that the growing professionalization of youth sports offers an escape from this randomness—that by driving to this many practices and paying for that many coaches, you’re ensuring the cream will rise to the top. But the opposite is actually true, according to Mauboussin. In The Success Equation, he describes what he calls the “paradox of skill.” Now that every soccer hopeful is exhaustively trained from a young age, an army of relatively homogeneous talent is vying for the same prizes. “Everyone’s so good that luck becomes more important in determining outcomes,” Mauboussin said.

Baker and one of his colleagues at the University of Toronto, Kathryn Johnston, recently published a paper on the role of luck in athletic development in the journal Sports Medicine–Open. I felt a curious sense of relief when I read it. My daughters, who are 9 and 11, both play competitive soccer on teams requiring a level of commitment that I had naively thought went out of style with the fall of the Soviet Union. Seeing the evidence that future athletic success is not entirely predictable felt like a license for parents to loosen up a bit—to choose the holiday concert over the soccer practice without worrying about the long-term ramifications.

Linda Flanagan, the author of the 2022 book Take Back the Game and a frequent critic of today’s youth-sports culture, doesn’t share my optimism. She has no trouble believing that luck is involved with athletic success, but she doesn’t think that acknowledging this fact will change parental behavior. “Hell, they might double down on the investment in time and money, thinking that they need to give their child more chances to get lucky and impress the right coach,” she told me.

But that sort of luck—getting a job on your hundredth interview because the interviewer went to the same high school as you did, say—arguably is more about hustle than it is about serendipity. So is showing up to every soccer practice. Mauboussin’s definition of luck is narrower: It’s the factors you can’t control. No matter how much luck you try to “create” for yourself or your kids, some irreducible randomness might still make or break you.

To Baker, the takeaways from recognizing the role of luck are less about individual parents and more about how sports are organized. His advice to teams and governing bodies: “If there’s any way possible for you to avoid a selection, don’t select.” Keep as many athletes as you can in the system for as long as you can, and don’t allocate all of your resources to a chosen (and presumably lucky) few. When real-world constraints eventually and inevitably do require you to select—when you’re anointing these lucky few as your future stars, and casting out those who perhaps sang in one too many holiday concerts—try to leave the door open for future decisions and revisions. After all, Baker says, no matter how carefully you’ve weighed your predictions, “you’re probably wrong.”

Updated at 9:30 a.m. ET on January 9, 2026

Every few weeks I turn up in a hospital gown at a medical exam room in Massachusetts and describe a set of symptoms that I don’t really have. Students listen to my complaints of stomach pain, a bad cough, severe fatigue, rectal bleeding, shortness of breath, a bum knee, HIV infection, even stab wounds; on one occasion I simply shouted incoherently for several minutes, as if I’d had a stroke. Then the students do their best to help.

I have been given nearly 100 ultrasounds in just the past year, and referred to behavioral counseling dozens of times. I have been consoled for my woes, thanked for my forthrightness, congratulated for my efforts to improve my diet. I have received apologies when they need to lower my gown, press on my abdomen, or touch me with a cold stethoscope. Our encounters, which sometimes run as long as 40 minutes, end with the students giving me their diagnoses; detailing every test, treatment, and drug they want me to have; and then answering all of my questions without ever looking at their watch. Before leaving, they commend me for coming in and promise to check back in on me. It’s a shame I have to feign an illness to get that kind of care.

I learned about fake medical care four years ago when my son, an M.D.-Ph.D. student, mentioned that he was being graded on his skill at treating “standardized patients”: people who are paid to role-play illness. I’m fascinated by the practice of medicine, so I found this notion irresistible. I applied for a job in the standardized-patient program at the University of Massachusetts, and after two full days of training, plus a lot of reading and videos, I was ready to get started.

The practice of faking medical encounters for the sake of education dates back to 1963 at the University of Southern California, but UMass developed one of the first formalized programs in 1982 and has been a model since. Such programs are now, well, standard: According to a count published in a 2023 review of the practice, 187 of the 195 accredited medical schools in the U.S. describe the use of standardized patients on their websites.

Each specific case that an SP might inhabit—and there are hundreds—comes with a minimum of two hours of additional training in person or via Zoom, along with more reading. We’re buried in a blizzard of unique details to memorize about the patients we portray. By the time I’m ready for my fake exam, I can rattle off what vaccinations I’ve had, how long I’ve worked at my job, whether I’ve had my tonsils out, when my mother died, how much weight I’ve gained or lost in recent months, which vitamins I take, how much coffee I drink, how chatty I tend to be, and whether I’ve traveled recently (and might have parasites!).

There’s no script for my encounters, because you never know what the students might ask, say, and do. So I improvise most of my responses, in keeping with the facts I’ve been given. What do I usually eat for breakfast? What do they make at the factory where I work? What sexual acts do my partner and I engage in? My ad-libs are acceptable, according to the grades I get from staff members who occasionally observe the encounters via camera. But many of my colleagues are professional actors, and their performances are superb. We sometimes work in pairs, and more than once I’ve found myself deeply moved—even to the verge of tears—by my partner’s fake suffering.

Of course, we SPs are not the only ones faking it in these sessions; the students are playing along, too. We score them on as many as 50 different elements, including their tone of voice (was it friendly but professional?), their body language (did they lean in to show engagement?), and their facility at palpating our spleens (did they dig in firmly in the right spot?). Most important, we are meant to check that they are learning empathy. Numerous studies have shown that more empathetic care is correlated with better clinical outcomes, perhaps because it makes patients more inclined to share their full medical history, and more likely to stick with whatever treatment has been recommended. In one survey, orthopedic-surgery patients reported that a doctor’s empathy was more central to their satisfaction than the time it took to get an appointment, how long they were stuck in the waiting room, or even what sort of treatment they ended up receiving.

It may not even matter if the doctor’s kindness is sincere, as long as it sounds that way to patients. Dave Hatem, an internist and professor emeritus at UMass who has helped oversee the school’s SP curriculum, told me that even just the act of trying to say empathetic things is valuable for students. “If you get the right words to come out of your mouth, and you do it often enough, then you get to the point where you really mean it,” he said.

Most of the medical students who examine me do seem genuine in their concern. I suspect that if it were up to them, they’d practice medicine this way for the whole of their careers. But however much they might want to provide the superb treatment that I experience as a standardized patient, the health-care system won’t let them.


Elaine Thompson is a recent graduate of Emory University’s medical school, where she learned to provide the same sort of long, thoughtful, whole-person interactions that I get from students. For the past three years, she has been an ear, nose, and throat resident at Johns Hopkins Medicine, one of the best medical centers in the world. Her real-life patient encounters now last for an average of 10 minutes.

“You quickly learn as a resident that the job is to move things along,” Thompson told me. “I’m still curious about my patients as people and want to learn about their families, but if it’s not relevant to their current problem, then asking about it opens a door that will add time to the visit.” So much for chatting to put them at ease, soliciting a full narrative of their symptoms, hearing all their concerns, asking about their job, uncovering anxieties, addressing financial and social challenges, and encouraging their questions. (In an emailed statement, a spokesperson for Johns Hopkins Medicine said that it is committed to delivering “patient-centered training” and “whole person care.”)

[Read: Learning empathy from the dead]

The same is true for Emily Chin, who received her medical degree from UMass in 2023 and is now an ob-gyn resident at UC San Francisco. She told me that she got the message about keeping visits short early on from senior residents, who made a point of tracking the length of her encounters. “I’d just have time to check the cervix, do a quick ultrasound, and then make a decision about admitting or discharging the patient,” she said. Another source of pressure is the knowledge that spending any extra time with a patient means that dozens of other patients will be waiting longer to be seen: “You see the patients piling up in the waiting room, and you see the schedule screen going red.” (UCSF’s vice dean for education, Karen Hauer, did not object to this characterization, but noted that the school advises its residents on how to establish patient rapport when time is short.)

Residents also learn that time is money. Hospitals and practices view a doctor’s interactions with a patient in terms of “relative value units.” Reimbursement for seeing a patient whose high cholesterol leads to a prescription for a statin might bring $60 into the hospital or clinic. Reimbursement for extra time spent discussing the patient’s fears of side effects and concerns about affording the drug’s co-pay or making dietary changes brings in $0. “That doesn’t exactly encourage providing the most empathetic, patient-centered care,” a UMass Memorial Health resident named Hans Erickson told me.

The residents I spoke with worried that these time pressures were only going to get worse when they finished residency and became full-fledged doctors. In light of those constraints, does it still make sense to emphasize highly empathetic care for students? I asked that question of Melissa Fischer, the physician who directs the SP program and other simulation training at UMass. Fischer argues that the lessons we impart to students can survive the crush of residency, even if they have to be applied in abbreviated ways. “That interest in building connections to patients stays,” she said. “They just have to find faster ways to build them.”

[Read: How to teach doctors empathy]

Lisa Howley, an educational psychologist who serves as the senior director for transforming medical education at the Association of American Medical Colleges, told me that training up a generation of more empathetic medical students will make the health-care system better. “We think of young medical learners as agents of potential change,” she told me. “They’ll see the gaps and weaknesses, and they’ll look for ways to make improvements.” Besides, what would be the benefit of forcing medical students to learn about patient encounters in the hectic, abbreviated format they’ll confront as residents? “It doesn’t make sense to apply those pressures early in their education,” she said. After all, we don’t teach student pilots how to fly a plane while trying to make up for time lost to flight delays or dealing with unruly passengers.

All of the residents I spoke with said they look for ways to connect with patients despite the harsh realities of the system. “The desire to get to know the patient as a whole person doesn’t go away; it’s just a matter of finding ways to bring it to the surface as a stressed resident,” Erickson said. Chin put it this way: “It’s not that it’s challenging to keep up empathy, it’s that it’s hard to be empathetic all the time.”

At the end of my fake encounters, I try to be encouraging. I tell the students how I, as a patient, felt treated by them, and then I challenge them to give ideas for how they might improve. Sometimes, when one of them has done a bang-up job of making me feel heard, I tell them that I hope they’ll be able to sustain that level of engagement when they’re a practicing doctor—and I always get the sense that the students hope so too.


This article originally described “relative value units” as “revenue value units.”

Eat More Deer

Updated at 5:14 p.m. ET on January 9, 2026

The deer were out there. The crisp tracks in the snow made that clear. Three hours into our hunt through the frigid New Hampshire woods, Ryan Calsbeek, a rangy 51-year-old biology professor at Dartmouth, guessed that 200 animals were hiding in the trees around us. Calsbeek and I were 20 feet up a pignut hickory, crouching on a creaky platform. His friend Max Overstrom-Coleman, a stocky 46-year-old bar owner from Vermont, had climbed a distant tree and strung himself up by a harness, readying his compound bow and swaying in the wind. Shivering in camo jackets and neon-orange beanies, we peered into the darkening forest, daring it to move.

I had joined Calsbeek’s December hunt to try to get my hands on high-quality red meat. Calsbeek had yet to kill a deer that season, but in previous years, he told me, a single animal kept his family of four well fed through the winter. His young daughters especially liked to eat deer heart; apparently, it’s marvelously rich and tender. My mouth watered at the thought. The last time I’d tasted venison was more than a decade ago at a fancy restaurant in Toronto, where it was served as carpaccio, drizzled in oil and so fresh that it may as well have pranced out of the woods and onto my plate.

A bounty of such succulent, free-range meat is currently running through America’s backyards. The continental United States is home to some 30 million white-tailed deer, and in many areas, their numbers are growing too rapidly for comfort. Each year, a white-tailed doe can give birth to as many as three fawns, which themselves can reproduce as soon as six months later.

Wherever deer are overabundant, they are at best a nuisance and at worst a plague. They trample gardens, destroy farmland, carry ticks that spread Lyme disease, and disrupt forest ecosystems, allowing invasive species to spread. They are involved in tens of thousands of car crashes each year in New York and New Jersey, where state wildlife departments have encouraged hunters to harvest more deer. In especially populated regions, wildlife agencies hire sharpshooters to cull the animals. Last year, New Hampshire legislators expanded the deer-hunting season in an attempt to keep the population under control. By the looks of the forest floor, which was pitted with hoof marks and scattered with marble-shaped droppings, that effort was falling short.

Over the past decade, some states have proposed a simple, if controversial, strategy for bringing deer under control: Couldn’t people like me—who don’t hunt but aren’t opposed to it—eat more venison?

Venison may not be a staple of American cuisine, but it has a place in many people’s diets. Health influencers laud it as a lean, low-calorie, nutrient-dense source of protein. Venison jerky sticks are sold at big-box stores and advertised as snacks for people on Whole30 and keto diets. Higher-end grocery stores, such as Wegmans and Whole Foods, sell ground venison for upwards of $12 a pound, roughly twice the cost of ground beef.

Part of the reason venison is so expensive is that most of it is not homegrown. It’s mostly imported from New Zealand, which has sent more than 5 million pounds of the stuff to the U.S. every year since 2020. Beef, the dominant red meat in the States, has historically been more affordable. But beef prices jumped nearly 15 percent in 2025, and the conventional kind sold in most supermarkets comes from cattle raised in abysmal conditions. If high-quality venison were cheaper and more widely available, it could be an appetizing alternative.

In recent years, a few deer-swamped states, including New Jersey and Maryland, have tried to legalize the sale of hunted venison, which would deliver two key benefits: more deer out of the ecosystem and more venison on people’s plates. Despite the sport’s association with trophies, many deer hunters are motivated by the prospect of obtaining meat, and they can only consume so much. “It’s for your own table,” Overstrom-Coleman said as he fixed climbing sticks onto a tree to form a makeshift ladder. He had already stocked his freezer full of venison this season (“That son of a bitch,” Calsbeek whispered, once we’d left our companion in his tree) and planned, as many hunters do, to donate any excess meat to a food bank.

Hunting is waning in popularity, in part because younger people are less keen on participating than older generations. Efforts to bring in more hunters, such as programs to train women and youth in outdoor skills, are under way in many states. Women are the fastest-growing demographic, and they participate largely to acquire food, Moira Tidball, the executive director at the Cornell Cooperative Extension who leads hunting classes for women, told me. Still, interest is not growing fast enough for the subsistence-and-donation system to keep deer numbers in check.  

[Read: America needs hunting more than it knows]

It’s hard to imagine a better incentive for deer hunting than allowing hunters to sell their venison to stores and restaurants. But the idea is antithetical to a core tenet of American conservation. For more than 100 years, the country’s wild game has flourished under the protection of hunters and their allies, steadfast in their belief that the nation’s animals are not for sale.

The last time this many white-tailed deer roamed America’s woodlands, the country didn’t yet exist. To the English colonists who arrived in the New World, the deer bounding merrily through the forests may as well have been leaping bags of cash. Back home, deer belonged to the Crown, and as such, could be hunted only by the privileged few, Keith Tidball, a hunter and an environmental anthropologist at Cornell (and Moira’s spouse), told me. In the colonies, they were free for the taking.

Colonists founded a robust trans-Atlantic trade for deer hide, a particularly popular leather for making work boots and breeches, which drastically reduced the deer population. In Walden, Henry David Thoreau mentions a man who preserved the horns “of the last deer that was killed in this vicinity.” The animals were already close to disappearing from many areas at the beginning of what ecologists have called the “exploitation era” of white-tailed deer, starting in the mid-19th century. Fifty years later, America was home to roughly half a million deer, down 99 percent from precolonial days.

The commerce-driven decimation of the nation’s wildlife—not just deer but birds, elk, bears, and many other animals—unsettled many Americans, especially hunters. In 1900, Representative John Lacey of Iowa, a hunter and close friend of Theodore Roosevelt’s, introduced a bill to ban the trafficking of America’s wildlife. (As Roosevelt, who notoriously hunted to collect trophies, wrote in 1913, “If there is to be any shooting there must be something to shoot.”) The Lacey Act remains one of the most binding federal conservation laws in existence today.  

[From the May 1906 issue: Camping with President Theodore Roosevelt]

The law is partly contingent on state policies, which make exceptions for certain species. Hunters in most states, for example, can legally harvest and sell the pelts of fur-bearing species such as otters, raccoons, and coyotes. But attempts to carve out similar exceptions for hunted venison, including the bills in Maryland and New Jersey, have failed. In 2022, the Mississippi attorney general published a statement that opened up the possibility of legalizing the sale of hunted deer, provoking fierce opposition from hunters and conservationists; today, the option remains open but has not led to any policy changes. Last year, an Indiana state representative introduced a bill that would allow the sale of hunted venison, but so far it has gone nowhere.  

The practical reason such proposals keep failing is that allowing the sale of hunted meat would require huge investments in infrastructure. Systems to process meat according to state and federal laws would have to be developed, as would rapid testing for chronic wasting disease, an illness akin to mad cow that could, theoretically, spread to humans who eat infected meat, though no cases have ever been reported. Such systems could, of course, be implemented. Hunted deer is sold in some common grocery stores in the United Kingdom, such as Waitrose and Aldi. (Notably, chronic wasting disease is not a concern there.)

[Read: Deer are beta-testing a nightmare disease]

Although the sheer abundance of deer makes them easy to imagine as steaks on legs, several experts cautioned that some people’s affection for the animals runs deep. Deer are cute; they’re docile; they’re Bambi. David Drake, a forestry and wildlife professor at the University of Wisconsin at Madison, likens them to America’s “sacred cow.” As Drake and a colleague have outlined in a paper proposing a model for commercialized venison hunting in the U.S., any modern system would be fundamentally different from the colonial-era approach because it would be regulated, mostly by state wildlife agencies. But powerful coalitions of hunters and conservationists remain both faithful to the notion that wild game shouldn’t be sold and fearful that history will repeat itself. As the Congressional Sportsmen’s Foundation, a national hunting association, puts it, “Any effort to recreate markets for game species represents a significant threat to the future of our nation’s sportsmen-led conservation efforts.” Some of the fiercest pushback to the New Jersey law, Drake told me, came from the state wildlife agency.

The only U.S. state with a deer-related exception to the Lacey Act is Vermont. During the open deer-hunting season (which spans roughly from fall to winter in the Northeast) and for 20 days afterward, Vermonters can legally sell any meat that they harvest. This policy was introduced in 1961, and yet, “I am not aware of anyone who actually takes advantage of it,” Nick Fortin, a wildlife biologist at Vermont’s Fish and Wildlife Department, told me. He added that the department, which manages the exasperated homeowners and destabilized forests that deer leave in their path, has been discussing how to raise awareness about the law.

Even after I explained the 1961 law to several Vermont hunters, they were hesitant to sell me any meat. Hunted meat is meant to be shared freely, or at most bartered for other items or goodwill, Greg Boglioli, a Vermont hunter and store owner, told me. I met Boglioli at the rural home of his friend Fred Waite, a lifelong hunter whose front room alone was decorated with 20 deer heads. I had hoped to buy venison from Waite, but he insisted on sharing it for free. After all, he had plenty. His pantry was crammed with mason jars of stewed venison in liver-colored brine. On a table in the living room was the scarlet torso of a deer that his son had accidentally hit with his truck the other day, half-thawed and waiting to be cooked.

During our hunt, I found Overstrom-Coleman to be more open to the idea of selling the venison he hunted. “I guess that would be a pretty excellent way to share it,” he said. Earlier in the season, he’d killed a deer in Vermont, and he was willing to sell me some of the meat the next day. At least, I thought as I stared into the motionless woods, I’d be going home with something.

[From the July/August 2005 issue: Masters of the hunt]

By the time the sun went down, the only deer I’d seen was a teetering doe in a video that Overstrom-Coleman had taken from his tree and sent to Calsbeek. “Too small to kill,” he texted; he’d meet us in the parking lot. The air was glacial as Calsbeek and I trudged empty-handed toward the trailhead, hoofprints glinting mockingly in the light of our headlamps. From the trunk of the car, we took a consolation swig of Wild Turkey from a frosted bottle, and Overstrom-Coleman reminded me to visit the next day.

I found his chest freezer stuffed with paper-wrapped packages stamped with Deer 2025. He handed me three and refused to let me pay. Back home a few days later, I used one to make meatballs. Their sheer depth of flavor—earthy and robust, with a hint of nuttiness—made me wonder why I bothered to eat farmed meat at all.


This article originally misidentified Max Overstrom-Coleman’s hunting weapon.
