Updated at 9:30 a.m. ET on January 9, 2026

Every few weeks I turn up in a hospital gown at a medical exam room in Massachusetts and describe a set of symptoms that I don’t really have. Students listen to my complaints of stomach pain, a bad cough, severe fatigue, rectal bleeding, shortness of breath, a bum knee, HIV infection, even stab wounds; on one occasion I simply shouted incoherently for several minutes, as if I’d had a stroke. Then the students do their best to help.

I have been given nearly 100 ultrasounds in just the past year, and referred to behavioral counseling dozens of times. I have been consoled for my woes, thanked for my forthrightness, congratulated for my efforts to improve my diet. I have received apologies when they need to lower my gown, press on my abdomen, or touch me with a cold stethoscope. Our encounters, which sometimes run as long as 40 minutes, end with the students giving me their diagnoses; detailing every test, treatment, and drug they want me to have; and then answering all of my questions without ever looking at their watch. Before leaving, they commend me for coming in and promise to check back in on me. It’s a shame I have to feign an illness to get that kind of care.

I learned about fake medical care four years ago when my son, an M.D.-Ph.D. student, mentioned that he was being graded on his skill at treating “standardized patients”: people who are paid to role-play illness. I’m fascinated by the practice of medicine, so I found this notion irresistible. I applied for a job in the standardized-patient program at the University of Massachusetts, and after two full days of training, plus a lot of reading and videos, I was ready to get started.

The practice of faking medical encounters for the sake of education dates back to 1963 at the University of Southern California, but UMass developed one of the first formalized programs in 1982 and has been a model since. Such programs are now, well, standard: According to a count published in a 2023 review of the practice, 187 of the 195 accredited medical schools in the U.S. describe the use of standardized patients on their websites.

Each specific case that a standardized patient, or SP, might inhabit—and there are hundreds—comes with a minimum of two hours of additional training in person or via Zoom, along with more reading. We’re buried in a blizzard of unique details to memorize about the patients we portray. By the time I’m ready for my fake exam, I can rattle off what vaccinations I’ve had, how long I’ve worked at my job, whether I’ve had my tonsils out, when my mother died, how much weight I’ve gained or lost in recent months, which vitamins I take, how much coffee I drink, how chatty I tend to be, and whether I’ve traveled recently (and might have parasites!).

There’s no script for my encounters, because you never know what the students might ask, say, and do. So I improvise most of my responses, in keeping with the facts I’ve been given. What do I usually eat for breakfast? What do they make at the factory where I work? What sexual acts do my partner and I engage in? My ad-libs are acceptable, according to the grades I get from staff members who occasionally observe the encounters via camera. But many of my colleagues are professional actors, and their performances are superb. We sometimes work in pairs, and more than once I’ve found myself deeply moved—even to the verge of tears—by my partner’s fake suffering.

Of course, we SPs are not the only ones faking it in these sessions; the students are playing along, too. We score them on as many as 50 different elements, including their tone of voice (was it friendly but professional?), their body language (did they lean in to show engagement?), and their facility at palpating our spleens (did they dig in firmly in the right spot?). Most important, we are meant to check that they are learning empathy. Numerous studies have shown that more empathetic care is correlated with better clinical outcomes, perhaps because it makes patients more inclined to share their full medical history, and more likely to stick with whatever treatment has been recommended. In one survey, orthopedic-surgery patients reported that a doctor’s empathy was more central to their satisfaction than the time it took to get an appointment, how long they were stuck in the waiting room, or even what sort of treatment they ended up receiving.

It may not even matter if the doctor’s kindness is sincere, as long as it sounds that way to patients. Dave Hatem, an internist and professor emeritus at UMass who has helped oversee the school’s SP curriculum, told me that even just the act of trying to say empathetic things is valuable for students. “If you get the right words to come out of your mouth, and you do it often enough, then you get to the point where you really mean it,” he said.

Most of the medical students who examine me do seem genuine in their concern. I suspect that if it were up to them, they’d practice medicine this way for the whole of their careers. But however much they might want to provide the superb treatment that I experience as a standardized patient, the health-care system won’t let them.


Elaine Thompson is a recent graduate of Emory University’s medical school, where she learned to provide the same sort of long, thoughtful, whole-person interactions that I get from students. For the past three years, she has been an ear, nose, and throat resident at Johns Hopkins Medicine, one of the best medical centers in the world. Her real-life patient encounters now last for an average of 10 minutes.

“You quickly learn as a resident that the job is to move things along,” Thompson told me. “I’m still curious about my patients as people and want to learn about their families, but if it’s not relevant to their current problem, then asking about it opens a door that will add time to the visit.” So much for chatting to put them at ease, soliciting a full narrative of their symptoms, hearing all their concerns, asking about their job, uncovering anxieties, addressing financial and social challenges, and encouraging their questions. (In an emailed statement, a spokesperson for Johns Hopkins Medicine said that it is committed to delivering “patient-centered training” and “whole person care.”)

[Read: Learning empathy from the dead]

The same is true for Emily Chin, who received her medical degree from UMass in 2023 and is now an ob-gyn resident at UC San Francisco. She told me that she got the message about keeping visits short early on from senior residents, who made a point of tracking the length of her encounters. “I’d just have time to check the cervix, do a quick ultrasound, and then make a decision about admitting or discharging the patient,” she said. Another source of pressure is the knowledge that spending any extra time with a patient means that dozens of other patients will be waiting longer to be seen: “You see the patients piling up in the waiting room, and you see the schedule screen going red.” (UCSF’s vice dean for education, Karen Hauer, did not object to this characterization, but noted that the school advises its residents on how to establish patient rapport when time is short.)

Residents also learn that time is money. Hospitals and practices view a doctor’s interactions with a patient in terms of “relative value units.” Reimbursement for seeing a patient whose high cholesterol leads to a prescription for a statin might bring $60 into the hospital or clinic. Reimbursement for extra time spent discussing the patient’s fears of side effects and concerns about affording the drug’s co-pay or making dietary changes brings in $0. “That doesn’t exactly encourage providing the most empathetic, patient-centered care,” a UMass Memorial Health resident named Hans Erickson told me.

The residents I spoke with worried that these time pressures were only going to get worse when they finished residency and became full-fledged doctors. In light of those constraints, does it still make sense to emphasize highly empathetic care for students? I asked that question of Melissa Fischer, the physician who directs the SP program and other simulation training at UMass. Fischer argues that the lessons we impart to students can survive the crush of residency, even if they have to be applied in abbreviated ways. “That interest in building connections to patients stays,” she said. “They just have to find faster ways to build them.”

[Read: How to teach doctors empathy]

Lisa Howley, an educational psychologist who serves as the senior director for transforming medical education at the Association of American Medical Colleges, told me that training up a generation of more empathetic medical students will make the health-care system better. “We think of young medical learners as agents of potential change,” she told me. “They’ll see the gaps and weaknesses, and they’ll look for ways to make improvements.” Besides, what would be the benefit of forcing medical students to learn about patient encounters in the hectic, abbreviated format they’ll confront as residents? “It doesn’t make sense to apply those pressures early in their education,” she said. After all, we don’t teach student pilots how to fly a plane while trying to make up for time lost to flight delays or dealing with unruly passengers.

All of the residents I spoke with said they look for ways to connect with patients despite the harsh realities of the system. “The desire to get to know the patient as a whole person doesn’t go away; it’s just a matter of finding ways to bring it to the surface as a stressed resident,” Erickson said. Chin put it this way: “It’s not that it’s challenging to keep up empathy, it’s that it’s hard to be empathetic all the time.”

At the end of my fake encounters, I try to be encouraging. I tell the students how I, as a patient, felt treated by them, and then I challenge them to give ideas for how they might improve. Sometimes, when one of them has done a bang-up job of making me feel heard, I tell them that I hope they’ll be able to sustain that level of engagement when they’re a practicing doctor—and I always get the sense that the students hope so too.


This article originally described “relative value units” as “revenue value units.”

Eat More Deer

Updated at 5:14 p.m. ET on January 9, 2026

The deer were out there. The crisp tracks in the snow made that clear. Three hours into our hunt through the frigid New Hampshire woods, Ryan Calsbeek, a rangy 51-year-old biology professor at Dartmouth, guessed that 200 animals were hiding in the trees around us. Calsbeek and I were 20 feet up a pignut hickory, crouching on a creaky platform. His friend Max Overstrom-Coleman, a stocky 46-year-old bar owner from Vermont, had climbed a distant tree and strung himself up by a harness, readying his compound bow and swaying in the wind. Shivering in camo jackets and neon-orange beanies, we peered into the darkening forest, daring it to move.

I had joined Calsbeek’s December hunt to try to get my hands on high-quality red meat. Calsbeek had yet to kill a deer that season, but in previous years, he told me, a single animal kept his family of four well fed through the winter. His young daughters especially liked to eat deer heart; apparently, it’s marvelously rich and tender. My mouth watered at the thought. The last time I’d tasted venison was more than a decade ago at a fancy restaurant in Toronto, where it was served as carpaccio, drizzled in oil and so fresh that it may as well have pranced out of the woods and onto my plate.

A bounty of such succulent, free-range meat is currently running through America’s backyards. The continental United States is home to some 30 million white-tailed deer, and in many areas, their numbers are growing too rapidly for comfort. Each year, a white-tailed doe can give birth to as many as three fawns, which themselves can reproduce as soon as six months later.

Wherever deer are overabundant, they are at best a nuisance and at worst a plague. They trample gardens, destroy farmland, carry ticks that spread Lyme disease, and disrupt forest ecosystems, allowing invasive species to spread. They are involved in tens of thousands of car crashes each year in New York and New Jersey, where state wildlife departments have encouraged hunters to harvest more deer. In especially populated regions, wildlife agencies hire sharpshooters to cull the animals. Last year, New Hampshire legislators expanded the deer-hunting season in an attempt to keep the population under control. By the looks of the forest floor, which was pitted with hoof marks and scattered with marble-shaped droppings, that effort was falling short.

Over the past decade, some states have proposed a simple, if controversial, strategy for bringing deer under control: Couldn’t people like me—who don’t hunt but aren’t opposed to it—eat more venison?

Venison may not be a staple of American cuisine, but it has a place in many people’s diets. Health influencers laud it as a lean, low-calorie, nutrient-dense source of protein. Venison jerky sticks are sold at big-box stores and advertised as snacks for people on Whole30 and keto diets. Higher-end grocery stores, such as Wegmans and Whole Foods, sell ground venison for upwards of $12 a pound, roughly twice the cost of ground beef.

Part of the reason venison is so expensive is that most of it is not homegrown. It’s mostly imported from New Zealand, which has sent more than 5 million pounds of the stuff to the U.S. every year since 2020. Beef, the dominant red meat in the States, has historically been more affordable. But beef prices jumped nearly 15 percent in 2025, and the conventional kind sold in most supermarkets comes from cattle raised in abysmal conditions. If high-quality venison were cheaper and more widely available, it could be an appetizing alternative.

In recent years, a few deer-swamped states, including New Jersey and Maryland, have tried to legalize the sale of hunted venison, which would deliver two key benefits: more deer out of the ecosystem and more venison on people’s plates. Despite the sport’s association with trophies, many deer hunters are motivated by the prospect of obtaining meat, and they can only consume so much. “It’s for your own table,” Overstrom-Coleman said as he fixed climbing sticks onto a tree to form a makeshift ladder. He had already stocked his freezer full of venison this season (“That son of a bitch,” Calsbeek whispered, once we’d left our companion in his tree) and planned, as many hunters do, to donate any excess meat to a food bank.

Hunting is waning in popularity, in part because younger people are less keen on participating than older generations. Efforts to bring in more hunters, such as programs to train women and youth in outdoor skills, are under way in many states. Women are the fastest-growing demographic, and they participate largely to acquire food, Moira Tidball, the executive director at the Cornell Cooperative Extension who leads hunting classes for women, told me. Still, interest is not growing fast enough for the subsistence-and-donation system to keep deer numbers in check.  

[Read: America needs hunting more than it knows]

It’s hard to imagine a better incentive for deer hunting than allowing hunters to sell their venison to stores and restaurants. But the idea is antithetical to a core tenet of American conservation. For more than 100 years, the country’s wild game has flourished under the protection of hunters and their allies, steadfast in their belief that the nation’s animals are not for sale.

The last time this many white-tailed deer roamed America’s woodlands, the country didn’t yet exist. To the English colonists who arrived in the New World, the deer bounding merrily through the forests may as well have been leaping bags of cash. Back home, deer belonged to the Crown, and as such, could be hunted only by the privileged few, Keith Tidball, a hunter and an environmental anthropologist at Cornell (and Moira’s spouse), told me. In the colonies, they were free for the taking.

Colonists founded a robust trans-Atlantic trade for deer hide, a particularly popular leather for making work boots and breeches, which drastically reduced the deer population. In Walden, Henry David Thoreau mentions a man who preserved the horns “of the last deer that was killed in this vicinity.” The animals were already close to disappearing from many areas at the beginning of what ecologists have called the “exploitation era” of white-tailed deer, starting in the mid-19th century. Fifty years later, America was home to roughly half a million deer, down 99 percent from precolonial days.

The commerce-driven decimation of the nation’s wildlife—not just deer but birds, elk, bears, and many other animals—unsettled many Americans, especially hunters. In 1900, Representative John Lacey of Iowa, a hunter and close friend of Theodore Roosevelt’s, introduced a bill to ban the trafficking of America’s wildlife. (As Roosevelt, who notoriously hunted to collect trophies, wrote in 1913, “If there is to be any shooting there must be something to shoot.”) The Lacey Act remains one of the most far-reaching federal conservation laws in existence today.

[From the May 1906 issue: Camping with President Theodore Roosevelt]

The law is partly contingent on state policies, which make exceptions for certain species. Hunters in most states, for example, can legally harvest and sell the pelts of fur-bearing species such as otters, raccoons, and coyotes. But attempts to carve out similar exceptions for hunted venison, including the bills in Maryland and New Jersey, have failed. In 2022, the Mississippi attorney general published a statement that opened up the possibility of legalizing the sale of hunted deer, provoking fierce opposition from hunters and conservationists; today, the option remains open but has not led to any policy changes. Last year, an Indiana state representative introduced a bill that would allow the sale of hunted venison, but so far it has gone nowhere.  

The practical reason such proposals keep failing is that allowing the sale of hunted meat would require huge investments in infrastructure. Systems to process meat according to state and federal laws would have to be developed, as would rapid testing for chronic wasting disease, an illness akin to mad cow that could, theoretically, spread to humans who eat infected meat, though no cases have ever been reported. Such systems could, of course, be implemented. Hunted venison is sold in some mainstream grocery stores in the United Kingdom, such as Waitrose and Aldi. (Notably, chronic wasting disease is not a concern there.)

[Read: Deer are beta-testing a nightmare disease]

Although the sheer abundance of deer makes them easy to imagine as steaks on legs, several experts cautioned that some people’s affection for the animals runs deep. Deer are cute; they’re docile; they’re Bambi. David Drake, a forestry and wildlife professor at the University of Wisconsin at Madison, likens them to America’s “sacred cow.” As Drake and a colleague have outlined in a paper proposing a model for commercialized venison hunting in the U.S., any modern system would be fundamentally different from the colonial-era approach because it would be regulated, mostly by state wildlife agencies. But powerful coalitions of hunters and conservationists remain both faithful to the notion that wild game shouldn’t be sold and fearful that history will repeat itself. As the Congressional Sportsmen’s Foundation, a national hunting association, puts it, “Any effort to recreate markets for game species represents a significant threat to the future of our nation’s sportsmen-led conservation efforts.” Some of the fiercest pushback to the New Jersey law, Drake told me, came from the state wildlife agency.

The only U.S. state with a deer-related exception to the Lacey Act is Vermont. During the open deer-hunting season (which spans roughly from fall to winter in the Northeast) and for 20 days afterward, Vermonters can legally sell any meat that they harvest. This policy was introduced in 1961, and yet, “I am not aware of anyone who actually takes advantage of it,” Nick Fortin, a wildlife biologist at Vermont’s Fish and Wildlife Department, told me. He added that the department, which manages the exasperated homeowners and destabilized forests that deer leave in their path, has been discussing how to raise awareness about the law.

Even after I explained the 1961 law to several Vermont hunters, they were hesitant to sell me any meat. Hunted meat is meant to be shared freely, or at most bartered for other items or goodwill, Greg Boglioli, a Vermont hunter and store owner, told me. I met Boglioli at the rural home of his friend Fred Waite, a lifelong hunter whose front room alone was decorated with 20 deer heads. I had hoped to buy venison from Waite, but he insisted on sharing it for free. After all, he had plenty. His pantry was crammed with mason jars of stewed venison in liver-colored brine. On a table in the living room was the scarlet torso of a deer that his son had accidentally hit with his truck the other day, half-thawed and waiting to be cooked.

During our hunt, I found Overstrom-Coleman to be more open to the idea of selling the venison he hunted. “I guess that would be a pretty excellent way to share it,” he said. Earlier in the season, he’d killed a deer in Vermont, and he was willing to sell me some of the meat the next day. At least, I thought as I stared into the motionless woods, I’d be going home with something.

[From the July/August 2005 issue: Masters of the hunt]

By the time the sun went down, the only deer I’d seen was a teetering doe in a video that Overstrom-Coleman had taken from his tree and sent to Calsbeek. “Too small to kill,” he texted; he’d meet us in the parking lot. The air was glacial as Calsbeek and I trudged empty-handed toward the trailhead, hoofprints glinting mockingly in the light of our headlamps. From the trunk of the car, we took a consolation swig of Wild Turkey from a frosted bottle, and Overstrom-Coleman reminded me to visit the next day.

I found his chest freezer stuffed with paper-wrapped packages stamped with Deer 2025. He handed me three and refused to let me pay. Back home a few days later, I used one to make meatballs. Their sheer depth of flavor—earthy and robust, with a hint of nuttiness—made me wonder why I bothered to eat farmed meat at all.


This article originally misidentified Max Overstrom-Coleman’s hunting weapon.

Every night before bedtime, my daughter tilts back her head so that a pair of metal plates inside her mouth can be cranked apart another quarter of a millimeter. We turn a jackscrew with a wire tip; it spreads the bones within her upper jaw. At times she groans or even cries: she says that she can feel the pressure up into her nose.

This is normal. My daughter is 9 years old. She has a palate expander.

So does her best friend, and, by her count, so does nearly one in four of the kids in her fourth-grade class. On Reddit’s r/braces forum, a practitioner based in Frisco, Texas, said he was surprised by “how many parents ask me, ‘Hey, does my child need an expander? Everyone else seems to have one.’” His colleagues seemed to notice something similar. “Everybody’s being told they have a narrow jaw, and everyone’s being given an expander,” Neal Kravitz, the editor in chief of the Journal of Clinical Orthodontics, told me.

A generation ago, getting braces was a rite of passage into seventh grade. Today, the reshaping of a child’s smile may commence a few years earlier, at 7, 8, or 9 years old. At that point, the two sides of the upper jawbone haven’t yet joined together, a fact that is propitious for a different orthodontic process: instead of straightening, expansion. During this phase of life, when kids still have some baby teeth, a tiny dungeon rack may be wedged between a child’s upper teeth, then used to spread her upper jaw and—proponents say—introduce essential room for sprouting teeth.

The expander is an old device; debates about its use are hardly any younger. What seems to have been the first expander was described in 1860, in the journal The Dental Cosmos, by a San Francisco dentist named Emerson Angell. He wrote of “an apparatus, simple and efficient,” that he’d placed into the mouth of a young patient. Then he’d told her to expand it, day by day, by advancing a central screw—just as my daughter does today. But the journal’s editors were skeptical of Angell’s work. We “must beg leave to differ with the writer in the conclusion arrived at,” they announced in a prefatory note, foreshadowing a long disagreement within the field.

This concerned the merits of expansion versus those of extraction—whether a child’s jaw should be broadened to accommodate her teeth, or whether certain teeth should be pulled to accommodate her jaw. Around the turn of the 20th century, the influential orthodontist Edward Angle favored jaw broadening; he believed that all children should have their teeth intact, nestled in a capacious jaw, as exemplified by a human skull that had been looted from an Indian burial mound not far from where he practiced, which he called “Old Glory.” A few decades later, though, orthodontic research found that expanded jaws might still “relapse” into a narrow shape. By the 1970s, pulling teeth became the rule, Daniel Rinchuse, a Seton Hill University professor of orthodontics, told me.

This consensus was itself short-lived, he said—not because the field had come across some new and better mouth-expanding tech but because of fears about the supposed ill effects of doing too many extractions. Some dentists claimed that what was then the standard approach in orthodontics could even lead to painful disorders of the temporomandibular joint, or TMJ. In the face of these concerns, expanders made a comeback.

Eventually, some orthodontists started claiming that expanders had another major benefit—that prying open a child’s palate could improve her breathing and prevent sleep apnea. Some now recommend this airway-focused intervention not just for kids my daughter’s age but for toddlers too.

The basis for the trend was never really scientific, though. “Do expanders prevent obstructive sleep apnea? In capital letters: NO WAY,” Kravitz said. “There are endless research papers on this stuff.” The problem isn’t that expanders have no value, he continued; it’s that they’re clearly overused. According to Rinchuse, who co-edited the book Evidence-Based Clinical Orthodontics, the idea that extracting teeth will lead to joint disorders has never been proved. Indeed, no “high-quality evidence” supports expansion of the upper jaw for any reason, he said, except in cases where a child has been diagnosed with posterior “crossbite.” He said that, overall, orthodontic practice is less constrained by evidence than other fields of health care are, because the ill effects of bad decisions will be slight. As he put it, “In orthodontics, no one dies.”

Steven Siegel, the current president of the American Association of Orthodontists, acknowledged that some practitioners may be inclined to put a rack on every child’s palate: “There are some abuses,” he told me. But he also argued that the recent increase in expander use hasn’t really been dramatic, and that for the most part, the devices are used to positive effect. For people with a narrow jaw and crowded teeth, he said, expanders can prevent the need for extractions down the road; some kids, at least, could see improvements in their breathing. When I noted that I’d heard the opposite on both counts from Kravitz and Rinchuse, he responded that they simply disagreed. “I have great respect for both of them,” he said. “I would say that there is a controversy.”

For the record, my daughter is delighted by the treatment she’s received: In a recent family interview, conducted over breakfast, she described her course of orthodontics as “cool and fun.” Her orthodontist (who happens to be a former high-school classmate) has been thoughtful and communicative, and I’ve recommended her to several other families. Still, despite the fact that no one dies from orthodontics, one might also choose to avoid a treatment that costs several thousand dollars, has disputed benefits, and may cause modest pain—not to mention any moral injury that may accrue from tilting back your daughter’s head and cranking open metal plates to wrench her face apart.

And whatever caused expander mania, its existence can be jarring for a parent who grew up in the prior era of orthodontics. Indeed, the period during which this trend developed—from, say, the late 1980s until the early 2020s—happens to coincide with the stretch that intervened between my own entry into middle school and my daughter’s. For my fellow members of this cohort, expansion of the fourth-grade palate appears to be a strange and sudden social norm. During one visit to the orthodontist, my daughter and I found a handful of children about her age seated in a line of dental chairs, with technicians leaning over each of them to turn the screw of their expander. It was like we’d all gathered there for some initiation rite for children of the tribe that dwells on Cobble Hill in Brooklyn—a ritual of widening.

Not long after that, I called up Luke Glowacki, an anthropologist at Boston University who co-directs a research project in Ethiopia’s Omo Valley, where body modifications—and dental modifications in particular—are not uncommon. He told me about social groups there and elsewhere in which a child’s teeth might be filed down to points or a person’s lower lip stretched out with a plate.

Is orthodontics any different? It presents itself as curative and scientific, but many orthodontists’ websites are replete with beauty claims as well: An expander may “protect your child’s facial appearance” or provide “enhancement to the facial profile.” Siegel said that a broadened palate gives “a more aesthetic width of the smile.” Kravitz said that it could help shrink the unattractive gaps inside a person’s cheeks—“dark buccal corridors,” in the language of the field.

In East Africa, dental and other body modifications carry similar ambiguities of purpose. Filing down a person’s teeth, for instance, or removing them altogether “may also be done for ostensible health reasons,” Glowacki said. Some body-modification rituals could be understood to ward off harmful spirits, for example. In other words, they’re prophylactic. Glowacki also told me about a Nyangatom woman he knows who has scars carved into both her shoulder and forehead. The former are purely decorative, but she’d received the latter on account of being sick.

Glowacki is a parent, too, and I asked him whether his training as an anthropologist affected how he thought about expanders or other anatomical procedures, such as ear piercing, that are carried out on children in the United States at industrial scale. “You’re not gonna find any society in the world that doesn’t modify their body in some way in accordance with their ideas of beauty or of health,” he said. “We’re doing what societies all over the world do.” If now I’ve paid an orthodontist to reshape my daughter’s mouth, maybe that’s just human nature.

On a recent Tuesday morning, I was blessed with a miracle in a mini-mart. I had set out to find the protein bar I kept hearing about, only to find a row of empty boxes. But then I spotted the shimmer. Pushed to the back of one carton, gleaming in its gold wrapper, was a single Salted Peanut Butter David Protein Bar. It was mine.

David bars are putty-like rectangles of pure nutritional efficiency: 28 grams of protein stuffed into 150 calories, or roughly the equivalent of eight egg whites cooked without oil. They are booming right now. After all, in this era of protein mania, one must always be optimizing. A Quest bar might get you 20 grams of protein for just under 200 calories, but David—named after Michelangelo’s masterpiece—does more for less. “Humans aren’t perfect,” promises one David tagline, “but David is.” Why, given the possibility of perfection, would you accept eight grams less?

If a food with more protein is better, then it follows that a food with less is worse. After eating my David bar, I couldn’t help but feel a little bit bad about my dinner of brown rice and spicy chickpeas. A cup of Eden Foods organic chickpeas (240 calories) gets you a measly 12 grams. Now that I was living in the world of David, I was newly ambivalent about eating anything that wasn’t chunks of unadulterated protein. I am fueling, I thought, shoving cubes of baked tofu into my mouth. Did you know that green peas have an unusual amount of protein for a vegetable? With unsettling frequency, I began to add frozen peas to my dinners. (They’re not great on cacio e pepe, it turns out.)

I have become quietly obsessed with this one single macronutrient. How could I not be? Everything is protein now: There are protein chips and protein ice creams and cinnamon protein Cheerios. Lemonade is protein, and so is water. Last month, Chipotle introduced a “high protein cup” consisting of four ounces of cubed chicken. Melanie Masarin, the founder and CEO of Ghia, a nonalcoholic-drink brand, recently told me that an investor asked her whether Ghia has plans for a high-protein aperitif. No, but the investor’s logic was obvious: Healthy people, the kind who tend to watch their drinking, only want one thing. This week, the federal government released its latest set of dietary guidelines—including a newly inverted food pyramid. At the top is protein.

[Read: Protein madness has gone too far]

In some ways, protein is just the latest all-consuming nutritional fixation. For decades, the goal was to avoid fat, which meant that pretzels were good and peanut butter was bad and fat-free Snackwell’s devil’s-food cookie cakes were a cultural phenomenon. Then Americans rediscovered fat and villainized carbs. But protein is different. Whatever your dreams are, protein seems to be the answer. It supports muscle gain, for those trying to bulk up, but it’s also satiating, which means people trying to lose weight are also advised to eat more protein. It has the power to make you bigger and more jacked, but also smaller and more delicate. People on GLP-1s are supposed to be especially mindful of their protein intake, to prevent muscle loss on extremely low-calorie diets, but so are weight lifters.

It is a nutritional philosophy that encourages not restriction but abundance: as much protein as possible, all the time. You can have your cake and eat it too (as long as it is made with “protein flour”). In a world where the very act of eating feels fraught, layered with a lifetime of rules and fads and judgments about what food is and is not “good,” protein offers absolution: You don’t have to feel bad about this. It has so many grams! What a beautifully straightforward recommendation: Eat more of this one thing that happens to be everywhere, and that frequently tastes good.

The low rattle of protein mania—the protein matchas and protein Pop-Tarts and protein seasonings to sprinkle on your protein chicken cubes—can be as maddening as it is inescapable. Everybody knows that you are supposed to eat a varied diet with many different types of foods that provide many different nutrients. But only protein is endowed with a special kind of redemptive power. Nobody is pretending that tortilla chips are a cornerstone of a balanced diet, but if they’re protein tortilla chips (7 grams), well, then maybe they’re at least fine. This is fantastic news if your goal is to enjoy tortilla chips, but it does have a tendency to recast all food that has not been protein-ified—either by nature or by the addition of whey-protein isolate—as a minor failure. It is depressing to look at a pile of roasted vegetables, arranged elegantly over couscous, and think: I will try harder tomorrow. I know, because I do it.

Protein is supposed to allow people to realize their untapped potential—to make us stronger and sharper. I suspect, though, that I would be stronger and sharper if I could stop ambiently thinking about my protein intake. That the world is now covered in a protein-infused haze provides constant reminders that I am falling short. Lots of protein evangelists will tell you that this is how cavemen ate, and therefore it is good. I think the best part of being a caveman would be not worrying about protein.

As nutritional trends go, there are worse obsessions than protein. Even if there is still significant debate about how much protein one needs, you are unlikely to send yourself into kidney failure because you protein-maxxed too hard. But the fanatical focus on protein as the true answer, the universal key to transforming the body you have into the one you want—7 grams, 28 grams, 11 grams, a chicken smoothie—feels eerily familiar. We counted calories, grams of fat, carbohydrates, trying to distill the messy science of nutrition into one single quantitative metric. Protein, for all its many virtues, is just another thing to count.

The flu situation in the United States right now is, in a word, bad. Infections have skyrocketed in recent weeks, filling hospitals nearly to capacity; viral levels are “high” or “very high” in most of the country. In late December, New York reported the most flu cases the state had ever recorded in a single week. My own 18-month-old brought home influenza six days before Christmas: He spiked a fever above 103 degrees for days, refusing foods and most fluids; I spent the holiday syringing electrolyte water into his mouth, while battling my own fever and chills. This year’s serving of flu already seems set to be more severe than average, Seema Lakdawala, a flu virologist at Emory University, told me. This season could be a reprise of last winter’s, the most severe on record since the start of the coronavirus pandemic—or, perhaps, worse.

At the same time, what the U.S. is experiencing right now “fits within the general spectrum of what we would expect,” Taison Bell, an infectious-disease and critical-care physician at the University of Virginia Health System, told me. This is simply how the flu behaves: The virus is responsible for one of the roughest respiratory illnesses that Americans regularly suffer, routinely causing hundreds of thousands of people to be hospitalized annually in the U.S., tens of thousands of whom die. (So far this season, the flu has killed more than 5,000 people, including at least nine children.) Influenza is capable of even worse—sparking global pandemics, for instance, including some of the deadliest in history. These current tolls, however, are well within the bounds of just how awful the “seasonal” flu can be. “It’s another flu year, and it sucks,” Bell said.

Although flu is a ubiquitous winter illness, it is also one of the least understood. Scientists have been puzzling over the virus for decades, but many aspects of its rapid evolution and transmission patterns, as well as the ways in which our bodies defend against it, remain frustratingly mysterious. Flu seasons, as a rule, differ drastically from one another, and “we don’t have a great understanding of why one ends up being more severe than another,” Samuel Scarpino, an infectious-disease-modeling researcher at Northeastern University, told me. Experts’ flu-dar has also been especially out of whack in recent years, since the arrival of COVID-19 disrupted typical flu-transmission patterns. (An entire lineage of flu, for instance, may have been driven to extinction by pandemic-mitigation measures.) The virus is still finding its new norm.

Even so, a few things about this season’s ongoing torment are clear. Much of the blame rests on the season’s dominant flu variant—subclade K, which belongs to the H3N2 group of influenza. As flus go, H3N2s tend to be more likely to hospitalize and kill people; most of the worst flu seasons of the past decade in the U.S. have been driven by H3N2 surges. Subclade K doesn’t seem to be an unusually virulent variant, which is to say it’s probably no more likely to cause severe disease than a typical version of H3N2. But it does seem to be better at dodging our immune defenses, and the net effect is much the same: More people get sicker than they otherwise would. That’s not a trivial effect for a disease that, even in mild cases, can cause days of high fevers and chills, followed by potentially weeks of that delightful run-over-by-a-truck feeling.

At UVA Health, Bell has seen a major uptick in people testing positive for the virus in recent weeks. Like others, his hospital is close to full, straining its capacity to treat other illnesses, he said. In Michigan, too, where Molly O’Shea cares for children at multiple pediatric practices, “we are seeing a ton of influenza, just a ton,” she told me. “Our schedule is overflowing.” Several of her school-age patients have wound up in the hospital, despite being previously healthy; a few have ended up with serious complications such as pneumonia and brain inflammation. The worst cases, she said, have been among the children who didn’t get their annual flu shot.

Flu vaccines are not among the most impressive immunizations in our roster. Although they’re generally pretty effective at protecting against severe disease, hospitalization, and death, they don’t reliably stave off infection or transmission. And they’re frequently bamboozled by the virus itself, which shape-shifts so relentlessly throughout the year, as it ping-pongs from hemisphere to hemisphere, that by the time flu vaccines roll out to the public, they’re often at least a little out of sync with what’s currently circulating.

That’s another aggravating factor this year. Researchers first detected subclade K in June, months after experts selected the strains that would go into the fall flu-vaccine formulation. Recent data suggest that vaccination may still elicit some immune defenses that recognize subclade K, and preliminary estimates from the United Kingdom suggest that this year’s formulations may be especially effective at preventing severe disease in children, who, along with the elderly, are highly vulnerable to the flu. (For all the misery my family endured, none of us ended up in the hospital—which suggests that our vaccinations did their job.)

Children also tend to be the biggest drivers of flu’s spread. “They are the source, many times, of explosions of transmission,” Lakdawala told me. In the U.K., for instance, which experienced an unusually early start to the flu season, school-age kids appear to have driven much of the epidemic, Scarpino pointed out. In the U.S., too, case rates among children have been particularly high. Although the vaccine primarily limits severe disease, it can also affect how quickly the virus travels through a community. And yet only about half of American kids get the vaccine each year, despite long-standing universal recommendations for annual immunization. “It’s a vaccine that parents have never really treated as a vaccine that every child should get,” O’Shea said.

Those choices might be influenced by the ways many people underestimate the flu—a term often used to describe any cold-weather ailment that comes with a runny nose, cough, or even gastrointestinal upset. In reality, flu has long ranked as one of the U.S.’s top 10 or top 15 causes of death—a scourge that, through its impact on the health-care system, the workforce, and the economy at large, costs the country billions of dollars each year. Against such a substantial threat, we should be using “everything in our toolbox to protect ourselves,” Lakdawala said.

Yet the Trump administration is actively impeding the process of flu vaccination. Health and Human Services Secretary Robert F. Kennedy Jr. has also said that it may be “a better thing” if fewer people are immunized against the flu—and insisted, incorrectly, that “there is no scientific evidence that the flu vaccine prevents serious illness, hospitalizations, or death in children.” The federal government recommended annual flu vaccines for all children until earlier this month, when HHS pushed through changes that demoted multiple immunizations from its recommended schedule. HHS now says that families should consult with their health-care provider before taking the shot. Such a recommendation suggests that the vaccines’ overall benefits are ambiguous enough to require discussion—and puts an additional burden on both patients and health-care providers, who can administer what was once a routine vaccine only after a conversation that must then be documented.

The nation’s leaders have also compromised one of the country’s best chances to develop more effective, better-matched flu vaccines in the future, by defunding research into mRNA vaccines. The current flu-vaccine manufacturing process takes so long that the included strains for the Northern Hemisphere must be selected by February or so—which provides plenty of time for the virus to evolve before the autumn rollout begins, as happened this year. “We pretty regularly have a bad match for the flu,” Scarpino said. mRNA vaccines promised the possibility of faster development, allowing researchers to stay more closely on the flu’s heels and switch out viral ingredients in as little as two or three months. That degree of flexibility also would have sped the response to the next flu pandemic.

In an email, Andrew Nixon, HHS’s deputy assistant secretary for media relations, disputed the characterization that the department’s new policies impede flu vaccination, writing, “Providers continue to offer flu vaccines, and insurance coverage remains unchanged. The recommendation supports shared clinical decision-making between patients and clinicians and does not prevent timely vaccination. People can continue to receive flu vaccines if they choose to do so.”

For the current season, much of the U.S.’s fate may already be sealed: Fewer than half of Americans have gotten a flu vaccine this season, while the virus continues to spread. “If you find yourself in a place where there are people sick with flu, you’re probably gonna get sick,” Scarpino said. That logic likely holds true for his own family, in Massachusetts, where flu activity has been high for weeks. They’ve so far made it through unscathed, but Scarpino said, “I feel like it’s a matter of time.”

Updated at 6 p.m. ET on January 12, 2026

Antiviral drugs for influenza, the best known of which is Tamiflu, are—let’s be honest—not exactly miracle cures. They marginally shorten the course of illness, especially if taken within the first 48 hours. But amid possibly the worst flu season in 25 years, driven by a variant imperfectly matched to the vaccine, these underused drugs can make a bout of flu a little less miserable. So consider an antiviral. And specifically, consider Xofluza, a lesser-known drug that is in fact better than Tamiflu.

The culprit behind this awful flu season is subclade K, a variant of H3N2 discovered too late to be incorporated into this year’s flu vaccine. Early data suggest the shot likely does confer at least some protection against this variant, but the jury is still out on whether that protection is much eroded from usual. What is undeniable, though, is a recent explosion of influenza cases. In New York, which was hit early and hard, the number of people hospitalized for flu broke records. Across the rest of the country, cases have been going up in a “straight line,” nearly everywhere all at once, which is highly unusual, Arnold Monto, an epidemiologist at the University of Michigan who has been studying influenza for some 60 years, told me last week. Cases seem to be finally leveling off now, but much misery still lies ahead.

For flu, antivirals are a second but oft-overlooked line of defense after vaccines. “We are dramatically and drastically underutilizing influenza antivirals,” Janet Englund, a pediatric-infectious-disease specialist at the University of Washington, told me. Even the older, more commonly prescribed drug Tamiflu reaches only a tiny percentage of flu patients every year. Actual numbers are hard to come by, but compare the estimated 1.2 million prescriptions for Tamiflu and its generic form in 2023 with the roughly 40 million people who likely got the flu in the winter of 2023–24. Xofluza is even less popular, and exact prescription numbers even harder to find. But they are possibly just 1 to 10 percent of Tamiflu’s.

The two antivirals are equally effective at allaying symptoms, both shortening the duration of flu by about a day. But Xofluza, which was approved in 2018, offers some tangible benefits over Tamiflu.

First, Xofluza is simply more convenient: a single dose, compared with Tamiflu’s 10, taken twice a day over five days. It also causes fewer of the gastrointestinal side effects, such as vomiting and nausea, that patients on Tamiflu will sometimes experience. All in all, a course of Xofluza might be easier for you—or your kid already queasy from the flu itself—to get down and keep down. (That is, if they are old enough to take it: Xofluza is approved for kids ages 5 and up in the United States, but ages 1 and up in Europe; only Tamiflu is recommended for kids down to newborn age as well as for women who are pregnant or breastfeeding.)

Second, Xofluza makes you less contagious to the rest of your family. It drives down the amount of virus spewed by sick patients more quickly than Tamiflu, possibly because of differences in how the two drugs work. Whereas Xofluza stops the virus from replicating, Tamiflu can only prevent already replicated viruses from exiting infected cells to infect others. In a study that Monto led last year, Xofluza cut household transmission by almost one-third compared with a placebo. Tamiflu might reduce transmission too, according to other studies, but probably to a lesser degree than Xofluza.

Third, Xofluza is better at heading off serious post-flu complications such as pneumonia or myocarditis. Patients on Xofluza needed fewer ER visits and hospitalizations than did those on Tamiflu, according to studies of large real-world data sets from insurance claims and medical records. This means that Xofluza should be the antiviral of choice for high-risk patients, including those over 65, who are most prone to these complications, Frederick Hayden, a flu expert at the University of Virginia who led one of the original Xofluza trials, told me. (Hayden has consulted on an unpaid basis, aside from travel expenses, for the companies behind Xofluza.)

The fourth advantage is less relevant to this season because the dominant subclade belongs to the influenza A family. But Xofluza is noticeably more effective against influenza B than Tamiflu, which tends to falter against this family of viruses.

Despite these benefits, awareness of Xofluza remains low. “It hasn’t been used as much as it should be,” Monto said, for reasons of cost and accessibility. Tamiflu, first approved in 1999, is available as a generic for less than $30 even without insurance. Xofluza is still patented and runs $150 to $200 a person. Because it’s less popular, pharmacies are less likely to stock it, making doctors less eager to prescribe it, and so on. In October, though, the company that markets Xofluza in the U.S. launched a direct-to-consumer program that sells the drug for the comparative bargain price of $50 without insurance, along with same-day delivery in some areas. Even the flu-drug experts I spoke with, though, were not all aware of this new, more accessible route. The CDC still lists Tamiflu first and foremost in its recommendations, too.

Wider use of flu antivirals would also require better testing. Both Xofluza and Tamiflu are most effective within the first 48 hours of symptoms, and the earlier the better. Traditionally, a sick person would have to get to a doctor, get a flu test, get a prescription, and finally get to a pharmacy—which can easily put them past the first 48 hours. But COVID popularized at-home rapid testing, and combination COVID-flu tests have landed on pharmacy shelves recently. With telehealth and home delivery, you can get an antiviral without ever leaving the house.

Still, the at-home tests are expensive, Englund pointed out, about $20 a pop here, compared with just a couple of bucks in Europe. The expense can add up for a whole family. In Japan, where antivirals are widely used, nearly everyone with a flu-like illness gets a routine rapid test and, if necessary, antivirals, both largely covered by the public health-care system. (Xofluza was developed by the Japanese company Shionogi, which also makes Xocova, a promising COVID antiviral that my colleague Rachel Gutman-Wei has written about but that is not available in the U.S.)

If the U.S. were better at using antivirals, especially in high-risk patients, the number of Americans dying of flu—roughly 38,000 last year—would likely drop, Cameron Wolfe, an infectious-diseases expert at Duke, told me. Doctors recommend that people at high risk for flu take antivirals prophylactically, upon exposure to anyone with flu, before symptoms appear. Both Xofluza and Tamiflu as prophylaxis can cut the chances of getting sick by upwards of 80 percent.

For healthy people who fall ill, antivirals can ease the burden of flu, which is nasty even when it is not deadly. “I don’t want you to be out of work longer than you need to be. I don’t want you to not be a caregiver for your kids,” Wolfe said. “Maybe you have business travel coming up, and I don’t want you to be sick still on that plane.” With challenges around access to antivirals, he said that “the best drug is the one you can get.” Both Tamiflu and Xofluza can make this historically bad flu season a little more bearable.


This story originally stated that Xocova, not Xofluza, when given as a prophylaxis for flu, cut the chance of illness by 80 percent. Xocova is a COVID antiviral.

Before Adam Sharples became a molecular physiologist studying muscle memory, he played professional rugby. Over his years as an athlete, he noticed that he and his teammates seemed to return to form after the offseason, or even from an injury, faster than expected. Rebuilding muscle mass and strength came easy: It was as if their muscles remembered what to do.

In 2018, Sharples and his research lab, now at the Norwegian School of Sport Sciences in Oslo, were the first to show that exercise could change how our muscle-building genes work over the long term. The genes themselves don’t change, but repeated periods of exertion turn certain genes on, spurring cells to build muscle mass more quickly than before. These epigenetic changes have a lasting effect: Your muscles remember these periods of strength and respond favorably in the future.

Intuitively, this makes sense. Past exercise primes your muscles to respond more robustly to more exercise. Over the past few years, Sharples’s lab has found that muscles have additional molecular mechanisms for remembering exercise; he and other scientists have been building on this research, too, confirming epigenetic muscle memory in young and aged human muscle, after different modes of training, as well as in mice. Now 40 years old, Sharples is still thinking about how our muscles remember but has lately been investigating the inverse trajectory: Do muscles have a similar memory for weakness?

The answer appears to be yes. “Our new data shows that muscle does not just remember growth—it also remembers wasting,” Sharples told me, of a study posted as a preprint on bioRxiv and currently in peer review at Advanced Science. “The more encounters you have with injury and illness, the more susceptible your muscle is to further atrophy. And, well—that’s what aging is, isn’t it?”

The Norwegian government’s research council has been funding Sharples’s research and has a vested interest in the lab’s discoveries. In the next decade, Norway is expected to become a “super-aged society,” in which more than one in five people are age 65 or older. Japan and Germany have already crossed this threshold, and the United States is expected to reach it by 2030. Age-related muscle weakness is a major factor in falling risk; falling is a leading cause of injury and death worldwide among people 65 and older. Better understanding how muscles remember and react to their weakest moments is a crucial step toward knowing what to do about it.

As part of the new study, Sharples’s team studied repeated periods of atrophy in young human muscle, using a knee brace and crutches to immobilize participants’ legs for two weeks at a time. This level of disuse, Sharples said, is comparable to real-world situations in which muscle rapidly loses size and function—limb immobilization after fractures or other injuries, periods of hospitalization or bed rest, reduced weight-bearing during recovery. A couple of years ago, I went to observe this research for my book On Muscle; one study participant, an avid skier and cyclist, told me he was shocked by how significantly the muscles in his leg deteriorated after just a couple of weeks of immobilization. The team also ran a concurrent study in aged rat muscle, in collaboration with Liverpool John Moores University; in both studies, repeated periods of disuse led to epigenetic changes—shifts in the way genes were expressed.

These changes affected the core functions of muscle cells, hampering the genes in mitochondria—the powerhouses of the cell, which generate the energy required to contract and relax muscle fibers. Letting muscles weaken suppressed genes involved in mitochondrial function and energy production in particular, including genes that are essential for muscle endurance and recovery. The researchers also found that a key marker of mitochondrial abundance dropped more drastically after repeated atrophy than after the first episode, indicating that repeated disuse makes muscle more vulnerable. In other words, the evidence suggests that every time you fall down the hole, it becomes more difficult to climb back out.  

Similar changes occurred in both the young human muscle and the aged rat muscle. But the young muscle could adapt and recover. After repeated atrophy, it showed a less exaggerated gene-expression response than the aged muscle did. “There seems to be some resilience and protection with young muscle the second time around,” Sharples said. He likened this to an immune-system response: Young muscle responds better to atrophy the second time because it has encountered it before and knows how to bounce back. By contrast, aged muscle becomes more sensitive after repeated atrophy, showing a worsened response with the second episode.

How long our muscles hold on to any of these memories is still up for debate. “Because of our study periods, we do know with some certainty that epigenetic memories can last at least three to four months, and that protein changes can also be retained,” Sharples said. “How long after that is difficult to say. But we know from our studies of cancer patients that epigenetic changes in muscle were retained even 10 years out from cancer survival.”

This was startling to hear. If an adverse health event is dramatic enough, like cancer, our muscles can carry the effects of that for a decade or more. More typically, though, inactivity, aging, and repeated episodes of disuse may gradually shift the system toward a state in which weakness becomes more entrenched and recovery slower.

Understanding what drives muscle to remember periods of stress—whether beneficial, like exercise, or damaging, like illness—could help us better judge what to do about this, says Kevin Murach, an associate professor at the University of Arkansas who studies aging and skeletal muscle and who was not involved in the new study. Knowing the mechanisms that drive beneficial changes at the molecular level could help develop drugs with similar effects. On the other end of the spectrum, if illness and immobilization have long-term negative effects, Murach told me, the next question to answer is: “Can we use exercise to offset that?”

Both Murach and Sharples said the data are getting only more robust that strength training, paired with endurance or high-intensity interval training, is the best therapy to protect against age-related loss of muscle and function. “Perhaps the key takeaway is that at any point along this continuum, new exercise or loading stimuli can still shift the balance back towards growth and health,” Sharples said. “I don’t think there is a point at which muscle can’t respond at all—it simply becomes less efficient when repeatedly weakened or when older.”

Identifying genes associated with muscle growth, as well as pharmaceutical targets, could mean that drugs or gene therapy may eventually be able to assist with boosting muscle response for people who cannot exercise. Murach and Sharples cautioned, though, that stimulating muscle-cell growth can have unintended consequences, in part because growth pathways are common across cell types—including cancer cells.

What the new work does show is that our muscle mass is not a blank slate. “What we’re finding suggests that our muscles may carry a history of both strength and weakness,” Sharples said. It’s shaped by factors including age, baseline muscle health, previous atrophy events, and previous exercise training. “And that history shapes how our muscles respond in the future.” I came away from our conversation thinking about the battle between positive muscle memory for strength and negative muscle memory for atrophy as a kind of tug-of-war: The two are constantly in tension, but the more experiences you have of one or the other, the more it pulls you into its embrace.

The Save gas station on the west side of Gary, Indiana, wants customers to know that they can pay for their groceries with food stamps. When I pulled into the parking lot last week, the first thing I saw was a blinking neon sign that read EBT for electronic benefits transfer, the prepaid cards used by food-stamp recipients. Inside, I spotted coolers packed with drinks, and shelves and shelves of snacks. But a black-and-white sign on the cashier window had a warning: As of January 1, soda and candy can no longer be purchased with food stamps.

Indiana is one of five states—along with Iowa, Nebraska, Utah, and West Virginia—that have begun banning the purchase of certain unhealthy treats with food stamps, a program formally known as the Supplemental Nutrition Assistance Program. They have all been spurred into action by Robert F. Kennedy Jr., who has made these bans a priority of his tenure as health secretary. “There’s no nutrition in these products,” Kennedy said in June, celebrating the policy at an event with Indiana’s governor. “We shouldn’t be paying for them with taxpayer money.” Later this year, 13 more states will start implementing similar changes to their food-stamp programs. The Trump administration is pushing more states to follow suit by giving those that do preferential access to a $50 billion pool of money meant to improve rural health care across the country.

In the two weeks since the first bans went into effect, the results have been messy. My trip to Indiana and conversations with officials in other states have suggested that the policies are disorienting, and the implementation has been inconsistent. Nowhere was that clearer than at the 20/20 Food Mart a few blocks away from Gary’s airport. When I entered the store, I was immediately confronted with a multi-shelf display of treats—chocolate-chip cookies, honey buns, double-chocolate muffins—all displayed next to handwritten signs that read Special: EBT item. This seemed like a mistake, but it wasn’t. Baked goods like these can still be bought with food stamps because Indiana’s new policy bans only the purchase of soft drinks and candy.

Baked treats aren’t alone in occupying this regulatory gray area. Protein bars can still be purchased with food stamps, even if they have the same amount of sugar as a chocolate candy bar; chocolate-covered nuts, however, cannot. Sugary, canned coffee is also okay, so long as it has milk. (The policy says that soft drinks do “not include beverages that contain milk or milk products.”) Iowa’s ban has a similar loophole. What can be purchased with SNAP is based on how food is taxed in the state, which has led to some perplexing scenarios. Iowans can use their EBT cards to buy a slice of cake—but not a fruit cup that comes with a spoon.

[Read: Republicans are right about soda]

What all of this shows is that banning junk food is more complicated than it seems. Previously, SNAP recipients could use their cards to purchase pretty much anything to eat besides hot food or alcohol. States are in the unenviable position of defining broad categories such as soda and candy and then figuring out whether any of the snacks you might find in a store are eligible for food stamps. On a public call with retailers, Indiana officials recently denied a request for a comprehensive list of the products that can and cannot be purchased, saying that they would need to wade through “tens of thousands, if not hundreds of thousands of products.” The inventory, the officials added, would quickly go out of date because of new product launches. However, the state has “a general list of commonly asked-about items,” a spokesperson for the Indiana Family and Social Services Administration told me.

Much of the burden for determining which items can or can’t be purchased falls to the best judgment of store clerks. In Indiana, retailers are responsible for knowing that the state defines soft drinks as “nonalcoholic beverages that contain natural or artificial sweeteners,” meaning that Gatorade is also banned. In spite of the challenges, stores appear to be implementing the changes fairly well, but some products are bound to fall through the cracks: At one gas station in Gary, I was incorrectly told that I could buy an energy drink with food stamps. At another, I was told I couldn’t buy bottled coffee, even though it had milk.

This puts food-stamp recipients in a tough situation. At a Family Dollar in Gary, the soda refrigerators were still decorated with SNAP stickers, implying that the drinks inside could be purchased. The store had also printed out signs warning about the new changes, but they were posted around boxes of cereal, which are still SNAP-eligible. At another Family Dollar in town, the signs were posted on a display of blankets, which never could be purchased with SNAP. Critics of these restrictions worry that such confusion could drive people away from the food-stamp program. “Singling out people who receive SNAP, policing their shopping carts, and delaying their purchases at the register would inevitably decrease participation rates,” states a recent essay in Georgetown University’s Journal on Poverty Law & Policy.

Before moving forward with these policies, Indiana, Iowa, and other states had to get approval from the Department of Agriculture, which oversees the food-stamp program. In previous administrations, the agency blocked attempts to crack down on junk food precisely because of the problems the states are now facing. In 2011, USDA denied New York City’s request to implement a soda ban, warning that “the proposal lacks a clear and practical means to determine product eligibility,” which would create problems for stores.

Much of the confusion that currently plagues these bans will likely subside over the next few months, as retailers and SNAP participants gain familiarity with the rules. (USDA has also announced that it will give retailers a 90-day grace period before it begins testing compliance.) Even with the messiness, the policies could still be a net positive for the health outcomes of food-stamp recipients, Alyssa Moran, a nutrition-policy researcher at the University of Pennsylvania, told me. According to the USDA’s own research, sugary drinks are among the most popular food-stamp purchases.

That said, the full effects of these bans will not be known until they are assessed by researchers, likely years from now. The USDA approved these bans as temporary pilots with the aim of evaluating exactly what cracking down on junk food would mean for public health. But according to Cindy Long, who in September stepped down as deputy undersecretary for food, nutrition, and consumer services at USDA, plans for evaluation so far have been thin. Nebraska’s proposed evaluation plan, for example, appears to be just one paragraph, which says that the state will evaluate SNAP participants’ spending habits quarterly and work with retailers to “determine the reduction in purchases of soda and energy drinks.”

Nebraska could still bulk up its research plan in the coming months—the plan states an intention to work with USDA “to determine the appropriate evaluation measures”—but the fact that it was approved by USDA with such little specificity marks a shift in how the administration is approaching these requests. New York’s proposed approach included a telephone survey, an evaluation of retail-sales data, and surveys of SNAP participants leaving grocery stores. Even then, an agency official wrote that “the proposed evaluation design is not adequate to provide sufficient assurance of credible, meaningful results.”

Exactly how USDA will now assess whether one state’s policy worked better than another’s remains to be seen. A spokesperson did not answer my questions about whether the government would evaluate the policies itself. “USDA continues to work with states by providing technical assistance to support these efforts,” the spokesperson told me.

Many Americans enthusiastically partake in Dry January, but it is rarely pitched as fun. After the holiday stretch of office parties and family gatherings, Americans have come to use the start of every year to abstain from alcohol in the name of health and auspicious beginnings. It’s a time of discipline, of cleansing, of embodying your mood board, even if it makes you a drag at parties. And it is also, as weed companies have learned, a marketing opportunity.

In recent years, weed companies have started to lean into the argument that taking the edge off sobriety with a low-dose gummy or THC drink still counts as dry. My social-media feeds are flooded with posts from cannabis companies pitching their products as fun and approachable tools to get through an alcohol-free month. Mary and Jane, an edibles company, makes a tantalizing proposition: “Dry January made easy.” Artet, which specializes in beverages, sells a “High & Dry January” bundle that includes a bottle of its THC-laced aperitif. Some products are conspicuously health-coded: North Canna describes its cannabis drinks as “functional,” and Feals highlights its edibles’ low calorie count. Above all, the ads emphasize how little booze you drink when you get high instead.

This push for a weed-filled January is, of course, a blatant (and somewhat silly) attempt by cannabis companies to get more customers. But as restrictions on marijuana loosen, and more Americans find themselves able and willing to fit the drug into their lives, Dry January does appear to be offering an opportunity for experimentation. In fact, cannabis sales surged in January 2024, and 21 percent of Dry January participants who responded to a 2023 survey swapped booze for weed that month.

None of the four cannabis-company founders I spoke with framed their products as replacements for alcohol per se. Still, many products marketed as Dry January aids aim to approximate the effect of having a single drink, leaving users buzzed but in control. These products tend to contain a low dose of THC, usually five milligrams or less. (One milligram of THC could give a weed newbie a pleasant buzz, and a heavy user might not feel five milligrams at all.) Wims, which sells THC-laced drink mix-ins, is designed to take effect and wear off in roughly the same amount of time as a serving of alcohol, Lauren Miller, one of the company’s co-founders, told me.

[Read: Pour one out for weed seltzer]

Even if THC at those doses can induce a loose state similar to alcohol’s, weed companies still have some ground to cover. In general, using cannabis as a substitute for social drinking is a harder sell than using it to avoid alcohol at home—not only because most bars don’t serve THC but also because the drug has a better chance of spurring you to melt into the couch than to mingle. Cannabis companies are trying to position their products to be used in the same context as alcohol. In states with looser cannabis laws, such as Minnesota and Tennessee, THC beverages from Nowadays are served at bars and hotels, Justin Tidwell, the company’s CEO and co-founder, told me. Wims can be dissolved into a drink, so “you don’t have to change your rituals or the way that you’re socializing,” Miller said.

The shaky logic of replacing one drug with another during a month dedicated to sobriety is hard to ignore. If the point of Dry January is to improve health, replacing alcohol with cannabis—which is not a benign substance—seems counterproductive. Far less is known about the long-term use of cannabis compared with alcohol, but both can be abused, cause dependence, and interfere with daily function and productivity, Ryan Vandrey, who helps run Johns Hopkins’s Cannabis Science Laboratory, told me. Some people are predisposed to react negatively to cannabis, experiencing anxiety, paranoia, or even cyclical vomiting. Over time, long-term heavy cannabis use can exacerbate mental-health conditions such as schizophrenia and depression. Plus, Vandrey said, weed hangovers are very real (if different from alcohol hangovers).

Still, for people with a more benign response to the drug, cannabis can be a genuinely useful tool for cutting back on booze, Vandrey said. If cannabis helps people drink less, it might indeed lower the health risks associated with excessive alcohol use, such as liver disease, cardiovascular problems, and cancer. Whatever the relative health benefits may be, Rachel Dillon, a co-founder of Mary and Jane, argues that cannabis is a realistic way to satisfy the all-too-familiar desire to decompress.

[Read: The new war on weed]

This month, I decided to put Dillon’s theory to the test. So far, High January, as I’ve come to call it, has mostly replaced my nightly glass of wine. Taking an evening 1.5-milligram gummy has subdued the urgency of the post-work rush—and, importantly, quieted any cravings for alcohol in that context. My mind is clearer, I’m sleeping better, and my mornings are less sluggish.

Yet cannabis has proved to be an imperfect tool for cutting down on my own alcohol consumption. The drug can’t quite re-create the intimacy of sharing a drink; during a recent late-night chat with a friend, I gave myself a free pass to enjoy a glass of Bordeaux. I’ve even experienced the all-too-real weed hangover. And I’ve felt conflicted about the need to soften my reality with any drug. Certainly, there are healthier ways to relax. Maybe I’ll discover them next January.

Chief among the burdens weighing upon the weary sports parent—worse than the endless commutes, the exorbitant fees, the obnoxious parents on the other team—is the sense that your every decision has the power to make or break your child’s future. Should your 11-year-old show up to her elementary-school holiday concert, even if it means missing a practice with the elite soccer team to which you’ve pledged 100 percent attendance? What if this turns out to be the fork in the road that consigns her to the athletic scrap heap?

These are heavy decisions—at least they are for me, a soccer dad who happens to have spent years writing about the science of athletic success. Making it to the pros, the conventional wisdom says, is a consequence of talent and hard work. Best-selling books have bickered over the precise ratio—whether, say, 10,000 hours of practice trumps having the so-called sports gene. But the bottom line is that you need a sufficient combination of both. If you’re talented enough and do the work, you’ll make it. If not—well, decisions (and holiday concerts) have consequences.

Rationally, stressing out over missing a single practice is ridiculous. Believing that it matters, though, can be strangely reassuring, because of the suggestion that the future is under your control. Forecasting athletic careers is an imperfect science: Not every top draft pick pans out; not every star was a top draft pick. Unexpected injuries aside, the imprecision of our predictions is usually seen as a measurement problem. If we could only figure out which factors mattered most—how to quantify talent, which types of practice best develop it—we would be able to plot athletic trajectories with confidence.

Unless, of course, this tidy relationship between cause and effect is an illusion. What if the real prerequisite for athletic stardom is that you have to get lucky?

Joseph Baker, a scientist at the University of Toronto’s Sport Insight Lab, thinks that the way talent development is usually framed leaves out this crucial ingredient. Baker is a prominent figure in the academic world of “optimal human development” who moonlights as a consultant for organizations such as the Texas Rangers. He’s also a longtime skeptic of the usual stories we tell ourselves about athletic talent. The most prominent is that early performance is the best predictor of later performance. In reality, many cases of early success just mean an athlete was born in the first months of the year, went through puberty at a young age, or had rich and highly enthusiastic parents.

This critique of talent is not entirely new. It’s been almost two decades since Malcolm Gladwell’s Outliers spurred a cohort of hyper-ambitious soon-to-be parents to begin plotting January birth dates (or at least to tell people they were considering it). Over time, the debate about what factors actually matter has devolved into a game of whack-a-mole. If physical development isn’t the best predictor of long-term success, then it must be reaction time, or visual acuity, or hours of deliberate practice. The default assumption is that there must be something that reveals the presence of future athletic greatness.

Baker’s perspective changed, he told me, when he read Success and Luck, a 2016 book by the former Cornell University economics professor Robert H. Frank. Frank describes a hypothetical sports tournament whose outcome depends 49 percent on talent, 49 percent on effort, and 2 percent on luck. In mathematical simulations where as many as 100,000 competitors are randomly assigned values for each of these traits, it turns out that the winner is rarely the person with the highest combination of talent and effort. Instead, it will be someone who ranks relatively highly on those measures and also gets lucky.
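The mechanics behind Frank’s result are simple enough to sketch in code. What follows is a minimal simulation in the spirit of his thought experiment, not a reproduction of his published model; the uniform random draws, the 2 percent weighting on luck, and the 100-tournament loop are illustrative assumptions.

```python
import random

# A minimal sketch of the kind of tournament Robert Frank describes:
# each competitor gets random talent, effort, and luck scores, and the
# overall score weights luck at just 2 percent.
def most_deserving_wins(num_competitors=100_000, luck_weight=0.02):
    competitors = [
        {"talent": random.random(), "effort": random.random(), "luck": random.random()}
        for _ in range(num_competitors)
    ]
    skill_weight = (1 - luck_weight) / 2  # 49 percent talent, 49 percent effort
    for c in competitors:
        c["score"] = (
            skill_weight * c["talent"]
            + skill_weight * c["effort"]
            + luck_weight * c["luck"]
        )
    winner = max(competitors, key=lambda c: c["score"])
    best_on_merit = max(competitors, key=lambda c: c["talent"] + c["effort"])
    return winner is best_on_merit

# Across repeated tournaments, the single most talented and hardest-working
# competitor almost never comes out on top.
wins = sum(most_deserving_wins() for _ in range(100))
print(f"Most deserving competitor won {wins} of 100 simulated tournaments")
```

With a field that large and luck weighted so lightly, the top performers on talent and effort are packed so tightly together that the small random term usually decides who finishes first.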

This turns out to be something like a law of nature: It has been replicated and extended by others since Frank’s book came out. Among the most influential models is “Talent Versus Luck,” created by the Italian theoretical physicist Andrea Rapisarda and his colleagues, which simulates career trajectories over dozens of years and reaches the same conclusion. This model earned a 2022 Ig Nobel Prize “for explaining, mathematically, why success most often goes not to the most talented people, but instead to the luckiest.”

To Baker, these models suggest that it’s not just hard to reliably predict athletic futures; it’s impossible. He cites examples including a youth-soccer player for Northampton Town who missed a text message from the team’s manager telling him that he’d been dropped from the roster for an upcoming game. He showed up for the bus, went along for the ride, subbed in when another player got injured, impressed the manager, earned a spot for the rest of the season, and went on to play in the Premier League. Luck takes many forms, such as genetics, family resources, and what sports happen to be popular at a given place at a given time. But sometimes, it’s simply random chance: a gust of wind or an errant bounce or a missed text.

It’s easy to see how luck shapes individual moments in sport—how it changes the course of a game, a series, even an entire season. But what’s harder to accept is that luck might also play a role in longer arcs—not just what happens in games but who appears on the court in the first place. The more you reckon with this, the more disorienting it can be, as things start to feel ever more arbitrary and unfair. As Michael Mauboussin, an investor who wrote about luck in his 2012 book, The Success Equation, put it to me: “Talking about luck really quickly spills into the philosophical stuff.”

You might think that the growing professionalization of youth sports offers an escape from this randomness—that by driving to this many practices and paying for that many coaches, you’re ensuring the cream will rise to the top. But the opposite is actually true, according to Mauboussin. In The Success Equation, he describes what he calls the “paradox of skill.” Now that every soccer hopeful is exhaustively trained from a young age, an army of relatively homogeneous talent is vying for the same prizes. “Everyone’s so good that luck becomes more important in determining outcomes,” Mauboussin said.

Baker and one of his colleagues at the University of Toronto, Kathryn Johnston, recently published a paper on the role of luck in athletic development in the journal Sports Medicine–Open. I felt a curious sense of relief when I read it. My daughters, who are 9 and 11, both play competitive soccer on teams requiring a level of commitment that I had naively thought went out of style with the fall of the Soviet Union. Seeing the evidence that future athletic success is not entirely predictable felt like a license for parents to loosen up a bit—to choose the holiday concert over the soccer practice without worrying about the long-term ramifications.

Linda Flanagan, the author of the 2022 book Take Back the Game and a frequent critic of today’s youth-sports culture, doesn’t share my optimism. She has no trouble believing that luck is involved with athletic success, but she doesn’t think that acknowledging this fact will change parental behavior. “Hell, they might double down on the investment in time and money, thinking that they need to give their child more chances to get lucky and impress the right coach,” she told me.

But that sort of luck—getting a job on your hundredth interview because the interviewer went to the same high school as you did, say—arguably is more about hustle than it is about serendipity. So is showing up to every soccer practice. Mauboussin’s definition of luck is narrower: It’s the factors you can’t control. No matter how much luck you try to “create” for yourself or your kids, some irreducible randomness might still make or break you.

To Baker, the takeaways from recognizing the role of luck are less about individual parents and more about how sports are organized. His advice to teams and governing bodies: “If there’s any way possible for you to avoid a selection, don’t select.” Keep as many athletes as you can in the system for as long as you can, and don’t allocate all of your resources to a chosen (and presumably lucky) few. When real-world constraints eventually and inevitably do require you to select—when you’re anointing these lucky few as your future stars, and casting out those who perhaps sang in one too many holiday concerts—try to leave the door open for future decisions and revisions. After all, Baker says, no matter how carefully you’ve weighed your predictions, “you’re probably wrong.”