I grew up in a nonstick-pan home. No matter what was on the menu, my dad would reach for the Teflon-coated pan first: nonstick for stir-fried vegetables, for reheating takeout, for the sunny-side-up eggs, garlic fried rice, and crisped Spam slices that constituted breakfast. Nowadays, I’m a much fussier cook: A stainless-steel pan is my kitchen workhorse. Still, when I’m looking to make something delicate, such as a golden pancake or a classic omelet, I can’t help but turn back to that time-tested fave.

And what a dream it is to use. Nonstick surfaces are so frictionless that fragile crepes and scallops practically lift themselves off the pan; cleaning up sticky foods, such as oozing grilled-cheese sandwiches, becomes no more strenuous than rinsing a plate. No wonder 70 percent of skillets sold in the U.S. are nonstick. Who can afford to mangle a dainty snapper fillet or spend time scrubbing away crisped rice?

All of this convenience, however, comes with a cost: the unsettling feeling that cooking with a nonstick pan is somehow bad for you. My dad had a rule that we could only use a soft, silicone-edged spatula with the pan, born of his hazy intuition that any scratches on the coating would cause it to leach into our food and make us sick. Many home cooks have lived with these fears since at least the early 2000s, when we first began to hear about problems with Teflon, the substance that makes pans nonstick. Teflon is produced from chemicals belonging to an enormous family known as perfluoroalkyl and polyfluoroalkyl substances, or PFAS, and research has linked exposure to them to many health conditions, including certain cancers, reproductive issues, and high cholesterol. And that is about all we know: In kitchens over the past two decades, the same questions about safety have lingered unanswered amid the aromas of sizzling foods and, perhaps, invisible clouds of Teflon fumes.

It is objectively ridiculous that the safety of one of the most common household items in America remains such a mystery. But the reality is that it is nearly impossible to measure the risks of PFAS from nonstick cookware—and more important, it’s probably pointless to try. That’s because PFAS have for many decades imparted a valuable stain- and water-resistance to many types of surfaces, including carpets, car seats, and raincoats.

At this point, the chemicals are also ubiquitous in the environment, particularly in the water supply. Last June, the Environmental Protection Agency established new safety guidelines for the level of certain PFAS in drinking water; a study published around the same time showed that millions of deaths are correlated with PFAS exposure. By the Environmental Working Group’s latest count, PFAS have contaminated more than 2,850 sites in 50 states and two territories—an “alarming” level of pervasiveness, researchers wrote in a National Academies of Sciences, Engineering, and Medicine report last year. But something about nonstick pans has generated the biggest freak-out. This is not surprising, given their exposure to food and open flames. After all, people do not heat up and consume raincoats (as far as I know).

Since research into their health effects began, certain types of PFAS have been flagged as more dangerous than others. Two of them, PFOA and PFOS, were voluntarily phased out by manufacturers for several reasons, including the fact that they were deemed dangerous to the immune system; now many nonstick pans specify that their coatings are PFOA free. (If you’re confused by all the acronyms, you aren’t the only one.) But other types of PFAS are still used in these coatings, and their risks to humans aren’t clear. Teflon’s manufacturer claims that any flakes of nonstick coating you might ingest are inert, but public studies backing up that claim are difficult to find.

In the absence of relevant data, everyone seems to have a different take on nonstick pans. The FDA, for example, allows PFAS to be used in nonstick cookware, but the EPA says that exposure to them can lead to adverse health effects, and last year proposed labeling certain members of the group as “hazardous substances.” According to the CDC, the health effects of low exposure to these chemicals are “uncertain.” Food experts are similarly undecided on nonstick pans: A writer for the culinary site Serious Eats said he “wouldn’t assume they’re totally safe,” whereas a Wirecutter review said they “seem to be safe”—if used correctly.

That’s about the firmest answer you’re going to get regarding the safety of nonstick cookware. “In no study has it been shown that people who use nonstick pans have higher levels” of PFAS, says Jane Hoppin, a North Carolina State University epidemiologist and a member of a National Academies of Sciences, Engineering, and Medicine committee to study PFAS. But she also told me that, with regard to the broader research on PFAS-related health risks, “I haven’t seen anybody say it’s safe to use.”

Certainly, more research could be done on PFAS, given the lack of relevant studies. There is no research, for example, showing that people who use nonstick pans are more likely to get sick. The one study on exposure from nonstick pans mentioned in the report that Hoppin and others published last year found inconclusive results after measuring gaseous PFAS released from heated nonstick pans, though the researchers tested only a few pans. Another study in which scientists used nonstick pans to cook beef and pork—and an assortment of more glamorous meats including chicken nuggets—and then measured the PFAS levels likewise failed to reach a conclusion, because too few meat samples were used.

More scientists could probably be convinced to pursue rigorous research in this field if PFAS exposure came only from nonstick pans. Even then, investigating the risks would be tough, perhaps impossible: Designing a rigorous study to test the risks of PFAS exposure would likely involve forcing unwitting test subjects to breathe in PFAS fumes or eat from flaking pans. And given that we are exposed to PFAS in so many other ways—drinking water being chief among them—what would be the point? “They’re in dental floss, and they’re in your Gore-Tex jacket, and they’re in your shoes,” Hoppin said. “The relative contribution of any one of those things is minor.”

As long as PFAS keep proliferating in the environment, we might never fully know exactly what nonstick pans are doing to us. The best we can do for now is decide what level of risk we’re willing to accept in exchange for a slippery pan, based on the information available. And that information is frustratingly vague: Most nonstick products come with a disclosure of the types of PFAS they contain and the types they do not. Sometimes they also include instructions to avoid high heat, especially above 500 degrees Fahrenheit. Hoppin recommends throwing nonstick pans away once they start flaking; in general, it seems worth it to use the pans only when essential. There is likewise a dearth of guidance on breathing in the fumes from an overheated pan, though breathing in PFAS fumes in industrial settings has been known to cause flulike symptoms. If you’re concerned, Hoppin said, you could use any of the growing number of nonstick alternatives, including ceramic and carbon-steel cookware. (Her preference is well-seasoned cast iron.)

Still, perhaps it’s time to accept that exposure to PFAS is inevitable, much like exposure to microplastics and other carcinogens. At this point, so many harmful substances are all around us that there doesn’t seem to be any point in trying to limit them in individual products, though such efforts are under way for raincoats and period underwear. “What we really need to do is remove these chemicals from production,” Hoppin said. The hope is that doing so would broadly reduce our exposure to PFAS, and there’s evidence that it would work: After PFOS was phased out in the early 2000s, its levels in human blood declined significantly. But until PFAS are more tightly regulated, we’ll continue our endless slide through nonstick limbo, with our grasp of the cookware’s safety remaining slippery at best.

I’ve tried to cut down on my nonstick-pan use for sheer peace of mind. Many professional chefs reject nonstick pans as unnecessary if you know the proper technique; French chefs, after all, were flipping omelets long before the first Teflon pan was invented—by a French engineer—in 1954. Fancying myself a purist, I recently attempted to cook an omelet using All-Clad stainless steel, following a set of demanding instructions involving ungodly amounts of butter and a moderate amount of heat. Unlike my resolve to avoid nonstick pans, the eggs stuck.

A few weeks ago, a three-inch square of plastic and metal began, slowly and steadily, to upend my life.

The culprit was my new portable carbon-dioxide monitor, a device that had been sitting in my Amazon cart for months. I’d first eyed the product around the height of the coronavirus pandemic, figuring it could help me identify unventilated public spaces where exhaled breath was left to linger and the risk for virus transmission was high. But I didn’t shell out the $250 until January 2023, when a different set of worries, over the health risks of gas stoves and indoor air pollution, reached a boiling point. It was as good a time as any to get savvy to the air in my home.

I knew from the get-go that the small, stuffy apartment in which I work remotely was bound to be an air-quality disaster. But with the help of my shiny Aranet4, the brand most indoor-air experts seem to swear by, I was sure I’d fix the place up. When carbon-dioxide levels increased, I’d crack a window; when I cooked on my gas stove, I’d run the range fan. What could be easier? It would basically be like living outside, with better Wi-Fi. This year, spring cleaning would be a literal breeze!

The illusion was shattered minutes after I popped the batteries into my new device. At baseline, the levels in my apartment were already dancing around 1,200 parts per million (ppm)—a concentration that, as the device’s user manual informed me, was cutting my brain’s cognitive function by 15 percent. Aghast, I flung open a window, letting in a blast of frigid New England air. Two hours later, as I shivered in my 48-degree-Fahrenheit apartment in a coat, ski pants, and wool socks, typing numbly on my icy keyboard, the Aranet still hadn’t budged below 1,000 ppm, a common safety threshold for many experts. By the evening, I’d given up on trying to hypothermia my way to clean air. But as I tried to sleep in the suffocating trap of noxious gas that I had once called my home, next to the reeking sack of respiring flesh I had once called my spouse, the Aranet let loose an ominous beep: The ppm had climbed back up, this time to above 1,400. My cognitive capacity was now down 50 percent, per the user manual, on account of self-poisoning with stagnant air.

By the next morning, I was in despair. This was not the reality I had imagined when I decided to invite the Aranet4 into my home. I had envisioned the device and myself as a team with a shared goal: clean, clean air for all! But it was becoming clear that I didn’t have the power to make the device happy. And that was making me miserable.

[Read: Kill your gas stove]

CO2 monitors are not designed to dictate behavior; the information they dole out is not a perfect read on air quality, indoors or out. And although carbon dioxide can pose some health risks at high levels, it’s just one of many pollutants in the air, and by no means the worst. Others, such as nitrogen dioxide, carbon monoxide, and ozone, can cause more direct harm. Some CO2-tracking devices, including the Aranet4, don’t account for particulate matter—which means that they can’t tell when air’s been cleaned up by, say, a HEPA filter. “It gives you an indicator; it’s not the whole story,” says Linsey Marr, an environmental engineer at Virginia Tech.

Still, because CO2 builds up alongside other pollutants, the levels are “a pretty good proxy for how fresh or stale your air is,” and how badly it needs to be turned over, says Paula Olsiewski, a biochemist and an indoor-air-quality expert at the Johns Hopkins Center for Health Security. The Aranet4 isn’t as accurate as, say, the $20,000 research-grade carbon-dioxide sensor in Marr’s lab, but it can get surprisingly close. When Jose-Luis Jimenez, an atmospheric chemist at the University of Colorado at Boulder, first picked one up three years ago, he was shocked that it could hold its own against the machines he used professionally. And in his personal life, “it allows you to find the terrible places and avoid them,” he told me, or to mask up when you can’t.

That rule of thumb starts to break down, though, when the terrible place turns out to be your home—or, at the very least, mine. To be fair, my apartment’s air quality has a lot working against it: two humans and two cats, all of us with an annoying penchant for breathing, crammed into 1,000 square feet; a gas stove with no outside-venting hood; a kitchen window that opens directly above a parking lot. Even so, I was flabbergasted by just how difficult it was to bring down the CO2 levels around me. Over several weeks, the best indoor reading I sustained, after keeping my window open for six hours, abstaining from cooking, and running my range fan nonstop, was in the 800s. I wondered, briefly, if my neighborhood just had terrible outdoor air quality—or if my device was broken. Within minutes of my bringing the meter outside, however, it displayed a chill 480.

[Read: The plan to stop every respiratory virus at once]

The meter’s cruel readings began to haunt me. Each upward tick raised my anxiety; I started to dread what I’d learn each morning when I woke up. After watching the Aranet4 flash figures in the high 2,000s when I briefly ignited my gas stove, I miserably deleted 10 wok-stir-fry recipes I’d bookmarked the month before. At least once, I told my husband to cool it with the whole “needing oxygen” thing, lest I upgrade to a more climate-friendly Plant Spouse. (I’m pretty sure I was joking, but I lacked the cognitive capacity to tell.) In more lucid moments, I understood the deeper meaning of the monitor: It was a symbol of my helplessness. I’d known I couldn’t personally clean the air at my favorite restaurant, or the post office, or my local Trader Joe’s. Now I realized that the issues in my home weren’t much more fixable. The device offered evidence of a problem, but not the means to solve it.

Upon hearing my predicament, Sally Ng, an aerosol chemist at Georgia Tech, suggested that I share my concerns with building management. Marr recommended constructing a Corsi-Rosenthal box, a DIY contraption made up of a fan lashed to filters, to suck the schmutz out of my crummy air. But they and other experts acknowledged that the most sustainable, efficient solutions to my carbon conundrum were mostly out of reach. If you don’t own your home, or have the means to outfit it with more air-quality-friendly appliances, you can only do so much. “And I mean, yeah, that is a problem,” said Jimenez, who’s currently renovating his home to include a new energy-efficient ventilation device, a make-up-air system, and multiple heat pumps.

Many Americans face much greater challenges than mine. I am not among the millions living in a city with dangerous levels of particulate matter in the air, spewed out by industrial plants, gas-powered vehicles, and wildfires, for whom an open window could risk additional peril; I don’t have to be in a crowded office or a school with poor ventilation. Since the first year of the pandemic—and even before—experts have been calling for policy changes and infrastructural overhauls that would slash indoor air pollution for large sectors of the population at once. But as concern over COVID has faded, “people have moved on,” Marr told me. Individuals are left on their own in the largely futile fight against stale air.

[Read: Put your face in airplane mode]

Though a CO2 monitor won’t score anyone victories on its own, it can still be informative: “It’s nice to have an objective measure, because all of this is stuff you can’t really see with the naked eye,” says Abraar Karan, an infectious-disease physician at Stanford, who’s planning to use the Aranet4 in an upcoming study on viral transmission. But he told me that he doesn’t let himself get too worked up over the readings from his monitor at home. Even Olsiewski puts hers away when she’s cooking on the gas range in her Manhattan apartment. She already knows that the levels will spike; she already knows what she needs to do to mitigate the harms. “I use the tools I have and don’t make myself crazy,” she told me. (Admittedly, she has a lot of tools, especially in her second home in Texas—among them, an induction stove and an HVAC with ultra-high-quality filters and a continuously running fan. When we spoke on the phone, her Aranet4 read 570 ppm; mine, 1,200.)

I’m now aiming for my own middle ground. Earlier this week, I dreamed of trying and failing to open a stuck window, and woke up in a cold sweat. I spent that day working with my (real-life) kitchen window cracked, but I shut it when the apartment got too chilly. More important, I placed my Aranet4 in a drawer, and didn’t pull it out again until nightfall. When my spouse came home, he marveled that our apartment, once again, felt warm.

The world has just seen the largest vaccination campaign in history. At least 13 billion COVID shots have been administered—more injections, by a sweeping margin, than there are human beings on the Earth. In the U.S. alone, millions of lives have been saved by a rollout of extraordinary scope. More than three-fifths of the population elected to receive the medicine even before it got its full approval from the FDA.

Yet the legacy of this achievement appears to be in doubt. Just look at where the country is right now. In Florida, the governor—a likely Republican presidential candidate—openly pursues the politics of vaccine resistance and denial. In Ohio, kids are getting measles. In New York, polio is back. A football player nearly died on national TV, and fears about vaccines fanned across the internet. Vaccinologists, pediatricians, and public-health experts routinely warn that confidence is wavering for every kind of immunization, and worry that it may collapse in years to come.

In other words, America is mired in a paradoxical and pessimistic moment. “We’ve just had a national vaccination campaign that has exceeded almost all previous efforts in a dramatic fashion,” says Noel Brewer, a psychologist at the University of North Carolina who has been studying decision making about vaccines for more than 20 years, “and people are talking about vaccination as if there’s something fundamentally wrong.”

It’s more than talk. Americans are arguing, Americans are worrying, Americans are obsessing over vaccines; and that fixation has produced its own, pathological anxiety. To fret about the state of public trust is rational: When vaccine adherence wobbles, lives are put in peril; in the midst of a pandemic, the mortal risk is even greater. More than 60 million Americans haven’t gotten a single COVID shot, and a few thousand deaths are attributed to the disease every week. But the scale of this concern—the measure of our instability—may be distorted by the heights to which we’ve climbed. Evidence that the nation has arrived at the brink of collapse does not hold up to scrutiny. No one knows where vaccination rates are really heading, and the coming crash is more an idea—a projection, even—than a certainty. The future of vaccination in America may be no worse than its recent past. In the end, it might be better.

The first alarms about a widespread vaccination crisis—the first suggestions that a leeriness of COVID shots had “spread its tentacles into other diseases”—were raised by clinicians. Megha Shah, a pediatrician with the Los Angeles public-health department, told me that she began to worry in the spring of 2021, while volunteering at a medical center. Two years earlier, she recalled, working there had been uneventful. She’d meet with parents—mostly from low-income Latino families—to discuss the standard vaccination schedule: Okay, here’s what we’re recommending for your child. This protects against this; that protects against that. The parents would ask a couple of questions, and she’d answer them. The child would be immunized, almost every time.

But in the middle of the COVID-vaccine rollout, she found that those conversations were playing out differently. “Oh, I’m just not sure,” she said some parents told her. Or, “I need to talk this over with my partner.” She saw families refuse, flat-out, to give their infants routine shots. “It just was very, very surprising,” Shah said. “I mean, questions are good. We want parents to be engaged and informed decision makers.” But it seemed to her—and her colleagues too—that healthy “engagement” had gone sour.

Last year, she and her colleagues took a closer look. For a study published in Pediatrics, they drew on national survey data collected from April 2020 through early 2022, of parents’ attitudes toward standard childhood vaccines. In some respects, the results looked good: Parents endorsed the importance and effectiveness of these vaccines at a high and stable rate throughout the pandemic—in the vicinity of 91 percent. But over the same period, concerns about potential harms marched upward. In April 2020, about 25 percent of those surveyed agreed that vaccines “have many known harmful side effects” and “may lead to illness or death”; by the end of the year, that number had increased to 30 percent, and then to nearly 35 percent the following June. “Parents still seemed very confident overall in the benefits of vaccinations,” Shah told me, “but there was a huge jump over the course of the pandemic about the safety.”

[Read: What’s really behind global vaccine hesitancy?]

Those results jibed with a theory that has now been invoked so many times, it reads as common knowledge: “Perhaps this was a spillover effect,” Shah said, “from all of the vaccine misinformation that was circling during the pandemic.” That effect—the spreading tentacles of doubt—can be seen around the world, says Heidi Larson, a professor at the London School of Hygiene & Tropical Medicine who has studied attitudes toward vaccination across Europe since the start of the coronavirus pandemic. “The public-health community was assuming that COVID would be a great boon to public confidence in vaccines, but it hasn’t worked out that way. The trend has been actually a negative knock-on effect,” Larson told me. In a troubling alignment, even anti-vaccine activists now endorse the notion of hesitancy spillover, calling it a “wonderful silver lining” to the pandemic.

But hold on a minute. Here in the U.S., it’s certainly true that vaccine worries have been broadcast and rebroadcast, at ever greater volumes, through a clamorous network of influencers and politicians. This campaign of hesitancy is growing more open and insistent by the day, and the consequences can be atrocious: Americans with false beliefs about vaccines are falling sick and dying stubborn and alone. But even as these anecdotes accrue, misinformation’s greater sway—the extent to which it shapes Americans’ behavior toward vaccines for COVID, measles, or the flu—remains murky, if not altogether undetectable. The best numbers to go on in this country, drawn from polls of people’s attitudes about vaccines and official vaccination surveys from the CDC, don’t hint at any comprehensive change. When concerning blips and mini-trends arise—shifts in parents’ attitudes, as seen in Shah’s research, or drops in local rates of children getting immunized—they’re set against a landscape with a flat horizon.

It’s not a pretty view, for all that: The U.S. lags five points behind the average wealthy country in its rate of people fully vaccinated against COVID, and two points behind in its vaccination rate for measles. And even blips can translate into many thousands of at-risk kids, Shah pointed out. Yet one might still be grateful for the sameness overall. A seedbed of resistance to the COVID shots, disproportionately Republican, was already present near the start of the pandemic, and hasn’t seemed to thrive despite two years’ worth of fertilizer runoff from Fox News and other outlets spewing doubt. In August 2020, the Harris Poll’s weekly COVID-19 tracker found that 15 percent of American adults said they were “not at all likely” to get the vaccine when it finally became available. In August 2022, Harris reported that 17 percent weren’t planning to be immunized. Other long-running surveys have found similar results. In September 2020, Kaiser Family Foundation’s vaccine monitor pegged the rate of refusal at 20 percent. In December 2022, it was … still 20 percent.

The most recent uptake numbers from the CDC suggest that children born in 2018 and 2019 (who would have been babies or toddlers when COVID first appeared) had higher vaccination rates by age 2 than children born in 2016 and 2017. Some of these kids did miss out on shots amid the pandemic’s early lapses in routine medical care, but they quickly caught up. Another, more alarming batch of data from the CDC shows that measles-mumps-rubella coverage among the nation’s kindergartners has dropped for two years in a row, down from 95.2 to 93.5 percent, and is now lower than it’s been since at least 2013. Still, the proportion of kids who get exempted from school vaccine requirements for medical or philosophical reasons has hardly changed at all, and the headline-grabbing “slide” in rates appears instead to be at least in part a product of “provisional enrollments”—i.e., children who missed some vaccinations (perhaps in early 2020) and were allowed to enter school while they caught up. If there really is a wave of newly red-pilled, anti-vaxxer parents, then going by these data, they’re nowhere to be seen.

Some public-health disasters hit like hurricanes; others spread like rust. “We may not have a full picture yet,” Shah told me, referring to the latest evidence from the CDC on where vaccination rates are heading. “My gut and my clinical experience tell me that it’s too soon to say.”

Other experts share that view. Robert Bednarczyk, an epidemiologist at Emory University, has been estimating the susceptibility of U.S. children to measles outbreaks since 2016. National immunization surveys have not shown substantial drops in coverage for 2020 and 2021, he told me, “but there is a large caveat to this. These surveys have a lag time.” Any children from the CDC’s data set who were born in 2018, he noted, would have gotten most of their vaccines before the pandemic started, during their first year of life. The same problem applies to teens. The government’s latest stats for adolescents—which looked as good as ever in 2021—capture many who would have gotten all their shots pre-COVID. Until more data are released, researchers still won’t know whether or how far kids’ vaccination rates have really dipped during the 2020s.

The time delay is just one potential problem. Parents who are suspicious of vaccines, and angry at the government for encouraging their use, may be less willing to participate in CDC surveys, Daniel Salmon, the director of the Institute for Vaccine Safety at Johns Hopkins Bloomberg School of Public Health, told me. “Having studied this for 25 years, I would be surprised if we don’t see a substantial COVID effect on childhood vaccines,” he said. “These data are a little bit reassuring, that it’s not, like, an oh-my-god huge effect. But we need more time and more data to really know the answer.”

[Read: How many Republicans died because the GOP turned against vaccines?]

Uncertainty doesn’t have to be a source of terror, though. Early uptake data already provide some signs of a “vaccine-hesitancy spillover effect” happening in reverse, UNC’s Brewer told me, driving more enthusiasm, not less, for getting different kinds of shots. Just look at how the push to dose the nation with half a billion COVID shots goosed the rates of grown-ups getting flu shots: For decades now, our public-health establishment has pushed for better influenza coverage, even as the rate for older Americans was stuck at roughly 65 percent. Then COVID came along and, voilà, senior citizens’ flu-shot coverage jumped to 75 percent—higher than it ever was before. This all fits with a familiar idea in the field, Brewer said, that going in for any one vaccine makes you much more likely to get another in the future. “There does seem to be a sort of positive spillover,” he said, “probably because the forces that led to previous vaccinations are still mostly in place.”

Even some of the scariest signals we’ve seen so far—reports that anti-vaccine sentiment is clearly on the rise—can seem ambiguous, depending on one’s breadth of view. Consider the finding from Heidi Larson’s group, that vaccine confidence has declined across the whole of the European Union throughout the pandemic, according to surveys taken in 2020 and 2022. The same report says that attitudes have now returned to where they were in 2018 and that confidence in the MMR vaccine, in particular, remains higher than it was four years ago. Given that the 2020 surveys were conducted mostly in March, at the very onset of the first pandemic lockdowns, they might have captured a temporary spike of interest in vaccines. After all, vaccines can seem more useful when you’re terrified of death.

In other words, America may truly have experienced a recent drop in vaccine confidence—but from an inflated and unsustainable high. That could help explain other recent findings too, including Shah’s. “You need to take the long view,” says Douglas Opel, a pediatric bioethicist at Seattle Children’s Hospital who has been studying the ups and downs of vaccine hesitancy for more than a decade. For a paper published last July, he and colleagues looked at vaccine attitudes among 4,562 parents from late 2019 to the end of 2020. They found that the parents grew more enthusiastic about childhood immunizations when the pandemic started, but their feelings later returned to baseline.

Larson told me that a “transient COVID effect” may well explain some of what her team has found, but said it was very unlikely to account in full for the worrying trend. In any case, she told me, “we shouldn’t assume this and should instead make an extra effort to continue to build confidence.”

No crunching of the numbers can excuse the spread of vaccine misinformation, or suggest that those who peddle it are anything but a hateful scourge on individuals and a threat to public health. But you can’t simply ignore the fact that, as far as we can see, all the gnashing about vaccines’ supposed risks simply hasn’t changed a lot of people’s minds. It certainly hasn’t caused a steep and sudden rise in vaccine refusal. The idea that we’re in the midst of some new vaccine-hesitancy contagion is based as much on vibes as proven fact.

The problem is, bad vibes can leave us prone to misinterpretation. Take the recent measles outbreak in Ohio: It’s alarming, but not so relevant to recent trends in vaccination, despite many claims to that effect. More than one-quarter of the affected children were too young to have been eligible for the MMR vaccine, while others were old enough to have missed their first shot by 2020, before any hesitancy “spillover” could have taken place. And at least a meaningful proportion of the affected families, from the state’s Democratic-leaning Somali American community, wouldn’t seem to represent the GOP’s white, unvaccinated constituency.

The stark politicization of the COVID shots can be misread too. Despite the 30-point gap between Democrats and Republicans in COVID vaccination rates, those rates are much, much higher—for members of both parties—than they’ve ever been for flu shots. And interparty differences in flu-shot uptake seem to be long-standing. A preprint study from Minttu Rönn, a researcher at the Harvard T. H. Chan School of Public Health, and colleagues found a broadening divide in coverage between Democratic- and Republican-voting states, based on data going back to 2010. But this may not be a bad thing. Rönn doesn’t think the change arises from a loss of trust among Republicans; rather, she told me, it looks to be related to rising flu-shot coverage overall, with proportionally greater gains in Democratic-leaning areas. (That difference could be the result of local attitudes, ease of access, or insurance coverage, she said.) In other words, red states aren’t necessarily falling behind on vaccination. Blue states are surging forward.

[Read: Vaccination in America might have only one tragic path forward]

Optimism here may seem perverse. COVID booster uptake is absurdly low right now, even for the elderly. The politicization of vaccines (whenever it began) certainly isn’t letting up. Given what would happen if trust in vaccination really did collapse, perhaps it makes more sense to err on the side of freaking out. As Larson said, every effort should be made to build confidence, no matter what.

But the truth of what we know right now ought to be important too. Maybe it’s okay to feel okay. Maybe there’s value in maintaining calm and taking stock of what we’ve accomplished or what we’ve maintained in the face of all these efforts to confuse us. At the risk of trying way too hard to find some solace in disturbing facts, here’s another case in point. Remember Shah’s results, that parents’ concerns about the health effects of childhood vaccines have steadily gone up throughout the pandemic, even as their belief in vaccines’ benefits stayed high? That increase wasn’t clearly more pronounced in any specific group. Belief that vaccination can result in illness or death went up across the board for men and women in the survey, for young and old, for Black and white alike. It rose among Republicans and also Democrats—in just about the same proportions. If America’s parents have been getting more attuned to potential risks from vaccination, we’re doing it together.

I’m in that number too. As a scientist by training and a science journalist by trade, I’ve been reporting and editing stories about vaccination for years. Still, I’ve never thought so hard about the topic, and in such critical detail, as I have since 2021. At no point in my life has vaccination been this pervasive, perplexing, and important. When it came time to get my children COVID shots, I learned everything I could about potential risks and benefits. I looked at data on the incidence of myocarditis, I considered very rare but deadly outcomes, and I weighed the efficacy of different shots against their measured side effects. These investigations did not arise from distrust of authority, podcast propaganda, or a belief in microchips so small they fit inside of a syringe. I wasn’t fearful; I was curious. I had questions, and I got answers—and now every member of my family has gotten their shots.

We’ve all been forced by circumstance to think in different ways about our health. Before the pandemic, Larson told me, most people simply didn’t have to pay attention to vaccines. Parents with young children, sure, but everybody else? “I think they probably said, Yeah, vaccines are important. Yeah, they’re safe enough,” she said. But now the stakes are raised across the population. “I mean, there are these groups around the world where you’re like, ‘Why do they care about vaccines?’ And it’s because of COVID.”

The emergence of so many groups with newfound interest in vaccines could end up being dangerous, of course—in the same way that newly minted drivers are a menace on the road. “A lot of people went online asking questions about vaccines,” Larson told me, in a tone that made it sound as though online were a synonym for “straight to hell.” But sometimes asking questions gets you useful information, and sometimes useful information leads to wise decisions. Debates about vaccines may be louder than they’ve ever been before, but that doesn’t mean that vaccination rates are bound to fall.

Even if the situation isn’t getting that much worse, the country might still be left to wallow in its status quo. Yes, more than 200 million Americans have been fully immunized against COVID—and more than 100 million haven’t. “This has been a problem for a long time,” Daniel Salmon told me. “It was already ‘a crisis in confidence’ a dozen years ago. We don’t see a free fall—that’s somewhat reassuring—but that’s very different from saying that we’re good to go.”

The fact of this crisis, however long it’s been around, will never matter more than its effects. After all, “confidence” itself is not the only factor, or even the most important one, that determines who gets shots. “Generally speaking, access to vaccination is a much bigger driver than what people think and feel,” Noel Brewer told me. Early in the pandemic, lots of parents wanted to vaccinate their kids and simply couldn’t. Now many of them can. But obstacles persist, and their effects aren’t evenly distributed. According to the CDC, toddlers’ vaccination rates are somewhat lower among those who live in poverty, or reside in rural areas, or don’t identify as white or Asian. Since the pandemic started, these gaps in opportunity appear to have increased. A grand and tragic spillover of people’s vaccination doubts—the anti-vaxxers’ hoped-for “silver lining” to the pandemic—may or may not come. In the meantime, though, there are other problems to address.

For decades now, gay men have been barred from giving blood. In 2015, what had been a lifetime ban was loosened, such that gay men could be donors if they’d abstained from sex for at least a year. This was later shortened to three months. Last week, the FDA put out a new and more inclusive plan: Sexually active gay and bisexual people would be permitted to donate so long as they have not recently engaged in anal sex with new or multiple partners. Assistant Secretary for Health Rachel Levine, the first Senate-confirmed transgender official in the U.S., issued a statement commending the proposal for “advancing equity.” It “treats everyone the same,” she said, “regardless of gender and sexual orientation.”

As a member of the small but honorable league of gay pathologists, I’m affected by these proposed policy changes more than most Americans. I’m subject to restrictions on giving blood, and I’ve also been responsible for monitoring the complications that can arise from transfusions of infected blood. I am quite concerned about HIV, given that men who have sex with men are at much greater risk of contracting the virus than members of other groups. But it’s not the blood-borne illness that I, as a doctor, fear most. Common bacteria lead to far more transfusion-transmitted infections in the U.S. than any virus does, and most of those produce severe or fatal illness. The risk from viruses is extraordinarily low—there hasn’t been a single reported case of transfusion-associated HIV in the U.S. since 2008—because laboratories now use highly accurate tests to screen all donors and ensure the safety of our blood supply. This testing is so accurate that preventing anyone from donating based on their sexual behavior is no longer logical. Meanwhile, new dictates about anal sex, like older ones explicitly targeting men who have sex with men, still discriminate against the queer community—the FDA is simply struggling to find the most socially acceptable way to pursue a policy that it should have abandoned long ago.

Strict precautions made more sense 30 years ago, when screening didn’t work nearly as well as it does today. Patients with hemophilia, many of whom rely on blood products to live, were prominent, early victims of our inability to keep HIV out of the blood supply. One patient who’d acquired the virus through a transfusion lamented to The New York Times in 1993 that he had already watched an uncle and a cousin die of AIDS. Those days of “shock and denial,” as the Times described it, are thankfully behind us. But for older patients, memories of the crisis in the ’80s and early ’90s linger, and cause significant anxiety. Even people unaware of this historical context may consider the receipt of someone else’s blood disturbing, threatening, or sinful.

As a doctor, I’ve found that patients tend to be more hesitant about getting a blood transfusion than they are about taking a pill. I’ve had them ask for a detailed medical history of the donor, or say they’re willing to take blood only from a close relative. (Typically, neither of these requests can be fulfilled for reasons of privacy and practicality.) Yet the same patients may accept—without question—drugs that carry a risk of serious complications that is thousands of times higher than the risk of receiving infected blood. Even when it comes to blood-borne infections, patients seem to worry less about the greatest danger—bacterial contamination—than they do about the transfer of viruses such as HIV and hepatitis C. I can’t fault anyone for being sick and scared, but the risk of contracting HIV from a blood transfusion is not just low—it’s essentially nonexistent.

[Read: Blood plasma, sweat, and tears]

Donors’ feelings matter, too, and the FDA’s policies toward gay and bisexual men who wish to give blood have been unfair for many years. While officials speak in the supposedly objective language of risk and safety, their selective deployment of concern suggests a deeper homophobia. As one scholar put it in The American Journal of Bioethics more than a decade ago, “Discrimination resides not in the risk itself but in the FDA response to the risk.” Many demographic groups are at elevated risk of contracting HIV, yet the agency isn’t continually refining its exclusion criteria for young people or urban dwellers or Black and Hispanic people. Federal policy did prohibit Haitians from donating blood from 1983 to 1991, but activists successfully lobbied for the reversal of this ban with the powerful slogan “The H in HIV stands for human, not Haitian.” Nearly everyone today would find the idea of rejecting blood from one racial group to be morally repugnant. Under its new proposal, which purports to target anal sex instead of homosexuality itself, the FDA effectively persists in rejecting blood from sexual minorities.

The planned update would certainly be an improvement. It comes out of years of advocacy by LGBTQ-rights organizations, and its details are apparently supported by newly conducted government research. Peter Marks, the director of the Center for Biologics Evaluation and Research at the FDA, cited an unpublished study showing that “a significant fraction” of men who have sex with men would now be able to donate. But the plan is still likely to exclude a large portion of them—even those who wear condoms or regularly test for sexually transmitted infections. An FDA spokesperson told me via email that “additional data are needed to determine what proportion of [men who have sex with men] would be able to donate under the proposed change.”

Research done in France, Canada, and the U.K., which have adopted similar policies over the past two years, demonstrates the risk. A French blood-donation study, for instance, estimated that 70 percent of men who have sex with men had more than one recent partner; and when Canadian researchers surveyed queer communities in Montreal, Toronto, and Vancouver, they found that up to 63 percent would not be eligible to donate because they’d recently had anal sex with new or multiple partners. Just 1 percent of previously eligible donors would have been rejected by similar criteria. The U.K. assumed in its calculations that 35 to 50 percent of men who have sex with men would be ineligible under a policy much like the FDA’s, while only 1.4 percent of previous donors would be newly deferred. If the new rule’s net effect is that gay and bisexual men are turned away from blood centers at many times the rate of heterosexual individuals, what else can you call it but discrimination? The U.S. guidance is supposed to ban a lifestyle choice rather than an identity, but the implication is that too many queer men have chosen wrong. The FDA spokesperson told me, “Anal sex with more than one sexual partner has a significantly greater risk of HIV infection when compared to other sexual exposures, including oral sex or penile-vaginal sex.”

If the FDA wants to pry into my sex life, it should have a good reason for doing so. The increasing granularity and intimacy of these policies—specifying numbers of partners, kinds of sex—give the impression that the stakes are very high: If we don’t keep out the most dangerous donors, the blood supply could be ruined. But donor-screening questions are a crude tool for picking needles from a haystack. The only HIV infections that are likely to get missed by modern testing are those contracted within the previous week or two. This suggests that, at most, a couple thousand individuals—gay and straight—across the entire country are at risk of slipping past our testing defenses at any given time. Of course, very few of them will happen to donate blood right then. No voluntary questionnaire can ever totally exclude this possibility, but patients and doctors already accept other life-threatening transfusion risks that occur at much greater rates than HIV transmission ever could. When I would be on call for monitoring transfusion reactions at a single hospital, the phone would ring a few times every night. Yet blood has been given out tens of millions of times across the country since the last known instance of a transfusion resulting in a case of HIV.

[Read: How blood-plasma companies target the poorest Americans]

Early data suggest that the overall risk-benefit calculus of receiving blood isn’t likely to change. When eligibility criteria were first relaxed in the U.S. a few years ago, the already tiny rate of HIV-positive donations remained minuscule. Real-world results from other countries that have recently adopted sexual-orientation-neutral policies will become available in the coming years. But modeling studies already support removing any screening question that explicitly or implicitly targets queer men. A 2022 Canadian analysis suggested that removing all questions about men who have sex with men would not result in a significantly higher risk to patients. “Extra behavioral risk questions may not be necessary,” the researchers concluded. If there must be a restriction in place, then one narrowly tailored to the slim risk window of seven to 10 days before donation should be good enough. (The FDA says that its proposed policy “would be expected to reduce the likelihood of donations by individuals with new or recent HIV infection who may be in the window period.”)

As a gay man, I realize that, brief periods of crisis during the coronavirus pandemic aside, no one needs my blood. Only 6.8 percent of men in the U.S. identify as gay or bisexual, so our potential benefit to the overall supply is inherently modest. If we went back to being banned completely, patients would not be harmed. But reversing that ban, both in letter and in spirit, would send a vital message: Our government and health-care system view sexual minorities as more than a disease vector. A policy that uses anal sex as a stand-in for men who have sex with men only further stigmatizes this population by impugning one of its main sources of sexual pleasure. There is no question that nonmonogamous queer men have a greater chance of contracting HIV. But a policy that truly treats everyone the same would accept a tiny amount of risk as the price of working with human beings.

When it comes to treating disease with food, the quackery stretches back far. Through the centuries, raw garlic has been touted as a home treatment for everything from chlamydia to the common cold; Renaissance remedies for the plague included figs soaked in hyssop oil. During the 1918 flu pandemic, Americans wolfed down onions or chugged “fluid beef” gravy to keep the deadly virus at bay.

Even in modern times, the internet abounds with dubious culinary cure-alls: apple-cider vinegar for gonorrhea; orange juice for malaria; mint, milk, and pineapple for tuberculosis. It all has a way of making real science sound like garbage. Research on nutrition and immunity “has been ruined a bit by all the writing out there on Eat this to cure cancer,” Lydia Lynch, an immunologist and a cancer biologist at Harvard, told me.

In recent years, though, plenty of legit studies have confirmed that our diets really can affect our ability to fight off invaders—down to the fine-scale functioning of individual immune cells. Those studies belong to a new subfield of immunology sometimes referred to as immunometabolism. Researchers are still a long way off from being able to confidently recommend specific foods or dietary supplements for colds, flus, STIs, and other infectious illnesses. But someday, knowledge of how nutrients fuel the fight against disease could influence the way that infections are treated in hospitals, in clinics, and maybe at home—not just with antimicrobials and steroids but with dietary supplements, metabolic drugs, or whole foods.

Although major breakthroughs in immunometabolism are just now arriving, the concepts that underlie them have been around for at least as long as the quackery. People have known for millennia that in the hours after we fall ill, our appetite dwindles; our body feels heavy and sluggish; we lose our thirst drive. In the 1980s, the veterinarian Benjamin Hart argued that those changes were a package deal—just some of many sickness behaviors, as he called them, that are evolutionarily hardwired into all sorts of creatures. The goal, Hart told me recently, is to “help the animal stay in one place and conserve energy”—especially as the body devotes a large proportion of its limited resources to igniting microbe-fighting fevers.

The notion of illness-induced anorexia (not to be confused with the eating disorder anorexia nervosa) might seem, at first, like “a bit of a paradox,” says Zuri Sullivan, an immunologist at Harvard. Fighting pathogenic microbes is energetically costly—which makes eating less a very counterintuitive choice. But researchers have long posited that cutting down on calories could serve a strategic purpose: to deprive certain pathogens of essential nutrients. (Because viruses do not eat to acquire energy, this notion is limited to cell-based organisms such as bacteria, fungi, and parasites.) A team led by Miguel Soares, an immunologist at the Instituto Gulbenkian de Ciência, in Portugal, recently showed that this exact scenario might be playing out with malaria. As the parasites burst out of the red blood cells where they replicate, the resulting spray of heme (an oxygen-transporting molecule) prompts the liver to stop making glucose. The halt seems to deprive the parasites of nutrition, weakening them and tempering the infection’s worst effects.

[Read: Why science can be so indecisive about nutrition]

Cutting down on sugar can be a dangerous race to the bottom: Animals that forgo food while they’re sick are trying to starve out an invader before they themselves run out of energy. Let the glucose boycott stretch on too long, and the dieter might develop dangerously low blood sugar—a common complication of severe malaria—which can turn deadly if untreated. At the same time, though, a paucity of glucose might have beneficial effects on individual tissues and cells during certain immune fights. For example, low-carbohydrate, high-fat ketogenic diets seem to enhance the protective powers of certain types of immune cells in mice, making it tougher for particular pathogens to infiltrate airway tissue.

Those findings are still far from potential human applications. But Andrew Wang, an immunologist and a rheumatologist at Yale, hopes that this sort of research could someday yield better clinical treatments for sepsis, an often fatal condition in which an infection spreads throughout the body, infiltrating the blood. “It’s still not understood exactly what you’re supposed to feed folks with sepsis,” Wang told me. He and his former mentor at Yale, Ruslan Medzhitov, are now running a clinical trial to see whether shifting the balance of carbohydrates and lipids in patients’ diets speeds recovery from sepsis. If the team is able to suss out clear patterns, doctors might eventually be able to flip the body’s metabolic switches with carefully timed doses of drugs, giving immune cells a bigger edge against their enemies.

But the rules of these food-illness interactions, to the extent that anyone understands them, are devilishly complex. Sepsis can be caused by a whole slew of different pathogens. And context really, really matters. In 2016, Wang, Medzhitov, and their colleagues discovered that feeding mice glucose during infections created starkly different effects depending on the nature of the pathogen driving disease. When the mice were pumped full of glucose while infected with the bacterium Listeria, all of them died—whereas about half of the rodents that were allowed to give in to their infection-induced anorexia lived. Meanwhile, the same sugary menu increased survival rates for mice with the flu.

In this case, the difference doesn’t seem to boil down to what the microbe was eating. Instead, the mice’s diet changed the nature of the immune response they were able to marshal—and how much collateral damage that response was able to inflict on the body, as James Hamblin wrote for The Atlantic at the time. The type of inflammation that mice ignited against Listeria, the team found, could imperil fragile brain cells when the rodents were well fed. But when the mice went off sugar, their starved livers started producing an alternate fuel source called ketone bodies—the same compounds people make when on a ketogenic diet—that helped steel their neurons. Even as the mice fought off their bacterial infections, their brain stayed resilient to the inflammatory burn. The opposite played out when the researchers subbed in influenza, a virus that sparks a different type of inflammation: Glucose pushed brain cells into better shielding themselves against the immune system’s fiery response.

[Read: Feed a cold, don’t starve it]

There’s not yet one unifying principle to explain these differences. But they are a reminder of an underappreciated aspect of immunity. Surviving disease, after all, isn’t just about purging a pathogen from the body; our tissues also have to guard themselves from shrapnel as immune cells and microbes wage all-out war. It’s now becoming clear, Soares told me, that “metabolic reprogramming is a big component of that protection.” The tactics that thwart a bacterium like Listeria might not also shield us from a virus, a parasite, or a fungus; they may not be ideal during peacetime. Which means our bodies must constantly toggle between metabolic states.

In the same way that the types of infections likely matter, so do the specific types of nutrients: animal fats, plant fats, starches, simple sugars, proteins. Like glucose, fats can be boons in some contexts but detrimental in others, as Lynch has found. In people with obesity or other metabolic conditions, immune cells appear to reconfigure themselves to rely more heavily on fats as they perform their day-to-day functions. They can also be more sluggish when they attack. That’s the case for a class of cells called natural killers: “They still recognize cancer or a virally infected cell and go to it as something that needs to be killed,” Lynch told me. “But they lack the energy to actually kill it.” Timing, too, almost certainly has an effect. The immune defenses that help someone expunge a virus in the first few days of an infection might not be the ones that are ideal later on in the course of disease.

Even starving out bacterial enemies isn’t a surefire strategy. A few years ago, Janelle Ayres, an immunologist at the Salk Institute for Biological Studies, and her colleagues found that when they infected mice with Salmonella and didn’t allow the rodents to eat, the hungry microbes in their guts began to spread outside of the intestines, likely in search of food. The migration ended up killing tons of their tiny mammal hosts. Mice that ate normally, meanwhile, fared far better—though the Salmonella inside of them also had an easier time transmitting to new hosts. The microbes, too, were responding to the metabolic milieu, and trying to adapt. “It would be great if it was as simple as ‘If you have a bacterial infection, reduce glucose,’” Ayres said. “But I think we just don’t know.”

All of this leaves immunometabolism in a somewhat chaotic state. “We don’t have simple recommendations” on how to eat your way to better immunity, Medzhitov told me. And any that eventually emerge will likely have to be tempered by caveats: Factors such as age, sex, infection and vaccination history, underlying medical conditions, and more can all alter people’s immunometabolic needs. After Medzhitov’s 2016 study on glucose and viral infections was published, he recalls being dismayed by a piece from a foreign outlet circulating online claiming that “a scientist from the USA says that during flu, you should eat candy,” he told me with a sigh. “That was bad.”

[Read: You can’t “starve” cancer, but you might help treat it with food]

But considering how chaotic, individualistic, and messy nutrition is for humans, it shouldn’t be a surprise that the dietary principles governing our individual cells can get pretty complicated too. For now, Medzhitov said, we may be able to follow our instincts. Our bodies, after all, have been navigating this mess for millennia, and have probably picked up some sense of what they need along the way. It may not be a coincidence that during viral infections, “something sweet like honey and tea can really feel good,” Medzhitov said. There may even be some immunological value in downing the sick-day classic, chicken soup: It’s chock-full of fluid and salts, helpful things to ingest when the body’s electrolyte balance has been thrown out of whack by disease.

The science around sickness cravings is far from settled. Still, Sullivan, who trained with Medzhitov, jokes that she now feels better about indulging in Talenti mango sorbet when she’s feeling under the weather with something viral, thanks to her colleagues’ 2016 findings. Maybe the sugar helps her body battle the virus without harming itself; then again, maybe not. For now, she figures it can’t hurt to dig in.

Stephen B. Thomas, the director of the Center for Health Equity at the University of Maryland, considers himself an eternal optimist. When he reflects on the devastating pandemic that has been raging for the past three years, he chooses to focus less on what the world has lost and more on what it has gained: potent antiviral drugs, powerful vaccines, and, most important, unprecedented collaborations among clinicians, academics, and community leaders that helped get those lifesaving resources to many of the people who needed them most. But when Thomas, whose efforts during the pandemic helped transform more than 1,000 Black barbershops and salons into COVID-vaccine clinics, looks ahead to the next few months, he worries that momentum will start to fizzle out—or, even worse, that it will go into reverse.

This week, the Biden administration announced that it would allow the public-health-emergency declaration over COVID-19 to expire in May—a transition that’s expected to put shots, treatments, tests, and other types of care more out of reach of millions of Americans, especially those who are uninsured. The move has been a long time coming, but for community leaders such as Thomas, whose vaccine-outreach project, Shots at the Shop, has depended on emergency funds and White House support, the transition could mean the imperilment of a local infrastructure that he and his colleagues have been building for years. It shouldn’t have been inevitable, he told me, that community vaccination efforts would end up on the chopping block. “A silver lining of the pandemic was the realization that hyperlocal strategies work,” he said. “Now we’re seeing the erosion of that.”

I called Thomas this week to discuss how the emergency declaration allowed his team to mobilize resources for outreach efforts—and what may happen in the coming months as the nation attempts to pivot back to normalcy.

Our conversation has been edited for clarity and length.

Katherine J. Wu: Tell me about the genesis of Shots at the Shop.

Stephen B. Thomas: We started our work with barbershops and beauty salons in 2014. It’s called HAIR: Health Advocates In-Reach and Research. Our focus was on colorectal-cancer screening. We brought medical professionals—gastroenterologists and others—into the shop, recognizing that Black people in particular were dying from colon cancer at rates that were just unacceptable but were potentially preventable with early diagnosis and appropriate screening.

Now, if I can talk to you about colonoscopy, I could probably talk to you about anything. In 2019, we held a national health conference for barbers and stylists. They all came from around the country to talk about different areas of health and chronic disease: prostate cancer, breast cancer, others. We brought them all together to talk about how we can address health disparities and give more agency and visibility to this new frontline workforce.

When the pandemic hit, all the plans that came out of the national conference were put on hold. But we continued our efforts in the barbershops. We started a Zoom town hall. And we started seeing misinformation and disinformation about the pandemic being disseminated in our shops, and there were no countermeasures.

We got picked up by the national media, and then we got the endorsement of the White House. And that’s when we launched Shots at the Shop. We had 1,000 shops signed up in I’d say less than 90 days.

Wu: Why do you think Shots at the Shop was so successful? What was the network doing differently from other vaccine-outreach efforts that spoke directly to Black and brown communities?

Thomas: If you came to any of our clinics, it didn’t feel like you were coming into a clinic or a hospital. It felt like you were coming to a family reunion. We had a DJ spinning music. We had catered food. We had a festive environment. Some people showed up hesitant, and some of them left hesitant but fascinated. We didn’t have to change their worldview. But we treated them with dignity and respect. We weren’t telling them they’re stupid and don’t understand science.

And the model worked. It worked so well that even the health professionals were extremely pleased, because now all they had to do was show up with the vaccine, and the arms were ready for needles.

[Read: The flu-ification of COVID policy is almost complete]

The barbers and stylists saw themselves as doing health-related things anyway. They had always seen themselves as doing more than just cutting hair. No self-respecting Black barber is going to say, “We’ll get you in and out in 10 minutes.” It doesn’t matter how much hair you have: You’re gonna be in there for half a day.

Wu: How big of a difference do you think your network’s outreach efforts made in narrowing the racial gaps in COVID vaccination?

Thomas: Attribution is always difficult, and success has many mothers. So I will say this to you: I have no doubt that we made a huge difference. With a disease like COVID, you can’t afford to have any pocket unprotected, and we were vaccinating people who would otherwise have never been vaccinated. We were dealing with people at the “hell no” wall.

We were also vaccinating people who were homeless. They were treated with dignity and respect. At some of our shops, we did a coat drive and a shoe drive. And we had dentists providing us with oral-health supplies: toothbrushes, floss, toothpaste, and other things. It made a huge difference. When you meet people where they are, you’ve got to meet all their needs.

Wu: How big of a difference did the emergency declaration, and the freeing-up of resources, tools, and funds, make for your team’s outreach efforts?

Thomas: Even with all the work I’ve been doing in barbershops since 2014, the pandemic got us our first grant from the state. Money flowed. We had resources to go beyond the typical mechanisms. I was able to secure thousands of KN95 masks and distribute them to shops. Same thing with rapid tests. We even sent them Corsi-Rosenthal boxes, a DIY filtration system to clean up indoor air.

Without the emergency declaration, we would still be in the desert screaming for help. The emergency declaration made it possible to get resources through nontraditional channels, and we were doing things that the other systems—the hospital system, the local health department—couldn’t do. We extended their reach to populations that have historically been underserved and distrustful.

Wu: The public-health-emergency declaration hasn’t yet expired. What signs of trouble are you seeing right now?

Thomas: The bridge between the barbershops and the clinical side has been shut down in almost all places, including here in Maryland. I go to the shop and they say to me, “Dr. T, when are we going to have the boosters here?” Then I call my clinical partners, who deliver the shots. Some won’t even answer my phone calls. And when they do, they say, “Oh, we don’t do pop-ups anymore. We don’t do community-outreach clinics anymore, because the grant money’s gone. The staff we hired during the pandemic, they use the pandemic funding—they’re gone.” But people are here; they want the booster. And my clinical partners say, “Send them down to a pharmacy.” Nobody wants to go to a pharmacy.

[Read: The COVID strategy America hasn’t really tried]

You can’t see me, so you can’t see the smoke still coming out of my ears. But it hurts. We got them to trust. If you abandon the community now, it will simply reinforce the idea that they don’t matter.

Wu: What is the response to this from the communities you’re talking to?

Thomas: It’s “I told you so, they didn’t care about us. I told you, they would leave us with all these other underlying conditions.” You know, it shouldn’t take a pandemic to build trust. But if we lose it now, it will be very, very difficult to build back.

We built a bridge. It worked. Why would you dismantle it? Because that’s exactly what’s happening right now. The very infrastructure we created to close the racial gaps in vaccine acceptance is being dismantled. It’s totally unacceptable.

Wu: The emergency declaration was always going to end at some point. Did it have to play out like this?

Thomas: I don’t think so. If you talk to the hospital administrators, they’ll tell you the emergency declaration and the money allowed them to add outreach. And when the money went away, they went back to business as usual. Even though the outreach proved you could actually do a better job. And the misinformation and the disinformation campaign hasn’t stopped. Why would you go back to what doesn’t work?

Wu: What is your team planning for the short and long term, with limited resources?

Thomas: As long as Shots at the Shop can connect with clinical partners to access vaccines, we will definitely keep that going.

Nobody wants to go back to normal. So many of our barbers and stylists feel like they’re on their own. I’m doing my best to supply them with KN95 masks and rapid tests. We have kept the conversation going on our every-other-week Zoom town hall. We just launched a podcast. We put out some of our stories in the form of a graphic novel, The Barbershop Storybook. And we’re trying to launch a national association for barbers and stylists, called Barbers and Stylists United for Health.

The pandemic resulted in a mobilization of innovation, a recognition of the intelligence at the community level, the recognition that you need to culturally tailor your strategy. We need to keep those relationships intact. Because this is not the last pandemic we’re going to see, even in our lifetime. I’m doing my best to knock on doors to continue to put our proposals out there. Hopefully, people will realize that reaching Black and Hispanic communities is worth sustaining.

If you’ve ever been to London, you know that navigating its wobbly grid, riddled with curves and dead-end streets, requires impressive spatial memory. Driving around London is so demanding, in fact, that in 2006 researchers found that it was linked with changes in the brains of the city’s cab drivers: Compared with Londoners who drove fixed routes, cabbies had a larger volume of gray matter in the hippocampus, a brain region crucial to forming spatial memory. The longer the cab driver’s tenure, the greater the effect.

The study is a particularly evocative demonstration of neuroplasticity: the human brain’s innate ability to change in response to environmental input (in this case, the spatially demanding task of driving a cab all over London). That hard-won neuroplasticity required years of mental and physical practice. Wouldn’t it be nice to get the same effects without so much work?

To hear some people tell it, you can: Psychedelic drugs such as psilocybin, LSD, ayahuasca, and Ecstasy, along with anesthetics such as ketamine, can enhance a user’s neuroplasticity within hours of administration. In fact, some users take psychedelics for the express purpose of making their brain a little more malleable. Just drop some acid, the thinking goes, and your brain will rewire itself—you’ll be smarter, fitter, more creative, and self-aware. You might even have a transcendent experience. Popular media abound with anecdotes suggesting that microdosing LSD or psilocybin can expand divergent thinking, a freer, more associative type of thinking that some psychologists link with creativity.

[Read: Here’s what happens when a few dozen people take small doses of psychedelics]

Research suggests that psychedelic-induced neuroplasticity can indeed enhance specific types of learning, particularly in terms of overcoming fear and anxiety associated with past trauma. But claims about the transformative, brain-enhancing effects of psychedelics are, for the most part, overstated. We don’t really know yet how much microdosing, or a full-blown trip, will change the average person’s mental circuitry. And there’s reason to suspect that, for some people, such changes may be actively harmful.  

There is nothing new about the notion that human and animal brains are pliant in response to everyday experience and injury. The philosopher and psychologist William James is said to have first used the term plasticity back in 1890 to describe changes in neural pathways that are linked to the formation of habits. Now we understand that these changes take place not only between neurons but also within them: Individual cells are capable of sprouting new connections and reorganizing in response to all kinds of experiences. Essentially, this is a neural response to learning, which psychedelics can rev up.

We also understand how potent psychedelic drugs can be in inducing changes to the brain. Injected into mice, psilocybin stimulated neurons in the frontal cortex to grow by about 10 percent and sprout new spines, projections that foster connections to other neurons. It also alleviated the animals’ stress-related behaviors—effects that persisted for more than a month, indicating enduring structural change linked with learning. Presumably, a similar effect takes place in humans. (Comparable studies on humans would be impossible to conduct, because investigating changes in a single neuron would require, well, sacrificing the subject.)

The thing is, all those changes aren’t necessarily good. Neuroplasticity just means that your brain—and your mind—is put into a state where it is more easily influenced. The effect is a bit like putting a glass vase back into the kiln, which makes it pliable and easy to reshape. Of course you can make the vase more functional and beautiful, but you might also turn it into a mess. Above all else, psychedelics make us exquisitely impressionable, thanks to their speed and magnitude of action, though the outcome still depends heavily on context and influence.

[Read: A new chapter in the science of psychedelic microdosing]

We have all experienced heightened neuroplasticity during the so-called sensitive periods of brain development, which typically unfold between the ages of 1 and 4 when the brain is uniquely responsive to environmental input. This helps explain why kids effortlessly learn all kinds of things, like how to ski or speak a new language. But even in childhood, you don’t acquire your knowledge and skills by magic; you have to do something in a stimulating enough environment to leverage this neuroplastic state. If you have the misfortune of being neglected or abused during your brain’s sensitive periods, the effects are likely to be adverse and enduring—probably more so than if the same events happened later in life.

Being in a neuroplastic state enhances our ability to learn, but it might also burn in negative or traumatic experiences—or memories—if you happen to have them while taking a psychedelic. Last year, a patient of mine, a woman in her early 50s, decided to try psilocybin with a friend. The experience was quite pleasurable until she started to recall memories of her emotionally abusive father, who had an alcohol addiction. In the weeks following her psilocybin exposure, she had vivid and painful recollections of her childhood, which precipitated an acute depression.

Her experience might have been very different—perhaps even positive—if she’d had a guide or therapist with her while she was tripping to help her reappraise these memories and make them less toxic. But without a mediating positive influence, she was left to the mercy of her imagination. This must have been just the sort of situation legislators in Oregon had in mind last month when they legalized recreational psilocybin use, but only in conjunction with a licensed guide. It’s the right idea.

[Read: What it’s like to trip on the most potent magic mushroom]

In truth, researchers and clinicians haven’t a clue whether people who microdose frequently with psychedelics—and are thus walking around in a state of enhanced neuroplasticity—are more vulnerable to the encoding of traumatic events. In order to find out, you would have to compare a group of people who microdose against a group of people who don’t over a period of time and see, for example, if they differ in rates of PTSD. Crucially, you’d have to randomly assign people to either microdose or abstain—not simply let them pick whether they want to try tripping. In the absence of such a study, we are all currently involved in a large, uncontrolled social experiment. The results will inevitably be messy and inconclusive.

Even if opening your brain to change were all to the good, the promise of neuroplasticity without limit—that you can rejuvenate and remodel the brain at any age—far exceeds the scientific evidence. Despite claims to the contrary, each of us has an upper limit to how malleable we can make our brain. The sensitive periods, when we hit our maximum plasticity, are a finite window of opportunity that slams shut as the brain matures. We progressively lose neuroplasticity as we age. Of course we can continue to learn—it just takes more effort than when we were young. Part of this change is structural: At 75, your hippocampus contains neurons that are a lot less connected to one another than they were at 25. That’s one of the major reasons older people find that their memory is not as sharp as it used to be. You may enhance those connections slightly with a dose of psilocybin, but you simply can’t make your brain behave as if it’s five decades younger.

[Read: What it’s like to get worse at something]

This reality has never stopped a highly profitable industry from catering to people’s anxieties and hopes—especially seniors’. You don’t have to search long online before you find all kinds of supplements claiming to keep your brain young and sharp. Brain-training programs go even further, purporting to rewire your brain and boost your cognition (sound familiar?), when in reality the benefits are very modest, and limited to whatever cognitive task you’ve practiced. Memorizing a string of numbers will make you better at memorizing numbers; it won’t transfer to another skill and make you better at, say, chess.

We lose neuroplasticity as we age for good reason. To retain our experience, we don’t want our brain to rewire itself too much. Yes, we lose cognitive fluidity along the way, but we gain knowledge too. That’s not a bad trade-off. After all, it’s probably more valuable to an adult to be able to use all of their accumulated knowledge than to be able to solve a novel mathematical problem or learn a new skill. More important, our very identity is encoded in our neural architecture—something we wouldn’t want to tinker with lightly.

At their best, psychedelics and other neuroplasticity-enhancing drugs can do some wonderful things, such as speed up the treatment of depression, quell anxiety in terminally ill patients, and alleviate the worst symptoms of PTSD. That’s enough reason to research their uses and let patients know psychedelics are an option for psychiatric treatment when the evidence supports it. But limitless drug-induced self-enhancement is simply an illusion.

These days, strolling through downtown New York City, where I live, is like picking your way through the aftermath of a party. In many ways, it is exactly that: The limp string lights, trash-strewn puddles, and splintering plywood are all relics of the raucous celebration known as outdoor dining.

These wooden “streeteries” and the makeshift tables lining sidewalks first popped up during the depths of the coronavirus pandemic in 2020, when restaurants needed to get diners back in their seats. It was novel, creative, spontaneous—and fun during a time when there wasn’t much fun to be had. For a while, outdoor dining really seemed as though it could outlast the pandemic. Just last October, New York Magazine wrote that it would stick around, “probably permanently.”

But now someone has switched on the lights and cut the music. Across the country, something about outdoor dining has changed in recent months. With fears about COVID subsiding, people are losing their appetite for eating among the elements. This winter, many streeteries are empty, save for the few COVID-cautious holdouts willing to put up with the cold. Hannah Cutting-Jones, the director of food studies at the University of Oregon, told me that, in Eugene, where she lives, outdoor dining is “absolutely not happening” right now. In recent weeks, cities such as New York and Philadelphia have started tearing down unused streeteries. Outdoor dining’s sheen of novelty has faded; what once evoked the grands boulevards of Paris has turned out to be a janky table next to a parked car. Even a pandemic, it turns out, couldn’t overcome the reasons Americans never liked eating outdoors in the first place.

For a while, the allure of outdoor dining was clear. COVID safety aside, it kept struggling restaurants afloat, boosted some low-income communities, and cultivated joie de vivre in bleak times. At one point, more than 12,700 New York restaurants had taken to the streets, and the city—along with others, including Boston, Los Angeles, Chicago, and Philadelphia—proposed making dining sheds permanent. But so far, few cities have actually adopted any official rules. At this point, whether they ever will is unclear. Without official sanction, mounting pressure from outdoor-dining opponents will likely lead to the destruction of existing sheds; already, people keep tweeting disapproving photos at sanitation departments. Part of the issue is that as most Americans’ COVID concerns retreat, the potential downsides have gotten harder to overlook: less parking, more trash, tacky aesthetics, and, oh God, the rats. Many top New York restaurants have voluntarily gotten rid of their sheds this winter.

The economics of outdoor dining may no longer make sense for restaurants, either. Although it was lauded as a boon to struggling restaurants during the height of the pandemic, the practice may make less sense now that indoor dining is back. For one thing, dining sheds tend to take up parking spaces needed to attract customers, Cutting-Jones said. The fact that most restaurants are chains doesn’t help: “If whatever conglomerate owns Longhorn Steakhouse doesn’t want to invest in outdoor dining, it will not become the norm,” Rebecca Spang, a food historian at Indiana University Bloomington, told me. Besides, she added, many restaurants are already short-staffed, even without the extra seats.

In a sense, outdoor dining was doomed to fail. It always ran counter to the physical makeup of most of the country, as anyone who ate outside during the pandemic inevitably noticed. The most obvious constraint is the weather, which is sometimes pleasant but is more often not. “Who wants to eat on the sidewalk in Phoenix in July?” Spang said.

The other is the uncomfortable proximity to vehicles. Dining sheds spilled into the streets like patrons after too many drinks. The problem was that U.S. roads were built for cars, not people. This tends not to be true in places renowned for outdoor dining, such as Europe, the Middle East, and Southeast Asia, which urbanized before cars, Megan Elias, a historian and the director of the gastronomy program at Boston University, told me. At best, this means that outdoor meals in America are typically enjoyed with a side of traffic. At worst, they end in dangerous collisions.

Cars and bad weather were easier to put up with when eating indoors seemed like a more serious health hazard than breathing in fumes and trembling with cold. It had a certain romance—camaraderie born of discomfort. You have to admit, there was a time when cozying up under a heat lamp with a hot drink was downright charming. But now outdoor dining has gone back to what it always was: something that most Americans would like to avoid in all but the most ideal of conditions. This sort of relapse could lead to fewer opportunities to eat outdoors even when the weather does cooperate.

But outdoor dining is also affected by more existential issues that have outlasted nearly three years of COVID life. Eating at restaurants is expensive, and Americans like to get their money’s worth. When safety isn’t a concern, shelling out for a streetside meal may simply not seem worthwhile for most diners. “There’s got to be a point to being outdoors, either because the climate is so beautiful or there’s a view,” Paul Freedman, a Yale history professor specializing in cuisine, told me. For some diners, outdoor seating may feel too casual: Historically, Americans associated eating at restaurants with special occasions, like celebrating a milestone at Delmonico’s, the legendary fine-dining establishment that opened in the 1800s, Cutting-Jones said.

Eating outdoors, in contrast, was linked to more casual experiences, like having a hot dog at Coney Island. “We have high expectations for what dining out should be like,” she said, noting that American diners are especially fussy about comfort. Even the most opulent COVID cabin may be unable to override these associations. “If the restaurant is going to be fancy and charge $200 a person,” said Freedman, most people can’t escape the feeling of having spent that much for “a picnic on the street.”

Outdoor dining isn’t disappearing entirely. In the coming years there’s a good chance that more Americans will have the opportunity to eat outside in the nicer months than they did before the pandemic—even if it’s not the widespread practice many anticipated earlier in the pandemic. Where it continues, it will almost certainly be different: more buttoned-up, less lawless—probably less exciting. Santa Barbara, for example, made dining sheds permanent last year but specified that they must be painted an approved “iron color.” It may also be less popular among restaurant owners: If outdoor-dining regulations are too far-reaching or costly, cautioned Hayrettin Günç, an architect with Global Designing Cities Initiative, that will “create barriers for businesses.”

For now, outdoor dining is yet another COVID-related convention that hasn’t quite stuck—like avoiding handshakes and universal remote work. As the pandemic subsides, the tendency is to default to the ways things used to be. Doing so is easier, certainly, than coming up with policies to accommodate new habits. In the case of outdoor dining, it’s most comfortable, too. If this continues to be the case, then outdoor dining in the U.S. may return to what it was before the pandemic: dining “al fresco” along the streetlamp-lined terraces of the Venetian Las Vegas, and beneath the verdant canopy of the Rainforest Cafe.

Airplane bathrooms are not most people’s idea of a good time. They’re barely big enough to turn around in. Their doors stick, like they’re trying to trap you in place. That’s to say nothing of the smell. But to the CDC, those same bathrooms might be a data gold mine.

This month, the agency has been speaking with Concentric, the public-health and biosecurity arm of the biotech company Ginkgo Bioworks, about screening airplane wastewater for COVID-19 at airports around the country. Although plane-wastewater testing had been in the works already (a pilot program at John F. Kennedy International Airport, in New York City, concluded last summer), concerns about a new variant arising in China after the end of its “zero COVID” policies acted as a “catalyst” for the project, Matt McKnight, Ginkgo’s general manager for biosecurity, told me. According to Ginkgo, even airport administrators are getting excited. “There have been a couple of airports who have actually reached out to the CDC to ask to be part of the program,” Laura Bronner, Ginkgo’s vice president of commercial strategies, told me.

Airplane-wastewater testing is poised to revolutionize how we track the coronavirus’s continued mutations around the world, along with other common viruses such as flu and RSV—and public-health threats that scientists don’t even know about yet. Unlike sewer-wide surveillance, which shows us how diseases are spreading among large communities, airplane surveillance is precisely targeted to catch new variants entering the country from abroad. And unlike with PCR testing, passengers don’t have to individually opt in. (The results remain anonymous either way.) McKnight compares the technique to radar: Instead of responding to an attack after it’s unfolded, America can get advance warning about new threats before they cause problems. As we enter an era in which most people don’t center their lives on avoiding COVID-19, our best contribution to public health might be using a toilet at 30,000 feet.

Fundamentally, wastewater testing on airplanes is a smaller-scale version of the surveillance that has been taking place at municipal water networks since early 2020: Researchers perform genetic testing on sewage samples to determine how much coronavirus is present, and which variants are included. But adapting the methodology to planes will require researchers to get creative. For one thing, airplane wastewater has a higher solid-to-liquid ratio. Municipal sewage draws from bathing, cooking, washing clothes, and other activities, whereas airplane sewage is “mainly coming from the toilet,” says Kata Farkas, a microbiologist at Bangor University. For a recent study tracking COVID-19 at U.K. airports, Farkas and her colleagues had to adjust their analytical methods, tweaking the chemicals and lab techniques used to isolate the coronavirus from plane sewage.

Researchers also need to select flights carefully to make sure the data they gather are worth the effort of collecting them. To put it bluntly, not everyone poops on the plane—and if the total number of sampled passengers is very small, the analysis isn’t likely to return much useful data. “The number of conversations we’ve had about how to inconspicuously know how many people on a flight have gone into a lavatory is hysterical,” says Casandra Philipson, who leads the Concentric bioinformatics program. (Concentric later clarified that they do not have plans to actually monitor passengers’ bathroom use.) Researchers ended up settling on an easier metric: Longer flights tend to have more bathroom use and should therefore be the focus of wastewater testing. (Philipson and her colleagues also work with the CDC to test flights from countries where the government is particularly interested in identifying new variants.)

[Read: Are our immune systems stuck in 2020?]

Beyond those technical challenges, scientists face the daunting task of collaborating with airports and airlines—large companies that aren’t used to participating in public-health surveillance. “It is a tricky environment to work in,” says Jordan Schmidt, the director of product applications at LuminUltra, a Canadian biotech company that tests wastewater at Toronto Pearson Airport. Strict security and complex bureaucracies in air travel can make collecting samples from individual planes difficult, he told me. Instead, LuminUltra samples from airport terminals and from trucks that pull sewage out of multiple planes, so the company doesn’t need to get buy-in from airlines.

Airplane surveillance seeks to track new variants, not individual passengers: Researchers are not contact-tracing exactly which person brought a particular virus strain into the country. For that reason, companies such as Concentric aren’t planning to alert passengers that COVID-19 was found on their flight, much as some of us might appreciate that warning. Testing airplane sewage can identify variants from around the world, but it won’t necessarily tell us about new surges in the city where those planes land.

Airplane-wastewater testing offers several advantages for epidemiologists. In general, testing sewage is “dramatically cheaper” and “dramatically less invasive” than nose-swab testing each individual person in a town or on a plane, says Rob Knight, a medical engineering professor at UC San Diego who leads the university’s wastewater-surveillance program. Earlier this month, a landmark report from the National Academies of Sciences, Engineering, and Medicine (which Knight co-authored) highlighted international airports as ideal places to seek out new coronavirus variants and other pathogens. “You’re going to capture people who are traveling from other parts of the world where they might be bringing new variants,” Knight told me. And catching those new variants early is key to updating our vaccines and treatments to ensure that they continue to work well against COVID-19. Collecting more data from people traveling within the country could be useful too, Knight said, since variants can evolve at home as easily as abroad. (XBB.1.5, the latest variant dominating COVID-19 spread in the U.S., is thought to have originated in the American Northeast.) To this end, he told me, the CDC should consider monitoring large train stations or seaports too.

[Read: The COVID data that are actually useful now]

When wastewater testing first took off during the pandemic, the focus was mostly on municipal facilities, because they could provide data for an entire city or county at once. But scientists have since realized that a more specific view of our waste can be helpful, especially in settings that are crucial for informing public-health actions. For example, at NYC Health + Hospitals, the city’s public health-care system, wastewater data help administrators “see 10 to 14 days in advance if there are any upticks” in coronavirus, flu, or mpox, Leopolda Silvera, Health + Hospitals’ global-health deputy, told me. Administrators use the data in decisions about safety measures and where to send resources, Silvera said: If one hospital’s sewage indicates an upcoming spike in COVID-19 cases, additional staff can be added to its emergency department.

Schools are another obvious target for small-scale wastewater testing. In San Diego, Rebecca Fielding-Miller directed a two-year surveillance program for elementary schools. It specifically focused on underserved communities, including refugees and low-income workers who were hesitant to seek out PCR testing. Regular wastewater testing picked up asymptomatic cases with high accuracy, providing school staff and parents with “up to the minute” information about COVID-19 spread in their buildings, Fielding-Miller told me. This school year, however, funding for the program ran out.

Even neighborhood-level surveillance, while not as granular as sampling at a plane, hospital, or school, can provide more useful data than city-wide testing. In Boston, “we really wanted hyperlocal surveillance” to inform placements of the city’s vaccine clinics, testing sites, and other public-health services, says Kathryn Hall, the deputy commissioner at the city’s public-health agency. She and her colleagues identified 11 manhole covers that provide “good coverage” of specific neighborhoods and could be tested without too much disruption to traffic. When a testing site lights up with high COVID-19 numbers, Hall’s colleagues reach out to community organizations such as health centers and senior-living facilities. “We make sure they have access to boosters, they have access to PPE, they understand what’s going on,” Hall told me. In the nearby city of Revere, a similar program run by the company CIC Health showed an uptick in RSV in neighborhood wastewater before the virus started making headlines. CIC shared the news with day-care centers and helped them respond to the surge with educational information and PPE.

[Read: Whatever happened to toilet plumes?]

According to wastewater experts, hyperlocal programs can’t usher in a future of disease omniscience all by themselves. Colleen Naughton, an environmental-engineering professor at UC Merced who runs the COVIDPoops19 dashboard, told me she would like to see communities with no wastewater surveillance get resources to set it up before more funding goes into testing individual buildings or manhole covers. The recent National Academies report presents a future of wastewater surveillance that includes both broad monitoring across the country and testing targeted to places where new health threats might emerge or where certain communities need local information to stay safe.

This future will require sustained federal funding beyond the current COVID-19 emergency, which is set to expire if the Biden administration does not renew it in April. The United States needs “better and more technology, with a funding model that supports its development,” in order for wastewater’s true potential to be realized, Knight said. Airplane toilets may very well be the best first step toward that comprehensive sewage-surveillance future.

Of the dozens of hormones found in the human body, oxytocin might just be the most overrated. Linked to the pleasures of romance, orgasms, philanthropy, and more, the chemical has been endlessly billed as the “hug hormone,” the “moral molecule,” even “the source of love and prosperity.” It has inspired popular books and TED Talks. Scientists and writers have insisted that spritzing it up human nostrils can instill compassion and generosity; online sellers have marketed snake-oil oxytocin concoctions as “Liquid Trust.”

But as my colleague Ed Yong and others have repeatedly written, most of what’s said about the hormone is, at best, hyperbole. Sniffing the chemical doesn’t reliably make people more collaborative or trusting; trials testing it as a treatment for children with autism spectrum disorder have delivered lackluster results. And although decades of great research have shown that the versatile molecule can at times spark warm fuzzies in all sorts of species—cooperation in meerkats, monogamy in prairie voles, parental care in marmosets and sheep—under other circumstances, oxytocin can turn creatures from rodents to humans aggressive, fearful, even prejudiced.

Now researchers are finding that oxytocin may be not only insufficient for forging strong bonds, but also unnecessary. A new genetic study hints that prairie voles—fluffy, fist-size rodents that have long been poster children for oxytocin’s snuggly effects—can permanently partner up without it. The revelation could shake the foundations of an entire neuroscience subfield, and prompt scientists to reconsider some of the oldest evidence that once seemed to show that oxytocin was the be-all and end-all for animal affection. Cuddles, it turns out, can probably happen without the classic cuddle hormone—even in the most classically cuddly creatures of all.

Oxytocin isn’t necessarily obsolete. “This shouldn’t be taken as, ‘Oh, oxytocin doesn’t do anything,’” says Lindsay Sailer, a neuroscientist at Cornell University. But researchers have good reason to be a bit gobsmacked. For all the messy, inconsistent, even shady data that have been gathered from human studies of the hormone, the evidence from prairie voles has always been considered rock-solid. The little rodents, native to the midwestern United States, are famous for being one of the few mammal species that monogamously mate for life and co-parent their young. Over many decades and across geographies, researchers have documented how the rodents nuzzle each other in their nests and console each other when stressed, how they aggressively rebuff the advances of other voles that attempt to homewreck. And every time they checked, “there was oxytocin, sitting in the middle of the story, over and over again,” says Sue Carter, a behavioral neurobiologist who pioneered some of the first studies on prairie-vole bonds. The molecular pathways driving the behaviors seemed just as clear-cut: When triggered by a social behavior, such as snuggling or sex, a region of the brain called the hypothalamus pumped out oxytocin; the hormone then latched on to its receptor, sparking a slew of lovey-dovey effects.

Years of follow-up studies continued to bear that thinking out. When scientists gave prairie voles drugs that kept oxytocin from linking up with its receptor, the rodents started snubbing their partners after any tryst. Meanwhile, simply stimulating the oxytocin receptor was enough to coax voles into settling down with strangers that they’d never mated with. The connection between oxytocin and pair bonding was so strong, so repeatable, so unquestionable that it became dogma. Zoe Donaldson, a neuroscientist at the University of Colorado at Boulder who studies the hormone, recalls once receiving dismissive feedback on a grant because, in the words of the reviewer, “We already know everything that there is to know about prairie voles and oxytocin.”

So more than a decade ago, when Nirao Shah, a neurogeneticist and psychiatrist at Stanford, and his colleagues set out to cleave the oxytocin receptor from prairie voles using a genetic technique called CRISPR, they figured that their experiments would be a slam dunk. Part of the goal was, Shah told me, proof of principle: Researchers have yet to perfect genetic tools for voles the way they have in more common laboratory animals, such as mice. If the team’s manipulations worked, Shah reasoned, they’d beget a lineage of rodents that was immune to oxytocin’s influence, leaving them unfaithful to their mates and indifferent to their young—thereby proving that the CRISPR machinery had done its job.

That’s not what happened. The rodents continued to snuggle up with their families, as if nothing had changed. The find was baffling. At first, the team wondered if the experiment had simply failed. “I distinctly remember sitting there and just being like, Wait a sec; how is there not a difference?” Kristen Berendzen, a neurobiologist and psychiatrist at UC San Francisco who led the study, told me. But when three separate teams of researchers repeated the manipulations, the same thing happened again. It was as if they had successfully removed a car’s gas tank and still witnessed the engine roaring to life after an infusion of fuel. Something might have gone wrong in the experiments. That seems unlikely, though, says Larry Young, a neuroscientist at Emory University who wasn’t involved in the new study: Young’s team, he told me, has produced nearly identical results in his lab.

The explanations for how decades of oxytocin research could be upended are still being sussed out. Maybe oxytocin can attach to more than one hormone receptor—something that studies have hinted at over the years, Carter told me. But some researchers, Young among them, suspect a more radical possibility. Maybe, in the absence of its usual receptor, oxytocin no longer does anything at all—forcing the brain to blaze an alternative path toward affection. “I think other things pick up the slack,” Young told me.

That idea isn’t a total repudiation of the old research. Other prairie-vole experiments that used drugs to futz with oxytocin receptors were performed in adult animals who grew up with the hormone, says Devanand Manoli, a psychiatrist and neuroscientist at UCSF who helped lead the new study. Wired to respond to oxytocin all through development, those rodent brains couldn’t compensate for its sudden loss late in life. But the Stanford-UCSF team bred animals that lacked the oxytocin receptor from birth, which could have prompted some other molecule, capable of binding to another receptor, to step in. Maybe the car never needed gas to run: Stripped of its tank from the get-go, it went all electric instead.

It would be easy to view this study as yet another blow to the oxytocin propaganda machine. But the researchers I spoke with think the results are more revealing than that. “What this shows us is how important pair bonding is,” Carter told me—to prairie voles, but also potentially to us. For social mammals, partnering up isn’t just sentimental. It’s an essential piece of how we construct communities, survive past childhood, and ensure that future generations can do the same. “These are some of the most important relationships that any mammal can have,” says Bianca Jones Marlin, a neuroscientist at Columbia University. When oxytocin’s around, it’s probably providing the oomph behind that intimacy. And if it’s not? “Evolution is not going to have a single point of failure for something that’s absolutely critical,” Manoli told me. Knocking oxytocin off its pedestal may feel like a letdown. But it’s almost comforting to consider that the drive to bond is just that unbreakable.