Five months into its unprecedented dismantling of foreign-aid programs, the Trump administration has given the order to incinerate food instead of sending it to people abroad who need it. Nearly 500 metric tons of emergency food—enough to feed about 1.5 million children for a week—are set to expire tomorrow, according to current and former government employees with direct knowledge of the rations. Within weeks, two of those sources told me, the food, meant for children in Afghanistan and Pakistan, will be ash. (The sources I spoke with for this story requested anonymity for fear of professional repercussions.)

Sometime near the end of the Biden administration, USAID spent about $800,000 on the high-energy biscuits, one current and one former employee at the agency told me. The biscuits, which pack the nutritional needs of a child under 5 into a compact ration, are a stopgap measure, often used in scenarios where people have lost their homes in a natural disaster or fled a war faster than aid groups could set up a kitchen to receive them. They were stored in a Dubai warehouse and intended to go to the children this year.

Since January, when the Trump administration issued an executive order that halted virtually all American foreign assistance, federal workers have sent the new political leaders of USAID repeated requests to ship the biscuits while they were useful, according to the two USAID employees. USAID bought the biscuits intending to have the World Food Programme distribute them, and under previous circumstances, career staff could have handed off the biscuits to the United Nations agency on their own. But since Elon Musk’s Department of Government Efficiency disbanded USAID and the State Department subsumed the agency, no money or aid items can move without the approval of the new heads of American foreign assistance, several current and former USAID employees told me. From January to mid-April, the responsibility rested with Pete Marocco, who worked across multiple agencies during the first Trump administration; then it passed to Jeremy Lewin, a law-school graduate in his 20s who was originally installed by DOGE and now has appointments at both USAID and State. Two of the USAID employees told me that staffers who sent the memos requesting approval to move the food never got a response and did not know whether Marocco or Lewin ever received them. (The State Department did not answer my questions about why the food was never distributed.)

In May, Secretary of State Marco Rubio told representatives on the House Appropriations Committee that he would ensure that food aid would reach its intended recipients before spoiling. But by then, the order to incinerate the biscuits (which I later reviewed) had already been sent. Rubio has insisted that the administration embraces America’s responsibility to continue saving foreign lives, including through food aid. But in April, according to NPR, the U.S. government eliminated all humanitarian aid to Afghanistan and Yemen, where, the State Department said at the time, providing food risks benefiting terrorists. (The State Department has offered no similar justification for pulling aid to Pakistan.) Even if the administration were unwilling to send the biscuits to the originally intended countries, other places—Sudan, say, where war is fueling the world’s worst famine in decades—could have benefited. Instead, the biscuits in the Dubai warehouse continue to approach their expiration date, after which their vitamin and fat content will begin to deteriorate rapidly. At this point, United Arab Emirates policy prevents the biscuits from even being repurposed as animal feed.

Over the coming weeks, the food will be destroyed at a cost of $130,000 to American taxpayers (on top of the $800,000 used to purchase the biscuits), according to current and former federal aid workers I spoke with. One current USAID staffer told me he’d never seen anywhere near this many biscuits trashed over his decades working in American foreign aid. Sometimes food isn’t stored properly in warehouses, or a flood or a terrorist group complicates deliveries; that might result in, at most, a few dozen tons of fortified foods being lost in a given year. But several of the aid workers I spoke with reiterated that they have never before seen the U.S. government simply give up on food that could have been put to good use.

The emergency biscuits slated for destruction represent only a small fraction of America’s typical annual investment in food aid. In fiscal year 2023, USAID purchased more than 1 million metric tons of food from U.S. producers. But the collapse of American foreign aid raises the stakes of every loss. Typically, the biscuits are the first thing that World Food Programme workers hand to Afghan families who are being forced out of Pakistan and back to their home country, which has been plagued by severe child malnutrition for years. Now the WFP can support only one of every 10 Afghans who are in urgent need of food assistance. The WFP projects that, globally, 58 million people are at risk for extreme hunger or starvation because, this year, it lacks the money to feed them. Based on calculations from one of the current USAID employees I spoke with, the food marked for destruction could have met the nutritional needs of every child facing acute food insecurity in Gaza for a week.

Despite the administration’s repeated promises to continue food aid, and Rubio’s testimony that he would not allow existing food to go to waste, even more food could soon expire. Hundreds of thousands of boxes of emergency food pastes, also already purchased, are currently collecting dust in American warehouses. According to USAID inventory lists from January, more than 60,000 metric tons of food—much of it grown in America, and all already purchased by the U.S. government—were then sitting in warehouses across the world. That included 36,000 pounds of peas, oil, and cereal, which were stored in Djibouti and intended for distribution in Sudan and other countries in the Horn of Africa. A former senior official at USAID’s Bureau for Humanitarian Assistance told me that, by the time she’d left her job earlier this month, very little of the food seemed to have moved; one of the current USAID employees I spoke with confirmed her impression, though he noted that, in recent weeks, small shipments have begun leaving the Djibouti warehouse.

[Read: ‘In three months, half of them will be dead’]

Such operations are more difficult for USAID to manage today than they were last year because many of the humanitarian workers and supply-chain experts who once coordinated the movement of American-grown food to hungry people around the world no longer have their jobs. Last month, the CEOs of the two American companies that make another kind of emergency food for malnourished children both told The New York Times that the government seemed unsure of how to ship the food it had already purchased. Nor, they told me, have they received any new orders. (A State Department spokesperson told me that the department had recently approved additional purchases, but both CEOs told me they have yet to receive the orders. The State Department has not responded to further questions about these purchases.) But even if the Trump administration decides tomorrow to buy more food aid—or simply distribute what the government already owns while the food is still useful—it may no longer have the capacity to make sure anyone receives it.

Donald Trump—who is, by his own accounting, “the fertilization president” and “the father of IVF”—wants to help Americans reproduce. During his 2024 campaign, he promised that the government or insurance companies would cover the cost of in vitro fertilization. In February, he issued an executive order promising a plan to expand access to the procedure and reduce its steep cost. (The administration has yet to release this plan, but the White House spokesperson Kush Desai told me that the president’s advisers have completed their recommendations.)

In its broader push to boost the U.S. birth rate, the Trump administration has increased the child tax credit, implemented a new $1,000 baby bonus, and, according to reporting by The New York Times, floated affirmative action for parents who apply for Fulbright scholarships. But Trump’s push to expand IVF exposes a fault line in modern conservatism’s approach to fertility treatments in particular: Some pronatalists view the procedure and other fertility technologies as essential tools to reverse declining birth rates, but others, including many anti-abortion activists, are pressing for legal protections for the embryos that might be discarded or damaged during IVF. The latter group has instead coalesced around alternative fertility treatments, which it claims will boost birth rates while prompting a broader reexamination of the U.S. fertility industry.

This debate poses an obstacle to any easy policy wins for the Trump administration on IVF. But the conversation also routinely overlooks a major part of the fertility equation: men. If the Trump administration is serious about boosting fertility without alienating either its pro- or anti-IVF constituents, expanding access to infertility treatments specifically for men could offer a meaningful—and perhaps politically viable—path forward.

For decades, reproductive care in the United States has been considered a women’s issue. Among heterosexual couples struggling to conceive, infertility is roughly as likely to stem from male factors as from female ones. Yet in up to 25 percent of infertility cases, the male partner is never evaluated. Male infertility can sometimes be treated with hormone therapy or surgical correction of physical blockages. But male-infertility care is less likely to be included in state insurance laws than female treatments such as IVF. Plus, in most cases, “you can bypass male-infertility problems by just treating the woman more aggressively, even if she doesn’t have fertility issues herself,” Peter Schlegel, a urologist and male-infertility specialist who runs New York Men’s Health Medical, told me. According to CDC data, approximately one in six IVF cycles is initiated solely due to male infertility.

That means women disproportionately bear the medical and emotional demands of fertility treatment. IVF typically requires women to undergo daily hormone injections and invasive procedures. Hormone treatments can cause nausea, mood swings, bloating, and bruising at the injection site. Egg retrieval typically involves anesthesia, at least 24 hours of rest afterward, and days of recovery for lingering symptoms. Most people who use IVF need multiple cycles to conceive, and recent research has raised concerns about possible long-term health consequences from repeated treatments, including elevated cancer risks.

It’s no wonder, then, that patients and policy makers have been looking for alternatives to IVF. The Heritage Foundation, an influential conservative think tank that opposes abortion and has described the American IVF industry as the “Wild West,” has called for the U.S. government to embrace restorative reproductive medicine, or RRM. This model, which originated in the 1970s as a natural family-planning method, focuses on identifying and treating what proponents call the “root causes” of infertility, including hormonal imbalances and diseases such as endometriosis; IVF is a last resort. Some vocal RRM proponents reject the procedure outright, arguing that it treats embryos as commodities and women as vessels, subjecting them to expensive, dehumanizing procedures.

The American Society for Reproductive Medicine, which opposes restrictions on both IVF and abortion, has dismissed RRM as a “rebranding of standard medical practice” designed to stop short of the full range of modern pregnancy care. “Instead of getting 21st-century treatment based on a Nobel Prize–winning technology, anti-abortion groups like the Heritage Foundation want patients to have medicine circa 1977,” Sean Tipton, ASRM’s chief advocacy and policy officer, told me. RRM supporters, in turn, argue that they’re simply making room for less invasive and lower-cost options. (A single cycle of IVF currently costs $15,000 to $20,000, and treatments are usually paid out of pocket.) “IVF is high-tech. What we do is more humdrum,” says Phil Boyle, the president of the International Institute for Restorative Reproductive Medicine and a contributor to the Heritage Foundation’s recent report on RRM. He also told me that RRM encourages careful evaluation of both partners, potentially reducing the burden of treatment on women.

[Read: The pro-baby coalition of the far right]

Even so, RRM often requires women to engage in meticulous cycle tracking and hormone monitoring, leaving them to shoulder the ongoing work of managing and measuring their biology in service of pregnancy. This emphasis on women’s bodies and behaviors is especially conspicuous amid a broader cultural preoccupation with male virility. Health Secretary Robert F. Kennedy Jr. has warned of the “existential problem” posed by declining testosterone levels and sperm counts in teenage boys. (He has offered dubious comparisons in the process, claiming that adolescent boys now have less testosterone than 68-year-old men.) Online, male-health influencers blame falling fertility on pesticides and plant-based diets, and advise their followers to eat more meat and avoid processed foods.

And yet, for all the public hand-wringing over male infertility, medical treatments for it remain absent from policy conversations. In the months since the White House issued its executive order on IVF access, the administration does not appear to have made any mention of improving access to male-infertility care. (When I asked Desai last month about male-fertility proposals, he declined to answer the question.) The Heritage Foundation has vigorously advocated for RRM, yet its policy papers and lobbying efforts do not prioritize male-specific treatments, including semen analysis, hormone testing, and surgeries that can correct some forms of severe male infertility. Its RRM report does make passing mention of interventions for men, such as improving diet and managing insulin resistance, but its recommendations are overwhelmingly directed at women. Fertility-related proposals from both sides of the aisle have likewise scarcely addressed male-specific infertility treatments, according to data from RESOLVE, a nonprofit advocacy organization that supports awareness of the full spectrum of infertility-treatment options.

Policy changes to improve male fertility are both feasible and potentially far-reaching. Access to reproductive urologists is deeply uneven across the U.S., which contributes to the chronic underdiagnosis of male-factor infertility. One 2010 study found that 13 states had no specialists for male infertility at all. To help close this gap, federal agencies could fund additional fellowship positions or loan-repayment programs for male-fertility specialists who commit to working in medically underserved areas. States could also revise telemedicine laws, which sometimes bar out-of-state providers from treating patients remotely. Lawmakers could mandate that insurance companies cover key services and invest in labs that are developing and testing new therapies, such as stem-cell-based sperm regeneration.

[Read: A less brutal alternative to IVF]

For the foreseeable future, IVF will remain irreplaceable for some families, including single parents, heterosexual couples whose future children are at high risk of genetic anomalies, and LGBTQ couples pursuing reciprocal IVF, in which one parent provides the egg and another carries the pregnancy. But advocates across the IVF debate agree that patients need more options, and right now, many don’t have them. Supporters and critics of IVF, including ASRM and the Heritage Foundation, told me they support greater insurance coverage for male-infertility care.

If coverage expands for IVF but not for other fertility treatments, more patients will be routed toward it, even when less invasive or more targeted options might work just as well. A more forward-looking fertility policy would mean not just increasing IVF access but also expanding whose bodies—and whose health—should be the focus of U.S. reproductive care.

The MAHA diet is full of fussy advice: swap the seed oils for beef tallow, cut out the ultra-processed snacks and synthetic food dyes, slap on a continuous glucose monitor to track how your blood sugar fluctuates with each bite. Robert F. Kennedy Jr. and his “Make America healthy again” followers have such strong feelings about food because bad eating habits are making people sick. Many MAHA acolytes are equally particular about the need for a good night’s sleep.

In their best-selling book, Good Energy, Casey and Calley Means, siblings who are both close Kennedy confidants, warn that even the best eating habits cannot make up for bad sleep: “You could eat a perfect ‘Good Energy’ diet, but if you don’t sleep, your cells will spew out excess free radicals.” (Casey is Donald Trump’s surgeon-general nominee, and Calley is a special White House adviser.) Should the family dog be a nuisance at night, it may necessitate “intensive pet training or finding a new home,” they say. Other MAHA figures have similarly suggested making hard choices in the name of prioritizing sleep; Gary Brecka, a self-described biohacker who recently hosted Kennedy on his podcast, Ultimate Human, has recommended a $3,000 smart mattress cover. In typical MAHA fashion, some tips veer into the unscientific and even absurd. Mark Hyman, a longtime friend of Kennedy’s who runs a wellness empire, has outlined a “simple sleep routine” that includes throwing away plug-in air fresheners, staying away from plastic food containers, and even building a Faraday cage over your bed to keep away electromagnetic waves.

Americans have been told over and over again to sleep more, with limited success. Nearly 40 percent of adults aren’t getting enough rest, according to the CDC. The MAHA movement has good reasons to keep hammering the message. Poor sleep exacerbates many of the chronic conditions that the movement is focused on remedying. People who don’t get enough shut-eye are at higher risk of heart disease and obesity. Even a week of sleep troubles can lead to glucose-processing issues similar to those experienced by people with type 2 diabetes. In May, the Trump administration’s MAHA Commission published a long-awaited report on the causes of chronic disease among children; sleep is mentioned more than 20 times. (Calley Means apparently spearheaded the report.)

[Read: Why can’t Americans sleep?]

When it comes to actual interventions and policies, however, sleep has been notably absent from the administration’s planning. As health secretary, Kennedy has had some success cracking down on food dyes and enacting anti-vaccine policies, but he hasn’t laid out anything close to a plan for addressing the country’s sleep problem. The same can be said of state legislators who have been eager to implement MAHA policies. As a cause, sleep is a great illustration of MAHA’s challenges: It’s easy to make the point that Americans are unhealthy. It’s much harder to actually fix it.

Kennedy seems less focused on sleep than other MAHA leaders are—something that goes back to before he was health secretary. He hasn’t mentioned sleep in any speeches since being confirmed for his job. When asked about Kennedy’s views on sleep, a Department of Health and Human Services spokesperson told me that “Secretary Kennedy supports a science-driven approach to health promotion, with an emphasis on raising awareness of lifestyle factors that contribute to long-term wellness.” There is good reason to think that RFK Jr. believes that sleep is an important part of improving America’s health: He has suffered health scares in the past, partially due to sleep deprivation, and in response has prioritized getting more sleep, The New York Times reported last year.

His silence on sleep might have to do with the fact that a good night’s rest is especially difficult to legislate. Sure, the government cannot take a cheeseburger out of someone’s mouth, but it can do a lot to change food habits: tweaking what items can be purchased with food stamps, rewriting the rules for what is served in schools, putting warning labels on unhealthy foods, even banning certain ingredients. There isn’t a similar playbook for sleep.

That’s not to say there are no policies that could help. Take teens: Three-fourths of high schoolers do not get the recommended eight hours of sleep per night, according to the CDC. One of the key reasons is that their routine doesn’t match their biology. During puberty, adolescents naturally fall asleep and wake up later. This phenomenon, known as sleep-phase delay, is why first period is so torturous for many high schoolers. Several sleep experts I spoke with suggested that school shouldn’t start so early, which the Means siblings also endorse in their book. When Seattle’s school district pushed its start time back by roughly an hour, students reported about 30 extra minutes of sleep per night. But Kennedy has little power to influence the education system. And even if he were to convince the Department of Education to endorse such a policy, states and localities would likely be the ones to implement it.

Coming up with policies to address sleep is all the more challenging because different groups are falling short on rest for different reasons. Some people are deprived of sleep because they live in loud or dangerous areas. Many people are staying up working—or late-night scrolling. (We know of one president doing so, at least.)

The policy challenges might not stop RFK Jr. from lamenting America’s sleep woes. After all, no health problem is straightforward, and Kennedy has advocated for several food changes that he doesn’t have the power to implement as the head of HHS. Like Hyman, perhaps he could give a speech outlining his own sleep hacks. Maybe he could go the way of Brecka and promote gadgets that promise to improve sleep. (Last week, Kennedy declared that “wearables are a key to the MAHA agenda” and that he envisions every American wearing one within four years.) Or perhaps he, like the Means siblings, will just give Americans some hard truths about the importance of rest. The HHS secretary has shown himself to be an expert at riling crowds by channeling nostalgia for a bygone era, and America’s sleep habits have gotten worse over the years. In 1942, 84 percent of adults were getting at least seven hours of sleep each night, according to a Gallup poll. By 2023, that number had dropped to 52 percent.

But sleep doesn’t lend itself to a rallying cry in the way that other aspects of the MAHA agenda do. The movement’s critiques of our poor diets have been so galvanizing because there is a clear enemy to organize around: the food industry. The same can be said of RFK Jr.’s anti-vaccine activism: Pharmaceutical companies make for an easy scapegoat, even though they aren’t making products that cause autism. This process of determining the source of a problem and assigning blame is an essential part of any social movement, sociologists have suggested, and it is often what motivates action. If everyone were spending their nights tossing and turning on barbed-coil springs, perhaps a campaign could be waged against Big Mattress. But sleep is such a multifaceted problem that it’s difficult to generate a single, unifying enemy.

That doesn’t mean sleep is a losing issue for the MAHA universe. The fact that there’s a market for $3,000 mattress pads demonstrates just how desperate people are for a solution to their sleep woes. But without articulating a clear theory for why Americans’ sleep has suffered, anything Kennedy says about sleep will make him look less like a reformer and more like a self-help guru eager to sell another cure.

I used to love my Teflon pans. I crisped tofu, fried latkes, and reduced sauces to sticky glazes in them, marveling at how cleanup never took more than a swipe of a sponge. Then I started to worry that my skillets might kill me.

The lining on the inside of a nonstick pan is made of plastic. When heated, it can release toxic fumes; when scratched, it can chip off, blending in with tasty bits of char and grains of pepper. “Data indicates that there are no health effects from the incidental ingestion of nonstick coating flakes,” the company that produces Teflon says, noting that the government has deemed the cookware “safe for consumer use.” Still, it warns people to turn their burners down and air vents up when they use their nonstick pans, and to avoid preheating them empty.

Other data, a lot of data, suggest that ingesting plastic can damage your organs, suppress your immune system, harden your veins, and predispose you to neurodegenerative diseases and cancer. Pet birds have died of the “Teflon flu” after breathing in the smoke from their owners’ overheated pans. (Birds’ lungs are especially susceptible to toxic gases.) A story about a budgie did it for me. I tossed my nonstick pans into the trash, over my husband’s objections.

Thus began my slowly escalating, dimly informed campaign to rid my body and life of plastics. I heard a local-radio report on colorectal cancer and impulse-purchased metal baby spoons for my kids at 3 a.m. I recalled a column on endocrine disrupters from who knows when and started drinking my iced coffee from a metal-lined tumbler. I read something about how flexible plastic is particularly problematic and threw out the cling wrap. I got rid of our black plastic spatulas too, after one of my colleagues reported that they might contain flame retardant, which you’re really not supposed to eat.

[Read: Throw out your black plastic spatula]

I was doing my own research, by which I mean I was taking in data from disparate sources with differing degrees of credibility on a bewilderingly complicated issue and analyzing it with sophomore-year scientific literacy before making consumer decisions driven by single-issue neuroticism and a penchant for online shopping. I was also annoying the bejesus out of my husband, who kept asking where the pancake flipper had gone.

Then I read an article suggesting that microplastics might be behind the increasing incidence of type 1 diabetes, which I happen to have. I recalled all the molten Stouffer’s lasagnas I had eaten as a kid. I needed to do something right now, but I realized that I had already purged the obvious offenders from the kitchen.

Before I could buy something expensive and relax, I stopped, for once. Was I actually reducing my exposure to dangerous chemicals? Was my family safer than it had been before I began my campaign? What kinds of plastic are truly dangerous in the first place? I had no idea. More than I wanted to spend hundreds of dollars at Williams-Sonoma, I wanted to know my enemy.

An encomium for the adversary: Plastics are amazing. The synthetic polymers are light and inexpensive, moldable and waterproof, stretchy and resilient. They are also new. The fax machine was invented before plastic was. Plastics have made us safer in a thousand ways: Much-castigated plastic water bottles make the storage and transportation of clean drinking water easy; single-use surgical gear is better at preventing infection than boiled linen. Plastics have also dramatically cut the cost of making and moving things, powering our modern consumer economy no less than gas and electricity have.

[Judith Enck and Jan Dell: Plastic recycling doesn’t work and will never work]

Plastics are the consumer economy, to a remarkable extent. I knew that fleece and diapers were made from plastic. I was surprised to find out that tea bags, sponges, glitter, paint, cigarette filters, nail polish, chewing gum, toothpaste, mattresses, dental floss, wet wipes, and tampons commonly contain plastic too. The Boeing 787 Dreamliner is half plastic composites. Even things that seem like they have nothing to do with plastic are plastic. Aluminum soda cans are lined with an epoxy resin, meaning my predominant source of liquid (room-temperature Diet Coke; not ashamed) essentially comes in a plastic bathtub.

This past spring, I decided to see how long I could go without using plastic. I woke up on linen and cotton sheets and glowered at my iced coffee, chilling in its off-limits plastic bottle in the refrigerator. Head aching, I went to get ready for the day. I couldn’t turn on the light in my closet or my bathroom. Nor could I brush my teeth, or put on deodorant, moisturizer, sunscreen. The only outfit I could conjure up was a capacious linen shift and a saggy cotton-wool cardigan. No underwear or socks, because they have some stretch to them, and if something stretches, it’s thanks to plastic. I could not traipse into my office looking like Gollum’s great-aunt, nor could I commute without shoes on. Eighteen minutes after waking up, I surrendered.

Plastic is not just everywhere in our homes, but everywhere, period. The world produces so much plastic (more than 400 million metric tons a year, according to one estimate—roughly the combined weight of every human alive) that degraded nubbins coat the planet, detectable in the sedimentary depths of the Mariana Trench and the icy heights of Mount Everest.

The human body itself is part plastic: We are humans made of a human-made material. Scientists have found plastic in brains, eyeballs, and pretty much every other organ. We cry plastic tears, leak plastic breast milk, and ejaculate plastic semen. Fetuses contain plastic. Plastic is so ubiquitous that researchers, wanting to examine the effect of plastics on the human body, are struggling to find all-natural individuals to use as controls in studies.

Concerns over plastic exposure have exploded in recent years, with podcast bros, MAHA types, and crunchy moms joining environmentalists (and a number of physicians and scientists) in attempting to ditch the substance. Businesses have started offering direct-to-consumer blood tests for microplastics and related contaminants. (Until I started writing this story, the distinctions were lost on me: We are exposed to bits of plastic, known as nanoplastics or microplastics, and plastic-related chemicals, which can leach out of plastics. The latter can include PFAS, “forever chemicals” with particularly worrisome health implications.)

[Read: The cost of avoiding microplastics]

Curious to know how plastic I am, I coughed up $357 (and some plastic particles, probably) and visited a Quest Diagnostics. “I’ve never seen anyone get this test before,” the phlebotomist whispered, before puncturing my vein.

The results came back a week later: I had 2.06 nanograms of PFAS in every milliliter of my blood, an “intermediate” quantity implying a “potential risk of adverse health effects.” Specifically, the test found perfluorononanoic acid, perfluorohexanesulfonic acid, n-perfluorooctanoic acid, n-perfluorooctane sulfonic acid, and perfluoromethylheptane sulfonic acid isomers swimming around in my blood.

Knowing what I already knew, I would have been shocked if the test had come back negative. But I still felt concerned. Quest provided me with a phone number to set up a consultation with a physician to discuss my results. I called, hoping someone could tell me what, if anything, I should do with this information.

The numbers were “very good news,” the physician told me at first, saying that my report indicated the chemicals were “not detected.” But some substances were detected, I pointed out. What did that mean?

“I see why you’re confused; your level is higher,” she told me. “You have to address this to the lab.” After a few minutes of poring over the numbers, she added, “This is very confusing, even for me.”

We went back and forth on safe levels and detectable quantities before I asked her what it meant to test positive for these substances in general. “There’s not much for us to do but to alert you,” she said. “Everything is made from chemicals, and things are made in China and they don’t have high levels of quality control. That’s what the modern world has to offer us.” She told me to watch out for breast cancer.

I was already doing that. I had read studies linking PFAS to developmental delays, liver damage, kidney cancer, and thyroid disease, among other conditions. Phthalates, used to make plastic flexible, are associated with early menopause and miscarriages. Microplastics and nanoplastics are mixed in with the sand on beaches and float in bottles of distilled water at the grocery store. Nascent research ties them to strokes and lung cancer. How many horrid diagnoses did I need to be on the lookout for?

I could be as vigilant as I wanted to be, but the Quest test was essentially meaningless. It gave me a point-in-time estimate of a handful of kinds of PFAS in my bloodstream. But it provided no sense of my lifetime exposure, nor could it help diagnose a current illness or predict my likelihood of disease going forward.

Kjersti Aagaard is a physician specializing in maternal-fetal medicine whose research demonstrates where the science is today. She recently co-authored a paper showing that the placentas of preterm infants contain more tiny plastic particles than those of full-term infants. Microplastic accumulation might alter blood-vessel development in the womb, increasing the risk of preterm birth, she told me. But she and her colleagues had “no data” demonstrating how microplastics caused early deliveries, if they were causing them at all.

Still, scientists know more than enough to be concerned. Research indicates that plastic chemicals can bind to hormone receptors, kill cells, and damage DNA. Studies show that the degree of exposure to plastics corresponds to the incidence of disease. We don’t know yet “if this is ‘Silent Spring 2.0,’ ” Aagaard wrote in an email. We may not know for a long time. But that doesn’t mean we shouldn’t work to reduce the risks now.

That was my next project, and I conscripted Tracey Woodruff, the director of UC San Francisco’s Program on Reproductive Health and the Environment. Yes, she said, there were straightforward, scientifically informed ways for people to protect themselves. Plastic and plastic-related chemicals have to get into your body to hurt you. You have to consume them, breathe them in, or absorb them through your skin. Cut off the supply lines and hamper the enemy.

She told me she sympathized with the urge to buy your way out of harm, but noted that wealthy people have more PFAS in their bodies than lower-income people, perhaps because they buy so much more stuff. Some fixes involve spending money, but many don’t; people should just do what they can, she said. In the kitchen, opt for glass and stainless-steel containers, and throw away degraded plastic tools. Avoid doing anything to heat or agitate plastic, so quit putting plastic containers in the microwave and kiddie cups in the dishwasher. Food and beverages themselves carry plastic particles, so avoid processed foods. “Eat less takeout and fast food, eat less packaged food, and eat more food prepared in your home; that can reduce your exposure,” she told me.

Elsewhere in the home, you can replace polyester rugs, vinyl fabrics, and microfiber towels with alternatives made from linen, cotton, leather, or wool. You can rip up your carpet and opt for bare wood floors. Hang plastic-derived garments to dry after washing them on a gentle cold cycle. “Ugh, we were the original fleece family,” Woodruff told me. “It’s so great, lightweight, and warm. But it’s recycled plastic,” so now she’s trying to buy wool and denim coats from thrift stores instead.

Then, keep the battleground clean. Wash your hands. Take off your shoes in the house. Use a HEPA filter. The dust bunnies under your bed and the film on your stove vent contain contaminants, so scrub away grease and mop, dust, and vacuum. “I don’t want people to think, Oh, I should go out and buy industrial-strength cleaning products,” Woodruff said. “Those contain toxic chemicals. You can clean everything with water and vinegar and baking soda.”

I began to put her recommendations to use. I bought a metal filter to make my own iced coffee. (Good luck finding an automatic coffee maker without plastic in it.) I started hang-drying a lot of the household’s laundry and decided to try to buy natural-fiber clothing going forward.

Another point Woodruff made stuck in my head. “People say the dose makes the poison, and that’s fine if you are a healthy adult,” she said. “But there’s a range of how susceptible people are.” People who are pregnant, people with preexisting health conditions, people who work in industrial environments, people who live in polluted neighborhoods, and children are most vulnerable to the “insult” of plastic chemicals.

I turned my attention to my kids. Sheets and blankets are important because you breathe so close to the fibers for so many hours. I replaced my younger son’s with natural alternatives. Then I contemplated what to do about my older son, who is obsessed with dragons. A few years ago, I bought him a plastic-fiber duvet cover with dragons on it. I get sweaty looking at it. I needed to get rid of it.

“Why don’t I get you a nicer comforter with dragons on it?” I said one evening, trying to be nonchalant. He looked at me like I had threatened to send him to an orphanage.

“No,” he said. The dragons were crucial for the household’s safety.

“What if I put dragons above your bed, or around your bed?” No. “What if I got dragon toys?” No. We had fought to a draw. I waited a few weeks, bought a soft cotton duvet cover, and threw out the dragon one without telling him, changing the HEPA filter while I was at it.

The HEPA filter itself was plastic, I noted while standing in my kids’ room, awaiting the tantrum that, thankfully, never materialized. My boys’ chewed-up stuffies were plastic. Their closet was filled with plastic clothes, their shelves stuffed with plastic-coated books, their backpacks and lunch boxes formed from plastic. That night, I dreamed about plastic. I was back in the hospital where I had given birth for the first time, sitting in a plastic wheelchair in the NICU, eating ice chips out of a plastic jug and absorbing plastic stitches into my skin. I took my older son, tiny enough to slip into a pint glass, out of a plastic box where he was being fed by a plastic tube and oxygenated by a plastic cannula.

My anxiety about myself was really about my children—about them growing up in a world where all the objects around them seem bound to hurt them, where too many corporations fight to pad their profits and hide the evidence, where problems are solved by individual action rather than collective responsibility. Until our government acts to protect us, we are both the home chef using the Teflon pan and the budgie choking on the fumes.

Throwing the pans out seemed, for now, like the least I could do. And the most I could do, too.


This article appears in the August 2025 print edition with the headline “My Personal War on Plastic.”

The annual meeting of the International Society for Autism Research is the closest autism science gets to having an Oscars moment of its own. When 2,200 experts from more than 50 countries meet up in one place—as they did this spring in Seattle—a kind of brainy excitement pervades, not just because of the awards given out (yes, awards are given out) or the chance for up-and-comers to network with top names in autism research, but also because there’s always something to celebrate in the science itself. For two decades, studies presented at INSAR have shaped the world’s understanding of autism. The buzz at the conference comes from the conviction that the work matters and that progress continues, sustained by an optimism that no nonscientist could undo.

With one possible exception: U.S. Secretary of Health and Human Services Robert F. Kennedy Jr.

In April, shortly before the conference, Kennedy announced a major research undertaking. He promised that his agency would determine the cause of autism—or, at least, have “some of the answers”—by September. (He soon extended the timeline into next year.) The effort, he pledged, would employ “the most credible scientists from all over the world.”

Now here those scientists were, all in one place. But none of those we spoke with had received the call to help, nor did they expect to. In speeches and interviews as health secretary, Kennedy has made clear his disdain for mainstream autism research, brushing aside the insights into this tremendously complex condition gained through years of research. Instead, backed by the enormous power of his federal office, Kennedy now appears determined to pursue his own long-held set of theories about autism: first, that we are in the midst of an autism epidemic (which is, in fact, highly debatable); second, that autism is caused by one or more “environmental toxins” (which incorrectly suggests that environmental factors have not been explored); and third, that powerful interests want this information covered up (a conspiracy-esque viewpoint that lacks evidence).

“The way the secretary characterizes autism research,” David Amaral, the research director at the MIND Institute at UC Davis and one of INSAR’s co-founders, told us, “it’s as if nobody’s been doing anything for the last 30 years.” Amaral was one of more than a dozen veteran researchers we met with over the four-day conference; their faces all went dark anytime we asked about the impact of Kennedy’s muscling into their domain. They have been witnessing the health secretary bend the narrative of autism science in America. Their shared assessment: What he’s doing is not good.


The problem begins, in the researchers’ view, with Kennedy’s grasp of the science, which they say he either doesn’t understand or refuses to acknowledge. For instance, Kennedy has complained that too much money has been spent studying genetic causes of autism, describing this avenue as “a dead end.” Between sessions at the conference, the geneticist Joseph Buxbaum sat with us in an empty meeting room and sketched out on a piece of cardboard the numbers and timeline that demonstrate all that’s wrong with this viewpoint. Autism’s genetic underpinnings were first uncovered through studies of twins in the 1970s. Access to the human genome has now revealed that about 80 percent of the odds of being autistic are rooted in heritability. At INSAR this year, one of the most optimistic presentations focused on the progress being made toward genetics-based treatments. “It is shocking,” Buxbaum said of Kennedy’s apparent disregard for experts’ input.   

Compounding the situation are the Trump administration’s blitz of anti-DEI executive orders and DOGE cuts, which are undermining autism research. The Autism Science Foundation has been circulating a questionnaire asking researchers to report funding lost this year. Dozens of responses have been received, so far adding up to more than $80 million worth of halted research and pending grants that now will not come through. Jobs have been lost. Future discoveries have been postponed, possibly for good.

Emily Hilliard, a spokesperson for HHS, told us in an email that Kennedy’s team is “fully committed to leaving no stone unturned in confronting this catastrophic epidemic—employing only gold-standard, evidence-based science.” It’s unclear just whom Kennedy is relying on for scientific expertise; Hilliard did not address a request for more information about the scientists involved in the health secretary’s initiatives. But Kennedy’s dismissal of the actual expert consensus seems driven by a personal goal: to implicate vaccines as the cause of autism. He has now reshuffled the ranks of the CDC’s vaccine-advisory committee to include scientists who lack expertise on vaccines and have shared anti-vaccine views, and he has reportedly appointed the son and frequent collaborator of an anti-vaccine activist—one who long promoted false ties between vaccines and autism—to begin examining federal databases for evidence of such a link.

Kennedy has long been a prominent advocate of this false conviction. A quarter century ago, the now-discredited British researcher Andrew Wakefield claimed to have discovered a temporal association between administration of the MMR (measles, mumps, rubella) vaccine and the onset of autistic symptoms in young children. Thus began a self-perpetuating cycle. The greater the number of parents who decided to refuse the MMR vaccine, the more the news media saw a valid trend story. Only four months after Wakefield published in The Lancet, MMR vaccinations had dropped almost 14 percent in South Wales. The fear soon crossed the Atlantic, and Kennedy himself brought further mainstream attention to the issue in “Deadly Immunity,” a 2005 article for Rolling Stone and Salon. (Both publications later retracted the story.) Books were written about the supposed danger. Documentaries were made. Protests were held.

It’s hard to remember now, but up until that time, most people had never heard of autism. Almost overnight, parents everywhere became scared of the word, and scared of what a doctor’s needle might mean for their child. This fear had obvious downsides—the stigmatization of autistic people as being “damaged,” a drop in vaccine uptake broadly, a loss of faith in science, and a sense that something dangerous had been let loose upon the population and especially children. But something constructive came from all the attention to the issue as well. Parent activists jumped on it to pressure Congress to start funding autism research. The money began flowing in earnest in 2006, with a five-year $945 million allocation, and has since reached a total of roughly $5 billion, funding university labs and research centers around the U.S. The investment paid off: Autism became better understood. The vaccine question was a top priority out of the gate, and epidemiological research found repeatedly, exhaustively, and emphatically that vaccines do not cause autism.

Clearly, however, RFK Jr. is not satisfied.


When Kennedy speaks today about autism, it’s as though the past 20 years never happened. It’s not just about the canard that is being resuscitated. It’s the language he uses to talk about what he thinks being autistic means. At a press conference in April, he set off a firestorm in autism communities when he described children with autism as “kids who will never pay taxes. They’ll never hold a job; they’ll never play baseball; they’ll never write a poem; they’ll never go out on a date. Many of them will never use a toilet unassisted.”

Tragedy framing, sorrow inducing—this echoes the 20th-century take on autism, when people with the diagnosis were too often treated as not fully human. Many were subjected to abuse and isolation (often by being institutionalized). After Kennedy received some blowback for his comments, he clarified that he was talking specifically about children on the severest end of the spectrum. Hilliard, the HHS spokesperson, told us that Kennedy “remains committed to working toward a society where people with autism have access to meaningful opportunities, appropriate supports and the full respect and recognition they deserve.” She said his statements aimed to emphasize “the need for increased research into environmental factors contributing to the rise in autism diagnoses, not to stigmatize individuals with autism or their families.”

Nevertheless, the damage was done. Regardless of his intention, the ways Kennedy speaks about autism seem to miss how, for many, the prevailing narrative has moved on to a more human framing, in which autism is not a disease or a tragedy but a difference meriting acceptance and support. His bleak terminology—autism is “a disease”; it “destroys families”; “we need to put an end to it”—has left a mark. Amy Gravino, who is autistic and specializes in sexuality and relationship coaching, told us she felt shattered by Kennedy’s comments. “For the last 20 years, we as a community have fought against the rhetoric that RFK is now spouting,” she said. “Everything we have tried to do to humanize autistic people has been potentially wiped away in one fell swoop.”

Many parents, too, took offense at RFK’s flattened portrait of their autistic children as a collection of problems and nothing more. The depiction leaves out everything about these children’s worth as people: their capacity for joy, love, and creativity; their inherent dignity. “If the world uses a lens that is only based on deficits and struggle rather than the complexity and nuance that is a part of any human being, including and especially autistic people, that makes true belonging really hard,” Sara Swoboda, a pediatrician in Boise, Idaho, whose daughter has an autism diagnosis, told us over email.

At the INSAR conference, a pediatrician alerted us to concerns spreading among parents about Kennedy’s plans to create a “data platform” for autism. So far, the National Institutes of Health, the agency overseeing the platform, has outlined this project only vaguely, including that it would involve scraping data from all over the digital landscape—from Medicaid claims, private-sector health records, pharmacy chains, insurance billings, and even smartwatches and fitness trackers. It has not gone over well. The pediatrician shared some of the texts she has been getting from contacts around the country reporting that parents of autistic children were calling their health-care providers and pleading with them to scrub references to autism from their kids’ medical records. Other parents waiting for assessments for autism were calling in to cancel.

When we contacted nearly a dozen doctors and advocates about this matter, they confirmed getting similar requests from parents in their practices and communities. “People are freaking out, and I don’t blame them,” Alycia Halladay, the chief science officer for the Autism Science Foundation, told us. “For the government to come in with no transparency and say we have the authority to take this data, that is scary to people.” They’re scared of lost privacy, of seeing their kids stigmatized, of consequences related to insurance and job discrimination. But in a bigger sense, they don’t want their kids marked, and they’re scared of a comeback for those old attitudes about autism.

Data collection in itself need not be a source of panic. It is, after all, the currency of epidemiology. It’s how the vaccine theory was debunked and how the CDC determines prevalence rates. Usually, methods are put in place to ensure anonymity and the ethical handling of the data. Hilliard told us that “all NIH-managed databases follow the highest standards of security and privacy, with the protection of personal health information as a top priority.” Even if there’s nothing to fear about this new database, however, good results in science depend on trust. That trust now appears to be at risk.


Not everyone thinks Kennedy is getting every part of the story wrong. His clumsily calibrated messaging, though offensive to many, was appreciated by some families who feel seen by the secretary for addressing a segment of the autistic population that still gets minimal attention. These are people whose challenges range from moderate to severe and who, as Kennedy acknowledges, will never achieve real independence. The most challenged—assessed at roughly 27 percent of the autistic population in a 2023 study—are people with IQs below 50 or whose ability to use spoken language is minimal to none. Some in this group can also be violent toward themselves, and their inability to understand danger has resulted in their deaths with shocking frequency. They are people who require round-the-clock supervision.

Especially for the families of such individuals, there is a deep frustration that most people no longer associate autism with individuals like their kids. Theirs is not the popular autism story. They get little interest from Hollywood, which prefers narratives about autistic people as fundamentally quirky or brilliant. Science hasn’t shown much interest either, especially social science, whose practitioners find it easier to study people who can hold a conversation and complete a questionnaire. Additionally, parents who risk sharing details of the struggles at home can end up feeling like they’ve violated some sort of taboo, facing social-media pile-ons in which they’re accused of dehumanizing their children by being explicit about autism’s downsides.

Kennedy was explicit about the downsides. In response, Amy Lutz, a medical historian and the vice president of the National Council on Severe Autism, wrote an article titled “RFK Was Right: Severe Autism Can Be Devastating.” Lutz is the mother of an adult autistic man. She argues that Kennedy definitely gets a lot wrong about autism (for example, vaccines), but at least he is “shining a light on the segment of the autism spectrum that has been increasingly marginalized by a focus on the most capable.”

During our last full day at INSAR, we attended a luncheon sponsored by an advocacy organization called Profound Autism Alliance. Seated around three long tables was a group of psychiatrists, psychologists, pediatricians, and educators all involved in serving the marginalized 27 percent. That morning, they had witnessed a milestone in their field: A presentation at the conference had called for formal recognition and definition of profound autism as a new diagnostic category. (How widely this framing will gain acceptance remains to be seen.) Kennedy had no involvement in the presentation, but we asked the group about the health secretary’s role in bringing attention to the profoundly autistic. There was, at best, some grudging acknowledgment that he had done so. But the researchers—whom we agreed not to name so they could speak without fear of professional repercussions—were much more concerned about the harm they feared Kennedy is doing, and will do, by parachuting, uninvited, into their realm. The risk, they said, is not just the harm to science, or potentially to their own work or the careers of those just starting out who may now choose something “not autism” to work on. All of that, they told us, is secondary to the potential harm to autistic people and to those who know and love them.

If faulty science takes charge and finds a cause that isn’t a cause—such as vaccines—it will imply that the easy next step is finding the antidote to the cause. That is a seriously problematic proposition. For one thing, many diagnosed people say they have no interest in becoming unautistic. For another, a one-and-done remedy for the condition’s most debilitating manifestations is, quite simply, a phantom goal. As anyone who has seriously studied autism will tell you, the condition is too complex. As one pediatric neurologist at the profound-autism luncheon put it, “If I had that magic pill, don’t you think I would give it to you?” There is no magic in science, which is why so many researchers think Kennedy’s approach is an exercise in false hope. One thing the neurologist knows after years of treating children herself, she told us, is “there’s nothing worse for a family than to be given false hope.”

The bus smashed into him last month, when he was crossing the street in his wheelchair. By the time he made it to the public hospital in California where I work as a doctor, two quarts of blood had hemorrhaged into one of his thighs, where a tender football-shaped bulge distorted the skin. He remembered his view of the windshield as the bus bore down, then, as he toppled, of the vehicle’s dirty underbelly. He was convinced he’d die.

He didn’t. Trauma surgeons and orthopedists consulted on his case. He got CT scans, X-rays, and a blood transfusion. Social workers visited him, as did a nutritionist—he was underweight. Antibiotics mopped up the pneumonia he’d contracted from inhaling saliva when he’d passed out. He remained hospitalized for more than a week.

This patient, fortunately, had Medicaid, which meant not only that his care was covered but also that he could see a primary-care doctor after discharge. The public hospital where I’m an internist would have treated him comprehensively regardless of his ability to pay. But in many places, uninsured patients might receive only emergency stabilization at the hospital, face bankrupting bills, and, unless they can pay out of pocket, be denied care at outpatient clinics. And because of work requirements that Congress just passed to restrict Medicaid, the number of uninsured people will quickly grow in the coming months and years.

On the face of it, the requirement that Medicaid beneficiaries submit proof of employment shouldn’t worry people like my patient. Over the course of his life, scoliosis has curved his spine so much that his shoulders hover a couple of feet in front of his legs when he stands, and he’s relied on a wheelchair for more than a decade. His medical condition should exempt him.

But he told our team that he lives in shelters, so he lacks a fixed address. He doesn’t have a cellphone. He could access government websites at a public library, except that his request for a power wheelchair, which Medicaid will cover, hasn’t been approved yet, and navigating the city in a standard one exhausts him. Plus, every time he leaves his stuff behind at the shelter to go somewhere, he told me, it’s stolen. At present, he doesn’t even own an official ID card.

As a doctor in a hospital that serves the urban poor, I see patients who already face such a gantlet of obstacles that modest barriers to accessing government programs can effectively screen them out. The White House’s stated aim with the changes is to reduce waste, fraud, and abuse. But according to projections from the nonpartisan Congressional Budget Office, nearly 12 million Americans will lose insurance by 2034 because of the new legislation’s restrictions on Medicaid enrollment and on Affordable Care Act marketplaces. The safety-net institutions that serve many of the country’s poorest residents cannot fill the gap. Some hospitals will undoubtedly face financial disaster and close, especially in rural areas—leaving patients with even less ability to get treatment.

Here’s a representative sample of patients on Medicaid I’ve treated recently: a father bleeding into his brain who speaks a Chinese-minority dialect that required multiple conversations with interpreters to identify. A middle-aged man with type 1 diabetes who suffered a stroke that resulted in such severe memory deficits that he can’t reliably remember to inject insulin. A day laborer with liver inflammation who works long hours in construction, often seven days a week, and who’s paid in cash. A young woman with a fentanyl addiction who was too weak and exhausted from malnourishment to enroll in a drug-rehabilitation program. A patient with a dog bite and a skin infection who has ricocheted between low-wage restaurant jobs.

Some of my patients are employed, as are more than two-thirds of adult Medicaid beneficiaries under age 65 without a disability. Others aren’t—and every one of those patients would meet at least one criterion for exemption from work requirements: medical inability to work, pregnancy, caretaking duties, enrollment in a substance-use treatment program, or at least half-time student status.

But whether because of language barriers, physical or cognitive disability, lack of internet or phone, or job instability, for all of these patients, overcoming additional bureaucratic barriers would be burdensome at best. For many of them, it would be nearly impossible.

There’s little reason to doubt that, with work requirements in place, many patients like mine will be removed from Medicaid even though they should qualify. After Arkansas deployed work requirements for Medicaid in 2018, for instance, more than two-thirds of the roughly 18,000 people who were disenrolled still should have qualified, according to one estimate. What The Atlantic’s Annie Lowrey has called the time tax—“a levy of paperwork, aggravation, and mental effort imposed on citizens in exchange for benefits that putatively exist to help them”—falls disproportionately on those least likely to possess the connections, education, or resources to cut through an endless slog of canned hold music, pages that fail to load, and automated mazes of bureaucracy that dead-end before the caller can connect to a human.

The time tax of proving employment will act as a gatekeeping device, excluding people from Medicaid while foisting the blame onto their shoulders. It is, in effect, engineered to save money with systems onerous enough to disenfranchise people from what they’re entitled to. The bill will slash approximately $1 trillion from Medicaid by 2034, $325 billion of that because of work requirements, according to the latest Congressional Budget Office cost estimates. It will also waste a colossal amount of money creating the mechanisms to deny people care: Though Congress has allocated only $200 million in federal funding for implementing work requirements, the true cost of setting up and administering these systems will likely be many times more, perhaps as much as $4.9 billion (roughly 25 times that allocation), based on one estimate that drew from states that have tried to put in place such requirements.

These cuts will play out differently in each state, and even within states. They’ll gut rural health care in some locales, hurt dense urban neighborhoods in others, and hit the working poor everywhere. Their effects will be modulated by how cumbersome or efficient work-verification systems are, by the availability of insurance-eligibility workers, and by community outreach or lack thereof. But in every state, patients will suffer. That’s the predictable consequence of legislation that saves money by letting Americans get sick.

New York City—where takeout is a food group and ovens are for storing clothes—may soon get into the grocery business. If he wins the general election this November, Zohran Mamdani, the new Democratic nominee for mayor, has said he will build a network of municipally owned, affordable grocery stores, one in each of the city’s five boroughs. According to Mamdani, the city could help pay for the stores’ rent and operating costs by taxing the wealthy, and the stores won’t seek to turn a profit, enabling them to sell food at wholesale cost. In the vision Mamdani laid out in a campaign video, the stores’ mission would be combating “price gouging” by offering lower prices than corporate grocery stores.

If Mamdani is able to pull this off—a huge if, given the economic considerations, as critics are quick to point out—it will be the first time in American history that a city of New York’s size has run its own grocery stores. New Yorkers are in favor of the idea: Two-thirds of them, including 54 percent of Republicans, support public groceries, according to a March poll by the Climate and Community Institute, a progressive think tank. But because nothing exactly like Mamdani’s plan has ever been tried before in a large city, no one can be certain whether it will really be able to sell more affordable food, let alone help address food insecurity and health disparities in the city. What Mamdani has proposed is a $60 million experiment, with New Yorkers as test subjects.

A couple of other large American cities are trying out similar plans, but what little real precedent exists for Mamdani’s plan comes mostly from rural America. A handful of towns have opened municipally owned groceries, mostly because they had no choice: Small towns once relied on mom-and-pop shops, but these are vanishing as dollar stores proliferate and big-box retailers in larger rural cities monopolize the wholesale supply. Without a supermarket, residents have to either drive out of town for food or rely on convenience stores and dollar stores, which don’t stock many healthy options. In 2018, the town of Baldwin, Florida (current population 1,366), lost its only grocery when the local IGA closed. It became a food desert: The next-closest supermarket was 10 miles away—not a simple trip for older adults who don’t drive or for people without a car. The mayor proposed a municipally owned store, which opened the next year. In Kansas, the cities of St. Paul (population 603) and Erie (population 1,019) started their own grocery stores in 2008 and 2021, respectively. St. Paul had not had a supermarket since 1985.

The fates of these stores and their hometowns have varied. Baldwin Market became a lifeline for many residents, particularly during the pandemic. But it struggled to break even and closed in 2024. Now the town largely relies on a handful of convenience stores and a Dollar General as it awaits the rumored opening of a new private grocery. Erie Market similarly struggled to balance its books. Operations were a challenge; the store sometimes stocked expired food, and its refrigerated section lost power after a thunderstorm. Last year, the city leased it to a private owner, who has yet to reopen the store.

By contrast, St. Paul Supermarket has operated as a fully municipally owned grocery since 2013 (before that, it was funded by a community-development group) and shows no signs of closing. Its success has been attributed to community buy-in. Locals were motivated by the desire to preserve their city, fearing that the lack of a grocery store would drive away current residents and scare off potential new ones. “It’s a retention strategy, but it’s also a recruitment strategy,” Rial Carver, the program leader at Kansas State University’s Rural Grocery Initiative, told me.

The primary goal of a municipally owned store is to get food to people who need it. But the city will have to decide which food to stock and, inevitably, will face questions about how those choices influence the diet or health of potential customers. (Imagine the criticism a Mamdani administration might face for subsidizing Cheetos—or, for that matter, organic, gluten-free cheese puffs.) Theoretically, getting people better access to any sort of food can have health benefits, Craig Willingham, the managing director of CUNY’s Urban Food Policy Institute, told me. But so few examples of successful municipal grocery stores exist that there is virtually no research on their health effects.  

Research on the health impact of opening a privately owned grocery in a food desert has had mixed results. An ongoing study of a food-desert neighborhood in Pittsburgh has found that after a supermarket opened, residents consumed fewer calories overall—less added sugar, but also fewer whole grains, fruits, and vegetables. A 2018 study set in a Bronx neighborhood with few grocery stores linked the opening of a new supermarket to residents eating more vegetables and fruit and consuming fewer soft drinks, salty snacks, and pastries, but their spending on unhealthy foods increased along with their purchases of healthy ones.

A new grocery alone won’t change food habits, according to a 2019 study led by Hunt Allcott, an economist at Stanford. “People shop at the new store, but they buy the same kinds of groceries they had been buying before,” Allcott told me. What does help nudge people toward buying healthier foods, he said, is making those foods affordable—while also taxing unhealthy items such as soda.

With so little background information to go on, there’s no telling how Mamdani’s experiment will play out in a big city—or whether it will even get off the ground. New York differs from the sites of other municipal-grocery experiments not only in its size and density but also in its general abundance of grocery stores. Proximity isn’t the major reason people can’t get food, healthy or otherwise, Allcott said—cost is. From 2013 to 2023, the amount of money New Yorkers spent on groceries rose nearly 66 percent—far higher than the national average. The city’s poverty rate—a metric based on the price of a minimal diet—is nearly twice the national average; from 2020 to 2023, one in three New Yorkers used food pantries. In Chelsea, a Manhattan neighborhood that is known for its luxury high-rises and is also home to a large housing project, some residents would rather take the train into New Jersey to buy groceries than shop at the expensive local supermarkets, Willingham said.

Grocery stores are tough business. Profit margins are as slim as 1 to 3 percent, and prices are largely determined by suppliers, who tend to privilege volume. A single grocer (or the small network that Mamdani envisions) won’t get as good a deal as a large chain. And running a store is hard, Carver told me: A manager needs to be nimble and adjust to customer demands, skills that municipal bodies are not exactly known for. In New York, at least, there’s reason to expect that public groceries wouldn’t actually be cheaper.

Mamdani (whose campaign did not respond to a request for comment) has acknowledged that New York’s city government might not be cut out for stocking shelves. If the pilot plan doesn’t work, he said on the podcast Plain English last week, he won’t try to scale it up. Yet he believes that it’s worth trying. “This is a proposal of reasonable policy experimentation,” he said.

National grocery costs are expected to increase 2.2 percent this year, according to the USDA. Price hikes will hit poor Americans even harder if Congress passes President Donald Trump’s megabill, which includes cuts to federal food-assistance programs such as SNAP. Among such threats to food affordability, the mere possibility of change could justify a trial of something new. Other large cities, too, are signing up as guinea pigs: Madison, Wisconsin, is in the process of opening a municipally owned store. Last year, Atlanta addressed food insecurity among public-school students and their families by opening a free grocery store—it functions like a food pantry but is stocked like a supermarket—funded by a public-private partnership. Its impact on health hasn’t yet been studied, but demand is high. “We do slots for appointments, and they’re immediately gone,” Chelsea Montgomery, the adviser to operations of Atlanta Public Schools, told me.

Mamdani’s proposal is hardly the first unorthodox policy experiment New York has considered. The city took a chance on congestion pricing to reduce traffic and fund public transit, on universal pre-K to guarantee access to early childhood education, and on supervised injection sites to curb the overdose crisis. All have achieved their objectives. Perhaps, in a decade, millions of New Yorkers will get their organic, gluten-free cheese puffs on the cheap at a city-owned market. Or perhaps the whole project will go the way of the city’s failed attempt to end poverty by offering cash in exchange for efforts to build healthy habits. The point of experimentation is to find out.

For Robert F. Kennedy Jr., “Make America healthy again” is far more than a nice slogan. His cosmic purpose in life, he has said, is to fix the country’s health woes. “The first thing I’ve done every morning for the past 20 years is to get on my knees and pray to God that he would put me in a position to end the chronic-disease epidemic,” Kennedy told senators during his confirmation hearing in January. As health secretary, he has continued to emphasize his commitment to that goal. But yesterday, Kennedy cheered a move that is all but guaranteed to make America less healthy.

Trump’s “Big Beautiful Bill,” which the Senate passed yesterday, includes provisions that would dramatically change Medicaid, the government program that provides health insurance primarily to low-income people. The bill requires a significant portion of Medicaid enrollees to work in order to access benefits, and creates other barriers for people enrolling in the program. All told, it would kick nearly 12 million people off health insurance by 2034, according to an estimate from the nonpartisan Congressional Budget Office. If the bill passes the House, it will then head to President Donald Trump, who will sign it into law.

Medicaid is an expensive program, and the White House has said that more stringently policing who can access it will root out fraud and save funds for the neediest Americans. (Fraud is indeed a problem with the program.) Kennedy, who oversees Medicaid as the head of Health and Human Services, has echoed that message. Yesterday, he posted on X that the bill will allow the Trump administration “to deliver better care to America’s most vulnerable citizens.” But nearly the entire medical establishment warns that slashing Medicaid and limiting access to medical care would have grave consequences for Americans’ health. The American Hospital Association has said the bill “will result in irreparable harm to our health care system.” The American College of Emergency Physicians has said that it “will have dangerous ripple effects that impact anyone in need of lifesaving emergency care.”

Kennedy has argued several times that the number of people on Medicaid needs to shrink. In a recent congressional hearing, Kennedy argued that Medicaid should be for only “poor children,” “mothers,” and “the disabled,” rather than “able-bodied people who are not looking for jobs.” During his Senate confirmation hearing, Kennedy made clear that his problem with Medicaid is unsatisfactory outcomes. “We are spending $900 billion, and our people are getting sicker every single year,” he said. (An HHS spokesperson didn’t respond to a request for comment.)

But cutting Medicaid would directly undermine Kennedy’s MAHA agenda. Many of the sickest Americans are on Medicaid, and they might not otherwise be able to afford health insurance. The program allows them to access care for the kinds of diseases that RFK Jr. is set on remedying. Nine in 10 adults who are on Medicaid and have chronic conditions report seeing a doctor in the past year, versus just 63 percent of uninsured adults, according to KFF, a nonpartisan organization.

There’s a lot for Kennedy to like in the services Medicaid provides. The health secretary is perhaps the most outspoken advocate for the concept that food is medicine, but Medicaid has long embraced the idea. Every state runs its own Medicaid program, and Massachusetts, for example, provides healthy meals to people with certain diet-related conditions. This pilot program has reduced hospitalizations by nearly a quarter, according to one study. Kennedy has also cited his longtime advocacy for people with disabilities. Several states, including California, pay for aides who can help disabled enrollees with personal-care tasks. Some will even pay for renovations to make homes more accessible, ensuring that people with disabilities can live more independently.

With his framing on Medicaid, Kennedy has reversed the causes and effects of America’s health woes. He is right to point out that Medicaid serves an incredibly sick population. Adults below the poverty line are roughly twice as likely to report only fair or poor health compared with the entire U.S. population, according to the CDC. But Medicaid is part of the solution, not the problem itself.

I like to tell people that the night before I stopped sleeping, I slept. Not only that: I slept well. Years ago, a boyfriend of mine, even-keeled during the day but restless at night, told me how hard it was to toss and turn while I instantly sank into the crude, Neanderthal slumber of the dead. When I found a magazine job that allowed me to keep night-owl hours, my rhythms had the precision of an atomic clock. I fell asleep at 1 a.m. I woke up at 9 a.m. One to nine, one to nine, one to nine, night after night, day after day. As most researchers can tell you, this click track is essential to health outcomes: One needs consistent bedtimes and wake-up times. And I had them, naturally; when I lost my alarm clock, I didn’t bother getting another until I had an early-morning flight to catch.

Then, one night maybe two months before I turned 29, that vaguening sense that normal sleepers have when they’re lying in bed—their thoughts pixelating into surreal images, their mind listing toward unconsciousness—completely deserted me. How bizarre, I thought. I fell asleep at 5 a.m.

This started to happen pretty frequently. I had no clue why. The circumstances of my life, both personally and professionally, were no different from the week, month, or two months before—and my life was good. Yet I’d somehow transformed into an appliance without an off switch.

I saw an acupuncturist. I took Tylenol PM. I sampled a variety of supplements, including melatonin (not really appropriate, I’d later learn, especially in the megawatt doses Americans take—its real value is in resetting your circadian clock, not as a sedative). I ran four miles every day, did breathing exercises, listened to a meditation tape a friend gave me. Useless.

I finally caved and saw my general practitioner, who prescribed Ambien, telling me to feel no shame if I needed it every now and then. But I did feel shame, lots of shame, and I’d always been phobic about drugs, including recreational ones. And now … a sedative? (Two words for you: Judy Garland.) It was only when I started enduring semiregular involuntary all-nighters—which I knew were all-nighters, because I got out of bed and sat upright through them, trying to read or watch TV—that I capitulated. I couldn’t continue to stumble brokenly through the world after nights of virtually no sleep.

I hated Ambien. One of the dangers with this strange drug is that you may do freaky things at 4 a.m. without remembering, like making a stack of peanut-butter sandwiches and eating them. That didn’t happen to me (I don’t think?), but the drug made me squirrelly and tearful. I stopped taking it. My sleep went back to its usual syncopated disaster.

In Sleepless: A Memoir of Insomnia, Marie Darrieussecq lists the thinkers and artists who have pondered the brutality of sleeplessness, and they’re distinguished company: Duras, Gide, Pavese, Sontag, Plath, Dostoyevsky, Murakami, Borges, Kafka. (Especially Kafka, whom she calls literature’s “patron saint” of insomniacs. “Dread of night,” he wrote. “Dread of not-night.”) Not to mention F. Scott Fitzgerald, whose sleeplessness was triggered by a single night of warfare with a mosquito.

But there was sadly no way to interpret my sleeplessness as a nocturnal manifestation of tortured genius or artistic brilliance. It felt as though I’d been poisoned. It was that arbitrary, that abrupt. When my insomnia started, the experience wasn’t just context-free; it was content-free. People would ask what I was thinking while lying wide awake at 4 a.m., and my answer was: nothing. My mind whistled like a conch shell.

But over time I did start thinking—or worrying, I should say, and then perseverating, and then outright panicking. At first, songs would whip through my head, and I couldn’t get the orchestra to pack up and go home. Then I started to fear the evening, going to bed too early in order to give myself extra runway to zonk out. (This, I now know, is a typical amateur’s move and a horrible idea, because the bed transforms from a zone of security into a zone of torment, and anyway, that’s not how the circadian clock works.) Now I would have conscious thoughts when I couldn’t fall asleep, which can basically be summarized as insomnia math: Why am I not falling asleep Dear God let me fall asleep Oh my God I only have four hours left to fall asleep oh my God now I only have three oh my God now two oh my God now just one.

“The insomniac is not so much in dialogue with sleep,” Darrieussecq writes, “as with the apocalypse.”

I would shortly discover that this cycle was textbook insomnia perdition: a fear of sleep loss that itself causes sleep loss that in turn generates an even greater fear of sleep loss that in turn generates even more sleep loss … until the next thing you know, you’re in an insomnia galaxy spiral, with a dark behavioral and psychological (and sometimes neurobiological) life of its own.

I couldn’t recapture my nights. Something that once came so naturally now seemed as impossible as flying. How on earth could this have happened? To this day, whenever I think about it, I still can’t believe it did.

In light of my tortured history with the subject, you can perhaps see why I generally loathe stories about sleep. What they’re usually about is the dangers of sleep loss, not sleep itself, and as a now-inveterate insomniac, I’ve already got a multivolume fright compendium in my head of all the terrible things that can happen when sleep eludes you or you elude it. You will die of a heart attack or a stroke. You will become cognitively compromised and possibly dement. Your weight will climb, your mood will collapse, the ramparts of your immune system will crumble. If you rely on medication for relief, you’re doing your disorder all wrong—you’re getting the wrong kind of sleep, an unnatural sleep, and addiction surely awaits; heaven help you and that horse of Xanax you rode in on.

It should go without saying that for some of us, knowledge is not power. It’s just more kindling.

The cultural discussions around sleep would be a lot easier if the tone weren’t quite so hectoring—or so smug. A case in point: In 2019, the neuroscientist Matthew Walker, the author of Why We Sleep, gave a TED Talk that began with a cheerful disquisition about testicles. They are, apparently, “significantly smaller” in men who sleep five hours a night rather than seven or more, and that two-hour difference means lower testosterone levels too, equivalent to those of someone 10 years their senior. The consequences of short sleep for women’s reproductive systems are similarly dire.

“This,” Walker says just 54 seconds in, “is the best news that I have for you today.”

He makes good on his promise. What follows is the old medley of familiars, with added verses about inflammation, suicide, cancer. Walker’s sole recommendation at the end of his sermon is the catechism that so many insomniacs—or casual media consumers, for that matter—can recite: Sleep in a cool room, keep your bedtimes and wake-up times regular, avoid alcohol and caffeine. Also, don’t nap.

I will now say about Walker:

1. His book is in many ways quite wonderful—erudite and wide-ranging and written with a flaring energy when it isn’t excessively pleased with itself.

2. Both Why We Sleep and Walker’s TED Talk focus on sleep deprivation, not insomnia, with the implicit and sometimes explicit assumption that too many people choose to blow off sleep in favor of work or life’s various seductions.

If public awareness is Walker’s goal (certainly a virtuous one), he and his fellow researchers have done a very good job in recent years, with the enthusiastic assistance of my media colleagues, who clearly find stories about the hazards of sleep deprivation irresistible. (In the wine-dark sea of internet content, they’re click sirens.) Walker’s TED Talk has been viewed nearly 24 million times. “For years, we were fighting against ‘I’ll sleep when I’m dead,’ ” Aric Prather, the director of the behavioral-sleep-medicine research program at UC San Francisco, told me. “Now the messaging that sleep is a fundamental pillar of human health has really sunk in.”

Yet greater awareness of sleep deprivation’s consequences hasn’t translated into a better-rested populace. Data from the CDC show that the proportion of Americans reporting insufficient sleep held constant from 2013 through 2022, at roughly 35 percent. (From 2020 to 2022, as anxiety about the pandemic eased, the percentage actually climbed.)

So here’s the first question I have: In 2025, exactly how much of our “sleep opportunity,” as the experts call it, is under our control?

According to the most recent government data, 16.4 percent of American employees work nonstandard hours. (Their health suffers in every category—the World Health Organization now describes night-shift work as “probably carcinogenic.”) Adolescents live in a perpetual smog of sleep deprivation because they’re forced to rise far too early for school (researchers call their plight “social jet lag”); young mothers and fathers live in a smog of sleep deprivation because they’re forced to rise far too early (or erratically) for their kids; adults caring for aging parents lose sleep too. The chronically ill frequently can’t sleep. Same with some who suffer from mental illness, and many veterans, and many active-duty military members, and menopausal women, and perimenopausal women, and the elderly, the precariat, the poor.

“Sleep opportunity is not evenly distributed across the population,” Prather noted, and he suspects that this contributes to health disparities by class. In 2020, the National Center for Health Statistics found that the poorer Americans were, the greater their likelihood of reporting difficulty falling asleep. If you look at the CDC map of the United States’ most sleep-deprived communities, you’ll see that they loop straight through the Southeast and Appalachia. Black and Hispanic Americans also consistently report sleeping less, especially Black women.

Even for people who aren’t contending with certain immutables, the cadences of modern life have proved inimical to sleep. Widespread electrification laid waste to our circadian rhythms 100 years ago, when they lost any basic correspondence with the sun; now, compounding matters, we’re contending with the currents of a wired world. For white-collar professionals, it’s hard to imagine a job without the woodpecker incursions of email or weekend and late-night work. It’s hard to imagine news consumption, or even ordinary communication, without the overstimulating use of phones and computers. It’s hard to imagine children eschewing social media when it’s how so many of them socialize, often into the night, which means blue-light exposure, which means the suppression of melatonin. (Melatonin suppression obviously applies to adults too—it’s hardly like we’re avatars of discipline when it comes to screen time in bed.)  

Most of us can certainly do more to improve or reclaim our sleep. But behavioral change is difficult, as anyone who’s vowed to lose weight can attest. And when the conversation around sleep shifts the onus to the individual—which, let’s face it, is the American way (we shift the burden of child care to the individual, we shift the burden of health care to the individual)—we sidestep the fact that the public and private sectors alike are barely doing a thing to address what is essentially a national health emergency.

Given that we’ve decided that an adequate night’s rest is a matter of individual will, I now have a second question: How are we to discuss those who are suffering not just from inadequate sleep, but from something far more severe? Are we to lecture them in the same menacing, moralizing way? If the burden of getting enough sleep is on us, should we consider chronic insomniacs—for whom sleep is a nightly gladiatorial struggle—the biggest failures in the armies of the underslept?

Those who can’t sleep suffer a great deal more than those gifted with sleep will ever know. Yet insomniacs frequently feel shame about the solutions they’ve sought for relief—namely, medication—likely because they can detect a subtle, judgmental undertone about this decision, even from their loved ones. Resorting to drugs means they are lazy, refusing to do simple things that might ease their passage into unconsciousness. It means they are neurotic, requiring pills to transport them into a natural state that every other animal on Earth finds without aid.

Might I suggest that these views are unenlightened? “In some respects, chronic insomnia is similar to where depression was in the past. We’d say, ‘Major depression’ and people would say, ‘Everybody gets down now and then,’ ” John Winkelman, a psychiatrist in the sleep-medicine division at Harvard Medical School, said at a panel I attended last summer. Darrieussecq, the author of Sleepless, puts it more bluntly: “ ‘I didn’t sleep all night,’ sleepers say to insomniacs, who feel like replying that they haven’t slept all their life.”

The fact is, at least 12 percent of the U.S. population suffers from insomnia as an obdurate condition. Among Millennials, the number pops up to 15 percent. And 30 to 35 percent of Americans suffer from some of insomnia’s various symptoms—trouble falling asleep, trouble staying asleep, or waking too early—at least temporarily. In 2024, there were more than 2,500 sleep-disorder centers in the U.S. accredited by the American Academy of Sleep Medicine. Prather told me the wait time to get into his sleep clinic at UCSF is currently a year. “That’s better than it used to be,” he added. “Until a few months ago, our waitlist was closed. We couldn’t fathom giving someone a date.”

So what I’m hoping to do here is not write yet another reproachful story about sleep, plump with misunderstandings and myths. Fixing sleep—obtaining sleep—is a tricky business. The work it involves and painful choices it entails deserve nuanced examination. Contrary to what you might have read, our dreams are seldom in black and white.

Whenever I interviewed a clinician, psychiatrist, neuroscientist, or any other kind of expert for this story, I almost always opened with the same question: What dogma about sleep do you think most deserves to be questioned?

The most frequent answer, by a long chalk, is that we need eight hours of it. A fair number of studies, it turns out, show that mortality rates are lowest if a person gets roughly seven hours. Daniel F. Kripke, a psychiatrist at UC San Diego, published the most famous of these analyses in 2002, parsing a sample of 1.1 million individuals and concluding that those who reported more than eight hours of sleep a night experienced significantly increased mortality rates. According to Kripke’s work, the optimal sleep range was a mere 6.5 to 7.4 hours.

These numbers shouldn’t be taken as gospel. The relationship between sleep duration and health outcomes is a devil’s knot, though Kripke did his best to control for the usual confounds—age, sex, body-mass index. But he could not control for the factors he did not know. Perhaps many of the individuals who slept eight hours or more were doing so because they had an undetected illness, or an illness of greater severity than they’d realized, or other conditions Kripke hadn’t accounted for. The study was also observational, not randomized.

But even if they don’t buy Kripke’s data, sleep experts don’t necessarily believe that eight hours of sleep has some kind of mystical significance. Methodologically speaking, it’s hard to determine how much sleep, on average, best suits us, and let’s not forget the obvious: Sleep needs—and abilities—vary over the course of a lifetime, and from individual to individual. (There’s even an extremely rare species of people, known as “natural short sleepers,” associated with a handful of genes, who require only four to six hours a night. They tear through the world as if fired from a cannon.) Yet eight hours of sleep or else remains one of our culture’s most stubborn shibboleths, and an utter tyranny for many adults, particularly older ones.

“We have people coming into our insomnia clinic saying ‘I’m not sleeping eight hours’ when they’re 70 years of age,” Michael R. Irwin, a psychoneuroimmunologist at UCLA, told me. “And the average sleep in that population is less than seven hours. They attribute all kinds of things to an absence of sleep—decrements in cognitive performance and vitality, higher levels of fatigue—when often that’s not the case. I mean, people get older, and the drive to sleep decreases as people age.”

Another declaration I was delighted to hear: The tips one commonly reads to get better sleep are as insipid as they sound. “Making sure that your bedroom is cool and comfortable, your bed is soft, you have a new mattress and a nice pillow—it’s unusual that those things are really the culprit,” Eric Nofzinger, the former director of the sleep neuroimaging program at the University of Pittsburgh’s medical school, told me. “Most people self-regulate anyway. If they’re cold, they put on an extra blanket. If they’re too warm, they throw off the blanket.”

“Truthfully, there’s not a lot of data supporting those tips,” Suzanne Bertisch, a behavioral-sleep-medicine expert at Brigham and Women’s Hospital, in Boston, told me. That includes the proscription on naps, she added, quite commonly issued in her world. (In general, the research on naps suggests that short ones have beneficial outcomes and long ones have negative outcomes, but as always, cause and effect are difficult to disentangle: An underlying health condition could be driving those long naps.)

Even when they weren’t deliberately debunking the conventional wisdom about sleep, many of the scholars I spoke with mentioned—sometimes practically as an aside—facts that surprised or calmed. For instance: Many of us night owls have heard that the weather forecast for our old age is … well, cloudy, to be honest, with a late-afternoon chance of keeling over. According to one large analysis, we have a 10 percent increase in all-cause mortality over morning larks. But Jeanne Duffy, a neuroscientist distinguished for her expertise in human circadian rhythms at Brigham and Women’s, told me she suspected that this was mainly because most night owls, like most people, are obliged to rise early for their job.

So wait, I said. Was she implying that if night owls could contrive work-arounds to suit their biological inclination to go to bed late, the news probably wouldn’t be as grim?

“Yes,” she replied.

A subsequent study showed that the owl-lark mortality differential dwindled to nil when the authors controlled for lifestyle. Apparently owls are more apt to smoke, and to drink more. So if you’re an owl who’s repelled by Marlboros and Jameson, you’re fine.

Kelly Glazer Baron, the director of the behavioral-sleep-medicine program at the University of Utah, told me that she’d love it if patients stopped agonizing over the length of their individual sleep phases. I didn’t get enough deep sleep, they fret, thrusting their Apple Watch at her. I didn’t get enough REM. And yes, she said, insufficiencies in REM or slow-wave sleep can be a problem, especially if they reflect an underlying health issue. But clinics don’t look solely at sleep architecture when evaluating their patients.

“I often will show them my own data,” Baron said. “It always shows I don’t have that much deep sleep, which I find so weird, because I’m a healthy middle-aged woman.” In 2017, after observing these anxieties for years, Baron coined a term for sleep neuroticism brought about by wearables: orthosomnia.

But most surprising—to me, anyway—was what I heard about insomnia and the black dog. “There are far more studies indicating that insomnia causes depression than depression causes insomnia,” said Wilfred Pigeon, the director of the Sleep & Neurophysiology Research Laboratory at the University of Rochester. Which is not to say, he added, that depression can’t or doesn’t cause insomnia. These forces, in the parlance of health professionals, tend to be “bidirectional.”

But I can’t tell you how vindicating I found the idea that perhaps my own insomnia came first. A couple of years into my struggles with sleeplessness, a brilliant psychopharmacologist told me that my new condition had to be an episode of depression in disguise. And part of me thought, Sure, why not? A soundtrack of melancholy had been playing at a low hum inside my head from the time I was 10.

The thing was: I became outrageously depressed only after my insomnia began. That’s when that low hum started to blare at a higher volume. Until I stopped sleeping, I never suffered from any sadness so crippling that it prevented me from experiencing joy. It never impeded my ability to socialize or travel. It never once made me contemplate antidepressants. And it most certainly never got in the way of my sleeping. The precipitating factor in my own brutal insomnia was, and remains, an infuriating mystery.

Sleep professionals, I have learned, drink a lot of coffee. That was the first thing I noticed when I attended SLEEP 2024, the annual conference of the American Academy of Sleep Medicine, in Houston: coffee, oceans of it, spilling from silver urns, especially at the industry trade show. Wandering through it was a dizzying experience, a sprawling testament to the scale and skyscraping profit margins of Big Sleep. More than 150 exhibitors showed up. Sheep swag abounded. Drug reps were everywhere, their aggression tautly disguised behind android smiles, the meds they hawked bearing the usual names that look like high-value Scrabble words.

I’ve never understood this branding strategy, honestly. If you want your customers to believe they’re falling into a gentle, natural sleep, you should probably think twice before calling your drug Quviviq.

I walked through the cavernous hall in a daze. It was overwhelming, really—the spidery gizmos affixed to armies of mannequins, the Times Square–style digital billboards screaming about the latest in sleep technology.

At some point it occurred to me that the noisy, overbusy, fluorescent quality of this product spectacular reminded me of the last place on Earth a person with a sleep disorder should be: a casino. The room was practically sunless. I saw very few clocks. After I spent an afternoon there, my circadian rhythms were shot to hell.

But the conference itself …! Extraordinary, covering miles of ground. I went to one symposium about “sleep deserts,” another about the genetics of sleep disturbance, and yet another about sleep and menopause. I walked into a colloquy about sleep and screens and had to take a seat on the floor because the room was bursting like a suitcase. Of most interest to me, though, were two panels, which I’ll shortly discuss: one about how to treat patients with anxiety from new-onset insomnia, and one on whether hypnotics are addictive.

My final stop at the trade fair was the alley of beauty products—relevant, I presume, because they address the aesthetic toll of sleep deprivation. Within five minutes, an energetic young salesman made a beeline for me, clearly having noticed that I was a woman of a certain age. He gushed about a $2,500 infrared laser to goose collagen production and a $199 medical-grade peptide serum that ordinarily retails for $1,100. I told him I’d try the serum. “Cheaper than Botox, and it does the same thing,” he said approvingly, applying it to the crow’s-feet around my eyes.

I stared in the mirror. Holy shit. The stuff was amazing.

“I’ll take it,” I told him.

He was delighted. He handed me a box. The serum came in a gold syringe.

“You’re a doctor, right?”

A beat.

“No,” I finally said. “A journalist. Can only a dermatologist—”

He told me it was fine; it’s just that doctors were his main customers. This was the sort of product women like me usually had to get from them. I walked away elated but queasy, feeling like a creep who’d evaded a background check by purchasing a Glock at a gun show.

The first line of treatment for chronic, intractable sleeplessness, per the American Academy of Sleep Medicine, is cognitive behavioral therapy for insomnia, or CBT-I. I’ve tried it, in earnest, at two different points in my life. It generally involves six to eight sessions and includes, at minimum: identifying the patient’s sleep-wake patterns (through charts, diaries, wearables); “stimulus control” (setting consistent bedtimes and wake-up times, resisting the urge to stare at the clock, delinking the bed from anything other than sleep and sex); establishing good sleep habits (the stuff of every listicle); “sleep restriction” (compressing your sleep schedule, then slowly expanding it over time); and “cognitive restructuring,” or changing unhealthy thoughts about sleep.

The cognitive-restructuring component is the most psychologically paradoxical. It means taking every terrifying thing you’ve ever learned about the consequences of sleeplessness and pretending you’ve never heard them.

I pointed this out to Wilfred Pigeon. “For the medically anxious, it’s tough,” he agreed. “We’re trying to tell patients two things at the same time: ‘You really need to get your sleep on track, or you will have a heart attack five years earlier than you otherwise would.’ But also: ‘Stop worrying about your sleep so much, because it’s contributing to your not being able to sleep.’ And they’re both true!”

Okay, I said. But if an insomniac crawls into your clinic after many years of not sleeping (he says people tend to wait about a decade), wouldn’t they immediately see that these two messages live in tension with each other? And dwell only on the heart attack?

“I tell the patient their past insomnia is water under the bridge,” Pigeon said. “We’re trying to erase the added risks that ongoing chronic insomnia will have. Just because a person has smoked for 20 years doesn’t mean they should keep smoking.”

He’s absolutely right. But I’m not entirely convinced that these incentives make the cognitive dissonance of CBT-I go away. When Sara Nowakowski, a CBT-I specialist at Baylor College of Medicine, gave her presentation at SLEEP 2024’s panel on anxiety and new-onset insomnia, she said that many of her patients start reciting the grim data from their Fitbits and talking about dementia.

That’s likely because they’ve read the studies. Rapid-eye-movement (REM) sleep, that vivid-dream stage when our eyes race beneath our eyelids like mice under a blanket, is essential to emotional regulation and problem-solving. Slow-wave sleep, our deepest sleep, is essential for repairing our cells, shoring up our immune systems, and rinsing toxins from our brains, thanks to a watery complex of micro-canals called the glymphatic system. We repair our muscles when we sleep. We restore our hearts. We consolidate memories and process knowledge, embedding important facts and disposing of trivial ones. We actually learn when we’re asleep.

Many insomniacs know all too well how nonnegotiably vital sleep is, and what the disastrous consequences are if you don’t get it. I think of the daredevil experiment that Nathaniel Kleitman, the father of sleep research, informally conducted as a graduate student in 1922, enlisting five classmates to join him in seeing how long they could stay awake. He lasted the longest—a staggering 115 hours—but at a terrible price, temporarily going mad with exhaustion, arguing on the fifth day with an imaginary foe about the need for organized labor. And I think of Allan Rechtschaffen, another pioneer in the field, who in 1989 had the fiendish idea to place rats on a spinning mechanism that forced them to stay awake if they didn’t want to drown. They eventually dropped dead.

So these are the kinds of facts a person doing CBT-I has to ignore.

Still. Whether a patient’s terrors concern the present or the future, it is the job of any good CBT-I practitioner to help fact-check or right-size them through Socratic questioning. During her panel at SLEEP 2024, Nowakowski gave very relatable examples:

When you’re struggling to fall asleep, what are you most worried will happen?

I’ll lose my job/scream at my kids/detonate my relationship/never be able to sleep again.

And what’s the probability of your not falling asleep?

I don’t sleep most nights.

And the probability of not functioning at work or yelling at the kids if you don’t?

Ninety percent.

She then tells her patients to go read their own sleep diary, which she’s instructed them to keep from the start. The numbers seldom confirm they’re right, because humans are monsters of misprediction. Her job is to get her patients to start decatastrophizing, which includes what she calls the “So what?” method: So what if you have a bad day at work or at home? You’ve had others. Will it be the end of the world? (When my second CBT-I therapist asked me this, I silently thought, Yes, because when I’m dangling at the end of my rope, I just spin more.) CBT-I addresses anxiety about not sleeping, which tends to be the real force that keeps insomnia airborne, regardless of what lofted it. The pre-sleep freaking out, the compulsive clock-watching, the bargaining, the middle-of-the-night doom-prophesying, the despairing—CBT-I attempts to snip that loop. The patient actively learns new behaviors and attitudes to put an end to their misery.

But the main anchor of CBT-I is sleep-restriction therapy. I tried it back when I was 29, when I dragged my wasted self into a sleep clinic in New York; I’ve tried it once since. I couldn’t stick with it either time.

The concept is simple: You severely limit your time in bed, paring away every fretful, superfluous minute you’d otherwise be awake. If you discover from a week’s worth of sleep-diary entries (or your wearable) that you spend eight hours buried in your duvet but sleep for only five of them, you consolidate those splintered hours into one bloc of five, setting the same wake-up time every day and going to bed a mere five hours before. Once you’ve averaged sleeping those five hours for a few days straight, you reward your body by going to bed 15 minutes earlier. If you achieve success for a few days more, you add another 15 minutes. And then another … until you’re up to whatever the magic number is for you.

No napping. The idea is to build up enough “sleep pressure” to force your body to collapse in surrender.
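
For the spreadsheet-minded, the bookkeeping of that regimen can be sketched in a few lines of code. This is a minimal illustration of the arithmetic described above, not a clinical protocol: the five-hour starting window and the 15-minute rewards come from the example, while the three-night streak and the 90 percent fill-rate threshold are assumptions invented for the sketch, numbers a real clinician would set per patient.

```python
# A minimal sketch of sleep-restriction bookkeeping -- illustrative only, not a
# clinical protocol. The streak length (3 nights) and the 90% fill-rate
# threshold are assumptions for this example; a clinician would tune them.

def next_window(window_min: int, diary_min: list[int],
                streak: int = 3, fill_rate: float = 0.9,
                increment_min: int = 15) -> int:
    """Return the next night's time-in-bed window, in minutes."""
    if len(diary_min) < streak:
        return window_min
    recent = diary_min[-streak:]
    # Reward a run of nights that mostly fill the window with 15 more minutes.
    if all(slept >= fill_rate * window_min for slept in recent):
        return window_min + increment_min
    return window_min

window = 5 * 60        # start by consolidating five fragmented hours into one bloc
diary: list[int] = []  # minutes actually slept each night, per the sleep diary
for slept in (275, 280, 278, 281, 290, 288):
    diary.append(slept)
    window = next_window(window, diary)
    print(f"slept {slept} min -> next window: {window // 60}h {window % 60:02d}m")
```

In the sample run, the window expands once, to five hours and 15 minutes, then stalls as soon as the nights stop filling it; multiply that plateau across weeks and you can see why the climb back to a full night takes time.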

Sleep restriction can be a wonderful method. But if you have severe insomnia, the idea of reducing your sleep time is petrifying. Technically, I suppose, you’re not really reducing your sleep time; you’re just consolidating it. But practically speaking, you are reducing your sleep, at least in the beginning, because dysregulated sleep isn’t an accordion, obligingly contracting itself into a case. Contracting it takes time, or at least it did for me. The process was murder.

“If you get people to really work their way through it—and sometimes that takes holding people’s hands—it ends up being more effective than a pill,” Ronald Kessler, a renowned psychiatric epidemiologist at Harvard, told me when I asked him about CBT-I. The problem is the formidable size of that if. “CBT-I takes a lot more work than taking a pill. So a lot of people drop out.”

They do. One study I perused had an attrition rate of 40 percent.

Twenty-six years ago, I, too, joined the legions of the quitters. In hindsight, my error was my insistence on trying this grueling regimen without a benzodiazepine (Valium, Ativan, Xanax), though my doctor had recommended that I start one. But I was still afraid of drugs in those days, and I was still in denial that I’d become hostage to my own brain’s terrorism. I was sure that I still had the power to negotiate. Competence had until that moment defined my whole life. I persuaded the doctor to let me try without drugs.

As she’d predicted, I failed. The graphs in my sleep diary looked like volatile weeks on the stock exchange.

For the first time ever, I did need an antidepressant. The doctor wrote me a prescription for Paxil and a bottle of Xanax to use until I got up to cruising altitude—all SSRIs take a while to kick in.

I didn’t try sleep restriction again until many years later. Paxil sufficed during that time; it made me almost stupid with drowsiness. I was sleepy at night and vague during the day. I needed Xanax for only a couple of weeks, which was just as well, because I didn’t much care for it. The doctor had prescribed too powerful a dose, though it was the smallest one. I was such a rookie with drugs in those days that it never occurred to me I could just snap the pill in half.

Have I oversimplified the story of my insomnia? Probably. At the top of the SLEEP 2024 panel about anxiety and new-onset insomnia, Leisha Cuddihy, a director at large for the Society of Behavioral Sleep Medicine, said something that made me wince—namely, that her patients “have a very vivid perception of pre-insomnia sleep being literally perfect: ‘I’ve never had a bad night of sleep before now.’ ”

Okay, guilty as charged. While it’s true that I’d slept brilliantly (and I stand by this, brilliantly) in the 16 years before I first sought help, I was the last kid to fall asleep at slumber parties when I was little. Cuddihy also said that many of her patients declare they’re certain, implacably certain, that they are unfixable. “They feel like something broke,” she said.

Which is what I wrote just a few pages back. Poisoned, broke, same thing.

By the time Cuddihy finished speaking, I had to face an uncomfortable truth: I was a standard-issue sleep-clinic zombie.

But when patients say they feel like something broke inside their head, they aren’t necessarily wrong. An insomniac’s brain does change in neurobiological ways.

“There is something in the neurons that’s changing during sleep in patients with significant sleep disruptions,” said Eric Nofzinger, who, while at the University of Pittsburgh, had one of the world’s largest databases of brain-imaging studies of sleeping human beings. “If you’re laying down a memory, then that circuitry is hardwired for that memory. So one can imagine that if your brain is doing this night after night …”

We know that the hypothalamic-pituitary-adrenal axis, our body’s first responder to stress, is overactivated in the chronically underslept. If the insomniac suffers from depression, their REM phase tends to be longer and more “dense,” with the limbic system (the amygdala, the hippocampus—where our primal drives are housed) going wild, roaring its terrible roars and gnashing its terrible teeth. (You can imagine how this would also make depressives subconsciously less motivated to sleep—who wants to face their Gorgon dreams?) Insomniacs suffering from anxiety experience this problem too, though to a lesser degree; it’s their deep sleep that’s mainly affected, slimming down and shallowing out.

And in all insomniacs, throughout the night, the arousal centers of the brain keep clattering away, as does the prefrontal cortex (in charge of planning, decision making), whereas in regular sleepers, these buzzing regions go offline. “So when someone with insomnia wakes up the next morning and says, ‘I don’t think I slept at all last night,’ in some respects, that’s true,” Nofzinger told me. “Because the parts of the brain that should have been resting did not.”

And why didn’t they rest? The insomniac can’t say. The insomniac feels at once responsible and helpless when it comes to their misery: I must be to blame. But I can’t be to blame. The feeling that sleeplessness is happening to you, not something you’re doing to yourself, sends you on a quest for nonpsychological explanations: Lots of physiological conditions can cause sleep disturbances, can’t they? Obstructive sleep apnea, for instance, which afflicts nearly 30 million Americans. Many autoimmune diseases, too. At one point, I’ll confess, I started asking the researchers I spoke with whether insomnia itself could be an autoimmune disorder, because that’s what it feels like to me—as if my brain is going after itself with brickbats.

“Narcolepsy appears to be an example of a sleep disorder involving the immune system,” Andrew Krystal, a psychiatrist specializing in sleep disorders at UCSF, told me.

What? I said. Really?

Really, he replied. “There are few things I know of,” he said, “that are as complicated as the mammalian immune system.”

But insomnia-as-autoimmune-disorder is only a wisp of a theory, a wish of a theory, nothing more. In her memoir, The Shapeless Unease: A Year of Not Sleeping, the novelist Samantha Harvey casts around for a physiological explanation, too. But after she completes a battery of tests, the results come back normal, pointing to “what I already know,” she writes, “which is that my sleeplessness is psychological. I must carry on being the archaeologist of myself, digging around, seeing if I can excavate the problem and with it the solution—when in truth I am afraid of myself, not of what I might uncover, but of managing to uncover nothing.”

I didn’t tolerate my Paxil brain for long. I weaned myself off, returned to normal for a few months, and assumed that my sleeplessness had been a freak event, like one of those earthquakes in a city that never has them. But then my sleep started to slip away again, and by age 31, I couldn’t recapture it without chemical assistance. Prozac worked for years on its own, but it blew out whatever circuit in my brain generates metaphors. When I turned to the antidepressants that kept the electricity flowing, I needed sleep medication too—proving, to my mind, that melancholy wasn’t the mother of my sleep troubles but their lasting result. I’ve used the lowest dose of Klonopin to complement my SSRIs for years. In times of acute stress, I need a gabapentin or a Unisom too.

Unisom is fine. Gabapentin also turns my mind into an empty prairie.

Edibles, which I’ve also tried, turn my brain to porridge the next day. Some evidence suggests that cannabis works as a sleep aid, but more research, evidently, is required. (Sorry.)

Which brings me to the subject of drugs. I come neither to praise nor to bury them. But I do come to reframe the discussion around them, inspired by what a number of researcher-clinicians said about hypnotics and addiction during the SLEEP 2024 panel on the subject. They started with a simple question: How do you define addiction?

It’s true that many of the people who have taken sleep medications for months or years rely on them. Without them, the majority wouldn’t sleep, at least in the beginning, and a good many would experience rebound insomnia if they didn’t wean properly, which can be even worse. One could argue that this dependence is tantamount to addiction.

But: We don’t say people are addicted to their hypertension medication or statins, though we know that in certain instances lifestyle changes could obviate the need for either one. We don’t say people are addicted to their miracle GLP-1 agonists just because they could theoretically diet and exercise to lose weight. We agree that they need them. They’re on Lasix. On Lipitor. On Ozempic. Not addicted to.

Yet we still think of sleep medications as “drugs,” a word that in this case carries a whiff of stigma—partly because mental illness still carries a stigma, but also because sleep medications legitimately do have the potential for recreational use and abuse.

But is that what most people who suffer from sleep troubles are doing? Using their Sonata or Ativan for fun?

“If you see a patient who’s been taking medication for a long time,” Tom Roth, the founder of the Sleep Disorders and Research Center at Henry Ford Hospital, said during the panel, “you have to think, ‘Are they drug-seeking or therapy-seeking?’” The overwhelming majority, he and other panelists noted, are taking their prescription drugs for relief, not kicks. They may depend on them, but they’re not abusing them—by taking them during the day, say, or for purposes other than sleep.

Still, let’s posit that many long-term users of sleep medication do become dependent. Now let’s consider another phenomenon commonly associated with reliance on sleep meds: You enter Garland and Hendrix territory in a hurry. First you need one pill, then you need two; eventually you need a fistful with a fifth of gin.

Yet a 2024 cohort study that followed nearly 1 million Danish benzodiazepine users found that among those who took the drugs for three years or more—67,398 people, to be exact—only 7 percent exceeded their recommended dose.

Not a trivial number, certainly, if you’re staring across an entire population. But if you’re evaluating the risk of taking a hypnotic as an individual, you’d be correct to assume that your odds of dose escalation are pretty low.

That there’s a difference between abuse and dependence, that dependence doesn’t mean a mad chase for more milligrams, that people depend on drugs for a variety of other conditions—some of them reversible through lifestyle changes—without suffering any stigma: these nuances matter.

“Using something where the benefits outweigh the side effects certainly is not addiction,” Winkelman, the Harvard psychiatrist and chair of the panel, told me when we spoke a few months later. “I call that treatment.”

The problem, he told me, is when the benefits stop outweighing the downsides. “Let’s say the medication loses efficacy over time.” Right. That 7 percent. And over-the-counter sleep meds, whose active component is usually diphenhydramine (more commonly known as Benadryl), are potentially even more likely to lose their efficacy—the American Academy of Sleep Medicine advises against them. “And let’s say you did stop your medication,” Winkelman continued. “Your sleep could be worse than it was before you started it,” at least for a while. “People should know about that risk.”

A small but even more hazardous risk: a seizure, for those who abruptly stop taking high doses of benzodiazepines after they’ve been on them for a long period of time. The likelihood is low—the exact percentage is almost impossible to ascertain—but any risk of a seizure is worth knowing about. “And are you comfortable with the idea that the drug could irrevocably be changing your brain?” Winkelman asked. “The brain is a machine, and you’re exposing it to the repetitive stimulus of the drug.” Then again, he pointed out, you know what else is a repetitive stimulus? Insomnia.

“So should these things even be considered a part of an addiction?” he asked. “At what point does a treatment become an addiction? I don’t know.”

Calvinist about sleep meds, blasé about sleep meds—whatever you are, the fact remains: We’re a nation that likes them. According to a 2020 report from the National Center for Health Statistics, 8.4 percent of Americans take sleep medications most nights or every night, and an additional 10 percent take them on some nights. Part of the reason medication remains so popular is that it’s easy for doctors to prescribe a pill and give a patient immediate relief, which is often what patients are looking for, especially if they’re in extremis or need some assistance through a rough stretch. CBT‑I, as Ronald Kessler noted, takes time to work. Pills don’t.

But another reason, as Suzanne Bertisch pointed out during the addiction-and-insomnia-meds panel, is that “primary-care physicians don’t even know what CBT-I is. This is a failure of our field.”

Even if general practitioners did know about CBT-I, too few therapists are trained in it, and those who are tend to have fully saturated schedules. The military, unsurprisingly, has tried to work around this problem (sleep being crucial to soldiers, sedatives being contraindicated in warfare) with CBT-I via video as well as an online program, both shown to be efficacious. But most of us are not in the Army. And while some hospitals, private companies, and the military have developed apps for CBT-I too, most people don’t know about them.

For years, medication has worked for me. I’ve stopped beating myself up about it. If the only side effect I’m experiencing from taking 0.5 milligrams of Klonopin is being dependent on 0.5 milligrams of Klonopin, is that really such a problem?

There’s been a lot of confusing noise about sleep medication over the years. “Weak science, alarming FDA black-box warnings, and media reporting have fueled an anti-benzodiazepine movement,” says an editorial in the March 2024 issue of The American Journal of Psychiatry. “This has created an atmosphere of fear and stigma among patients, many of whom can benefit from such medications.”

A case in point: For a long time, the public believed that benzodiazepines dramatically increased the risk of Alzheimer’s disease, thanks to a 2014 study in the British Medical Journal that got the usual five-alarm-fire treatment by the media. Then, two years later, another study came along, also in the British Medical Journal, saying, Never mind, nothing to see here, folks; there appears to be no causal relationship we can discern.

That study may one day prove problematic, too. But the point is: More work needs to be done.

A different paper, however—again by Daniel Kripke, the fellow who argued that seven hours of sleep a night predicted the best health outcomes—may provide more reason for concern. In a study published in 2012, he looked at more than 10,000 people on a variety of sleep medications and found that they were several times more likely to die within 2.5 years than a matched cohort, even when controlling for a range of culprits: age, sex, alcohol use, smoking status, body-mass index, prior cancer. Those who took as few as 18 pills a year had a 3.6-fold increase in mortality risk. (Those who took more than 132 pills a year had a 5.3-fold one.)

John Winkelman doesn’t buy it. “Really,” he told me, “what makes a lot more sense is to ask, ‘Why did people take these medications in the first place?’ ” And for what it’s worth, a 2023 study funded by the National Institute on Drug Abuse and published in the Journal of the American Medical Association found that people on stable, long-term doses of a benzodiazepine who go off their medication have worse mortality rates in the following 12 months than those who stay on it. So maybe you’re damned if you do, damned if you don’t.

Still, I take Kripke’s study seriously. Because … well, Christ, I don’t know. Emotional reasons? Because other esteemed thinkers still think there’s something to it?

In my own case, the most compelling reasons to get off medication are the more mundane ones: the scratchy little cognitive impairments it can cause during the day, the risk of falls as you get older. (I should correct myself here: Falling when you’re older has the potential to be not mundane, but very bad.) Medications can also cause problems with memory as one ages, even if they don’t cause Alzheimer’s, and the garden-variety brain termites of middle and old age are bummers enough.

And maybe most generally: Why have a drug in your system if you can learn to live without it?

My suspicion is that most people who rely on sleep drugs would prefer natural sleep.

So yes: I’d love to one day make a third run at CBT-I, with the hope of weaning off my medication, even if it means going through a hell spell of double exhaustion. CBT-I is a skill, something I could hopefully deploy for the rest of my life. Something I can’t accidentally leave on my bedside table.

Some part of me, the one that’s made of pessimism, is convinced that it won’t work no matter how long I stick with it. But Michael Irwin, at UCLA, told me something reassuring: His research suggests that if you have trouble falling asleep or difficulty maintaining your sleep, mindfulness meditation while lying in bed can be just as effective as climbing out of bed, sitting in a chair, and waiting until you’re tired enough to crawl back in—a pillar of CBT‑I, and one that I absolutely despise. I do it sometimes, because I know I should, but it’s lonely and freezing, a form of banishment.

And if CBT-I doesn’t work, Michael Grandner, the director of the sleep-and-health-research program at the University of Arizona, laid out an alternative at SLEEP 2024: acceptance and commitment therapy, or ACT. The basic idea is exactly what the name suggests. You accept your lot. You change exactly nothing. If you can’t sleep, or you can’t sleep enough, or you can sleep only in a broken line, you say, This is one of those things I can’t control. (One could see how such a mantra might help a person sleep, paradoxically.) You then isolate what matters to you. Being functional the next day? Being a good parent? A good friend? If sleep is the metaphorical wall you keep ramming your head against, “is your problem the wall?” Grandner asked. “Or is your problem that you can’t get beyond the wall, and is there another way?”

Because there often is another way. To be a good friend, to be a good parent, to be who and whatever it is you most value—you can live out a lot of those values without adequate sleep. “When you look at some of these things,” Grandner said, “what you find is that the pain”—of not sleeping—“is actually only a small part of what is getting in the way of your life. It’s really less about the pain itself and more about the suffering around the pain, and that’s what we can fix.”

Even as I type, I’m skeptical of this method too. My insomnia was so extreme at 29, and still can be to this day, that I’m not sure I am tough enough—or can summon enough of my inner Buddha (barely locatable on the best of days)—to transcend its pain, at once towering and a bore. But if ACT doesn’t work, and if CBT-I doesn’t work, and if mindfully meditating and acupuncture and neurofeedback and the zillions of other things I’ve tried in the past don’t work on their own … well … I’ll go back on medication.

Some people will judge me, I’m sure. What can I say? It’s my life, not theirs.

I’ll wrap up by talking about an extraordinary man named Thomas Wehr, once the chief of clinical psychobiology at the National Institute of Mental Health, now 83, still doing research. He was by far the most philosophical expert I spoke with, quick to find (and mull) the underlayer of whatever he was exploring. I really liked what he had to say about sleep.

You’ve probably read the theory somewhere—it’s a media chestnut—that human beings aren’t necessarily meant to sleep in one long stretch but rather in two shorter ones, with a dreamy, middle-of-the-night entr’acte. In a famous 2001 paper, the historian A. Roger Ekirch showed that people in the pre-electrified British Isles used that interregnum to read, chat, poke the fire, pray, have sex. But it was Wehr who, nearly 10 years earlier, found a biological basis for these rhythms of social life, discovering segmented sleep patterns in an experiment that exposed its participants to 14 hours of darkness each night. Their sleep split in two.

Wehr now knows firsthand what it is to sleep a divided sleep. “I think what happens as you get older,” he told me last summer, “is that this natural pattern of human sleep starts intruding back into the world in which it’s not welcome—the world we’ve created with artificial light.”

There’s a melancholy quality to this observation, I know. But also a beauty: Consciously or not, Wehr is reframing old age as a time of reintegration, not disintegration, a time when our natural bias for segmented sleep reasserts itself as our lives are winding down.

His findings should actually be reassuring to everyone. People of all ages pop awake in the middle of the night and have trouble going back to sleep. One associates this phenomenon with anxiety if it happens in younger people, and no doubt that’s frequently the cause. But it also rhymes with what may be a natural pattern. Perhaps we’re meant to wake up. Perhaps broken sleep doesn’t mean our sleep is broken, because another sleep awaits.

And if we think of those middle-of-the-night awakenings as meant to be, Wehr told me, perhaps we should use them differently, as some of our forebears did when they’d wake up in the night bathed in prolactin, a hormone that kept them relaxed and serene. “They were kind of in an altered state, maybe a third state of consciousness you usually don’t experience in modern life, unless you’re a meditator. And they would contemplate their dreams.”

Night awakenings, he went on to explain, tend to happen as we’re exiting a REM cycle, when our dreams are most intense. “We’re not having an experience that a lot of our ancestors had of waking up and maybe processing, or musing, or let’s even say ‘being informed’ by dreams.”

We should reclaim those moments at 3 or 4 a.m., was his view. Why not luxuriate in our dreams? “If you know you’re going to fall back asleep,” he said, “and if you just relax and maybe think about your dreams, that helps a lot.”

This assumes one has pleasant or emotionally neutral dreams, of course. But I take his point. He was possibly explaining, unwittingly, something about his own associative habits of mind—that maybe his daytime thinking is informed by the meandering stories he tells himself while he sleeps.

The problem, unfortunately, is that the world isn’t structured to accommodate a second sleep or a day informed by dreams. We live unnatural, anxious lives. Every morning, we turn on our lights, switch on our computers, grab our phones; the whir begins. For now, this strange way of being is exclusively on us to adapt to. Sleep doesn’t much curve to it, nor it to sleep. For those who struggle each night (or day), praying for what should be their biologically given reprieve from the chaos, the world has proved an even harsher place.

But there are ways to improve it. Through policy, by refraining from judgment—of others, but also of ourselves. Meanwhile, I take comfort in the two hunter-gatherer tribes Wehr told me about, ones he modestly noted did not confirm his hypothesis of biphasic sleep. He couldn’t remember their names, but I later looked them up: the San in Namibia and the Tsimané in Bolivia. They average less than 6.5 hours of sleep a night. And neither has a word for insomnia.


This article appears in the August 2025 print edition with the headline “American Insomnia.”

This week, Robert F. Kennedy Jr. used his address to a global vaccine summit to disparage global vaccination. The conference was organized by Gavi, the world’s leading immunization program, and in a recorded speech, Kennedy accused the organization of collaborating with social-media companies to stifle dissenting views on immunization during the coronavirus pandemic and said it had “ignored the science” in its work. He criticized Gavi for recommending COVID-19 shots to pregnant women, and went deep on a discredited study that purported to find safety issues with a tetanus vaccine commonly used in the developing world. “In its zeal to promote universal vaccination,” Kennedy claimed, Gavi “has neglected the key issue of vaccine safety.”

Kennedy’s remarks confirmed what The New York Times first reported in March: that the United States, Gavi’s third-largest donor, would stop pledging money to the organization. (Congress, which has always had final say over Gavi funding, has not yet weighed in.) They are also the first indication that the U.S.’s rejection of global vaccine campaigns stems from the Trump administration’s opposition not only to foreign aid, but to vaccination itself. For the first time, Kennedy has managed to use the anti-vaccine agenda to guide American foreign policy.

Gavi, at its most basic level, is Costco for immunizations, wielding its massive purchasing power to buy vaccines in bulk for cheap. National governments and private philanthropies pledge funding to it every five years. The United Kingdom and the Gates Foundation are its largest donors; the United Nations distributes the shots. The poorest countries pay 20 cents per vaccine, and prices rise along with national income. Since the partnership was launched, in January 2000, 19 countries—including Ukraine, Congo, and Guyana—have gone from relying on Gavi to paying for vaccinations entirely on their own. Indonesia, which accepted donations from Gavi as recently as 2017, pledged $30 million to the organization this funding cycle.

Gavi, by its own estimate, has saved about 19 million lives and vaccinated 1 billion children. At the conference this week, the director of the World Health Organization noted that since 2000, the number of children who die each year before they reach the age of 5 has fallen by more than half, largely due to the power of vaccines. By Gavi’s estimates, the U.S. canceling its Biden-era pledge to provide $1.2 billion this donation cycle could lead to the deaths of more than 1 million children who otherwise would have lived. (The Department of Health and Human Services did not respond to a request for comment.) In his recorded remarks, Kennedy said America would not send the money until Gavi can “re-earn the public trust” by “taking vaccine safety seriously.”

Cutting off millions of children’s only access to routine vaccines is “the most emphatic globalization of the anti-vaxxer agenda,” Lawrence Gostin, the faculty director of Georgetown’s O’Neill Institute for National and Global Health Law, told me. Tom Frieden, the former director of the CDC, told me that after he heard Kennedy’s remarks, “I was literally sick to my stomach,” because “unscientific, irresponsible statements like this will result in the deaths of children.” (The U.S. has run an international anti-vaccine campaign before: According to an investigation by Reuters, in 2020, the Pentagon unleashed bot accounts on multiple social-media platforms that impersonated Filipinos and discouraged uptake of China’s Sinovac vaccine—the first COVID vaccine available in the Philippines—using a hashtag that read, in Tagalog, “China is the virus.” The goal was not to combat vaccines, but to undermine China’s influence.)


Kennedy’s prerecorded address held back his harshest critiques of Gavi. In his 2021 book, The Real Anthony Fauci, Kennedy paints “Bill Gates’s surrogate group Gavi” (the Gates Foundation co-founded Gavi) as nothing more than a profiteering “cabal” and a facilitator of “African Genocide.” To hear Kennedy tell it, “virtually all of Gates’s blockbuster African and Asian vaccines—polio, DTP, hepatitis B, malaria, meningitis, HPV, and Hib—cause far more injuries and deaths than they avert.”

Decades’ worth of safety and efficacy studies have proved him wrong. In his remarks to Gavi this week, Kennedy focused on the DTP (diphtheria, tetanus, and pertussis) shot, describing at length a “landmark” 2017 study that found the vaccine increased all-cause mortality among girls in Guinea-Bissau. But as Frieden pointed out, this was in fact a relatively small observational study. In 2022, a randomized, double-blind, placebo-controlled trial of more than 50,000 newborns found that the DTP vaccine significantly decreased infant mortality. Frieden compared the evidence: “Hundreds of kids versus 50,000 kids. Poorly done; well done.”

Kennedy made efforts to take his anti-vaccine advocacy global before he became America’s health secretary. In 2021, he delivered a webinar on the importance of expanding an “international movement” for Children’s Health Defense, the anti-vaccine organization he founded. In 2019, when Samoa was experiencing a major dip in measles immunization after an improperly prepared vaccine killed two children, Kennedy visited the prime minister and, on behalf of Children’s Health Defense, reportedly offered to build an information system the country could use to track the health effects of vaccines and other medical interventions. When a deadly measles outbreak took hold later that year, Kennedy sent a letter to the prime minister suggesting that widespread vaccination might make unvaccinated Samoan children more likely to die of measles. (In an interview for a 2023 documentary, Kennedy said that “I had nothing to do with people not vaccinating in Samoa” and that his conversations about vaccines with the prime minister had been “limited.”)


Now, it seems, Kennedy has gained the power to realize his ambitions both domestically and abroad. Earlier this month, Kennedy dismissed all 17 members of the CDC’s vaccine advisory committee, then replaced them with a group that includes several allies who have spread misinformation about the harms of vaccines. This week, as other countries pledged their support for Gavi, Kennedy’s brand-new, handpicked panel convened for a discussion of the dangers of thimerosal, a vaccine ingredient that is a frequent target of anti-vaxxers despite having been found safe. The committee has formed a working group to review the “cumulative effect” of childhood vaccination in the United States. As Kennedy said in his address to Gavi, “Business as usual is over.”
