
In the fall of 2021, Gabriel Arias felt like his body was “rotting from the inside.” He was diagnosed with acute myeloid leukemia, a form of blood cancer so aggressive that doctors had him hospitalized the day of his biopsy. In cases like his, the ideal treatment is a transplant. Arias’s cancer-prone blood cells needed to be destroyed and replaced with healthy ones taken from the bone marrow or blood of a donor who matched him biologically. Fortunately, doctors found him a match in the volunteer-donor registries—a man in Poland. Unfortunately, Arias’s single match in the entire world was no longer available to donate.

In the past, the road to transplant might have ended here, but a medical advance had dramatically expanded the pool of donors for patients such as Arias. With the right drug, Arias could now get a transplant from his brother, a partial match, or, as he ultimately chose, he could join a clinical trial in which his donor would be a stranger who shared just eight of 10 markers used in bone-marrow transplants. Under this looser standard, Arias’s registry matches multiplied from one to more than 200. “It really is a game changer,” says Steve Devine, the chief medical officer of the nonprofit NMDP, which runs the U.S. donor registry and has led research into the use of mismatched donors. Today, agonizing searches for a matched donor are largely a thing of the past.

The drug powering this breakthrough is actually very old. Cyclophosphamide was first developed in the 1950s for chemotherapy. Fifty years later, researchers at Johns Hopkins began studying whether it could be repurposed to prevent a common and sometimes deadly complication of bone-marrow transplants called graft-versus-host disease, where the donor’s white blood cells—which form the recipient’s new immune system—attack the rest of the body as foreign. The bigger the mismatch between donor and recipient, the more likely this was to happen. Cyclophosphamide worked stunningly well against graft-versus-host disease: The drug cut rates of acute and severe complications by upwards of 80 percent.

Cyclophosphamide has now enabled more patients than ever to get bone-marrow transplants—more than 7,000 last year, according to NMDP. (Bone-marrow transplant is still used as an umbrella term, though many of these procedures now use cells collected from the blood rather than bone marrow, which can be done without surgery. Both versions are also known, more accurately, as hematopoietic or blood stem-cell transplants.) The field has essentially surmounted the problem of matching donors, a major barrier to transplants, Ephraim Fuchs, an oncologist at Johns Hopkins University, told me. Fuchs couldn’t remember the last time a patient failed to get a blood stem-cell transplant because they couldn’t find a donor.


It wasn’t obvious that cyclophosphamide would work so well. “I’m just going to come clean,” Devine told me. “Back in 2003 and 2005, I thought it was crazy.” Derived from a relative of mustard gas, the drug is known to be highly toxic to a variety of blood cells; in fact, doctors had long used it to kill the diseased bone marrow in patients before transplant. Why would you want to give such a drug after transplant, when the new donor cells are still precious and few? It defied a certain logic.

But as far back as the 1960s, researchers also noticed that high doses of post-transplant cyclophosphamide could prevent graft-versus-host disease in mice, even if they did not know why. Over the next few decades, scientists working away in labs learned that cyclophosphamide isn’t quite carpet-bombing the blood. It actually spares the stem cells most important to successful transplant. (Blood stem cells differentiate into all the types of red and white blood cells that a patient will need.) Why cyclophosphamide works so well against graft-versus-host disease is still unclear, but the drug also seems to selectively kill white blood cells active in the disease while sparing those that quell the immune system.

By the late ’90s, doctors saw a clear need to expand the search for donors. Bone-marrow transplants are most successful when donor and recipient share the same markers, known as HLA, which are protein tags our cells use to distinguish self from nonself. We inherit HLA markers from our parents, so siblings have about a one-in-four chance of being perfectly matched. As families got smaller in the 20th century, though, the likelihood of a sibling match fell. Donor registries such as NMDP were created to fill the gap, however imperfectly.   

Doctors soon began coalescing around the idea of using family members who were only haploidentical, or half matched, meaning they shared at least five out of 10 HLA markers. Every child is a half match to their parents, and every parent to their child; siblings also have a 50 percent chance of being half matches. But when doctors first tried these transplants, the “outcomes were horrible,” Leo Luznik, an oncologist at Johns Hopkins, told me. Patients had frighteningly high rates of graft-versus-host disease, and more than half died within three years.

Based on the lab findings, Luznik, Fuchs, and other colleagues at Johns Hopkins wondered if post-transplant cyclophosphamide could help. The pharmaceutical companies that made it were uninterested in funding any research, Luznik said, because “it was an old, very cheap drug.” With government grants, however, the team was able to prove that cyclophosphamide got the rate of graft-versus-host disease as low as in matched sibling transplants. By the late 2000s, transplants with half-matched family members were becoming routine.

Still, not every patient will have a sibling or parent or child who can donate. Doctors began wondering if cyclophosphamide could work for unrelated donors too. If only eight of the 10 markers have to be matched, then almost everyone would find a donor, even multiple donors. This was especially important for patients of mixed or non-European ancestry, who have a harder time finding unrelated donors, because people of those backgrounds make up a smaller proportion of registry donors and because they can carry a more diverse set of HLA markers. Two-thirds of white people can find a fully matched registry donor, but that number drops to 23 percent for Black Americans and 41 percent for Asians or Pacific Islanders.

Amelia Johnson, who is half Indian and half Black, was one of the first children to get a transplant from a mismatched unrelated donor in a clinical trial in 2022. Her mom, Salome Sookdieopersad, remembers being told, “You guys need to start recruiting bone-marrow donors to help increase your chances.” When that still didn’t turn up an ideal match, Sookdieopersad prepared to donate to her daughter as a half match. But then Amelia was offered a spot in the clinical trial, and they decided to take it. Transplants with mismatched unrelated donors had already been tried in adults—that was Arias’s trial—and they offered other potential benefits. A younger donor, for example, has younger cells, which fare noticeably better than older ones. Amelia did end up with a bout of graft-versus-host disease; cyclophosphamide lowers the risk but not to zero. Still, the transplant was necessary to save her life, and her mom pointed out that some risk was unavoidable, no matter the type of donor: A friend of Amelia’s got graft-versus-host even with a perfectly matched one. Doctors were able to treat Amelia’s complications, and she returned to school last August. The pediatric trial she was part of is ongoing.

In adults, where more data are available, doctors are already moving ahead with mismatched, unrelated donors. Between this and half-matched family members, patients who once might have had zero donors are now finding themselves with multiple possibilities. Doctors can be choosier too: They can select the youngest donor, for example, or match on characteristics such as blood type. The larger pool of donors also prevents situations like Arias’s, in which a single matched donor who signed up years ago is no longer available, which happens with some regularity. Cyclophosphamide is now routinely used in matched transplants too, because it lowers the risk of graft-versus-host disease even further.

Arias’s mismatched unrelated donor in the trial was an anonymous 22-year-old man who lives somewhere in the United States. When Arias and I spoke last month, it had been almost exactly two years since his transplant. He’s cancer free. He and his wife just welcomed a baby girl. None of this would likely have been possible without the transplant, without the donor, without a 70-year-old drug that had been smartly repurposed.

After a decade working as an obstetrician-gynecologist, Marci Bowers thought she understood menopause. Whenever she saw a patient in her 40s or 50s, she knew to ask about things such as hot flashes, vaginal dryness, mood swings, and memory problems. And no matter what a patient’s concern was, Bowers almost always ended up prescribing the same thing. “Our answer was always estrogen,” she told me.

Then, in the mid-2000s, Bowers took over a gender-affirmation surgical practice in Colorado. In her new role, she began consultations by asking each patient what they wanted from their body—a question she’d never been trained to ask menopausal women. Over time, she grew comfortable bringing up tricky topics such as pleasure, desire, and sexuality, and prescribing testosterone as well as estrogen. That’s when she realized: Women in menopause were getting short shrift.

Menopause is a body-wide hormonal transition that affects virtually every organ, from skin to bones to brain. The same can be said of gender transition, which, like menopause, is often referred to by doctors and transgender patients as “a second puberty”: a roller coaster of physical and emotional changes, incited by a dramatic shift in hormones. But medicine has only recently begun connecting the dots. In the past few years, some doctors who typically treat transgender patients—urologists, gender-affirmation surgeons, sexual-medicine specialists—have begun moving into menopause care and bringing with them a new set of tools.

“In many ways, trans care is light years ahead of women’s care,” Kelly Casperson, a urologist and certified menopause provider in Washington State, told me. Providers who do both are well versed in the effects of hormones, attuned to concerns about sexual function, and empathetic toward people who have had their symptoms dismissed by providers. If the goal of menopause care isn’t just to help women survive but also to allow them to live their fullest life, providers would do well to borrow some insights from a field that has been doing just that for decades.

[From the October 2019 issue: The secret power of menopause]

American women’s relationship with estrogen has been a rocky one. In the 1960s, books such as Feminine Forever, written by the gynecologist Robert A. Wilson, framed estrogen as a magical substance that could make women once again attractive and sexually available, rendering the menopausal “much more pleasant to live with.” (The New York Times later reported that Wilson was paid by the manufacturer of Premarin, the most popular estrogen treatment at the time.) Later, the pitch switched to lifelong health. By 1992, Premarin was the most prescribed drug in the United States. By the end of the decade, 15 million women were on estrogen therapy, with or without progesterone, to treat their menopause symptoms.

Then, in 2002, a large clinical trial concluded that oral estrogen plus progesterone treatment was linked to an increased risk of stroke, heart disease, and breast cancer. The study was an imperfect measure of safety—it focused on older women rather than on the newly menopausal, and it tested only one type of estrogen—but oral-estrogen prescriptions still plummeted, from nearly a quarter of women over 40 to roughly 5 percent. Despite this blow to the hormone’s reputation, evidence has continued to pile up confirming that oral estrogen can help prevent bone loss and treat hot flashes and night sweats, though it can increase the risk of strokes for women over 60. Topical estrogen helps address genital symptoms, including vaginal dryness, irritation, and thinning of the tissues, as well as urinary issues such as chronic UTIs and incontinence.

But estrogen alone can’t address every menopause symptom, in part because estrogen is not the only hormone that’s in short supply during menopause; testosterone is too. Although high-quality research on the role of testosterone in women over age 65 is lacking, scientists know that in premenopausal women, it plays a role in bone density, heart health, metabolism, cognition, and the function of the ovaries and bladder. A 2022 review concluded, “Testosterone is a vital hormone in women in maintaining sexual health and function” after menopause.

Yet for decades, standard menopause care mostly ignored testosterone. Part of the reason is regulatory: Although estrogen has enjoyed FDA approval for menopausal symptoms since 1941, the agency has never green-lit a testosterone treatment for women, largely because of scant research. That means doctors have to be familiar enough with the hormone to prescribe it off-label. And unlike estrogen, testosterone is a Schedule III controlled substance, which means more red tape. Some of Casperson’s female patients have had their testosterone prescription withheld by pharmacists; one was asked if she was undergoing gender transition.

[Helen Lewis: Capitalism has plans for menopause]

The other hurdle is cultural. These days, providers such as Casperson, as well as menopause-trained gynecologists, might prescribe testosterone to menopausal women experiencing difficulty with libido, arousal, and orgasm. Many women see improvements in these areas after a few months. But first, they have to get used to the idea of taking a hormone they’ve been told all their lives is for men, at just the time when their femininity can feel most tenuous (see: Feminine Forever). Here, too, experience in trans care can help: Casperson has talked many transmasculine patients through similar hesitations about using genital estrogen cream to balance out the side effects of their high testosterone doses. Taking estrogen, she tells those patients, “doesn’t mean you’re not who you want to be,” just as taking testosterone wouldn’t change a menopause patient’s gender identity.

Many trans-health providers have also honed their skills in speaking frankly about sexuality. That’s especially true for those who do surgeries that will affect a patient’s future sex life, Blair Peters, a plastic surgeon at Oregon Health & Science University who performs phalloplasties and vaginoplasties, told me. Experts I spoke with, including urologists and gynecologists with training in sexual health, said that gynecologists can often fall short in this regard. Despite treating vaginas for a living, they can often be uncomfortable bringing up sexual concerns with patients or inexperienced at treating issues beyond vaginal dryness. They can also assume, inaccurately, that concerns about vaginal discomfort always center on penetrative sex with a male partner, Tania Glyde, an LGBTQ+ therapist in London and the founder of the website Queer Menopause, told me. A 2022 survey of OB-GYN residency programs found that less than a third had a dedicated menopause curriculum.

Bowers, who is herself transgender, told me she got comfortable talking about sexuality in a clinical setting only after moving into trans care. If she were to return to gynecology today, she said, she would add some frank questions to her conversations with midlife patients who share that they’re having sexual issues: “Tell me about your sexuality. Tell me, are you happy with that? How long does it take you to orgasm? Do you masturbate? What do you use?”

Menopause care has already benefited from decades of effort by queer people, who have pushed doctors to pay more attention to a diversity of experiences. Research dating as far back as the 2000s that included lesbians going through menopause helped show that common menopause stereotypes, such as anxiety over remaining attractive to men and disconnect between members of a couple, were far from universal. Trans people, too, have benefited from advances in menopause care. Because both gender transition and menopause involve a sharp drop in estrogen, many transmasculine men who take testosterone also lose their period and experience a similar (though more extreme) version of the genital dryness and irritation that menopausal women report. That means they can benefit from treatments developed for menopausal women, as Tate Smith, a 25-year-old trans activist in the U.K., realized when he experienced genital pain and spotting after starting testosterone at 20. After he found relief with topical estrogen cream, he made an Instagram post coining the term trans male menopause to make sure more trans men were aware of the connection.

[Read: What menopause does to women’s brains]

The more menopause and gender care are considered together in medical settings, the better the outcomes will be for everyone involved. Yet menopause studies rarely consider trans men and nonbinary people, along with younger women and girls who experience menopause due to cancer treatment, surgery, or health conditions that affect ovarian function. Although these patient populations represent a small proportion of the patients going through menopause, their experiences can help researchers understand the effects of low estrogen across a range of bodies. Siloing off menopause from other relevant fields of medicine means menopausal women and trans people alike can miss out on knowledge and treatments that already exist.

Unlike gender transition, menopause is generally not chosen. But it too can be an opportunity for a person to make choices about what they want out of their changing body. Not all women in menopause are worried about their libido or interested in taking testosterone. Like trans patients, they deserve providers who listen to what they care about and then offer them a full range of options, not just a limited selection based on outdated notions of what menopause is supposed to be.

In December 1921, Leonard Thompson was admitted to Toronto General Hospital so weak and emaciated that his father had to carry him inside. Thompson was barely a teenager, weighing all of 65 pounds, dying of diabetes. With so little to lose, he was an ideal candidate to be patient No. 1 for a trial of the pancreatic extract that would come to be called insulin.

The insulin did what today we know it can. “The boy became brighter, more active, looked better and said he felt stronger,” the team of Toronto researchers and physicians reported in March 1922 in The Canadian Medical Association Journal. The article documented their use of insulin on six more patients; it had seemingly reversed the disease in every case. As John Williams, a diabetes specialist in Rochester, New York, wrote of the first patient on whom he tried insulin later that year, “The restoration of this patient to his present state of health is an achievement difficult to record in temperate language. Certainly few recoveries from impending death more dramatic than this have ever been witnessed by a physician.”

Of all the wonder drugs in the history of medicine, insulin may be the closest parallel, in both function and purpose, to this century’s miracle of a metabolic drug: the GLP-1 agonist. Sold under now-familiar brand names including Ozempic, Wegovy, and Mounjaro, these new medications for diabetes and obesity have been hailed as a generational breakthrough that may one day stand with insulin therapy among “the greatest advances in the annals of chronic disease,” as The New Yorker put it in December.

But if that analogy is apt—and the correspondences are many—then a more complicated legacy for GLP-1 drugs could be in the works. Insulin, for its part, may have changed the world of medicine, but it also brought along a raft of profound, unintended consequences. By 1950, the new therapy had tripled the number of years that patients at a major diabetes center could expect to live after diagnosis. It also kept those patients alive long enough for them to experience a wave of long-term complications. Leonard Thompson would die at 27 of pneumonia. Other young men and women who shared his illness also died far too young, their veins and arteries ravaged by the disease, and perhaps—there was no way to tell—by the insulin therapy and associated dietary protocols that had kept them alive in the first place.

In the decades that followed, diabetes, once a rare disorder, would become so common that entire drug-store aisles are now dedicated to its treatment-related paraphernalia. Roughly one in 10 Americans is afflicted. And despite a remarkable, ever-expanding armamentarium of drug therapies and medical devices, the disease—whether in its type 1 or type 2 form—is still considered chronic and progressive. Patients live far longer than ever before, yet their condition is still anticipated to get worse with time, requiring ever more aggressive therapies to keep its harms in check. One in every seven health dollars is now spent on diabetes treatment, amounting to $800 million every day.

The advent of insulin therapy also changed—I would even say distorted—the related medical science. In my latest book, Rethinking Diabetes, I document how clinical investigators in the 1920s abruptly shifted their focus from trying to understand the relationship between diet and disease to that between drug and disease. Physicians who had been treating diabetes with either fat-rich diets absent carbohydrates (which had been the accepted standard of care in both the U.S. and Europe) or very low-calorie “starvation” diets came to rely on insulin instead. Physicians would still insist that diet is the cornerstone of therapy, but only as an adjunct to the insulin therapy and in the expectation that any dietary advice they gave to patients would be ignored.

With the sudden rise of GLP-1 drugs in this decade, I worry that a similar set of transformations could occur. Dietary therapy for obesity and diabetes may be sidelined in favor of powerful pharmaceuticals—with little understanding of how the new drugs work and what they really tell us about the mechanisms of disease. And all of that may continue despite the fact that the long-term risks of taking the drugs remain uncertain.


“The ebullience surrounding GLP-1 agonists is tinged with uncertainty and even some foreboding,” Science reported in December, in its article declaring these obesity treatments the journal’s Breakthrough of the Year. “Like virtually all drugs, these blockbusters come with side effects and unknowns.” Yet given the GLP-1 agonists’ astounding popularity, such cautionary notes tend to sound like lip service. After all, the FDA has deemed these drugs safe for use, and doctors have been prescribing products in this class to diabetes patients for 20 years with little evidence of long-term harm.

Yet the GLP-1 agonists’ side effects have been studied carefully only out to seven years of use, and that was in a group of patients on exenatide—an early, far less potent product in this class. The study offered no follow-up on the many participants in that trial who had discontinued use. Other long-term studies have followed patients on the drugs for at least as many years, but they’ve sought (and failed to find) only very specific harms, such as pancreatic cancer and breast cancer. In the meantime, a 2023 survey found that more than two-thirds of patients prescribed the newer GLP-1 agonists for weight loss had stopped using them within a year. Why did they quit? What happened to them when they did?

The stories of Leonard Thompson and the many diabetes patients on insulin therapy who came after may be taken as a warning. The GLP-1 drugs have many traits in common with insulin. Both treatments became very popular very quickly. Within years of its discovery, insulin was being prescribed for essentially every diabetic patient whose physician could obtain the drug. Both insulin and GLP-1 agonists were originally developed as injectable treatments to control blood sugar. Both affect appetite and satiety, and both can have remarkable effects on body weight and composition. The GLP-1s, like insulin, treat only the symptoms of the disorders for which they are prescribed. Hence, the benefits of GLP-1s, like those of insulin, are sustained only with continued use.

The two treatments are also similar in that they work, directly or indirectly, by manipulating an unimaginably complex physiological system. When present in their natural state—as insulin secreted from the pancreas, or GLP-1 secreted from the gut (and perhaps the brain)—they’re both involved in the regulation of fuel metabolism and storage, what is technically known as fuel partitioning. This system tells our bodies what to do with the macronutrients (protein, fat, and carbohydrates) in the foods we eat.

Chris Feudtner, a pediatrician, medical historian, and medical ethicist at the University of Pennsylvania, has described this hormonal regulation of fuel partitioning as that of a “Council of Food Utilization.” Organs communicate with one another “via the language of hormones,” he wrote in Bittersweet, his history of the early years of insulin therapy and the transformation of type 1 diabetes from an acute to a chronic disease. “The rest of the body’s tissues listen to this ongoing discussion and react to the overall pattern of hormonal messages. The food is then used—for burning, growing, converting, storing, or retrieving.” Perturb that harmonious discourse, and the whole physiological ensemble of the human body reverberates with corrections and counter-corrections.

This is why the long-term consequences of using these drugs can be so difficult to fathom. Insulin therapy, for instance, did not just lower patients’ blood sugar; it restored their weight and then made them fatter still (even as it inhibited the voracious hunger that was a symptom of uncontrolled diabetes). Insulin therapy may also be responsible, at least in part, for diabetic complications—atherosclerosis and high blood pressure, for instance. That possibility has been acknowledged in textbooks and journal articles but never settled as a scientific matter.

With the discovery of insulin and its remarkable efficacy for treating type 1 diabetes, diabetologists came to embrace a therapeutic philosophy that is still ascendant today: Treat the immediate symptoms of the disease with drug therapy and assume that whatever the future complications, they can be treated by other drug or surgical therapies. Patients with diabetes who develop atherosclerosis may extend their lives with stents; those with hypertension may go on blood-pressure-lowering medications.

A similar pattern could emerge for people taking GLP-1s. (We see it already in the prospect of drug therapies for GLP-1-related muscle loss.) But the many clinical trials of the new obesity treatments do not and cannot look at what might happen over a decade or more of steady use, or what might happen if the injections must be discontinued after that long. We take for granted that if serious problems do emerge, far down that distant road, or if the drugs have to be discontinued because of side effects, newer treatments will be available to solve the problems or take over the job of weight maintenance.

In the meantime, young patients who stick with treatment can expect to be on their GLP-1s for half a century. What might happen during those decades—and what might happen if and when they have to discontinue use—is currently unknowable, although, at the risk of sounding ominous, we will find out.

Pregnancy is another scenario that should generate serious questions. A recently published study found no elevated risk of birth defects among women taking GLP-1 agonists for diabetes right before or during early pregnancy, as compared with those taking insulin, but birth defects are just one obvious and easily observable effect of a drug taken during pregnancy. Children of a mother with diabetes or obesity tend to be born larger and have a higher risk of developing obesity or diabetes themselves later in life. The use of GLP-1 agonists during pregnancy may reduce—or exacerbate—that risk. Should the drugs be discontinued before or during pregnancy, any sudden weight gain (or regain) by the mother could similarly affect the health of her child. The consequences cannot be foreseen and might not manifest themselves until these children reach their adult years.

The rise of GLP-1 drugs may also distort our understanding of obesity itself, in much the way that insulin therapy distorted the thinking in diabetes research. With insulin’s discovery, physicians assumed that all diabetes was an insulin-deficiency disorder, even though this is true today for only 5 to 10 percent of diabetic patients, those with type 1. It took until the 1960s for specialists to accept that type 2 diabetes was a very different disorder—a physiological resistance to insulin, inducing the pancreas to respond by secreting too much of the hormone rather than not enough. And although the prognosis today for a newly diagnosed patient with type 2 diabetes is better than ever, physicians have yet to establish whether the progression and long-term complications of the disease are truly inevitable, or whether they might be, in fact, a consequence of the insulin and other drug therapies that are used to control blood sugar, and perhaps even of the diets that patients are encouraged to eat to accommodate these drug therapies.

Already, assumptions are being made about the mechanisms of GLP-1 agonists without the rigorous testing necessary to assess their validity. They’re broadly understood to work by inhibiting hunger and slowing the passage of food from the stomach—effects that sound benign, as if the drugs were little more than pharmacological versions of a fiber-rich diet. But changes to a patient’s appetite and rate of gastric emptying only happen to be easy to observe and study; they do not necessarily reflect the drugs’ most important or direct actions in the body.

When I spoke with Chris Feudtner about these issues, we returned repeatedly to the concept that Donald Rumsfeld captured so well with his framing of situational uncertainty: the known unknowns and the unknown unknowns. “This isn’t a you-take-it-once-and-then-you’re-done drug,” Feudtner said. “This is a new lifestyle, a new maintenance. We have to look down the road a bit with our patients to help them think through some of the future consequences.”

Patients, understandably, may have little time for a lecture on all that we don’t know about these drugs. Obesity itself comes with so many burdens—health-related, psychological, and social—that deciding, after a lifetime of struggle, to take these drugs in spite of potential harms can always seem a reasonable choice. History tells us, though, that physicians and their patients should be wary as they try to balance known benefits against a future, however distant, of unknown risk.

Good chocolate, I’ve come to learn, should taste richly of cocoa—a balanced blend of bitter and sweet, with notes of fruit, nuts, and spice. My favorite chocolate treat is nothing like that. It’s the Cadbury Creme Egg, an ovoid milk-chocolate shell enveloping a syrupy fondant center. To this day, I look forward to its yearly return in the weeks leading up to Easter.

Most popular chocolate is like this: milky, sugary, and light on actual cocoa. Lots of sugary sweets contain so little of the stuff that they barely qualify as chocolate. M&M’s, Snickers bars, and Hershey’s Kisses aren’t staples of American diets because they are the best—rather, they satisfy our desire for chocolate while costing a fraction of a jet-black bar made from single-origin cocoa.

But chocolate isn’t as economical as it once was. By one estimate, retail chocolate prices rose by 10 percent last year alone. And this is now the third consecutive year of poor cocoa harvests in West Africa, where most of the world’s cocoa is grown. Late last month, amid fears of a worsening shortage, cocoa prices soared past $10,000 per metric ton, up from about $4,000 in January. To cover the costs, chocolate companies are gearing up to hike the price of their treats further in the coming months. Prices might not fall back down from there. Chocolate as we know it may never be the same.

Chocolate has had “mounting problems for years,” Sophia Carodenuto, an environmental scientist at the University of Victoria, in Canada, told me. The farmers who grow cocoa are chronically underpaid. And cocoa trees—the fruits of which contain beans that are fermented and roasted to create chocolate—are tough to grow, and thrive only in certain conditions. A decade ago, chocolate giants warned that the cocoa supply, already facing environmental challenges, would soon be unable to keep up with rising demand. “But what we’re seeing now is a little bit of an explosion” in the crop’s struggles, Carodenuto said.

The simplest explanation for the ongoing cocoa shortage is extreme weather, heightened by climate change. Exceptionally hot and dry conditions in West Africa, partly driven by the current El Niño event, have led to reduced yields. Heavier-than-usual rains have created ideal conditions for black pod disease, which causes cocoa pods to rot on the branch. All of this has taken place while swollen shoot, a virus fatal to cocoa plants, is spreading more rapidly in cocoa-growing regions. Global cocoa production is expected to fall by nearly 11 percent this season, Reuters reported.

In the past, when supply fell and prices rose, farmers were motivated to plant more cocoa, which led to a boost in supply five years later, when the new trees began to bear fruit, says Nicko Debenham, the former head of sustainability at the chocolate giant Barry Callebaut. Already, some West African farmers are racing to plant new trees. But they may not be able to plant their way out of future cocoa shortages. “Climate change is definitely a challenge” because it will make rainfall less predictable, which is a problem for moisture-sensitive cocoa trees, Debenham told me. Furthermore, rising temperatures and more frequent droughts will render some cocoa-growing regions unusable.  

Climate change isn’t the only problem. Cocoa crops in Côte d’Ivoire and Ghana, where 60 percent of the world’s cocoa comes from, may already be in “structural decline,” Debenham said, citing disease, aging cocoa trees, and illegal gold mining on farmland. More important, the farmers who tend to the crops can’t afford to invest in their farms to increase their yields and bolster resilience against climate change. The bleak outlook for cocoa farmers threatens to doom cocoa-growing in the region altogether. In Ghana, the average cocoa farmer is close to 50 years old. A new generation of farmers is needed to maintain the cocoa supply, but young people may just walk away from the industry.

No matter how you look at it, the future of cocoa doesn’t look good. With less cocoa available all around, chocolate may become more expensive. For high-end chocolate brands, whose products use lots of cocoa, the recent price hikes are reportedly an existential threat. Barry Callebaut has predicted that the companies it supplies with cocoa will raise chocolate prices by up to 8 percent in the next few months. Because companies buy beans in advance, it will take some time before retail prices reflect the current shortage, so further increases are likely.

When cocoa prices go up, companies start reducing bar sizes and adding in substitutes such as fruit and nuts to reduce the cocoa content. “They’ll try and use every trick in the book to keep the consumption levels up,” Debenham said. My beloved Cadbury Creme Egg, for example, is markedly smaller than it used to be. Now, as Bloomberg has noted, companies are promoting candies that contain less chocolate, such as the new Reese’s caramel Big Cup from Hershey’s, or treats that have no chocolate at all, such as gummies.

Cocoa shortages will affect all kinds of chocolate, but mass-produced sweets may change beyond just the prices. The erratic temperatures brought about by climate change could change the flavor of beans, depending on where they are grown. Variability is a concern for commercial chocolate makers, who need to maintain consistent flavors across their products. They may counteract discrepancies among different batches of beans by combining them, then roasting them at a higher temperature, Johnny Drain, a food-science expert and co-founder of the cocoa-free chocolate brand Win-Win, told me. Doing so can eliminate unwanted qualities, but it may also remove desirable ones, resulting in a less interesting flavor overall. Even if an M&M contains a minimal amount of actual chocolate, a longtime consumer could notice a change in flavor.

Commercial chocolate makers may also tweak their recipes to amp up or mimic chocolate flavors without using more cocoa. These candies contain relatively little cacao to begin with; only 10 percent of a product’s weight must be cocoa in order to qualify as chocolate in the eyes of the FDA. Some already use chocolatelike ingredients such as cocoa-butter equivalents, cocoa extenders, and artificial cocoa flavors. In some cases, the swaps are noticeable: Cadbury’s use of an emulsifying filler to reduce the amount of cocoa butter in its Caramello bars diminished “the rich creaminess of the original,” Bon Appétit noted in 2016.

Newer chocolate alternatives may provide more satisfying counterfeits. Win-Win isn’t the only start-up producing cocoa-free chocolate, which is similar in concept to animal-free meat. The company uses plant ingredients to emulate the flavor and texture of chocolate—as do its competitors Foreverland and Voyage Foods. Another firm, California Cultured, grows actual cacao cells in giant steel tanks.

[Read: Silicon Valley is coming for your chocolate]

Cocoa-free chocolate is currently far more expensive than conventional chocolate, but Drain hopes it will eventually become “cheaper than the cheapest chocolate.” At that point, he said, it’ll likely find its niche at the lowest end of the market, where chocolate plays a supporting role rather than a starring one—think chocolate-coated ice creams and granola bars with chocolate chips. Already, some of these products are labeled as having “chocolate flavor” or being “chocolatey” instead of “chocolate,” which has a strict FDA definition.

Yet change is always tough to swallow. So much of the appeal of cheap chocolate is that it’s always been there—whether in the form of a Hershey’s Kiss, Oreo cookies, a bowl of Cocoa Puffs, or the shell of a fondant-filled egg. “You grow up with those tastes. It’s hard to fathom how pervasive it has been,” Carodenuto said. Chocolate lovers have weathered minor tweaks to these candies over the years, but the shifts happening today may be less tolerable—or at the very least more noticeable. The change that has been hardest to ignore is that cheap chocolate is no longer that cheap.

At Hopscotch, Daryl Collins’s bottle shop in Baltimore, he happily sells wine to 18-year-olds. If a customer isn’t sure what variety they like (and who is, at that age?), Collins might even pull a few bottles off the shelves and pop the corks for an impromptu tasting. No Maryland law keeps these teens away from the Tempranillo, because at this shop, none of the drinks contain alcohol.

The number and variety of zero- and low-alcohol beverages, a once-lagging category that academics and the World Health Organization refer to as “NoLos,” have exploded in the past five-plus years. The already growing “sober curious” movement—made up of adults who want to practice more thoughtful or limited alcohol consumption while still socializing over a drink at home or at a bar—snowballed during pandemic shutdowns. Today, about 70 NoLo bottle shops like Hopscotch dot the U.S., along with several dozen nonalcoholic, or NA, bars, most less than four years old.

Nearly all of the products they’re stocked with were designed with adults in mind. But broken down to their most basic ingredients, many are hardly different from juice, soda, or kombucha. In theory, these are teen-friendly drinks. But not every bar or shop owner will sell to under-21s; state laws, too, when they exist, differ on what kind of alcohol-like beverages are appropriate for people too young to drink actual alcohol. As nonalcoholic adult beverages become more mainstream, they’re forcing a reckoning over what makes a drink “adult” if not the alcohol, and testing whether drinking culture can truly be separated from booze.

Picture, for instance, a Shirley Temple, the consummate children’s drink. Add a shot of vodka, and it becomes a Dirty Shirley. Now replace the vodka with about an ounce of cinnamon-infused “Zero-Proof Vodka Alternative” from a sexy glass bottle. Can a 10-year-old have that Shirley Temple? What if the add-in is instead an ounce of tap water with an identical-tasting cinnamon extract?

[Read: Millennials are sick of drinking]

This puzzle is a diagnostic for how zero-proof entrepreneurs approach the allures and dangers of drinking culture, along with the role they’d like alternatives to play in changing it. Some think brand or bottle design makes a beverage “adult,” and worry that packaging elements more frequently associated with alcohol could open the door to consuming it. Others make decisions based on a drink’s name, how it was created, or what it’s an homage to—a mocktail with a distinct identity is preferable to one that impersonates a well-established recipe. The atmosphere matters too: Is the bar modeled more closely after a family-friendly taproom or an upscale cocktail joint?

The decision to sell booze-like substances to under-21s is constrained by law. The federal government defines an alcoholic beverage as a drink with 0.5 percent or more alcohol by volume, in line with your average kombucha and lower than some apple, orange, and grape juices. (Beer alternatives are subject to additional regulation by the U.S. Alcohol and Tobacco Tax and Trade Bureau—which, among other requirements, bans the word beer from the packaging unless it’s part of the phrase near beer.) But state definitions of specific alcoholic beverages can zero in on processes and ingredients (such as malt) in a way that fails to distinguish between the real deal and NA alternatives. NoLo manufacturers keep their products below the 0.5 percent federal cutoff, but the drinks can still end up with murky legal status once they arrive on local shelves. Pennsylvania, for example, has a law that makes it illegal to supply anyone under 21 with a NoLo analogue of any real adult beverage—something no other state prohibits.

In Lafayette, Indiana, Rob Theodorow splits the policy at his combination bar and bottle shop, Generation NA, down the middle. Any NA beers, wines, and spirits (say, a six-pack from Athletic Brewing Co., Noughty’s Sparkling Rosé, or Seedlip’s ginlike Spice 94) are off-limits to under-21s. Customers over 18 are welcome to purchase drinks that are less reminiscent of those in a real liquor store—like wellness sodas made by brands such as Recess and Kin Euphorics—or to sample the beers at free tastings.

[Read: The meaning of dry January]

Selling NA drinks to younger people isn’t explicitly illegal under Indiana law, but even if he had a clear green light, Theodorow would draw the line at selling any product that ever contained alcohol—even fully dealcoholized drinks such as Heineken 0.0—to under-21s. “I am a big believer in trying to steer people away from alcohol,” he told me. To him, that means treating products that taste and look just like alcohol with the same discretion as those that actually contain alcohol.

Some proprietors worry that developing a taste for NoLos will make young people more likely to desire the real thing. “When it comes to children, permitting them to consume any versions of beer or wine or spirits can normalize or desensitize them to the concept of alcoholic beverages,” says Cate Faulkner, a co-founder and the director of Zero Proof Collective, an industry group in Minnesota. Others are mostly concerned that selling younger people NoLo beverages could still feed the toxic side of drinking culture: Imagine 15-year-olds shotgunning NA beers in the backyard. “It’s not about the liquid so much as it is about the ritual,” Laura Silverman, the founder of the NA information hub Zero Proof Nation, told me.

[From the July/August 2021 issue: America has a drinking problem]

Still other advocates and entrepreneurs see NoLos as a way for young adults to form healthier habits. One of them is Laura Willoughby. She’s the director of partnerships at Club Soda, a shop and bar she co-founded that hosts many 16th-, 17th-, and 18th-birthday parties in London, where the legal drinking age is 18. “Once you take alcohol out of beer,” Willoughby told me, “it’s got four ingredients, no sugar, it’s hydrating, and it’s full of vitamin B-12. Aside from water, it’s the healthiest thing you can drink in the pub.” But she, like Theodorow, won’t offer anyone under the legal drinking age a nonalcoholic beverage made by a brand that also sells alcohol.

Both abroad and in the U.S., these conversations are rooted in old questions about the “right age” and way to introduce young people to alcohol: Should it be done gradually throughout childhood, or all at once at 21? Research has yet to provide a clear answer, let alone one that applies to NoLos too. A few international studies have shown that, for young people, consuming NoLos is associated with drinking real alcohol, but the cultural role of alcohol varies greatly around the world. Some early evidence from Europe suggests that NoLos can worsen existing substance cravings in adults with alcohol-use disorder, but the zero-proof community is also full of people—including Silverman—who credit the drinks with helping them maintain sobriety. The answer will probably never be clear-cut. Molly Bowdrig, a clinical psychologist and postdoctoral scholar at the Stanford Prevention Research Center, just wrapped up one of the first-ever studies of U.S. consumers of nonalcoholic beverages; her strongest finding was that the way NA beverages change people’s relationships with alcohol is nuanced and varied. (Her research has yet to be peer-reviewed and published.)

Without a firm consensus, Willoughby and other shop and bar owners told me that they often err on the side of caution and let parents make decisions about what their underage kids can drink. But even for parents deeply enmeshed in the NA industry, the decision isn’t straightforward. Collins’s own daughter is 9, and even after months of running Hopscotch, he struggled to describe what he would or wouldn’t let her drink. When I asked him, he paused, then collected four cans from the fridges along a shop wall. In his house, a nonalcoholic Bee’s Knees would be for adults only, because it shares a name with a real cocktail and has just 15 percent juice. But a Fauxmosa, with 65 percent juice and a distinct mocktail name, is kid-friendly in his book. White Claw’s new nonalcoholic seltzers, though functionally the same product as LaCroix or Spindrift, would only get the okay from Collins if served to his daughter in a glass. (“Imagine my daughter going to school and telling her teacher, ‘Hey, I had a White Claw on Saturday,’” he says.) And he classified the last can, a seltzer flavored with hops, as an adult-only beverage “because of American culture,” in which the flavor of hops is closely associated with beer.

[Read: Canned cocktails give Millennials what they’ve always desired]

It was enough to make my head spin, even though the cocktails Collins mixed me when I arrived didn’t contain a drop of alcohol. His answers made sense, but others would have too. As long as these drinks exist in a liminal space in our culture, norms will grow and change in real time along with the kids subject to them. Maybe one day, we’ll look back to find that they’ve changed for alcohol too.

The legalization of cannabis in the United States—the biggest change in policy for an illegal substance since Prohibition ended—has been an unqualified success for approximately no one. True, the drug is widely available for commercial purchase, many marijuana-related charges have been dropped, and stoner culture has become more aligned with designer smoking paraphernalia featured on Goop than the bumbling spaciness of Cheech and Chong. But a significant part of the market is still underground, medical research is scant, and the aboveground market is not exactly thriving. Longtime marijuana activists are unhappy. Entrepreneurs are unhappy. So are people who buy weed, as well as those who think weed should never have been legal in the first place.

The country is now poised to change the legal status of a new class of psychoactive drugs: psychedelics. They have shown early promise in treating mental-health issues, though even some enthusiasts worry that easing regulations without a better understanding of these drugs could put patients at risk. Oregon is now licensing centers that can administer psilocybin, and MDMA is moving toward FDA approval. In some ways, the fate of legalized pot could be read as a cautionary tale for how these drugs, too, could be left in a limbo that hardly anyone really likes. But as psychedelics enter the mainstream—becoming decriminalized, state-legal, or even FDA-approved—they will be dispensed and sold differently from marijuana. The process is unlikely to please everyone, but if advocates, manufacturers, and lawmakers can keep psychedelics from falling into the same traps that marijuana did, they have a chance of creating an industry that serves at least some Americans well.

Despite its wide availability, cannabis is still illegal under federal law. Since 1970, it’s been listed as a Schedule I drug, meaning the government believes it carries a high likelihood of abuse and no medical value. Starting in the 1970s, states began passing laws lowering penalties for cannabis possession and allowing it for medical purposes and, eventually, recreational use. In 2013, U.S. Deputy Attorney General James M. Cole issued a memo declaring that the Justice Department would not strictly enforce federal marijuana laws in states that legalized it.

Americans were eager to buy marijuana as states launched legalization programs, but in most states, cannabis hasn’t proved to be a great legal business. Because cannabis is still illegal at the federal level, entrepreneurs don’t benefit from many of the tax breaks that other businesses do, meaning the industry essentially pays billions in extra taxes, according to an analysis by a cannabis-research firm. In most states, big corporations dominate the industry while small-business owners struggle to afford high licensing fees. One promise of legalization was its potential benefit to Black and Latino communities, who were disproportionately affected by the War on Drugs, but as of 2017, members of these communities accounted for single-digit percentages of state-sanctioned cannabis entrepreneurs. Because legal weed is sometimes more expensive than old-school dealer prices, plenty of people simply stick with the unregulated underground. In states like New York, the gap between what’s legal and what’s enforced has allowed illicit stores to pop up among the legitimate ones.

[Read: America loves its unregulated wellness chemicals]

That’s all in places where recreational use is legal. In the 14 states where only medical use is allowed, dispensaries must make a different calculus: They serve patients in need, but others treat them only as a loophole to access the drug—and are likely to go elsewhere if recreational access becomes available. In the 12 states where even medical marijuana is illegal, the only cannabis products legally available are a handful of medications that have been approved by the FDA, all for a narrow set of indications, such as vomiting among chemotherapy patients, AIDS-related anorexia, and certain kinds of seizures. Active clinical trials to study cannabis are few and far between, which means that the drug currently has no clear pathways to be federally approved for other indications.

Psychedelics are much further behind in their journey to wider availability, but on the surface, they appear poised to mimic marijuana. The drugs are undergoing a sea change in public sentiment; Aaron Rodgers has a trip story, and so does Michael Bublé. Half of state governments have introduced psychedelics-related bills, some of them to form working groups that will dig into the science of psychedelics, and some to launch full-blown psilocybin industries. Assuming that reform moves at a comparable speed to cannabis, researchers have estimated that 25 states could legalize psychedelics in the next decade or so. Voters in Oregon and Colorado have already passed ballot initiatives that opened the way for psilocybin industries in their states.

But instead of following marijuana’s trajectory for reform—first decriminalization, then medical legitimacy, followed by recreational use—psychedelics are barreling down all those avenues at once. State-regulated psilocybin centers are shaping up to be less like cannabis dispensaries and more like a hybrid of a psychotherapy office, a medical clinic, and a spa. Rules in the psilocybin industry are much more restrictive than those for cannabis ever were, because many people believe unlocking the possible benefits of these drugs requires more careful support and perceive the risks of a bad trip as higher. Whereas Oregonians over 21 can pop into a dispensary, flash an ID, and take their bud (or gummies, or soda, or cookies) home, adults seeking psilocybin must undergo a comprehensive mental- and physical-health screening, then ingest their dose under the supervision of a trained and licensed facilitator. Meanwhile, researchers and start-ups are pushing psychedelics forward on the medical front. We might see the first MDMA approval to treat PTSD as soon as August.

[Read: What if psychedelics’ hallucinations are just a side effect?]

This air of legitimacy—the idea that psychedelics could be more serious drugs than cannabis—gives drugs like MDMA and psilocybin certain advantages. Psychedelics research is booming in a way that marijuana research never really has. Few scientists have overcome the hurdles of studying cannabis; for instance, by most accounts, the government-grown weed that researchers have to use is less potent than what’s sold at dispensaries, and has been known to arrive in labs already moldy. And with the thriving black market, pharmaceutical companies have seemed uninterested in sponsoring cannabis clinical trials. But for psychedelics, which support much less illegal trade, start-ups and nonprofits are leaning hard into the clinical-trial route. Sue Sisley, a physician who has developed clinical trials for marijuana, says psychedelics have moved toward legalization so quickly in large part because they’re following Big Pharma’s traditional drug-development playbook.

In some ways, the FDA-forward approach is a gamble for people hoping to create a psychedelics industry. For one thing, Ariel Clark, an Anishinaabe cannabis and psychedelics attorney, worries that this pathway could put the drugs firmly out of (legal) reach for the people who could most benefit from the protections that come with legalization, including the Indigenous communities that already use some psychedelics in traditional ceremonies. Currently, a single psilocybin session at an Oregon service center costs thousands; companies that receive FDA approval could possibly further inflate the prices of approved psychedelics and treatment. (That has already happened with ketamine.) Additionally, FDA approval of proprietary psychedelics might still leave growers paying the same high taxes that have caused small-cannabis-business owners to fold. “Farmers already have really low margins,” and taxes don’t help, says Hadas Alterman, a partner at Plant Medicine Law Group and a member of Oregon’s Psilocybin Advisory Board’s equity subcommittee.

But even in a scenario that leaves small-time entrepreneurs in the dust, some advocates argue that medicalization would be a net good. Medicalization might not lead the United States to a hallucinogenic utopia, but compared with underground use, it very well could increase access to psychedelics, improve patient safety, and build a functional (if inequitable) industry. Psychedelics manufacturers and people with the means to afford their pricey treatments might be perfectly happy with that outcome.

[Read: Psychedelics open your brain. You might not like what falls in.]

Psychedelics are still in uncharted territory in the U.S. Even though start-ups are investing millions on the assumption that they’ll be approved as medicines, and state-run programs are charging ahead with developing their own psychedelic industries, research on the drugs is still in early days, and their exact path to legalization remains murky. But at the very least, lawmakers have a great model of what not to do. “Folks sitting in government saw what happened with cannabis,” Clark told me. One can only assume that they don’t want to repeat the experience.

Within the next couple of decades, a new generation of contraceptives could hit the American market. One, a pill that blocks certain cells from accessing vitamin A, might be able to limit fertility without flooding the body with hormones; another is an injection that temporarily blocks up the reproductive plumbing. The method that’s furthest along in trials is a topical gel that promises to induce temporary infertility when smeared daily on the shoulders and upper arms—without affecting mood or libido. “Overall, we don’t have any serious adverse events at all,” Christina Wang, a contraceptive researcher at the Lundquist Institute in California, and one of the developers of the gel, told me.

This coming slew of treatments will be notable not only for their imaginative delivery methods, but for their target audience: men. For decades, men hoping to manage their fertility have been limited to just two imperfect options—condoms or vasectomies. But in recent years, researchers have taken massive steps toward developing simple, convenient, and effective contraceptive options for men with minimal side effects. Soon, women may not be forced to bear nearly the entire burden of preventing pregnancy.

But the coming innovations won’t just be about expanding the menu of options for men. Better male contraception wouldn’t be on the way if not for the many scientific paths that female contraception has paved. Now women’s birth control—much of which still comes with plenty of irksome, sometimes risky, side effects—seems due for some returns on that investment. True, the logistics of keeping an egg from exiting an ovary don’t completely overlap with the mechanics of keeping sperm out of the female reproductive tract. But in principle, “there are a lot of similarities,” Diana Blithe, the chief of the NIH’s Contraceptive Development Program, told me, which means one can easily inform the other. With an eye on what’s now being accomplished for male contraception, researchers may soon be able to deliver to women new forms of birth control that aren’t just more tolerable, but also more on-demand, less invasive, or even usable on male and female reproductive systems alike.

In the six-plus decades since the debut of the birth-control pill, the list of contraceptive choices for women has lengthened impressively. People can opt for barrier methods, or choose among pills, patches, and implants; they can receive injections a few times a year, or select an intrauterine device that can last up to 10 years. “We have so many options, it’s almost like you’re in a cereal aisle,” Amy Alspaugh, a nurse and reproductive-health researcher at the University of Tennessee at Knoxville, told me.

[Read: The calendar of human fertility is changing]

Many methods are also ever-improving: IUDs, implants, and injections now have longer lifespans and are easier to insert and remove; doses of hormonal contraception have dramatically decreased. “We used to give basically like a horse dose of estrogen and progestin,” Alspaugh told me. “Now we give the lowest dose that we know we can give and still have it be effective,” in order to minimize side effects. Some researchers have been exploring new ways to deliver contraception—microneedles, for instance, or even microchip technology that might allow women to remotely tune their birth control. (The latter idea has raised privacy concerns galore.) The Population Council, an NGO based in New York, has been working on a multipurpose vaginal ring that will, in addition to preventing pregnancy, release an antiviral to protect women against HIV, Régine Sitruk-Ware, a reproductive endocrinologist and contraception researcher at the nonprofit, told me.

By and large, though, the changes to female contraception have been incremental—more ingredient swaps than whole new recipes. Categorically, “we’ve had the same offerings for pretty much 30 years now,” Heather Vahdat, the executive director of the Male Contraceptive Initiative, told me. And plenty of women remain dissatisfied with the inconveniences and risks that come with the choices at hand. Some experience weight gain, acne, or nasty mood swings, or worry about the risk of stroke that can come with hormone-based pills. Others balk at the often-painful placement process for IUDs. Manually inserting a device into the lower abdomen probably wouldn’t be acceptable in other contexts without anesthesia, and yet, for women’s contraception, “we’ve socialized that into something acceptable,” Brian Nguyen, an ob-gyn and contraception researcher at the University of Southern California, told me. Nonhormonal methods such as condoms, diaphragms, and spermicide are easy to come by, but generally less effective than hormonal ones. They can also come with their own side effects. And women could certainly benefit from a greater variety of on-demand methods, Vahdat told me—contraception for when you have sex, “not just in case you have sex”—that would save them the trouble of weathering side effects all month, year, or decade long.

Over the years, some researchers have argued that significant inconveniences and side effects are acceptable for female contraception. Women, after all, are weighing those costs against pregnancy, itself a risky condition that can come with life-threatening complications; men, meanwhile, take contraception to prevent pregnancy in someone else. I asked Vahdat whether the typical side-effect profile of currently available female contraceptives would pass muster in any of the male methods in trials. “Based on history,” she told me, “I think that it would not.” Several other experts agreed. In 2011, a worldwide trial for an injectable hormonal contraceptive for men was halted when an independent safety-review committee determined that the drug’s side effects “outweighed the potential benefits.” The side effects in question included mood swings and depression, both of which are frequently experienced by women on birth control. And yet, most of the participants who stuck with the study said that they wanted to keep using the injection. In recent years, Nguyen has heard more and more of the men in contraceptive trials cite their female partners’ negative experiences with birth control as reason for their participation. “Many think of risk to their partner as a risk themselves,” he said.

[Read: The different stakes of male and female birth control]

Still, the strict standards for the tolerability of male birth control could raise the floor for female methods too. Such crossover advances are already in the works. Researchers took care to formulate the topical contraceptive for men with a dose of natural testosterone, alongside progestin, the active ingredient that halts sperm production; the idea, experts told me, is to better recapitulate what’s naturally seen in men’s bodies, to minimize unnecessary side effects. Many female hormonal contraceptives, meanwhile, rely on a synthetic compound called ethinylestradiol that incompletely mimics the estrogen women’s bodies make—and appears to raise the risk of blood clots. The Population Council is now working on another vaginal ring that replaces ethinylestradiol with hormones better matched to female biology.

Other conveniences may be trickier to translate. For example, researchers hope to someday offer men a more easily reversible vasectomy, in which a dissolvable or removable hydrogel is inserted into the vas deferens. But experts told me that temporarily stopping up the fallopian tubes is simply harder. Plus, whereas sperm are churned out constantly, eggs are released for fertilization on a cycle that can be tricky to measure and predict—which can make side effects frustratingly tough to control too, Nguyen told me. Targeted interventions are also more easily delivered to the testes than the ovaries. And their success is easier to verify: Men have long been able to check their own sperm count with a device that’s similar to an at-home COVID test, but no parallel exists for women, Wang told me. And because sperm take months to produce, male hormonal contraceptives might be more forgiving to users who miss a day of treatment—unlike many pills designed for women, which tend to be less flexible, Mitchell Creinin, a contraceptive researcher at UC Davis Health, told me.

[Read: A vaccine for birth control?]

The difficulty of wrangling eggs, though, doesn’t have to mean limiting options for women. Conception can’t happen unless egg and sperm actually meet—which means that just about any drug designed to waylay the functionality or motility of sperm could play a role in the female reproductive tract. The options go way beyond spermicide: The Population Council is working on a product that will modify the vagina’s acidity to stop sperm from swimming properly, Sitruk-Ware told me. And Deborah Anderson, an immunologist and reproductive-health researcher at Boston University, has been working on a dissolvable film, imbued with sperm-blocking antibodies, that can be placed into the vagina before sex; the antibodies seem to persist at high enough levels to provide contraception for 24 to 48 hours, she told me. A couple of drugs being trialed for men could even someday be marketed to women in some form—among them, a sperm-motility-blocking drug that, experts told me, might be deployable in the female reproductive tract too.

With all the attention now being paid to men’s contraceptive preferences, some researchers worry that women’s needs will fall even further by the wayside. Jeffrey Jensen, a contraception researcher at Oregon Health & Science University, told me that even as grants for male methods continue to be green-lighted, his team has had to pause work on some female-contraceptive projects because of lack of funding in recent years. “Policy makers think that we’ve checked the box and that we can move on,” he said. And Sitruk-Ware said that, although researchers had at one point started developing a topical contraceptive gel for women, “donors were more interested in the gel for men.”

Still, the arrival of male contraception is unlikely to dampen women’s enthusiasm for using their own methods, Allison Merz, an ob-gyn at UC San Francisco, told me. If anything, when those ultrasafe, ultra-effective products for men come to market, they’ll ignite more discussions over female contraception—and inspire more questions about why convenience and tolerability weren’t prioritized for women from the start.

A man comes to Northwell Health’s hospital on Staten Island with a sprained ankle. Any allergies? the doctor asks. How many alcoholic drinks do you have each week? Do you have access to firearms inside or outside the home? When the patient answers yes to that last question, someone from his care team explains that locking up the firearm can make his home safer. She offers him a gun lock and a pamphlet with information on secure storage and firearm-safety classes. And all of this happens during the visit about his ankle.

Northwell Health is part of a growing movement of health-care providers that want to talk with patients about guns like they would diet, exercise, or sex—treating firearm injury as a public-health issue. In the past few years, the White House has declared firearm injury an epidemic, and the CDC and National Institutes of Health have begun offering grants for prevention research. Meanwhile, dozens of medical societies agree that gun injury is a public-health crisis and that health-care providers have to help stop it.

Asking patients about access to firearms and counseling them toward responsible storage could be one part of that. “It’s the same way that we encourage people to wear seat belts and not drive while intoxicated, to exercise,” Emmy Betz, an emergency-medicine physician and the director of the University of Colorado’s Firearm Injury Prevention Initiative, told me. An unsecured gun could be accessible to a child, someone with dementia, or a person with violent intent—and may increase the chance of suicide or accidental injury in the home. Securely storing a gun is fundamental to the National Rifle Association’s safety rules, but as of 2016, only about half of firearm owners reported doing so for all of their guns.

Some evidence shows that when health-care workers counsel patients and give them a locking device, it leads to safer storage habits. Doctors are now trying to figure out the best way to broach the conversation. Physicians talk about sex, drugs, and even (if your earbuds are too loud) rock and roll. But to many firearm owners, guns are different.

Not so long ago, powerful physicians argued that if guns were causing so much harm, people should just quit them. In the 1990s, the director of the CDC’s injury center said that a public-health approach to firearm injury would mean rebranding guns as a dangerous vice, like cigarettes. “It used to be that smoking was a glamor symbol—cool, sexy, macho,” he told The New York Times in 1994. “Now it is dirty, deadly—and banned.” In the 2010s, the American Academy of Pediatrics’ advice was to “NEVER” have a gun in the home, because the presence of one increased a child’s risk of suicide or injury so greatly. (“Do not purchase a gun,” the group warned bluntly.) And when asked in 2016 whom they would go to for safe-storage advice, firearm owners ranked physicians second to last, above only celebrities.

In the past couple of decades, some states have toyed with laws that curtail doctors’ ability to talk with patients about firearms and the information they can collect, to assuage gun owners’ privacy concerns. Only in Florida did the most restrictive version—what physicians call a “gag law”—pass, in 2011; six years later, a federal court struck it down. But “I think the gag orders, even though they’re not in effect now, really scared people,” Amy Barnhorst, an emergency psychiatrist and firearm-injury-prevention researcher at UC Davis, told me. A smattering of studies have found that doctors—particularly pediatricians—generally think talking with their patients about firearm safety is important, but most of the time, they’re not doing it. As of 2019, only 8 percent of firearm owners said their doctor had ever brought it up.

That year, in California, Barnhorst launched the state-funded BulletPoints Project, a free curriculum that teaches health-care workers how and when to talk about firearms with their patients. The program instructs them to keep politics and personal opinions out of the conversation, and to ask only those patients who have particular reasons for extra caution—including people with children, those experiencing domestic violence, or those living with someone with a cognitive impairment. It also suggests more realistic advice than “Do not purchase a gun.” Maybe a patient has a firearm for self-defense (the most common reason to have one), so they’d balk at the idea of storing a gun unloaded and locked, with the ammunition separate. A health-care worker might recommend a quick-access lockbox instead.

Researchers are now testing whether these firearm conversations have the best outcome if doctors broach them only when there’s a clear reason or if they do it with every patient. Johns Hopkins is trialing a targeted approach, talking about firearms and offering gun locks in cases where pediatric patients have traumatic injuries. Meanwhile, Northwell Health, which is New York State’s largest health system, asks everyone who comes into select ERs about gun access and offers locks to those who might need them. Both of these efforts are federally funded studies testing whether doctors feel confident enough to actually talk with patients about this, and whether those conversations lead people to store their firearms more securely.

For doctors, universal screening means “there’s no decision point of who you’re going to ask or when you’re going to ask,” Sandeep Kapoor, an assistant professor of emergency medicine who is helping implement the program at Northwell Health, told me. So far, Northwell’s trial has screened about 45,000 patients, which signals that the approach can be scaled up. Kapoor told me that with this strategy, gun-safety conversations could eventually become as routine for patients as having their blood pressure taken. When she was in primary pediatrics, Katherine Hoops, a core faculty member at Johns Hopkins’s Center for Gun Violence Solutions, worked firearm safety into every checkup, as she would bike helmets and seat belts. (The American Academy of Pediatrics still maintains that the safest home for a child is one without a gun, but the organization now recommends that pediatricians talk about secure storage with every family, and offers a curriculum on how to have this conversation.) Universal screening can also find people whom a targeted approach might miss: The team at Northwell recently learned through screening questions that a 13-year-old who came in with appendicitis had been threatened with guns by bullies, and brought in his parents, a team of social workers, and the school to help.

But a patient in the ER for a sprained ankle may understandably wonder why a doctor is asking about firearms. “There’s no context,” Chris Barsotti, an emergency-medicine physician and a co-founder of the American Foundation for Firearm Injury Reduction in Medicine, told me. The firearm community, he said, remembers when “the CDC wanted to stigmatize gun ownership,” so any movement for health-care workers to raise these questions needs nuance. To his mind, these should be tailored conversations. Betz, of the University of Colorado, raises the question only when a patient is at risk, and believes that firearm safety can otherwise be in the background of a practice—for example, in a waiting room where secure-storage brochures are displayed alongside pamphlets on safe sex and posters on diabetes prevention.

About half of firearm-owning patients agree that it’s sometimes appropriate for a doctor to talk with them about firearms, according to a 2016 study by Betz and her colleagues. They’re even more okay with it if they have a child at home. The physicians I asked said that the majority of the time, these conversations go smoothly. But Betz’s study also found that 45 percent of firearm-owning patients thought doctors should never bring up guns. Paul Hsieh, a radiologist and a co-founder of the group Freedom and Individual Rights in Medicine, wrote in an email that gun owners he’s spoken with “find the question about firearms ownership intrusive in a different way than questions about substance use or sexual partners.”

Chethan Sathya, a pediatric trauma surgeon and the director of Northwell Health’s Center for Gun Violence Prevention, pointed out that those topics used to be contentious for physicians to talk about. To treat guns as a public-health issue, “we can’t be uncomfortable having conversations,” he told me. But doctors have more power in this situation than they do in others. They might tell someone with diabetes to stop having soda three times a day, but they can’t literally take soda away from a patient. With guns, they might be able to. In states with extreme-risk laws, if a health-care provider believes that their patient poses an immediate threat to themselves or others, they can work with law enforcement to petition the court to temporarily remove someone’s firearms; a handful of states allow medical professionals to file these petitions directly. There are many people “across America right now who own guns and won’t come to counseling, because they don’t want their rights taken away for real or imagined reasons,” Jake Wiskerchen, a mental-health counselor in Nevada who advocates for such patients, told me. They worry that if their doctor includes gun-ownership status in their medical record, they could be added to a hypothetical national registry of firearm owners. And if questions about guns were to become truly routine in a doctor’s office—such as on an intake form—he said owners might just lie or decide they “don’t want to go to the doctor anymore.”

Physicians accordingly choose their words carefully. They talk about preventing firearm injury instead of gun violence—both because the majority of gun deaths are suicides, not homicides, and because it’s a less loaded term. Telling a diabetic patient to cut back on soda might work, but people “are not just going to throw their guns in the trash,” Barnhorst, of UC Davis, told me. “There’s a lot more psychological meaning behind firearms for people than there is for sodas.”

Barsotti says a public-health approach to firearm safety requires more engagement with the upwards of 30 percent of American adults who own a firearm. Owners of shooting ranges and gun shops are already “practicing public health without the benefit of medical or public-health expertise,” he told me. They’re running their own storage programs for community members who don’t want their guns around for whatever reason; they’re bringing their friends for mental-health treatment when they might be at risk. Betz’s team collaborated with gun shops, shooting ranges, and law-enforcement agencies in Colorado to create a firearms-storage map of sites willing to hold guns temporarily, and she counsels gun clubs on suicide prevention, as a co-founder of the Colorado Firearm Safety Coalition. Exam-room conversations can be lifesaving, but in curbing gun injury, Betz told me, health-care workers “have one role to play. We’re not the solution.”

When a new RSV vaccine for pregnant people arrived last fall, Sarah Turner, a family-medicine physician at Lutheran Hospital, in Indiana, couldn’t help but expect some pushback. At most, about half of her eligible pregnant patients opt to get a flu vaccine, she told me, and “very few” agree to the COVID shot.

But to Turner’s surprise, patients clamored for the RSV shot—some opting in even more eagerly than they did for Tdap, which protects newborns against pertussis and had previously been her easiest sell. For once, expectant parents were the ones starting conversations about immunizations.

Each year in the U.S., respiratory syncytial virus hospitalizes an estimated 58,000 to 80,000 kids under the age of 5; the risks are highest for infants, for whom the virus is “enemy No. 1,” says Sallie Permar, an immunologist and pediatrician in chief at NewYork-Presbyterian/Weill Cornell Medical Center. But this past season marked the first time that the U.S. had two tools that could substantially shrink that toll: a vaccine for pregnant people, who can then pass antibodies to their child, and a monoclonal antibody, known as nirsevimab, that is given directly to infants. Their arrival feels akin “to the end of polio,” Anne-Marie Rick, a pediatrician and clinical researcher at Children’s Hospital of Pittsburgh, told me: With both shots in widespread use, the risks of winter illness could forever look different for the youngest Americans.

But some experts worry that these powerful shots are being squandered. The CDC’s seasonal recommendations governing their use may simply be “too strict,” Permar told me. In keeping with those guidelines, many practices stopped giving the maternal vaccine at the end of January; the main window for administering the monoclonal antibody is expected to close at the end of this week. The next eligibility windows won’t open for months. The U.S. has two brand-new shots that protect extraordinarily well against a deadly respiratory virus—and that people actually want to take—and it is holding them back.

The guiding principle behind the CDC’s recommendations has logic to it. RSV is a seasonal virus, and both injections are thought to offer protection for about six months. For the maternal vaccine, which is administered between 32 and 36 weeks of pregnancy, the clock on the baby’s protection starts after birth. So if a pregnant person gets the vaccine in June—outside of the CDC’s recommended window—and has their baby in July, their child may be vulnerable again come February, before RSV season typically ends. In theory, spring and summer infants might be better protected by getting nirsevimab starting in October, when RSV usually arrives. Current guidelines also require a choice between the two options: Most infants that benefit from maternal vaccination are not eligible to also receive nirsevimab.

[Read: The worst pediatric-care crisis in decades]

This past season, though, nirsevimab was in severely short supply—in large part because drug companies seem to have underestimated demand, William J. Muller, a pediatric-infectious-disease expert at Northwestern who helped trial the monoclonal antibody, told me. Many hospital systems also balked at the cost of the new drug, which is pricier than the maternal vaccine, wasn’t yet bundled into the expense of delivering infants, and wasn’t consistently covered by insurance. The shortfalls became so dire that Sanofi, nirsevimab’s manufacturer, stopped taking new orders for certain doses of the monoclonal antibody as early as October. The CDC issued a health alert, calling on providers to restrict administration of those doses to only the highest-risk infants. “In our hospital system, we had some for the NICU babies, and that was literally it,” Turner, of Lutheran Hospital, told me.

Nirsevimab should be more available this year: Spokespeople from AstraZeneca and Sanofi told me that the companies are “confident we will meet the global demand” for the antibody in 2024. But last year set quite a low bar. And when the window for administration opens in October—potentially already coinciding with RSV’s rise—supplies could go fast, as parents who haven’t been able to get shots for themselves or their babies rush to catch up, Grace Lee, a pediatrician at Stanford, who advised the CDC on its RSV guidance, told me. (The CDC did not respond to a request for comment.) Opening the administration window earlier for either the vaccine or the monoclonal antibody could ease that burden: The U.S. starts immunizing people against the flu well ahead of the season’s start, Lee said, because “it’s just not feasible to vaccinate the entire U.S. population in a week.”

For several years, too, “RSV has been on the move,” Permar told me, thanks in large part to pandemic mitigations. The virus was virtually absent in 2020, only to come roaring back for a bizarrely early season that began during the summer of 2021 and had mostly concluded by the end of January 2022. In the past two seasons, the virus has also arrived somewhat early, starting with a September rise. If that pattern holds, waiting until September to vaccinate pregnant people or until October to immunize infants might leave many newborns more vulnerable than they need to be for weeks or months.

Many experts do anticipate that RSV’s pattern will quickly settle back to its norm. Over the decades, its consistency “has been remarkable,” says Sarah Long, a pediatrician at Drexel University who advised the CDC on its guidelines for both new RSV interventions. But even in more predictable years, RSV transmission varies across regions—sometimes kick-starting during the summer in the South and lingering until spring further north. The recommendations “can’t be a one-size-fits-all” across the U.S., says Shabir Madhi, a vaccinologist at the University of the Witwatersrand, in South Africa, who helped lead clinical trials of the maternal vaccine. These are judgment calls: France opens its nirsevimab window earlier than the U.S.; Belgium will allow some pregnant people to receive a vaccine as early as the spring. The U.K. is weighing whether to offer both injections at any time of year.

One argument for the current seasonal window is that giving a vaccine or a monoclonal antibody injection too early might mean recipients miss out on protection at the end of the season, Karen Acker, a pediatrician at Weill Cornell, told me. But Permar and others are hopeful that the effects of the new RSV interventions might last longer than five or six months, which is about when clinical trials stopped directly testing their effects. Early data for nirsevimab, for instance, suggests that a little bit of protection may even trickle into subsequent seasons, Muller told me.

[Read: Fall’s vaccination routine didn’t have to be this hard]

RSV also poses the greatest threat to children within the first few months of life, when their respiratory tracts are still tiny and developing. Given the choice between offering the maternal vaccine a little early—which could leave an older infant a bit more vulnerable at the season’s end—and waiting to administer nirsevimab to a young infant after RSV season has started, the former might actually be the safer strategy. Plus, summer babies who don’t get nirsevimab at the hospital are less likely to get it later, especially if their parents aren’t regularly taking them to see a pediatrician. Giving a shot on the early side is better than never giving one at all, Joshua Salomon, a health-policy researcher at Stanford, told me.

In theory, the CDC’s guidelines do make room for adjustments in administration windows, in accordance with local RSV trends. But those decisions can be difficult to execute when providers have to place orders ahead of time and store vials in limited space. So far, many doctors’ offices and hospitals have stuck to the months outlined by the CDC guidance. “The cutoff dates have been taken very dogmatically,” Rick told me. At the start of the past season, infants just one day over the recommended dosing age of eight months or younger were denied nirsevimab, Turner told me. Then a lot of providers stopped offering the maternal vaccine after January 31, or simply ran out.

When both the need and the enthusiasm for a vaccine or drug are strong, taking every opportunity for protection makes sense. Several experts I talked with supported wider windows; Permar thinks the U.S. should even consider offering the maternal vaccine year-round. To her mind, restrictions regarding both seasonality and gestational age too strongly limit the chances that a baby will be protected. Some providers also noted that, given all the uncertainties, they would recommend the maternal vaccine as primary defense, leaving nirsevimab as the backup—simply because the vaccine can be delivered first. A maternal shot can set babies up with protection from the moment of birth, a sort of insurance policy that can guard against nirsevimab supply or delivery issues. A wider window of vaccine eligibility might not be a perfect solution. But it could get more infants protected when they most need it—putting to best use a shot that people are actually willing to get.

For something that isn’t candy, Zyn nicotine pouches sure look a lot like it. The packaging, a small metal can, looks more than a little like a tin of mints. The pouches come in a wide variety of flavors: citrus, cinnamon, “chill,” “smooth.” And they’re consumed orally, more like jawbreakers or Warheads than cigarettes.

America has found itself in the beginnings of a Zyn panic. As cigarette and vape use have trailed off in recent years, Zyn and other nicotine pouches are gaining traction. The absolute pouch-usage numbers are still not that high, but sales more than quadrupled between late 2019 and early 2022. Although only adults 21 and older can legally purchase them—a fact that the product’s website directly points out—they are reportedly catching on with teens. “I’m delivering a warning to parents,” Senate Majority Leader Chuck Schumer said in January, calling for a crackdown, “because these nicotine pouches seem to lock their sights on young kids.” Earlier this month, a group of plaintiffs filed a class-action lawsuit accusing the tobacco giant Philip Morris International (PMI), which makes Zyn, of purposefully targeting kids. (“We believe the complaints are without merit and will be vigorously defended,” a PMI spokesperson told me over email, adding that Zyn offers “adult-orientated flavors.”)

On their surface, nicotine pouches seem to be a fad like any other, but they are the end result of a century of nicotine marketing and development that began with cigarettes and has now moved beyond. “It’s basically part of the long history of the candification of nicotine,” Robert Proctor, a Stanford historian who has written multiple books on tobacco, told me. Over the years, the tobacco industry has gradually introduced more and more products flavored and packaged like sweet treats. Now, with Zyn, the industry has finally devised a near-perfect one.

[Read: The easy way for Joe Biden to save lives]

Once, nicotine wanted to be the opposite of candy. In the 1920s, weight loss—or “reducing,” as it was then known—became a major craze, and the tobacco industry moved to market its products as a healthier alternative to candy. “Reach for a Lucky instead of a sweet!” read one ad. Candymakers were understandably aggrieved about this slight, but the industries made nice in 1930, when Lucky’s maker dropped “instead of a sweet!” from its slogan. Candy and cigarettes had commonalities. Both relied on sugar—candy because, well, obviously, and cigarettes to cut the bitter taste of tobacco. Both were placed near the checkout register, to encourage impulse purchases. Soon, the makers of both products established joint trade journals and labor unions, at least one of which exists to this day. (Yes, some of the workers who make Ghirardelli chocolate and Marlboro cigarettes are represented by the same union.)   

Around this time, tobacco companies warmed up to the potential of cigarettes made out of chocolate, bubblegum, or pure sugar. Candy cigarettes, they seem to have realized, were free advertising, a gateway for kids into the world of smoking. (“Just Like Daddy!” read the slogan on one brand’s boxes.) The more similar the candy replicas looked to the real deal, the better. By the 1950s, most of the top cigarette brands—Lucky Strike, Chesterfield, Philip Morris—had their candy equivalent made by other companies, with packaging that very closely matched the real thing.

This was roughly the equivalent of a modern apple-juice maker packaging its product in a Jack Daniel’s bottle. These tobacco companies claim never to have encouraged this, but as Proctor details in his 2011 book, Golden Holocaust, they did nothing to discourage it either. The goal, he writes, was to “create Philip Morris in the minds of our future smokers.” (That Philip Morris is not technically the same company as today’s Philip Morris International; it has since rebranded and then split apart.)

Over time, cigarettes themselves became more and more candylike—and the government responded by cracking down. Menthol cigarettes went big in the 1950s and ’60s, and starting in the ’70s, companies introduced a wider range of even more candylike flavors: chocolate, strawberry, Twista Lime, Warm Winter Toffee. Flavored cigarettes were eventually banned in 2009—with the exception of menthol—because of their disproportionate popularity among kids. But flavored e-cigarettes such as Juul took their place just a few years later and quickly became the most popular tobacco product among American youth—until they, too, were mostly banned in 2020.

Now, with products such as Zyn, the candification of nicotine is pretty much complete. Pouches don’t just taste like candy; they’re also packaged like candy and consumed like candy (don’t swallow them, though). Proctor told me he’s talked with people for hours before realizing they had a nicotine pouch in their mouth. “It’s the ultimate merger of two of the leading hazards of modernity,” he said. Other companies such as Velo and Lucy are selling nicotine pouches too. Lucy even calls one of its special pouch lines “Breakers” (which sounds suspiciously close to Icebreakers, though a spokesperson for the company told me in an email, “They are in no way intended to resemble ice breakers the mints or any other type of candy”). And it’s not just pouches: Nicotine chewing gum and lozenges have become available in a wide variety of flavors and are packaged in candy-colored pastels. Nicotine gummies have been on the rise as well.

[Read: Gummy vitamins are just candy]

Unlike with cigarette-shaped candies or candy-flavored cigarettes, both of which were uncomplicatedly bad, there actually is a legitimate, good-faith argument to be had about the merits of Zyn and similar flavored products. On the one hand, they do not contain tobacco and are not smoked, which is largely what makes cigarettes so deadly. The tobacco industry has positioned these products as a way for adults to wean themselves off of cigarettes, and they sure seem to be much safer than cigarettes, which kill more than 480,000 Americans each year—more than the combined toll of COVID and car crashes in 2021. So the more people popping flavored pouches or gummies rather than smoking cigarettes, the better. On the other hand, they are addictive, and flavored products have been shown to play a major role in hooking kids. The PMI spokesperson told me, “If you’re worried about your health, the best thing is to never start using nicotine or”—if you already do—“stop using it.”

Whether the increase in the number of kids using nicotine is worth the decrease in the number of adults using cigarettes is hotly debated. There’s a dark irony to the fact that we’re having this debate at all. A hundred years ago, tobacco companies invoked the idea, if not the specific language, of harm reduction when they marketed their cigarettes as a healthy alternative to candy. Now they’re making their own nicotine products more candylike and marketing them as a healthy alternative to cigarettes. The harm reducer has become the harm to be reduced.

After all this, flavored nicotine pouches might end up banned, just like flavored cigarettes and vapes before them. But in the cat-and-mouse game that the tobacco industry has been playing with regulators, Zyn may have a better chance of persisting than anything before it.
