There’s no such thing as a miracle cure for weight loss, but the latest obesity drugs seem to come pretty close. People who take Ozempic or other weekly shots belonging to a class known as GLP-1 agonists, named for the gut hormone they mimic, can lose a fifth or more of their body weight in a year. Incessant “food noise” fueling the urge to eat suddenly goes silent.

In recent months, the mystique of these drugs has only grown. Both semaglutide (sold under the brand names Ozempic and Wegovy) and tirzepatide (Mounjaro and Zepbound) were initially developed for diabetes and then repurposed for weight loss. But they apparently can do so much more than that. Studies showing the heart benefits of semaglutide have already led the FDA to approve Wegovy as a way to reduce the risk of major cardiac events, including stroke, heart attack, and death, in certain patients. The drug has also shown clear benefits for sleep apnea, kidney disease, liver disease—and can potentially help with fertility issues, Alzheimer’s, Parkinson’s, colorectal cancer, alcohol overuse, and even nail-biting. These days, a new use for GLP-1s seems to emerge every week.

With each new breakthrough, GLP-1s look more and more like the Swiss Army knife of medications. As Vox asked last year: “Is there anything Ozempic can’t do?” But GLP-1s can’t take all the credit. Obesity is linked to so many ailments that losing huge amounts of weight from these drugs is destined to have “a pretty dominant effect” on health outcomes, Randy Seeley, an obesity researcher at the University of Michigan, told me. Teasing out exactly what is causing these secondary benefits will be difficult. But the future of these drugs may hinge on it.

[Read: The future of obesity drugs just got way more real]

Some of the additional health effects of GLP-1s do seem in line with a drug that can lead to dramatic weight loss. People with obesity are at a much higher risk for heart attacks and liver disease; excessive weight can restrict breathing at night, leading to sleep apnea. Of course obesity drugs would help. Even reports of “Ozempic babies”—people unexpectedly conceiving while on GLP-1s—make sense considering that fertility tends to improve when people lose weight. But weight loss isn’t always the whole explanation. A major trial tracking the heart health of people on semaglutide suggested that patients can have cardiovascular improvements even if they don’t lose much weight. “It is quite clear that there are benefits to these drugs that are beyond weight loss,” Seeley said.

GLP-1s improve health outcomes through three mechanisms, Daniel Drucker, a professor of medicine at the University of Toronto who co-discovered GLP-1 in the 1980s, told me. (Both Drucker and Seeley have consulted with GLP-1 manufacturers, as have many prominent obesity researchers.) The first mechanism involves the drugs’ main functions: controlling blood sugar and inducing weight loss. The drugs’ ability to coax the pancreas into secreting insulin is what led to their development for diabetes. Weight loss mostly happens through a separate process affecting the brain and gut that prompts a waning appetite and a lingering feeling of fullness. Disentangling the two is difficult because high blood sugar can lead to weight gain and is linked to many of the same chronic illnesses as obesity, including heart disease and cancer. The significant reductions in the risk of cardiovascular disease and death from chronic kidney disease seen in people on GLP-1 drugs “certainly reflect” both changes in blood sugar and weight, Drucker said.

A second mechanism that could explain some of these health effects is that the drugs act directly on certain organs. GLP-1 receptors exist on tissues all over the body: throughout the lungs, kidneys, cardiovascular system, gut, skin, and central nervous system. The drugs’ heart benefits, for example, might involve GLP-1 receptors in the heart and blood vessels, Steven Heymsfield, a professor who studies obesity at Louisiana State University, told me.

Beyond affecting individual organs, GLP-1s likely spur wide-ranging effects across the body through a third, more generalized process: reducing inflammation. Chronic diseases associated with obesity and diabetes, such as liver, kidney, and cardiovascular disease, are “all driven in part by increased inflammation,” which GLP-1s can help reduce, Drucker said.

In some situations, these mechanisms may work hand in hand, as in the case of Alzheimer’s. An older GLP-1 drug called liraglutide has shown potential as a treatment for Alzheimer’s, and semaglutide’s effect on early stages of the disease is being tested in a Phase 3 trial. The brain is littered with GLP-1 receptors, inflammation is known to be a central driver of neurodegeneration, and losing weight and having lower blood sugar “will probably help reduce the rate of cognitive decline,” Drucker said.

More complex effects will be harder to disentangle. The drugs are thought to curb addictive and compulsive behaviors, such as alcohol overuse, impulse shopping, and gambling. In animals, GLP-1s have been shown to affect the brain’s reward circuitry—a handy explanation, but perhaps an overly simplistic one. “Reward isn’t just one thing,” Seeley said. The mechanism that makes eating rewarding may differ slightly from that of smoking or gambling. If that’s the case, it wouldn’t make sense for a single drug to tamp down all of those behaviors.

[Read: Did scientists accidentally invent an anti-addiction drug?]

Still other benefits of GLP-1s have yet to be explained. In a large study of people with diabetes published in February, those who took GLP-1s had a lower risk of colorectal cancer than those who didn’t—and weight didn’t seem to be a factor. One possible explanation for the link is that the drugs reduce inflammation that could lead to cancer. Yet recent research in mice suggests that blocking the GLP-1 receptor—that is, doing the opposite of what the drugs do—is what triggers the immune system to fight colorectal cancer.

Some of the ancillary effects being observed now will prove to be legitimate; others won’t. “This happens every time we discover a new molecule,” Seeley said. At first, a drug proves to be amazingly effective against the condition it’s designed to treat. As more people use it, new effects come to light; before long, it begins to seem like a cure-all. Research ensues. Then, the comedown: The studies, when completed, show that it can treat some conditions but not others. In the 1980s, statins emerged as a powerful treatment for high cholesterol, and excitement then mounted about their additional benefits for kidney disease and cognitive decline. Now statins are largely used for their original purpose.

Each new discovery about what GLP-1s can do seems like a lucky surprise—a bonus effect of already miraculous drugs. But people don’t want drugs that are surprising. They want ones that are effective: not medications that might lower their risk of other illnesses, but those that will. To make those drugs, manufacturers need to know what’s actually happening in the body—to what degree the health effects can be attributed to more than just weight loss. To prescribe those drugs, health-care providers need to know the same thing. Doing so will become even harder as GLP-1s themselves become more complicated, targeting multiple other metabolic pathways, each with its own downstream effects. Tirzepatide already targets an additional hormone on top of GLP-1, and a drug that targets three hormones is on its way.

A fuller picture of the potential of GLP-1s may begin to emerge soon. Some of the trials investigating their effects on early Alzheimer’s and Parkinson’s are expected to have results before the end of 2025, offering “a glimpse of whether or not they work,” Drucker said. Eventually, studies may reveal how they work—for these and all the other ancillary benefits. Drug companies are in a furious battle to develop new kinds of obesity drugs, and as it’s shaping up, the future of these medications may not entirely be about obesity. As new kinds of drugs are developed, drugmakers will have to consider whether they maintain, improve upon, or weaken the other benefits, according to Drucker. Competition will likely give rise to a wide range of drugs, each specific to a certain condition or combination of them. GLP-1s might follow the trajectory of blood-pressure medications, which come in more than 200 types to suit all kinds of patients.

New benefits will propel GLP-1s further into the mainstream—not just by opening them up to new subsets of people, but by adding pressure on insurance providers to cover them. Medicare doesn’t pay for obesity drugs, in part because the federal government has historically considered weight loss to be a cosmetic issue, not a medical one. But in March, after the FDA extended Wegovy’s approval to include reducing cardiovascular risk in people with obesity, some Medicare plans began to offer coverage to patients with both weight and heart problems. That GLP-1s have multiple uses is not in itself miraculous. But it would be a small miracle if all of their additional effects, whether separate from or downstream of weight loss, are what help obesity drugs become as widely available as so many other life-changing treatments.

Updated at 12:05 p.m. ET on June 7, 2024

Our most recent flu pandemic—2009’s H1N1 “swine flu”—was, in absolute terms, a public-health crisis. By scientists’ best estimates, roughly 200,000 to 300,000 people around the world died; countless more fell sick. Kids, younger adults, and pregnant people were hit especially hard.

That said, it could have been far worse. Of the known flu pandemics, 2009’s took the fewest lives; during the H1N1 pandemic that preceded it, which began in 1918, a flu virus infected an estimated 500 million people worldwide, at least 50 million of whom died. Even some recent seasonal flus have killed more people than swine flu did. With swine flu, “we got lucky,” Seema Lakdawala, a virologist at Emory University, told me. H5N1 avian flu, which has been transmitting wildly among animals, has not yet spread in earnest among humans. Should that change, though, the world’s next flu pandemic might not afford us the same break.

[Read: Cows have almost certainly infected more than two people with bird flu]

Swine flu caught scientists by surprise. At the time, many researchers were dead certain that an H5N1, erupting out of somewhere in Asia, would be the next Big Bad Flu. Their focus was on birds; hardly anyone was watching the pigs. But the virus, a descendant of the devastating flu strain that caused the 1918 pandemic, found its way into swine and rapidly gained the ability to hack into human airway cells. It was also great at traveling airborne—features that made it well positioned to wreak global havoc, Lakdawala said. By the time experts caught on to swine flu’s true threat, “we were already seeing a ton of human cases,” Nahid Bhadelia, the founding director of the Boston University Center on Emerging Infectious Diseases, told me. Researchers had to scramble to catch up. But testing was intermittent, and reporting of cases was inconsistent, making it difficult for scientists to get a handle on the virus’s spread. Months passed before the rollout of a new vaccine began, and uptake was meager. Even in well-resourced countries such as the U.S., few protections hindered the virus’s initial onslaught.

But the worst never came to pass—for reasons that experts still don’t understand. Certainly, compared with the 1918 pandemic, or even those in the 1950s and ’60s, modern medicine was better equipped to test for and treat flu; although vaccine uptake has never been perfect, the availability of any shots increased protection overall, Sam Scarpino, an infectious-disease modeler and the director of AI and life sciences at Northeastern University, told me. Subtler effects may have played a role too. Other H1N1 viruses had been circulating globally since the late 1970s, potentially affording much of the population a degree of immunity, Troy Sutton, a virologist at Pennsylvania State University, told me. Older people, especially, may have harbored an extra dose of defense, from additional exposure to H1N1 strains from the first half of the 20th century. (After the 1918 pandemic, versions of that virus stuck around, and continued to percolate through the population for decades.) Those bonus safeguards might help explain why younger people were so severely affected in 2009, Lakdawala told me.

Some of those same factors could end up playing a role in an H5N1 epidemic. But 2009 represents an imperfect template—especially when so much about this new avian flu remains unclear. True human-to-human spread of H5N1 is still a distant possibility: For that, the virus would almost certainly need to undergo some major evolutionary alterations to its genome, potentially even transforming into something almost unrecognizable. All of this muddies any predictions about how a future outbreak might unfold.

Still, experts are keeping a close eye on a few factors that could raise H5N1’s risks. For instance, no versions of H5N1 flu have ever gained a sustained foothold in people, which means “there’s very little immunity in the community,” Michael Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota, told me.

Exposure to other flu strains could offer limited protection. Lakdawala and Sutton have been running experiments in ferrets, which transmit and fall ill with flu much like people do. Their preliminary results suggest that animals with previous exposures to seasonal-flu strains experience milder disease when they’re next hit with this particular H5N1. That said, ferrets with zero prior flu experience—which would be the case for some very young kids—fare poorly, worse than they do with the H1N1 of 2009, and “that’s scary,” Lakdawala told me.

It’s too early to say how those results would translate to people, for whom data are sparse. Since this H5N1 virus was first detected in the 1990s, scientists have recorded hundreds of human cases, nearly half of whom have died. (Avian flus that spill intermittently into people often have this kind of nasty track record: This week, the WHO reported that another kind of bird flu, designated H5N2, killed a man in Mexico in late April. It was the flu subtype’s first recorded instance in a human; no evidence yet suggests that this virus has the ability to spread among people, either.) Experts caution strongly against reading too much into the stats: No one can be certain how many people the virus has actually infected, making it impossible to estimate a true fatality rate. The virus has also shape-shifted over decades—and the versions of it that killed those people did not seem capable of spreading among them. As Sutton pointed out, past experiments suggest that the mutations that could make H5 viruses more transmissible might also make them a bit less deadly. That’s not a guarantee, however: The 1918 flu, for instance, “transmitted really well in humans and caused very severe disease,” Sutton said.

[Read: America’s infectious-disease barometer is off]

Scientists also can’t extrapolate much from the fact that recent H5N1 infections among dairy workers in the U.S. have been documented as mild. Many people who work on farms are relatively young and healthy, Bhadelia noted; plus, their exposures have, presumably, been through virus-laden raw milk. The virus could affect a different community in more dramatic ways, and the nature of the disease could shift if the virus entered the body via another route. And “mildness” in the short term isn’t always a comfort, Scarpino said: As with COVID, the disease could still have chronic consequences for someone’s health.

The world is in some ways better prepared for H5N1 than it was in 2009. Scientists have had eyes on this particular avian flu for decades; in the past few years alone, they’ve watched it hopscotch into dozens of animal species, and tracked the genetic tweaks it’s made. Already, U.S. experts are testing for the pathogen in wastewater, and federal regulators have taken action to halt its spread in poultry and livestock. H5 vaccines are stockpiled, and more are on the way—a pipeline that may be speedier than ever before, thanks to the recent addition of mRNA tech.

[Read: The bird-flu host we should worry about]

But this close to the worst days of the COVID-19 pandemic, Osterholm and others worry that halting any outbreak will be harder than it otherwise would have been. “We could see many, many individuals refusing to get a vaccine,” he said. (That may be especially true if two doses are required for protection.) Bhadelia echoed that concern, adding that she’s already seeing a deluge of misinformation on social media. And Scarpino noted that, after the raging debates over COVID-era school closures, legislators may refuse to entertain the option again—even though children are some of the best conduits for flu viruses. Stopping a pandemic requires trust, coordination, and public buy-in. On that front alone, Osterholm said, “without a doubt, I think we’re less prepared.”

The world has a track record of not taking flu seriously—even, sometimes, when it sparks a pandemic. In the months following the initial outbreaks of swine flu, the pandemic was mocked as a nothingburger; public-health officials were criticized for crying wolf. But the arguably “mild” flu epidemic still filled hospital emergency departments with pneumonia cases, spreading the virus to scores of health-care workers; kids still fell sick in droves. So many young people died that, in terms of years of life lost, Osterholm told me, the toll of 2009 still exceeded those of the flu pandemics that began in 1957 and 1968. Nor are comparisons with seasonal flus exactly a comfort: Most years, those epidemics kill tens of thousands of people in the U.S. alone.

H5N1 could also permanently alter the world’s annual flu burden. An avian-flu pandemic could present the perfect opportunity for this virus to join the other flus that transmit seasonally—becoming endemic threats that may be with us for good. “We’ve seen that with every flu pandemic that’s occurred,” Sutton told me. More circulating flu viruses could mean more flu cases each year—or, perhaps, more chances for these viruses to mingle their genetic material and generate new versions of themselves to which the population lacks immunity.

However likely those possibilities are, halting H5N1’s spread now would preclude all of them. Scientists have foresight on this avian flu in a way they never did with pre-pandemic swine flu. Capitalizing on that difference—perhaps the most important one between these two flus—could keep us from experiencing another outbreak at all.

“People will die if doctors misdiagnose patients.” This is true as far as it goes. But the recent news that prompted Elon Musk to share this observation on X was not precisely about medical errors. It was about what he might call the “woke mind virus.” A story by Aaron Sibarium in The Washington Free Beacon had revealed complaints that UCLA’s medical school was admitting applicants partly based on race—a practice that has long been outlawed in California public schools. And this process wasn’t just discriminatory, the story argued; it was potentially disastrous for the public.

The Free Beacon noted that the med school’s U.S. News & World Report ranking had dropped from 6 to 18 since 2020, and the story shared leaked data showing students’ poor performance on their shelf exams. (These evaluations are used as preparation for the national licensing exams that every M.D. recipient must pass before they can practice medicine in the United States.) According to Sibarium, almost one-quarter of the class of 2025 had failed at least three shelf exams, while more than half of students in their internal-medicine, family-medicine, emergency-medicine, or pediatrics rotations had failed tests in those subjects at one point during the 2022–23 academic year—and those struggles led many trainees to postpone taking their national licensing exams. “I don’t know how some of these students are going to be junior doctors,” one unnamed UCLA professor told him. “Faculty are seeing a shocking decline in knowledge of medical students.”

Steven Dubinett, the dean of UCLA med school, denied the story’s allegations. He told me that the admissions committee does not give advantages to any applicants based on race, and he called it “malign and totally not true” to say that his students have been struggling. Dubinett believes that anti-DEI sentiment both within and outside the school is to blame for stirring up this controversy: People “think diversity, equity, and inclusion is in some way against them. And nothing could be further from the truth,” he said. But the stakes are high for prospective doctors and patients alike. The Free Beacon’s claims call attention to a heated fight among medical educators over how much admissions criteria and test scores actually matter, and whether they have any bearing on what it means to be a good physician.

[Read: When medical schools become less diverse]

Sibarium asserts that both UCLA’s fall in the rankings and its decline in test performance can be blamed, in part, on race-based admissions. The latter metric “coincided with a steep drop in the number of Asian matriculants,” the story said, and was associated with a change in the med school’s admissions standards. Dubinett told me that the decline in shelf-exam performance was “modest.” He also pointed to data shared with the UCLA community by the school after the publication of the Free Beacon story, which show that every test-taker had passed surgery, neurology, and emergency medicine during a recent set of exams from 2023–24. The data also show that, for most other specialties, the passing rates were close to the expected benchmark of 95 percent. Moreover, 99 percent of UCLA’s med students had passed the second of three tests required to obtain a medical license on their first try as of 2022–23, and scores have remained above the national average over the past three years.

Meanwhile, the Free Beacon offers little more than speculation about how UCLA’s shifting racial demographics might be linked to academic problems at the school. Nationwide, Black and Hispanic medical students do matriculate with slightly lower grades and scores on the standardized Medical College Admission Test than white and Asian matriculants, according to statistics compiled by the Association of American Medical Colleges. Yet data from UCLA and the AAMC show that the average MCAT scores of UCLA’s accepted medical students have not declined in recent years. As for grades, the average GPA among UCLA’s accepted applicants is 3.8—up from 3.7 in 2019. Dubinett told me that the school sets a minimum threshold for MCAT scores and GPAs that is designed to produce a high graduation rate. “There’s a cutoff based on national data—it’s not made up in our back room,” he said. “We’ve got nearly 14,000 students applying for 173 spots. Are you going to tell me that we’re getting people who are unqualified?”

So what did happen in 2022 to suddenly make so many students perform poorly on their shelf exams? The Free Beacon acknowledges that the med school began a major update to its curriculum in 2020. Following a national trend, UCLA significantly cut back the initial amount of time spent on classroom teaching so that clinical rotations could begin a year earlier. Med schools have been trying to expose their students to real-world experiences sooner, on the principle that book-learned facts are less worthwhile without on-the-job training. Yet shelf exams still require memorizing a hefty dose of facts, which can only be more difficult given less time to study. Dubinett chalks up the initial drop in scores to having students take the tests earlier in their education. Five other medical schools, he notes, also reported a decline in shelf-exam performance under a compressed curriculum.

[From the May 1966 issue: “Our Backward Medical Schools”]

Whether med schools’ broader shift away from traditional coursework has been producing better doctors overall is a separate question. A new curriculum that led to lower standardized-test scores in the short term might be associated, in the long run, with either better or worse clinical care. But no one really knows which is the case. There is still no reliable way to track educational quality. The influential—but controversial—U.S. News med-school rankings, for instance, don’t directly evaluate how good a program is at preparing future doctors. The benefits (or costs) of switching up criteria for admission into med school are just as mysterious. One study counted as many as 87 different personal qualities that are considered useful in the practice of medicine. Which qualities matter most is anyone’s guess.

At UCLA, dry—yet still important—scientific material has been compressed in favor of hands-on experience. The school has also committed to instilling its student body with a social consciousness. In prior coverage for the Free Beacon, Sibarium has described the mandatory Structural Racism and Health Equity course for first-years, which, according to a 2023–24 syllabus obtained by the Free Beacon, intends to help students “develop a structurally competent, anti-racist lens for viewing and treating health and illness,” and encourages them to become “physician-advocates within and outside of the clinical setting.” The Free Beacon called attention to pro-Palestinian, anti-capitalist, and fat-activist messaging included in the course this academic year. That story quotes a former dean of Harvard Medical School, Jeffrey Flier, who described the course as “truly shocking” and said that it is based on a “socialist/Marxist ideology that is totally inappropriate.” (Dubinett told me that the entire first-year curriculum is being evaluated.) Other academics have expressed concern about how a similar approach to teaching students, adopted on a larger scale, might be changing medicine. Schools’ focus on racism and inequality “is coming at the expense of rigorous training in medical science. The prospect of this ‘new,’ politicized medical education should worry all Americans,” Stanley Goldfarb, a former associate dean at the University of Pennsylvania medical school, wrote in a 2019 op-ed in The Wall Street Journal. In a follow-up from 2022, he warned of a “woke takeover of healthcare.”

[Read: The French are in a panic over le Wokisme]

What, exactly, makes a med school “woke”? Any physician can see that unevenly distributed wealth and opportunity play a role in people’s health, and that many illnesses disproportionately fall along racial lines. Doctors who learn about these topics while in school may be more cognizant of them in the clinic. Indeed, after Goldfarb’s first essay was published in the Journal, some physicians started sharing stories that could be taken to support this argument: Posting under the hashtag #GoldfarbChallenge, they described how trying to help patients navigate dire living situations is as much a part of the job as recalling the nuances of biochemistry.

Acknowledging inequality is not an entirely new phenomenon in medicine. Schools were already teaching classes on cultural competence and health disparities when I was a student at the University of Rochester a decade ago. What is different is the open endorsement of political activism—almost always from a left-wing perspective. I certainly attended my share of lectures on how to care for patients from different backgrounds, but I don’t recall witnessing any lecturers leading classroom chants of “Free, Free Palestine,” as allegedly occurred during UCLA’s first-year course this spring. Since I graduated, the AAMC has published a voluntary set of “Diversity, Equity, and Inclusion Competencies” that encourage trainees to “influence decision-makers and other vested groups” by advocating for “public policy that promotes social justice and addresses social determinants of health.” The guidance lists “colonization, White supremacy, acculturation, assimilation” as “systems of oppression” whose impact must be remedied. This new pedagogical approach comes at a time when U.S. physician groups have taken vocal stances on controversial issues such as gun violence, transgender rights, and mask mandates.

I suspect this left turn in medicine was born of a feeling of impotence, rather than a Marxist conspiracy. Doctors have always been better at altering people’s physiology than fixing the social and economic circumstances of their patients. Perhaps medical schools now figure that health outcomes will improve if physicians become more involved in progressive politics. But whatever the intention, this approach will alienate a lot of patients. In recent months, some doctors have been disciplined for voicing pro-Palestine or pro-Israel stances—presumably on behalf of potential patients who might be offended by their politics. Maybe the same caution should apply to med-school lectures given at UCLA.

The push for improved student-body diversity has also grown in prominence. For most of the 20th century, schools encouraged applicants to fit the typical pre-med profile of a diligent lab rat. Over the past few decades, that attitude has changed. Now admissions offices are more comfortable with the idea that students who haven’t focused on the hard sciences or don’t have perfect academic records can still become successful—or might be even better—physicians.

I credit this shift for my own admission to medical school. I was a socially minded liberal-arts student who decided to study linguistics after a calamitous run-in with organic chemistry. By the time I applied, some schools had decided that MCAT scores were not the ultimate determinant of who will make a good doctor. My university was so interested in attracting the sorts of kids who might enrich the campus through what it now calls “the diversity of their educational and experiential backgrounds” that it allowed me to skip the exam altogether. I did end up having academic struggles, and passed anatomy by the skin of my teeth (having failed to correctly answer how many teeth humans have, among other questions). Now I’m a medical-school professor myself. It takes all kinds.

To this day, would-be doctors are expected to master an incredible amount of minutiae, but it is only through clinical practice that they figure out which facts matter most. Nothing is as clarifying as seeing patients live or die because of what you know—or, just as often, how well you communicate it. The Free Beacon article relayed an anecdote by a faculty member describing how a student “could not identify a major artery” in the operating room when asked. Being told to pick out an artery on the spot and failing at that task is, frankly, a rite of passage for medical students. But I’ve seen more people hurt by doctors who didn’t know how to speak Spanish or build rapport than by doctors who forgot the name of a blood vessel. If we keep arguing over what health-care professionals must know, it’s because the answers are as varied as our patients.

The Obese Police

Language is constantly evolving, but you know a change has hit the big time when the AP Stylebook makes it official. In light of all the recent news attention to Ozempic and related drugs, the usage guide’s lead editor announced in April that the entry for “Obesity, obese, overweight” had been adjusted. That entry now advises “care and precision” in choosing how to describe “people with obesity, people of higher weights and people who prefer the term fat.” The use of obese as a modifier should be avoided “when possible.”

In other words, the new guidelines endorse what has been called “people-first language”—the practice of trading adjectives, which come before the person being described, for prepositional phrases, which come after. If you put the word that indicates the condition or disability in front, then—the thinking goes—you are literally and metaphorically leading with it. Reverse the order, and you’ve focused on the person, in all their proper personhood. This change in syntax isn’t just symbolic, its proponents argue: A fact sheet from the Obesity Action Coalition promises that people-first language can “help prevent bias and discrimination.” Changing words is changing minds.

People’s minds sure could use some changing. The world is an awfully inhospitable place for fat people—I know firsthand, because I used to be one. But I also know secondhand, because the discrimination, bias, and downright cruelty are on display for anyone who’s paying attention. Nobody with a shred of decency wants a society where fatness, obesity, high BMI—whatever you call it—is an invitation to humiliation and scorn. So if using people-first language really can reshape people’s attitudes, or if it really makes the world even just a sliver more accepting, I’m in.

I am not at all convinced, though, that a diktat about language will ever make a dent in deeply entrenched enmity; and although the push for people-first language is undoubtedly well-meaning, there’s a whiff of condescension in the idea that people can’t recognize kindness and compassion without signposts put up by social scientists. Around every use of obese or fat or people living with obesity, there are lots of other phrases, and it’s those other phrases—not the people-first or people-last ones—that convey how the writer or speaker feels about fatness.

This puts me at odds with just about the entire medical establishment. “Because of the importance of reducing bias associated with obesity, The Obesity Society and all members of the Obesity Care Continuum have affirmed people-first language as the standard for their publications and programs,” Ted Kyle and Rebecca Puhl wrote in a 2014 commentary for the journal Obesity. The American Medical Association did the same in 2017. People-first language for obesity is now preferred at the National Institutes of Health and the Obesity Action Coalition. Ditto the American Academy of Orthopaedic Surgeons, the College of Contemporary Health, Obesity Canada, and the World Obesity Federation. You need to follow suit if you want to publish academic work in certain journals, present at certain conferences, or—as of this spring—write for any outlet that uses the AP Stylebook.

The problem is, there’s not much evidence that people-first language really can reduce bias, let alone eliminate it. The first position statement on the topic, put out by the Obesity Society in 2013 and co-signed by four other groups, offered just two references to prior research. The first pointed to a study done more than a decade earlier at Ball State University, where psychology researchers asked a few hundred students to describe a hypothetical person with a disability, and then surveyed the same students on their disability-related attitudes. The authors found that people who didn’t use people-first language in their descriptions had more or less the same attitude as people who did—although on a few specific items in the survey, they did show some signs of greater bias. (As the paper notes, “results were mixed.”) In any case, the study gave no reason to believe that students’ word choice was affecting their beliefs, rather than vice versa (which makes more sense). Still, advocates in the obesity field have been pointing to this research, again and again, as evidence that “people-first language affects attitudes and behavioral intentions,” as those advocates put it.

The Obesity Society’s second cited reference in support of people-first language points to a study that came out in 2012, led by Puhl, who is now the deputy director of the Rudd Center for Food Policy and Health at the University of Connecticut. Puhl and her co-authors surveyed more than 1,000 adults on how they’d feel if a doctor at a checkup used each of 10 terms to describe them, including obese, unhealthy weight, high BMI, chubby, and fat. On average, people said that unhealthy weight and high BMI were more desirable, and felt less stigmatizing, than most of the other options; obese and fat were just the opposite. But no one was asked about obese versus person with obesity.

For a paper published in 2018, a group of researchers at the University of Pennsylvania’s Center for Weight and Eating Disorders finally posed that question, in a survey of 97 patients seeking bariatric surgery. Respondents were asked how much they liked each of seven “obesity-related terms,” including some that were people-first (for example, person with obesity and person with excess fat) and some that were not (obese person, fat person). The former got higher ratings, overall.

But even the Penn study had complications. For one thing, not every people-first phrasing was preferred: Patients said they liked the term heavy more than person with excess fat, for example. Also, when asked to choose between obese person and person with obesity, the men in the group didn’t go for people-first—they preferred the more old-fashioned terminology. In a 2020 review, Puhl found that preference for weight-related terms differed not only by gender, but also by race or ethnicity, age, and body size. “People generally prefer more neutral terminology, like higher weight,” she told me recently, but some African Americans might like the word thick, while adolescents at a weight-loss camp favored chubby and plus size (but not curvy). Aspiring health-care providers were fond of unhealthy weight, understandably. Taken all together, she explained, overweight did pretty well, while fat and obese did not.

But again, very little could be said about anybody’s preference for (or against) people with obesity: Out of the 33 studies that Puhl used for her analysis, exactly one—the Penn survey—included people-first phrasing. As for whether using obese as an adjective might actually cause harm, and whether people-first constructions could ever ameliorate that harm, Puhl acknowledged that the evidence is thin. We have surveys on preferences, along with the occasional study (such as this one, on substance abuse) that shows people having slightly different reactions to written passages using different language. And that’s about it.

[Read: The medical establishment embraces leftist language]

It’s hard to imagine what persuasive evidence of harm from using obese as an adjective would even look like. How can we tease out a causal effect of language on social conditions? And, to muddy the waters even more, many fat activists make the case that all forms of the word obesity are stigmatizing. If you’re defining people with a certain BMI or above as having a disease, then how you choose to write your sentences doesn’t really matter, Tigress Osborn, the executive director of the National Association to Advance Fat Acceptance (NAAFA), told me. “Obesity as a disease state is dehumanizing in and of itself,” she said. Whether it’s used as an adjective or noun, the O-word pathologizes fatness.

Some doctors have subscribed to this belief. In 2017, the American Association of Clinical Endocrinologists and the American College of Endocrinology put out a statement citing what they called “the stigmata and confusion related to the differential use and multiple meanings of the term ‘obesity,’” which proposed a new alternative: “adiposity-based chronic disease.” But activists like Osborn opt for plain old fat. She described going to a diversity symposium when she was in college and meeting a NAAFA member who was unapologetic in her use of the word. “She was the first person in my real life who used fat as an adjective and not as an insult,” Osborn said. That’s how to destigmatize the word, she added: Just use it in an ordinary way, to describe an ordinary human condition. “You can’t destigmatize a word you can’t even say.”

When I asked Puhl and Osborn for some actual guidance on all of this, both responded with advice that is consistent with common sense and common courtesy. They talked about context: The language a doctor uses with a patient is going to be different from the language a journalist uses in an article about obesity statistics, which is going to be different from how we talk with friends and family. If the person right in front of you has a clear language preference, honor it. If you’re addressing a group, mix it up. If you feel respect and compassion, that will come through.

As a journalist on the obesity beat, I write about obese people pretty often, so I bristled when a well-known obesity researcher chastised me not long ago for using obese as an ordinary adjective. “Join the people who care,” he wrote. But the idea that word order telegraphs moral priority simply doesn’t jibe with how people actually speak and write, and insisting that it does burdens us with, at best, linguistic awkwardness and, at worst, abominations like people with overweight. True, you wouldn’t describe someone with cancer as being cancerous or someone with dementia as being demented, because those words have their own colloquial meanings. There are, however, other perfectly respectable health-related adjectives that get used routinely: diabetic, asthmatic, anemic, immunocompromised, myopic. And, I think, obese.

Language is, by its nature, majority-rule. A word’s meaning changes when enough people use it in its new, changed way. And I understand the hope and the compassion behind a top-down effort to change the way we talk about fatness. But I do not, cannot, see the value in replacing garden-variety adjectives with phrases that only call attention to themselves.

If ideas like this get traction, it's because we don't have many effective strategies to combat bias, so well-intentioned people latch on to anything that looks even remotely promising. But our public discourse shouldn't fall victim to attempts to rally consensus for a position that is largely unsupported by the evidence. Using people with obesity will not make much difference in the end. But the policing of language, and by extension the ideas it expresses, certainly might.

Tomorrow, a Food and Drug Administration advisory committee will meet to discuss whether the United States should approve its first psychedelic drug. The fate of the treatment—MDMA-assisted therapy for post-traumatic stress disorder—will turn on how the FDA interprets data from two clinical trials that, on their face, are promising. Long-suffering patients who took the drug while undergoing intensive talk therapy were about twice as likely to recover from PTSD as patients who got the placebo with therapy.

If the treatment is approved this summer, it could bring relief to some of the approximately 13 million Americans with PTSD. It could also serve as a model for other psychedelics to meet the FDA’s regulatory bar. But there’s a conundrum at the core of these two clinical trials, one that has plagued virtually all efforts to study psychedelics.

In clinical trials, participants (and the researchers studying them) generally aren’t supposed to know whether they’re getting the actual drug or a placebo, to avoid allowing people’s expectations about a treatment to shape their response to it. Blinding, as this practice is called, is a key component of a randomized controlled clinical trial, or RCT—medicine’s gold standard for demonstrating that a drug actually works. But virtually no one can take a psychedelic drug and not know it.

Some experts believe that unblinding threatens to undermine the entire field of psychedelic research because it means researchers can’t know whether the drugs’ early promise in clinical trials is real or a mirage, driven by the placebo effect and outsize expectations about the power of these drugs. But others argue that RCTs themselves are at fault. To them, psychedelics are exposing long-ignored cracks in our gold standard, especially for testing drugs that act on our minds.

[Read: What it’s like to trip on the most potent magic mushroom]

When randomized controlled trials are well designed, “there is no substitute,” Boris Heifets, a neuroscientist at Stanford University, told me. In an RCT, participants get randomly sorted into two groups, receiving either the treatment or a placebo. Scientists have prized such trials since the 1960s for their power to rule out all the nondrug reasons people who are given a new medication might get better. Chief among those reasons is the placebo effect, in which a patient’s belief in a treatment, rather than anything about the drug or procedure itself, leads to improvement. If trial participants come in with sky-high expectations (as experts suspect is the case in many psychedelics trials), knowing that they’ve received a drug could fuel positive responses, and learning they’ve been denied it could cause them to react negatively. “We’ve gotten a ton of things wrong by trusting unblinded results,” says David Rind, the chief medical officer of the Institute for Clinical and Economic Review, a nonprofit that evaluates new medical treatments.

For all of RCTs’ advantages, “I think it’s obvious that they’re not well suited for studying psychedelics,” Heifets said. In cancer-drug trials, participants won’t know the difference between a saline IV drip and medicine; to test new surgical procedures, control groups sometimes get cut into and sewed up without the actual treatment. But psychedelics like psilocybin or LSD launch people into hallucinatory states that bend space and time. MDMA, known to many as ecstasy, is less extreme, but still sparks expansive feelings of love and empathy. “Participants will know within half an hour whether they’ve been assigned to the experimental or placebo condition,” Michiel van Elk, a cognitive psychologist at Leiden University, told me. In the MDMA clinical trials, run by the pharmaceutical company Lykos Therapeutics, nearly all participants correctly guessed which group they were in.

Many scientists want to get around this problem by designing better blinds. Some labs have tried to keep patients in the dark by administering drugs under anesthesia or using mind-altering pills like methamphetamines as a placebo. Others are trying to engineer new psychedelics that skip the trip entirely. But to other scientists, clever attempts to stuff psychedelics into the RCT framework ignore the possibility that psychedelics’ benefits aren’t reducible to the biochemical action of the drug itself. Since the 1960s, psychedelic researchers have known that the beliefs and expectations a person brings to a trip can influence whether it’s healing or nightmarish. (That’s why most psychedelic-therapy protocols include several psychotherapy sessions before, during, and after treatment.) By striving to cleave the drug’s effects from the context in which it’s given—to a patient by a therapist, both of whom are hoping for healing—blinded studies may fail to capture the full picture.

[Read: Psychedelics open your brain. You might not like what falls in.]

From this perspective, high proportions of unblinding in positive psychedelic trials don’t necessarily mean that the results are invalid. “It’s how people engage with those effects and their therapist that’s contributing to the improvement,” Eduardo Schenberg, a neuroscientist at Instituto Phaneros, a nonprofit psychedelic-research center in Brazil, told me. Recent research backs this up. One small study found that among chronic PTSD patients who got MDMA-assisted therapy, the strength of the bond between therapist and patient—something the drug helps forge with its empathy-inducing effects—predicted treatment success. Given the importance of context, even the most perfectly designed RCTs may fail to capture how helpful these drugs are outside trials, Schenberg said.

Such failure, if it exists, might extend beyond psychedelics to several kinds of psychoactive drugs. For instance, a 2022 analysis found that many antidepressant trials fail to effectively blind participants, in part because of side effects. “We know that 80 percent of the treatment response from antidepressants can be attributed to the placebo response,” Amelia Scott, a clinical psychologist at Macquarie University who co-wrote that study, told me. Yet in practice, antidepressants are effective for many people, suggesting that RCTs aren’t quite capturing what these drugs can offer—and that limiting ourselves to treatments that can be perfectly blinded could mean ignoring helpful mental-health interventions. “We shouldn’t be afraid to question the gold standard,” Schenberg told me. “For different kinds of diseases and treatments, we may need slightly different standards.”

RCTs likely won’t lose their perch as the gold standard anytime soon, for evaluating psychedelics or anything else. But they could be supplemented with other kinds of studies that would broaden our understanding of how psychedelics work, Matt Butler, a neuroscientist at King’s College London, told me. Scientists are already trying open-label trials, where participants know which treatment they’re getting, and measuring expectations along with treatment effects. Descriptive studies, which track how treatments are working in actual clinics, could provide a richer picture of what therapeutic contexts work best. “These levels of evidence aren’t as good as RCTs,” Butler said, but they could help deepen our understanding of when therapies that don’t conform to RCTs could be most helpful.

[Read: What if psychedelics’ hallucinations are just a side effect?]

None of this is to say that Lykos’s flawed RCT data will be enough to convince the FDA’s advisers that Americans with PTSD should be offered MDMA. Several groups, including the American Psychiatric Association, have expressed concern about the trials ahead of the advisory meeting. In addition to the unblinding issue, claims that therapists encouraged participants to report favorable results and hide adverse events prompted the Institute for Clinical and Economic Review to release a report casting doubt on the studies. Lykos CEO Amy Emerson pushed back in a statement, saying, “We stand by the quality and integrity of our research and development program.” Still, some researchers remain worried. “If this sets a precedent that these trials are acceptable data, what does that mean for the future?” Suresh Muthukumaraswamy, a neuropharmacologist at the University of Auckland, told me.

The recent past suggests that blinding may not be a deal-breaker for the FDA. In 2019, the agency approved Spravato, an esketamine nasal spray—a version of ketamine—for the treatment of depression despite concerns about unblinding in the drug’s clinical trials. And the FDA worked with Lykos to design the MDMA-therapy trials after designating it a breakthrough treatment in 2017. In an email, an FDA spokesperson told me that blinded RCTs provide the most rigorous level of evidence, but “unblinded studies can still be considered adequate and well-controlled as long as there is a valid comparison with a control.” In such cases, the spokesperson said, regulators can take into account things like the size of the treatment effect in deciding whether the treatment performed significantly better than the placebo.

[Read: Placebo effect of the heart]

Even if the FDA is on board, rolling out psychedelic therapies before scientists fully understand the interplay among expectation, therapy, and drugs could mean missing an opportunity to force companies to provide data that would meaningfully advance the study of these drugs, Muthukumaraswamy said. It also risks undermining these treatments in the long run. If sky-high expectations are ultimately fueling the positive results we see now, the FDA could end up approving a treatment that becomes less effective as its novelty wears off. That’s especially true if we’re missing key components of what makes these treatments work, or what puts people at risk for bad experiences. To better answer those questions—for psychedelics and other psychoactive drugs—we may need studies that go beyond the gold standard.

For some, the world suddenly goes blurry. Others describe it as having a dust storm in your eyes, or being shaken up in a snow globe. People might see flashing lights or black spots drifting through their field of vision, or acquire a sudden sensitivity to light, worse than walking into the sunlight after having your eyes dilated. If patients aren’t treated, some will inevitably go blind.

Many medical providers never suspect the culprit: syphilis. Usually, a syphilis infection shows up first as a firm, painless sore on the genitals or inside the mouth or anus, then as a rash, often on the hands and feet. If the infection is caught in either of these two stages, the cure is a shot of penicillin, which kills the bacteria. Left untreated, syphilis can enter another, more dangerous phase, attacking the heart, bones, brain, or nerves years or even decades later. Only about 1 to 5 percent of syphilis cases are thought to involve the eyes.

But now, eye symptoms are showing up seemingly all by themselves. Last year, doctors reported 17 new cases of eye syphilis to the Chicago Department of Public Health, mostly in people assigned male at birth with no other signs of the disease. In southwest Michigan, in 2022, five women showed up at clinics with ocular syphilis that ended up being traced back to the same male partner. Experts are disturbed by what these cases might portend: that syphilis has been allowed to spread so widely, and for so long, that what used to be considered a fringe event might not be so rare anymore.

Because eye-syphilis symptoms can be the only noticeable sign of the disease, by the time people do get correctly diagnosed, their vision might be permanently damaged. Peter Leone, an infectious-disease physician at the University of North Carolina School of Medicine, is haunted by a patient who came into his hospital in 2015. The 33-year-old man had been experiencing blurred vision, light sensitivity, and ringing in his ears for weeks, but was misdiagnosed with a sinus issue at the emergency room and sent home with antibiotics. By the time Leone saw him two weeks later, the man could barely count the fingers on a hand held directly in front of his face. Leone immediately began treating him for syphilis, but he never regained his vision.

“Obviously it’s disturbing,” Leone told me. Eye syphilis “was a rare event before, and there seems to be a resurgence.” He was so troubled by the patient he saw in 2015 that he reached out to colleagues to document other cases of eye syphilis around the country, warning that they could represent “a true epidemic.” Scattered reports of rising ocular syphilis have also occurred in France, Canada, and other countries.

The simplest explanation for the jump in eye-related cases could just be that syphilis of any sort has been on the rise in the U.S. for decades, says Amy Nham, an officer with the Epidemic Intelligence Service at the CDC who investigated the Chicago cases. Sexually transmitted infections of all kinds are increasing worldwide, thanks to a long-standing lack of access to testing and treatment, increasing drug use, and falling condom use.

In the U.S., syphilis is gaining ground with particular speed. More than 200,000 Americans were infected with syphilis in 2022, which experts believe is likely an underestimation due to lack of screening during the coronavirus pandemic. That’s almost 80 percent more cases than in 2018, and the highest number of documented cases since 1950. Experts aren’t quite sure why. The disease has always been a wily foe, combining the sneakiest qualities of several other STIs: chlamydia’s immune-evading powers, herpes’s ability to lie dormant for years, and gonorrhea’s trick of traveling through the bloodstream to faraway organs. Christina Marra, a neurosyphilis expert at the University of Washington Medical School, told me that syphilis also seems to be highly stigmatized even compared with other STIs like HIV, which could lead patients to avoid screening. In studies, Marra spoke with hundreds of men who had both HIV and syphilis. “They tell their mom about their HIV but they don’t tell their mom about their syphilis,” she said.

The idea that as infections continue to increase, so does the number of rare or extreme cases, including stand-alone eye syphilis, is the most accepted explanation among scientists. But several experts are concerned that a different, more troubling situation is unfolding. Some of the recent eye-syphilis cases might suggest a new eye-loving strain of the disease. That would explain the fact that all the cases in the Michigan cluster occurred at roughly the same time within a small geographic area, and stemmed from a single partner. “That is very strong epidemiological evidence that there was something unique about the syphilis strain in this case,” William Nettleton, a family-medicine doctor and public-health researcher at Western Michigan University who documented the cluster, told me.

But in Chicago, the infections were documented over eight months, and occurred in hospitals all across the city. And past investigations have not supported the hypothesis of eye-loving strains, although they have found evidence for strains that are more likely to cause neurological symptoms. (Genetic sequencing is not part of standard clinical protocol, so no one attempted to sequence the strain types in the Chicago cases. A larger CDC study to identify any strains that may be associated with the eyes is ongoing.) Where symptoms show up in the body might be also influenced by a person’s individual immune system and risk factors, Leone said.

Nham and other experts are less concerned with any possible new syphilis strains, and more worried about the fact that the disease is rising in new populations. In the past, men who have sex with men, transgender women, and people with HIV were at highest risk. But syphilis is now rising in heterosexual populations and in people without HIV as well. Most of the cases in Chicago were among heterosexual people assigned male at birth without HIV. The Michigan cluster consisted of five HIV-negative women and one HIV-negative man. The man who went blind in North Carolina was heterosexual and HIV-negative. Of particular concern is the sharp rise in pregnant women, who can pass syphilis through the placenta, resulting in stillborn babies or ones who grow up with blindness, deafness, or bone damage.

Today’s apparent increase in neurological and ocular symptoms is a throwback to a time before penicillin, when about one-third of syphilis sufferers experienced neurological symptoms. In the 16th-century epic poem from which syphilis gets its name, the poet describes an unfortunate youth whose “eyes, so beautiful, the clear mirrors of the day are devoured by a fearsome ulcer!” The Dutch painter Gerard de Lairesse and the Portuguese writer Camilo Castelo Branco are believed to have lost their vision from syphilis. Even Friedrich Nietzsche might have gone near-blind from the disease.

These unusual manifestations of syphilis are so antiquated that many doctors working today weren’t trained to recognize them in medical school. In fact, “there’s an entire generation of clinicians, including myself, who never saw syphilis in medical training because, in 1999 and 2000 when I was training, there was almost no syphilis in the U.S.,” says Ina Park, a sexual-health researcher at UC San Francisco and the author of the book Strange Bedfellows: Adventures in the Science, History, and Surprising Secrets of STDs.

But even if doctors were better trained to spot unusual symptoms, the communities most at risk—many of which lack access to testing centers, education, and treatment—might not benefit from that knowledge. The man who came to Leone in 2015 delayed going to the ER in the first place because he had no health insurance. If he had been able to see Leone two weeks earlier, he would likely still have his sight. During the pandemic, many STI clinics closed or switched over to virtual care; last year, Congress proposed a $400 million cut from the national STI-intervention workforce. And in the past year, doctors have faced an acute national shortage of Bicillin L-A, an injectable form of penicillin that is the most effective antibiotic for treating syphilis and the only one recommended for pregnant women.

To the uninitiated, a sudden outbreak of eye syphilis sounds like the plot of a horror movie. But to Leone, the cases in Chicago felt like déjà vu. “I’m going to be really honest, it didn’t surprise me at all,” he told me. We’ve known the cure for syphilis since 1943. The true horror is that the U.S. has allowed this ancient scourge to gain a foothold once again.

This article was originally published by Undark Magazine.

When Mana Parast was a medical resident in 2003, she had an experience that would change the course of her entire career: her first fetal autopsy.

The autopsy, which pushed Parast to pursue perinatal and placental pathology, was on a third-trimester stillbirth. “There was nothing wrong with the baby; it was a beautiful baby,” she recalls. We’re not done, she remembers her teacher telling her. Go find the placenta.

The placenta, a temporary organ that appears during pregnancy to help support a growing fetus, didn’t look as it should. Instead, it “looked like a rock,” Parast says. As far as they could tell, no one had ever examined this patient’s placenta through her pregnancy, and it was her fifth or sixth stillbirth, Parast recalls.

Every year, there are approximately 5 million pregnancies in the United States. One million of those pregnancies end in miscarriage, and more than 20,000 end in stillbirth. Up to half of these pregnancy losses have unidentified causes. Recent and ongoing research, though, suggests that the placenta may hold the key to understanding and preventing some pregnancy complications, such as preterm birth and maternal and infant mortality. A closer look at the placenta—including its size and function—may have a significant impact on stillbirth rates.

The placenta and its pathologies have largely been understudied, some clinicians say. There are multiple reasons: the difficulties in studying a fleeting and dynamic organ, the limitations in researching pregnant people, a lack of scientific consensus, few prospective studies, and the absence of standardized pathology reports on placentas.

Some groups are working to change that. The placenta “is this complex organ that’s critical to support fetal development, so you would think we know everything about it,” says David Weinberg, the project lead for the Human Placenta Project, or HPP, an initiative by the National Institute of Child Health and Human Development. From 2014 to 2023, the project awarded more than $101 million to studies developing better assessment tools for the placenta while it is growing inside a pregnant person.

Placental research is an area of obstetrics that is sorely lacking, according to Weinberg. Although limited research has been done on abnormal placentas after delivery, the HPP research teams realized in early meetings that if they wanted to improve outcomes, they’d need to know more about what a normal placenta does over the course of pregnancy. They are one of several U.S.-based teams tackling this issue.

The shift in research is a welcome one for Parast, who is now director of the Perinatal Pathology Service and a co-director of the Center for Perinatal Discovery at UC San Diego, and has received HPP funding for some of her work. But more should be done, she adds, including adopting a more cooperative approach to applying new findings: “If we’re going to do this right, we have to come at it with this mindset.”

The human placenta does a lot of work for the fetus; it is, effectively, the fetal lungs, kidneys, and digestive tract. It’s also one of the only organs in the animal world that consists of two separate organisms—with tissues from both the mother and fetus—as well as the only temporary organ.

The placenta evolves across a pregnancy, too, continuing to support the developing fetus while interacting with the maternal environment, Weinberg says. The research has, so far, shown that issues with the placenta—its size, its placement, its microbiome—can signal health problems with both pregnant person and fetus, such as preeclampsia, gestational diabetes, preterm birth, and stillbirth.

[Read: The mystery of Zika’s path to the placenta]

As researchers have tried to develop ways to observe the placenta across an entire pregnancy, though, they have run into challenges. It’s difficult, for instance, to study the organ before a birth, because of potential risks to both the woman and her developing fetus. Pregnant women have historically been excluded from most pharmacological and preventive trials, according to the National Institutes of Health Office of Research on Women’s Health. The reasons include the threat of legal liability should a study harm the fetus, and the complex physiology of the pregnant body.

Because research on pregnant women faces so many restrictions, most placental research has been done after birth in a pathology lab. Here, the organ is typically examined only after a poor pregnancy outcome, such as stillbirth or placental abruption, in which the placenta pulls away from the uterus wall and causes heavy bleeding.

Placental pathology, though, has also long had limitations. “No one in their right mind was studying placentas,” says Harvey Kliman, the director of the Yale School of Medicine’s Reproductive and Placental Research Unit, recalling the early years of his pathology training in the 1980s, when the organ was particularly understudied. As a medical student, he says, “I was discouraged from going into OB-GYN. I was told you can’t really do research on pregnant women. This is still basically true.” Conducting OB-GYN research can be particularly challenging compared with other fields of medicine, he adds.

At the time, he says, advanced pathology residents worked on cancer, while newer residents started out in the basement morgue, performing autopsies on placentas and fetuses. Even today, there is a hierarchy in pathology, and placental pathology is at the bottom, he says, akin to “scrubbing toilet bowls in the Navy.”

“A placenta review after loss can take up to six months, because there’s no priority—there’s no patient on the table,” Kliman says. Most pathologists, he adds, “don’t see the human side of this at all. I deal with patients every day. This is very real to me.”

Parast says that the culture of pathology is partly responsible for the lack of placental recognition, because pathologists often work in isolation from one another: “If there’s a perinatal pathologist, they’re the only one. So few people are doing this.”

Historically, getting pathologists to come together and agree on the details of placenta work is difficult; to change that, Parast has been working with Push for Empowered Pregnancy, a nonprofit that aims to end preventable stillbirths, along with other advocacy groups such as Star Legacy Foundation. Parast has also pushed the Society for Pediatric Pathology to come together and standardize the way placental autopsy reports are written. This is a big complaint among obstetricians and advocates, she says, because when it comes to the reports as they are now, “no one understands them.” She adds that clinicians also need more training on how to interpret them.

Placenta research is also hampered by how science is done more broadly, says Michelle Oyen, a biomedical-engineering professor at Washington University in St. Louis. Competitive grant proposals and funding incentives can dissuade collaboration and methodology sharing. But improving obstetrical outcomes requires collaboration between engineers and ob-gyns, she explains. Historically, she adds, there hasn’t been a relationship between those fields, unlike in other areas of medicine, such as orthopedics or cardiology.

Also at issue are shame and stigma around pregnancy loss—and women’s health in general. “It’s not just about the science, it’s about the fact that these problems are much bigger than most people understand,” Oyen says, referring to the systemic, gender-based obstacles in medicine. And NIH funding, when used to study diseases that primarily affect one gender, disproportionately goes to those that affect men, according to a 2021 study published in the Journal of Women’s Health.

Furthermore, a 2021 study in the journal Science showed that female teams of inventors are much more likely to pioneer inventions in women’s health than majority-male teams. With the majority of patents being held by men, “there is a balance problem there,” Oyen says.

[Read: A Fitbit for your placenta]

That may be changing. “Women’s health is having a moment. Those of us who have been working quietly on this for 25 years are laughing about it,” she adds. “Like we’ve been doing this this whole time, and suddenly, you’re really interested in it.”

Research efforts like the Human Placenta Project aim to build a new research base on the ephemeral organ. Now, 10 years into the HPP, researchers have a better understanding of the organ and its role in pregnancy outcomes. They are developing tools to monitor the placenta noninvasively, Weinberg says, such as advances in magnetic resonance imaging and ultrasounds, both of which can help better visualize the placenta and its blood flow.

“We’re at a point of clinical validation,” he says. “Researchers think they have a measure that can indicate whether or not a fetus may be at risk.” Prospective studies are the next step.

Unfortunately, none of these projects will be market-ready in the near future, he says, although he argues that the project has brought national attention to the placenta.

“I do believe the HPP raised global awareness,” Weinberg says. “Things that seemed sci-fi not that long ago are now a possibility.”

Still, some clinicians and advocates are disheartened by what they feel is slow progress with big projects such as the HPP, including Kliman and the advocacy groups Push and Measure the Placenta. Kliman’s placental research has highlighted the role of a small placenta as the leading cause of stillbirth. An unusually small placenta, he says, is a stillbirth risk because fetuses can grow too large for it; this may cause the fetus’s growth to stagnate, or make the organ simply give out.

Diagnosing a small placenta is “low-hanging fruit,” he says, estimating that it could prevent 7,000 stillbirths a year.

A recent study that Kliman co-authored in the journal Reproductive Sciences showed that, in the pregnancy losses his team studied, one-third of previously unexplained stillbirths were associated with a small placenta. The team reviewed clinical data and placental pathology for more than 1,200 unexplained pregnancy losses and determined that the most common feature of stillbirth was a small placenta. The article, he says, has hopefully opened a door to confirming where these losses are coming from.

In 2009, together with his father, an electrical engineer and mathematician who has since died, Kliman developed a 2-D-ultrasound measurement tool called Estimated Placental Volume, which takes about 30 extra seconds during a routine ultrasound. But although the tool launched 15 years ago, getting it implemented has proved difficult.

Whether or not his EPV tool will become standard across obstetrics is still uncertain, he says. “We’re dealing with a paradigm change, and there’s a lot of resistance to changing the paradigm.”

Other groups are also developing new tools for placental health. Oyen, for instance, is part of In Utero, a $50 million program funded by Wellcome Leap, which aims to halve stillbirth rates globally. For research on the placenta—and maternal and fetal health more broadly—the stakes are particularly high, she says: “Right now, all of the statistics on maternal and fetal mortality are going in the wrong direction in this country.” Although fetal mortality rates have held relatively steady in the most recent years for which there are data, Oyen emphasizes that stagnation is not improvement.

Oyen’s team is working to develop new ways to see how oxygen flows in and out of the placenta, using high-resolution imaging and modeling. The models could help determine how the placenta is working and, ultimately, detect if there is growth restriction.

The project follows a collaborative model with teams around the world made up of biomedical engineers, clinicians, and computer scientists. Because of this, Oyen argues, the project is more nimble than traditional research: “We have all these data-sharing agreements. We share techniques; we share information within this program. This is a model for how we have to move forward.”

Getting obstetricians to implement these new findings in placental research will be the next big push, and in the U.S., that means taking the consensus to the American College of Obstetricians and Gynecologists—the arbiter of standard-of-care practices and guidelines for ob-gyns.

Professional societies need to develop guidelines, Parast says: “Obs need to come out and say ‘We need this.’ If there’s a little bit of a push from the obs, our societies will catch on.”

More than 20 years ago, when Parast processed her first placenta, the one that looked more like a rock than an organ, she and her teacher identified an accumulation of protein-containing material that indicated an underlying condition, possibly autoimmune, she says, which may have restricted the fetus’s growth. Had someone looked at this patient’s placentas sooner, Parast says, her multiple stillbirths may have been prevented with treatment.

My refrigerator has a chronic real-estate problem. The issue isn’t leftovers; it’s condiments. Jars and bottles have filled the door and taken over the main shelves. There’s so little room between the chili crisp, maple syrup, oyster sauce, gochujang, spicy mustard, several kinds of hot sauce, and numerous other condiments that I’ve started stacking containers. Squeezing in new items is like simultaneously playing Tetris and Jenga. And it’s all because of three little words on their labels: Refrigerate after opening.

But a lot of the time, these instructions seem confusing, if not just unnecessary. Pickles are usually kept cold after opening, but the whole point of pickling is preservation. The same is true of fermented things, such as sauerkraut, kimchi, and certain hot sauces. Ketchup bottles are a fixture of diner counters, and vessels of chili oil and soy sauce sit out on the tables at Chinese restaurants. So why must they take up valuable fridge space at home?

Meanwhile, foods languish in the pantry when they would do better in the fridge. Nuts develop an off-taste after a few months; spices fade to dust in roughly the same time span. Recently, a bag of flaxseed I’d bought just a few weeks earlier went rancid and began to smell like paint thinner. A lot of commonly unrefrigerated foods could benefit from cold storage, Kasiviswanathan Muthukumarappan, a refrigeration expert at South Dakota State University, told me. Yet maddeningly, they aren’t labeled as such, whereas many shelf-stable foods are refrigerated by default. The conventions of food storage are full of inconsistencies, wasting not only precious refrigerator space but sometimes also food itself.

Judging by a trip to the grocery store, there are two kinds of foods: fridge foods and pantry foods. Pasta and granola bars, for example, are kept at room temperature, whereas fresh foods such as meat, dairy, and produce are kept cold. These types of highly perishable items are defined by the FDA as “temperature control for safety” foods, and keeping them below 40 degrees Fahrenheit slows the growth of many harmful microbes, which can cause food poisoning. Outside the fridge, pathogenic microbes grow rapidly: According to the U.S. Department of Agriculture, these foods shouldn’t be left unrefrigerated for more than two hours.

But the binary—fridge foods and pantry foods—is too simplistic. Many condiments, for example, exist in a murky middle ground. Some mustards can sit out on a counter, whereas others are prone to mold, Karen Schaich, a food-science professor at Rutgers University, told me. Relishes, which are usually chopped pickled vegetables or fruits, can also develop mold or yeast fermentation if not refrigerated. In part, it comes down to their sugar content: Microbes don’t thrive in acidic conditions, but they generally do like some sugar. A broad rule of thumb is that “extremely tart or sour” condiments are usually safe to leave on the counter, as long as they aren’t also sweet, Schaich said.

Proper food storage just can’t be boiled down to a single question—to chill or not to chill?—because the effects of refrigeration are twofold. Beyond safety, the fridge helps maintain a food’s flavor. It does this in part by slowing the growth of spoilage microbes, which usually aren’t harmful but produce revolting flavors and odors. The fridge also slows natural processes that degrade quality. Once safety is controlled for, “chemistry takes over,” Schaich said, referring to reactions that cause food to develop weird or gross flavors over months or even years.

The big one is oxidation, which is responsible for many foul odors, tastes, and textures in food, such as stale Cheerios and oil that smells like Play-Doh. It’s caused by exposure to oxygen and accelerated by factors including time, moisture, bacteria, light, and, crucially, heat. Refrigeration keeps food tasting fresh by controlling for the latter. That’s why products such as Heinz ketchup and Kikkoman soy sauce have labels saying they should be stored in the fridge: not for safety, but for flavor. Put them in your pantry, and they’re unlikely to make you sick.

When it comes to maintaining flavor, one molecule is more consequential than others. “It’s the fat that matters,” Muthukumarappan said. Fatty foods—certain nuts such as pecans and walnuts, some kinds of oil—oxidize and go rancid, usually developing sour or bitter flavors and, sometimes, the tangy smell of metal or the waxy one of crayons. It makes sense to refrigerate peanut butter, and nuts in general, Muthukumarappan said. Better yet, store them in the freezer if you plan on keeping them for years. Grains are likewise vulnerable to rancidity: Hemp seeds have a high oil content and can oxidize within months, and so can some types of flour, Schaich said—in particular, whole-grain flours such as rye and spelt. Storing them in the refrigerator is better than in the cupboard, she said, but vacuum-sealing them to remove oxygen, then putting them in the freezer, is best for long-term storage.

There are other reasons you might want to put things in the refrigerator. Spices don’t usually become rancid, but their potency fades. A milk-carton-size container of smoked paprika I ordered about a year ago is now basically red sawdust. Old cumin smells dull, like pencil shavings. The flavor and pungency of spices come from volatile oils, which are also vulnerable to oxidation. Staleness, Muthukumarappan told me, is usually caused by repeated exposure to the air—as in, regularly opening and closing a spice jar. Keeping spices near heat and light can accelerate the process. The freezer is useful if you plan to store spices long term, provided that they’re kept in airtight containers. But if they’re going to be used frequently, it’s best for them to stay at room temperature. Keeping them cold risks condensation forming every time the container is opened, potentially leading to clumps, off-flavors, or even microbial growth, Luke LaBorde, a food-science professor at Penn State, told me.

In all my years of cooking, I can’t remember seeing a ketchup bottle that said it was okay to store at room temperature, just as I’ve never come across a spice jar that was meant to be kept in the freezer. Storage instructions on foods, or lack thereof, manifest a different reality, one where proper storage techniques aren’t general knowledge but insider information: There probably won’t be any refrigeration instructions on a bag of pine nuts, but if you know, you know. Expecting every product to have detailed instructions is unrealistic. A simpler storage system, if a more space-intensive one, might be to keep everything cold by default. That way, at least most foods would be safer, and presumably stay fresher. When I asked Muthukumarappan whether any foods would taste better if stored at room temperature, he said he couldn’t think of any. Yet there is still lively debate over whether tomatoes, bread, eggs, butter, and olive oil taste best at room temperature.

The fridge-pantry dichotomy will never fully encompass the murky science of food safety, and the experts don’t always agree. Even the rules for produce aren’t totally clear-cut: All sliced fruit, but not all whole fruit, should be kept cold—especially sliced melons. Unlike most fruits, melons aren’t very acidic, making them more hospitable to pathogenic microbes, LaBorde said. Garlic is safe for several months when kept at room temperature, but homemade garlic-in-oil carries the risk of botulism unless refrigerated.

There’s only one way to reclaim our fridge space and avoid rancid nuts, stale oats, and moldy jellies: thinking beyond the fridge-pantry binary. In particular, factor in how long and where you intend on storing food. It’s not always easy: Buy in bulk from Costco, where you can get a five-pound bag of walnuts and a gallon of mayonnaise, and food can easily linger—or be forgotten—in a humid pantry for months, even years. Still, if a bottle of ketchup is going to get used up in a week of summer barbecues, you can let it hang out on the counter. Went nuts when the walnuts went on sale? Freeze some for future you.

The science of food storage was widely known several generations ago because it was taught in American schools, Schaich told me. Now we’re on our own. Although we’re unlikely to ever grasp all of its complexities, understanding it just a little more has some advantages. Disregarding the recommendation to refrigerate an open jar of capers gave me a frisson of excitement—not just because it felt like breaking an imperfect rule, but because of the space it opened up in my fridge.

When my 2-year-old began favoring string cheese and croutons over peas and cauliflower, I tried to get creative. First, I mimicked the artsy approach to vegetables I remembered from childhood, starting with the classic ants on a log and then advancing to cucumber caterpillars and hummus monsters with carrot teeth. My toddler was only mildly amused. Next I turned to persuasion, repeating just how delicious bok choy is and how strong spinach would make her. On most days, I was lucky to get a single bite of something green within an inch of her mouth.

So I turned to Instagram and TikTok, where I quickly noticed that one veggie trick triumphed above all others: Hide the vegetables your child dislikes in the dishes they love. Does your kid like pancakes? Mix a little powdered spinach into those. Mac and cheese? That distinct orange color could come from carrots. You can even disguise cauliflower and broccoli in pizza sauce.

The sneak-it-in strategy predates social media. Authors of parenting cookbooks, such as Deceptively Delicious and The Sneaky Chef: Simple Strategies for Hiding Healthy Foods in Kids’ Favorite Meals, made the rounds on TV programs like The Oprah Winfrey Show and the Today show back in the late aughts. The fact that stealth cooking has remained so popular is amazing when you consider how much work it is. You might spend an extra hour cooking, say, chicken nuggets from scratch with pureed beets tucked inside—versus buying a bag of regular chicken nuggets from the supermarket. But if it helps your toddler get their recommended cup or cup and a half of vegetables each day, it’s worth it, right?

The nutrition experts I spoke with say it’s not. “Children by and large don’t need us to go to those lengths to get vegetables into them,” Laura Thomas, a nutritionist who directs the London Centre for Intuitive Eating, told me.

[Read: The ominous rise of toddler milk]

Vegetables, of course, have many health benefits. Some studies have linked eating vegetables to a decreased risk of several chronic diseases, including heart disease. But these studies look at veggie consumption across many years, not strictly what you eat as a toddler. And even though many children in the U.S. aren’t meeting dietary guidelines on vegetables, Thomas said that doesn’t necessarily mean they are undernourished. A large national study published in 2018 found that toddlers, despite their reputation for veggie-hatred, on average consume enough calcium, vitamin A, and iron. They tend to be low on potassium and fiber, but children (and adults, for that matter) can absorb such crucial nutrients from meat, nuts, beans, whole grains, and other nongreen foods. “There is almost nothing inherent to a vegetable that you can’t get in other foods,” Thomas said.

Disregarding vegetables isn’t an ideal long-term solution, because many of the foods that we tend to eat in their place are high in calories and low in fiber. But in the short term, accepting alternatives can help your toddler survive their pickiest stages without getting scurvy. And crucially, hiding veggies in bread- or meat- or sugar-heavy foods still means your kid is eating a lot of bread or meat or sugar. No amount of vegetables can counteract the detrimental effects of excess sugar.

Prominent nutritionists and child-development specialists alike have been telling parents for years to stop pressuring and tricking kids into eating vegetables. Yet health-conscious parents just can’t seem to put down the blender—which might say less about picky kids and more about the years of health messaging and fad diets their elders have endured. “All of these Millennials who grew up with ‘clean eating’ haven’t really thrown off that baggage,” Thomas said. Ellyn Satter, who for decades has been an expert on feeding and raising healthy kids, puts it more bluntly: “The belief is that if you hide vegetables in your child’s food, they won’t get fat and they’re going to live forever.”

[Read: The latest diet trend is not dieting]

Covertly shredding beets into meatballs and sneaking pureed veggies into our children’s mouths with whipped-cream chasers isn’t just pointless, Satter and other nutritionists say. The approach can even be counterproductive. “The goal of child nutrition is not to get children to eat everything they’re supposed to today. It is to help them to learn to enjoy a variety of healthy food for a lifetime,” Satter told me. And everything scientists know about how to do that stands in contrast to grinding vegetables into an indistinguishable pulp and masking them with other flavors.

Experts told me that if you consistently prepare and eat meals with your kids that contain a variety of foods—including disliked vegetables—without pressuring them to taste or swallow anything, they’ll eventually learn to eat most of what’s offered. Satter originally outlined this approach back in the 1980s, and told me that it works primarily because it creates trust between parent and child. “The child needs to trust their parents to let them determine what to eat or not eat from what the parents offer,” she said. If your child discovers that you’ve been hiding cauliflower in their tater tots or telling them tiny pieces of broccoli are actually green sprinkles, Satter said, you could rupture that trust, and your child may become more wary of the foods you serve or develop negative associations with vegetables.

Nearly 40 years after Satter outlined her feeding method, pediatric nutritionists continue to be wary of the trust-destroying potential of veggie-sneaking. Rafael Pérez-Escamilla, a public-health professor at Yale, told me that even if your child is going through a mac-and-cheese phase (as his son did for many years in the ’90s), he would never advise hiding vegetables in other foods. “Surround your child with healthy foods, but let the kid decide. Let the kid touch the food, smell the food; let the kid learn to eat when he or she is hungry and stop eating when he or she knows he is full,” he said. “It’s easier said than done, but it works.”

[Read: Putting kids on diets won’t solve anything]

The hands-off approach certainly takes less physical work, but Pérez-Escamilla is right that it can be a real emotional struggle. As a parent, I’m still tempted to soothe my anxiety by sneaking kale into a smoothie, and reluctant to cook creamed spinach for my toddler over and over only to be rejected each time. But I have learned to find some comfort in acting as a role model instead of a micromanager.   

Over the past few months, I’ve quit slipping broccoli into pasta sauce and started offering it as part of dinner. Sometimes my toddler takes a nibble; sometimes she doesn’t. I’ve noticed that the less I show I care, the more she experiments on her own.

Doctors often have a piece of advice for the rest of us: Don’t Google it. The search giant tends to be the first stop for people hoping to answer every health-related question: Why is my scab oozing? What is this pink bump on my arm? Search for symptoms, and you might click through to WebMD and other sites that can provide an overwhelming possibility of reasons for what’s ailing you. The experience of freaking out about what you find online is so common that researchers have a word for it: cyberchondria.

Google has introduced a new feature that effectively allows it to play doctor itself. The search giant has long included snippets of text at the top of its search results, but generative AI now takes things a step further. As of last week, the company is rolling out its “AI overview” feature to everyone in the United States, one of the biggest design changes in recent years. Many Google searches will return an AI-generated answer right underneath the search bar, above any links to outside websites. This includes questions about health. When I searched Can you die from too much caffeine?, Google’s AI overview spit out a four-paragraph answer, citing five sources.

But this is still a chatbot. In just a week, Google users have pointed out all kinds of inaccuracies with the new AI tool. It has reportedly asserted that dogs have played in the NFL and that President Andrew Johnson had 14 degrees from the University of Wisconsin at Madison. Health answers have been no exception; a number of flagrantly wrong or outright weird responses have surfaced: that rocks are safe to eat, or that chicken is safe to eat once it reaches 102 degrees. These search fails can be funny when they are harmless. But when more serious health questions get the AI treatment, Google is playing a risky game.

Google’s AI overviews don’t trigger for every search, and that’s by design. “What laptop should I buy?” is a lower-stakes query than “Do I have cancer?” of course. Even before the introduction of AI search results, Google has said that it treats health queries with special care to surface the most reputable results at the top of the page. “AI overviews are rooted in Google Search’s core quality and safety systems,” a Google spokesperson told me in an email, “and we have an even higher bar for quality in the cases where we do show an AI overview on a health query.” The spokesperson also said that Google tries to show the overview only when the system is most confident in the answer. Otherwise it will just show a regular search result.

When I tested the new tool on more than 100 health-related queries this week, an AI overview popped up for most of them, even the sensitive questions. For real-life inspiration, I used Google Trends, which gave me a sense of what people actually tend to search for on a given health topic. Google’s search bot advised me on how to lose weight, how to get diagnosed with ADHD, what to do if someone’s eyeball is popping out of its socket, whether menstrual-cycle tracking works to prevent pregnancy, how to know if I’m having an allergic reaction, what the weird bump on the back of my arm is, how to know if I’m dying. (Some of the AI responses I found have since changed, or no longer show up.)

Not all the advice seemed bad, to be clear. Signs of a heart attack pulled up an AI overview that basically got it right—chest pain, shortness of breath, lightheadedness—and cited sources such as the Mayo Clinic and the CDC. But health is a sensitive area for a technology giant to be operating what is still an experiment: At the bottom of some AI responses is small text saying that the tool is “for informational purposes only … For medical advice or diagnosis, consult a professional. Generative AI is experimental.” Many health questions contain the potential for real-world harm, if answered even just partially incorrectly. AI responses that stoke anxiety about an illness you don’t have are one thing, but what about results that, say, miss the signs of an allergic reaction?

Even if Google says it is limiting its AI-overviews tool in certain areas, some searches might still slip through the cracks. At times, it would refuse to answer a question, presumably for safety reasons, and then answer a similar version of the same question. For example, Is Ozempic safe? did not unfurl an AI response, but Should I take Ozempic? did. When it came to cancer, the tool was similarly finicky: It would not tell me the symptoms of breast cancer, but when I asked about symptoms of lung and prostate cancer, it obliged. When I tried again later, it reversed course and listed out breast-cancer symptoms for me, too.

Some searches would not result in an AI overview, no matter how I phrased the queries. The tool did not appear for any queries containing the word COVID. It also shut me down when I asked about drugs—fentanyl, cocaine, weed—and sometimes nudged me toward calling a suicide and crisis hotline. This risk with generative AI isn’t just about Google spitting out blatantly wrong, eye-roll-worthy answers. As the AI research scientist Margaret Mitchell tweeted, “This isn’t about ‘gotchas,’ this is about pointing out clearly foreseeable harms.” Most people, I hope, should know not to eat rocks. The bigger concern is subtler sourcing and reasoning errors—especially when someone is Googling for an immediate answer, and might be more likely to read nothing more than the AI overview. For instance, it told me that pregnant women could eat sushi as long as it doesn’t contain raw fish. Which is technically true, but basically all sushi has raw fish. When I asked about ADHD, it cited AccreditedSchoolsOnline.org, an irrelevant website about school quality.

When I Googled How effective is chemotherapy?, the AI overview said that the one-year survival rate is 52 percent. That statistic comes from a real scientific paper, but it’s specifically about head and neck cancers, and the survival rate for patients not receiving chemotherapy was far lower. The AI overview confidently bolded and highlighted the stat as if it applied to all cancers.

In certain instances, a search bot might genuinely be helpful. Wading through a huge list of Google search results can be a pain, especially compared with a chatbot response that sums it up for you. The tool might also get better with time. Still, it may never be perfect. At Google’s size, content moderation is incredibly challenging even without generative AI. One Google executive told me last year that 15 percent of daily searches are ones the company has never seen before. Now Google Search is stuck with the same problems that other chatbots have: Companies can create rules about what they should and shouldn’t respond to, but they can’t always be enforced with precision. “Jailbreaking” ChatGPT with creative prompts has become a game in itself. There are so many ways to phrase any given Google search—so many ways to ask questions about your body, your life, your world.

If these AI overviews are this inconsistent for health advice—a space where Google says it holds itself to an even higher bar—what about the rest of our searches?
