Spared by DOGE—For Now

Americans have plenty to worry about these days when it comes to infectious-disease outbreaks. This is the worst flu season in 15 years, there’s a serious measles outbreak roiling Texas, and the threat of bird flu isn’t going away. “The house is on fire,” Denis Nash, an epidemiologist at CUNY School of Public Health, told me. The more America is pummeled by disease, the greater the chance of widespread outbreaks and even another pandemic.

As of this week, the federal government may be less equipped to deal with these threats. Elon Musk’s efforts to shrink the federal workforce have hit public-health agencies, including the CDC, NIH, and FDA. The Trump administration has not released details on the layoffs, but the cuts appear to be more than trivial. The CDC lost an estimated 700 people, according to the Associated Press. Meanwhile, more than 1,000 NIH staffers reportedly lost their jobs.

Perhaps as notable as who was laid off is who wasn’t. The Trump administration initially seemed likely to target the CDC’s Epidemic Intelligence Service, a cohort of doctors, scientists, nurses, and even veterinarians who investigate and respond to disease outbreaks around the world. Throughout the program’s history, EIS officers have been the first line of defense against anthrax, Ebola, smallpox, polio, E. coli, and, yes, bird flu. Four recent CDC directors have been part of the program.

The layoffs were mostly based on workers’ probationary status. (Most federal employees are considered probationary in their first year or two on the job, and recently promoted staffers can also count as probationary.) EIS fellows typically serve two-year stints, which makes them probationary and thus natural targets for the most recent purge. EIS fellows told me they were bracing to be let go last Friday afternoon, but the pink slips never came. Exactly why remains unclear. In response to backlash about the planned firings, Musk posted on X on Monday that EIS is “not canceled” and that those suggesting otherwise should “stop saying bullshit.” A spokesperson for DOGE did not respond to multiple requests for comment.

This doesn’t mean EIS is safe. Both DOGE and Robert F. Kennedy Jr., Donald Trump’s newly confirmed health secretary, are just getting started. More layoffs could still be coming, and significant cuts to EIS would send a clear message that the administration does not believe that investigating infectious-disease outbreaks is a good use of tax dollars. In that way, the future of EIS is a barometer of how seriously the Trump administration takes the task of protecting public health.

Trump and his advisers have made it abundantly clear that, after the pandemic shutdowns in 2020, they want a more hands-off approach to dealing with outbreaks. Both Trump and Kennedy have repeatedly downplayed the destruction caused by COVID. But so far, the second Trump administration’s approach to public health has been confusing. Last year, Trump said he would close the White House’s pandemic office; now he is reportedly picking a highly qualified expert to lead it. The president hasn’t laid out a bird-flu plan, but amid soaring egg prices, the head of his National Economic Council recently said that the plan is coming. Kennedy has also previously said that he wants to give infectious-disease research a “break” and focus on chronic illness; in written testimony during his confirmation hearings, he claimed that he wouldn’t actually do anything to reduce America’s capacity to respond to outbreaks.

The decision to spare EIS, at least for now, only adds to the confusion. (Nor is it the sole murky aspect of the layoffs: Several USDA workers responding to bird flu were also targeted, although the USDA told me that those cuts were made in error and that it is working to “rectify the situation.”) On paper, EIS might look like a relatively inconsequential training program that would be apt for DOGEing. In reality, the program is less like a cushy internship and more akin to public health’s version of the CIA.

Fellows are deployed around the world to investigate, and hopefully stop, some of the world’s most dangerous pathogens. The actual work of an EIS officer varies depending on where they’re deployed, though the program’s approach is often described as “shoe-leather epidemiology”—going door to door or village to village probing the cause of an illness in the way a New York City detective might investigate a stabbing on the subway. Fellows are highly credentialed experts, but the process provides hands-on training in how to conduct an outbreak investigation, according to Nash, the CUNY professor, who took part in the program. Nash entered EIS with a Ph.D. in epidemiology, but “none of our training could prepare us for the kinds of things we would learn through EIS,” he said.

In many cases, EIS officials are on the ground investigating before most people even know there’s a potential problem. An EIS officer investigated and recorded the United States’ first COVID case back in January 2020, when the virus was still known as 2019-nCoV. It would be another month before the CDC warned that the virus would cause widespread disruption to American life.

More recently, in October, EIS officers were on the ground in Washington when the state was hit with its first human cases of bird flu, Roberto Bonaccorso, a spokesperson with the Washington State Department of Health, wrote to me. “Every single outbreak in the United States and Washington State requires deployment of our current EIS officers,” Bonaccorso said.

EIS is hardly the only tool the federal government uses to protect the country against public-health threats. Managing an outbreak requires coordination across an alphabet soup of agencies and programs; an EIS fellow may have investigated the first COVID case, but that of course didn’t stop the pandemic from happening. Other vital parts of how America responds to infectious diseases were not spared by the DOGE layoffs. Two training programs with missions similar to that of EIS were affected by the cuts, according to a CDC employee whom I agreed not to identify by name because the staffer is not authorized to talk to the press.

The DOGE website boasts of saving nearly $4 million on the National Immunization Surveys, collectively one of the nation’s key tools for tracking how many Americans, particularly children, are fully vaccinated. What those cuts will ultimately mean for the future of the surveys is unknown. A spokesperson for the research group that runs the surveys, the National Opinion Research Center, declined to comment and directed all questions to the CDC.

And more cuts to the nation’s public-health infrastructure, including EIS, could be around the corner. RFK Jr. has already warned that certain FDA workers should pack their bags. Kennedy has repeatedly claimed that public-health officials inflate the risks of infectious-disease threats to bolster their importance with the public; EIS fellows are the first responders who often hit the ground before public officials have even sounded the alarm.

Ironically, the work of the EIS is poised to become especially pressing during Trump’s second term. If measles, bird flu, or any other infectious disease begins spreading through America unabated after we have fired the public-health workforce, undermined vaccines, or halted key research, it will be the job of EIS fellows to figure out what went wrong.

On the afternoon of Friday, February 7, as staff members were getting ready to leave the headquarters of the National Institutes of Health, just outside Washington, D.C., officials in the Office of Extramural Research received an unexpected memo. It came from the Department of Health and Human Services, which oversees the NIH, and arrived with clear instructions: Post this announcement on your website immediately.

The memo announced a new policy that, for many universities and other institutions, would hamstring scientific research. It said that the NIH planned to cap so-called indirect costs funded by grants—overhead that covers the day-to-day administrative and logistical duties of research. Some NIH-grant recipients had negotiated rates as high as 75 percent; going forward, the memo said, they would now be limited to just 15 percent. And this new cap would apply even to grants that had already been awarded.

The announcement was written as if it had come from the NIH Office of the Director. It also directed all inquiries to the Office of Extramural Research’s policy branch. And yet, no one at the NIH had seen the text until that Friday afternoon, several current and former NIH officials with knowledge of the situation told me. “None of us had anything to do with that document,” one of them said. But the memo was dressed up in a way clearly intended to make it look like a homegrown NIH initiative. (Everyone I spoke with for this story requested anonymity out of fear of reprisal from the Trump administration. HHS did not respond to requests for comment.)

Over the next several days, the memo sparked confusion and chaos at the NIH, and across American universities and hospitals, as researchers tried to reckon with the likely upshot—that many of them would have to shut down their laboratories or fire administrative staff. A federal judge has since temporarily blocked the cap on indirect costs. But the memo’s abrupt arrival at the NIH, and the way it bulldozed through the agency, underscores how aggressively the Trump administration is exercising its authority and demanding compliance. One former official told me that the administration’s approach “seems to be We go in; we bully; we say, ‘Do this; you have no choice,’” and shows little regard for the people or research affected.

Typically, a memo communicating a major decision related to grants would take months or years to put together, sometimes with public input, and be released six months to a year before being implemented, one current NIH official told me—earlier, even, “if the impact will be more substantial.” In this case, though, Stefanie Spear, the HHS principal deputy chief of staff, told officials in the Office of Extramural Research, which oversees the awarding of grants, that this new memo needed to be posted to the NIH website no later than 5 p.m. that afternoon—within about an hour of the agency receiving it. Soon, the timeline tightened: The memo had to be published within 15 minutes. “It was designed to minimize the chance that anyone within an agency could even have time to respond,” another former NIH official told me.

Substantial changes are generally vetted through HHS leadership, and NIH officials have always “very much abided by the directives of the department,” the former official said. But in the past, drafting those sorts of directives has been collaborative, a former NIH official told me. If NIH officials disagreed with a policy that HHS proposed, a respectful discussion would ensue. Indirect-cost rates are controversial: The proportion of NIH funding that has gone to them has grown over time, and proponents of trimming overhead argue that doing so would make research more efficient. A cut this deep and sudden, though, would upend research nationwide. And to grant recipients and NIH officials, it seemed less an attempt to reform or improve the current system, and more an effort to blow it up entirely. Either way, a unilateral demand to publish unfamiliar content under the NIH’s byline was unprecedented in the experience of the NIH officials I spoke with. “It was completely inappropriate,” the former official told me.

But Spear and Heather Flick Melanson, the HHS chief of staff, insisted that the memo was to go live that evening. Officials immediately began to scramble to post the notice on the agency’s grants website, but they quickly hit some technical snares. Fifteen minutes passed, then 15 more. The two HHS officials began to badger NIH staff, contacting them as often as every five minutes, demanding an explanation for why the memo was still offline. The notice went live just before 5:45 p.m., and finally, the phone calls from HHS stopped.

Almost immediately, the academic world erupted in panic and rage. At the same time, the news was blazing through the NIH; staff members felt blindsided by the memo, which appeared to have come from within the agency but which they’d known nothing about. The notice’s formatting, tone, and abruptness also led many within the agency to suspect that it had not originated there or been vetted by NIH officials. “I’ve never seen anything so sloppy,” the current NIH official, who has written several NIH notices, told me. “We also don’t publish announcements after 5 p.m. on Friday, ever … I checked multiple times to be sure it was real.”

The NIH had already been caught in the Trump administration’s first salvo of initiatives. On January 27, a memo from the Office of Management and Budget froze the agency’s ability to fund grants. (In the following week, multiple federal judges issued orders that should have unpaused the funding halt, but many grants remained in limbo.) And in 2017, during Donald Trump’s first term, his administration went after indirect costs, proposing to cap them at 10 percent. That prompted the House and Senate Appropriations Committees to introduce a new provision that blocked the administration from altering those rates; Congress has since included language in its annual spending bills that prevents changes to indirect costs without legislative approval. On February 10 of this year—the Monday after the memo restricting those rates went up—yet another federal judge issued yet another temporary restraining order that again instructed the NIH to thaw its funding freeze.

Last week, the NIH told its staff to resume awarding grants, with prior indirect-cost rates intact. But “the damage is done,” the former NIH official said. Scientists across the nation have had their funding disrupted; many have had to halt studies. And at the NIH—where roughly 1,000 staff members recently received termination notices, amid a mass layoff of federal workers that stretched across HHS—those who remain fear for their jobs and the future of the agency. The nation’s leaders, NIH officials told me, seem entirely unwilling to consult the NIH about its own business. If the administration remains uninterested in maintaining the agency’s basic functions, the NIH’s purpose—supporting medical research in the United States—will crumble, or at least deteriorate past the point where it resembles anything that the people who make up the agency can still recognize.

Donald Trump’s first term saw a great deal of political polarization. Right- and left-leaning Americans disagreed about environmental regulation and immigration. They disagreed about vaccines and reproductive rights. And they disagreed about whether or not to have children: As Republicans started having more babies under Trump, the birth rate among Democrats fell dramatically.

A few years ago, Gordon Dahl, an economist at UC San Diego, set out to measure how Trump’s 2016 victory might have affected conception rates in the years following. And he and his colleagues found a clear effect: Starting after Trump’s election, through the end of 2018, 38,000 fewer babies than would otherwise be expected were conceived in Democratic counties. By contrast, 7,000 more than expected were conceived in Republican counties in that same period. (The study, published in 2022, was conducted before data on the rest of Trump’s term were available.) Over the past three decades, Republicans have generally given birth to more kids than Democrats have. But during those first years of the first Trump administration, the partisan birth gap widened by 17 percent. “You see a clear and undeniable shift in who’s having babies,” Dahl told me.

That isn’t to say 38,000 couples took one look at President Trump and decided, Nope, no baby for us! But the correlation that Dahl’s team found was clear and strong. The researchers also hypothesized that George W. Bush’s win in 2000, another close election, would have had a noticeable effect on fertility rates. And they found that after that election, too, the partisan fertility gap widened, although less dramatically than after the 2016 election. According to experts I spoke with, as the ideological distance between Democrats and Republicans has grown, so has the influence of politics on fertility. In Trump’s second term, America may be staring down another Democratic baby bust.

Dahl’s paper suggested a novel idea: Perhaps shifts in political power can influence fertility rates as much as, say, the economy does. This one paper only goes so far: Dahl and his co-authors found evidence for a significant shift in birth rates only in elections that a Republican won; for the 2008 election, they found no evidence that Barack Obama’s victory affected fertility rates. (They suggest in the paper, though, that the intense economic impact of the Great Recession might have drowned out any partisan effect.) And the study looked only at those three elections; little other research has looked so directly at the impact of American presidential elections on partisan birth rates. But plenty of studies have found that political stability, political freedom, and political transitions all affect fertility. To researchers like Dahl, this growing body of work suggests that the next four years might follow similar trends.

In the U.S., partisan differences in fertility patterns have existed since the mid-1990s. Today, in counties that lean Republican, people tend to have bigger families and lower rates of childlessness; in places that skew Democratic, families tend to be smaller. And according to an analysis by the Institute for Family Studies, a right-leaning research group, places that tilt more Republican have become associated with even higher fertility rates over the past 12 years. “I don’t think there’s any reason to think that’s about to stop,” Lyman Stone, a demographer with the institute, told me.

That Democrats might choose to have fewer babies under a Republican president, and perhaps vice versa, may seem intuitive. People take into account a lot of factors when they’re deciding to have kids, including the economy and their readiness to parent. “People are not just looking at the price of eggs,” Sarah Hayford, the director of the Institute for Population Research at Ohio State University, told me. They also consider more subjective factors, such as their own well-being, their feelings about the state of society, and their confidence (or lack thereof) in political leadership. Trump’s supporters may feel more optimistic than ever about the future, but his detractors feel otherwise. After a few short weeks in office, the president has already announced withdrawals from the Paris Agreement on climate change and the World Health Organization, and paused funding for a slew of government services. Those include child-care-assistance programs, although the administration has promised to support policies to encourage family growth. “If you’re a Democrat and you really care about child care and family leave and climate change,” Dahl said, you might conclude that “this is maybe not the right time to bring a kid into the world.”

Some would-be parents aren’t just worried about the world they might bring a child into—they’re worried about themselves, too. In 2016, Roe v. Wade still protected Americans’ right to an abortion. Since the Supreme Court struck down Roe, states across the country have enacted abortion bans. In some cases, those bans have meant that pregnant women have had to wait for care, or be airlifted to other states; as a direct result, at least five pregnant American women have died. These risks can weigh heavily. After the election, Planned Parenthood locations across the country saw a surge in appointments for birth control and vasectomies.

Brittany, a labor-and-delivery nurse in North Carolina, told me that she and her husband had decided to try for one more kid—she wanted a girl, after three boys—but after Trump was reelected, she changed her mind. (Brittany requested that I not use her last name, in order to protect her medical privacy.) During her first pregnancy, when she nearly lost her uterus to a severe postpartum hemorrhage, doctors stopped the bleeding with the help of a device that can also be used in abortions. Emergency abortion is legal in North Carolina, but Brittany fears that could change or that doctors might become more wary about using those same tools to save her reproductive organs—or even her life—under an administration that has signaled support for anti-abortion groups. Brittany is 37 now, and not optimistic about her chances of getting pregnant in four years, when Trump is out of office. Her husband, who voted for Trump, “thinks that I’m kind of blowing things out of proportion when I say we’re definitely not having another baby because of this administration,” she said. For her, though, it seemed like the only rational choice.

If Democrats’ drops in fertility over the coming years do again outstrip Republican gains, that trend will worsen a broader issue the U.S. is facing: a countrywide baby bust. The fertility rate has been falling for almost a decade, save for a brief pandemic baby boom. Around the world, falling birth rates have set off anxieties about how societies might handle, for instance, the challenge of an aging population with few younger people to care for them. In the U.S., fears about population collapse also have helped unite conservatives with the techno-libertarians who have recently flocked to Trump’s inner orbit. Elon Musk, who has 12 children, has repeatedly claimed that population collapse is a bigger threat than climate change. At the annual March for Life in Washington, D.C., last month, Vice President J. D. Vance told the crowd, “I want more babies in the United States of America.”

So far, no country has hit on the magic public policy that will reverse population decline. Taiwan introduced more paid family leave, along with cash benefits and tax credits for parents of young kids. Russia, Italy, and Greece have all tried paying people to have kids. Japan has tried an ever-changing list of incentives for some 30 years, among them subsidized child care, shorter work hours, and cash. None of it has worked. Vance favors expanding the child tax credit; the Trump administration has also sent early signals of family-first policies, including a memo instructing the Department of Transportation to preferentially direct grants and services toward communities with high marriage and birth rates.

As Musk and Vance fight against population decline, they could entice enough Americans to have kids that they can counteract a Democratic deficit, or even reverse falling birth rates. But that won’t be easy. “There may be a Trump bump in conservative places and a Trump bust in liberal places,” Stone told me. “I would bet on the dip being bigger.”

For decades in the United States, scientists and government officials have coexisted in a mostly peaceable and productive symbiosis. The government has funded science and then largely left well enough alone. Scientific agencies have been staffed by scientists; scientists have set scientific priorities; scientists have ensured the integrity of the science that is done, on the theory that scientists know their own complicated, technical, sometimes arcane work best. Under that system, science has flourished, turning the government’s investment into technological innovation and economic growth. Every dollar invested in research and development has been estimated to return at least $5 on average—billions annually.

Recently, that “we pay; you do” mutualism has grown shakier and, since January, fractured into all-out antagonism. In less than a month, the Trump administration has frozen research funds, halted health communications and publications, vanished decades of health and behavior data from its websites, terminated federally funded studies, and prompted researchers to scrub extensive lists of terms from manuscripts and grant proposals. Those changes are, by official accounts, in compliance with Donald Trump’s recent executive orders, which are intended to derail “wasteful” DEI programs and purge any references to “gender ideology” from content funded or published by the federal government.

The administration’s actions have also affected scientific pursuits in ways that go beyond those orders. The dismantling of USAID has halted clinical trials abroad, leaving participants with experimental drugs and devices still in their bodies. Last week, NIH announced that it would slash the amount its grants would pay for administrative costs—a move that has since been blocked by a federal judge but that would substantially hamper entire institutions from carrying out the day-to-day activities of research. The administration is reportedly planning to cut the budget for the National Science Foundation. Mass layoffs of federal workers have also begun, and two NIH scientists (who asked not to be identified for fear of professional repercussions) told me they participated in a meeting this morning in which it was announced that thousands of staff across the Department of Health and Human Services would be let go starting today. Robert F. Kennedy Jr. has now become the head of that department, after two confirmation hearings in which he showed a lack of basic understanding of the U.S. health system and a flagrant disregard for data that support the safety and effectiveness of various lifesaving vaccines. (The White House did not return repeated requests for comment.)

The mood among scientists is, naturally, grim. “There is so little clarity about what to do,” Elizabeth Wrigley-Field, a sociologist at the University of Minnesota, told me. Many researchers I spoke with feel immobilized by the recent changes. “Everyone is in a panic right now,” Mati Hlatshwayo Davis, the director of health for the city of St. Louis, Missouri, told me. “And when researchers don’t know what they’re allowed to do, science is not going to get done.”

Science can be polarizing, especially in times of crisis: During the worst days of the coronavirus pandemic, people fought intensely over the validity of data, and who counted as an expert. And new presidents frequently shift federal priorities, including scientific ones, as is their prerogative—George W. Bush, for instance, defunded certain types of stem-cell research. But generally, those changes have been “much smaller in scale and much more targeted,” Alexander Furnas, a policy researcher at Northwestern University who studies how the government has funded science, told me. Funding for science has also, historically, had strong bipartisan support: Furnas and his colleagues have found that Republicans have appropriated more money to science than Democrats have in the past 40 years. Now the government is experimenting with just how much it can renegotiate its relationship with science.

The new administration seems unlikely to abandon science in its entirety—research into space exploration or artificial intelligence may well continue without friction and even flourish under Trump’s leadership. In an executive order released yesterday, the White House reaffirmed its commitment to tackling chronic disease, a priority of Kennedy’s. But the new administration can pursue certain sectors of science, and talk up scientific values, while still diminishing the research enterprise as a whole. Science and government are now weeks into what will likely be a prolonged battle over how research can and will be done in the United States.

Although some of the government’s actions against science have since been blocked or undone, the suddenness, frequency, and extent of the changes so far have left researchers dreading what might come next.

Nearly every scientist I spoke with for this story told me that they expect national health to decline under the new administration’s leadership. Attempts to defund science on the whole could affect work across fields, throttling drug discovery, clinical trials, climate adaptation, and more. Many also expect that the moratorium on DEI-focused programming will have severe impacts on who is able to do the work of science—further impeding women, people of color, and other groups underrepresented in the field from entering and staying in it. By deleting data and imposing restrictions on the ways in which new data can be collected, the government has also set a worrying new standard for its reach into American research.

The government already wields enormous power over science: It has for decades been the biggest funder of basic research in this country. It employs some of the country’s best scientists. And it’s a crucial source of scientific data, aggregating from across sources to paint portraits of national trends in cancer rates, infectious disease, energy use, air quality, food consumption, and more—essentially, every quantifiable aspect of what it means to exist as an American. Researchers rely on those banks of intel.

Political interference has obstructed American science before. In the 1990s, after a study linked keeping guns at home to higher homicide rates at home, the National Rifle Association lobbied to eliminate the entire National Center for Injury Prevention and Control at the CDC, which had funded that research; in response, Congress passed the Dickey Amendment, which forbade the CDC from using its funds to “advocate or promote gun control.” The agency essentially halted its funding of any firearm-related research until 2019, when Congress again began allocating funds for research on gun violence. By then, many experts had been scared off the topic, Megan Ranney, the dean of Yale’s School of Public Health, told me: Other sources of funding were scarce, and researchers worried that any professional connection to this topic could hurt their chances of getting government grants, even for unrelated projects. When Ranney chose to study firearm violence in the early 2000s, she said, “I was explicitly told not to, because it would doom my career as an academic scientist.” The CDC-funding gap also left a chasm in data sets tracking gun ownership and violence. Only now, Ranney told me, is a critical mass of well-trained researchers starting to make up for those decades of lost evidence.

In the coming years, HIV/AIDS research—to take just one example—could be similarly stifled, Amy Fairchild, a historian at Syracuse University, told me. Epidemiologists who study sexually transmitted infections in general tend to diligently track gender, because patterns of transmission can differ so greatly along that axis. But under Trump’s leadership, that scientific rigorousness has turned into a potential vulnerability. Already, the administration has issued guidance limiting PEPFAR-funded pre-exposure prophylaxis to only “pregnant and breastfeeding women,” excluding by omission other populations extremely vulnerable to infection, including men who have sex with men and transgender people. And several sexual-health researchers told me that the Trump administration recently issued a termination order for their large, CDC-funded study that focused on reducing health disparities among populations affected by multiple STIs. (A judge has since issued a temporary restraining order allowing the study to resume.)

But the new administration’s approach to science bleeds past the bounds of any single field. One cancer researcher at George Washington University, for instance, had to halt a project investigating the best ways to collect information about gender and sexual orientation from patients. The terms the Trump administration has flagged as “gender ideology”—a list that includes pregnant people, transgender, binary, non-binary, assigned at birth, cisgender, and queer—have already been politicized. But by targeting gender as a category as well as the entire concept of DEI, the government has now granted itself a way to directly affect just about any scientific discipline in which human identity and behavior are relevant, or in which people of diverse backgrounds are involved in the work—which is to say, any scientific discipline. The National Science Foundation, for instance, has been hunting for hundreds of DEI-related terms in grants, leaving researchers fearful that merely attempting to include diverse populations in their work will tank their chances of success.

Many scientists told me that they expect the list of terms the government is flagging or excising to expand. The National Institutes of Health is reportedly scouring existing grants that mention COVID, and the Commerce Department has asked the National Oceanic and Atmospheric Administration to scan grants for climate-related words, though the upshot of those searches isn’t yet clear. Kate Brown, a science historian at MIT, told me that some of her colleagues have been passing around lists of words that they think “should not appear in grant applications,” including environment, climate, and race. David Ho, a climate scientist at the University of Hawaiʻi at Mānoa, told me that a dean at his school recommended removing even the term biodiversity from public-facing documents, out of concern that it would be flagged by searches for DEI-related terminology. (A spokesperson at the university told me that they couldn’t find any official message from the school or the university.)

Navigating these new restrictions isn’t as simple as “find and replace.” Deshira Wallace, a health-behavior researcher at the UNC Gillings School of Global Public Health, has been modifying her grant proposals to, for instance, describe the impacts of heat exposure on certain groups overrepresented among farmworkers, rather than referring directly to climate change or Black and Latino people. But she cannot hide that diversity and health equity are core to her work. Other researchers are concluding that the simpler solution may be to step away from projects that, for instance, examine the impacts of air pollution on racially segregated communities (impossible to study without acknowledging diversity) or firearm deaths among American kids (which vary by gender and other demographics).

Scientists are used to pressure from institutions and academic journals to tweak the language that they use, in order to increase the chances that their research might be funded. But to be barred from using certain words—certain concepts—that are firmly grounded in fact represents a new level of government interference, seemingly driven by a skepticism that scientists are approaching their own work correctly. Asking about gender “is established science,” helping researchers identify populations with different needs, wants, behaviors, and risks, Hilary Reno, the medical director of the St. Louis County Sexual Health Clinic, told me. (She herself runs a CDC-funded community survey about sexual-health care that asks about gender, and recently received notice from the agency that she needed to terminate all activity “promoting or inculcating gender ideology.”)

Dismissing established scientific methodology as politically transgressive is a dangerous precedent to set. In the current political environment, Republicans have become generally less trusting of scientists than Democrats are; right-wing groups are openly treating some scientists as political enemies, publishing “watch lists” filled with federal workers, many of them people of color, who work on government programs aimed at reducing health inequities. Scientific expertise itself is now being billed as a political liability, which opens the door to “a populist approach to what counts as valid scientific knowledge,” Fairchild told me. That creates the opportunity, too, for future leaders of any political affiliation to conflate popular opinion with data-driven truths. Assaults on facts and evidence become that much more permissible if facts can be massaged or dismissed by people in power.

Science, like government, has its flaws, inefficiencies, and scandals. The ways in which research has been structured and funded in the U.S. have also been subject to plenty of valid criticism and scrutiny. But the internal standards American science has set for itself have also largely held up. Reputable research must contain data that have been collected with the correct tools under reasonable conditions, are repeatable under different settings by different researchers, and have been thoroughly vetted by independent experts in the same field. These standards can make science seem tedious and opaquely technical, but empiricism serves research’s ultimate goal: to reveal what’s currently true.

Federal data sets, in particular, are meant to be publicly accessible snapshots of reality, Rachel Hardeman, a health-equity researcher at the University of Minnesota, told me. But in any scientific endeavor, “you never manipulate data” or remove it, Niema Moshiri, a bioinformatician at UC San Diego, told me, not from your own data set, not from anyone else’s, not from the public record. The scientific process itself is iterative by design—understanding changes as knowledge accrues—but tampering with or disappearing data is, among scientists, essentially equivalent to misrepresenting evidence. “It goes against everything that we are taught as scientists,” Reno said. Researchers who have been found guilty of these acts have paid the consequences: “You get papers retracted for less than that. You get your career ruined for less than that.”

The federal government has now violated that code of conduct. In yesterday’s executive order, Trump highlighted the importance of “protecting expert recommendations from inappropriate influence and increasing transparency regarding existing data.” But that is exactly what the administration’s critics have said it is already failing to do. At the end of last month, the CDC purged its website of several decades’ worth of data and content, including an infectious-disease-surveillance tool as well as surveys tracking health-risk behaviors among youths. (On Tuesday, a federal judge ordered the government to restore, for now, these and other missing data and webpages to their pre-purge state.) And as soon as the Trump administration started pulling data sets from public view, scientists started worrying that those data would reappear in an altered form, or that future scientific publications would have to be modified. Many scientists could also easily imagine government agencies insisting on removing entire columns of information or swapping the word gender for sex—an inaccurate substitution that could muddle demographic patterns and erase entire populations of people from data sets. Some of those fears have since been realized: NIH officials have been instructed to offer only “male” and “female” as choices on clinical-data forms; CDC scientists have been ordered to retract manuscripts already submitted for publication to scrub any mention of gender-acknowledging terms; at least one published paper has been deleted from a government website, its authors told that reposting will be possible only after forbidden terms are removed.

Eventually, after a lengthy legal fight or under a different president, the Trump administration’s restrictions might be lifted. But future researchers will “have to be really careful about how we interpret these next several years,” Wallace told me: Any gaps in data sets will be permanent. So might the dent in scientists’ trust in the U.S. government. “We have really taken it for granted that that data would always be there,” David Margolius, Cleveland’s director of public health, told me. In the long term, American science may no longer look reputable to scientists abroad. With so many restrictions and caveats on how data are collected, “for me, it would feel a little bit tainted,” Wallace said.

In this more hostile environment, many scientists are weighing their next moves. Some who work for the government are expecting to be fired; some at universities told me they’d consider exiting academic science, perhaps to take a private-sector job; others told me they’d stay until they were forced out. Still others, though, said they aren’t willing to let the American government dictate the terms of their scientific integrity and success. “If these things directly impact our progress? I’m going to leave,” Keolu Fox, a geneticist at UC San Diego, told me. Science, after all, is a transferable skill—one that plenty of other countries generously support.

For decades, the government’s relatively harmonious partnership with science has been a boon for the United States: American research brought the world a polio vaccine, the first crewed lunar landing, the internet. Since the Manhattan Project, science in America has also been explicitly recruited to the project of national security. The threat of America’s current research infrastructure buckling, though, has revealed how tenuous science’s place in the U.S. really is. “There’s no replacement at scale” for federal funding, Wrigley-Field told me. Universities don’t have the budget to foot the bill for every researcher who loses a grant; private foundations can supplement some funding, but the money they provide might come with strings. And a nation of privatized, commercialized research could invite “all kinds of compromises” and conflicts of interest, Brown told me.

Both by trying to control science and by funding less of it, the Trump administration is starting a slide toward a future where more, rather than fewer, people have reason to distrust science and its results. People whom the government is intent on disparaging may choose to stop participating in research, federally funded or not—further muddying scientists’ ability to monitor the nation’s health or assess how treatments or interventions work for different groups of people. If gender can be dissociated from health, little may stop federal leaders from burying other realities that they find unpalatable—the extent and severity of the growing H5N1 bird-flu outbreak, for example, or the costs of declining vaccine uptake.

There will undoubtedly be periods, in the coming weeks and months, when the practice of science feels normal. Many scientists are operating as they usually do until they are told otherwise. But that normalcy is flimsy at best, in part because the Trump administration has shown that it may not care what data, well collected or not, have to say. During his Senate confirmation hearings, Kennedy repeatedly refused to acknowledge that vaccines don’t cause autism, insisting that he would do so only “if the data is there.” Confronted by Senator Bill Cassidy with decades of data that were, in fact, there, he continued to equivocate, at one point attempting to counter with a discredited paper funded by an anti-vaccine group.

In all likelihood, more changes are to come—including, potentially, major budgetary cuts to research, as Congress weighs this year’s funding for the nation’s major research agencies. Trump and his administration are now deciding how deep a rift to make in America’s scientific firmament. How long it takes to repair the damage, or whether that will be possible at all, depends on the extent of the damage they inflict now.

Twice during his Senate confirmation hearings at the end of last month, Robert F. Kennedy Jr., America’s new health secretary, brought up a peer-reviewed study by a certain “Mawson” that had come out just the week before. “That article is by Mawson,” he said to Senator Bill Cassidy, then spelled out the author’s name for emphasis: “M-A-W-S-O-N.” And to Bernie Sanders: “Look at the Mawson study, Senator … Mawson. Just look at that study.”

“Mawson” is Anthony Mawson, an epidemiologist and a former academic who has published several papers alleging a connection between childhood vaccines and autism. (Any such connection has been thoroughly debunked.) His latest on the subject, and the one to which Kennedy was referring, appeared in a journal that is not indexed by the National Library of Medicine or by any other organization that might provide it with some scientific credibility. One leading member of the journal’s editorial board, a stubborn advocate for using hydroxychloroquine and ivermectin to treat COVID-19, has lost five papers to retraction. Another member is Didier Raoult (whose name the journal has misspelled), who has 31 retractions and a place on the Retraction Watch leaderboard, which is derived from the work of a nonprofit we cofounded. A third, and the journal’s editor in chief, is James Lyons-Weiler, who has one retraction of his own and has called himself, in a since-deleted post on X, a friend and “close adviser to Bobby Kennedy.” (Mawson told us he chose this journal because several mainstream ones had rejected his manuscript without review. Lyons-Weiler did not respond to a request for comment.)

Perhaps a scientist or politician—and certainly a citizen-activist who hopes to be the nation’s leading health-policy official—should be wary of citing anything from this researcher or this journal to support a claim. The fact that one can do so anyway in a setting of the highest stakes, while stating truthfully that the work originated in a peer-reviewed, academic publication, reveals an awkward fact: The scientific literature is an essential ocean of knowledge, in which floats an alarming amount of junk. Think of the Great Pacific Garbage Patch, but the trash cannot be identified without special knowledge and equipment. And although this problem is long-standing, until the past decade or so, no one with both the necessary expertise and the power to intervene has been inclined to help. With the Trump administration taking control of the CDC and other posts on the nation’s science bulwark, the consequences are getting worse. As RFK Jr. made plain during his confirmation hearing, the advocates or foes of virtually any claim can point to published work and say, “See? Science!”

This state of affairs is not terribly surprising when one considers how many studies labeled as “peer reviewed” appear every year: at least 3 million. The system of scientific publishing is, as others have noted, under severe strain. Junk papers proliferate at vanity journals and legitimate ones alike, due in part to the “publish or perish” ethos that pervades the research enterprise, and in part to the catastrophic business model that has captured much of scientific publishing since the early 2000s.

That model—based on a well-meaning attempt to free scientific findings from subscription paywalls—relies on what are known as article-processing charges: fees researchers pay to publishers. The charges aren’t inconsequential, sometimes running into the low five figures. And the more papers that journals publish, the more money they bring in. Researchers are solicited to feed the beast with an ever-increasing number of manuscripts, while publishers have reason to create new journals that may end up serving as a destination for lower-quality work. The result: Far too many papers appear each year in too many journals without adequate peer review or even editing.

The mess that this creates, in the form of unreliable research, can to some extent be cleaned up after publication. Indeed, the retraction rate in science—meaning the frequency with which a journal says, for one reason or another, “Don’t rely on this paper”—has been growing rapidly. It’s going up even faster than the rate of publication, having increased roughly tenfold over the past decade. That may sound as if editors are weeding out the literature more aggressively as it expands, and in some ways the news is good. But even now, far more papers should be retracted than actually are. No one likes to admit an error: not scientists, not publishers, not universities, not funders.

Profit motive can sometimes trump quality control even at the world’s largest publishers, which earn billions annually. It also fuels a ravenous pack of “paper mills” that publish scientific work with barely any standards whatsoever, including those that might be used to screen out AI-generated scientific slop.

An empiricist might say that the sum total of these articles simply adds to human knowledge. If only. Many, or even most, published papers serve no purpose whatsoever. They simply appear and … that’s it. No one ever cites them in subsequent work; they leave virtually no trace of their existence.

Until, of course, someone convinces a gullible public—or a U.S. senator—that all research currency, new and old, is created equal. Want to make the case that childhood vaccines cause autism? Find a paper in a journal that says as much and, more important, ignore the countless other articles discrediting the same idea. Consumers are already all too familiar with this strategy: News outlets use the same tactic when they tell you that chocolate, coffee, and red wine are good for you one week—but will kill you the next.

Scientists are not immune from picking and choosing, either. They may, for example, assert that there is no evidence for a claim even though such evidence exists—a practice that has been termed “dismissive citation.” Or they may cite retracted papers, either because they didn’t bother checking on those papers’ status or because that status was unclear. (Our team built and shared the Retraction Watch Database—recently acquired by another nonprofit—to help address the latter problem.)

The pharmaceutical industry can also play the science-publication system to its advantage. Today, reviewers at the FDA rely on raw data for their drug approvals, not the questionable thumbs-up of journals’ peer review. But if the agency, flawed as it may be, has its power or its workforce curbed, the scientific literature (with even greater flaws) is not prepared to fill the gap.

Kennedy has endorsed at least one idea that could help to solve these many problems. At his confirmation hearing, he suggested that scientific papers should be published alongside their peer reviews. (By convention, these appraisals are kept both anonymous and secret.) A few publishers have already taken this step, and although only time will tell if it succeeds, the practice does appear to blunt the argument that too much scientific work is hashed out behind closed doors. If such a policy were applied across the literature, we might all be better off.

Regardless, publishers must be more honest about their limitations and about the fact that many of their papers are unreliable. If they also did their part to clean up the literature by retracting more unworthy papers, so much the better. Opening up science at various stages to more aggressive scrutiny—“red teaming,” if you will—would also help. Any such reforms will be slow-moving, though, and America is foundering right now in a whirlpool of contested facts. The scientific literature is not equipped to bail us out.

Before it burned, Charlie Springer’s house contained 18,000 vinyl LPs, 12,000 CDs, 10,000 45s, 4,000 cassettes, 600 78s, 150 8-tracks, hundreds of signed musical posters, and about 100 gold records. The albums alone occupied an entire wall of shelves in the family room, and another in the garage. On his desk were a set of drumsticks from Nirvana and an old RCA microphone that Prince had given to him at a recording session for Prince. A neon Beach Boys sign—as far as he knows, one of only eight remaining in the world—hung above the dining table. In his laundry room was a Gibson guitar signed by the Everly Brothers; near his fireplace, a white Stratocaster signed to him by Eric Clapton.

Last month, the night the Eaton Fire broke out, Charlie evacuated to his girlfriend’s house. And when he came back, the remnants of his home had been bleached by the fire. The spot in the family room where the record collection had been was dark ash.

I’ve known Charlie for as long as I can remember. He and my father met because of records. In the late 1980s, Charlie was at a crowded party in the Hollywood Hills when he heard someone greet my father by his full name. Charlie whipped around: “You’re Fred Walecki? I’ve been seeing your name on records.” Dad owned a rock-and-roll-instrument shop, and musicians thanked him on their albums for the gear (and emotional support) he provided during recording sessions. Charlie was a national sales manager at Warner Bros. Records and could rattle off the B-side of any record, so of course he’d clocked Walecki appearing over and over again. Growing up, I thought every song I’d ever heard could also be found on Charlie’s shelves; his friend Jim Wagner, who once ran sales, merchandising, and advertising for Warner Bros. Records, called it the Rock and Roll Hall of Fame West.

Charlie’s collection started when he was 6. He had asked his mother to get him the record “about the dog,” and she’d brought back Patti Page’s “(How Much Is) That Doggie in the Window?” No, not that one—he wanted a 45 of Elvis’s recently released single, “Hound Dog.” He’d cart it around with him for the next seven decades, across several states, before placing it on his shelf in Altadena. At age 8, he mowed lawns and shoveled snow in his hometown outside Chicago to afford “Sweet Little Sixteen,” by Chuck Berry, and “Tequila,” by the Champs; when he was 9, he got Ray Charles’s “What’d I Say.” And when he was 10, he walked into his local record shop and found its owner, Lenny, sitting on the floor, frazzled, surrounded by piles of records. Every week, Lenny had to rearrange the records on his wall to reflect the order of the Top 40 chart made by the local radio station WLS. Charlie offered to help.

“What will it cost me?” Lenny asked.

“Two singles a week.” Charlie held on to all of those singles, and the paper surveys from WLS, too.

When he was 12, he bought his first full albums: Surfin’ Safari, by the Beach Boys; Bob Dylan’s eponymous debut; and Green Onions, by Booker T. and the M.G.s. He entered a Wisconsin seminary two years later, hoping to become a priest. There, he and his friends found a list of addresses for members of Milwaukee’s Knights of Columbus chapter, and sent out letters asking for donations—a hi-fi stereo console, a jukebox—to the poor seminarians, who went without so much. Radios were contraband, but Charlie taped one underneath the chair next to his bed, and at night, while 75 other students slept around him, he would use an earbud to listen to WLS. “And I would hear records, and I would go, Oh my God, I gotta get this record. I have to.” Seminarians could go into town only if it was strictly necessary, so he’d break his glasses and run between the optometrist and the five-and-dime. That’s how he got a couple of other Beach Boys records, the Kinks’ “Tired of Waiting for You,” and the Lovin’ Spoonful’s “Daydream.”

Charlie dropped out of seminary in 1967, at the end of his junior year. All of those five-and-dime records had been in his prefect’s room, but when he left, the prefect was nowhere to be found. So, Charlie got a ladder, wriggled through a transom, and got his collection, stored in two crates which had previously contained oranges. (“Orange crates held albums perfectly,” he told me.) Then he hitchhiked to San Francisco and grew his hair out just in time for the Summer of Love. He moved into a commune of sorts, a 16-unit apartment building with the walls between apartments broken down, and got a job hanging posters for the Fillmore on telephone poles around the Bay Area. He’d staple up psychedelic artwork advertising Jefferson Airplane, Sons of Champlin, the Grateful Dead, or Sly and the Family Stone. (He still had about 75 of those posters.) He worked at Tower Records on the side but would hand his paycheck back to his boss: The money all went to records. Anytime one of his favorites—Morrison, Mitchell, Dylan, the Beach Boys—released a new album, he’d host a listening party for friends. When he moved back to Chicago, his music collection took up most of the car. The record store he managed there, Hear Here, would receive about 20 new albums every day to play over the loudspeakers. When Charlie heard Bruce Springsteen’s first album (two before Born to Run), he thought it was such a hit, he locked the shop door. “Until I sell five of these records,” he announced, “nobody is getting out of this store.”

Next, Charlie worked his way up at a music-distribution company, starting from a gig in the warehouse (picker No. 9). Later, at Warner Bros. Records, he’d work with stores and radio stations to help artists sell enough music to get, and then sustain, their big break. To sell Takin’ It to the Streets, he drove with the Doobie Brothers so they could sign albums at a Kansas City record shop; to help Dire Straits get their start, he lobbied radio stations to play their first single for about a year until it caught on. He was also on the shortlist of people who would listen to test pressings of a new album for any pops or crackles, before the company shipped the final version. Charlie held on to about 1,000 of those rare pressings, including Fleetwood Mac’s Rumours and Prince’s Purple Rain.

He moved to Los Angeles in the ’80s to be Warner’s national sales manager, and in 1991, he bought his home on Skylane Drive, in Altadena. Nestled in the foothills, the area smelled of the hay for his neighbors’ horses. Along the fence was bougainvillea, and in his yard, a magnificent native oak that our families would sit beneath together. He started placing thousands of his albums on those shelves in the family room, overlooking that tree.

In Charlie’s house, a record was always playing. He had recently papered the walls and ceiling of his bathroom with the WLS surveys he started collecting as a child, in his first record-store job. Every record he pulled off the shelf came with a memory, he told me. And if he kept an album or a memento in his house, “it was a good story.”

A gold record from U2, on the wall next to the staircase: “All bands, when they first start off, they’re new bands, and nobody knows who they are, okay? … I went up with U2, on their first album, from Chicago to Madison, and they played a gig for about 15 people, and then we went to eat at an Italian restaurant. I went back to the restaurant a couple years later, and the same waitress waited on me, and I said, ‘Wow, I remember I was in here with U2.’ And she goes, ‘Those guys were U2?’ I was like, ‘They were U2 then and they’re U2 now.’”

In the kitchen, a poster of Jimi Hendrix striking a power chord at the Monterey Pop Festival: “Seal puts his first record out, and I have just become a vice president at Warner Bros. And I go to my very first VP lunch, and I announce, ‘Hey, this new Seal record is going to go gold.’ The senior VP of finance says, ‘You shouldn’t say that. Why would you make that kind of expectation?’ And I’m like, ‘Because I know with every corpuscle in my body it’s gonna go gold’ … So we make a $1 gentlemen’s bet. About six weeks later, it’s gold.” At the next lunch, he asked the finance executive to sign his dollar bill. Just then, Mo Ostin, the head of the label, walked in and heard about their wager. “Mo said, ‘So Charlie, is there something around the building that you always liked?’ I was like, ‘Well, that Jim Marshall poster of Hendrix.’ And he goes, ‘It’s yours.’”


*Illustration sources: RCA / Michael Ochs Archive / Getty; Stoughton Printing / Jay L. Clendenin / Los Angeles Times / Getty; Warner Brothers / Alamy; Sun Records / Alamy

RFK Jr. Won. Now What?

by

America’s health secretaries, almost as a rule, have résumés manicured to a point of frictionlessness. Once in a while, one will attract scandal during their tenure; see Tom Price’s reported fondness for chartered jets. But anyone who has garnered enough cachet to be nominated to head the Department of Health and Human Services tends to arrive in front of the Senate with such impeccable credentials that finding anything that might disqualify them from the position is difficult.

Donald Trump’s selection of Robert F. Kennedy Jr., who was confirmed today as America’s newest health secretary, was specifically intended to break that mold. Kennedy positioned himself as a truth teller determined to uproot the “corporate capture” and “tyrannical insensate bureaucracies” that had taken hold of the nation’s public-health agencies. Even so, it’s remarkable just how unimaginable his confirmation would have been in any political moment other than today’s, when an online reactionary has been given a high-level position in the Justice Department and a teenager known as “Big Balls” is advising the State Department. Kennedy holds broadly appealing views on combatting corruption and helping Americans overcome chronic disease. But he is also, to an almost cartoonish degree, not impeccably credentialed. He has trafficked in innumerable unproven and dangerous conspiracy theories about vaccines, AIDS, anthrax, President John F. Kennedy’s assassination, COVID-19, sunlight, gender dysphoria, and 5G. He has potential financial conflicts of interest. He has spoken about a worm eating part of his brain and about dumping a dead bear in Central Park. He has been accused of sexual assault. (In his confirmation hearing, Kennedy denied the allegation and said it was “debunked.”)

In the end, none of it mattered. While Senate Democrats unanimously opposed Kennedy’s confirmation, he sailed through the Senate’s vote this morning after losing just one Republican vote, that of Senator Mitch McConnell of Kentucky, a polio survivor who appears to have taken issue with Kennedy’s anti-vaccine activism. Kennedy did, however, earn the support of Senator Bill Cassidy, a physician who until last week seemed to be the Republican lawmaker most concerned about the potential damage of elevating an anti-vaccine conspiracy theorist to the nation’s highest perch in public health. Kennedy’s confirmation is a victory for Trump, and a clear message that Senate Republicans are willing to embrace pseudoscience in their unwavering deference to him. Americans’ health is in Kennedy’s hands.

So what happens next? Spokespeople for Kennedy did not respond to my request to talk with him about his agenda. Nevertheless, Kennedy’s first weeks in office will likely be hectic ones, adding to the chaos of Trump’s nonstop executive orders and Elon Musk’s crackdowns on numerous federal agencies. As HHS head, Kennedy will oversee 13 different agencies, including the CDC, FDA, and National Institutes of Health. Prior to being appointed, Kennedy said he believed that 600 employees would need to be fired at the NIH and replaced with employees more aligned with Trump’s views. (The NIH employs roughly 20,000 people, so such a cut, at least, would be minor compared with the Department of Government Efficiency’s more sweeping moves.) He has also implied that everyone at the FDA’s food center could be handed pink slips. More generally, he has said he will “remove the financial conflicts of interest in our agencies,” but he hasn’t spelled out exactly who he believes is so conflicted that they should be out of a job.

At NIH in particular, any sudden moves by Kennedy would compound changes already unfolding under the auspices of DOGE. Musk’s crew has attempted to dramatically cut the amount of administrative funding typically doled out by the agency to universities in support of scientific research. Planned meetings about those funds were also abruptly canceled last month. (The funding cuts have been temporarily halted by a federal judge, and funding meetings appear to have resumed.) It’s easy to assume that Kennedy would support these efforts, given his aspirations to fire federal bureaucrats. But the DOGE effort may in fact undermine his larger goals, setting up some potential tension between Kennedy and Musk. Research funding is essential to Kennedy’s pursuit of unraveling the causes of America’s chronic-disease crisis; he has suggested devoting more of the NIH’s resources to investigating “preventive, alternative, and holistic approaches to health.”

On the policy front, in both the immediate and long term, chronic diseases will likely occupy Kennedy’s attention the most. He has called that issue an existential threat to the United States, and it is the clearest part of Kennedy’s agenda that has bipartisan support. However, exactly what he can do on this issue is uncertain. Many of the policies he’s advocated for, such as removing junk food from school lunches, actually fall to a different agency: the U.S. Department of Agriculture. The only food-related policy he’s regularly touted that he has the power to enact is banning certain chemical additives in the food supply. Even so, banning a food additive is typically a laboriously slow legal process.

His public statements provide other, vaguer hints about issues that he will likely contend with during his term. On abortion, he has said that he will direct the FDA and NIH to closely scrutinize the safety of the abortion pill mifepristone. (Trump has previously suggested that his administration would protect access to abortion pills, though the president’s position is murky at best.) On the price of drugs, Kennedy has said that he wants to crack down on the middlemen who negotiate them for insurance companies. But by and large Kennedy has said little about how he will tackle the complex regulatory issues that are traditionally the focus of the health secretary. He might simply not have that much to say. Kennedy has implied that he cares far less about those topics than about diet and chronic disease. During his confirmation hearing, he told senators that focusing on issues such as insurance payments without lowering the rate of chronic illness would be akin to “moving deck chairs around on the Titanic.”

The biggest and most consequential question mark is how Kennedy will approach vaccines. If he were to chip away at Americans’ access to shots, or even simply at Americans’ readiness to receive them, he could degrade the nation’s protections against an array of diseases and, ultimately, cost people their lives. Kennedy’s anti-vaccine advocacy was the subject of some of the most intense scrutiny during his confirmation hearings. “If you come out unequivocally, ‘Vaccines are safe; it does not cause autism,’ that would have an incredible impact. That’s your power. So what’s it going to be?” Senator Bill Cassidy of Louisiana asked. Kennedy pledged that he would not deprioritize or delay approval of new vaccines, and not muck up the government’s vaccine-approval standards. Throughout the process, he attempted to distance himself from his past vaccine positions, which include an assertion that the federal officials supporting the U.S. childhood-vaccine program were akin to leaders in the Catholic Church covering up pedophilia among priests. But his answers to senators’ questions about his past remarks and whether vaccines cause autism were consistently evasive. And some of his plans play into the anti-vaccine camp’s hands. He has promised, for example, to push for government-funded studies to be released with their full raw data—a move that would likely please transparency advocates but would also act as an olive branch to anti-vaccine activists who have had to sue federal agencies in recent years for certain vaccine data.

Last week, after Cassidy cast a decisive committee vote that allowed Kennedy’s nomination to advance to full Senate consideration, he said in a speech on the Senate floor that he had pressure-tested Kennedy enough to feel confident that Kennedy could rebuild trust in public health. (Cassidy did not mention that advancing Kennedy was also in his political interest. A spokesperson for Cassidy declined my requests for an interview.) Kennedy holds an almost biblical status among his supporters, and a significant portion of those people distrust federal health agencies. Cassidy’s professed belief in Kennedy’s leadership offers a soothing vision: Imagine Americans whose trust in the public-health establishment has been deeply eroded over time, all with their faith restored in one of the world’s most rigorous scientific institutions thanks to a radical outsider.

But consider the logic here. By voting to confirm Kennedy, the U.S. Senate is wagering the future of our public-health system on a prayer that a conspiracy theorist can build back up the agencies that he and his supporters have spent years breaking down. A more realistic outcome may be that Kennedy leaves public health more broken than ever before. Although many Americans are skeptical of the government’s scientific institutions, polls show that relatively few have the sort of deep-seated contempt for public-health agencies that Kennedy has espoused. By pandering to that fraction of voters, Kennedy risks alienating the much larger portion of Americans who might not agree with everything the CDC has done in recent years, but also don’t think that the agency’s vaccine program is comparable to a Nazi death camp, as Kennedy has claimed.

If Kennedy did go so far as to disavow any connection between autism and vaccines, that itself might lead to trouble. Jennifer Reich, a professor at the University of Colorado at Denver who has studied vaccine skepticism, told me that the autism issue is just one part of a larger, much more diffuse set of concerns shared by parents who question vaccinating their children. For Kennedy to disavow all of his vaccine antagonism, he would essentially have to abandon his reflexive skepticism toward science more generally. Such a reversal would likely do more to turn some of his most ardent supporters against him than change their views, argues Alison Buttenheim, an expert on vaccine skepticism at the University of Pennsylvania. “People will do amazing leaps and cartwheels to not have their beliefs and their behaviors in conflict,” she told me.

If Kennedy genuinely wants to restore faith in public health, he’ll have to win over his fellow conspiracists while maintaining the trust of the many people who already thought the agencies were doing a fine job before he arrived. Perhaps he’ll try. But proclaiming, as he did in October, that the “FDA’s war on public health is about to end” is not a great way to start.

Five years ago, the coronavirus pandemic struck a bitterly divided society.

Americans first diverged over how dangerous the disease was: just a flu (as President Donald Trump repeatedly insisted) or something much deadlier.

Then they disputed public-health measures such as lockdowns and masking; a majority complied while a passionate minority fiercely resisted.

Finally, they split—and have remained split—over the value and safety of COVID‑19 vaccines. Anti-vaccine beliefs started on the fringe, but they spread to the point where Ron DeSantis, the governor of the country’s third-most-populous state, launched a campaign for president on an appeal to anti-vaccine ideology.

Five years later, one side has seemingly triumphed. The winner is not the side that initially prevailed, the side of public safety. The winner is the side that minimized the disease, then rejected public-health measures to prevent its spread, and finally refused the vaccines designed to protect against its worst effects.


Ahead of COVID’s fifth anniversary, Trump, as president-elect, nominated the country’s most outspoken vaccination opponent to head the Department of Health and Human Services. He chose a proponent of the debunked and discredited vaccines-cause-autism claim to lead the CDC. He named a strident critic of COVID‑vaccine mandates to lead the FDA. For surgeon general, he picked a believer in hydroxychloroquine, the disproven COVID‑19 remedy. His pick for director of the National Institutes of Health had advocated for letting COVID spread unchecked to encourage herd immunity. Despite having fast-tracked the development of the vaccines as president, Trump has himself trafficked in many forms of COVID‑19 denial, and has expressed his own suspicions that childhood vaccination against measles and mumps is a cause of autism.

The ascendancy of the anti-vaxxers may ultimately prove fleeting. But if the forces of science and health are to stage a comeback, it’s important to understand why those forces have gone into eclipse.

From March 2020 to February 2022, about 1 million Americans died of COVID-19. Many of those deaths occurred after vaccines became available. If every adult in the United States had received two doses of a COVID vaccine by early 2022, rather than just the 64 percent of adults who had, nearly 320,000 lives would have been saved.


Why did so many Americans resist vaccines? Perhaps the biggest reason was that the pandemic coincided with a presidential-election year, and Trump instantly recognized the crisis as a threat to his chances for reelection. He responded by denying the seriousness of the pandemic, promising that the disease would rapidly disappear on its own, and promoting quack cures.

The COVID‑19 vaccines were developed while Trump was president. They could have been advertised as a Trump achievement. But by the time they became widely available, Trump was out of office. His supporters had already made up their minds to distrust the public-health authorities that promoted the vaccines. Now they had an additional incentive: Any benefit from vaccination would redound to Trump’s successor, Joe Biden. Vaccine rejection became a badge of group loyalty, one that ultimately cost many lives.

A summer 2023 study by Yale researchers of voters in Florida and Ohio found that during the early phase of the pandemic, self-identified Republicans died at only a slightly higher rate than self-identified Democrats in the same age range. But once vaccines were introduced, Republicans became much more likely to die than Democrats. In the spring of 2021, the excess-death rate among Florida and Ohio Republicans was 43 percent higher than among Florida and Ohio Democrats in the same age range. By the late winter of 2023, the 300-odd most pro-Trump counties in the country had a COVID‑19 death rate more than two and a half times higher than the 300 or so most anti-Trump counties.

In 2016, Trump had boasted that he could shoot a man on Fifth Avenue and not lose any votes. In 2021 and 2022, his most fervent supporters risked death to prove their loyalty to Trump and his cause.

Why did political fidelity express itself in such self-harming ways?

The onset of the pandemic was an unusually confusing and disorienting event. Some people who got COVID died. Others lived. Some suffered only mild symptoms. Others spent weeks on ventilators, or emerged with long COVID and never fully recovered. Some lost businesses built over a lifetime. Others refinanced their homes with 2 percent interest rates and banked the savings.

We live in an impersonal universe, indifferent to our hopes and wishes, subject to extreme randomness. We don’t like this at all. We crave satisfying explanations. We want to believe that somebody is in control, even if it’s somebody we don’t like. At least that way, we can blame bad events on bad people. This is the eternal appeal of conspiracy theories. How did this happen? Somebody must have done it—but who? And why?

Compounding the disorientation, the coronavirus outbreak was a rapidly changing story. The scientists who researched COVID‑19 knew more in April 2020 than they did in February; more in August than in April; more in 2021 than in 2020; more in 2022 than in 2021. The official advice kept changing: Stay inside—no, go outside. Wash your hands—no, mask your face. Some Americans appreciated and accepted that knowledge improves over time, that more will be known about a new disease in month two than in month one. But not all Americans saw the world that way. They mistrusted the idea of knowledge as a developing process. Such Americans wondered: Were they lying before? Or are they lying now?

In a different era, Americans might have deferred more to medical authority. The internet has upended old ideas of what should count as authority and who possesses it.

The pandemic reduced normal human interactions. Severed from one another, Americans deepened their parasocial attachment to social-media platforms, which foment alienation and rage. Hundreds of thousands of people plunged into an alternate mental universe during COVID‑19 lockdowns. When their doors reopened, the mania did not recede. Conspiracies and mistrust of the establishment—never strangers to the American mind—had been nourished, and they grew.

The experts themselves contributed to this loss of trust.

It’s now agreed that we had little to fear from going outside in dispersed groups. But that was not the state of knowledge in the spring of 2020. At the time, medical experts insisted that any kind of mass outdoor event must be sacrificed to the imperatives of the emergency. In mid-March 2020, public-health authorities shut down some of Florida’s beaches. In California, surfers faced heavy fines for venturing into the ocean. Even the COVID‑skeptical Trump White House reluctantly canceled the April 2020 Easter-egg roll.

And then the experts abruptly reversed themselves. When George Floyd was choked to death by a Minneapolis police officer on May 25, 2020, hundreds of thousands of Americans left their homes to protest, defying three months of urgings to avoid large gatherings of all kinds, outdoor as well as indoor.

On May 29, the American Public Health Association issued a statement that proclaimed racism a public-health crisis while conspicuously refusing to condemn the sudden defiance of public-safety rules.

The next few weeks saw the largest mass protests in recent U.S. history. Approximately 15 million to 26 million people attended outdoor Black Lives Matter events in June 2020, according to a series of reputable polls. Few, if any, scientists or doctors scolded the attendees—and many politicians joined the protests, including future Vice President Kamala Harris. It all raised a suspicion: Maybe the authorities were making the rules based on politics, not science.

The politicization of health advice became even more consequential as the summer of 2020 ended. Most American public schools had closed in March. “At their peak,” Education Week reported, “the closures affected at least 55.1 million students in 124,000 U.S. public and private schools.” By September, it was already apparent that COVID‑19 posed relatively little risk to children and teenagers, and that remote learning did not work. At the same time, returning to the classroom before vaccines were available could pose some risk to teachers’ health—and possibly also to the health of the adults to whom the children returned after school.


How to balance these concerns given the imperfect information? Liberal states decided in favor of the teachers. In California, the majority of students did not return to in-person learning until the fall of 2021. New Jersey kept many of its public schools closed until then as well. Similar things happened in many other states: Illinois, Maryland, New York, and so on, through the states that voted Democratic in November 2020.

Florida, by contrast, reopened most schools in the fall of 2020. Texas soon followed, as did most other Republican-governed states. The COVID risk for students, it turned out, was minimal: According to a 2021 CDC study, less than 1 percent of Florida students contracted COVID-19 in school settings from August to December 2020 after their state restarted in-person learning. Over the 2020–21 school year, students in states that voted for Trump in the 2020 election got an average of almost twice as much in-person instruction as students in states that voted for Biden.

Any risks to teachers and school staff could have been mitigated by the universal vaccination of those groups. But deep into the fall of 2021, thousands of blue-state teachers and staff resisted vaccine mandates—including more than 5,000 in Chicago alone. By then, another school year had been interrupted by closures.

By disparaging public-health methods and discrediting vaccines, the COVID‑19 minimizers cost hundreds of thousands of people their lives. By keeping schools closed longer than absolutely necessary, the COVID maximizers hazarded the futures of young Americans.

Students from poor and troubled families, in particular, will continue to pay the cost of these learning losses for years to come. Even in liberal states, many private schools reopened for in-person instruction in the fall of 2020. The affluent and the connected could buy their children a continuing education unavailable to those who depended on public schools. Many lower-income students did not return to the classroom: Throughout the 2022–23 school year, poorer school districts reported much higher absenteeism rates than were seen before the pandemic.

Teens absent from school typically get into trouble in ways that are even more damaging than the loss of math or reading skills. New York City arrested 25 percent more minors for serious crimes in 2024 than in 2018. The national trend was similar, if less stark. The FBI reports that although crime in general declined in 2023 compared with 2022, crimes by minors rose by nearly 10 percent.

People who finish schooling during a recession tend to do worse even into middle age than those who finish in times of prosperity. They are less likely to marry, less likely to have children, and more likely to die early. The disparity between those who finish in lucky years and those who finish in unlucky years is greatest for people with the least formal education.

Will the harms of COVID prove equally enduring? We won’t know for some time. But if past experience holds, the COVID‑19 years will mark their most vulnerable victims for decades.

The story of COVID can be told as one of shocks and disturbances that wrecked two presidencies. In 2020 and 2024, incumbent administrations lost elections back-to-back, something that hadn’t happened since the deep economic depression of the late 1880s and early 1890s. The pandemic caused a recession as steep as any in U.S. history. The aftermath saw the worst inflation in half a century.

In the three years from January 2020 through December 2022, Trump and Biden both signed a series of major bills to revive and rebuild the U.S. economy. Altogether, they swelled the gross public debt from about $20 trillion in January 2017 to nearly $36 trillion today. The weight of that debt helped drive interest rates and mortgage rates higher. The burden of the pandemic debt, like learning losses, is likely to be with us for quite a long time.

Yet even while acknowledging all that went wrong, respecting all the lives lost or ruined, reckoning with all the lasting harms of the crisis, we do a dangerous injustice if we remember the story of COVID solely as a story of American failure. In truth, the story is one of strength and resilience.

Scientists did deliver vaccines to prevent the disease and treatments to recover from it. Economic policy did avert a global depression and did rapidly restore economic growth. Government assistance kept households afloat when the world shut down—and new remote-work practices enabled new patterns of freedom and happiness after the pandemic ended.

The virus was first detected in December 2019. Its genome was sequenced within days by scientists collaborating across international borders. Clinical trials for the Pfizer-BioNTech vaccine began in April 2020, and the vaccine was authorized for emergency use by the FDA in December. Additional vaccines rapidly followed, and were universally available by the spring of 2021. The weekly death toll fell by more than 90 percent from January 2021 to midsummer of that year.

The U.S. economy roared back with a strength and power that stunned the world. The initial spike of inflation has subsided. Wages are again rising faster than prices. Growth in the United States in 2023 and 2024 was faster and broader than in any peer economy.

Even more startling, the U.S. recovery outpaced China’s. That nation’s bounceback from COVID‑19 has been slow and faltering. America’s economic lead over China, once thought to be narrowing, has suddenly widened; the gap between the two countries’ GDPs grew from $5 trillion in 2021 to nearly $10 trillion in 2023. The U.S. share of world economic output is now slightly higher than it was in 1980, before China began any of its economic reforms. As he did in 2016, Trump inherits a strong and healthy economy, to which his own reckless policies—notably, his trade protectionism—are the only visible threat.

In public affairs, our bias is usually to pay most attention to disappointments and mistakes. In the pandemic, there were many errors: the partisan dogma of the COVID minimizers; the capitulation of states and municipalities to favored interest groups; the hypochondria and neuroticism of some COVID maximizers. Errors need to be studied and the lessons heeded if we are to do better next time. But if we fail to acknowledge America’s successes—even partial and imperfect successes—we not only do an injustice to the American people. We also defeat in advance their confidence to collectively meet the crises of tomorrow.

Perhaps it’s time for some national self-forgiveness here. Perhaps it’s time to accept that despite all that went wrong, despite how much there was to learn about the disease and how little time there was to learn it, and despite polarized politics and an unruly national character—despite all of that—Americans collectively met the COVID‑19 emergency about as well as could reasonably have been hoped.

The wrong people have profited from the immediate aftermath. But if we remember the pandemic accurately, the future will belong to those who rose to the crisis when their country needed them.


This article appears in the March 2025 print edition with the headline “Why the COVID Deniers Won.”

President Donald Trump might have campaigned on lowering the prices of groceries, but even as egg prices have become a minor national crisis, he has stayed quiet about the driving cause of America’s egg shortage: bird flu. Trump hasn’t outlined a plan for containing the virus, nor has he spoken about bird flu publicly since the CDC announced last April that the virus had infected a dairy worker. Last week, the CDC, which has ceased most communication with the public since Trump took office, posted data online that suggested humans may be able to spread the virus to cats. The agency quickly deleted the information.

Bird flu has now spread to cow herds across the country, led to the culling of tens of millions of poultry, sickened dozens of people in the United States, and killed one. The virus is not known to spread between humans, which has prevented the outbreak from exploding into the next pandemic. But the president’s silence raises the question: How prepared is Trump’s administration if a widespread bird-flu outbreak does unfold? The administration reportedly plans to name Gerald Parker as the head of the White House’s Office of Pandemic Preparedness and Response Policy, which was created in 2022 by Congress and is charged with organizing the responses of the various agencies that deal with infectious diseases. (I reached out to both Parker and the White House; neither replied.)

If the president names him to the post, the appointment might be the least controversial of any of Trump’s health-related picks: Parker is an expert on the interplay between human and animal health who served in the federal government for roughly a decade. But confronting bird flu—or any other pandemic threat—in this administration would require coordinating among a group of people uninterested in using most tools that can limit the spread of infectious disease.

Trump’s pick to lead the CDC, David Weldon, has questioned the safety of vaccines, and Jay Bhattacharya, the administration’s nominee to lead the National Institutes of Health, vehemently opposed COVID shutdowns. Robert F. Kennedy Jr., an anti-vaccine conspiracy theorist who likely will be installed as the head of the Department of Health and Human Services in the coming days, has implied that Anthony Fauci and Bill Gates have funded attempts to create a bird-flu virus capable of infecting humans, and that past threats of flu pandemics were concocted by federal health officials both to inflate their own importance and to pad the pockets of pharmaceutical companies that produce flu vaccines.

Many of Trump’s health appointees are united in their view that the U.S. overreacted to COVID. They—and plenty of Americans—argue that measures such as masking, lockdowns, and vaccination mandates were unnecessary to respond to COVID, or were kept in place for far too long. Faced with another major outbreak, the Trump administration will almost certainly start from that stance.

One way or another, Trump is likely to face some sort of public-health crisis this term. Most presidents do. Barack Obama, for instance, dealt with multiple major public-health crises, each brutal in its own way. Zika didn’t turn into a pandemic, but it still resulted in more than 300 American children being born with lifelong birth defects. Ebola, in 2014, killed only two people in the U.S., but allowing the virus, whose death rate can reach 90 percent, to spread freely across America would have been catastrophic. In 2009 and 2010, swine flu led to more than 12,000 deaths in the U.S.; roughly 10 percent of the victims were under 18. Even if bird flu does no more than it already has, it’ll still cause a headache for the White House. Bird flu continues to wreak financial havoc on farmers, which trickles down to consumers in the form of higher prices, particularly on eggs.

Step by step, the U.S. keeps moving closer to a reality where the bird-flu virus does spread among people. Last week, the U.S. Department of Agriculture reported that cows have now contracted the variant of the virus that was responsible for the recent fatal case in the United States. That means the chances of humans catching that strain are now higher than they were: Many recent human cases have been in dairy farmworkers. As cases of seasonal flu increase too, so does the chance of the bird-flu virus gaining mutations that allow it to spread freely between humans. If both viruses infect the same cell simultaneously, they could swap genetic material, potentially giving the bird-flu virus new abilities for transmission.

Parker clearly understands this danger. Last year, he spoke to USA Today about the potential for the virus to mutate and change the outlook of the current epidemic. He also wrote on X that “federal, state, and private sector leaders need to plan for challenges we may face if H5N1 were to make the fateful leap and become a human pathogen.” How much leeway the Trump administration will give Parker—or whoever does run the pandemic-preparedness office—to keep the U.S. out of calamity is another matter.

Plenty of public-health experts have come to look back at the coronavirus pandemic and regret certain actions. Should bird flu worsen, however, many of the same tools could become the best available options to limit its toll. Parker, for his part, expressed support for masking, social distancing, and vaccination during the worst parts of the pandemic, and although he said in 2020 that he doesn’t like lockdowns, his social-media posts at the time suggested he understood that some amount of community-level social distancing and isolation might be necessary to stop the disease’s spread. How eager the Trump administration will be to use such tools at all could depend on Parker’s ability to convince his colleagues to deploy them.

The White House pandemic-response office was set up to play air-traffic control for the CDC, the NIH, and other agencies that have a role amid any outbreak. But having a job in the White House and a title like director of pandemic preparedness does not guarantee that Parker will be able to win over the crew of pandemic-response skeptics he will be tasked with coordinating. And his job will be only more difficult after Trump sniped at the purpose of the office, telling Time in April that it “sounds good politically, but I think it’s a very expensive solution to something that won’t work.”

Although Trump appears to have thought better of dissolving the entire office, its director can’t really succeed at fulfilling its purpose without the president’s support. The only thing that could make persuading a group of pandemic skeptics to care about an infectious-disease outbreak more difficult is your boss—the president of the United States—undercutting your raison d’être. Parker has some sense of the enormity of the job he’d take on. In 2023, he tweeted, “Pandemic Preparedness, and global health security have to be a priority of the President and Congress to make a difference.” In 2025, or the years that follow, he may see firsthand what happens when the country’s leaders can’t be bothered.

Updated at 5:49 p.m. ET on February 10, 2025

“I personally think that the post–World War II system of big research universities funded heavily by the government will not continue.” That’s how one professor at a big state research university responded when I asked how he was feeling about our shared profession. That system is the cornerstone of U.S. higher education—at Harvard or Princeton, yes, but also the University of Michigan and Texas A&M. The research university has helped establish the meaning of “college” as Americans know it. But that meaning may now be up for grabs.

In the past two weeks, higher ed has been hit by a series of startling and, in some cases, potentially illegal budget cuts. First came a total freeze of federal grants and loans (since blocked, perhaps ineffectually, by two federal judges), then news that the National Science Foundation, which pays for research in basic, applied, social, and behavioral science as well as engineering, could have its funding cut by two-thirds. On Friday night, the National Institutes of Health, which provides tens of billions of dollars in research funding every year, announced an even more momentous change: According to an official notice and a post from the agency’s X account, it would be slashing the amount that it pays out in grants for administrative costs, effective as of this morning.

This latest move may sound prosaic: The Trump administration has merely put a single cap on what are called “indirect costs,” or overhead. But it’s a very big deal. Think of these as monies added to each research grant to defray the cost of whatever people, equipment, buildings, and other resources might be necessary to carry out the scientific work. If the main part of a grant is meant to pay for the salaries of graduate students and postdocs, for example, along with the materials those people will be using in experiments, then the overhead might account for the equipment that they use, and the lab space where they work, and the staff members who keep their building running. The amount allotted by the NIH for all these latter costs has varied in the past, but for some universities it was set at more than 60 percent of each grant. Now, for as long as the Trump administration’s new rule is in place, that rate will never go higher than 15 percent. Andrew Nixon, the director of communications for the Department of Health and Human Services, told me the administration takes the view that it could force universities to pay back any overhead above this rate that was received in the past. “We have currently chosen not to do so to ease the implementation of the new rate,” he said.

In practical terms, this means that every $1 million grant given to a school could have been transformed, at the stroke of midnight, into one worth about $700,000. Imagine if your income, or the revenue for your business, were cut by nearly 30 percent, all at once. At the very least, you’d have a cash-flow problem. Something would have to give, and fast. You’d need to find more money, or cut costs, or fire people, or cease certain operations—or do all of those things at once.
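The arithmetic behind those figures can be checked with a quick sketch. This is a hypothetical illustration only (the function name and the assumption that the $1 million total included overhead at the old 60 percent rate are mine, not from any official formula): recover the grant’s direct costs from its old total, then re-price the same direct costs under the 15 percent cap.

```python
def repriced_award(total_award: float, old_rate: float, new_rate: float) -> float:
    """Re-price a grant's total value when its overhead rate changes.

    The old total = direct costs * (1 + old_rate), so first recover the
    direct costs, then apply the new overhead rate to the same direct costs.
    """
    direct_costs = total_award / (1 + old_rate)
    return direct_costs * (1 + new_rate)

new_value = repriced_award(1_000_000, old_rate=0.60, new_rate=0.15)
print(round(new_value))                       # 718750: roughly $700,000
print(round(1 - new_value / 1_000_000, 5))    # 0.28125: a cut of nearly 30 percent
```

Under those assumptions, a $1 million award shrinks to about $719,000, a reduction of roughly 28 percent, consistent with the “about $700,000” and “nearly 30 percent” figures above.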

It’s safe to assume that those consequences now affect every American research university. Some campuses stand to lose $100 million a year or more. Schools with billions of dollars in endowments, tens of thousands of students, or high tuition rates will all be affected. Just as your family has to pay bills or your business has to pay salaries, so do universities. “I think we could lose $1 million to $2 million a week,” one top university administrator, who declined to be named to avoid political scrutiny, told me. But the loss could also be much larger. Administrators can only guess right now. They don’t yet know how to figure out the impact of this cut, because they’ve never been through anything like it.

Within an hour of this article’s initial publication, a federal judge in Massachusetts put a hold on the cap on indirect costs, just as the freeze on federal funds was quickly stopped in court. (Stuart Buck, who has a law degree and is the executive director of the Good Science Project, a think tank focused on improving science policy, had told me that the cut probably wouldn’t pass legal scrutiny.) But whatever happens next, a jolt has already been administered to research universities, with immediate effects. And the sudden, savage cuts are setting up these institutions for more punishment to come. A 75-year tradition of academic research in America, one that made the nation’s schools the envy of the world, has been upset.

The post–World War II system of federally funded university research can be traced back almost entirely to one man: Vannevar Bush. His diverse accomplishments included his vision, published in The Atlantic, for a networked information system that would inspire hypertext and the World Wide Web. In 1941, Bush became the first director of the Office of Scientific Research and Development, funded by Congress to carry out research for military, industrial, medical, and other purposes, including that which led to the atom bomb.

Universities in America received little public-research funding at the time. Bush thought that should change. In 1945, he put out an influential report, “Science: The Endless Frontier,” arguing that the federal government should pay for basic research in peacetime, with decisions about what to fund being made not by bureaucrats, but by the scientific community itself. Bush advocated for a new kind of organization to fund science in universities with federal money, which was realized in 1950 as the National Science Foundation. Then his model spread to the NIH and beyond.

Money from these agencies fueled the growth of universities in the second half of the 20th century. To execute their now-expanded research mission, universities built out graduate programs and research labs. The work helped them attract scientists—many of them the best in their field—who might otherwise have worked in industry, and who could also teach the growing number of undergraduates. The research university was and is not the only model for college life in America, but during this period, it became the benchmark.

Now many university professors and researchers believe that this special fusion of research and teaching is at risk. “I feel lost,” a research scientist at a top-five university who works on climate and data science told me. (She asked not to be named, because she is concerned about being targeted online.) Like others I spoke with this week, she expressed not only fear but anger and despair. She feels lost in her own career, but also as an American scientist whose identity is bound up in the legacy of Bush’s endless frontier. It’s “like I don’t know my own country anymore,” she said. Even though her work isn’t funded by the NIH, she worries that similar cuts to indirect costs will come to the NSF and other agencies. She said that her salary and benefits are paid for entirely by federal grants. If money for overhead gets held up, even temporarily, the work could get stopped and the lab shut down—even at a wealthy and prestigious school like hers.

Others I spoke with had similar reactions. Bérénice Benayoun, a gerontologist at the University of Southern California who studies how male and female immune systems respond differently to aging, has already heard that the NIH overhead cut might lead to salary freezes and layoffs at her institution. The people working in administrative, purchasing, and shared-services roles are all funded by this pool of money, she said, so they might be the first to lose their jobs. Mark Peifer, a cell biologist at the University of North Carolina at Chapel Hill, worries that his doctoral students and research techs might not get their paychecks if the lab’s accounting staff, which is also paid from overhead, is let go.

Both Benayoun and Peifer suggested that, in addition to harming their friends and co-workers, these changes would affect the pace of science overall. Some administrative duties might be handed off to faculty. “We’ll be able to do less science and train fewer talented people,” Benayoun said. Support for doctoral tuition—often paid from grants—could also be at risk, which would mean fewer graduate students doing lab work. That would slow down research, too. Some of these consequences might arrive “within weeks,” she told me.

Peifer told me he feels “both devastated and defiant.” His research relies on advanced confocal microscopes that are priced as high as $1 million each. Indirect costs on grants help a school like UNC invest in that equipment, along with laboratory cold rooms, electricity bills, and other, more mundane needs. Universities also use overhead to cover start-up costs, sometimes millions of dollars’ worth, for setting up new faculty with labs. Those kinds of investments would also be endangered if the NIH overhead cut is maintained. “It will mean the end of biomedical science in the United States,” Peifer said.

Biomedical science is probably not about to end. But Peifer and his peers do have reason to worry. Many scientists have devoted most of their lives to doing research, and they’ve done so in a system that is designed, through its structure and incentives, to wind them up. Even as their fields have grown more crowded over time, and grant funding more competitive, they may be pressured by their universities to spend more money on their work. The schools compete for rankings, status, talent, and students based in part on the number of dollars that they dole out in doing research, a metric known as “research expenditures.” Given all the pressure on professors to win more grants, and pay more bills, even just the prospect of a major funding cut can feel like a cataclysm.

The cut that kicked in and was then halted today may not be coming back. When the first Trump administration tried to limit overhead on NIH grants in its 2018 budget proposal, its plan didn’t work. Congress rejected the idea, and the NIH appropriations language that lawmakers adopted in response is very clear: Indirect-cost rates cannot be touched. The fact that the administration went ahead and changed them anyway suggests “no theory of reform,” Stuart Buck said.

At the very least, the cut to indirect costs has precipitated a short-lived funding crisis, of a type that should now be familiar to American scientists. George Porter, a computer-science professor at UC San Diego whose work focuses on how to reduce the energy required to run big data centers, has been through similar scares. In 2017, the Department of Energy briefly halted payment on a $12 million grant he was awarded after the administration sought (unsuccessfully) to eliminate the agency that funded it, Porter told me. Government shutdowns in 2018 and 2019 created further obstacles to his getting access to federal grants. “I’ve been trying to tell people that science funding is very fragile for some time,” he said.

But after most of a century of success and support from the federal government, research universities and their faculties may have become inured to risk. One computer-science professor who declined to be named because he was coming up for a performance review told me that few of his colleagues believed that anything would really change that much because of Donald Trump. “Everyone was wondering whether their grant funding would be delayed. The idea that it might be canceled, or that two-thirds of the NSF budget may be cut, just wasn’t something anyone believed could happen,” he said.

Now the sense of dread has reached even those in computer science, where grant money tends to come from other sources—the NSF, the Department of Defense, NASA, the Department of Energy. Any budget shortfalls brought on by the NIH cut could be felt by everyone across the university. “Suddenly there are some very serious rumors going around,” the computer-science professor said, including the possibility that faculty in his department would have to pay some of their grant funds back to the university to make up for the total shortfall. Even university leaders seem surprised. “This literally breaks everything,” one senior administrator at a major public university told me after learning about the NIH overhead cut. “What are they doing?”

If this cut is reinstated, or if new ones follow, universities will need to figure out how to respond. Some might press researchers to make up the deficit with future research funds, a practice that would make an already hard job even harder. Some might choose to invest more of their endowment or tuition proceeds in research, a choice that could cut financial aid, making college even less affordable. Big state schools could try to appeal to legislatures for increased funding.

Even if they head off a crisis, other institutions of higher ed might suffer in their stead. Nicholas Creel is a business-law professor at Georgia College & State University, a small, public liberal-arts college. Schools like his focus on teaching, which might suggest that they’re immune to the sort of government cuts that would be catastrophic for a large research university. But Creel worries that his college could be in trouble too, if the state government responds by shifting money to the bigger institutions. “That’ll mean less funding for schools like mine, schools that operate on a budget that those major research universities would consider a rounding error.”

In the meantime, the Trump administration’s cuts aren’t even set up to make research more efficient. The real problem, Robert Butera, an engineering professor and the chief research operations officer at the Georgia Institute of Technology, told me, is compliance bloat. Buck agrees. More changes to the regulations and policies affecting federal grants have accrued since 2016 than in the 25 years prior, and universities must hire staff to satisfy the new demands. In other words, the federal government’s own rules have helped create the rising overhead costs that the same government is now weaponizing against higher ed.

Inside universities, faculty members squabble about the details. Many scientists would agree that overhead is too high—but only because they perceive that money as being taken from their own grant funds. Administrators say that overhead never covers costs, even at the rates that were in place until last week. Despite all of this, few of those I’ve spoken with in the past few days seem to have considered making any lasting change to how universities are run.

Maybe this was never about efficiency. American confidence in higher education has plummeted: Last year, a Gallup poll reported that 36 percent of Americans had “a great deal or quite a lot” of confidence in higher ed, a figure that had reached nearly 60 percent as recently as 2015; 32 percent of respondents said they had “very little or no” confidence in the sector, up from just 10 percent 10 years ago. These changes don’t have much to do with scientific research. According to Gallup, those who have turned against universities cite the alleged “brainwashing” of students, the irrelevance of what is being taught, and the high cost of education. Destroying American university research does not directly target any of these issues. (It could very well result in even steeper tuition.) But it does send a message: The public is alienated from the university’s mission and feels shut out from the benefits it supposedly provides.

Scientists, locked away in labs doing research and scrabbling for grants, may not have been prepared to hear this. The climate and data scientist, for one, simply couldn’t believe that Americans wouldn’t want the research that she and others perform. “I just can’t understand how so many people don’t understand that this is valuable, needed work,” she said.

But now may not be the perfect time to make appeals to the value and benefit of scientific research. The time to do that was during the years in which public trust was lost. In a way, this error traces back to the start of modern federally funded research on college campuses. In “Science: The Endless Frontier,” Vannevar Bush appealed to the many benefits of scientific progress, citing penicillin, radar, insulin, air-conditioning, rayon, and plastics, among other examples. He also put scientists on a pedestal. Universities make the same appeals and value judgments to this day. Cutting back their research funding is not in the nation’s interests. Neither is insisting on the status quo.
