Dale Carnegie, the self-made titan of self-help, swore by the social power of names. Saying someone’s name, he wrote in How to Win Friends and Influence People, was like a magic spell, the key to closing deals, amassing political favors, and generally being likable. According to Carnegie, Franklin D. Roosevelt won the presidency partly because his campaign manager addressed voters by their names. The Steel King, Andrew Carnegie (no relation), reportedly secured business deals by naming companies after at least one competitor and a would-be buyer, and maintained employee morale by calling his factory workers by their first names. “If you don’t do this,” Dale Carnegie warned his readers, “you are headed for trouble.”

By Carnegie’s measure, plenty of people are in serious jeopardy. It’s not that they don’t remember what their friends and acquaintances are called; rather, saying names makes them feel anxious, nauseated, or simply awkward. In 2023, a group of psychologists dubbed this phenomenon alexinomia. People who feel it most severely might avoid addressing anyone by their name under any circumstance. For others, alexinomia is strongest around those they are closest to. For example, I don’t have trouble with most names, but when my sister and I are alone together, saying her name can feel odd and embarrassing, as if I’m spilling a secret, even though I’ve been saying her name for nearly 25 years. Some people can’t bring themselves to say the name of their wife or boyfriend or best friend—it can feel too vulnerable, too formal, or too plain awkward. Dale Carnegie was onto something: Names have a kind of power. How we use or avoid them can be a surprising window into the nature of our relationships and how we try to shape them.

The social function of names in Western society is, in many ways, an outlier. In many cultures, saying someone else’s given name is disrespectful, especially if they have higher status than you. Even your siblings, parents, and spouse might never utter your name to you. Opting for relationship terms (auntie) or unrelated nicknames (little cabbage) is the default. Meanwhile, American salespeople are trained to say customers’ names over and over again. It’s also a common tactic for building rapport in business pitches, during telemarketing calls, and on first dates.

Western norms can make sidestepping names a source of distress. For years, Thomas Ditye, a psychologist at Sigmund Freud Private University, in Vienna, and his colleague Lisa Welleschik listened as their clients described their struggles to say others’ names. In the 2023 study that coined the term alexinomia, Ditye and his colleagues interviewed 13 German-speaking women who found the phenomenon relatable. One woman told him that she couldn’t say her classmates’ names when she was younger, and after she met her husband, the issue became more pronounced. “Even to this day, it’s still difficult for me to address him by name; I always say ‘you’ or ‘hey,’ things like that,” she said. In a study published last year, Ditye and his colleagues searched online English-language discussion forums and found hundreds of posts in which men and women from around the world described how saying names made them feel weird. The team has also created an alexinomia questionnaire, with prompts that include “Saying the name of someone I like makes me feel exposed” and “I prefer using nicknames with my friends and family in order to avoid using names.”

[From the April 2023 issue: An ode to nicknames]

Names are a special feature of conversation in part because they’re almost always optional. When an element of a conversation isn’t grammatically necessary, its use is likely socially meaningful, Steven Clayman, a sociology professor at UCLA, told me. Clayman has studied broadcast-news journalists’ use of names in interviews, and found that saying someone’s name could signal—without saying so directly—that you’re speaking from the heart. But the implications of name-saying can shift depending on what’s happening at the moment someone says a name and who’s saying it; we all know that if your mom uses your name, it usually means you’re in trouble. Even changing where in the sentence the name falls can emphasize disagreement or make a statement more adversarial. “Shayla, you need to take a look at this” can sound much friendlier than “You need to take a look at this, Shayla.” And, of course, when someone says your name excessively, they sound like an alien pretending to be a human. “It may be that folks with alexinomia have this gut intuition, which is correct, that to use a name is to take a stand, to do something—and maybe something you didn’t intend,” Clayman said. Another person could misinterpret you saying their name as a sign of closeness or hostility. Why not just avoid the issue?

In his case studies and review of internet forums, Ditye noticed that many people mentioned tripping up on the names of those they were most intimate with—like me, with my sister. This might sound counterintuitive, but saying the names of people already close to us can feel “too personal, too emotional, to a degree that it’s unpleasant,” Ditye told me, even more so than saying the name of a stranger. Perhaps the stakes are higher with those we love, or the intimacy is exaggerated. People on the forums agreed that avoiding loved ones’ names was a way to manage closeness, but sometimes in the opposite way. “I think this is pretty common among close couples,” one person wrote. “It’s a good thing.” Using a name with your nearest and dearest can feel impersonal, like you’re a used car salesman trying to close a deal. If I say my boyfriend’s name, it does seem both too formal and too revealing. But if I use his nickname—Squint—I feel less awkward.

[Read: Why we speak more weirdly at home]

Alexinomia is a mostly harmless quirk of the human experience. (It can cause problems in rare cases, Ditye told me, if, say, you can’t call out a loved one’s name when they’re walking into traffic.) Still, if you avoid saying the names of those closest to you, it can skew their perception of how you feel about them. One of Ditye’s study participants shared that her husband was upset by her inability to say his name. It made him feel unloved.

As Dale Carnegie wrote, “a person’s name is to that person the sweetest and most important sound in any language.” Pushing through the discomfort and simply saying their name every now and then can remind your loved ones that you care. By saying someone else’s name, even when it’s awkward, you’ll be offering a bit of yourself at the same time.

When my family returned to our home in Santa Monica last Sunday night, we breathed a sigh of relief. Our house was fine, and the air quality was in the “good” category. Schools would reopen the next day. But as we unpacked, I noticed what looked like salt-and-pepper snow delicately dancing over the street. Ash from the Palisades Fire, burning just five miles north of us, was descending all around, coating the car we had left behind. In the backyard, it gathered over the small patch of turf we played on and in small clusters all across the garden, where my kids had recently planted carrots.

The next morning, we walked to school, talking about the blue sky. My 8-year-old pointed out the piles of windblown ash by the curb. That day, the kids would stay inside so the school could clean the debris from the playground equipment and yard.

As I walked the four blocks back home, a city-owned street sweeper buzzed past. When the truck’s bristles hit the pockets of ash, they kicked up car-size clouds of dust, sending all the debris back into the air. I clutched my N95 mask tighter against my face, pulled down my sunglasses, and jogged away. I closed the door tightly behind me.

That night, a local bookstore and meditation space held a ceremony to “call in the rain for a land devastated by fire.” Rain would help keep more fires from starting, and it would also help wash the ash away. For now, we’re left to deal with it on our own, swabbing surfaces, clearing streets, wondering what we’re breathing in and what it will do to the waterways that absorb it.

On Tuesday, the debris was continuing to fall, so the school held a “walking-only” recess. When I saw gardeners arriving armed with leaf blowers, my heart sank. (Los Angeles County has temporarily banned their use because they throw up so much dust.) But no one knew exactly the right way to clean up the mess. One neighbor was vacuuming their steps with a Shop-Vac.

With smoke, the hazards are clear: You can see it and smell it, and get out of the way. Our phones have been vibrating with air-quality indexes, which measure pollution in the air, but not ash. With ash circling like toxic feathers, it’s hard to know what is safe. The residue from house fires contains far more toxins than that of brush fires. The PVC pipes, lithium-ion car batteries, plastic siding, flooring, and everything else that burned in the blazes launched a soup of chemicals—nickel, chromium, arsenic, mercury—into the air. Older homes can contain lead and asbestos. Until Wednesday, the day after walking-only recess, L.A. County had an ash advisory in place, which recommended staying inside and wearing a mask and goggles when leaving the house.

But our lives in Los Angeles are largely outside: This is a city that dines outdoors all year long, where winter temperatures hover in the 60s and surfers are in the water in January. With no rain in the forecast, how long will our lives be coated in a fine layer of toxic dust? Maybe a very long time: A webinar put on by California Communities Against Toxics warned that the amount of ash that the fires had generated would take years to excavate and posed public-health risks.

The prospect of continued exposure to airborne chemicals sounds ominous, but Thomas Borch, a professor of environmental and agricultural chemistry at Colorado State University, was more sanguine. After the 2021 Marshall Fire tore through towns in the foothills of the Rocky Mountains, Borch studied contaminants in the soil at houses near the fire. Some of the properties had elevated levels of heavy metals, but most were still below levels of concern. And although living among clouds of fine debris might feel apocalyptic, Borch told me that the wind could be helping to dilute the contamination in my neighborhood. “A lot of these ashes spread out over a much bigger area,” he said, which helps mitigate their health impacts.

Once ash and soot creep inside homes—through doors and windows, on shoes and clothes—“it’s a lot harder to actually get rid of,” he added. Cleaning can reinvigorate pollution inside the home, so it has to be done carefully. Borch advised that we vacuum with a HEPA filter and wet-mop surfaces to keep pollution from building up inside the house.

But the real questions regarding human health and ash are still open. Researchers have only recently started to investigate how the ash from structural fires differs from that of wildfires. In Los Angeles, Borch’s colleagues have set up 10 coffee-bag-size samplers around the fires (as close as they were allowed to go). They also plan to collect ash from within the burn areas and from windblown dust to compare the different toxins in smoke and ash, as well as their concentrations in the weeks and months following the fires.

If rain does arrive, it will wash out much of the debris, and the city will feel clear again. But that rain could also carry contaminants into streams, reservoirs used for drinking water, or the Pacific Ocean. Perhaps by then the wind will have blown most of the ash away, or, in places outside the fire’s direct path, such as my neighborhood, we will have cleared it on our own. (Clearing ash in fire zones is a regulated process.) My family is still waiting to pull up the vegetables in our yard, but I’m no longer worried about bouncing balls and biking. We’ve been slowly wetting down our stone patio and stairs and trying to gently sweep up the ash, while making sure we’re protected by gloves, goggles, and masks. Half of the neighbors are wearing masks outside. We’re still swirling around like ash from the crisis, waiting for the rains to put everything back in place.

No drug is quite like nicotine. When it hits your bloodstream, you’re sent on a ride of double euphoria: an immediate jolt of adrenaline, like a strong cup of coffee injected directly into your brain, along with the calming effect of a beer. Nicotine is what gets people hooked on cigarettes, despite their health risks and putrid smell. It is, in essence, what cigarette companies are selling, and what they’ve always been selling. Without nicotine, a cigarette is just smoldering leaves wrapped in some fancy paper.

But if the Biden administration gets its way, that’s essentially all cigarettes will be. Today, regulators at the FDA announced that they are pushing forward with a rule that would dramatically limit how much nicotine can go in a cigarette. The average cigarette nowadays is estimated to have roughly 17 milligrams of the drug. Under the new regulation, that would fall to less than one milligram. If enacted—still a big if—it would decimate the demand for cigarettes more effectively than any public-service announcement ever could.

The idea behind the proposal is to make cigarettes nonaddictive. One study found that some young people begin feeling the symptoms of nicotine addiction within a matter of days after starting to smoke. In 2022, roughly half of adult smokers tried to quit, but fewer than 10 percent were ultimately successful.

For that reason, the rule could permanently change smoking in America. The FDA insists that the proposal isn’t a ban per se. But in terms of the rule’s intended effect, ban may indeed be an apt term. The FDA estimates that nearly 13 million people—more than 40 percent of current adult smokers—would quit smoking within one year of the rule taking effect. After all, why inhale cancerous fumes without even the promise of a buzz? By the end of the century, the FDA predicts, 4.3 million fewer people would die because of cigarettes. The agency’s move, therefore, should be wonderful news for just about everyone except tobacco executives. (Luis Pinto, a vice president at Reynolds American, which makes Camel and Newport cigarettes, told me in an email that the policy “would effectively eliminate legal cigarettes and fuel an already massive illicit nicotine market.”)

Still, there’s no telling whether the FDA’s idea will actually come to fruition. The regulation released today is just a proposal. For the next eight months, the public—including tobacco companies—will have the opportunity to comment on the proposal. Then the Trump administration can decide whether to finalize the regulation as is, make changes, or scrap it entirely. Donald Trump has not signaled what he will do, and his relationship to cigarettes is complicated. In 2017, his FDA commissioner put the idea of cutting the nicotine in cigarettes to nonaddictive levels on the agency’s agenda. But the tobacco industry has recently attempted to cozy up to the president-elect. A subsidiary of Reynolds donated $10 million to a super PAC backing Trump. Even if the Trump administration finalizes the rule, the FDA plans to give tobacco companies two years to comply, meaning that the earliest cigarettes would actually change would be fall 2027.

If Trump goes through with the rule, it may be the end of cigarettes. But although cigarettes might be inseparable from nicotine, nicotine is not inseparable from cigarettes. These days, people looking to consume the drug can pop a coffee-flavored Zyn in their upper lip or puff on a banana-ice-flavored e-cigarette. These products are generally safer than cigarettes because they do not burn tobacco, and it is tobacco smoke, not nicotine, that causes most of the harmful effects of cigarettes. The FDA estimates that, should cigarettes lose their nicotine, roughly half of current smokers would transition to other, safer products to get their fix, Brian King, the head of the FDA’s tobacco center, told me.

Whether nicotine’s staying power is a good thing is still unclear. Few people—even in the tobacco industry—will argue with a straight face that cigarettes are safe. Nicotine defenders, however, are far more common. In my time covering nicotine, I have spoken with plenty of people who emphatically believe that the drug helps them get through their day, and that their habit is no more shameful or harmful than an addiction to caffeine. There is clearly a market for these products. Just ask Philip Morris International, which earlier this year invested $600 million to build a new factory to meet surging demand for Zyn. But it’s true, too, that nicotine is addictive, regardless of how it’s consumed. There isn’t much data looking at long-term impacts of these new nicotine-delivery devices, but the effects of nicotine, such as increased heart rate and blood pressure, are enough to give cardiologists pause.

I promised my parents—both smokers during my childhood—that I’d never pick up a cigarette. I kept that promise. But about a year ago, I started to wonder just how bad safer forms of nicotine could actually be. (Mom, if you’re reading this, I’m sorry.) I found myself experimenting with Zyn. Doing so gave me a window into why my parents craved cigarettes, but it also quickly gave me a firsthand look at why it was always so hard for them to quit. My one-Zyn-a-day habit quickly became two, and two became four. And yet, each time the pouch hit my lip, that burst of dopamine seemed to get more and more lackluster. Soon enough, I was reaching for nicotine without even thinking about it. The FDA’s new proposal, if finalized, will mean that misguided teens (or, in my case, 33-year-olds) prone to experimentation won’t do so with deadly cigarettes. But that will be far from the end of America’s relationship with nicotine.

The first time Jamie Cassidy was pregnant, the fetus had a genetic mutation so devastating that she and her husband, Brennan, decided to terminate in the second trimester. The next time they tried for a baby, they weren’t taking chances: They would use IVF and screen their embryos’ DNA. They wanted to avoid transferring any embryos with the single-gene mutation that had doomed their first pregnancy. And then they started wondering what other ailments they could save their future son or daughter from.

The Cassidys’ doctor told them about a company, Genomic Prediction, that could assess their potential children’s odds of developing conditions that aren’t tied to a single gene, such as heart disease, diabetes, and schizophrenia. The test wouldn’t be any more invasive than screening for a single gene—all the company needed was an embryo biopsy. The science is still in its early stages, but the Cassidys didn’t mind. Brennan has Type 1 diabetes and didn’t want to pass that condition on, either. “If I can forecast that my baby is going to have less chance to have Type 1 diabetes than I did, I want that,” he told me. “I’d burn all my money to know that.”

Thanks to more sophisticated genetic-testing techniques, IVF—an expensive, invasive treatment originally developed to help people with fertility troubles—is becoming a tool for optimizing health. A handful of companies offer screening for diseases and disorders that range from life-threatening (cancer) to life-altering (celiac disease). In many cases, these conditions’ genetic links are poorly understood or weak, just one factor of many that determine whether a person develops a particular condition. But bringing another human being into the universe can be a terrifying-enough prospect that some parents are turning to extensive genetic testing to help pick their future offspring.

Genetic screening has been a crucial part of IVF—and pregnancy—for decades. Medical guidelines recommend that any aspiring mother should be given the option to test her own DNA and find out whether she risks passing on dangerous genes, a practice known as carrier screening. If both parents carry a particular mutation, doctors will likely suggest IVF and embryo screening. These measures are traditionally limited to conditions linked to single-gene mutations, such as Huntington’s disease, most of which are exceedingly rare and seriously affect a child’s quality of life. During IVF, embryos are also typically screened for chromosomal abnormalities to help avoid miscarriages, and generally nonheritable conditions such as Down syndrome.

[Read: Genetic discrimination is coming for us all]

As the scientific understanding of the genome has progressed, companies including Genomic Prediction and a competitor called Orchid have begun offering a test that promises a more comprehensive investigation of the risks lurking in an embryo’s genes, using what’s known as a polygenic risk score. Most common ailments aren’t connected to a single gene; polygenic risk scores aim to predict the lifetime likelihood of conditions, such as diabetes, in which many genes contribute to a person’s risk. Consumer DNA-testing companies such as 23andMe use these scores to tell customers whether they have, say, a slightly above-average likelihood of developing celiac disease, along with a disclaimer that lifestyle and other factors can also influence their chances. These risk scores could theoretically help identify customers who, say, need a colonoscopy earlier in life, or who need to double down on that New Year’s resolution to eat healthier. But the current scientific consensus is that polygenic risk scores can’t yet provide useful insights into a person’s health, if indeed they ever will.  

Analyzing an embryo’s DNA to predict its chances of developing genetically complex conditions such as diabetes is an even thornier issue. The tests, which can run thousands of dollars and are typically not covered by insurance, involve sending a small sample of the embryos to the companies’ labs. In the United States, such tests don’t need to be approved by the FDA. Genomic Prediction even offers customers an assessment of which embryos are “healthiest” overall. But the control these services offer is an illusion, like promising to predict the weather a year in advance, Robert Klitzman, a Columbia University bioethicist and the author of the book Designing Babies, told me. A spokesperson for the American Society for Reproductive Medicine told me there aren’t enough quality data to even take a position on whether such tests are useful. And last year, the American College of Medical Genetics and Genomics published a lengthy position statement concluding that the benefits of screening embryos for polygenic risk were “unproven” and that the tests “should not be offered as a clinical service.” The statement raised the possibility that people might undergo extra, unnecessary rounds of IVF in search of ever healthier embryos.

Genomic Prediction published a rebuttal to the ACMG that cited, among other research, several studies led by company researchers that concluded that among siblings, those with a lower risk score were significantly less likely to have a given condition. The truth is, though, the effect of screening embryos for polygenic risk won’t be clear until the embryos chosen to develop into fetuses are born, grow up, and either develop diabetes or don’t. Genomic Prediction and Orchid both told me that humanity shouldn’t have to wait that long for the insights their tests provide. Polygenic risk scores are “one of the most valuable pieces of information that you can get,” Orchid’s founder and CEO, Noor Siddiqui, told me. Nathan Treff, Genomic Prediction’s chief science officer, was similarly bullish. “Everybody has some kind of family history of diabetes, cancer, and heart disease. So we really don’t have a situation where there’s no reason for testing,” he told me.

Many of the experts I spoke with about these tests are concerned that people might opt into IVF because they’re chasing certainty that companies can’t really promise. A study last year found both high interest and approval among Americans when it comes to screening embryos for polygenic risk. For now, most of the customers I interviewed used advanced tests that included polygenic risk because they were going through IVF anyway. Many of Genomic Prediction’s customers using the scores are participants in a clinical trial. But Tara Harandi-Zadeh, an investor in Orchid, told me she planned to do IVF even though she and her husband have no fertility issues or history of genetic disease. Harandi-Zadeh is especially worried about de novo mutations—genetic changes that occur spontaneously, without any hereditary link. She wants to screen her embryos to weed out monogenic diseases and plan for the risks of polygenic ones. “If I have that information, I can help my child at the stages of life to be able to get treatment or tests or just prepare for it,” she said. Treff told me that people like Harandi-Zadeh make up a small percentage of Genomic Prediction’s customers, but their numbers are growing.

[Emi Nietfeld: America’s IVF failure]

Scientists just don’t understand enough about the genome to confidently predict what any single embryo will be like should it go on to become a person. Most genes influence many facets of our being—our health, our physical traits, our personality—and only a fraction of those interactions have been investigated. “You don’t know the full package,” Klitzman said. “Bipolar disorder is associated with creativity. So if you screen out bipolar disorder, you may also be screening out genes for creativity, for instance.” Because no embryo is completely risk-free, future parents might also have to decide whether they think, say, a risk of diabetes or a risk of heart disease sounds worse. A paper out last week put it this way: “The expected reductions in disease risk are modest, at best—even if the clinical, ethical and social concerns are dismissed.”

Those concerns are significant. More and more people are already turning to IVF for reasons other than infertility. Some select their children based on sex. Jeffrey Steinberg, a fertility doctor with clinics in the U.S. and internationally, offers eye color selection and told me he is working on height. Orchid assesses genetic risk for some autism-spectrum disorders, and Genomic Prediction plans to add a similar screening to its catalog. A paper published last week argued that editing embryos—not just testing them—could mitigate genetic risk for a variety of conditions, while also acknowledging it could “deepen health inequalities.” (In the U.S., clinical trials of embryo editing cannot be approved by the FDA, and public funds cannot be used for research in which embryos are edited.) Critics say that even if technology could cut the prevalence of diseases like diabetes, doing so could drive discrimination against those born with such “undesirable” traits. Social services and support for people with those conditions could also erode—similar concerns have been raised, for example, in Iceland, where pregnancy screenings have all but eliminated Down-syndrome births.

[From the December 2020 issue: The last children of Down syndrome]

Even if the science does catch up to the ambitions of companies like Genomic Prediction, genetics will never guarantee a child a healthy life. “Of the 100 things that could go wrong with your baby, 90 percent of them or more are not genetic,” Hank Greely, the director of the Center for Law and the Biosciences at Stanford University, told me. That’s partly why the Cassidys decided to ignore most of their screening results and simply select the embryo that didn’t have the monogenic mutation that Jamie carried, and had the lowest risk of diabetes. “We’re not trying to have a kid that’s 6 foot 2 and blond hair and blue eyes and going to go to Harvard. We just want a healthy baby,” Brennan told me.

Their son was born in 2023 and so far has been at the top of the curve for every developmental marker: He’s big and tall; he talked and walked early. It will be years, probably, before they know whether or not he’s diabetic. But it’s hard, they said, not to feel that they picked the right embryo.

Influenza cases have been surging. RSV activity is “very high.” Signs of COVID have been mounting in sewer water, and norovirus, too, is spawning outbreaks like we haven’t seen for at least a dozen years. You might even say that America is in the midst of a “quad-demic,” although I really hope you don’t, because “quad-demic” is not a word that anyone should say.

With that in mind, here are The Atlantic’s tips and tricks for steering clear of any illness during this year’s terrible quad-demic. What are The Atlantic’s tips and tricks? They are soap.

Consider the norovirus, a real terror of a pathogen, just a couple dozen nanometers in length, with its invasive acids tucked inside a protein coat. Exposure to fewer than 100 particles of norovirus can leave you with several days’ worth of vomiting and diarrhea. Those particles are very, very hard to kill.

Douse them in a squirt of alcohol, and, chances are, they’ll come through just fine. One study looked at a spate of norovirus outbreaks at nursing homes in New England during the winter of 2006–07, and found that locations where staff made regular use of hand sanitizers were at much greater risk of experiencing an outbreak than others in the study. Why? Because those other nursing homes were equipped with something better.

They had soap.

Research finds that soap is good at cleaning things. At least 4,000 years of history suggest the same. Soap works because its structure mixes well with water on one end and with oils on the other. The latter, hydrophobic side can hook into, and then destroy, the membranes that surround some microbes (though norovirus isn’t one of them). Molecules of soap also cluster up in little balls that can surround and trap some germy grime before it’s flushed away beneath the tap. And soap, being sudsy, makes washing hands more fun.

Not everyone endorses washing hands. Pete Hegseth, whose good judgment will be judged today in his confirmation hearing for secretary of defense, once said that he hadn’t washed his hands in 10 years. He later said this was a joke. After that, he started hawking bars of soap shaped like grenades. The man who picked him is, of course, more than avid in his washing-up; Donald Trump is known to use his Irish Spring down to the sliver.

For all his love of soap, Trump also seems attached to hand sanitizer: His first administration kept Purell supplied just outside the Oval Office, per Politico. This would have helped keep him free of certain pathogens, but not all of them. When scientists compare different means of removing norovirus from fingertips, they find that none is all that good, and some are extra bad. Commercial hand sanitizers hardly work. The same is true for quaternary ammonium cations, also known as QACs or “quats,” which are found in many standard disinfecting products for the home. My local gym dispenses antiseptic wipes for cleaning the equipment; these are tissues soaked in benzalkonium chloride, a QAC. Quats may work for killing off the germs that lead to COVID or the flu, but studies hint they might be flat-out useless against norovirus.

[Read: Can’t we at least give prisoners soap?]

The science of disinfecting stuff is subtle. And a lot of what we thought we knew about killing off norovirus has turned out to be misguided. It’s very hard to grow a norovirus in the lab, so for a while, scientists used another virus from the same family—feline calicivirus, which can give a cat a cold—as a stand-in for their experiments. This was not a good idea. “Feline calicivirus is a wimp compared to human norovirus,” Lee-Ann Jaykus, an expert on food virology at North Carolina State University, told me. Her work has shown, for example, that bleach works pretty well at disinfecting feline virus in the lab, and that the same is true for a mouse norovirus that is often used in these experiments. But when she and colleagues tested human-norovirus samples drawn from patients’ fecal specimens, the particles seemed far more resistant.

You know what works better than hand sanitizers or QACs at getting rid of actual human norovirus? I’ll bet you do! It’s soap.

Or maybe one should say, it’s washing up with soap. A letter published in The Journal of Hospital Infection in 2015 by a team of German hygienists followed up on earlier work comparing hand sanitizers with soap and water, and argued that the benefits of the latter were mechanical in nature, by which the hygienists meant that simply rubbing one’s hands together under running water could produce an analogous effect. (They also argued that some kinds of hand sanitizer can inactivate a norovirus in a way that soap and water can’t.) Jaykus’s team has also found that the hand-rubbing part of hand-washing contributes the lion’s share of disinfecting. “It’s not an inactivation step; it’s a removal step,” she told me. As for soap, its role may be secondary to that of all the rubbing and the water: “We use the soap to make your hands slippery,” Jaykus said. “It makes it easier to wash your hands, and it also loosens up any debris.”

[Read: Wash your hands and pray you don’t get sick]

This is faint praise for soap, but it’s hardly damning. If washing at the sink disinfects your hands, and soap facilitates that process, then great. And soap may even work in cases where the soap itself is grimy—a bathroom situation known (to me) as “the dirty-bar conundrum.” Some research finds that washing up with soap and contaminated water is beneficial too. Soap: It really works!

But only to a point. I asked Jaykus how she might proceed if she had a case of norovirus in her household. Would she wash her hands and wipe down surfaces with soap, or would she opt for something stronger?

She said that if her household were affected, she’d be sure to wash her hands, and she might try to do some cleaning with chlorine. But even so, she’d expect the worst to happen. Norovirus is so contagious, its chance of marching through a given house—especially one with kids—is very high. “I would pretty much call my boss and say I’m going to be out for four days,” Jaykus told me. “I’m sorry to say that I would give up.”

Maybe we should add that to our list of tips and tricks for getting by in January: soap, for sure, but also, when your time has come, cheerful acquiescence.

In 1900, a former schoolteacher named Carrie Nation walked into a bar in Kiowa, Kansas, proclaimed, “Men, I have come to save you from a drunkard’s fate,” and proceeded to hurl bricks and stones at bottles of liquor. The men, interested less in spiritual salvation and more in physical safety, fled to a corner. Nation destroyed three saloons that day, using a billiard ball when she ran out of bricks and rocks, which she called “smashers.” She eventually—and famously—switched to a hatchet, using it across years of attacks on what she considered to be the cause of society’s moral failings. She referred to this period of her life as one of “hatchetation.”

By comparison, U.S. Surgeon General Vivek Murthy, an internist of mild disposition perhaps best known for raising alarm about the “loneliness epidemic,” has taken a gentler approach to the obstinate challenge of alcohol. His recent call to add cancer warnings to alcoholic products was made without violence or yelling. But the recommendation, if followed, would be the most significant action taken against alcohol since at least the 1980s, when new laws set the national drinking age at 21 and mandated warning labels concerning, among other things, alcohol’s pregnancy-related risks. Murthy’s proposal is part of ever-grimmer messaging from public-health officials about even moderate drinking, and comes during a notable shift in cultural attitudes toward alcohol, especially among “sober-curious” young people. In 2020, my colleague Olga Khazan asked why no one seemed interested in creating a modern temperance movement. Now that movement has arrived with a distinctly 21st-century twist. Carrie Nation was trying to transform the soul of her country. Today’s temperance is focused on the transformation of self.

The movement of the 19th and early 20th centuries—which eventually brought about Prohibition—went hand in hand with broad religious revivalism and the campaign for women’s rights. It considered alcohol to be unhealthy for women, families, and the general state of humanity. The depth of the problem posed by alcohol in pre-Prohibition America is hard to fathom: In 1830, Americans drank three times the amount of spirits that we do today, the equivalent of 90 bottles of 80-proof booze a year. As distilled liquor became widely available, men were wasting most of their wages on alcohol and staying out all night at saloons, and what we now call domestic abuse was rampant, the food historian Sarah Wassberg Johnson told me. Members of the Woman’s Christian Temperance Union saw themselves as a progressive group helping the disadvantaged. “They were protecting the home, protecting the family, and protecting the nation by getting rid of alcohol,” Dan Malleck, a health-sciences professor at Brock University, told me. In the latter half of the 19th century, young people signaled their moral virtue by taking temperance pledges.

Today’s sober-curious, by contrast, post on Instagram about how Dry January has reduced their inflammation, sharpened their jawline, and improved their sleep score. The sanctity of the home, or the overall moral health of society—not to mention the 37 Americans who die in drunk-driving crashes every day—appears to be less of a concern. (To be fair, this focus on health is partially a response to research on moderate alcohol consumption’s detrimental effects on heart health, cancer risk, and lifespan.) In a 2020 Gallup poll, 86 percent of respondents said that drinking alcohol was morally acceptable, an increase from 78 percent in 2018. By contrast, more than half of young adults surveyed in 2023 expressed concerns about the health risks of moderate drinking.

[From the July/August 2021 issue: America has a drinking problem]

Colleen Myles, a professor at Texas State University who studies how alcoholic drinks change cultures, told me that such responses don’t mean that the national conversation about alcohol has abandoned morality—simply that Americans’ ethical center of gravity has drifted. She considers modern teetotaling to be steered by a great moral project of our age: self-optimization. In her book Sober Curious, Ruby Warrington wrote that lower alcohol intake “is the next logical step in the wellness revolution.” Myles said that choosing not to drink in an alcohol-soaked culture is seen as an act of authenticity or self-care; social change, but enacted through the individual. In 2019, a nonalcoholic-spirit producer, who calls her product a “euphoric” instead of a mocktail, almost echoed Carrie Nation when she told The New York Times, “Alcohol is a women’s lib issue, an LGBTTQQIAAP issue, a race issue.” But her vision of temperance was much less socially minded: Sober-curiosity, she said, was about a person’s “freedom to choose.” One can hardly imagine Congress or a radical activist like Nation attempting to restrict that freedom by outlawing the sale of espresso martinis. The proposed warning label, however, with its nod to individual health (and absence of radical social action), is more fitting for our age of wellness. It won’t cure society of all its ills, but it at least has a shot at persuading some people to tone down their drinking.

The original temperance movement’s end result—Prohibition—was more ambitious, and took place at the societal level. Prohibition didn’t make the personal act of drinking illegal, but rather the sale, purchase, and transport of alcohol. After Congress proposed the Eighteenth Amendment in 1917, it allowed seven years for the measure to pass; thanks to widespread enthusiasm, the states ratified it in only 13 months. The amendment and the Volstead Act, the law that enforced it, passed in 1919, and Prohibition officially kicked off in 1920.

In this century, “I don’t think we’re going to have Prohibition again,” Myles said, not least because the sober-curious are not advocating for policy change at this scale. Instead, neo-temperates are shifting social and, yes, moral norms about alcohol by emphasizing its effects on health. They also, crucially, are creating markets for nonalcoholic drinks and spaces. The original temperance movement similarly popularized a number of new beverages, such as sodas and fruit juices. But unlike the modern version, it directly attacked the alcoholic-beverage industry. In 1916, the United States was home to 1,300 breweries that made full-strength beer; 10 years later, they were all gone.

[From the April 1921 issue: Relative values in Prohibition]

Alcohol consumption, and the deaths associated with it, decreased significantly during Prohibition. But many people continued to buy alcohol illegally or make it themselves. Part of the reason the temperance movement didn’t usher in utopia, Malleck said, is that it failed to recognize how drunkenness could be fueled by still other societal problems, such as low wages or 12-hour workdays in factories where you were liable to lose a limb or have to urinate in a corner. These issues persisted even when alcohol was outlawed. In 1933, during the Great Depression, legislators decided the country needed the economic boost from alcohol sales and repealed Prohibition. President Herbert Hoover called Prohibition a noble experiment, but many historians consider it a failure. Today, about 60 percent of Americans drink, and that figure has held steady for more than four decades.

And yet, over the past several years, signs have appeared that fewer young people are drinking. If bricks and hatchets couldn’t convince Americans to transform their relationship to alcohol, perhaps the promise of finding your best self through phony negronis and nonalcoholic IPAs will.


Three years ago, when it was trickling into the United States, the bird-flu virus that recently killed a man in Louisiana was, to most Americans, an obscure and distant threat. Now it has spread through all 50 states, affecting more than 100 million birds, most of them domestic poultry; nearly 1,000 herds of dairy cattle have been confirmed to be harboring the virus too. At least 66 Americans, most of them working in close contact with cows, have fallen sick. A full-blown H5N1 pandemic is not guaranteed—the CDC judges the risk of one developing to be “moderate.” But this virus is fundamentally more difficult to manage than even a few months ago and is now poised to become a persistent danger to people.

That didn’t have to be the reality for the United States. “The experiment of whether H5 can ever be successful in human populations is happening before our eyes,” Seema Lakdawala, a flu virologist at Emory University, told me. “And we are doing nothing to stop it.” The story of bird flu in this country could have been shorter. It could have involved far fewer cows. The U.S. has just chosen not to write it that way.

[Read: America’s infectious-disease barometer is off]

The USDA and the CDC have doggedly defended their response to H5N1, arguing that their interventions have been appropriately aggressive and timely. And governments, of course, don’t have complete control over outbreaks. But compared at least with the infectious threat most prominent in very recent memory, H5N1 should have been a manageable foe, experts outside of federal agencies told me. When SARS-CoV-2, the virus that sparked the coronavirus pandemic, first spilled into humans, almost nothing stood in its way. It was a brand-new pathogen, entering a population with no preexisting immunity, public awareness, tests, antivirals, or vaccines to fight it.

H5N1, meanwhile, is a flu virus that scientists have been studying since the 1990s, when it was first detected in Chinese fowl. It has spent decades triggering sporadic outbreaks in people. Researchers have tracked its movements in the wild and studied it in the lab; governments have stockpiled vaccines against it and have effective antivirals ready. And although this virus has proved itself capable of infiltrating us, and has continued to evolve, “this virus is still very much a bird virus,” Richard Webby, the director of the World Health Organization Collaborating Centre for Studies on the Ecology of Influenza in Animals and Birds, told me. It does not yet seem capable of moving efficiently between people, and may never develop the ability to. Most human cases in the United States have been linked to a clear animal source, and have not turned severe.

The U.S., in other words, might have routed the virus early on. Instead, agencies tasked with responding to outbreaks and upholding animal and human health held back on mitigation tactics—testing, surveillance, protective equipment, quarantines of potentially infected animals—from the very start. “We are underutilizing the tools available to us,” Carol Cardona, an avian-influenza expert at the University of Minnesota, told me. As the virus ripped through wild-animal populations, devastated the nation’s poultry, spilled into livestock, started infecting farmworkers, and accumulated mutations that signaled better adaptation to mammals, the country largely sat back and watched.

When I asked experts if the outbreak had a clear inflection point—a moment at which it was crucial for U.S. leaders to more concertedly intervene—nearly all of them pointed to the late winter or early spring of last year, when farmers and researchers first confirmed that H5N1 had breached the country’s cattle, in the Texas panhandle. This marked a tipping point. The jump into cattle, most likely from wild birds, is thought to have happened only once. It may have been impossible to prevent. But once a pathogen is in domestic animals, Lakdawala told me, “we as humans have a lot of control.” Officials could have immediately halted cow transport, and organized a careful and concerted cull of infected herds. Perhaps the virus “would never have spread past Texas” and neighboring regions, Lakdawala told me. Dozens of humans might not have been infected.

[Read: America’s bird-flu luck has officially run out]

Those sorts of interventions would have at least bought more of the nation time to provision farmworkers with information and protection, and perhaps develop a plan to strategically deploy vaccines. Government officials could also have purchased animals from the private sector to study how the virus was spreading, Cardona told me. “We could have figured it out,” she said. “By April, by May, we would have known how to control it.” This sliver of opportunity was narrow but clear, Sam Scarpino, an infectious-disease modeler and flu researcher at Northeastern University, whose team has been closely tracking a timeline of the American outbreak, told me. In hindsight, “realistically, that was probably our window,” he said. “We were just too slow.”

The virus, by contrast, picked up speed. By April, a human case had been identified in Texas; by the end of June, H5N1 had infected herds in at least a dozen states and more than 100 dairy farms. Now, less than 10 months after the USDA first announced the dairy outbreak, the number of herds affected is verging on 1,000—and those are just the ones that officials know about.

The USDA has repeatedly disputed that its response has been inadequate, pointing out to The Atlantic and other publications that it quickly initiated studies this past spring to monitor the virus’s movements through dairy herds. “It is patently false, and a significant discredit to the many scientists involved in this work, to say that USDA was slow to respond,” Eric Deeble, the USDA’s deputy undersecretary for marketing and regulatory programs, wrote in an email.

And the agency’s task was not an easy one: Cows had never been a known source of H5N1, and dairy farmers had never had to manage a disease like this. The best mitigation tactics were also commercially formidable. The most efficient ways to milk cows invariably send a plume of milk droplets into the air—and sanitizing equipment is cumbersome. Plus, “the dairy industry has been built around movement” of herds, a surefire way to move infections around too, Cardona told me. The dairy-worker population also includes many undocumented workers who have little incentive to disclose their infections, especially to government officials, or heed their advice. At the start of the outbreak, especially, “there was a dearth of trust,” Nirav Shah, the principal deputy director of the CDC, told me. “You don’t cure that overnight.” Even as, from the CDC’s perspective, that situation has improved, such attitudes have continued to impede efforts to deploy protective equipment on farms and catch infections, Shah acknowledged.

Last month, the USDA did announce a new plan to combat H5N1, which requires farms nationwide to comply with requests for milk testing. But Lakdawala and others still criticized the strategy as too little, too late. Although the USDA has called for farms with infected herds to enhance biosecurity, implementation is left up to the states. And even now, testing of individual cows is largely left up to the discretion of farmers. That leaves too few animals tested, Lakdawala said, and cloaks the virus’s true reach.

The USDA’s plan also aims to eliminate the virus from the nation’s dairy herds—a tall order, when no one knows exactly how many cattle have been affected or even how, exactly, the virus is moving among its hosts. “How do you get rid of something like this that’s now so widespread?” Webby told me. Eliminating the virus from cattle may no longer actually be an option. The virus also shows no signs of exiting bird populations—which have historically been responsible for the more severe cases of avian flu that have been detected among humans, including the lethal Louisiana case. With birds and cows both harboring the pathogen, “we’re really fighting a two-fronted battle,” Cardona told me.

Most of the experts I spoke with also expressed frustration that the CDC is still not offering farmworkers bird-flu-specific vaccines. When I asked Shah about this policy, he defended his agency’s focus on protective gear and antivirals, noting that worker safety remains “top of mind.” In the absence of consistently severe disease and evidence of person-to-person transmission, he told me, “it’s far from clear that vaccines are the right tool for the job.”

[Read: How much worse would a bird-flu pandemic be?]

With flu season well under way, getting farmworkers any flu vaccine is one of the most essential measures the country has to limit H5N1’s threat. The spread of seasonal flu will only complicate health officials’ ability to detect new H5N1 infections. And each time bird flu infects a person who’s already harboring a seasonal flu, the viruses will have the opportunity to swap genetic material, potentially speeding H5N1’s adaptation to us. Aubree Gordon, a flu epidemiologist at the University of Michigan, told me that’s her biggest worry now. Already, Lakdawala worries that some human-to-human transmission may be happening; the United States just hasn’t implemented the infrastructure to know. If and when testing finally confirms it, she told me, “I’m not going to be surprised.”

In the world of nutrition, few words are more contentious than healthy. Experts and influencers alike are perpetually warring over whether fats are dangerous for the heart, whether carbs are good or bad for your waistline, and how much protein a person truly needs. But if identifying healthy food is not always straightforward, actually eating it is an even more monumental feat.

As a reporter covering food and nutrition, I know to limit my salt and sugar consumption. But I still struggle to do it. The short-term euphoria from snacking on Double Stuf Oreos is hard to forgo in favor of the long-term benefit of losing a few pounds. Surveys show that Americans want to eat healthier, but the fact that more than 70 percent of U.S. adults are overweight underscores just how many of us fail.

The challenge of improving the country’s diet was put on stark display late last month, when the FDA released its new guidelines for which foods can be labeled as healthy. The roughly 300-page rule—the government’s first update to its definition of healthy in three decades—lays out in granular detail what does and doesn’t count as healthy. The action could make it much easier to walk down a grocery-store aisle and pick products that are good for you based on the label alone: A cup of yogurt laced with lots of sugar can no longer be branded as “healthy.” Yet the FDA estimates that zero to 0.4 percent of people trying to follow the government’s dietary guidelines will use the new definition “to make meaningful, long-lasting food purchasing decisions.” In other words, virtually no one.

All of this is a bad omen for Donald Trump’s pick to lead the Department of Health and Human Services. As part of his agenda to “make America healthy again,” Robert F. Kennedy Jr. has pledged to improve the country’s eating habits by overthrowing a public-health establishment that he sees as ineffective. He has promised mass firings at the FDA, specifically calling out its food regulators. Indeed, for decades, the agency’s efforts to encourage better eating habits have largely focused on giving consumers more information about the foods they are eating. It hasn’t worked. If confirmed, Kennedy may face the same problem as many of his predecessors: It’s maddeningly hard to get Americans to eat healthier.

[Read: Everyone agrees Americans aren’t healthy]

Giving consumers more information about what they’re eating might seem like a no-brainer, but when these policies are tested in the real world, they often do not lead to healthier eating habits. Since 2018, chain restaurants have had to add calorie counts to their menus; however, researchers have consistently found that doing so doesn’t have a dramatic effect on what foods people eat. Even more stringent policies, such as a law in Chile that requires food companies to include warnings on unhealthy products, have had only a modest effect on improving a country’s health.

The estimate that up to 0.4 percent of people will change their habits as a consequence of the new guidelines was calculated based on previous academic research quantifying the impacts of food labeling, an FDA spokesperson told me. Still, in spite of the underwhelming prediction, the FDA doesn’t expect the new rule to be for naught. Even a tiny fraction of Americans adds up over time: The agency predicts that enough people will eat healthier to result in societal benefits worth $686 million over the next 20 years.

These modest effects underscore that health concerns aren’t the only priority consumers are weighing when they decide whether to purchase foods. “When people are making food choices,” Eric Finkelstein, a health economist at Duke University’s Global Health Institute, told me, “price and taste and convenience weigh much heavier than health.” When I asked experts about better ways to get Americans to eat healthier, some of them talked vaguely about targeting agribusiness and the subsidies it receives from the government, and others mentioned the idea of taxing unhealthy foods, such as soda. But nearly everyone I spoke with struggled to articulate anything close to a silver bullet for fixing America’s diet issues.

RFK Jr. seems to be caught in the same struggle. Most of his ideas for “making America healthy again” revolve around small subsets of foods that he believes, often without evidence, are causing America’s obesity problems. He has warned, for example, about the unproven risks of seed oils and has claimed that if certain food dyes were removed from the food supply, “we’d lose weight.” Kennedy has also called for cutting the subsidies doled out to corn farmers, who grow the crops that make the high-fructose corn syrup that’s found in many unhealthy foods, and has advocated for getting processed foods out of school meals.

There’s a reason previous health secretaries haven’t opted for the kinds of dramatic measures that Kennedy is advocating for. Some of them would be entirely out of his control. As the head of the HHS, he couldn’t cut crop subsidies; Congress decides how much money goes to farmers. He also couldn’t ban ultra-processed foods in school lunches; that would fall to the secretary of agriculture. And although he could, hypothetically, work with the FDA to ban seed oils, it’s unlikely that he would be able to generate enough legitimate scientific evidence about their harms to prevail in an inevitable legal challenge.

The biggest flaw in Kennedy’s plan is the assumption that he can change people’s eating habits by telling them what is and isn’t healthy, and banning a select few controversial ingredients. Changing those habits will require the government to tackle the underlying reasons Americans are so awful at keeping up with healthy eating. Not everyone suffers from an inability to resist Double Stuf Oreos: A survey from the Cleveland Clinic found that 46 percent of Americans see the cost of healthy food as the biggest barrier to improving their diet, and 23 percent said they lack the time to cook healthy meals.

If Kennedy figures out how to get people like me to care enough about healthy eating to resist the indulgent foods that give us pleasure, or if he figures out a way to get cash-strapped families on public assistance to turn down cheap, ready-to-eat foods, he will have made significant inroads into actually making America healthy again. But getting there is going to require a lot more than a catchy slogan and some sound bites.

It took my father nearly 70 years to become a social butterfly. After decades of tinkering with Photoshop on a decrepit Macintosh, he upgraded to an iPad and began uploading collages of photos from his nighttime walks around London, first to Flickr and then to Instagram. The likes came rolling in. A photographer from Venezuela applauded his composition. A violinist in Italy struck up a conversation about creativity.

And then, as quickly as he had made his new friends, he lost them. One night in 2020, he had a seizure. Then he began forgetting things that he’d just been told and sleeping most of the day. When he picked up his iPad again, it was incomprehensible to him. A year or so later, he put an electric kettle on the gas stove. Not long after, he was diagnosed with Alzheimer’s.

An estimated 7 million Americans age 65 and older are currently living with Alzheimer’s; by 2050, that number is expected to rise to nearly 13 million. Millions more have another form of dementia or cognitive decline. These diseases can make simple tasks confusing, language hard to understand, and memory fleeting, none of which is conducive to social connection. And because apps and websites constantly update, they pose a particular challenge for patients who cannot learn or remember, which means that people like my father, who rely heavily on social media to stay in touch, may face an even higher barrier to communication.  

When my father turned on his iPad again about a year after his seizure, he couldn’t find the Photoshop app because the logo had changed. Instagram, which now had Reels and a shopping tab, was unnavigable. Some of his followers from Instagram and Flickr had moved on to a new app—TikTok—that he had no hope of operating. Whenever we speak, he asks me where his former life has disappeared to: “Where are all my photos?” “Why did you delete your profile?” “I wrote a reply to a message; where has it gone?” Of all the losses caused by Alzheimer’s, the one that seems to have brought him the most angst is that of the digital world he had once mastered, and the abilities to create and connect that it had afforded him.

[Read: My dad had dementia. He also had Facebook.]

In online support forums, caretakers of Alzheimer’s and dementia patients describe how their loved ones struggle to navigate the platforms they were once familiar with. One member of the r/dementia subreddit, who requested not to be identified out of respect for her father’s privacy, told me that, about a decade ago, her father had been an avid emailer and used a site called Friends Reunited to recall the past and reconnect with old acquaintances. Then he received his dementia diagnosis after back-to-back strokes; his PC now sits unused.

Amy Evans, a 62-year-old in Sacramento, told me that her father, who passed away in May at the age of 92, started behaving erratically online at the onset of Alzheimer’s. He posted on Facebook that he was looking for a sex partner. Then he began responding to scam emails and ordering, among other things, Xanax from India. Evans eventually installed child-protection software on his computer and gave him a GrandPad to connect with family and friends. But he kept forgetting how to use it.

Nasrin Chowdhury, a former public-school teacher’s aide who lives in New York City, once used Facebook to communicate daily with family and friends. But after a stroke and a subsequent Alzheimer’s diagnosis at 55, she will sit for hours tapping the screen with her finger—even if nothing is there, her daughter Eshita Nusrat told me. “I’ll come home from work, and she’ll say she texted me and I never replied, but then I’ll look at her phone and she tried to type it out in YouTube and post it as a video,” Chowdhury’s other daughter, Salowa Jessica, said. Now Chowdhury takes calls with the aid of her family, but she told me that, because she can’t use social media, she feels she has no control of her own life.

Many patients with dementia and related cognitive disorders lose the ability to communicate, regardless of whether they use technology to do it. It’s a vicious cycle, Joel Salinas, a clinical assistant professor of neurology at NYU Grossman School of Medicine, told me, because social disconnection can, in turn, hasten the cognitive degeneration caused by Alzheimer’s and dementia. Social media, by its very nature, is an especially acute challenge for people with dementia. The online world is a largely visual medium with a complex array of workflows, and dementia commonly interrupts or delays visual processing. And unlike face-to-face conversation, landlines, or even flip phones, social media is always evolving. Every few months on a given platform, buttons might be changed, icons reconfigured, or new features released. Tech companies say that such changes make the user experience more seamless, but for people with short-term memory loss, they can make it downright impossible.

On the whole, social-media companies have not yet found good solutions for users with dementia, JoAnne Juett, Meta’s enterprise product manager for accessibility, told me. “I would say that we’re tackling more the loss of vision, the loss of hearing, mobility issues,” she said. Design changes that address such disabilities might help many dementia patients who, because of their advanced age, have limited mobility. But Juett believes that AI might be crucial to accommodating the unique needs of aging or cognitively disabled users. “If, let’s say, Windows 7 is gone, AI could identify my patterns of use, and adapt Windows 11 for me,” she said. Juett also told me that her 97-year-old mother now uses Siri to make calls; it allows her to maintain social ties even when she can’t keep track of where the Phone app lives on her iPhone’s screen.

[Read: How people with dementia make sense of the world]

The idea of a voice assistant that could reconnect my father to his online world is enticing. I wish he had a tool that would allow him to connect in the ways that once gave him joy. Such solutions will become only more necessary: Americans are, on average, getting both older and more reliant on technology to communicate. The oldest Americans, who are most likely to experience cognitive decline, came to social media later in life—and still, nearly half of the population over 65 uses it. Social media is an inextricable part of how younger generations connect. If the particular loneliness of forgetting how to use social media is already becoming apparent, what will happen when an entire generation of power users comes of age?

A quiet monologue runs through my head at all times. It is this: dinner dinner dinner dinner. The thing about dinner is that you have to deal with it every single night. Figuring out what to eat is a pleasure until it becomes a constant low-grade grind. It’s not just the cooking that wears me down, but the meal planning and the grocery shopping and the soon-to-be-rotting produce sitting in my fridge. It is the time it sucks up during the week. It is the endless mental energy. Huh, I think, at 6 p.m., dicing onions. So we’re still doing this?

I can compromise on breakfast. It is absolutely normal to eat the same breakfast every single day for years, and equally normal to eat nothing. Lunch: Eat it, skip it, have some carrot sticks, who cares. Lunch is a meal of convenience. But dinner is special. Dinner isn’t just the largest meal in the standard American diet; it is the most important, the most nourishing, the most freighted with moral weight. The mythical dream of dinner is that after a hard but wholesome day at school or work, the family unit is reunited over a hot meal, freshly prepared. Even if you’re dining solo, dinner tends to be eaten in a state of relative leisure, signaling a transition into the time of day when you are no longer beholden to your job. “You could eat a full bag of Doritos,” Margot Finn, a food-studies scholar at the University of Michigan, told me, but that doesn’t quite cut it for dinner: “There’s some paucity there. There’s some lack.”

[Read: The people who eat the same meal every day]

The Dinner Problem might be especially acute for working parents like me—children are unrelenting in their demand to eat at regular intervals—but it spares almost no one. Disposable income helps mitigate the issue (disposable income helps mitigate most issues), but short of a paid staff, money does not solve it. I could accept this as the price of being human, if everywhere I looked there was not someone promising a way out. The sheer number of hacks and services and appliances and start-ups suggests that some kind of dinner resolution is forthcoming: How could it not be solvable, with this many options? We are living in what might be the world-historic peak of dinner solutions: A whole canon of cookbooks is devoted to quick-and-easy weeknight dinners for busy families, and entire freezer cases are dedicated to microwavable meals. There is takeout and prepared food and DoorDash and a staggering number of prep guides outlining how to cook in bulk one day a week. And yet, none of it has managed to solve the problem: Dinner exists, daunting and ominous.

As it stands, dinner is a game of trade-offs: You can labor over beautiful and wholesome meals, but it is so much work. You can heat up a Trader Joe’s frozen burrito or grab McDonald’s—there is a reason that as of 2016, the last time the government counted, one-third of American adults ate fast food on any given day—but you don’t have to be a health fanatic to aspire to a more balanced diet. You could get takeout, but it’s notoriously expensive and frequently soggy, more a novelty than a regular occurrence. Delivery apps, at least, offer the promise of extreme convenience, except that they are even more expensive, and the food is often even soggier.

In spite of all these options, if you cannot free yourself from dinner, you’re not alone. The many attempts to make dinner painless have not lived up to their promise. Remember Soylent? One of the bolder possibilities, for a while, was a shake that pledged to make “things a lot less complicated” by replacing conventional food with a deconstructed slurry of nutrients. I do want things to be less complicated, but I also want variety. I want to chew. A lot of other people seemed to want these things too, which is presumably one reason food-based dinner persists and Soylent has mellowed into a “nutritional supplement lifestyle brand.”

[Read: The man who could make food obsolete]

Given the general enthusiasm for eating, most proposed innovations have focused on easing the labor of making dinner. Grocery stores offer pre-chopped produce; Whole Foods briefly experimented with an on-site “produce butcher” who would slice or dice or julienne your vegetables. Meal kits that ship portioned ingredients to your doorstep ought to be an obvious solution, and for a minute, it seemed like maybe they were. In 2015, Blue Apron was valued at $2 billion and, according to TechCrunch, was poised to reach “99 percent of potential home cooks.” It did not, in fact, reach 99 percent of potential home cooks, nor did any of its competitors. “There are still people who really love meal kits,” Jeff Wells, the lead editor of Grocery Dive, a trade publication, told me. “There just aren’t that many of them relative to the overall food-shopping population.” The problem is the cost, or the menu, or the quality, or the lack of leftovers, or the prep time.

When one dinner solution fizzles, there is always another, and another, which will be superseded by still more. Lately, Wells said, grocery stores have been investing in their prepared to-go options, with in-store pizza counters and plastic clamshells of deli salads and ready-to-heat containers of spaghetti. Everywhere I look, I seem to be inundated with new and somehow improved solutions. On Instagram, I learned about a new delivery service that is in the process of expanding to my area. While streaming a movie, I was introduced, repeatedly, to a company that sells healthy meals I could have ready in two minutes. Every time I turn on a podcast, I am informed about a meal-kit company that, if I use the promo code, will give me free dessert for life. They all promise the same thing: that dinner could be painless, if I let it. I could have it all, my dinner and my sanity.

Of course, all of these options still require divesting from the Norman Rockwell dream of home-cooked dinner. The ideal of dinner has made me resentful and occasionally unpleasant, and at the same time, I viscerally do not want to eat a vat of precooked spaghetti. I can make spaghetti, I thought. But then I was back where I began. Most of us have two basic choices: You can make the necessary compromises and accept something less than optimal, or you can surrender to a wholesome trap of your own making. You can buy the pre-chopped onions, or you can suck it up and chop your own onions. Those are the choices. The notion that there is a permanent way out—a hack, a kit, a service that gives you all the benefits of dinner cooked from scratch without the labor—is an illusion. You cannot have a meal that both is and is not homemade: Schrödinger’s salmon over couscous with broccoli rabe.

Dinner resists optimization. It can be creative, and it can be pleasurable. None of this negates the fact that it is a grind. It will always be a grind. You will always have to think about it, unless you have someone else to think about it for you, and it will always require too much time or too much energy or too much money or some combination of the three. It is unrelenting, in the way that breathing is unrelenting. There is freedom in surrendering to the fact that, even in this golden age of technological progress, dinner refuses to be solved.
