Posted on June 7, 2020 by Jana Schwartz

The following article was originally published in The Wall Street Journal.

Doomscrolling: Why we just can't look away


By Nicole Nguyen, The Wall Street Journal

Sun., June 7, 2020 · 6 min. read

I fixated on the glow in my hand, lighting up an otherwise dark bedroom. In the past few months, after-hours screen time had become a ritual. Last night—and the night before, and the night before that—I stayed up thumbing through tweets, grainy phone-captured videos, posts that gave me hope and posts that made me enraged. I felt like I needed to see it. All of it.

I was "doomscrolling." Also known as "doomsurfing," this means spending inordinate amounts of time on devices poring over grim news—and I can't seem to stop. My timeline used to be a healthy mix of TikTok memes and breaking-news alerts. Now the entire conversation is focused on two topics: the pandemic and the protests .

People are logging on to keep up with it all. This past week, as demonstrations swept the globe, videos from the protests garnered millions of views on social-media platforms. One compilation has been watched more than 50 million times. For the quarter that ended in March, Twitter reported a 24% increase in daily active users over the same period last year. On June 2, Twitter ranked No. 7 in Apple's App Store—above Facebook, Instagram, Messenger and Snapchat.

On April 24, Merriam-Webster added “doomscrolling” to its “Words We're Watching” list, but the term has circulated since at least 2018. For many, myself included, it has become an irresistible urge, in part because we're stuck at home, spending too much time on our screens, and in part because that's precisely where social media's power over us is amped up.

This has a lot to do with our primal instincts, say experts. Our brains evolved to constantly seek threats—historically, that might mean poisonous berries or a vicious rival tribe, explains Mary McNaughton-Cassill, professor of clinical psychology at the University of Texas at San Antonio. “That's why we seem predisposed to pay more attention to negative than positive things,” she says. “We're scanning for danger.”

When nervous, afraid or stressed, our bodies' fight-or-flight response kicks in, raising our blood pressure and heart rate. That adrenaline prepares us to fend off physical danger—but the response can also be triggered in situations where it is less useful, like when your boss is rude to you or when distressing news plays out on TV, she says.

Distressing news puts us on high alert, and the sheer volume of it on social media keeps us poking at our phone screens for hours on end. “There's this sense that we have to be watching all the time in order to protect our families,” Prof. McNaughton-Cassill says.

Negative news isn't just getting a human boost; it's also enabled by the underlying tech. The interfaces of social-media apps are designed specifically to get us hooked. One key metric for these companies is “time spent on app”—the longer you spend online, the more opportunities there are to serve you revenue-generating ads.

Algorithmic systems, powered by machine learning and troves of user data, determine what appears on each user's unique Facebook feed, Twitter timeline and YouTube home page. “These algorithms are designed to take and amplify whatever emotions will keep us watching, especially negative emotions. And that can have a real negative impact on people's mental health,” says David Jay of the Center for Humane Technology, a non-profit addressing how social-media platforms hijack our attention.

As social beings, people can find comfort in hearing from others who share their views, Prof. McNaughton-Cassill says, but constant exposure to violence and signs of injustice can also be overwhelming. “That is why I always recommend that people consciously regulate their media intake. You can't save other people from drowning if you are having trouble swimming yourself,” she says.

As my colleague Christopher Mims put it more than two years ago: “If it's outrageous, it's contagious.” This moment in history has overwhelmed social-media companies' ability to fix their most problematic—and lucrative—attribute: the algorithmically tuned infinite scroll.

As you use the platforms, the recommendation engines get better at predicting what captivates you, and serve you content that's similar to what you have already interacted with. What's concerning about personalized, algorithmic feeds is that they confuse relevance to you with importance to the world at large. It's just what some software thought you might click on. “Not everyone realizes that's how their information is being packaged to them,” says Coye Cheshire, professor of sociology at UC Berkeley's School of Information.

Another mechanism built to transfix us: the infinite scroll. Apps such as Twitter and Facebook have no end, leaving us feeling like we might be missing out on something relevant if we don't keep reading. An unlimited amount of content continuously loads in the background. “It leaves people feeling psychologically like they can never catch up on all the information. They never reach the satisfaction of being able to say, ‘Ah, now I understand the problem,'” says Prof. Cheshire.

What's more, the content is served to us with little vetting. Mr. Jay, who is focused on misinformation research, is troubled by how bad actors are taking advantage of this moment to present fake news alongside authentic news. In April, bogus cures and conspiracy theories about the origin of the new coronavirus were promoted on Facebook, despite the company's efforts to ramp up fact-checking.

“When we look at doomscrolling related to COVID, the protests and the election, all deeply intersecting with one another, there are numerous actors that are very sophisticated at disseminating misinformation shaping the narratives of these crises,” Mr. Jay says, referring to COVID-19, the disease caused by the new virus.

Twitter says that instead of determining a tweet's truthfulness, it will provide context. The company recently slapped a link to “Get the facts” on a tweet by U.S. President Trump about mail-in ballots. Twitter has gone so far as to prevent another tweet by the president from being algorithmically recommended, because the company said the message glorified violence.

Facebook has worked to flag disputed news and hide content, detected by its software, that is likely to contain falsehoods. But that system weeds out only the obviously fake stuff, and labeling misinformation implies that unlabeled posts must be true, which isn't the case, argues Prof. Cheshire.

A Twitter spokeswoman says the company is working to connect its community with mental-health resources. She adds that users can provide feedback on algorithmically served tweets from their timelines, block or mute people they are following, enable sensitive media warnings and turn off video autoplay. A Facebook spokeswoman points to time-management tools built into Facebook's and Instagram's apps, and the “Hide post” feature, which offers the option to see fewer posts from a specific person, page or group.

In a statement, the Facebook spokeswoman said, “We've built controls to help people manage their time on Facebook and Instagram, and we changed News Feed to prioritize posts from friends and family.”

In a recent lengthy Medium post, Barry Schnitt, a former Facebook communications executive, published a warning: “You know what's engaging as heck? Wild conspiracy theories and incendiary rhetoric. Put together a piece of content that comes to you from a trusted source (i.e., your friend) and Facebook making sure you see the really tantalizing stuff, and you get viral misinformation.”

Last week, I found myself caught in a Twitter riptide. Under the “What's Happening” section, the word “bleeding” appeared as a trending term. As I was about to click, a new post on the timeline caught my eye: “Hey, are you doomscrolling?” It was a tweet from Karen Ho, a reporter at the online publication Quartz who has taken on the nightly responsibility of snapping people out of it. “Take a break, and get some sleep. You'll feel better with a good night's rest,” she tweeted. I logged off.


— Jana Schwartz