Rep. Cathy McMorris Rodgers’ biggest fear as a parent isn’t gun violence, or drunk driving, or anything related to the pandemic.
It’s social media.
And specifically, the new sense of “brokenness” she hears about in children in her district, and nationwide. Teen depression and suicide rates have been rising for over a decade, and she sees social apps as a major reason.
At a hearing this March on Capitol Hill, the Republican congresswoman from Washington confronted Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey and Google CEO Sundar Pichai with a list of statistics: From 2011 to 2018, rates of teen depression increased by more than 60%, and from 2009 to 2015, emergency room admissions for self-harm among 10- to 14-year-old girls tripled.
“It’s a battle for their development. It’s a battle for their mental health — and ultimately a battle for their safety,” McMorris Rodgers told the tech leaders.
But when she put a question directly to Zuckerberg, asking whether he acknowledged a connection between children’s declining mental health and social media platforms, he demurred.
“I don’t think that the research is conclusive on that,” replied Zuckerberg.
It’s a position that he and his company, which is working on expanding its offerings to even younger children, have held for years. But mental health researchers whom NPR spoke with disagree.
They describe an increasingly clear correlation between poor mental health outcomes and social media use, and they worry that Facebook (which also owns Instagram and WhatsApp) in particular may be muddying the waters on that connection to protect its public image.
“The correlational evidence showing that there is a link between social media use and depression is pretty definitive at this point,” said Jean Twenge, a psychology professor at San Diego State University. “The largest and most well-conducted studies that we have all show that teens who spend more time on social media are more likely to be depressed or unhappy.”
Correlation is not causation, and one area of further study is whether greater social media usage leads to poor mental health outcomes or whether those who are depressed and unhappy are drawn to spend more time on social media. But researchers also worry that not enough government funding is going toward getting objective data to answer these sorts of questions.
Facebook also almost certainly knows more than it has publicly revealed about how its products affect people.
NPR spoke with Twenge and two other academics whose work focuses on the links between depression and social media use. All three say Facebook’s public affairs team reached out to them for the first time ever in recent months, asking for input on internal information related to the issue.
The company declined to comment about the meeting requests and about its stance on research about its platforms. But the outreach comes at a pivotal time for Facebook and its plans for growth.
Government regulation is closer than ever before, and the issue of children’s mental health is one of the few concerns about Big Tech that Republicans and Democrats seem to agree on.
After it was revealed that Facebook was working on a version of Instagram for children under 13, a bipartisan group of 44 attorneys general wrote a letter to Zuckerberg this month with a simple message: Stop.
“Use of social media can be detrimental to the health and well-being of children, who are not equipped to navigate the challenges of having a social media account,” the letter reads. “Further, Facebook has historically failed to protect the welfare of children on its platforms.”
Shortly after McMorris Rodgers spoke at the March congressional hearing, Rep. Kathy Castor, a Florida Democrat, asked Zuckerberg whether he was familiar with a 2019 study that found the risk of depression in children rises with each hour spent daily on social media. He said he was not.
“You enjoy an outdated liability shield that incentivizes you to look the other way or take half-measures,” Castor said, “while you make billions at the expense of our kids, our health and the truth.”
Twenge has been studying and writing about technology’s effects on people born between 1995 and 2012 for much of the last decade.
She dubbed the generation “iGen” in a 2017 book featuring an abundance of charts that show huge drop-offs in happiness among teens in the last decade compared with previous generations, along with huge increases in loneliness and suicide risk, especially among teens who are on their phones more than an hour or two a day.
And since 2017, those trends have mostly gotten worse.
“For depression and anxiety and self-harm, those increases have continued,” Twenge said. “As smartphones became even more pervasive, social media became even more pervasive.”
In the four years since her book came out, no one from any of the major social media companies had reached out to her.
Until about three months ago.
“I got an email from someone at Facebook who said they were putting together an advisory panel,” Twenge said.
The email came from a lower-level employee at the company on behalf of Heather Moore, a public affairs executive at Facebook who helped create the company’s Oversight Board. (That panel, which is funded by Facebook through a $130 million independent trust, is made up of 20 prominent experts from around the world.) The board recently announced its biggest decision yet, siding with the company on its decision to suspend former President Donald Trump from the platform.
Two other researchers NPR spoke with say they received a similar meeting request. One did not wish to be named in this story.
The request says Facebook is “currently working on speaking with a range of experts who study algorithms and virality,” but it doesn’t specify whether the company is planning to assemble a more organized public- or private-facing group of experts focused on the mental health effects of the platform.
Facebook declined to provide more detail about the requests, but a spokesperson did note that a company as large as Facebook reaches out to a variety of subject-matter experts frequently.
The email does, however, allude to the company having relevant internal information regarding the mental health effects of its platforms.
“The team would like to share some insights about what we’re working on internally and ask for your input,” the email says.
In the March hearing on Capitol Hill, Zuckerberg told McMorris Rodgers that his company has specifically researched the mental health effects its platforms have on children. But when McMorris Rodgers’ staff followed up after the hearing, she says the company declined to share any of its research.
“I believe that they have done the research. They’re not being transparent,” McMorris Rodgers told NPR in an interview. “They seem to be more concerned about their current business model, and they have become very wealthy under their current business model. But the fact of the matter is we’re seeing more and more evidence … that their current business model is harming our kids.”
In response to these kinds of questions, Facebook has generally pointed to research indicating that poor mental health outcomes like depression stem from how people use the platforms: specifically, whether they are “active” users who post and message people or “passive” users who mostly consume content.
The implication is that people have control over whether they feel bad from using the platforms, since users can choose whether to message their friends on Instagram, for instance, or to scroll endlessly.
But Melissa Hunt, a psychology professor at the University of Pennsylvania, says it’s not so simple.
The company’s success depends on keeping people engaged and selling advertisements based on that engagement, so Facebook, she says, is motivated to build systems that keep people on its platforms no matter the effect on their long-term well-being.
Hunt was another one of the researchers who received an inquiry from Facebook about her work linking social media use and depression.
“Basically all of the things that would contribute to these platforms being healthier for people to use, which is basically spend less time, don’t follow strangers, don’t spend time passively scrolling through this random feed that’s being suggested to you,” Hunt says. “That completely undermines their whole business model.”
When she got a request for her time from the company, she says she thought about it. Then she also thought about what Facebook is currently valued at: close to a trillion dollars.
“I decided that if they were serious about that, they could pay me a nominal consulting fee,” Hunt says. “So I let them know what my consulting fee was. I said I’d be delighted to weigh in and share my expertise with you.”
She never heard back.
Similarly, Twenge responded to the company’s request and said she was interested in setting up a time to talk, but after a few back-and-forth messages, she, too, has yet to hear anything further.
Twenge feels strongly that while the research on the psychological impact of social media is relatively new, there are takeaways that can already be drawn, even as some insist on labeling it all “inconclusive.”
“It’s similar to the way that climate deniers can point to a few people in that field and say, ‘There’s a few people who still doubt this.’ It’s that false equivalence that happens too often,” Twenge said. “In this case, that small but vocal group has been very skilled at getting that message out, perhaps because these companies are very receptive to it.”
Questions without answers
“Policy making is facilitated by consensus. However, scientific research is characterized by uncertainty,” wrote researcher Lisa Bero in a paper about how cigarette companies manipulated research. “It is often to the benefit of interest groups to generate controversy about data because the controversy is likely to slow or prevent regulation of a given product.”
But Brian Primack, who leads the College of Education and Health Professions at the University of Arkansas, says comparing the current state of research on social media to that on cigarettes is too simplistic.
Primack used to study tobacco. (“If it kills a lot of people, I want to study it,” he says about how he has chosen what to focus on throughout his career.)
Now, he spends his time investigating the effects of social media. And he sees a clear connection between depression and the online platforms.
A study he published last year found that young adults who increased their social media use over a period of time were significantly more likely to become depressed over that same period.
“There is an association between the two,” Primack says. “Just meaning that if you put people into equal buckets in terms of how much social media they use, the people who use the most social media are also the people who are the most depressed.”
But unlike cigarettes, which he says have no useful purpose, social media has been linked to positive health outcomes for some people.
Recent brain development research, for instance, has found benefits from social media use in 9- and 10-year-olds.
“Social media is very heterogeneous. In some kids it can be very beneficial, and in other kids it can be very detrimental,” said the author of that study, Dr. Martin Paulus of the Laureate Institute for Brain Research. “But we still don’t understand which group of kids benefit from it and which group of kids may be harmed by it.”
Paulus is not confident the social media companies truly want to get to the bottom of that question either.
Several years ago, Paulus gave a presentation at Facebook with a few other researchers who were looking at the effects of social media. He came away from the meeting feeling like the company wasn’t serious about actually having objective research.
“It was more like a face-saving activity,” Paulus said. “Those companies, whether it’s Facebook or other companies as well, they say they want research… But they’re not necessarily interested in research that potentially would show that some of the things that they do are bad for kids.”
It’s a thorny issue to wade into. The company says that it employs hundreds of researchers and that it also supports efforts like Boston Children’s Hospital’s newly formed Digital Wellness Lab and the Aspen Institute’s roundtables on loneliness and technology.
But it has also been criticized for using its platforms for research purposes. In 2012, the company allowed researchers to change what people saw on the platform in order to see how that would affect the nature of what they then chose to post.
The study did show evidence that people’s moods are affected by what they see other people posting, but some saw the exercise as emotional manipulation, and one of the authors seemed to express regret about conducting it after the backlash.
For a company that holds one of the largest data troves on the human population, the question of how best to conduct research extends to other sectors too. Disinformation researchers, for instance, have long been frustrated by what the company chooses and chooses not to share.
“They could answer questions that we desperately need answered any time they want, and they just won’t do it,” said Ben Scott, executive director at Reset, an initiative aimed at tackling digital threats to democracy. “They’ve chosen, for public relations reasons, not to participate in helping the public interest… And that’s outrageous.”
“God only knows what it’s doing to our children’s brains”
The potential dangers of kids spending hours hypnotized by their screens have been apparent essentially since the social media platforms were created.
Founding Facebook President Sean Parker once described, in an interview with Axios, the company’s algorithms as “exploiting a vulnerability in human psychology.”
“God only knows what it’s doing to our children’s brains,” Parker said. “The inventors, creators — it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people — understood this consciously. And we did it anyway.”
The problem is compounded by how little government funding is going toward studying the effects of these platforms, relative to how much of each day many Americans spend engaged with the technology.
Funding from the National Institutes of Health (NIH) is mostly focused on curing diseases, but because there is no specific disease officially associated with screen time, experts say it’s difficult to get studies funded by the federal government.
In 2019, Sen. Ed Markey, D-Mass., introduced a bill that would have provided a mechanism for more NIH research on the subject. The legislation had bipartisan co-sponsors and the support of Facebook, but it never made it to a vote.
Without more of that sort of research, parents are essentially left in the dark, guessing how much screen time is too much for their kids.
“The truth of it, quite frankly, is we are probably living through one of the biggest natural experiments that we’ve gone through with our kids,” said Paulus.
Editor’s note: Facebook is among NPR’s financial supporters.