Opinion

I Designed Algorithms at Facebook. Here’s How to Regulate Them.

Washington was entranced Tuesday by the revelations from Frances Haugen, the Facebook product manager turned whistle-blower. But time and again, the public has seen high-profile congressional hearings into the company followed by inaction. For those of us who work at the intersection of technology and policy, there’s little cause for optimism that Washington will turn this latest outrage into legislative action.

The fundamental challenge is that Democrats and Republicans cannot agree on what the problem is. Democrats focus on the relentless spread of disinformation — highlighted, yet again, by the internal documents Ms. Haugen leaked to The Wall Street Journal — while Republicans complain about censorship and bias. This tension plays right into the hands of Facebook and the other social media companies, which continue business as usual.

Yet as Ms. Haugen proposed in her testimony before a Senate panel on Tuesday, there is a regulatory solution that addresses the key concerns of both parties, respects the First Amendment and preserves the dynamism of the internet economy. Congress should craft a simple reform: make social media companies liable for content that their algorithms promote.

In the late 1990s, internet users discovered content through search engines like Lycos and web directories like Yahoo. These early internet services provided no mechanism for fringe content to reach a wider audience. That’s because consuming content required that a user intentionally search for a keyword or browse to a particular website or forum.

That era seems quaint now. Our social media feeds are full of unbidden and fringe content, thanks to social media’s embrace of two key technological developments: personalization, spurred by mass collection of user data through web cookies and Big Data systems, and algorithmic amplification, the use of powerful artificial intelligence to select the content shown to users.

Personalization and algorithmic amplification, by themselves, have undoubtedly made wonderful new internet services possible. Tech users take for granted our ability to personalize apps and websites with our favorite sports teams, musicians and hobbies. The use of ranking algorithms by news websites for their user comment sections, traditionally cesspools of spam, has been largely successful.

But when data scientists and software engineers blend content personalization and algorithmic amplification — as they do to produce Facebook’s News Feed, TikTok’s For You tab and YouTube’s recommendation engine — they create uncontrollable, attention-sucking beasts. Though these algorithms, such as Facebook’s “engagement-based ranking,” are marketed as increasing “relevant” content, they perpetuate biases and affect society in ways that are barely understood by their creators, much less users or regulators.

In 2007, I started working at Facebook as a data scientist, and my first assignment was to work on the algorithm used by News Feed. Facebook has had more than 15 years to demonstrate that algorithmic personal feeds can be built responsibly; if it hasn’t happened by now, it’s not going to happen. As Ms. Haugen said, it should now be humans, not computers, “facilitating who we get to hear from.”

Though understaffed teams of data scientists and product managers like Ms. Haugen attempt to keep the algorithms’ worst impacts in check, social media platforms have a fundamental economic incentive to keep users engaged. This ensures that these feeds will continue promoting the most titillating, inflammatory content, and it creates an impossible task for content moderators, who struggle to police problematic viral content in hundreds of languages, countries and political contexts.

Even if social media companies are broken up or are forced to be more transparent and interoperable, the incentives for Facebook and its competitors to supercharge these algorithms won’t change. Worryingly, a more competitive battle for attention may cause even greater harm, if more companies emulate TikTok’s success with its algorithm, which promotes “endless spools of content about sex and drugs” to minors, according to The Wall Street Journal.

The solution is straightforward: Companies that deploy personalized algorithmic amplification should be liable for the content these algorithms promote. This can be done through a narrow change to Section 230, the 1996 law that lets social media companies host user-generated content without fear of lawsuits for libelous speech and illegal content posted by those users.

As Ms. Haugen testified, “If we reformed 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking.” As a former Facebook data scientist and current executive at a technology company, I agree with her assessment. There is no A.I. system that could identify every possible instance of illegal content. Faced with potential liability for every amplified post, these companies would most likely be forced to scrap algorithmic feeds altogether.

Social media companies can be successful and profitable under such a regime. Twitter adopted an algorithmic feed only in 2015. Facebook grew significantly in its first two years, when it hosted user profiles without a personalized News Feed. Both platforms already offer nonalgorithmic, chronological versions of their content feeds.

This solution would also address concerns over political bias and free speech. Social media feeds would be free of the unavoidable biases that A.I.-based systems often introduce. Any algorithmic ranking of user-generated content could be limited to nonpersonalized features like “most popular” lists or simply be customized for particular geographies or languages. Fringe content would again be banished to the fringe, leading to fewer user complaints and putting less pressure on platforms to call balls and strikes on the speech of their users.
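To make the distinction concrete, here is a minimal sketch, in Python, of the difference between the engagement-based, personalized ranking described above and the nonpersonalized alternatives (chronological or "most popular" ordering) that would remain practical under such a regime. The data and field names, such as predicted_engagement_for_user, are hypothetical illustrations of the general idea, not a representation of any platform's actual ranking code.

```python
from datetime import datetime, timezone

# Each post is a simple record; the field names are hypothetical.
posts = [
    {"id": 1, "created_at": datetime(2021, 10, 5, 9, 0, tzinfo=timezone.utc),
     "likes": 12, "predicted_engagement_for_user": 0.91},
    {"id": 2, "created_at": datetime(2021, 10, 5, 11, 0, tzinfo=timezone.utc),
     "likes": 340, "predicted_engagement_for_user": 0.20},
    {"id": 3, "created_at": datetime(2021, 10, 5, 10, 0, tzinfo=timezone.utc),
     "likes": 55, "predicted_engagement_for_user": 0.65},
]

def engagement_based_feed(posts):
    """Personalized amplification: a per-user model score decides what is promoted.
    Under the proposed reform, the platform would be liable for what this surfaces."""
    return sorted(posts, key=lambda p: p["predicted_engagement_for_user"], reverse=True)

def chronological_feed(posts):
    """Nonpersonalized: newest first; no model chooses what to amplify."""
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def most_popular_feed(posts):
    """Nonpersonalized 'most popular' list: the same ordering for every user."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

if __name__ == "__main__":
    print([p["id"] for p in engagement_based_feed(posts)])  # [1, 3, 2]
    print([p["id"] for p in chronological_feed(posts)])     # [2, 3, 1]
    print([p["id"] for p in most_popular_feed(posts)])      # [2, 3, 1]
```

The point of the contrast is that the last two orderings depend only on public, user-independent signals, so no opaque prediction about an individual user determines which content gets amplified.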

To be sure, there are potential drawbacks to using Section 230 reform in this way. As Stanford’s Daphne Keller has written, the relevant areas of the law are “notoriously tricky” for the courts to evaluate. Lawmakers would have to write the bill carefully to give it the best chance of surviving a First Amendment challenge.

Congress’s last change to Section 230 led to a host of unintended consequences; this time Congress should consult with activists and marginalized groups at the highest risk of being caught up in online speech regulations, to make sure the law is properly and narrowly targeted.

If these concerns can be addressed, there’s no reason to let Ms. Haugen’s brave act become yet another wasted opportunity to bring these companies to account.

Roddy Lindsay is the co-founder of Hustle and a former data scientist at Facebook.
