Big Tech CEOs and US Lawmakers Clash Over Disinformation at Congress Hearing


US lawmakers unleashed a torrent of criticism against social media top executives Thursday, blaming the companies for amplifying false content and calls to violence, while promising new regulations to stem rampant online disinformation.

The video hearing attended remotely by top executives of Facebook, Google, and Twitter got off to a stormy start as lawmakers accused them of intentionally making products that get people hooked.

“Big Tech is essentially handing our children a lit cigarette and hoping they stay addicted for life,” said congressman Bill Johnson, an Ohio Republican. 

“Former Facebook executives have admitted that they use the tobacco industry’s playbook for addictive products.”

Congressman Frank Pallone told the executives that it is time for legislation that forces more aggressive action to eliminate disinformation and extremism from online platforms.

Jack Dorsey of Twitter, Sundar Pichai of Google, and Mark Zuckerberg of Facebook were bombarded with questions for some six hours by members of Congress who blamed their platforms for drug abuse, teen suicides, hate, political extremism, illegal immigration, vaccine bashing and more.

“They didn’t mention cancer, but they might as well have because they mentioned everything else,” Creative Strategies analyst Carolina Milanesi said. “It was sad political theater.”

Democrats slammed the platforms for failing to stem misinformation about COVID-19 vaccines and incitement ahead of the January 6 Capitol riot. Republicans revived unproven complaints that social networks were biased against conservatives.

Republican Representative Bob Latta accused the firms of operating “in a vague and biased manner, with little to no accountability,” relying on a law giving them a “shield” against liability for content posted by others.

“People want to use your services, but they suspect your coders are designing what they think we should see and hear,” said Republican Gus Bilirakis.

Free expression vs moderation
The tech CEOs said they were doing their best to keep out harmful content.

“We believe in free expression, we believe in free debate and conversation to find the truth,” Dorsey said.

“At the same time, we must balance that with our desire for our service not to be used to sow confusion, division, or distraction. This makes the freedom to moderate content critical to us.”

Dorsey advocated establishing open protocols to serve as shared guidelines for social media platforms when it comes to moderating content.

Members of Congress pressed for quick, yes-or-no answers to questions centered on getting complex systems to flawlessly figure out the context, accuracy, danger and legality of posts.

“You can’t just answer everything yes-or-no,” the analyst Milanesi said of the hearing.

“Which is why all of this is a mess; because there are so many nuances.”

Zuckerberg confirmed anew his belief that private companies should not be the judges of truth when it comes to what people say.

“People often say things that aren’t verifiably true, but that speak to their lived experiences,” Zuckerberg told the panel.

At the same time, the Facebook founder said, “we also don’t want misinformation to spread that undermines confidence in vaccines, stops people from voting, or causes other harms.”

Pichai, whose company owns YouTube, defended the actions of the video platform, saying that after the January 6 uprising it “raised up authoritative sources” on YouTube, and “removed livestreams and videos that violated our incitement to violence policies.”

Pichai said Google’s mission is “providing trustworthy content and opportunities for free expression, while combating misinformation. It’s a big challenge.”

Zuckerberg offered lawmakers a proposal to reform the liability shield known as Section 230, suggesting that platforms have systems in place to filter and remove illegal content.

He maintained that Congress “should consider making platforms’ intermediary liability protection for certain types of unlawful content contingent on companies’ ability to meet best practices to combat the spread of this content.”

Lawmakers said they would introduce their own proposals to reform Section 230.

“The regulation that we seek should not attempt to limit constitutionally protected freedom of speech, but it must hold platforms accountable when they are used to incite violence and hatred or, as in the case of the COVID pandemic, spread misinformation that costs thousands of lives,” said Democratic Representative Jan Schakowsky.

Pallone, meanwhile, told the executives, “your business model itself has become the problem and the time for self-regulation is over.”

Some lawmakers argued that platforms like Facebook use algorithms that amplify inflammatory content.

Representative Adam Kinzinger cited research saying Facebook algorithms “are actively promoting divisive, hateful and conspiratorial content because it engages users to spend more time.”

Zuckerberg responded that “there’s quite a bit of misperception about how our algorithms work and what we optimize for.”

He added that “we are trying to help people have meaningful social interactions” but that this is very different from setting up algorithms that lead to addiction.

© Thomson Reuters 2021
