An article in the Washington Post, by Cat Zakrzewski, reports that New York University researchers have found no evidence that technology firms are biased against content posted online by Republicans. The researchers suggest retaining Section 230 of the Communications Decency Act, but removing legal immunity for user-generated posts if organisations fail to act responsibly in moderating content on their sites. They are also calling for the Biden administration to set up a Digital Regulatory Agency to enforce a revised version of the Section 230 law.
Source: Washington Post, Feb. 1, 2021 at 2:28 p.m. GMT
Article by: Cat Zakrzewski – Technology policy reporter
“The researchers want the Biden administration to work with Congress to overhaul the tech industry.
Their recommendations focus particularly on changing Section 230, a decades-old law shielding tech companies from lawsuits for the photos, videos and posts people share on their websites. The law was a frequent target of Trump, who zeroed in on it after tech companies began labelling his posts and eventually suspended his accounts.
The researchers warn against completely repealing the law. Instead, they argue companies should only receive Section 230 immunity if they agree to accept more responsibilities in policing content such as disinformation and hate speech. The companies could be obligated to ensure their recommendation engines don’t favor sensationalist content or unreliable material just to drive better user engagement.
“Social media companies that reject these responsibilities would forfeit Section 230’s protection and open themselves to costly litigation,” the report proposed.
The researchers also called for the creation of a new Digital Regulatory Agency, which would serve as an independent body and be tasked with enforcing a revised Section 230.
The report also suggested Biden could empower a “special commission” to work with the industry on improving content moderation, which would be able to move much more quickly than legal battles over antitrust issues. It also called for the president to expand the task force announced by Biden on online harassment to focus on a broad range of harmful content.”
Cris Pikes, CEO of Image Analyzer, comments: “Whatever the legislators decide, Image Analyzer has content moderation AI technology to help RTC providers and platform owners reduce their corporate liability and comply with new laws governing user-generated image, video and streaming media content.”
To read the article visit:
To read the New York University researchers’ report visit: