Twitter has deployed automated technology whose main job is to contain coronavirus misinformation, but the technology is now making mistakes of its own, raising concerns about the company’s reliance on artificial intelligence to review content in place of human moderators, who can make mistakes or be plain biased.
On May 11th, Twitter began labeling tweets that spread the conspiracy theory that 5G technology caused the coronavirus pandemic. Authorities believe the false theory led some people to take protests to a violent level, burning down 5G infrastructure in several places around the world.
With this, Twitter aimed to pull down misleading tweets that encouraged people to destroy 5G cell towers. Tweets that stopped short of that but still included false or disputed claims would instead be labeled, directing users to trusted information. The label placed on such claims reads “Get the facts about COVID-19” and takes users to a page of curated tweets that debunk the 5G coronavirus conspiracy theory.
As for Twitter’s technology, the system has made numerous mistakes, applying the label to tweets that refute the conspiracy theory and provide accurate information instead. Even tweets from reputable sources such as Reuters, the BBC, Wired, and Voice of America about the 5G-coronavirus conspiracy theory were labeled.
To show how imperfect the technology is, Twitter even applied the label to tweets sharing a page the company itself had published, titled “No, 5G isn’t causing coronavirus.” Tweets containing words such as 5G, coronavirus, or COVID-19, or the hashtag #5Gcoronavirus, have also been mistakenly labeled by the company’s automated system.
According to experts, the mislabeling could confuse users, especially those who do not click through the labels. And since Twitter isn’t notifying users when their tweets get labeled, they will probably never know their tweets have been flagged; the company doesn’t give users a way to appeal its evaluation of their posts, either.
“Arguably, labeling incorrectly does more harm than not labeling because then people come to rely on that and they come to trust it,” said Hany Farid, a computer science professor at the University of California, Berkeley. “Once you get it wrong, a couple hours go by and it’s over.”
Twitter’s bot is making a hell of a lot of mistakes
The social media giant declined to say how many 5G-coronavirus tweets were labeled or to provide an estimated error rate, but the company did say its Trust and Safety team is keeping track of the labeling of coronavirus-related tweets. Even the mislabeled tweets that have been identified haven’t yet been corrected.
“We are building and testing new tools so we can scale our application of these labels appropriately. There will be mistakes along the way,” a Twitter spokesperson said in a statement. “We appreciate your patience as we work to get this right, but this is why we are taking an iterative approach, so that we can learn and make adjustments along the way.”
Aside from labeling tweets about the 5G-coronavirus conspiracy theory, Twitter also plans to tackle other hoaxes. With 166 million monetizable daily active users, moderation poses a big challenge for the company given the huge volume of tweets it has to deal with on a daily basis. The company says its automated systems help its human moderators work more efficiently by quickly surfacing the content most likely to cause harm and helping them prioritize which tweets to review first.
The approach Twitter is taking is quite similar to Facebook’s effort to combat inaccurate content, which relies more on human reviewers, with automated systems as the backup that keeps the work efficient. Facebook, for its part, works with more than 60 third-party fact-checkers around the world who review the accuracy of posts. If a fact-checker rates a post as false, Facebook displays a warning notice and shows the content lower in a person’s News Feed to reduce its spread. Twitter, by contrast, is doing it the automated way, without human review first.
UC Berkeley’s Farid said he isn’t surprised that Twitter’s automated system is making errors. “The difference between a headline with a conspiracy theory and one debunking it is very subtle,” he said. “It’s literally the word ‘not’ and you need full-blown language understanding, which we don’t have today.”
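Farid’s point is easy to demonstrate. The following sketch is purely illustrative (it is not Twitter’s actual system, whose internals are not public): a naive keyword-based matcher, the simplest kind of automated flagging, labels a debunking tweet just as readily as a conspiracy tweet, because it cannot tell an assertion from its negation.

```python
# Illustrative only: a hypothetical keyword matcher, not Twitter's real pipeline.
KEYWORDS = {"5g", "coronavirus", "covid-19", "#5gcoronavirus"}

def should_label(tweet: str) -> bool:
    """Flag any tweet that mentions a conspiracy-related keyword."""
    text = tweet.lower()
    return any(keyword in text for keyword in KEYWORDS)

conspiracy = "5G towers are spreading coronavirus!"
debunk = "No, 5G isn't causing coronavirus."

# Both tweets trigger the label: the matcher never sees the negation.
print(should_label(conspiracy))  # True
print(should_label(debunk))      # True
```

Distinguishing the two would require a model of the sentence’s meaning, not just its vocabulary, which is exactly the “full-blown language understanding” Farid says today’s systems lack.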
Twitter could instead take action against users who spread coronavirus misinformation and have large followings. Researchers at Oxford University released a study in April showing that high-profile social media users, including politicians, celebrities, and other public figures, shared about 20% of these false claims but generated 69% of the total social media engagement. It’s as if Twitter doesn’t want to chase off its big users; after all, they are the driving force of the company’s social conversations.
Fooling Twitter’s automated system
To test the accuracy of this fact-checking capability, @Brumpost also tried using the words 5G and COVID-19 in a tweet to see how the automation works. We haven’t gotten anything back just yet, but others have tried the same, flooding the platform with tons of inaccurately labeled tweets.
BRUMPOST (@Brumpost) May 26, 2020
Ian Alexander, a YouTuber who posts videos about tech, said he spotted the new label on May 11th on a tweet that had nothing to do with the 5G-coronavirus conspiracy theory. He then decided to test the system by tweeting about the topic himself, and the label automatically popped up.
Labeling tweets, Alexander said, “may be more harmful than good” because somebody might just see the notice on their timeline without clicking through. Meanwhile, other tweets with misleading coronavirus information keep slipping past the moderators, and the automated system in place isn’t doing anything to help catch them either.
Another case is actress Fran Drescher, who has over 260k followers and tweeted on May 12th: “I can’t believe all the commercials for 5G . Gr8 4cancer, harming birds, bees &mor viruses like Corona. Dial it bac.” Another came from a user quoting remarks by Judy Mikovits, who is featured in the viral “Plandemic” video full of coronavirus conspiracy theories, stating that she believes 5G plays a part in the spread of the pandemic. Neither tweet got a label.
Other social networks have had more success labeling false content. Facebook, for example, displayed warning labels on about 40 million posts about Covid-19 back in March, and according to the social networking giant, people who saw those warnings didn’t go on to view the inaccurate content about 95% of the time.
A study from MIT found that labeling false news could result in users believing stories that hadn’t gotten labels, even when those stories contained misinformation. The MIT researchers call this phenomenon the “implied truth effect.” David Rand, a professor at the MIT Sloan School of Management who co-authored the study, said one potential solution is for companies to ask social media users to rate content as trustworthy or untrustworthy.
“Not only would it help inform the algorithms,” Rand said, “but also it makes people more discerning in their own sharing because it just kind of nudges them to think about accuracy.”