Twitter Tackles #COVID-19 Fake News

It’s at times like these that social media can be both a hindrance and a help to people, as they come to terms with isolation and social distancing.

We rely on social media to see what people are up to, share stories on home-schooling and chat to each other as an outlet to promote positive well-being. However, it can also be used to spread misinformation, which can have a detrimental impact on people’s mental health.

Now Twitter has come out to say it is taking steps to tackle the spread of misinformation and ‘fake news’ that uses #COVID-19. However, it has admitted that its reliance on automated systems may lead to some things slipping through the net.

How is Twitter going to do this?
The social media giant has broadened its definition of ‘harm’ to tackle posts that contradict guidance from public health bodies and other trusted organisations.

A spokesperson for Twitter has said:

“Rather than reports, we will enforce this in close coordination with trusted partners, including public health authorities and governments, and continue to use and consult with information from those sources when reviewing content.”

The long list of content now prohibited includes: descriptions of harmful or ineffective treatments, denial of official recommendations and established scientific facts, calls to action that benefit third parties, incitement to social unrest, impersonation of health officials, and claims that specific groups are either more or less susceptible to the virus.

The new Twitter rules around COVID-19 will be reviewed going forward and amended as appropriate, since the situation changes on a daily basis.

However, users have questioned how these new rules will be policed, and how Twitter will remove harmful content. This is where Twitter has been honest and highlighted the potential flaws in its automated system.

The spokesperson continued:

“We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes.

“As a result, we will not permanently suspend any accounts based solely on our automated enforcement systems. Instead, we will continue to look for opportunities to build in human review checks where they will be most impactful.”

Do you think these measures will help to prevent the spread of ‘fake news’? Could more be done?