Date: 15/09/18

New algorithm to prevent extremists from publishing hate-inciting content

A group of researchers from the Massachusetts Institute of Technology (MIT) and Brandeis University has developed an algorithm that can identify extremists on social networks before they publish their posts.
 
Combating extremism and hate-inciting content is one of the most important and most difficult tasks facing large Internet companies, including Facebook, Twitter and YouTube.
 
Authorities in various countries require platforms to remove such content as soon as possible, but human moderators and automated detection algorithms do not always cope with the task.
 
In any case, this approach only blocks or removes extremist content after it has already been published.
 
The algorithm developed by the US researchers allows moderators to intervene before extremists publish their posts.
 
To this end, the algorithm analysed five thousand microblogs run by members of terrorist organizations or by users associated with them (information about these accounts was gathered from the media, blogs, analysts and law enforcement agencies).
 
For the analysis, the scientists used 4.8 million tweets associated with the selected accounts, the descriptions of these users' profiles, and their friends and followers (which expanded the dataset to 1.3 million accounts).
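
The collection step described above can be pictured with a minimal sketch: start from the seed list of flagged accounts and add each account's friends and followers to the dataset. The function fetch_friends_and_followers below is a hypothetical placeholder standing in for whatever platform API the researchers used; it is not part of the study.

```python
# Illustrative sketch of the one-hop expansion described above: a seed list of
# flagged accounts grows once their friends and followers are included.
from typing import Iterable, Set

def fetch_friends_and_followers(account_id: str) -> Iterable[str]:
    """Placeholder: return the IDs of accounts connected to `account_id`.

    In practice this would query the platform's API; here it returns nothing.
    """
    return []

def expand_seed_accounts(seed_ids: Iterable[str]) -> Set[str]:
    """Add every seed account's friends and followers to the dataset."""
    seed_list = list(seed_ids)
    dataset: Set[str] = set(seed_list)
    for account_id in seed_list:
        dataset.update(fetch_friends_and_followers(account_id))
    return dataset

# In the study, roughly 5,000 seed accounts expanded to 1.3 million accounts.
seeds = ["seed_account_1", "seed_account_2"]
print(len(expand_seed_accounts(seeds)))
```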
 
Based on these data and using statistical modeling, the scientists developed a model that can determine with high accuracy whether a particular account is extremist before its owner publishes a first post.
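
The article does not say which statistical model the researchers used, so the following is only an illustrative sketch under that caveat: a simple text classifier (TF-IDF features plus logistic regression in scikit-learn) that scores a new account from its profile description before it has posted anything. All data, labels and feature choices below are placeholders, not the researchers' actual method.

```python
# Minimal illustrative sketch (NOT the study's actual model): classify an
# account as likely extremist or not from its profile description alone,
# i.e. before the account has published any tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: in the study, labels came from a curated list of
# accounts tied to terrorist organizations; here the texts are dummy strings.
profile_bios = [
    "example bio of a previously flagged account",
    "another bio collected from a flagged account",
    "ordinary user bio about sports and family",
    "ordinary user bio about cooking and travel",
]
labels = [1, 1, 0, 0]  # 1 = extremist account, 0 = regular account

# TF-IDF features over the bio text feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(profile_bios, labels)

# Score a brand-new account at creation time, before its first post.
new_account_bio = ["bio text of a newly created account"]
risk = model.predict_proba(new_account_bio)[0][1]
print(f"Estimated probability the account is extremist: {risk:.2f}")
```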
 
According to one of the co-authors of the study, users engaged in online extremism show common behavioral traits on social networks, which enables the algorithm to identify such users at the moment they create new accounts, NEWSru (www.newsru.com) reports.






©ictnews.az. All rights reserved.
