Date: 09/07/19

Instagram is trying to crack down on bullying with a new way to stop people from commenting publicly on your photos without you having to block them.

Instagram is trying to crack down on bullying in its app with a feature that flags potentially abusive comments to their authors before they are posted, and another, mute-style feature that gives users control over comments on their posts.
 
On Monday, the Facebook-owned photo-sharing platform announced the new initiatives as it attempts to battle the problem of online harassment, an issue that it has faced increasing scrutiny over in recent months. It also comes as its parent company attempts to move past two years of near-constant scandals, often centred around how its social network was abused to spread harmful or toxic material.
 
In a public blog post, Instagram’s head Adam Mosseri wrote:
 
“Our mission is to connect you with the people and things you love, which only works if people feel comfortable expressing themselves on Instagram. We know bullying is a challenge many face, particularly young people. We are committed to leading the industry in the fight against online bullying, and we are rethinking the whole experience of Instagram to meet that commitment. We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves.”
 
One of the features is called “Restrict,” and works in a similar way to “mute” tools on some other social networks. If User A restricts User B, it means User B’s comments on User A’s posts don’t show up to anyone but User B themselves. Similarly, it will stop User B from seeing if User A has read any of their messages, or if they’re online – while not informing User B that they’ve been restricted.
 
It’s essentially a less extreme form of blocking someone: It reduces their ability to interact with you and comment publicly on your posts, but without eliminating them from your profile entirely.
 
“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” Mosseri wrote. “Some of these actions also make it difficult for a target to keep track of their bully’s behavior.”
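To make the Restrict behaviour concrete, here is a minimal sketch of the visibility rule described above. It assumes nothing about Instagram's actual implementation; the Account and Comment types and the visible_comments helper are invented purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    restricted: set = field(default_factory=set)  # usernames this account has restricted

@dataclass
class Comment:
    author: str
    text: str

def visible_comments(post_owner: Account, comments: list[Comment], viewer: str) -> list[Comment]:
    """Return the comments that `viewer` should see on one of post_owner's posts."""
    shown = []
    for c in comments:
        # A restricted user's comment is visible only to that user themselves.
        if c.author in post_owner.restricted and viewer != c.author:
            continue
        shown.append(c)
    return shown

# Example: User A restricts User B.
a = Account("user_a", restricted={"user_b"})
comments = [Comment("user_b", "hostile remark"), Comment("user_c", "nice photo!")]

print([c.text for c in visible_comments(a, comments, viewer="user_b")])  # User B still sees both
print([c.text for c in visible_comments(a, comments, viewer="user_a")])  # everyone else sees only User C's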
 
The second feature is a new tool that uses AI to detect whether a comment is likely to be offensive and flag it to its author before it is posted. The logic is that this gives people a chance to reflect on what they are doing, and perhaps change course: “From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect,” Mosseri wrote.
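As a purely illustrative sketch of that flow, the snippet below mocks up an offensiveness check that runs before a comment is published; the keyword list and scoring function stand in for Instagram's real AI model, which has not been made public.

OFFENSIVE_TERMS = {"idiot", "loser", "ugly"}  # illustrative placeholder list, not a real model

def offensiveness_score(text: str) -> float:
    """Crude stand-in score: fraction of words matching the placeholder term list."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in OFFENSIVE_TERMS)
    return hits / max(len(words), 1)

def submit_comment(text: str, confirmed: bool = False) -> str:
    # The comment is checked before publication; a flagged author is asked to
    # reconsider, but can still choose to post anyway.
    if offensiveness_score(text) > 0.0 and not confirmed:
        return "Are you sure you want to post this? (resubmit with confirmed=True to post anyway)"
    return "Comment posted."

print(submit_comment("you are such a loser"))        # author is warned first
print(submit_comment("you are such a loser", True))  # author chooses to post anyway
print(submit_comment("great shot!"))                 # posts immediately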





