Elon Musk & Twitter
The new head of trust and safety for Elon Musk’s Twitter told media that the company is leaning heavily on automation to moderate content, doing away with certain manual reviews and favouring restrictions on a tweet’s reach rather than removing it outright.
Ella Irwin, Vice President of Trust and Safety Products at Twitter, said that the company is also making it harder to abuse hashtags and search results in areas like child exploitation, even if it might affect the “good uses” of those terms.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Irwin said on Thursday, in the first interview given by a Twitter executive since Musk bought the social media company in late October.
Researchers are reporting a rise in hate speech on the social media service after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or posted “egregious spam.”
Since Musk fired half of Twitter’s staff and issued an ultimatum to work long hours that prompted hundreds more departures, the company has faced persistent questions about its ability and willingness to remove harmful and illegal content.
And advertisers, Twitter’s main source of income, have left the platform because they are worried about the safety of their brands.
Musk vowed on Friday “significant reinforcement of content moderation and protection of freedom of speech” in a meeting with French President Emmanuel Macron.
Irwin said that Musk told the team to worry less about how their actions would affect user growth or revenue because the company’s main goal was to keep people safe. “He emphasises that every single day, multiple times a day,” she added.
How safety is handled
Former Twitter employees familiar with the work say that at least some of what Irwin described accelerates changes that had been planned since last year for how Twitter handles hate speech and other policy violations.
One approach, summed up by the industry slogan “freedom of speech, not freedom of reach,” is to leave up tweets that break the company’s rules but keep them out of the home timeline and search results.
Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach gives people more freedom of speech while reducing the harm that could come from abusive content going viral.
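To make the mechanism concrete, here is a minimal illustrative sketch of visibility filtering, not Twitter’s actual implementation: the surface names and the upstream `violates_policy` flag are assumptions.

```python
from dataclasses import dataclass

# Surfaces a tweet can appear on; the names here are hypothetical.
HOME_TIMELINE = "home_timeline"
SEARCH = "search"
PROFILE = "profile"
DIRECT_LINK = "direct_link"

@dataclass
class Tweet:
    tweet_id: int
    text: str
    violates_policy: bool  # assumed output of an upstream classifier or review

def visible_surfaces(tweet: Tweet) -> set[str]:
    """"Freedom of speech, not freedom of reach": a violating tweet stays
    up (reachable from the author's profile or a direct link) but is kept
    off amplifying surfaces such as the home timeline and search."""
    if tweet.violates_policy:
        return {PROFILE, DIRECT_LINK}
    return {HOME_TIMELINE, SEARCH, PROFILE, DIRECT_LINK}

# Example: a flagged tweet is de-amplified rather than deleted.
flagged = Tweet(tweet_id=1, text="...", violates_policy=True)
assert SEARCH not in visible_surfaces(flagged)
assert PROFILE in visible_surfaces(flagged)
```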
In one example of the tension, the Center for Countering Digital Hate found that the number of tweets containing hateful content rose sharply in the week before Musk tweeted on Nov. 23 that impressions, or views, of hateful speech were declining: researchers point to the prevalence of hateful content, while Musk points to its reduced visibility.
The number of anti-Black tweets that week was three times higher than in the month before Musk took over, and the number of tweets with a gay slur went up by 31%, the researchers said.
Irwin joined the company in June, after safety roles at Amazon.com and Google. She disputed the suggestion that Twitter lacked the resources or the will to protect the platform.
She said that the layoffs did not significantly affect the full-time employees or contractors working in the company’s “Health” divisions, which include child safety and content moderation.
Two people familiar with the cuts said that more than half of the health engineering unit was laid off. Irwin did not immediately respond to a request for comment on that assertion, but she had previously denied that the layoffs significantly affected the health team.
She also said that the number of people working on child safety had not changed since the acquisition and that the team’s product manager was still in place. Irwin added that Twitter had backfilled some positions vacated by departing employees, though she declined to say how many people had left.
She said that Musk was focused on relying more on automation, arguing that the company had erred in the past by using time- and labour-intensive human reviews of harmful content.
“He’s encouraged the team to take more risks, move fast, and get the platform safe,” she added.
On child safety, Irwin said that Twitter now automatically takes down tweets reported by trusted figures with a track record of accurately flagging harmful posts.
Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specialises in child sexual abuse material, said that she has seen Twitter take down some content as quickly as 30 seconds after she reports it, without acknowledging that it received her report or confirming its decision.
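As an illustration of how such a pipeline could work, here is a hypothetical sketch, not Twitter’s actual system: the trust threshold, the accuracy bookkeeping, and the function names are all assumptions.

```python
from dataclasses import dataclass

TRUST_THRESHOLD = 0.95  # hypothetical accuracy cut-off for auto-removal

@dataclass
class Reporter:
    name: str
    accurate_reports: int = 0
    total_reports: int = 0

    def accuracy(self) -> float:
        # Fraction of this reporter's past flags that were confirmed correct.
        if self.total_reports == 0:
            return 0.0
        return self.accurate_reports / self.total_reports

def handle_report(reporter: Reporter, tweet_id: int) -> str:
    """Auto-remove content flagged by reporters with a strong track record;
    route everything else to the ordinary review queue."""
    if reporter.accuracy() >= TRUST_THRESHOLD:
        return f"tweet {tweet_id}: removed automatically"
    return f"tweet {tweet_id}: queued for manual review"

# Example: a researcher whose flags have almost always been correct.
researcher = Reporter("threat-intel researcher", accurate_reports=99, total_reports=100)
print(handle_report(researcher, tweet_id=42))  # -> removed automatically
```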
In the Thursday interview, Irwin said that Twitter had taken down about 44,000 accounts involved in child safety violations, working with the cybersecurity group Ghost Data.
Twitter is also restricting hashtags and search results frequently associated with abuse, such as those aimed at finding “teen” pornography. Past concerns about how such restrictions might affect permitted uses of those terms were gone, she said.
The use of “trusted reporters” was “something we’ve discussed in the past at Twitter, but there was some hesitancy and frankly just some delay,” Irwin said.
“I think we now have the ability to actually move forward with things like that,” she added.