harassment


In trying to clear “confusion” over anti-harassment policy, YouTube creates more confusion

After a series of tweets that made it seem as if YouTube was contradicting its own anti-harassment policies, the video platform published a blog post in an attempt to clarify its stance. But even though the post is supposed to “provide more details and context than is possible in any one string of tweets,” it is similarly confusing, and it raises yet more questions about how serious YouTube is about combatting harassment and hate speech on its platform, especially when the abuse comes from a high-profile channel with millions of subscribers.

YouTube is currently under fire for not taking earlier, more decisive actions against conservative commentator Steven Crowder after he made homophobic and racist comments about Vox reporter Carlos Maza in multiple videos. The platform eventually demonetized Crowder’s channel, which currently has more than 3.8 million subscribers, but then stated it would allow Crowder to start making ad revenue again if he fixed “all of the issues” with his channel and stopped linking to an online shop that sold shirts saying “Socialism is for f*gs.”

Before demonetizing Crowder’s channel, YouTube responded to Maza in a series of tweets that created confusion about how it enforces its policies. The platform said that after an “in-depth review” of flagged videos by Crowder, it decided that even though the language they contained was “clearly hurtful,” the videos did not violate its policies because “as an open platform, it’s crucial for us to allow everyone-from creators to journalists to late-night TV hosts-to express their opinions w/in the scope of our policies.” This was in spite of the fact that Crowder’s derogatory references to Maza’s ethnicity and sexual orientation violate several of YouTube’s policies against harassment and cyberbullying, including the prohibition on “content that makes hurtful and negative personal comments/videos about another person.”

I’ve been called an anchor baby, a lispy queer, a Mexican, etc. These videos get millions of views on YouTube. Every time one gets posted, I wake up to a wall of homophobic/racist abuse on Instagram and Twitter.

— Carlos Maza (@gaywonk) May 31, 2019

In the new blog post, posted by YouTube head of communications Chris Dale, the platform gives a lengthy explanation of how it attempts to draw the line between things like “edgy stand-up comedy routines” and harassment.

As an open platform, we sometimes host opinions and views that many, ourselves included, may find offensive. These could include edgy stand-up comedy routines, a chart-topping song, or a charged political rant — and more. Short moments from these videos spliced together paint a troubling picture. But, individually, they don’t always cross the line.

There are two key policies at play here: harassment and hate speech. For harassment, we look at whether the purpose of the video is to incite harassment, threaten or humiliate an individual; or whether personal information is revealed. We consider the entire video: For example, is it a two-minute video dedicated to going after an individual? A 30-minute video of political speech where different individuals are called out a handful of times? Is it focused on a public or private figure? For hate speech, we look at whether the primary purpose of the video is to incite hatred toward or promote supremacism over a protected group; or whether it seeks to incite violence. To be clear, using racial, homophobic, or sexist epithets on their own would not necessarily violate either of these policies. For example, as noted above, lewd or offensive language is often used in songs and comedic routines. It’s when the primary purpose of the video is hate or harassment. And when videos violate these policies, we remove them.

In the case of Crowder’s persistent attacks on Maza, YouTube repeated its stance that the videos flagged by users “did not violate our Community Guidelines.”

The decision to demonetize Crowder’s channel was made, however, because “we saw the widespread harm to the YouTube community resulting from the ongoing pattern of egregious behavior, took a deeper look, and made the decision to suspend monetization,” Dale wrote.

In order to start earning ad revenue again, “all relevant issues with the channel need to be addressed, including any videos that violate our policies, as well as things like offensive merchandise,” he added.

The latest YouTube controversy is both upsetting and exhausting, because it is yet another reminder of the company’s lack of action against hate speech and harassment, despite constantly insisting that it will do better (just yesterday, for example, YouTube announced that it will ban videos that support views like white supremacy or Nazi ideology, or that promote conspiracy theories denying events like the Holocaust or Sandy Hook).

The passivity of social media companies when it comes to stemming the spread of hate through their platforms has real-life consequences (for example, when Maza was doxxed and harassed by fans of Crowder last year), and no amount of prevarication or distancing can undo the damage once it’s been done.

Twitter will suspend repeat offenders posting abusive comments on Periscope live streams

As part of Twitter’s attempted crackdown on abusive behavior across its network, the company announced on Friday afternoon a new policy aimed at those who repeatedly harass, threaten or otherwise make abusive comments during a Periscope broadcaster’s live stream. According to Twitter, the company will begin to more aggressively enforce its Periscope Community Guidelines by reviewing, and suspending, the accounts of habitual offenders.

The plans were announced via a Periscope blog post and tweet that said everyone should be able to feel safe watching live video.

We’re committed to making sure everyone feels safe watching live video, whether you’re broadcasting or just tuning in. To create safer conversation, we’re launching more aggressive enforcement of our guidelines. https://t.co/dQdtnxCfx6

— Periscope (@PeriscopeCo) July 27, 2018

Currently, Periscope’s comment moderation policy involves group moderation.

That is, when one viewer reports a comment as “abuse,” “spam” or selects “other reason,” Periscope’s software will then randomly select a few other viewers to take a look and decide if the comment is abuse, spam or if it looks okay. The randomness factor here prevents a person (or persons) from using the reporting feature to shut down conversations. Only if a majority of the randomly selected voters agree the comment is spam or abuse does the commenter get suspended.

However, this suspension only disables their ability to chat during the broadcast itself — it doesn’t prevent them from continuing to watch other live broadcasts and make further abusive remarks in the comments. Though they would risk another temporary ban by doing so, they can still disrupt the conversation and make the video creator — and their community — feel threatened or otherwise harassed.

Twitter says that accounts whose comments repeatedly earn them these in-broadcast suspensions will soon have their accounts reviewed and suspended outright. This enhanced enforcement begins on August 10, and is one of several changes Twitter is making across Periscope and Twitter focused on user safety.

To what extent those changes have been working is questionable. Twitter may have policies in place around online harassment and abuse, but its enforcement has been hit-or-miss. Still, ridding its platform of unwanted accounts — including spam, despite the impact on monthly active user numbers — is something the company must do for its long-term health. The fact that so much hate and abuse is seemingly tolerated or overlooked on Twitter has been an issue for some time, and the problem continues today. It could also be one of the factors in Twitter’s stagnant user growth. After all, who willingly signs up for harassment?

The company is at least attempting to address the problem, most recently by acquiring the anti-abuse technology provider Smyte. Smyte’s transition to Twitter didn’t go so well, but its technology could help Twitter address abuse at a greater scale in the future.

Ripcord CEO faces allegations of improper behavior

Perry Coneybeer, who left college at age 19 to work full-time at corporate file digitizing startup Ripcord, is alleging improper behavior by Ripcord CEO Alex Fielding. Coneybeer also alleges she was fired in retaliation for reporting a fellow employee to human resources. In a Medium post published today, Coneybeer alleges Fielding told graphic, sexual stories involving other employees. One… Read More

500 Startups’ Dave McClure apologizes for ‘multiple’ advances toward women and being a ‘creep’

Taking an alternative approach to Chris Sacca’s pre-publication statement on the New York Times’ story of his alleged misconduct, Dave McClure waited a day to respond to the allegations made in yesterday’s article.
In a Medium post published Saturday evening, McClure neither denies nor defends the actions that he says have cost him the executive position at the firm he founded. Read More
