Here Are Twitter’s Latest Rules for Fighting Hate and Abuse

By Erin Griffith


When Twitter could take credit for revolutionary political movements like the Arab Spring, it was easy for the company’s executives to joke about their liberal stance on free speech. (Twitter, they said, was “the free speech wing of the free speech party.”) But things are a bit more complicated now, as Twitter increasingly plays host to bullies, harassers, Nazis, propaganda-spreading bots, ISIS recruiters, and threats of nuclear war. Twitter’s toxic content problem isn’t just bad for humanity—it’s bad for business, driving people away from the platform.

In early 2016 the company began altering its stance on free speech, forming a Trust and Safety Council made up of safety groups, advocates, and researchers to help it address the problem. But critics are not satisfied with the results. Reports have outlined many instances of the company’s failure to punish harassers; these shortcomings make Twitter’s recent missteps all the more frustrating. Last week the company disabled features of actor Rose McGowan’s account at a crucial moment amid the Harvey Weinstein sexual misconduct scandal. Groups of women boycotted the site for a day in protest. Twitter’s typical response to complaints about hate and harassment is to affirm its commitment to transparency. But even that is becoming a punch line.

On Friday CEO Jack Dorsey announced plans to act more aggressively. Twitter will introduce new rules around unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorify violence, he tweeted. To add a sense of urgency, the company is holding daily meetings on the issue.

After Tuesday’s meeting, Twitter’s head of safety policy emailed members of its Trust & Safety Council with detailed plans for the new rules, which the company expects to implement in the coming weeks.

The new plans stop short of sweeping measures such as banning pornography or specific groups like Nazis. Rather, they offer expanded features such as allowing observers of unwanted sexual advances—as well as victims—to report them, and expanded definitions, such as including “creep shots” and hidden camera content under the definition of “non-consensual nudity.” The company also plans to hide hate symbols behind a “sensitive image” warning, though it has not yet defined what qualifies as a hate symbol. Twitter also says it will take unspecified enforcement actions against “organizations that use/have historically used violence as a means to advance their cause.”

It’s not the first time Twitter has targeted violent groups; its rules already prohibit threatening or promoting terrorism. But the new rules show the company is open to expanding that prohibition to any group promoting violence. The company’s new steps also show how Twitter, like Facebook and other digital-media platforms that host user-generated content, struggles with how much editorial oversight and human judgment to introduce.

In a statement, Twitter said, “Although we planned on sharing these updates later this week, we hope our approach and upcoming changes, as well as our collaboration with the Trust and Safety Council, show how seriously we are rethinking our rules and how quickly we’re moving to update our policies and how we enforce them.”

Here’s the email in full:

Dear Trust & Safety Council members,

I’d like to follow up on Jack’s Friday night Tweetstorm about
upcoming policy and enforcement changes. Some of these have already
been discussed with you via previous conversations about the Twitter
Rules update. Others are the result of internal conversations that we
had throughout last week.

Here’s some more information about the policies Jack mentioned as well
as a few other updates that we’ll be rolling out in the weeks ahead.

Non-consensual nudity

  • Current approach
    We treat people who are the original, malicious posters of non-consensual nudity the same as we do people who may unknowingly Tweet the content. In both instances, people are required to delete the Tweet(s) in question and are temporarily locked out of their accounts. They are permanently suspended if they post non-consensual nudity again.
  • Updated approach
    We will immediately and permanently suspend any account we identify as the original poster/source of non-consensual nudity, as well as any account where a user makes it clear they are intentionally posting said content to harass their target. We will do a full account review whenever we receive a Tweet-level report about non-consensual nudity. If the account appears to be dedicated to posting non-consensual nudity, then we will suspend the entire account immediately.

    Our definition of “non-consensual nudity” is expanding to more broadly include content like upskirt imagery, “creep shots,” and hidden camera content. Given that people appearing in this content often do not know the material exists, we will not require a report from a target in order to remove it.

    While we recognize there’s an entire genre of pornography dedicated to this type of content, it’s nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather err on the side of protecting victims and remove this type of content when we become aware of it.

Unwanted sexual advances

  • Current approach
    Pornographic content is generally permitted on Twitter, and it’s challenging to know whether or not sexually charged conversations and/or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we currently rely on reports from participants in the conversation, and take enforcement action only if/when we receive one.
  • Updated approach
    We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation. Once our improvements to bystander reporting go live, we will also leverage past interaction signals (e.g., block, mute, etc.) to help determine whether something may be unwanted and action the content accordingly.

Hate symbols and imagery (new)

We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc. will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence). More details to come.

Violent groups (new)

We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause. More details to come here as well (including insight into the factors we will consider to identify such groups).

Tweets that glorify violence (new)

We already take enforcement action against direct violent threats (“I’m going to kill you”), vague violent threats (“Someone should kill you”), and wishes/hopes of serious physical harm, death, or disease (“I hope someone kills you”). Moving forward, we will also take action against content that glorifies violence (“Praise be to [the attacker] for shooting up [the target]. He’s a hero!”) and/or condones it (“Murdering [a group] makes sense. That way they won’t be a drain on social services”). More details to come.

We realize that a more aggressive policy and enforcement approach will
result in the removal of more content from our service. We are
comfortable making this decision, assuming that we will only be
removing abusive content that violates our Rules. To help ensure this
is the case, our product and operational teams will be investing
heavily in improving our appeals process and turnaround times for
their reviews.

In addition to launching new policies, updating enforcement processes
and improving our appeals process, we have to do a better job
explaining our policies and setting expectations for acceptable
behavior on our service. In the coming weeks, we will be:

  • updating the Twitter Rules as we previously discussed (+ adding in these new policies)
  • updating the Twitter media policy to explain what we consider to be adult content, graphic violence, and hate symbols
  • launching a standalone Help Center page to explain the factors we consider when making enforcement decisions and describe our range of enforcement options
  • launching new policy-specific Help Center pages to describe each policy in greater detail, provide examples of what crosses the line, and set expectations for enforcement consequences
  • updating outbound language to people who violate our policies (what we say when accounts are locked, suspended, appealed, etc.)

We have a lot of work ahead of us and will definitely be turning to
you all for guidance in the weeks ahead. We will do our best to keep
you looped in on our progress.

All the best,

Head of Safety Policy

