Twitter's 'not good' button starts global testing with down arrow
A button to flag inappropriate replies to Twitter?

For months now, Twitter has been testing a "not good" button as a counterpart to the "like" heart. Now the down-arrow button is being rolled out experimentally worldwide.

Aiming for Peaceful Twitter

The purpose of the button is to let Twitter know whether the replies below a tweet are the kind of conversation people actually want to see, so that reply quality can be improved. Downvotes are private: if someone downvotes your reply, you won't know.

We've been testing how we can surface the most relevant replies within Tweets with the use of downvoting on replies. As we're expanding the experiment to a global audience, we want to share a little about what we have learned thus far! 👇 https://t.co/wM0CpwRgo6

— Twitter Safety (@TwitterSafety) February 3, 2022


Various buttons have been experimented with

Twitter has experimented with several designs: an up arrow for "agree" paired with a down arrow for "disagree," a heart for "like" paired with a down arrow, and thumbs-up/thumbs-down icons.

Some of you on iOS may see different options to up or down vote on replies. We're testing this to understand the types of replies you find relevant in a convo, so we can work on ways to show more of them. Your downvotes aren't public, while your upvotes will be shown as likes.pic.twitter.com/hrBfrKQdcY

— Twitter Support (@TwitterSupport) July 21, 2021

These designs were shown experimentally to randomly selected users, and based on the results, Twitter settled on simply adding a down arrow. Both designs reportedly helped test participants flag replies they found irrelevant or inappropriate, which helped improve reply quality.

The rollout starts on the web, with iOS and Android to follow. It may show up in your timeline soon.

There's also a feature that prompts you to reconsider before posting

Twitter will also implement a feature that prompts users to review replies containing offensive language before tweeting; in testing, more than 30% of prompted users went on to edit or delete their reply.

We've been working on a new feature that prompts users to reconsider Tweet replies containing harmful language. After seeing that users changed or deleted their replies over 30% of the time when prompted, we published new research on this new approach. https://t.co/tLDb294MoKpic.twitter.com/qIInSzPPMA

— Twitter Safety (@TwitterSafety) February 3, 2022

It's easy to dash off a reply in the heat of the moment and post it as-is. But when prompted to take a second look, users reportedly change their minds more than 30% of the time. Will these features reduce malicious posts and make Twitter an easier-to-use social network for everyone? I'm looking forward to finding out.

Source: Twitter (1, 2) via HYPEBEAST, SLASH GEAR