Perspective is an API that makes it easier to host better conversations. The API uses machine learning models to score the perceived impact a comment might have on a conversation. Developers and publishers can use this score to give real-time feedback to commenters, help moderators do their job, or allow readers to more easily find relevant information, as illustrated in two experiments below. We'll be releasing more machine learning models later in the year, but our first model identifies whether a comment could be perceived as "toxic" to a discussion.

This is a project out of Google's think tank, Jigsaw. You can test how the API scores things using the text box near the bottom of the page; it's interesting to see how a single word can change the 'perceived toxicity' score.
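For anyone curious what using the score programmatically might look like, here is a minimal Python sketch that posts a comment to Perspective's comments:analyze endpoint and reads back a toxicity score. The endpoint path, request fields, and TOXICITY attribute follow the publicly documented Comment Analyzer API, but treat this as an illustration rather than a definitive client: YOUR_API_KEY is a placeholder and toxicity_score is just a helper name for this example.

```python
import requests

# Placeholder key; a real one comes from a Google Cloud project with the
# Comment Analyzer (Perspective) API enabled.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text):
    """Request a TOXICITY summary score (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# Swapping a single word between two otherwise identical comments is an
# easy way to see the score move, much like the demo text box on the page.
print(toxicity_score("You are a wonderful person."))
print(toxicity_score("You are a terrible person."))
```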

user-inactivated:

Hm. I thought for a moment that it would be cool to introduce such a meter to Reddit (because it can be a harsh place) and to Hubski (to improve comment quality further, not that it needs it much), but then it occurred to me:

What if the result is wrong?

The code is in its childhood at most. What will the implications of such a mistake be for the human writing it? Suppose you write something important, and the mechanism rates it as "toxic". Will it stop you from posting it? Would it sow doubt in you? Could it bully you into refusing to post anything further on the matter (because it's a machine, and machines are "never wrong")?

On the more scientific side of things, I wonder what it would make of select phrases from philosophy books after analyzing the whole book. For example, would it treat "God is dead, and we killed him" as toxic after seeing the reasoning that led to the phrase?

