user-inactivated  ·  2521 days ago  ·  link  ·    ·  parent  ·  post: Perspective API: Using machine learning to score the toxicity of online comments

Hm. I thought for a moment that it would be cool to introduce such a meter to Reddit (because it can be a harsh place) and to Hubski (to improve comment quality further - not that it needs it much), but then it occurred to me:

What if the result is wrong?

The code is in its infancy at most. What would the implications of such a mistake be for the human writing the comment? Suppose you write something important, and the mechanism rates it as "toxic". Will it stop you from posting it? Would it sow doubt in you? Could it bully you into never posting on the matter again (because it's a machine, and machines are "never wrong")?
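For the curious, here is a rough sketch of what such a pre-post check might look like against the Perspective API (the endpoint and request shape follow Google's public docs; the 0.8 threshold and the function names are my own assumptions, not anything the API prescribes):

```python
import json

# Endpoint from the Perspective API docs; a real call needs an API key.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def should_warn(score, threshold=0.8):
    """Gate the user with a warning rather than a hard block,
    since the model may simply be wrong (threshold is arbitrary)."""
    return score >= threshold

body = json.dumps(build_request("God is dead, and we killed him"))
# A real client would POST `body` to API_URL; the response carries the
# score under attributeScores.TOXICITY.summaryScore.value.
```

Note that `should_warn` only warns; whether a site should go further and actually block the post is exactly the worry raised above.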

On the more scientific side of things, I wonder what it would make of select phrases from philosophy books after analyzing the whole book. For example, would it still treat "God is dead, and we killed him" as toxic after seeing the reasoning that led to the phrase?