But that relentless focus on private, easy sharing did not account for second- and even third-order effects at scale: What happens when there are more than a billion people using the service? What happens when some of those people have a limited understanding of the technology they’re using, of the perfidy of the broader internet? And what happens when an incitement to violence can be shared instantly with hundreds of people who can each share it with hundreds more?
I'm pretty certain I disagree with this notion. Still, it got me thinking: can you put the blame on anyone when tech meets people who don't understand said tech (or are too gullible for it)?
It immediately reminded me of the Blue Whale challenge, a VK-based hoax that led to alleged suicides, arrests, and moral panic across Russia.
Compare also RTLM, the Rwandan radio station that played a central role in propagating genocidal calls to action. I'd argue that whatever communication technology prevails can (and sadly probably will) be abused to incite violence. This is not a WhatsApp-specific problem.