- The good folks at FIRE (the Foundation for Individual Rights in Education) have done a nice takedown of the piece (yay, counter-speech!), discussing ten different things the New Yorker gets wrong (over two separate posts), but I wanted to focus on one of the stranger arguments made in the article -- one that appears to slam "free speech extremists" as if they're crazy and have no rational basis.
"Speech nuts, like gun nuts, have amassed plenty of arguments, but they—we—are driven, too, by a shared sensibility that can seem irrational by European standards. And, just as good-faith gun-rights advocates don’t pretend that every gun owner is a third-generation hunter, free-speech advocates need not pretend that every provocative utterance is a valuable contribution to a robust debate, or that it is impossible to make any distinctions between various kinds of speech. In the case of online harassment, that instinctive preference for “free speech” may already be shaping the kinds of discussions we have, possibly by discouraging the participation of women, racial and sexual minorities, and anyone else likely to be singled out for ad-hominem abuse. Some kinds of free speech really can be harmful, and people who want to defend it anyway should be willing to say so."
Except, nearly everything said there about free speech "nuts" is wrong. Many are more than willing to admit that much of what they defend has absolutely no valuable contribution to a robust debate. But that's the point. Defending free speech is about recognizing that there will be plenty of value-less speech, but that you need to allow such speech in order to get the additional valuable speech.
It sounds a little silly, that valueless speech must be defended as part and parcel of (potentially) valuable speech, but I think there is a point there, and the author gets to the gist of it by the end:
- The really ridiculous point underlying all of this is this idea that the best response to speech we don't like -- or even speech that incites danger or violence -- is censorship. That is rarely proven true -- and (more importantly) only opens everyone else up to risks when people in power suddenly decide that your speech is no longer appropriate either. Totally contrary to what Sanneh claims in the article, free speech "nuts" don't believe that all speech is valuable to the debate. We just recognize that the second you allow someone in power to determine which speech is and isn't valuable, you inevitably end up with oppressive and coercive results. And that is a real problem.
Consider Facebook, which I think is a good example of the new social speech paradigm we live in. It's free and open to the public, based (ostensibly) around the idea of communication between people, and of course, it's a privately controlled, corporately owned space. I think most everyone would recognize that if a Facebook user is making horribly racist and offensive comments and posts, Facebook, as a private site, has the right (and perhaps even an obligation) to remove that person's speech from the community. As a "freeze peach" advocate, I can still recognize valueless speech when I see it. However, should anything change about the exercising of Facebook's rights if they choose to remove, ban or otherwise censor posts and comments about, say, a hypothetical protest of a BP drilling operation? Social internet companies regularly kowtow to oppressive regimes around the world, so I don't think they have any moral compunctions about the types of speech they see fit to limit; in practice, they want to limit any speech that could cut into their profits.
To me, where that line is drawn is the crux of what modern free speech means. How do you separate and deal with valueless (even harmful and damaging) speech while protecting worthwhile, unpopular speech (not unpopular in the sense of racist/not-racist, but rather unpopular in the eyes of corporate capital and political leaders)? I suppose the simple solution is to turn the task over to our corporate overlords and expect them to navigate morality in a responsible way, but personally, I'm distrustful of them, and I think for good reason. This is why I believe hubski has a good model so far: valueless speech which people determine to be harmful or damaging can be democratically isolated (or responded to with counter-speech), while no corporate or capital authority is empowered to set top-down boundaries on speech. I can understand that some people find it frustrating to be exposed to democracy's rough edges, but when I consider a world without those rough edges, I don't necessarily see some kind of utopia.
Edit: I'd just like to add, while it is fun and games to point fingers at your neighbors and blame all the idiot voters who seem to empower our dysfunctional political system literally every election cycle, the real patriarchy -- the real racist white fuckers we all hate, the ones who are racist not out of plain ignorance, but out of convenience and greed -- are the corporate captains in boardrooms all over the world. You don't want to hand the keys to the kingdom over to them.
I can't tell what your point is. Freedom of Speech/The Press is protected under the First Amendment. But it doesn't apply to what a corporation allows on its website. You recognize that it does more good than harm to remove harassing/violent/racist language or profiles, but more harm than good to remove the words or profile of an activist from a website. So...what? What's your point? That you think Facebook should have no tools at all to remove anything posted on its website? That won't happen, and doesn't have anything to do with the First Amendment anyway. I am confused by what you have written. Please elucidate.
Okay, perhaps I didn't explain myself clearly enough, and I'm trying to look at both sides of this debate from a slightly different angle, so I could see it being confusing. You say that freedom of speech is protected under 1A, but that it doesn't apply to what a corporation allows on its website. That is the problem, and maybe this goes against my "free speech" cred (I'm not a purist or an absolutist), but I think there should be carve-outs against corporate speech to protect individual speech. When we look at telecommunication companies, there are all sorts of regulations and rules enforcing neutrality. Despite the fact that they're private entities that own the cables and the servers, the speech that travels through them is not theirs to police, nor should it be, in my opinion. Social media companies are a slightly different beast, but the internet has completely revolutionized what we think of as the public square, and I see no reason not to reevaluate how our rights should apply in a new age, especially when you consider all the laws and court decisions about free speech that were made after the invention of mass media and telecommunications in the early 20th century. On one side of the debate, we have the "no freeze peach" crowd, which seems to take the point of view that corporations should have ultimate control over the boundaries of free speech (political speech, hate speech, advertising speech, you name it); then, we have the "freeze peach" crowd, which seems to argue to varying degrees for an individual right to expression, which would contravene the corporate right. If we frame the debate as "no freeze peach" people wanting civil discussions where women and minorities are welcome, and "freeze peach" people wanting the right to yell "FAGGOT" wherever they want, that's disingenuous, and it does a massive disservice to the debate I think we should be having: questioning the limits of corporate power.
Maybe this is just an issue of transparency. I posted this a while back, and I think it raises a valid point: Facebook doesn't disclose its content restrictions. Facebook has no qualms about removing content for any reason, moral or immoral; they have no respect for the universal human right to freedom of expression because no one forces them to, and it's more profitable to censor at the behest of governments and corporations. Here in the US, Facebook clearly restricts content, but they don't even bother to say what that content is. I'm not arguing that yelling a slur is somehow tantamount to an inalienable and universal human right, or that Facebook should have no control over the content of their servers, but I do think there should be more transparency about how and why corporations filter people's speech, and maybe even limits on the types of speech and circumstances under which they're controlled. rd95 made a good point in that post, saying people are free to not use Facebook, which is true, mostly. At least here in the western world, we have a fairly open market with plenty of choices (here we are on hubski) and only a handful of large corporate players in the social media field, but in other markets, which aren't so open, Facebook is working hard to cultivate a social media monopoly; I have no doubt they would do it here too if they could. Perhaps this just comes down to an issue of culture: social media users who are only interested in sharing cat pictures or Toyota and Pizza Hut advertisements are probably not going to be concerned with the threat of political censorship, so the idea of "Who is empowered to censor you?" simply doesn't resonate with them.
I suppose it's just a sign of the times, but I think that's only half the story; when you look at the people who are flooding into voat or hubski, the dearth of young people using Facebook, and how heated discussions of "freeze peach" get, I think people do have a latent recognition of how important and valuable communication is. It's a completely reasonable expectation that communities give users tools to manage and control the speech that they're exposed to. I am extremely sympathetic to the idea that hateful or offensive speech has a chilling effect on the participation of women and minorities; when we look at all the advantages of democracy, the big and obvious time-worn downside is tyranny of the majority, and if we are to expect a democratic mechanism to police speech, this is an issue that we should be concerned about. At the same time, if our reaction to the tyranny of majority speech is to seek out a benevolent dictator, someone empowered with absolute control over the boundaries of speech, then I think we need to take a very careful and cautious look at that benevolent dictator. Corporations pursue profit above all else, and they will be happy to enforce civility insofar as it is profitable; however, I think it is a mistake to think they're doing it because they're concerned about the participation of women and minorities. Furthermore, without legal limitation, unaccountable and opaque control over the "political correctness" of speech can potentially function as little more than a smokescreen for censorship and oppression. It's easy to support unaccountable and opaque control of speech when it feels like it's in your favor, but when the shoe is on the other foot, what then? 
#BlackLivesMatter is a great hashtag, and everyone should support the idea that black lives matter as much as the lives of any other race, but if Facebook, Twitter, or Reddit decided to block #BlackLivesMatter (as is their right) because [insert reason here] (not that they even need to give a reason), would your feelings change? I think social media companies understand the Streisand effect, and currently, that is the biggest thing tying their hands as far as censorship of a popular political movement goes, insofar as it could have a knock-on effect on their bottom line. The fact of the matter is, I think there will be political movements (there already have been some), here and abroad, that threaten powerful vested interests, both governments and private capital, and I think the open question is: how much control will those interests be able to exert over communication in social media to subvert such movements and protect themselves? Freedom of speech is a very broad, general idea, but in practice, we have to remember what its purpose is. This has been a pretty long and rambling comment, but I hope this clarifies where I'm coming from. Lastly, just to add, I'm under no illusions that our current political climate or justice system is likely to pass laws or interpret the constitution in ways that empower individuals at the expense of corporate capital power, so what I think should happen is probably pretty far from what is likely to happen. Also, I'm as far from a lawyer as you can get, so I'd be happy to see a formal critique of my ideas from a legalistic standpoint. Whether 1A could simply be interpreted differently, or a full amendment would be required, is debatable; I think Citizens United is another free speech issue which deserves to be reexamined by society, and I could probably write at length about that as well.
I see. I couldn't tell that from your OP. Thank you for clarifying.
Internet companies can make your point of view stop existing on their sites whether they allow racism or not. You should try to post something on Reddit in support of gun control after a local school shooting and see how much the moderators of your supposedly liberal city's sub-Reddit support free speech. The idea that all speech should be allowed everywhere is preposterous. It never has been. You're just standing up for racism.
I'm not sure I understand what you're saying. I don't think reddit as a whole is particularly conservative or liberal, although most of its user base certainly is one or the other. Considering the way moderation works on reddit, it's also not very democratic, and you're absolutely correct in the sense that internet companies have always had the right to make points of view disappear from their sites. My judgement about "free speech" comes down to the question of "when is that socially acceptable?" If /r/news wants to ban the word "nigger" or "faggot" from its comments, I think that would and should be considered socially acceptable; however, if they want to ban the words "Trans Pacific Partnership," I don't think that should be considered acceptable. Similarly, if Facebook wants to remove the profile of an "offensive racist," okay, I can see that being reasonable; but if Facebook does that to the profile of a protester because Facebook is receiving market access or capital from an authoritarian organization, I don't think that's right. As it currently is, both those communities can act in either of those ways, but my point is that in each of those examples, one of the possibilities falls short of what we should expect from our universal human right to freedom of expression. How do you protect the latter, while minimizing the former? I do not ever want to "stand up for racism," but my fear of being labeled a racist will not stop me from speaking up for something fundamental like freedom of expression. I think it is important for communities to have tools to address and limit the detrimental effects of hateful speech, but at the same time, those tools need to be targeted and balanced so they are not abused or exploited for purposes beyond their intended scope.