I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue.
We have a responsibility to protect your data, and if we can't, then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure it doesn't happen again. The good news is that we already took the most important actions years ago to prevent this from happening again today. But we also made mistakes; there's more to do, and we need to step up and do it.
Here's a timeline of the events:
In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends' birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.
In 2013, a Cambridge University researcher named Aleksandr Kogan created a personality quiz app. It was installed by around 300,000 people, who shared their own data as well as some of their friends' data. Given the way our platform worked at the time, this meant Kogan was able to access data from tens of millions of their friends (300,000 installers each exposing data for even a couple hundred friends puts the total in that range).
In 2014, to prevent abusive apps, we announced that we were changing the entire platform to dramatically limit the data apps could access. Most importantly, apps like Kogan's could no longer ask for data about a person's friends unless their friends had also authorized the app. We also required developers to get approval from us before they could request any sensitive data from people. These actions would prevent any app like Kogan's from being able to access so much data today. (A rough sketch of this rule change follows the timeline below.)
In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent, so we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.
Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services. Cambridge Analytica claims to have already deleted the data and has agreed to a forensic audit by a firm we hired to confirm this. We're also working with regulators as they investigate what happened.
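To make the 2014 rule change above concrete, here is a minimal sketch of the before-and-after permission logic. All names and data structures here are hypothetical illustrations, not the actual Platform API.

```python
# A minimal, hypothetical sketch of the 2014 rule change described in the
# timeline above. Names and data structures are illustrative -- this is
# not the actual Facebook Platform API.

friends = {
    "quiz_taker": ["alice", "bob", "carol"],
}
app_users = {"quiz_taker", "bob"}  # people who authorized the app themselves

def accessible_friends_pre_2014(user):
    # Old behavior: one install exposed data about ALL of that user's
    # friends, whether or not those friends ever touched the app.
    return friends[user]

def accessible_friends_post_2014(user):
    # New behavior: a friend's data is reachable only if that friend
    # has ALSO authorized the same app.
    return [f for f in friends[user] if f in app_users]

print(accessible_friends_pre_2014("quiz_taker"))   # ['alice', 'bob', 'carol']
print(accessible_friends_post_2014("quiz_taker"))  # ['bob']
```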
This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.
In this case, we already took the most important steps back in 2014 to prevent bad actors from accessing people's information in this way. But there's more we need to do, and I'll outline those steps here:
First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban from our platform any developer who does not agree to a thorough audit. And if we find developers who misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well.
Second, we will restrict developers' data access even further to prevent other kinds of abuse. For example, we will remove developers' access to your data if you haven't used their app in three months. We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address. We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we'll have more changes to share in the next few days. (A rough sketch of these first two rules follows the list below.)
Third, we want to make sure you understand which apps you've allowed to access your data. In the next month, we will show you a tool at the top of your News Feed with the apps you've used and an easy way to revoke those apps' permissions to your data. We already have a tool that does this in your privacy settings, and now we will put it at the top of News Feed to make sure everyone sees it.
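As referenced above, here is a rough sketch of the first two restrictions in the second step (the three-month expiry and the reduced sign-in data). The field names and the 90-day cutoff are assumptions for illustration only, not the actual implementation.

```python
# Hypothetical sketch of two of the restrictions above. Field names and
# the 90-day cutoff are illustrative assumptions, not real platform code.
from datetime import datetime, timedelta

BASIC_SIGNIN_FIELDS = {"name", "profile_photo", "email"}
INACTIVITY_CUTOFF = timedelta(days=90)  # "three months", approximated

def still_has_access(last_used: datetime, now: datetime) -> bool:
    # Rule 1: an app keeps access to your data only if you've used it recently.
    return (now - last_used) <= INACTIVITY_CUTOFF

def signin_payload(profile: dict) -> dict:
    # Rule 2: sign-in hands over only name, profile photo, and email; posts
    # or other private data would need separate approval and, per the post
    # above, a signed contract.
    return {k: v for k, v in profile.items() if k in BASIC_SIGNIN_FIELDS}

profile = {"name": "Alice", "profile_photo": "alice.jpg",
           "email": "alice@example.com", "posts": ["..."]}
print(signin_payload(profile))                 # only the three basic fields
print(still_has_access(datetime(2018, 1, 1),
                       datetime(2018, 5, 1)))  # False: 120 days idle
```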
Beyond the steps we had already taken in 2014, I believe these are the next steps we must take to continue to secure our platform.
I started Facebook, and at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward.
I want to thank all of you who continue to believe in our mission and work to build this community together. I know it takes longer to fix all these issues than we'd like, but I promise you we'll work through this and build a better service over the long term.
Here's the thing about Facebook: it does not have a design. It does not have an architecture. It does not have a product roadmap. Software development is performed by semi-isolated teams working on very narrow aspects of the product. The biggest team, with the broadest mandate, is the group that designs the Main Feed, which is what you see when you first log in. But just about every link you click on Facebook has a distinct, separate development team working on it, honing it, iterating, trying and testing new things, live, with your real data.

There is no oversight. There are rules and guidelines, but there is no formal review process before a team pushes out updates, and the site has been broken by a dev promoting broken code many ... times ... and manipulated before. So to ever think that a corporate policy or rule would be consistently implemented across the entire platform is naive, and shows a lack of understanding of what more than 6,000 programmers do every day on a website. Facebook is a billion-headed hydra powered by clicks and impressions, and none of the heads are really very interested in each other; they may eat one another without realizing they are eating themselves.

The old corporate model has been imposed upon Facebook by outside investors who buy the stock and expect X, Y, and Z from the Board of Directors, and a top-down management approach. That ain't Facebook. Every developer has the ability to destroy the platform at any time on any day. The reason it works so often is simply because a bad actor has not started working there yet... or, more likely, they have been working there for a long time already and simply haven't been found, because there is no audit trail in place.
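To make the "no audit trail" point concrete: this is roughly the minimal record-keeping that's missing. A generic sketch, obviously, not anything resembling Facebook's actual code:

```python
# A generic, minimal sketch of a data-access audit trail -- the kind of
# record-keeping the comment above says is absent. Purely illustrative.
import functools
import json
import time

def audited(resource: str):
    """Decorator that appends a structured log entry for every access."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(actor, *args, **kwargs):
            entry = {
                "ts": time.time(),
                "actor": actor,        # which developer/app/service asked
                "resource": resource,  # what class of data was touched
                "args": repr(args),
            }
            with open("audit.log", "a") as log:
                log.write(json.dumps(entry) + "\n")
            return fn(actor, *args, **kwargs)
        return inner
    return wrap

@audited("user_profile")
def read_profile(actor, user_id):
    return {"user_id": user_id}  # stand-in for a real data fetch

read_profile("quiz_app_123", "alice")  # leaves a line in audit.log
```

With even that much in place, "they simply haven't been found" stops being a permanent condition.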
It's a good read, and I agree with the point made. However... I don't think you can have that discussion. The only way Facebook would change is by changing its whole philosophy, and that's set. They've already violated user trust multiple times. And wasn't the Cambridge Analytica situation about them selling user data? (I might be wrong, but I have a notion of Facebook selling user data to third parties in my head from some place.) It seems that, as an entity, Facebook has made its choice. Sure, Facebook is still made of people, and people's opinions are subject to change, but how much power could a grunt have in a billion-dollar private corporation? And the high circle? Isn't it because their minds are made up that things go this way?

Our colleague Sheera Frenkel put this well on Twitter: "If you want to delete Facebook, go ahead. Just know that's a privilege. For much of the world, Facebook is the internet and only way to connect to family/friend/business. That's why it's important to have a real discussion re Facebook's security/privacy issues."
It's a discussion worth having. I was talking to my wife the other day about how much culpability Facebook has in all this. I pointed out that Facebook was a tech company like Reddit... and that I, personally, had chatted, had phone calls, had in-person sit-downs with five successive community managers at Reddit over the erosive effects they were having on online discussion, the vectors they were opening up to weaponizing community, the negative impact their inaction was having on the character of the internet at large. And five successive community managers stared at me (figuratively and literally) with a mix of disinterest and helplessness. And when they came for me, I left. One of the main guys who started /r/The_Donald was a troll I used to curb-stomp regularly. They're not sophisticated. But when the waters in which they swim aren't kept safe for other swimmers, the sharks take over. One man with a harpoon is useless.

Facebook and Reddit have a lot in common: their monetization is based around virulence and impressions. The more inbound links they generate, the more eyeballs they poke, the more clicks they generate, the more money they make. And controversy sells. Anger sells. Base emotion sells. Long-form does not. So they rock the controversy.

My wife pointed out that in her community, mostly what she sees of Facebook and censorship is people getting their breastfeeding videos and images taken down. I pointed out that their content needs to be policed by $7/hr monkeys, which means they have a flowchart including boob/not boob. "Boob but breastfeeding" opens up a whole can of worms in the land of MILFporn and the like, and "boob/not boob" is an easier decision tree to write. Because if it doesn't scale, it doesn't matter.

That's what it comes down to: what's the profit margin on an eyeball? Up until now, this has been a discussion between Facebook and anybody buying eyeballs. At the moment, the eyeball-buyers are fuckin' pissed. Now, the people whose eyeballs are being sold are fuckin' pissed too. Fundamentally, Facebook's profit margins are gonna get a kick in the nuts. As they should. Because they're doing what they're doing for money, and when bad behavior is incentivized, bad behavior prevails.

Digg died in a week. Everybody in tech remembers. Snapchat is dying in months. Everyone is watching them do it. The conversation is not whether or not Facebook will change, it's whether or not Facebook is agile enough to change. They're losing $150/mo from this advertiser, I tell you what. We'll see what they're willing to do to get us back.
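For what it's worth, the flowchart point is easy to make concrete. The version with the breastfeeding exception needs a judgment call from every reviewer; the one-branch version doesn't. A hypothetical sketch, obviously nothing like the real moderation tooling:

```python
# Hypothetical sketch of why the simpler moderation rule ships.
# Purely illustrative -- not any platform's actual moderation logic.

def moderate_simple(has_nudity: bool) -> str:
    # One branch. Trivially consistent and cheap, and it wrongly removes
    # breastfeeding content -- exactly the complaint above.
    return "remove" if has_nudity else "keep"

def moderate_contextual(has_nudity: bool, is_breastfeeding: bool) -> str:
    # Now every reviewer must reliably judge "is_breastfeeding", and the
    # exception invites adversarial edge cases. If it doesn't scale,
    # it doesn't matter.
    if has_nudity and not is_breastfeeding:
        return "remove"
    return "keep"

print(moderate_simple(True))            # "remove" -- even breastfeeding content
print(moderate_contextual(True, True))  # "keep" -- but requires a judgment call
```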
Perhaps I was too willing to look at things as set in stone when I put forth my argument. The way I see it (which echoes what someone else — maybe you? — said in a different thread), if they're only thinking about changing things now, after such a massive and extensively-publicized cock-up, things are already too bad. I don't think there's any coming back from that. Even if Facebook is revived, they aren't going to be the same thing they used to be... right?

Then I look at Reddit. I'd read the comment section on every one of the big, publicized Reddit derbies (remember "chairman Pao" and the Ellen Pao-Hitler memes?), where people swore off the platform en masse. 2+ years later, and Reddit's still alive and kicking. Voat's been getting enough sunlight that I occasionally see GIFs watermarked with the... whatever Voat's subreddit equivalents are, but it's not nearly as big (and, as Wikipedia says, full of alt-right members et al.).

So, the way I (so narrowly) see it: what does Reddit have that led to it staying afloat, and does Facebook have the same quality to it? (I remember reading about how Facebook is very important in Southeast Asia (the Philippines?) because for many people it's the only way to keep in touch with their families back home when they leave for other countries to look for work. That seems like a decent factor for it to stay afloat.)
Someone somewhere (mighta been Twitter) said that Facebook is a monopoly. It's not, though. It just happens to be the only social network anybody bothers with. Once upon a time Digg was the only aggregator anybody bothered with. Then they decided to monetize and they were dead within two weeks. Everybody bolted for Reddit and never turned back. Of course, traffic was higher-quality back then and people had better taste and the only people bothering with aggregators were computer nerds, but the point remains: Reddit makes no money but is worth $1.8 billion. Put that in your pipe and smoke it. As soon as they have to make money, they'll have to act like they respect their customers (advertisers), and advertisers will go out of their way to avoid pissing off the people buying their soap. It's my opinion that the horrible stuff Reddit gets away with, they get away with because nobody holds them accountable. As soon as they have to sing for their supper like every other company in the history of mankind, they'll sing a different tune.
I heard Omarosa say something absolutely chilling once: "I'm not good with people and their feelings because I don't really have them so whenever things are emotional I kinda have to guess so tell me if this offends you or not." I've known narcissists. I've known sociopaths. It was the only time I've ever encountered a narcissistic sociopath. But once you see one, you can't stop seeing them. "I'm going to say the words you expect me to say. But I'm going to say them in a way that makes it clear I did nothing wrong. I also think you should know that you're here because you want to be here, because of the choices you made, that were part of the arrangement you made with me. You also have no reason to be upset, but I understand that you're upset anyway, but I want you to notice that I'm making that distinction. Also, we're going to talk about this narrow interpretation of the problem as it relates to the terms of service everyone signs. There will be no discussion of the broader issue. There is no broader issue. Tell me if this offends you or not because if it does I'll try restating the facts a different way so you understand them better."
Oh, man. She seemed cool before I heard that.