In addition, it may be possible—and tempting—to exploit for strategic-political purposes the fruits of research on the brain and on human behavior. Gordon J. F. MacDonald, a geophysicist specializing in problems of warfare, has written that accurately timed, artificially excited electronic strokes "could lead to a pattern of oscillations that produce relatively high power levels over certain regions of the earth. ... In this way, one could develop a system that would seriously impair the brain performance of very large populations in selected regions over an extended period. ... No matter how deeply disturbing the thought of using the environment to manipulate behavior for national advantages to some, the technology permitting such use will very probably develop within the next few decades."

Sadly, privacy is quickly going the way of the dodo. This is painfully obvious when you realise that very few people are standing up for the right to privacy, while many powerful corporations and government agencies are pushing with all their might to expand their already considerable powers. These entities all operate in their own self-interest: the people working at Google, for example, are not pleased with the NSA messing with their networks, but none of them care about your privacy.

It also doesn't help that technological advancement seems to have an anti-privacy bias. Video resolution keeps improving, computers process information faster and faster, data storage is cheaper than ever; new and improved algorithms mean higher success rates for facial recognition and gait analysis; entirely new technologies might make it possible to see you without actually having to see you.

The last frontier, of course, is the brain. Scientists can already sort of guess what you're thinking or dreaming about by scanning your brain waves, but the technique is still primitive and, as far as I know, it hasn't been shown to work outside the laboratory. Governments are very interested in figuring out how the brain works, obviously for humanitarian reasons!

The above quote comes from the book "Between Two Ages", written in 1970 by Zbigniew Brzezinski, an American political scientist, geostrategist, and statesman who served as a counselor to Lyndon B. Johnson from 1966 to 1968 and as United States National Security Advisor to President Jimmy Carter from 1977 to 1981. He also co-founded the Trilateral Commission with David Rockefeller. What a guy!

Encryption won't save us this time.
A month or so ago, I attended a lecture by Dariu Gavrila, a researcher for Daimler who works on getting Mercedeses to recognize you. In 2003, their technology could recognize only 40% of pedestrians while driving at just 30 km/h, and it produced over 600 false positives an hour. By 2010, recognition was up to 90% at 60 km/h, with false positives down to zero or one an hour. They can now not only recognize you as a person, but also build a rudimentary 3D model of you, including textures (!). This is what that looked like in 2008, two years before the Kinect was launched.

If there's anything people - myself included - have a hard time internalizing, it's the unforeseen, long-term consequences of technological advancements. I've started reading Townsend's Smart Cities, and he makes the argument that for most technological advancements, those unforeseen consequences overshadow all of the original benefits and goals of the technology. So I understand and share your fears.

I don't think technology has an anti-privacy bias, though. Privacy, I think, is mostly a design choice. It is not that hard to design a system that anonymizes data. If you have the technology to recognize and build 3D models of people out of two cameras and a bunch of CPUs, you can also make it so that those faces are blurred and not linked to actual names (there's a rough sketch of what I mean at the end of this comment).

In discussions of technology and privacy, we need to remember that there are always people behind placing, using, and maintaining these systems. Cameras are placed by people with a goal. Sensors are installed to provide data for people to analyze. Those people have a bias towards solutions that work, and what seems to work really well is combining lots of data about people to get to know them better. You can get much more information out of big data the more relations there are, both within a dataset and between datasets, and real names are, sadly, a great way to do just that. But I wouldn't blame the dataset for that, or the technology; rather the people implementing that technology.

Another argument Townsend makes is that technology is ill-suited to fixing the more complex problems. It is biased towards quick, easy, cookie-cutter solutions to as many problems as possible. From what I understand about neurology, the brain is immensely complex. I don't think we know nearly enough to start manipulating it in a reasonable way, and I don't see that happening anytime soon.
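To make the "privacy as a design choice" point concrete, here's a minimal sketch of what anonymization at the point of capture could look like: blur faces before a frame is ever stored, and replace real names with keyed pseudonyms so records can't be trivially joined across datasets. This is just an illustration using OpenCV's bundled Haar cascade and Python's standard hmac module; the detector choice, blur kernel, and DATASET_KEY are my own assumptions, not anything from Gavrila's actual system.

import hmac
import hashlib
import cv2  # opencv-python

# OpenCV ships a pretrained Haar cascade for frontal faces.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Blur every detected face before the frame is stored or shared."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)  # kernel size must be odd
    return frame

# A per-dataset secret key (hypothetical). Use a different key per dataset
# and the pseudonyms become useless for joining datasets together.
DATASET_KEY = b"change-me-per-dataset"

def pseudonym(real_name: str) -> str:
    """Stable identifier within one dataset, unlinkable across datasets."""
    return hmac.new(DATASET_KEY, real_name.encode(),
                    hashlib.sha256).hexdigest()[:16]

The specific detector doesn't matter; the point is that the blurring and the unlinkable identifiers sit in the pipeline by default, which is exactly the kind of design decision I mean.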
Isn't the most obvious solution to "brain reading" to put a copper mesh inside your hat? I'm not sure it's that simple; just throwing it out there.

Also, would you like an invitation to Keybase? If so, I'll need an email address (you could use a disposable one). You can send it to me using the email acstubbins@keybase.io.