veen  ·  3442 days ago  ·  link  ·    ·  parent  ·  post: Facial recognition technology is everywhere. It may not be legal.

A month or so ago, I attended a lecture by Dariu Gavrila, a researcher for Daimler who works on getting Mercedeses to recognize pedestrians. In 2003, their technology could recognize only 40% of pedestrians while driving at just 30 km/h, and it produced over 600 false positives an hour. By 2010, that was up to 90% recognition at 60 km/h, with zero or one false positive an hour. They can now not only recognize you as a person, but also build a rudimentary 3D model of you, including textures (!). This is what that looked like in 2008, two years before the Kinect launched:

If there's anything people - myself included - have a hard time internalizing, it's the unforeseen, long-term consequences of technological advancements. I've started reading Townsend's Smart Cities, and he argues that for most technological advancements, those unforeseen consequences overshadow all of the technology's original benefits and goals.

So I understand and share your fears. But I don't think technology has an anti-privacy bias. Privacy, I think, is mostly a design choice. It is not that hard to design a system that anonymizes data: if you have the technology to recognize people and build 3D models of them out of two cameras and a bunch of CPUs, you can also make it so that those faces are blurred and not linked to actual names.
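To make that design choice concrete, here's a minimal sketch (my own toy example, not any real system's code) of pseudonymization: a recognition pipeline can replace a real identity with a keyed hash at the point of detection, so the stored records are still useful for counting and tracking patterns but can't be tied back to a name without the operator's secret key.

```python
import hashlib
import hmac
import os

# Hypothetical: the operator holds this key; it is never stored with the data.
SECRET_KEY = os.urandom(32)

def pseudonymize(person_id: str) -> str:
    """Replace a real identifier with a keyed hash (HMAC-SHA256)."""
    digest = hmac.new(SECRET_KEY, person_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# A raw detection as the recognition system might produce it...
detection = {"person": "Jane Doe", "location": "Main St", "time": "09:14"}

# ...and what actually gets stored: same structure, no real name.
safe_record = {**detection, "person": pseudonymize(detection["person"])}
```

The same person still maps to the same pseudonym within a deployment, so "how often does this person pass by" stays answerable, while "who is this person" does not.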

In discussions of technology and privacy, we need to remember that there are always people behind placing, using and maintaining these systems. Cameras are placed by people with a goal. Sensors are installed to provide data for people to analyze. Those people have a bias towards solutions that work, and what seems to work really well is combining lots of data about people to get to know them better. The more relations there are, both within a dataset and between datasets, the more information you can get out of big data, and real names are, sadly, a great way to create those relations. But I wouldn't blame the dataset for that, or the technology, but rather the people implementing that technology.
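That linking step is trivially easy once two datasets share a real name, which is exactly why names are so valuable to implementers. A toy illustration (hypothetical data and field names, obviously):

```python
# Two unrelated datasets that happen to share one field: a real name.
camera_log = [{"name": "Jane Doe", "seen_at": "Main St, 09:14"}]
purchases = [{"name": "Jane Doe", "bought": "coffee"}]

def link(records_a, records_b, key="name"):
    """Merge two datasets on a shared identifier, one profile per match."""
    index = {r[key]: r for r in records_b}
    return [{**a, **index[a[key]]} for a in records_a if a[key] in index]

# One join on the name and you have a richer profile than either
# dataset could provide on its own.
profiles = link(camera_log, purchases)
```

Swap the name for a per-dataset pseudonym and this join stops working, which is the whole point: whether that key exists is a choice made by people, not a property of the technology.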

Another argument Townsend makes is that technology is ill-suited to fixing the more complex problems: it favors quick, easy, cookie-cutter solutions applied to as many problems as possible. From what I understand about neurology, the brain is immensely complex. I don't think we know nearly enough to start manipulating it in a reasonable way, and I don't see that happening anytime soon.