- There's no guarantee that this will be a major issue if and when self-driving cars become commonplace. Petit's technique only works so long as LIDAR units' pulses aren't encrypted or otherwise obscured. While that's true of many commercial systems at the moment, it's possible that production-ready vehicles will lock things down. Still, this is a not-so-friendly reminder that car makers have a lot of work ahead of them if they're going to secure their robotic rides.
From the original source:

Petit argues that it is never too early to start thinking about security. “There are ways to solve it,” he says. “A strong system that does misbehavior detection could cross-check with other data and filter out those that aren’t plausible. But I don’t think carmakers have done it yet. This might be a good wake-up call for them.”

What made him think automakers / tech companies aren't crosschecking already? Besides, the discussions around the limits of the technology that I think are much more interesting are:

a) how will people respond to autonomous vehicles in traffic?

b) given that "irresponsibly" is a good answer to a), how can the car respond in a safe way?

c) what are the technical limits of the sensors, and how can those be accounted for?

My answers are, respectively, "eventually well", "over-cautiousness" and "redundancies". Petit touches only slightly on c), but that's about it. It's no wake-up call, not by a long shot.
My educated guess is that everyone not in an autocar is going to interact aggressively with autocars.

- They won't play chicken with you. Any dick move you wanna pull in traffic, the autocar is gonna roll over and take it.

- You don't have to look them in the eye. That's not a fellow driver you cut off, it's a trillion-dollar corporation.

- They're quicker than you and can cope with your bullshit better than your average distracted driver. Shit that would be a guaranteed fender bender is gonna be an autocar slamming on its brakes.

- People with money will be driving them for a decade longer than people without money and there will be class issues.

Any company launching autocars into a hybrid road system is going to assume all other drivers are not just negligent, they're likely hostile. When you start from a standpoint that every unknown vehicle is going to fuck you over if given half a chance, you end up with defensive programming.
I'll bet there are interesting hacks and attacks you can perform against an autonomous vehicle. "I spoofed the LIDAR" is... mmmmmmmaybe a small part of that?

I mean, Google isn't really much of an "autonomous" company. They're a "let's collect stone craptons of data and accomplish everything in the cloud" company. So let's say you stand on a street corner with your arduino laser and slow a car down. What do you think are the odds that Google's control software shoots an instantaneous email saying "hey - my LIDAR data is all fucked up at this street corner, y'all wanna check this out?" and then every other autonomous car anywhere near says

- "wasn't fucked up a minute ago"

- "wasn't fucked up thirty seconds ago"

- "wasn't fucked up ten seconds ago"

- "I'm in the next lane over and I don't see shit"

- "I'm coming from the other direction and I don't either"

And all of a sudden the car being spoofed goes "there's someone spoofing my LIDAR, I'd best report that to Google, the local police department and the department of roads" faster than you can say "SEO detection."

The funny thing is, if you want to get an autonomous car to slow down abruptly, run in front of it. You need not worry one iota about its reaction time. Hijacking an autonomous vehicle would be as simple as having a bunch of dudes swarm it so it couldn't get anywhere and then bust the window and grab the occupant. No LIDAR necessary - do it Somalia-style.

The funnier thing is, if you did that, Google would have exquisite surveillance coverage of the whole affair, immediate and realtime, as well as 3D telemetry on the entire attack. By reviewing the tracks of cars coming through, they'd have 3D telemetry of the setup. And by tagging everything that happened after, they'd be able to follow your ass anywhere you went. All of a sudden "carjacking" becomes an extraordinarily difficult crime to commit... all through the leveraged power of creepy big brotherdom.

That's the story no one wants to tell - it's not "holy shit, you can spoof LIDAR with a laser pointer and an arduino," it's "self-driving cars will operate through heavily leveraged totalitarian surveillance, and if you think that won't change society, you aren't paying attention."
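For what it's worth, here's a minimal Python sketch of the fleet-level cross-check being imagined here. Everything in it - the message fields, the 60-second window, the `looks_like_spoofing` name and threshold - is an assumption for illustration, not anything Google is known to run:

```python
# Hypothetical sketch: cross-check one car's "phantom obstacle" report against
# recent reports from other vehicles that passed the same spot.
from dataclasses import dataclass
from typing import List

@dataclass
class AnomalyReport:
    vehicle_id: str
    location: tuple            # (lat, lon) of the suspect intersection
    seconds_ago: float         # how long ago this vehicle passed the spot
    saw_phantom_obstacle: bool

def looks_like_spoofing(victim: AnomalyReport,
                        nearby: List[AnomalyReport],
                        window_s: float = 60.0) -> bool:
    """If the victim sees a phantom obstacle but every other vehicle that passed
    the same spot within the last minute saw nothing, spoofing is a likelier
    explanation than a real object."""
    recent = [r for r in nearby if r.seconds_ago <= window_s]
    if not recent:
        return False  # nobody to cross-check against; stay cautious
    return victim.saw_phantom_obstacle and not any(r.saw_phantom_obstacle for r in recent)

# The "wasn't fucked up a minute ago / thirty seconds ago / next lane over" chorus:
victim = AnomalyReport("car_0", (37.42, -122.08), 0.0, True)
others = [AnomalyReport(f"car_{i}", (37.42, -122.08), t, False)
          for i, t in enumerate([60, 30, 10, 2, 1], start=1)]
print(looks_like_spoofing(victim, others))  # True -> flag the intersection
```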
Nicely put. One problem with the realtime checkup is that that level of communication and processing of data likely can't happen in the way you described anytime soon. The sensors they used a while ago spit out 750MB per second. My guess is that they only upload critical errors and path diversions live. Connecting to all cars around, filtering to those that passed this particular road segment, querying them for past data, retrieving their info and running a Bayesian-like algorithm can take quite a while. Longer than it takes to brake, that's for sure.

By the way, did I ever show you what your future surveillance state might look like? I found these cool images recently. They take the LIDAR data and color every dot according to the camera footage. The first image is from Oxford University, the second and third are from HERE (now bought by Audi / Mercedes / BMW):
Those are fantastic images. I'd love to see more.

I think we can both agree that if Google had a choice between "my version" and "your version" they'd choose "my version." I think we can also both agree that Google has incentive and resources to make "my version" a reality on a timeline complementary to an autocar rollout - it's not like Google's business model favors low bandwidth.

Finally, 750MBps isn't necessary for the discussion:

- (victim) critical LIDAR error detected - error code - error values (100k or less)

- (adjacent vehicles) LIDAR query - state - coarse data (100k or less) x however many vehicles are adjacent

- cross-check 100k of data with a half-dozen 100k datasets - not a lot of server time

I reckon my read isn't too far outside the parameters of "doable", particularly when we're discussing 3G bandwidth and a reasonable response time in seconds. I mean, the car will slow down. The smart move is to assume the operational parameters of the autodrive are compromised, that forward motion should be curtailed, and that the routine should go into failover. But the loop from failover through error correction and backup analysis is a tenths-of-seconds to seconds-scale problem, and in the meantime Google has fired up a digital flare saying "something is fucked up about this intersection." So whatever reason you had to spoof the LIDAR, you'd best act quick... and you'd best act on the assumption you're on Candid Camera.
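Rough arithmetic on that exchange, as a sanity check. The payload sizes come from the figures above; the 3G throughput is an assumed round number for illustration only:

```python
# Back-of-envelope sizing of the cross-check exchange sketched above.
victim_payload_bytes = 100 * 1024      # error code + error values (per the estimate above)
peer_payload_bytes = 100 * 1024        # state + coarse data per adjacent vehicle
peers = 6                              # "a half-dozen" adjacent vehicles
total_bytes = victim_payload_bytes + peers * peer_payload_bytes

throughput_3g_bps = 2_000_000          # ~2 Mbit/s usable 3G link (assumed)
transfer_s = total_bytes * 8 / throughput_3g_bps

print(f"~{total_bytes / 1024:.0f} KB total, ~{transfer_s:.1f} s over 3G "
      "before any server-side cross-check")
# -> roughly 700 KB and on the order of seconds: consistent with "the car slows
#    down and goes into failover locally while the cloud sorts it out".
```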
I had a longer reply but lost it to an accidental F5 press. The gist of it was that I agree, but that I was thinking more about the decision whether to brake or not - which has to happen almost instantly.

What I also now recall is that this is a problem they already tackled in the 2005 DARPA Grand Challenge. They had a problem with ghost rocks / ridges appearing in their LIDAR data because of the vibration of the sensor: a small vibration plus a spinning sensor means you get vertical displacement that isn't there, and it gets worse the farther away the return is. So what they used was a temporal check - was this object here, in this shape, a millisecond ago? A second ago? If not, the probability that it is an actual thing now is deemed very low. The same logic applies to a spoofed car appearing out of nowhere: it might dazzle the vehicle for a second, but not much longer.
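A minimal Python sketch of that kind of temporal check. The scan representation, the one-metre match radius and the three-scan persistence threshold are assumptions for illustration, not details from the DARPA teams:

```python
# Hypothetical temporal-consistency filter: an obstacle is only trusted if
# something near the same spot showed up in enough of the recent scans.
from collections import deque

class TemporalFilter:
    def __init__(self, history_scans: int = 10, min_hits: int = 3, radius_m: float = 1.0):
        self.history = deque(maxlen=history_scans)  # recent scans, each a list of (x, y) detections
        self.min_hits = min_hits                    # how many past scans must corroborate
        self.radius_m = radius_m                    # how close a past detection must be to count

    def is_persistent(self, obstacle_xy, current_scan) -> bool:
        """A one-scan apparition (vibration ghost, spoofed pulse) never builds
        enough history to be treated as a real obstacle."""
        ox, oy = obstacle_xy
        hits = sum(
            any((ox - x) ** 2 + (oy - y) ** 2 <= self.radius_m ** 2 for x, y in scan)
            for scan in self.history
        )
        self.history.append(current_scan)
        return hits >= self.min_hits

filt = TemporalFilter()
for _ in range(10):
    filt.is_persistent((5.0, 0.0), [(5.0, 0.0)])              # a real object seen every scan
print(filt.is_persistent((5.0, 0.0), [(5.0, 0.0)]))           # True: corroborated by history
print(filt.is_persistent((20.0, 3.0), [(20.0, 3.0), (5.0, 0.0)]))  # False: first appearance
```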
You know what's funny? When I first heard about automated cars, I always figured it'd be easier to disrupt them physically than it would be digitally. Of course, I never figured out how to do it, but you came up with a good example right there:

"The funny thing is, if you want to get an autonomous car to slow down abruptly, run in front of it. You need not worry one iota about its reaction time. Hijacking an autonomous vehicle would be as simple as having a bunch of dudes swarm it so it couldn't get anywhere and then bust the window and grab the occupant. No LIDAR necessary - do it Somalia-style."
Autonomous vehicles leverage routine. The examples you list above are all about improvisation. There's the additional issue of the legality of any enforcement or interdiction performed by a non-human agent; the simple example of red light cameras should point out how sticky that's gonna be. Cop cars, ambulances, fire engines and tanks are gonna have drivers for a long, long time to come, I reckon.
To my understanding, LIDAR on cars is still really young, and for the most part the units are built solely for the task of autonomously driving a vehicle. I wouldn't be surprised if Google already knew about this, and it just wasn't a particular concern at this stage of development.
I don't know much about LIDAR, but apparently it's used in a lot of applications. I would agree with both you and the author of the quick article, though, that automakers are aware of the drawbacks of their systems and are working on them. After the article the other month about the hacked Jeep, they're probably starting to realize that they need to step up the security of their systems.