Hah. NHTSA is basically saying it's not safe enough and threatening to fine him if he puts it on the road.
The more of these AV startups I see, the more I judge them by whether their methods are shortcuts or are actually aiming to understand the difficult problem of driving. Neural networks have 'shortcut' written allll over them. Did you see this bullshit a while ago?
They're really not fucking around when it comes to these self-driving cars. The luddite in me really appreciates that. I posted an article a while back about Delphi and Mobileye working together to create equipment for car companies that haven't gotten a head start in self-driving tech. It's comparing apples, oranges, and watermelons, but it'd be interesting to compare and contrast what Delphi and Mobileye are doing that's different here. Their deep pockets alone would hint that they're building more advanced, reliable technology.
As well they should. KE = 1/2 mv^2; m = two tons; v = 75 mph. KE ≈ 1.0 megajoule, or, according to Wolfram Alpha, about half a pound of TNT. Wu Tang Clan ain't nuthin' to fuk wit. There are two approaches to autonomous vehicles: Google's and everyone else's. Google intends to map the world so precisely that they know about new potholes in realtime. Everyone else intends to hope they can come up with a robodriver capable of dealing with every other asshole on the road. Everyone else may beat Google out of the gate, but they will only serve to demonstrate that Google's approach is better. And by then, Google will have the maps.
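(If you want to check that arithmetic, here's the back-of-the-envelope version in Python; the TNT conversion factor is approximate:)

    # Back-of-the-envelope kinetic energy of a two-ton car at 75 mph.
    mass_kg = 2 * 907.185        # two US tons in kilograms (~1814 kg)
    speed_ms = 75 * 0.44704      # 75 mph in meters per second (~33.5 m/s)

    ke_joules = 0.5 * mass_kg * speed_ms ** 2
    TNT_J_PER_LB = 1.9e6         # ~1.9 MJ per pound of TNT (approximate)

    print(f"KE = {ke_joules / 1e6:.2f} MJ")                # ~1.02 MJ
    print(f"   = {ke_joules / TNT_J_PER_LB:.2f} lb TNT")   # ~0.5 lb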
Today was a home day. Had breakfast. Did the dishes. Went to my kid's halloween party. Took her out on the scooter. You know, normal shit. Even went for a run. Running muscles ain't biking muscles. Tomorrow begins another long slog of flying ratbites and Mexicans resenting me for living in their neighborhood. Know what's fucked up? Buying coffee in Seattle because the grocery store there is more convenient, even incorporating the TSA, a thousand-mile flight and a Lyft ride into the equation. That's how I'm livin'.
I shared this blog with you, veen, and I find his input on recent news interesting, whether it's this particular case, http://ideas.4brad.com/comma-ai-cancels-comma-one-add-box-after-threats-nhtsa :

    Comma is not the only company trying to build a system with pure neural networks doing the actual steering decisions (known as "path planning"). NVIDIA's teams have been actively working on this, as have several others. They plan to make commentary to NHTSA about these elements of the regulations, which should not be forbidding this approach until we know it to be dangerous.

    It is challenging. It's hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases. On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to. If you have two systems:

    - Black box: a machine learning system that has one safety incident per 150,000 miles, but you have minimal understanding of how it works
    - A conventionally coded system which you fully understand, which has one safety incident per 100,000 miles

    Which is the one that is better to put on the road? It's not a no-brainer.

or about the Trolley Problems, http://ideas.4brad.com/yikes-even-barak-obama-wants-solve-robocar-trolley-problems-now-0 :

    People grossly underestimate how hard some of these problems will be to solve. Many of the situations I have seen proposed actually demand that cars develop entirely new capabilities that they don't need except to solve these problems. The cost of solving may be much higher than people estimate. In these cases, we are talking about serious cost, and delays to deployment if it is judged necessary to solve these problems. Since robocars are planned as a life-saving technology, each day of delay has serious consequences. Real people will be hurt because of these delays aimed at making a better decision in rare hypothetical situations.
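To put numbers on that two-system comparison, here's a toy calculation; the fleet mileage is invented, and only the two incident rates come from the quote:

    # Toy comparison of the quote's two hypothetical systems.
    FLEET_MILES = 1_000_000_000         # invented: one billion fleet miles/year

    black_box = FLEET_MILES / 150_000   # NN: one incident per 150,000 miles
    coded = FLEET_MILES / 100_000       # conventional: one per 100,000 miles

    print(f"black box:    {black_box:,.0f} incidents/year")   # ~6,667
    print(f"conventional: {coded:,.0f} incidents/year")       # 10,000
    # The opaque system produces a third fewer incidents, which is
    # the whole reason the quote calls it "not a no-brainer".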
    ...which should not be forbidding this approach until we know it to be dangerous.

No, no, NO. That's not how this works. I didn't sign up to drive on the road you intend to prove your "not dangerous" on. Go prove that shit to be SAFE and then apply to play in traffic, not the other way around.

    It is challenging. It's hard to do QA on neural networks.

I'm sorry, "it's hard" is not a valid excuse for foregoing proof, particularly when playing in the "tons'n'km/h" regime.

    Which is the one that is better to put on the road? It's not a no-brainer.

Sure it is. The coded machine can be examined by everyone and made better. The black box might get better, might get worse, might do completely random, unpredictable shit because it found a corner case to shit all over.

    In these cases, we are talking about serious cost, and delays to deployment if it is judged necessary to solve these problems.

Well, shit. Guess we'll have to do it by hand. JUST LIKE THE WAY ACURA INTENDED.

    Since robocars are planned as a life-saving technology, each day of delay has serious consequences.

Holy fuck did he just play the "think of the children" card? He just played the "think of the children" card. "Don't make me prove it's safe, children are dying everywhere all day long! Get those infernal drivers away from the wheel!" Your dude is an asshole.
    "Don't make me prove it's safe, children are dying everywhere all day long! Get those infernal drivers away from the wheel!"

I hear you, but I don't have a strong stance like you seem to have. He never said that. You have three scenarios:

1. A human driver
2. Robocars that outperform human drivers but don't take specific Trolley Problems into account
3. Robocars that take everything into account

From step 1 to step 2, you are saving x lives per day. From step 2 to step 3, you are saving y lives per day. Should you go from step 1 to step 3 and wait z more years, or should you go from step 1 to step 2, and then enhance the cars every year? I think it's a valid question, as step 2 proves the technology is safe (way safer than humans, but not foolproof). Waiting for your input.
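For what it's worth, the step-2-now versus step-3-later question is just arithmetic once you pick values for x, y, and z. A sketch with invented numbers (none of them come from the thread):

    # Ship step 2 now vs. wait z years for step 3 -- invented numbers.
    x = 50        # lives/day saved by step-2 robocars over human drivers
    y = 2         # extra lives/day saved by step 3 over step 2
    z = 5         # years it takes to get step 3 right

    wait_days = z * 365
    cost_of_waiting = x * wait_days        # lives lost while we hold out
    payback_days = cost_of_waiting / y     # time for step 3's edge to repay that

    print(f"waiting costs {cost_of_waiting:,} lives")                        # 91,250
    print(f"step 3 repays that after {payback_days / 365:.0f} more years")   # ~125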
I suspect you don't have this strong stance because you have less experience with the regulatory apparatus of the US federal government, less experience with mechanical design, and possibly less experience with Silicon Valley "bad boys." Because he did say that. Check it:

    It's hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases. On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to.

That's an admission that neural networks are unknowable, but an assertion that they are better because, you know, neural networks. He also said this:

    I will presume the regulators will say, "We only want to scare away dangerous innovation" but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It's all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.

That's another assertion that the goal of "safety" must not be put ahead of the goal of "innovation" and that further, "innovation" is not possible without compromising "safety" in some way. Which is fucking bullshit. Which isn't even worth respecting. Which is making me lose respect for you for not recognizing the fallacy. How much was safety compromised with the introduction of seat belts? How much was safety compromised with the introduction of crumple zones? How much was safety compromised with the introduction of collapsible steering columns? How much was safety compromised with the introduction of antilock brakes? Yer li'l buddy Brad literally argued that "safety" is too important for testing and verification. His reason? His excuse? That "we are not going to get to safety without taking risks along the way." Citation needed, bitch. And this is why computer scientists shouldn't be allowed to run rampant on all this - they keep saying stupid shit like "trolley problem" without understanding down to their very bones that it's abject bullshit. So yeah. Your guy is an asshole, he did say that, and you're wrong. Sorry.
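That "you can fix the errors they make, but not know whether your fix will work in other cases" claim is easy to demonstrate on a toy model. A minimal sketch, assuming nothing beyond numpy; the "network" is a bare logistic regression and all data is synthetic:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic two-feature, two-class data standing in for "driving situations".
    X = rng.normal(size=(400, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=400) > 0).astype(float)
    X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Train a bare logistic model (a stand-in for the network).
    w = np.zeros(2)
    for _ in range(500):
        w -= 0.5 * X_tr.T @ (sigmoid(X_tr @ w) - y_tr) / len(y_tr)

    before = sigmoid(X_te @ w) > 0.5

    # "Fix" one failing case by fine-tuning hard on it alone.
    bad = int(np.argmax(before != y_te))   # a test case the model gets wrong
    for _ in range(200):
        w -= 0.5 * X_te[bad] * (sigmoid(X_te[bad] @ w) - y_te[bad])

    after = sigmoid(X_te @ w) > 0.5
    broken = np.sum((before == y_te) & (after != y_te))
    print(f"case {bad} fixed: {after[bad] == y_te[bad]}")
    print(f"previously-correct cases broken by the fix: {broken}")

On a toy setup like this the targeted "fix" typically flips a handful of previously-correct cases, and nothing in the weight vector tells you which ones before you re-test.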
    It is challenging. It's hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases. On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to.
    That's an admission that neural networks are unknowable, but an assertion that they are better because, you know, neural networks.

I'd imagined that that statement was based on some collection of training and test data run through various simulation algorithms. It's hard to imagine where comma one would have gotten such a data set, but the problem he's commenting on is one universal to all of machine learning. What algorithm do you use to identify pedestrians from a point cloud? What is the estimated velocity of neighboring vehicles? Is that debris on the ground roadkill or nails? At the end of the day you're mapping some noisy inputs to an abstracted output, and you can try fitting a bipedal model to a person, but that may fail on statues near a roadway. Or you can plug the whole thing into a shiny set of general algorithms that integrate over space and time, let them work their magic, and pick the one with the lowest false-positive or false-negative rate. Not saying it's a right or wrong approach, and obviously any tests should include an appropriately large data set, along with added perturbations for all manner of lighting, noise, angles, added vehicles, etc. But it's a common question: do you trust the most accurate model or the one you understand the most? Ideally the former is the latter, but that can sometimes only come after years of analysis. I think the hip-young computer scientist answer to this would be to make all data and analysis pipelines open, but something tells me comma wants the quick and easy solution.
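The "perturbations for lighting, noise, angles" test, sketched on fake data: two toy detectors (a hand-written rule and a nearest-neighbor "black box", both invented here, not anything comma or Mobileye ships) scored by false positives and false negatives after perturbation:

    import numpy as np

    rng = np.random.default_rng(1)

    # Fake features: column 0 ~ "motion", column 1 ~ "silhouette quality".
    def scenes(n, pedestrian):
        base = np.array([1.0, 1.0]) if pedestrian else np.array([0.2, 1.0])
        return base + 0.15 * rng.normal(size=(n, 2))   # statues don't move

    X = np.vstack([scenes(200, True), scenes(200, False)])
    y = np.array([1] * 200 + [0] * 200)

    # Perturbations standing in for lighting, noise, and angle changes.
    X_pert = X + rng.normal(scale=0.3, size=X.shape)

    def rule(X):                            # interpretable: "pedestrians move"
        return (X[:, 0] > 0.6).astype(int)

    def nearest(X_query, X_ref, y_ref):     # black-box-ish 1-NN fit on clean data
        d = ((X_query[:, None, :] - X_ref[None, :, :]) ** 2).sum(-1)
        return y_ref[d.argmin(axis=1)]

    for name, pred in [("rule", rule(X_pert)), ("1-NN", nearest(X_pert, X, y))]:
        fp = np.sum((pred == 1) & (y == 0))   # statue flagged as pedestrian
        fn = np.sum((pred == 0) & (y == 1))   # pedestrian missed
        print(f"{name}: false positives={fp}, false negatives={fn}")

Which one you ship is exactly the quoted dilemma: the rule you can explain to a regulator, the 1-NN you can only benchmark.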
It would be a herculean labor to convince me that any "startup" has access to or has put the effort into training a car to drive in such a way that won't lead to tragedy. I include Tesla in that grouping. I do not believe that on-the-job training for any neural network won't kill way too many people unless you offer people the sensors for free and compare real-time human decisions with projected neural network decisions until you have near-total overlap over millions and millions of miles driven. Tesla could be doing that but I don't think they would have let the monster loose as early as they did if they'd taken this approach.
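That compare-human-to-model idea is usually called shadow-mode testing. A minimal sketch with simulated logs; the angles, threshold, and miss rate are all made up:

    import numpy as np

    rng = np.random.default_rng(2)

    # Shadow mode: the human drives; the model predicts but never acts.
    n = 100_000                                    # fake 100 ms steering samples
    human = np.cumsum(0.01 * rng.normal(size=n))   # logged wheel angle (deg)
    model = human + 0.05 * rng.normal(size=n)      # model's silent prediction

    miss = rng.random(n) < 1e-4                    # rare, large model errors
    model[miss] += 2.0 * rng.choice([-1.0, 1.0], size=miss.sum())

    TOL = 0.5                                      # invented agreement threshold (deg)
    agree = np.abs(model - human) < TOL
    print(f"agreement over the log: {agree.mean():.4%}")
    print(f"disagreements to audit: {np.count_nonzero(~agree)}")
    # You'd only let the model act once disagreements are vanishingly
    # rare across millions of real miles, and each one has been audited.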
    It would be a herculean labor to convince me that any "startup" has access to or has put the effort into training a car to drive in such a way that won't lead to tragedy.

    I include Tesla in that grouping.

Ditto, though it seems like Google has at least taken that approach. I'm a little surprised that Tesla skipped the line on the regulation. Maybe something to do with the legalese of how the feature is offered?
Google has no interest in selling cars. Google will license their dataset and path-following technology (because that's what they're building) to anybody who wants to pay the fees, thereby allowing anyone and everyone to hop onto a crowdsourced traffic system rather than building "autonomous vehicles." Tesla mostly wants to sell batteries and battery-powered cars. They do that by being innovative and a leader in the industry. They're largely appealing to rich eccentrics (for now) but Tesla's exit is probably to another car company. Elon Musk never wanted to be Henry Ford; his spiritual hero is DD Harriman. I think he's said as much although I can't find a source at the moment.
    That's another assertion that the goal of "safety" must not be put ahead of the goal of "innovation" and that further, "innovation" is not possible without compromising "safety" in some way.

I definitely have less experience, which is why I find what you have to say on the subject interesting. Yep. I guess sometimes I should read with my finger.