kleinbl00  ·  2728 days ago  ·  post: George Hotz cancels Comma One because of NHTSA letter

I suspect you don't have this strong stance because you have less experience with the regulatory apparatus of the US federal government, less experience with mechanical design, and possibly less experience with Silicon Valley "bad boys."

Because he did say that. Check it:

    It’s hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases. On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to.

That's an admission that neural networks are unknowable, but an assertion that they are better because, you know, neural networks.

He also said this:

    I will presume the regulators will say, “We only want to scare away dangerous innovation” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It’s all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.

That's another assertion that the goal of "safety" must not be put ahead of the goal of "innovation," and further that "innovation" is not possible without compromising "safety" in some way.

Which is fucking bullshit. Which isn't even worth respecting. Which is making me lose respect for you for not recognizing the fallacy.

How much was safety compromised with the introduction of seat belts?

How much was safety compromised with the introduction of crumple zones?

How much was safety compromised with the introduction of collapsible steering columns?

How much was safety compromised with the introduction of antilock brakes?

Yer li'l buddy Brad literally argued that "safety" is too important for testing and verification. His reason?

    It is challenging. It’s hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases.

His excuse?

    On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to.

Citation needed, bitch.

And this is why computer scientists shouldn't be allowed to run rampant on all this - they keep saying stupid shit like "trolley problem" without understanding down to their very bones that it's abject bullshit.

So yeah. Your guy is an asshole, he did say that, and you're wrong. Sorry.





thundara  ·  2727 days ago

    That's an admission that neural networks are unknowable, but an assertion that they are better because, you know, neural networks.

I'd imagined that statement was based on some collection of training and test data run through various simulation algorithms. It's hard to imagine where Comma One would have gotten such a data set, but the problem he's commenting on is universal to machine learning. What algorithm do you use to identify pedestrians from a point cloud? How do you estimate the velocity of neighboring vehicles? Is that debris on the ground roadkill or nails?

At the end of the day you're mapping noisy inputs to an abstracted output. You can try fitting a bipedal model to a person, but that may err on statues near a roadway. Or you can plug the whole thing into a shiny set of general algorithms that integrate over space and time, let them work their magic, and pick the one with the lowest false-positive or false-negative rate.
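A toy sketch of what "pick by error rate" means in practice — every label, prediction, and model name below is invented for illustration, not anything Comma actually does:

```python
# Hypothetical comparison of two pedestrian detectors on a toy validation
# set: an interpretable "bipedal model" versus an opaque neural net. You
# pick by measured error rates, not by how well you understand the model.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / max(negatives, 1), fn / max(positives, 1)

# 1 = pedestrian present, 0 = clear road (made-up validation frames)
y_true        = [1, 1, 1, 0, 0, 0, 1, 0]
bipedal_model = [1, 0, 1, 0, 0, 1, 1, 0]  # interpretable, but misses a pedestrian
neural_net    = [1, 1, 1, 0, 1, 0, 1, 0]  # opaque, but only flags a statue

for name, preds in [("bipedal", bipedal_model), ("neural_net", neural_net)]:
    fpr, fnr = error_rates(y_true, preds)
    print(name, round(fpr, 2), round(fnr, 2))
```

On this toy data the opaque model wins on missed pedestrians (the error that kills people) while tying on false alarms — which is exactly the uncomfortable position the quote describes.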

I'm not saying it's a right or wrong approach, and obviously any tests should include an appropriately large data set, along with added perturbations for all manner of lighting, noise, angles, added vehicles, etc. But it's a common question: do you trust the most accurate model or the one you understand best? Ideally they're the same, but that can sometimes only come after years of analysis. I think the hip young computer scientist answer would be to make all data and analysis pipelines open, but something tells me Comma wants the quick and easy solution.

kleinbl00  ·  2727 days ago

It would be a herculean labor to convince me that any "startup" has access to or has put the effort into training a car to drive in such a way that won't lead to tragedy.

I include Tesla in that grouping.

I do not believe that on-the-job training for any neural network won't kill way too many people unless you offer people the sensors for free and compare real-time human decisions with projected neural network decisions until you have near-total overlap over millions and millions of miles driven. Tesla could be doing that but I don't think they would have let the monster loose as early as they did if they'd taken this approach.
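That "compare human decisions with projected network decisions" loop is simple to state. A toy sketch, where the action names, the log, and the readiness threshold are all invented stand-ins:

```python
# Hypothetical "shadow mode" validation: the network proposes an action on
# every frame, the human's actual action is recorded, and nothing ships
# until agreement is near-total over millions of miles of logs.

def shadow_agreement(human_actions, model_actions):
    """Fraction of frames where the model's proposed action matches the human's."""
    assert len(human_actions) == len(model_actions)
    matches = sum(1 for h, m in zip(human_actions, model_actions) if h == m)
    return matches / len(human_actions)

READY_THRESHOLD = 0.999  # arbitrary stand-in for "near-total overlap"

# Made-up five-frame log
human = ["steer_left", "brake", "steer_left", "cruise", "brake"]
model = ["steer_left", "brake", "steer_right", "cruise", "brake"]

agreement = shadow_agreement(human, model)
print(agreement)                      # 0.8 on this toy log
print(agreement >= READY_THRESHOLD)   # False: keep shadowing, don't ship
```

The point of the exercise: the network never touches the controls during validation, so disagreement costs you nothing but data.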

thundara  ·  2727 days ago

    It would be a herculean labor to convince me that any "startup" has access to or has put the effort into training a car to drive in such a way that won't lead to tragedy.

    I include Tesla in that grouping.

Ditto, though it seems like Google has at least taken that approach. I'm a little surprised that Tesla skipped the line on regulation. Maybe it's something to do with the legalese of how the feature is offered?

kleinbl00  ·  2727 days ago

Google has no interest in selling cars. Google will license their dataset and path-following technology (because that's what they're building) to anybody who wants to pay the fees, thereby allowing anyone and everyone to hop onto a crowdsourced traffic system rather than building "autonomous vehicles."

Tesla mostly wants to sell batteries and battery-powered cars. They do that by being innovative and a leader in the industry. They're largely appealing to rich eccentrics (for now) but Tesla's exit is probably to another car company. Elon Musk never wanted to be Henry Ford; his spiritual hero is DD Harriman. I think he's said as much although I can't find a source at the moment.

Creativity  ·  2728 days ago

I definitely have less experience, which is why I find what you have to say on the subject interesting.

    That's another assertion that the goal of "safety" must not be put ahead of the goal of "innovation" and that further, "innovation" is not possible without compromising "safety" in some way.

Yep. I guess sometimes I should read with my finger.