kleinbl00  ·  2735 days ago  ·  link  ·    ·  parent  ·  post: George Hotz cancels Comma One because of NHTSA letter

    which should not be forbidding this approach until we know it to be dangerous.

No, no, NO. That's not how this works. I didn't sign up to drive on the road you intend to prove your "not dangerous" on. Go prove that shit to be SAFE and then apply to play in traffic, not the other way around.

    It is challenging. It’s hard to do QA on neural networks.

I'm sorry, "it's hard" is not a valid excuse for forgoing proof, particularly when playing in the "tons'n'km/h" regime.

    Which is the one that is better to put on the road? It’s not a no-brainer.

Sure it is. The coded machine can be examined by everyone and made better. The black box might get better, might get worse, might do completely random, unpredictable shit because it found a corner case to shit all over.

    In these cases, we are talking about serious cost, and delays to deployment if it is judged necessary to solve these problems.

Well, shit. Guess we'll have to do it by hand. JUST LIKE THE WAY ACURA INTENDED.

    Since robocars are planned as a life-saving technology, each day of delay has serious consequences.

Holy fuck did he just play the "think of the children" card? He just played the "think of the children" card. "Don't make me prove it's safe, children are dying everywhere all day long! Get those infernal drivers away from the wheel!"

Your dude is an asshole.

Creativity  ·  2734 days ago  ·  link  ·  

I hear you, but I don't have a strong stance like you seem to have.

    "Don't make me prove it's safe, children are dying everywhere all day long! Get those infernal drivers away from the wheel!"

He never said that.

You have three scenarios :

1. Human driver

2. Robocars that outperform human drivers but don't take specific trolley problems into account

3. Robocars that take everything into account

From step 1 to step 2, you are saving x lives per day. From step 2 to 3, you are saving y lives per day.

Should you wait z more years and go straight from step 1 to step 3, or should you go from step 1 to step 2 now, and then enhance the cars every year?

I think it's a valid question, as step 2 proves the technology is safe (way safer than humans, but not foolproof). Waiting for your input.

kleinbl00  ·  2734 days ago  ·  link  ·  

I suspect you don't have this strong stance because you have less experience with the regulatory apparatus of the US federal government, less experience with mechanical design, and possibly less experience with Silicon Valley "bad boys."

Because he did say that. Check it:

    It’s hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases. On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to.

That's an admission that neural networks are unknowable, but an assertion that they are better because, you know, neural networks.

He also said this:

    I will presume the regulators will say, “We only want to scare away dangerous innovation” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It’s all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.

That's another assertion that the goal of "safety" must not be put ahead of the goal of "innovation" and that further, "innovation" is not possible without compromising "safety" in some way.

Which is fucking bullshit. Which isn't even worth respecting. Which is making me lose respect for you for not recognizing the fallacy.

How much was safety compromised with the introduction of seat belts?

How much was safety compromised with the introduction of crumple zones?

How much was safety compromised with the introduction of collapsible steering columns?

How much was safety compromised with the introduction of antilock brakes?

Yer li'l buddy Brad literally argued that "safety" is too important for testing and verification. His reason?

    It is challenging. It’s hard to do QA on neural networks. You can examine any single state of them, but not really understand them. You can fix the errors they make, but not know how you fixed it or whether your fix is going to work in other cases.

His excuse?

    On their own that sounds too scary, but the problem is they are outperforming other algorithms at many of the problems they are being applied to.

Citation needed, bitch.

And this is why computer scientists shouldn't be allowed to run rampant on all this - they keep saying stupid shit like "trolley problem" without understanding down to their very bones that it's abject bullshit.

So yeah. Your guy is an asshole, he did say that, and you're wrong. Sorry.

thundara  ·  2733 days ago  ·  link  ·  

    That's an admission that neural networks are unknowable, but an assertion that they are better because, you know, neural networks.

I'd imagine that statement was based on some collection of training and test data run through various simulation algorithms. It's hard to imagine where Comma One would have gotten such a data set, but the problem he's commenting on is universal to all of machine learning. What algorithm do you use to identify pedestrians from a point cloud? What is the estimated velocity of neighboring vehicles? Is that debris on the ground roadkill or nails?

At the end of the day you're mapping noisy inputs to an abstracted output. You can try fitting a bipedal model to a person, but that may misfire on statues near a roadway. Or you can plug the whole thing into a shiny set of general algorithms that integrate over space and time, let them work their magic, and pick the one with the lowest false-positive or false-negative rate.

Not saying it's a right or wrong approach, and obviously any tests should include an appropriately large data set, along with added perturbations for all manner of lighting, noise, angles, added vehicles, etc. But it's a common question: do you trust the most accurate model or the one you understand best? Ideally the former is the latter, but that can sometimes only come after years of analysis. I think the hip young computer scientist's answer would be to make all data and analysis pipelines open, but something tells me Comma wants the quick and easy solution.
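That trade-off (an auditable hand-written rule versus an opaque model, each judged by its error rates on held-out data) can be sketched in a few lines. Everything below is a toy: the detectors, the thresholds, and the data are all made up for illustration, and the "black box" is just a second rule standing in for a learned model.

```python
# Toy sketch: score an interpretable rule-based pedestrian detector
# against a stand-in "black box" on held-out samples, comparing
# false-positive and false-negative rates. All values are fabricated.

def rule_based_detector(height_m, width_m):
    # Hand-written, auditable rule: a "pedestrian-shaped" bounding box.
    return 1 if 1.2 <= height_m <= 2.1 and width_m <= 0.9 else 0

def black_box_detector(height_m, width_m):
    # Stand-in for an opaque learned model; here just an aspect-ratio
    # rule so the comparison below has something to measure.
    return 1 if height_m / max(width_m, 0.01) > 2.5 else 0

def error_rates(model, samples):
    # Returns (false-positive rate, false-negative rate).
    fp = fn = pos = neg = 0
    for height, width, label in samples:
        pred = model(height, width)
        if label == 1:
            pos += 1
            fn += (pred == 0)
        else:
            neg += 1
            fp += (pred == 1)
    return fp / max(neg, 1), fn / max(pos, 1)

# (height_m, width_m, is_pedestrian) -- fabricated held-out samples.
held_out = [(1.7, 0.5, 1), (1.9, 0.6, 1), (1.0, 1.5, 0), (1.8, 0.9, 0)]

for name, model in [("rules", rule_based_detector),
                    ("black box", black_box_detector)]:
    fp_rate, fn_rate = error_rates(model, held_out)
    print(name, fp_rate, fn_rate)
```

On this fabricated set the opaque model happens to score better, which is exactly the uncomfortable situation being described: the numbers favor the thing you can't explain.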

kleinbl00  ·  2733 days ago  ·  link  ·  

It would be a herculean labor to convince me that any "startup" has access to or has put the effort into training a car to drive in such a way that won't lead to tragedy.

I include Tesla in that grouping.

I do not believe that on-the-job training for any neural network won't kill way too many people unless you offer people the sensors for free and compare real-time human decisions with projected neural network decisions until you have near-total overlap over millions and millions of miles driven. Tesla could be doing that but I don't think they would have let the monster loose as early as they did if they'd taken this approach.
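The comparison loop described above (run the model in shadow, score its proposed action against what the human actually did, and only consider deployment at near-total agreement over huge mileage) can be sketched like this. Every name, the threshold, and the log data are hypothetical.

```python
# Toy sketch of "shadow mode" validation: the model proposes an action
# for every logged frame, the human's real action is recorded, and
# nothing ships until agreement is near-total. All values hypothetical.

AGREEMENT_THRESHOLD = 0.999  # hypothetical bar for "near-total overlap"

def shadow_agreement(logged_frames, model):
    """logged_frames: iterable of (sensor_snapshot, human_action)."""
    matches = total = 0
    for sensors, human_action in logged_frames:
        total += 1
        matches += (model(sensors) == human_action)
    return matches / total if total else 0.0

def ready_to_deploy(logged_frames, model):
    return shadow_agreement(logged_frames, model) >= AGREEMENT_THRESHOLD

# Fabricated log: the model agrees with the human on 3 of 4 frames.
model = lambda sensors: "brake" if sensors["obstacle"] else "cruise"
frames = [
    ({"obstacle": True}, "brake"),
    ({"obstacle": False}, "cruise"),
    ({"obstacle": False}, "cruise"),
    ({"obstacle": True}, "swerve"),  # human did something the model wouldn't
]
print(ready_to_deploy(frames, model))  # prints False: 0.75 is far below the bar
```

The point of the sketch is the asymmetry: a model can look fine on most frames and still be nowhere near a deployment bar set at millions of miles of near-total overlap.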

thundara  ·  2733 days ago  ·  link  ·  

    It would be a herculean labor to convince me that any "startup" has access to or has put the effort into training a car to drive in such a way that won't lead to tragedy.

    I include Tesla in that grouping.

Ditto, though it seems like Google has at least taken that approach. I'm a little surprised that Tesla skipped the line on the regulation. Maybe something to do with the legalese of how the feature is offered?

kleinbl00  ·  2733 days ago  ·  link  ·  

Google has no interest in selling cars. Google will license their dataset and path-following technology (because that's what they're building) to anybody who wants to pay the fees, thereby allowing anyone and everyone to hop onto a crowdsourced traffic system rather than building "autonomous vehicles."

Tesla mostly wants to sell batteries and battery-powered cars. They do that by being innovative and a leader in the industry. They're largely appealing to rich eccentrics (for now) but Tesla's exit is probably to another car company. Elon Musk never wanted to be Henry Ford; his spiritual hero is DD Harriman. I think he's said as much although I can't find a source at the moment.

Creativity  ·  2734 days ago  ·  link  ·  

I definitely have less experience, which is why I find what you have to say on the subject interesting.

    That's another assertion that the goal of "safety" must not be put ahead of the goal of "innovation" and that further, "innovation" is not possible without compromising "safety" in some way.

Yep. I guess sometimes I should read with my finger.