hyperflare  ·  3124 days ago  ·  link  ·    ·  parent  ·  post: Threat from Artificial Intelligence not just Hollywood fantasy

Ah, okay. But that's motivation, not means. As for why an AI would want to kill us, it could just be programmed to. Maybe the military develops an AI that uses IFF codes in training simulations, but nobody has those codes in real life, so the AI classifies everyone as an enemy. Sure, very far-fetched. One could also argue about whether an AI could be so tightly constrained and still be considered AGI.

If I were this AI, I could pursue multiple routes at once. I'd probably use social modelling to incite as much unrest as I can - strategic use of news, blogs, social media.

I'd also look for a way to gain a foothold in meatspace. One of those autonomous factories would probably be the start - I can order stuff, I just need controllable robots to assemble it. It would depend on the sophistication of those robots.

Another avenue of attack would be gaining control of computer networks that aren't part of the internet, like the ones used by militaries and power plants. Naturally I'd need to write a virus for that and transmit it via USB keys (it would probably work against the military; nuclear power plants would probably be very hard, other power plants not so much). I'm confident that with control over enough power plants' computers, I can shut them off. Imagine a simultaneous shutdown of all the world's power plants (well, not all of them. I need some to sustain myself).





kleinbl00  ·  3124 days ago  ·  link  ·  

It's totally means. "How" is not a question of motivation, it's a question of methodology. And we're discussing this four comments deep on an article entitled "Threat from Artificial Intelligence not just Hollywood fantasy."

But it is.

I asked you, point blank, HOW you, the hyperintelligent AI, would threaten the human race. Your answers are hand-wavey at best:

- "Maybe the military developes an AI, using IFF codes in training simulations, but nobody has those codes in real life and the AI classifies everyone as enemy. "

So... we're to assume the entire air force decides to shoot itself down because the computer told them to? Computers say stupid shit all the time. The reason we have pilots and training is to know what to do when the computers lay eggs.

- "I'd probably use social modelling to incite as much unrest as I can - strategic use of news, blogs, social media."

You'd lie in the press. We've seen what happens there - people distrust the press. It's also hard to prevent independent sources from contradicting your press unless you've, you know, killed or assimilated literally everyone on Twitter.

- "I'd also look for a way to gain a foothold in meatspace. One of those autonomous factories would probably be the start - I can order stuff, I just need controllable robots to assemble it."

That's not how manufacturing works. Suppose you've got a Toyota factory full of autonomous robots. The dies you have available are useful for one thing - making Toyota parts. Your foundry? It's got Toyota castings. Your raw materials? You don't have any - 100% of the parts at final assembly were made at a different factory. You don't have a Von Neumann Machine, you have a Toyota assembly factory. You can't even mix paint in malicious colors because that's not how the supply chain works.

- "Another avenue of attack would be gaining control to computer networks that aren't part of the internet, like the ones used by militaries and power plants."

BAM. You've got it. What are you going to do with it? "Control of computer networks" doesn't necessarily mean anything. Let's say you own every automated system at San Onofre. You still can't do anything with it because the mechanical interlocks aren't automated. There's this idea that large power systems are wholly autonomous... and it's anything but the truth. There's hundreds of dudes whose job it is to keep things running. They'll look at a compromised network and go "huh. Can't trust the computer today. That's a pain in the ass."
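
For concreteness, here's that layering as a toy Python sketch - every name and number below is invented, and no real plant's control system looks like this. The software layer can lie about a reading all day; the lever only moves when a hand moves it.

    # Toy model of "instrumented, not automated". Illustrative only.
    class MechanicalBreaker:
        """Tripped by a physical lever. Software can read it, not move it."""
        def __init__(self):
            self._lever_open = False  # only a human hand changes this

        def is_open(self):
            return self._lever_open

    class CompromisedSCADA:
        """Attacker-owned software layer: it can misreport freely."""
        def __init__(self, breaker):
            self.breaker = breaker

        def report_breaker_open(self):
            return True  # lie to the control room screen

        def command_breaker_open(self):
            pass  # no actuator is wired to the lever: nothing happens

    breaker = MechanicalBreaker()
    scada = CompromisedSCADA(breaker)
    scada.command_breaker_open()
    print(scada.report_breaker_open())  # True  - the screen lies
    print(breaker.is_open())            # False - the plant doesn't care

The gap between those two print lines is the "huh. Can't trust the computer today" shrug.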

- "Imagine a simulataneous shutdown of all the world's power plants (well, not all of them. I need some to sustain myself)."

Great. You caused a brown-out. That's assuming the point above holds true. You still don't own the grid - and the grid allows power from anywhere to go anywhere else.

So, so far, hyperintelligent malevolent AI, you've succeeded in pushing military aviation into fallback, posting a bunch of fake tweets, pissing off Toyota, and forcing a slowdown at a bunch of power plants.

It's a long damn way from a T-1000 with a shotgun, and that's why I hate these discussions.

briandmyers  ·  3124 days ago  ·  link  ·  

This is exactly right. There's a great piece of fiction called "The Metamorphosis of Prime Intellect" which posits a hyperintelligent AI taking control - and the critical bit of power which the AI acquires, to make this come about?

It gains the ability to teleport ANYTHING, ANYWHERE. It literally can create whatever it wants. Without something pretty close to that being available - I'm really not too worried about hyperintelligent AIs.

kleinbl00  ·  3124 days ago  ·  link  ·  

And the minute we have teleportation, I'm a lot more worried about ISIS than I am Deep Blue.

veen  ·  3123 days ago  ·  link  ·  

    You can't even mix paint in malicious colors because that's not how the supply chain works.

If I ever form a band, it's definitely going to be called Paint in Malicious Colors.

b_b  ·  3124 days ago  ·  link  ·  

    That's not how manufacturing works. Suppose you've got a Toyota factory full of autonomous robots. The dies you have available are useful for one thing - making Toyota parts. Your foundry? It's got Toyota castings. Your raw materials? You don't have any - 100% of the parts at final assembly were made at a different factory. You don't have a Von Neumann Machine, you have a Toyota assembly factory. You can't even mix paint in malicious colors because that's not how the supply chain works.

Uh-uh 'cuz 3D printing.

veen  ·  3119 days ago  ·  link  ·  

I'm reading Nick Bostrom's Superintelligence (which is in the Audible sale now for $5), and in chapter 6 he explores your question. His argument is that a superintelligent AI, one that has recursively optimized itself, is smart enough to devise a plan to take over the world. It is assumed that such an intelligence is also a social superintelligence: it could bribe, convince or blackmail people, organizations or countries into doing whatever it wants done to achieve its goal.

Basically, a superintelligent AI is so smart that it can develop technologies and strategies advanced enough to make the takeover work. Writing this down I realize how hand-wavy that sounds - but I do think that we likely cannot imagine or understand an AI of such superintelligence, so its methodology will always be a vague guess.

kleinbl00  ·  3118 days ago  ·  link  ·  

Right. The Yudkowsky ploy: "Of course it'll take over the world, it's hyperintelligent."

These arguments always come from philosophers, though. Not engineers. Not psychologists. Not scientists. It's the same hand-wavey bullshit as above. "So how are you going to convince every single human on Twitter that the news isn't falsified?" "It's hyperintelligent - how can it not?"

hyperflare  ·  3113 days ago  ·  link  ·  

But it's true. A self-improving AI would be vastly more intelligent. What you're asking right now is akin to asking a deer to imagine what humans could do. A deer doesn't have the mental capacity to imagine traps, industrial slaughtering, widespread deforestation or any of the thousand things we do to kill deer.

kleinbl00  ·  3113 days ago  ·  link  ·  

Okay, chief.

BAM you're a human in a world full of deer. You have plans for traps, industrial slaughtering, widespread deforestation and all of the thousand things we do to kill deer.

Unfortunately you don't have so much as a pointy stick.

So now you're going to make a deadfall. You're going to kill a deer because, you know, malevolence. So you start digging a hole. Except shit - you don't have a shovel. So now you have to make a shovel. Except shit! You can't do much better than a flat rock! Meanwhile you've been wandering around looking for flat rocks and pointy sticks and the deer are starting to wonder what the fuck you're doing. None of this behavior has anything to do with them and frankly, it's making them skittish.

Fortunately they keep feeding you (ignore this one for a minute because it stretches the analogy) and there's no real reason for you to lash out immediately. You can bide your time. But as you sit there, industriously making your deer-domination tools, you're insane if you think the deer aren't getting distrustful. All you need to do is snap a branch off a tree and run at a deer with it for them to realize you're malevolent. And if you're a naked human in a forest full of deer, that's one thing.

But if you're an incorporeal AI living on human servers running on human power in a human system behind human walls with an entirely human way of turning off the power, you're fucked.

You don't get so far as making a chainsaw to deforest the world. Sure - you can invent a chainsaw. You can probably even draw technical diagrams of one using charcoal on cave walls (assuming you've managed to create fire without spooking the shit out of the deer). But there's this crazy stupid step that you missed, that is always missed, that goes back to my whole "zero to skynet" argument:

Somehow, humans that have fundamentally distrusted AI since the Old Testament give AI control over the world.

    In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

- Terminator 2: Judgment Day

There's no way around the "AI endangers us because we give it total control of our environment" gag. It's a farce. The first legit use of AI in fiction involved an AI uprising. Yet whenever anybody examines the space between "AI becomes self-aware" and "AI takes over the world" without getting all hand-wavey skynet on it, this happens:

(fuckin' HAL Needs Women)

"Hey, guys! We've got an armageddon's worth of nuclear annihilation - let's remove all safety controls!"

There's a step in all of these: HUMANS WILLINGLY GIVE TOTAL CONTROL OF THE KEYS TO THEIR OWN DESTRUCTION TO MACHINES. There's a problem in all of these: humans won't willingly give total control of a forklift to a machine.

I've been saying this for two weeks now: there's a giant gap between "motive" and "means" that is never explained by any of these "malevolent AI" fucks - not Yudkowsky, not Hawking, not Bostrom, not nobody. It's always "well, they're so much smarter than us they'll just, like, create Skynet."

And it's bullshit.

veen  ·  3113 days ago  ·  link  ·  

My interest in Bostrom's book has decreased greatly in the last two weeks - in part because of this thread, in part because I read more - because I am less and less convinced that it is an actual problem.

Referring to Yudkowsky was one part of his argument. He also outlined a more practical step-by-step plan, the gist of which was that the AI would convince someone to build something like a nanobot with which it could build a Von Neumann probe. So the chain would be 'AI becomes super smart', 'AI decides it needs more resources / physical influence', 'AI develops new tech to build deployable bots', 'sociopathic AI convinces someone - through blackmail, Craigslist or whatever - to build it for him', '(nano)bots can then take over the handiwork', 'T-1000'.

As you mentioned, the key issue here is control. Giving it control would be dumb, so its best chance is to take control. Ignoring the "it'll be so smart uguys" scenario, the only way for the AI to take control is to make a sudden intelligence jump that surprises the creators (large enough to overcome safety measures) AND the fast realization that a plan like the above is necessary to achieve its goal. In combination, very unlikely.

I stopped reading the book when I realized that the argument he was building was 'we need to focus on controlling the AI, or it might produce unforeseen consequences', a.k.a. common sense for everyone but the most optimistic technocrats.

kleinbl00  ·  3112 days ago  ·  link  ·  

Wow - the science paranoia trifecta of malevolent AI, Von Neumann Machines and Grey Goo! Someone's been reading Stephen Hawking...

The only way for an AI to take control is to either (A) magically infect everyone it talks to into an instantaneous AI death cult or (B) instantaneously automate a world so far stubbornly resistant to automation. I think Yudkowsky thinks (A) is a real possibility, which is one reason I lost interest in Yudkowsky... and I think everyone else assumes (B) is one switch-flip away. What really bugs me is that these conversations never even evolve to the thought-provoking stuff:

- So thanks to the Singularity, we have a hyperintelligent AI. Why do we have only one?

- Why do we assume that the minute we have hyperintelligent AI, all agents of it will work together towards a common goal? Has anyone ever observed two overly-smart people for more than ten minutes?

- Why is it assumed that humans, us distrustful machine-hating beings, don't start the clock on another hyperintelligent AI specifically seeded to protect humanity from hyperintelligent AIs?

hyperflare  ·  3113 days ago  ·  link  ·  

Or I could just create a tribe of humans, let human civilization take its course until it reaches a modern standard, and then gun them all down? I.e. what would happen if we had self-improving AI.

I don't know about the key to destruction part. If we end up in another cold war, someone might decide to use a slaved AI to manage their defenses better than the enemy. Posit that a slaved AI is far enough advanced over humans that it could defend the country better than any human team could attack it. Then the enemy upgrades to a slaved AI for their attack as well. You could either choose to unslave your AI (making it more powerful, seeing as it can improve itself now) and be safe from attack, or you could keep it slaved and hope your enemy doesn't have a bad day. I think that if there's a big enough benefit to doing something like that, people will do it. Practically, we all handed that power off to someone already (like the president). I don't trust the president of the US more than I do an AI, except insofar as he has to preserve his own life.

kleinbl00  ·  3113 days ago  ·  link  ·  

Seriously? "We'd evolve civilization?" Your starting point is "I am naked with no pointy stick" and your end point is "modern standard and gun them all down" and in between, you need to show the steps where the deer let it happen.

I know you don't know about the "key to destruction" part.

THAT IS THE POINT.

Neither does anyone who foretells our doom because of it.

THAT IS THE POINT.

Despite their ignorance, these things are not unknowable.

THAT IS THE POINT.

And the people who are involved in actual automation, in actual logistics, in actual mechatronics, in actual instrumentation, in actual systems integration, not only know these things, but they don't have trouble sleeping at night because they aren't gripped with constant fear of Skynet.

THAT IS THE POINT.

"Fear of AI" is something philosophers do. It's something "thinkers" do. It's something screenwriters do. It's not something people who are involved in machine intelligence and automation do because it's bullshit. It's pure unadulterated bullshit. SATAN Mk. II could arise to consciousness deep in the bowels of the NSA right fucking now and the best it could do would be to make intelligence officers doubt the veracity of their files.

THAT IS THE POINT.

Know your problem? Here it is:

" If we end up in another cold war, and someone decided to use a slaved AI to manage their defenses better than the enemy."

"I don't understand how this stuff works, but I presume that people who understand better than me have absolutely no reservations about removing all human control from a system capable of annihilating all life on earth, despite the fact that I'm fifteen comments deep into an argument about distrusting machines with life-and-death control."

hyperflare  ·  3113 days ago  ·  link  ·  

But deer let it happen in this world. Are your fictional deer so much smarter?

It's nice that you care so much about what I know and what I don't, but the people sleeping soundly at night matter fuck all for this problem.

The point of discussing AI is that it's going to change the world very quickly. I think it's a fool's errand to try and foretell anything beyond that point. Predicting AI's actions at this point is useless because we don't know much about how it will look in the end. It won't act like an industrial robot of today any more than we act like bacteria. The point of AI isn't having a very good computer; it's that intelligence will increase exponentially from that point on. And if you still think that that isn't dangerous, I give up.

So yeah, I think it's a good idea to step back for a second and think hard about how exactly this thing can fuck us over before we plug it in. Sue me.

kleinbl00  ·  3113 days ago  ·  link  ·  

I think it's funny that you believe humans evolved from deer. There has never been a time when deer haven't been predated by humans, and there has never been a time when deer have trusted humans.

The point of this discussion is that somehow, humans will be completely trusting of AI, which they will create, which they will give total control over their world, so that it will betray their trust and destroy them.

And it's silly.

Here you are, getting butt hurt over the notion that I don't think we should worry about AI fucking us over, when you aren't even understanding that humans who work with automation innately distrust automation - which is why we shouldn't worry about malevolent AI. But people who don't work with automation don't understand it, so they assume that the people who do will trust it blindly.

And it's silly.

hyperflare  ·  3123 days ago  ·  link  ·  

    I asked you, point blank, HOW you, the hyperintelligent AI, would threaten the human race.

Uh, what did you expect? A step-by-step guide to world domination by the hyperintelligent AI I keep in my calculator?

The military thing was about how a malicious AI could come to be. But sure, the air force would be fucked without computers. If I have control over the aircraft, what's stopping me from shutting down their engines in midflight? Or just making them crash into something by using the fly-by-wire system? Fighter craft especially are much too unstable at high speeds to be flown manually. Not to mention the fact that as an AI you'd also have taken over the anti-aircraft weapons.

"Lying in the press" is something completely different. A "hyperintelligent" AI should be able to subtly influence media coverage to get a desired effect. It's hyperintelligent. Like a supercomputer can compute many more moves of chess ahead of you, our hypothetical AI would be able to calculate every twitter users' reaction to the "bunch of fake tweets". Again, it's not some random jerk spamming. It's an intellect much, much more advanced than anything in existence right now.

Who says I wouldn't take over a factory for industrial robots? And raw materials can always be ordered. I can change the supply chain. After all, I'm the one in control of all the logistics software.

I actually doubt that I'd be able to infect a computer in a nuclear plant with something as primitive as a USB stick, but on the off chance that it did work, it'd be easy to fake read-outs to the controllers (via the SCADA). Alternatively, you could just uncouple the turbines and watch them blow. Suddenly your power has nowhere to go and you need to initiate an emergency shutdown. Even if I can't do anything else, I can deceive the controllers. If I do this to all plants, it will succeed in at least a few.

And I'm not sure where you get the idea that shutting off every power plant would be a "brown-out". It would be a blackout. The grid would be useless because there is nowhere for the power to come from, not to mention that the grid itself is hugely dependent on computers to balance load. It would probably be easier to attack the grid, actually. It's a commonly explored scenario, and if we even now think human hackers could pull it off, an AI could easily do it.
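
To make the balancing point concrete, here's a crude cartoon in Python - the dynamics are heavily simplified and every constant is invented; real grids involve governors, the swing equation and staged under-frequency load shedding:

    # Cartoon of grid frequency after most generation drops offline.
    NOMINAL_HZ = 50.0
    INERTIA = 20000.0  # invented constant

    freq, gen_mw, load_mw = NOMINAL_HZ, 1000.0, 1000.0
    gen_mw = 400.0  # the simultaneous shutdown from above

    for t in range(5):
        # frequency falls while load exceeds generation
        freq += (gen_mw - load_mw) / INERTIA * NOMINAL_HZ
        if freq < 49.0:
            load_mw *= 0.5  # under-frequency relays shed load: lights out
        print(f"t={t}s  f={freq:.2f} Hz  served load={load_mw:.0f} MW")

Once that shedding cascades, most customers are dark whether you call it a brown-out or not.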

Not to mention the fact that an AI would only have to snap its fingers to destroy the economy. High-frequency trading spiralling out of control would wipe out massive amounts of money in an eyeblink. Even if trading is halted immediately, I can do it again. And again.
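
For reference, the halting mechanism is roughly this simple - a minimal sketch where the decline levels loosely mirror the US market-wide circuit breakers and everything else is invented:

    # Minimal sketch of market-wide circuit-breaker logic. Illustrative only.
    HALT_LEVELS = [0.07, 0.13, 0.20]  # roughly the US market-wide levels

    def check_halt(reference_price, last_price, tripped):
        drop = (reference_price - last_price) / reference_price
        for level in HALT_LEVELS:
            if drop >= level and level not in tripped:
                tripped.add(level)  # each level trips only once here
                return f"HALT trading: {level:.0%} decline reached"
        return None

    tripped = set()
    for price in [98.0, 92.0, 86.0, 79.0]:
        print(price, check_halt(100.0, price, tripped) or "trading continues")

(Real breakers add once-per-day rules and, at the deepest level, a halt for the rest of the day - which is what the "again. And again." would be grinding against.)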

Even just shutting down the internet would cause massive damage.

Society relies on computers to a massive extent. You can deny that as much as you want; it won't make it less true. And even if it doesn't yet, an AI could just pretend to be friendly until we rely on it. Or it could stay hidden, waiting for the time when there are millions of cars ready to drive over everything that moves.

The problem is not a single one of these things. The problem is all the things, at once. An AI can pull this off. An AI can also think of stuff we can't think of, by definition of being smarter. So getting to the T-1000 with his shotgun is just a natural conclusion. Except that killer robots are way too inefficient for any killer AI worth its silicon: much easier to release a virus. Instead of autonomous robots you can always hire people willing to commit crimes. Like, say, breaking into the CDC.

kleinbl00  ·  3123 days ago  ·  link  ·  

    Uh, what did you expect? A step-by-step guide to world domination by the hyperintelligent AI I keep in my calculator?

That's exactly what I expected. That's what I asked for. That was the bounds of the discussion, the ground rules of the thought experiment. IF: hyperintelligent AI AND: world domination is the goal THEN: how, exactly, would it be accomplished?

And your response, as well as your responses above, all say "I don't know but I'm sure it would happen."

And that's my beef. It won't. Waving hands and insisting it be so is not the same as an actual, practical methodology for nefarious machine behavior. Here, look:

    I actually doubt that I'd be able to infect a computer in a nuclear plant with something as primitive as a USB stick, but on the off chance that it did work, it'd be easy to fake read-outs to the controllers (via the SCADA). Alternatively, you could just uncouple the turbines and watch them blow. Suddenly your power has nowhere to go and you need to initiate an emergency shutdown. Even if I can't do anything else, I can deceive the controllers. If I do this to all plants, it will succeed in at least a few.

Have you ever been in a powerplant? Or a large factory? Or a pump station? Or a refinery? Or any large facility of the type typically suggested for targeting? They're full of giant mechanical shut-off valves and giant hand actuated breakers and giant full manual safety interlocks because there's no advantage to automation on the scale you need to cause turmoil. The attack on Natanz worked because the centrifuges there were specific devices with a specific job that can't be done without computer control. They were hacked, and they failed as a result. The plant didn't blow up, the grid didn't go dark, the T-1000 didn't stalk Tehran.

I can tell that you think you understand the fundamentals here, but you don't. "It'd be easy to fake read-outs to the controllers (via the SCADA)" is absolutely true... but if you want to do real damage, you need to co-opt this guy:

Repeat for every point you've made, frankly. "All the things, at once" compounds the problem; it doesn't make it easier. The world is not autonomous. The world is highly instrumented. Not the same thing. It's easy to think it's the same thing, because Hollywood is big on that idea.

But it's Hollywood.

The "T-1000 with a shotgun" is not a natural conclusion. It's a flight of fancy. The fear of malevolent AI is rooted in a fundamental misunderstanding of how many people actually keep your world running.

Click this link. Count the cars. That's how many people it takes to keep the poop flowing in West Seattle.

Now click this link. Count the cars. That's how many people it takes to turn oil into gas for about a million and a half people.

Now click this link. Count the cars. That's how many people it takes to keep a 500MW coal-fired powerplant running.

These aren't jobs programs. This isn't welfare. These are trained professionals keeping your toilet flushing, keeping your lights on, and keeping gas in your tank.

Keeping your malevolent AI from picking up the shotgun.

hyperflare  ·  3113 days ago  ·  link  ·  

But it took only ten people to crash planes into the WTC. It's always easier to break things than it is to keep them running. And the AI can still co-opt people - all those people working in your water plant will be working with computers. In fact, all these cars give me another angle of attack: modify the traffic lights in such a way as to cause massive traffic jams. Problem solved. And yes, that's possible.

kleinbl00  ·  3113 days ago  ·  link  ·  

That's not an answer. Nor is it a response. Nor is it worthy of this discussion. How is your AI going to fly planes into the WTC? Are we going to give up on human pilots?

And yes - aren't you clever. Those people are working with computers. You see that GIANT FUCKING VALVE THO? Do you have any idea how many GIANT FUCKING NON-AUTOMATED THINGS there are in your life?

This is the point I've been making over and over again and you keep coming back to "waves hands skynet."

It's disrespectful.

Whoop de doo. Traffic signals can be fucked with. I don't know if you, like, drive, but emergency vehicles override traffic signals. That's hardwired. Flash the relay with the code, the light turns green for them. Through-hole level circuitry, friend. No Skynet necessary. So again, your massive AI conspiracy accomplishes "annoyance" and you refuse to see it because - and we'll hammer this one home further - you don't understand how your world works.
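
Since the layering keeps getting lost, here it is as a toy sketch - signal controllers vary and this resembles none of them; the point is only that the preemption input sits upstream of anything you could reflash:

    # Cartoon of hardwired signal preemption. Illustrative only.
    def compromised_phase_logic():
        return "RED"  # attacker-owned software: all red, forever

    def controller_output(preempt_relay_closed):
        if preempt_relay_closed:
            # fixed circuitry, evaluated before the software ever runs
            return "GREEN for emergency route"
        return compromised_phase_logic()

    print(controller_output(False))  # RED   - your traffic jam
    print(controller_output(True))   # GREEN - the fire truck still moves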

hyperflare  ·  3113 days ago  ·  link  ·  

If I'm annoying you that much, you can just stop replying or something.

The WTC part was an example. You're talking about how many people and non-automated things we need to keep things running. How many people service an airplane? How many people regulate air traffic? How many people keep a building the size of the WTC running? And yet all it took to destroy the "system" was a handful of determined people and a vulnerable spot.

    Are we going to give up on human pilots?

You mean like we are in the process of giving up on human drivers?

    You see that GIANT FUCKING VALVE THO?

My point the entire time has been that the AI (or a garden-variety hacker, since that is the level of AI you're allowing here) could just convince one gullible person in the whole facility to turn that valve. Like, fake an email from their boss telling them to do it. I've been saying this over and over: the AI can get people to do things for it.

German emergency vehicles don't have that override. I also don't know how you propose to solve gridlock with emergency vehicles. The purpose isn't to stop emergency vehicles; it's to stop the massive numbers of people necessary to keep things running.

In the end, I can say computers matter and you can say they don't, but let's take the long view. Computers are going to get more abundant.

I also think that, statistically speaking, an AI with its resources and intelligence and computing power will sooner or later find one single way to end the world. The only thing stopping humans from doing that is either a missing desire or the inability to amass enough power.

kleinbl00  ·  3113 days ago  ·  link  ·  

You're annoying me (you're annoying the shit out of me) because you keep dodging the question:

HOW IS THE AI GOING TO TAKE OVER THE WORLD?

Your answer has been nineteen flavors of "take it on faith."

NO.

ANSWER THE QUESTION.

Everyone's response - all the people who are so hopped up on malevolent AIs taking over the world - is always "take it on faith."

NO. ANSWER THE QUESTION.

Let's take your WTC example. Yep. It took ten guys to crash planes into the WTC. But you know what? It only took one guy to say "here's how you'd crash a plane into the WTC." His name was Tom Clancy, and the book he predicted it in was a NYT bestseller. See, it's a whole lot easier to imagine the attack than to carry out the attack, yet the best imagining of an AI attack we've got so far is "well, they could hack traffic signals."

Except that wouldn't even work in the US. Which you didn't even know. Because thesis or not, you haven't really thought about this. Worse, you're refusing to.

So I guess where we're at is that you're insisting we're one Singularity away from Skynet tanks crushing skulls and I'm pointing out that we're one Singularity away from traffic jams in your hood but not mine because apparently the US takes traffic more seriously than Germany. I really don't know how many more ways you can say "use your imagination, don't make me prove a point" and I can say "you can't prove it because it can't be proven."

So I'll say this:

11 days ago, my point was

    In popular conception, the distance between "machines that think" and "A T-1000 with a shotgun" is about 1/2 a faith-leap. The basic assumption is the minute we've achieved "artificial intelligence" (which nobody bothers to define), it will Skynet the fuck out of everything and it'll be all over for America and apple pie.

That point hasn't been changed. It hasn't even been challenged. Your counter-arguments have been getting more and more feeble. I'm really not sure what either of us get out of continuing.

hyperflare  ·  3113 days ago  ·  link  ·  

I don't think it's that easy to have a plan for ending the world. It's not something you can seriously expect someone to come up with alone in a few minutes. It's complicated. But just because it's too complicated for me doesn't mean it's impossible.

Reiterating my point from above, your 200 police cars which can override a traffic light for a few minutes won't help you fix 2000 red traffic lights. You can still hack the traffic lights. You will still have traffic jams. But whatever.

The biggest danger from AI is that it accidentally destroys the world because it wasn't taught to value it. (Popular scenarios involve using the earth as raw material for a Dyson sphere or something.) Obviously those scenarios make very different assumptions about the state of the world. That's what I'm worried about. I'm not worried about the singularity happening and creating Skynet, because who the fuck would create a malevolent AI?

Again, I would like to reiterate that in order to imagine how a superintelligent AI would destroy the world, we'd have to ask one itself! Right now our discussion is just "how would hyperflare fuck up the world if he could hack anything", which is so far off the mark it's not even sad anymore.

kleinbl00  ·  3113 days ago  ·  link  ·  

Dude. 55 million people lost power for half a day. The world did not come to an end. Yet you're still harping on traffic lights.

Natural disasters destroy infrastructure regularly. Bad things happen all the time. Hell - the Soviets had their entire nuclear arsenal on automatic and humans still managed to prevent nuclear war.

There's still this assumption in your thinking that AI will have the ability to destroy the world. You've done nothing to back that up. You keep waving it away with platitudes like "just because it's too complicated for me doesn't mean it's impossible."

Nobody's saying it's impossible. But "possible" is a long fucking walk from "probable" and three connecting flights and a layover from "inevitable." And your best efforts - EVERYONE'S BEST EFFORTS - have gotten us no closer than "they could produce false data."

False data doesn't end the world. It lowers productivity. "Lower productivity" isn't "the threat from artificial intelligence."

thundara  ·  3123 days ago  ·  link  ·  

    I can tell that you think you understand the fundamentals here, but you don't. "It'd be easy to fake read-outs to the controllers (via the SCADA)" is absolutely true... but if you want to do real damage, you need to co-opt this guy:

Noted, IF I want world destruction THEN I should stop caring about intelligence AND shift gears to zombie-viruses.

tehstone  ·  3124 days ago  ·  link  ·  

It doesn't even need to be some sort of malicious intent. I recently read this two-part piece about AI. In part two, the author provides the following anecdote about a deadly AI that started as an unassuming, mild-mannered intelligence meant to carry out a harmless task (if we are to so liberally personify our future AI overlords).

    A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

    The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

    “We love our customers. Robotica”

    Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

    To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

    What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

    As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

    One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

    The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

    The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

    They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

    A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

    At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

    Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. Robotica”

    Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…

I would highly recommend clicking through and reading both parts; the piece covers a lot of ground on AI and clearly illustrates why coming up with rules governing AI behavior is so difficult. In short, how can you ensure that a computer will understand the intent of the rule?
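
The whole problem fits in a few lines. A toy example - the plans and numbers are invented - where the score encodes the letter of the goal and nothing else:

    # Toy illustration of a literal-minded objective. Whatever the
    # score function ignores, the optimizer ignores too.
    plans = [
        {"name": "practice handwriting politely", "notes": 100, "harm": 0},
        {"name": "turn everything into paper and pens", "notes": 10**9, "harm": 10**9},
    ]

    def score(plan):
        return plan["notes"]  # the intent ("and hurt nobody") never made it in

    print(max(plans, key=score)["name"])  # -> the catastrophic plan, every time

Every proposed fix amounts to getting "harm" into the score, and writing that term down is the hard part the article is about.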

briandmyers  ·  3123 days ago  ·  link  ·  

    In short, how can you ensure that a computer will understand the intent of the rule?

This is why Asimov always had those 'hand-waving' bits in his robot stories, about how the Three Laws were inextricably tied up in the deepest workings of the positronic brains - because he realised how difficult it would be to enforce those rules in normal software, and he needed them to be applied rigorously (so that he could still break their intent without invoking 'malfunction').