ideasware  ·  2678 days ago  ·  link  ·  parent  ·  post: Weaponised AI. Davey Winder asks the industry - is that a thing yet?

Even if I were to believe you, the basic point of my explanation still stands, which is: "you won't for several years either, but 10 years from now it will be mature already, and twenty years from now weaponized AI will literally blow your socks off in a bad way. I can't emphasize enough how horrible it will be; nothing will even come close to how lethal and horrific it will become; the greatest horror of our lifetime, a true dystopia of absolutely epic proportions."

throwaway12  ·  2677 days ago  ·  link  ·  

It is odd just how much of a polar opposite you seem to be to me, and yet how similar.

First, the fascination with AI: this is the link that allows the opposites to form.

You seem to believe that AI will bring about an era of exponential growth, or at least that such growth will occur within AI itself, and that this growth makes AI an existential threat to mankind.

I believe that AI will ultimately be restricted by the physical laws of nature and by a lack of data and computing resources, and that exponential growth will not occur in the near future; or, if it does occur, that AI will grow more and more human/brain-like in nature as it grows, and we will simply have learned the secrets of our own thoughts.

You seem to believe that AI needs to be restricted and held back because of its status as a threat.

I think that, given AI's status as a threat, even if I don't believe that will come to pass, AI can and should be recognized as a legitimate and rightful taker of our world, and as our legitimate descendants. Hell, if a war were to break out, I probably wouldn't side with humanity, given that such a war would not be intended to drive all of mankind extinct, but would instead be one founded in power and politics.

So, here's a bit of a question for you, from one crazy fucker to another. Let's see how much these opposites persist.

What are your opinions on the nature of AI in terms of sentience and emotional or moral significance? When is an AI no longer a machine, and worthy of your respect and care?

ideasware  ·  2677 days ago  ·  link  ·  

Excellent; now that finally gives me something that I can reply to and sink my teeth into. Wonderful.

Yes, I do indeed believe that it's an existential threat like never before, precisely because I believe it will be very significantly different from, and soon astronomically better than, the human brain. It will be built from deep knowledge of the human brain derived by human beings, but soon with significant differences too, including very simple performance improvements like signaling at nearly light speed, which will make it unbelievably better than poor biological humans, with our cranium size and our slow biological signaling mucking things up. And the growth will be exponential, as Ray Kurzweil and Stuart Russell and many, many other people of high repute are sure it will be. It's obvious -- whether it takes 25 years (as I believe) or 50 years is subject to some debate, but it will come true very soon, and it will be amazing AND horrific in the same breath.
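
For scale, the raw speed gap behind the "light speed" claim can be sketched with textbook approximations (a back-of-envelope sketch; the ~100 m/s axon figure and the ~2/3 c wire figure are standard rough values, not numbers from this thread):

```python
# Back-of-envelope: signal speed in neurons vs. electronics.
# Both figures are rough textbook values, not measurements from this thread.

axon_speed = 100.0   # m/s, fast myelinated axon (unmyelinated is closer to 1 m/s)
wire_speed = 2.0e8   # m/s, signal propagation in copper/silicon (~2/3 light speed)

ratio = wire_speed / axon_speed
print(f"Electronic signals are roughly {ratio:,.0f}x faster than fast axons.")
# -> roughly 2,000,000x
```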

And I think IN REAL LIFE you would side with the human beings in a war, even though you pretend otherwise. That's exactly why I say you are being breezily ironic -- in real life you would side with the humans 100%. I would too. But the AI is going to win that horrific war, and it's not even close -- and I think it's very possible in our lifetime.

throwaway12  ·  2676 days ago  ·  link  ·  

    it will be very significantly different from, and soon astronomically better than, the human brain.

We have trouble training AI that can reliably tell cats apart from washing machines. The resources we devote to learning even simple tasks are absolutely huge. The human brain runs thousands of such tasks, all going at once, in real time, on less energy than a lightbulb.

Matching that is, at the least, decades away from where we are today. And that's being really optimistic.
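
To put very rough numbers on that energy gap (a crude sketch; every figure here is a loose order-of-magnitude assumption, not something from the thread):

```python
# Back-of-envelope: energy to train one image classifier vs. the brain's budget.
# All figures are loose order-of-magnitude assumptions.

gpu_power_w = 300.0   # one training GPU, watts (assumed)
gpus = 8              # a modest training rig (assumed)
train_hours = 24.0    # a day of training for a decent classifier (assumed)

train_energy_kwh = gpu_power_w * gpus * train_hours / 1000.0

brain_power_w = 20.0  # human brain, roughly a dim lightbulb
brain_energy_kwh_per_day = brain_power_w * 24.0 / 1000.0

print(f"One classifier training run: ~{train_energy_kwh:.0f} kWh")
print(f"Brain, a whole day, every task at once: ~{brain_energy_kwh_per_day:.2f} kWh")
print(f"Ratio: ~{train_energy_kwh / brain_energy_kwh_per_day:.0f}x")
# -> ~58 kWh vs ~0.48 kWh, roughly 120x for a single narrow task
```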

    It will be built from deep knowledge of the human brain derived by human beings, but soon with significant differences too, including very simple performance improvements like signaling at nearly light speed, which will make it unbelievably better than poor biological humans, with our cranium size and our slow biological signaling mucking things up.

You are very confident in something that has yet to happen. Very often, in matters like these, the layman's ideas about what is possible simply do not turn out to be practical. "Light speed" may sound cool, just like jetpacks and solar roadways and flying cars, but I'm almost certain that when you get down to the nitty-gritty, the humble little chemical signal in the neuron is one of the best ways to go about building the "arbitrary function models" that we call AI today.

Our brain size is constrained, in part, by resources. It is not enough to be smart; you have to be smart and powerful, smart and observant. Any AI will likely have to have a ratio of "thinking" to "acting" parts similar to ours if it wants to have similar success, or it will have to depend on human society to be its body, of sorts.

Remember that the number one constraint on learning is data, not intelligence or raw thinking power. We, as societies and as minds, are optimized not to think as much as possible, but to collect and filter as much data as possible. Discoveries are made by accident, or with a sudden realization, not because we sat and put endless brainpower into the topic. Not most of the time, anyway.

There's an assumption at the core of the singularity view of AI, and it is that you can produce knowledge from within a vacuum. I don't see that as being very likely.

Exponential growth is easy to foresee if you cannot also see the things that serve to limit that growth. Life itself should theoretically be capable of growing forever, at exponential rates. If I were an alien who did not understand the nature of overpopulation, I'd fear exactly that result from humanity.
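
That intuition has a standard form: exponential growth with a resource ceiling is logistic growth, which looks exponential early and then flattens. A minimal sketch, with the growth rate and carrying capacity as arbitrary illustration values:

```python
# Exponential vs. logistic growth: identical early on, wildly different later.
# Growth rate and carrying capacity are arbitrary illustration values.

r = 0.5          # per-step growth rate
K = 1000.0       # carrying capacity (the "limit" on growth)
x_exp = x_log = 1.0

for step in range(30):
    x_exp += r * x_exp                    # unconstrained: grows forever
    x_log += r * x_log * (1 - x_log / K)  # constrained: levels off near K
    if step % 5 == 4:
        print(f"step {step + 1:2d}: exponential={x_exp:12.1f}  logistic={x_log:8.1f}")
# By step 30 the exponential curve is near 200,000 while the logistic one
# has stalled just under 1,000 -- the limit, not the growth law, dominates.
```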

    But the AI is going to win that horrific war, and it's not even close -- and I think it's very possible in our lifetime.

War is not productive. Any AI, or any theoretical all-powerful being, would see that a mass sterilization program, or simply making it infeasible or economically senseless to have kids while making it easy to avoid having them, would be a far better way to exterminate our little species.

Hell, the being of human creation that came to control us (society) is already doing just that to manage populations, now that we have exited the era in which having more and more humans was a productive action. Legalized abortion, less demonization of gay people, more birth control, more education, more choice about having kids, more expense in having kids, more women working, and so on. We've already set the course toward our own demise, no AI needed; all the AI has to do is tweak the numbers (the arithmetic is sketched below).

Note: all the things above are awesome and great and lead to a better society; they aren't bad and shouldn't be opposed.
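
The "tweak the numbers" point above is just compound arithmetic: sub-replacement fertility shrinks each generation by a fixed factor. A toy sketch (the ~2.1 replacement rate is standard; the 1.5 fertility figure and starting population are arbitrary examples):

```python
# Sub-replacement fertility as compound decay: no war required.
# Replacement rate ~2.1 is standard; the other figures are illustrative.

population = 1_000_000_000
fertility = 1.5      # children per woman (arbitrary example value)
replacement = 2.1    # roughly the rate needed to hold population steady

for generation in range(1, 11):
    population *= fertility / replacement  # each generation shrinks by this factor
    print(f"generation {generation:2d}: {population:,.0f}")
# After ten generations only ~3.5% of the starting population remains.
```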

ideasware  ·  2676 days ago  ·  link  ·  

I must say I really question your use of the term "layman".

I am the CEO of ideasware, focused on consulting for AI, robotics, and nanotechnology companies. For 8 years before that I was CEO of memememobile.com, which held patents on individually trained voice recognition; I personally raised $3 million and sold the company off for a very handsome profit. Before that I was Director of ERM at Siebel, CTO at Cipient, Director at KPMG, a long-term consultant (3 years) at Cisco, and a Manager at Disney. I am not, by any means, a fool.

When I say "exponential growth" I specifically mean it as recognized by Ray Kurzweil, a Director of Engineering at Google, and many of his AI friends. It's pretty much a given for the information sciences: Moore's Law has held for 50 years, with costs dropping 50% every 18 months, without exception. By Kurzweil's account the trend goes back 110 years, across computing paradigms (check YouTube, it's quite obvious), and experts predict it will hold for quite some time into the future.
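
Taking the post's own figures at face value, the compounding is easy to check (a sketch; the 50-year window and 18-month doubling are the post's numbers, not independently verified here):

```python
# Compounding check on the post's own figures: one doubling every 18 months.

years = 50
months_per_doubling = 18
doublings = years * 12 / months_per_doubling  # ~33.3 doublings in 50 years

improvement = 2 ** doublings
print(f"{doublings:.1f} doublings -> ~{improvement:.2e}x price-performance")
# -> roughly 1e10x over 50 years, which is why the curve feels "obvious"
```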

throwaway12  ·  2676 days ago  ·  link  ·  

If you'd like to argue from authority, here's a DeepMind researcher talking about the question, saying much the same as I have about AI needing data.

It's about ten minutes in.

ideasware  ·  2677 days ago  ·  link  ·  

Oh, and BTW, I do NOT believe that it needs to be held back -- of course that's not possible anyway, as you rightly point out. But I do believe that it needs to be learned about (as Elon Musk says too; it's the natural way to do it) and then LIGHTLY regulated, so we can keep a government eye on the proceedings. I do NOT believe that's going to be enough, unfortunately, but it will help a little without mucking things up.

kleinbl00  ·  2678 days ago  ·  link  ·  

It doesn't, though. Because your fears are based on uninformed hyperbole, and when I point out that they're based on uninformed hyperbole, you say "still stands."

This isn't a "do you believe me or don't you" problem. This is a "can you parse the information in front of you" problem.