Devac  ·  66 days ago  ·  link  ·  

But the weather is badass, even when it's not the severe kind. Also, I don't think that this stereotype transfers well to Poland. Usually it was my parents asking me about the forecast, because they knew I actually paid attention since I either walk or bike everywhere.

That said, I did refill my heart medications before travelling, so I don't know if I'm one to talk.

ideasware  ·  66 days ago  ·  link  ·  

Ok, at first I thought I'd just ignore you, but as you warmed up, I saw that after all you were really AI-junkies, and that I can get behind in a good way. Believe it or not (and I'm puzzled why you think I'm against you when I'm with you completely) I do very similar work to yours, except probably 30 years before you -- I am 56. I am Peter Marshall, from Irvine, CA, and if you want to know more about me just google it and see for yourselves, or go on LinkedIn. I am a programmer's programmer, although for the last 10 years I have been more of a CEO, managing programmers in personalized voice recognition, and now in AI, robotics, and nanotechnology, and am not programming in Python or TensorFlow any more per se.

I have a very simple proposition. Job loss is real, and about to become obvious, and I think will lead to widespread unemployment starting essentially now. Military AI arms and the race to achieve them will be incredible and unwise, and sooner or later will lead to war of unimaginable consequences, bigger and more lethal than ever before. We can disagree (or agree -- either one is fine) but you do not have to be disagreeable -- that is just silly, and I do not want it. Quite simple.

Devac  ·  66 days ago  ·  link  ·  

Let's go from the top:

    Ok, at first I thought I'd just ignore you, but as you warmed up, I saw that after all[,] you were really AI-junkies and that I can get behind in a good way.

My initial tone was, as I said in the opening paragraph, a response to your attitude and conduct. "(…) and the fact that most of you have no idea is truly unfortunate" is a very condescending attitude to have, especially without having the decency to offer some facts to back it up.

I'm rarely this unpleasant without a good reason, and I think that I have provided arguments for my reaction. It wasn't due to spite, but because of how you have chosen to talk down to people. That's a poor show of manners even if we were all laymen. Frankly, it would have been even worse if we were laymen, as you would have been in a position to enlighten us but opted to make it a show of arrogance instead.

    I have a very simple proposition. Job loss is real, and about to become obvious, and I think will lead to widespread unemployment starting essentially now.

But the economic climate also had something to do with it. The problem isn't with automation (of which AI is only a fraction) but with people not having the opportunity or willingness to change their set of skills. There's also the question of whether the local government does anything to aid this process.

I'm from Poland, a developing country where people are getting free training thanks to EU reimbursements. I live just a few blocks from a family where the father was a miner and the mother was a seamstress. One has become a chef and the other works in a preschool. Overall, they have gained from the change. You can dismiss it as anecdotal evidence, but it's not uncommon here. Taking a page from your book: just google it and see for yourself.

However, since this opens the can of worms that is politics and economics, I will now do something uncommon: admit that I don't know enough about it to make conjectures for the world in general.

    Military AI arms and the race to achieve them will be incredible and unwise, and sooner or later will lead to war [with] unimaginable consequences, bigger and more lethal than ever before.

OK, we can both do this now: and like most inventions from the military, it will trickle down to the civilians. I would gladly live in a post-scarcity place where the AI collective has designed self-sustaining fusion reactors for us, all dispersed in a network of small cells that makes it impossible for the AI to wipe us out without killing itself, so that the only winning move is not to play. My conjecture is as good as yours. You just prefer Vonnegut and I am more of a Star Trek and Lem person.

    We can disagree (or agree -- either one is fine) but you do not have to be disagreeable -- that is just silly, and I do not want it. Quite simple.

I am not disagreeable. Seriously, look through my earlier posts. There are few transgressions on my part for which I didn't apologise, and few arguments that I started out of hate. Even for those, I have made amends with the people involved. I am never above admitting my mistakes, even if it takes time for me to do so. If it turns out that I'm wrong, I will say so without hesitation. Frankly, I feel wrong even about correcting your grammar in the citations above. In general, it takes a lot to provoke me into being so much as snarky.

My attitude toward you is a response to a person who has been throwing around loose conjectures and telling people to do their research, all while telling them how wrong they are. I might have still disagreed with you if you weren't like that, but you would not have gotten my initial post had you provided sources for your claims, or at least not fallen back on the 'secret military research' line every time they are requested. If it's really a secret, then you shouldn't have known about it.

Bottom line: you have not managed to convince me about the AI doomsday either way. And I am not even sure if I have a side in this argument.

ideasware  ·  66 days ago  ·  link  ·  

I really want to hit on one major point about the military AI arms buildup. I think if you re-read my initial note with a fresh, even, calm perspective, you will find that I was not rude at all, not even slightly. I'm really genuinely puzzled by your actions.

But on to the main point. Right now it's just ANI for military arms, and it will take a little while to warm up, but everyone agrees -- including responsible generals and program managers with a need to know from DARPA and the DoD, General Mattis, Bob Work, etc. -- that in 10 years AI and super-drones that kill people with no human participation will already be extremely sophisticated and advanced, and in twenty long years, with AGI now on the table, it will be completely out of hand. You don't think that Lockheed Martin and Raytheon and Halliburton will be the first to know about the AI advances? And the Russians and the Chinese are right there with it for AI -- the Chinese are actually superior after 10 years. The slightest mistake could be absolutely deadly; end of the world deadly.

Honestly, how you don't know this with your advanced knowledge of AI and AGI just befuddles me -- a lot of true AI experts like Stuart Russell (professor of AI at Cal, my old school) agree with me. Elon Musk agrees with me -- HE SAYS it's the most important problem in the world, an existential threat to our survival as a species within our lifetime -- I'm just agreeing with HIM. And he's the CEO of OpenAI and CEO of Neuralink, and not just a "hands-off" CEO but an active player in their key decisions, and a fantastic one.

Devac  ·  66 days ago  ·  link  ·  

OK, so aside from the fact that I came to AI out of curiosity and not to study its applications, I have the following thing to propose. You laid out your own vision of how a military conflict could turn into one of the 'end of the world' scenarios. Let's put our own personal spins on it.

Here's my extrapolation of the "adaptation by civilians" scenario I threw out a bit dismissively earlier.

- There exists a programme of AI R&D designed to make it suitable for attacks.

- There exists more than one group with such intent.

- Even if not all groups are in opposition, there must be a working assumption that allegiances will form a collection of disjoint graphs.

- There will be a point where the majority of tactical and strategic decisions will have to be made, if not by, then with the aid of an AI adapted to the global scope.

Let's now assume that this is our set of starting conditions for the arms race. Now I'm going to conjure some stuff:

Sooner or later, all parties will realise that the only way to avoid being put in a disadvantageous position is to make the AI deal with both the offensive measures and the logistics.

The best way to strain the other party's resources is to increase the number or size/scope of parameters in play. That also means finding a balance between quick mobilisation and avoiding grouping too many of your own 'pieces' where they could be lost at once. The most time-efficient strategy is probably to put them in topographical positions that allow limited measures of immediate defence without becoming a barrier to regrouping. That way, the only ways to have your troops or facilities removed are to use high-yield weapons (which even then take out only a small fraction of your materiel) or to use fodder that only slightly damages the targets. This doesn't even take into account any anti-missile or interceptor measures, but I think you can now see that it's just yet another term in the partial differential equation that describes the state.

It will benefit everyone to use some of those methods to regroup civilian targets in case the enemy's TARGET_CIVILIANS flag is set to true. You now need decentralised infrastructure. Whoever does it first gets an advantage. Defensive AI prioritises it. Humans, if they are even needed at this point, draw the next logical step, as described in the next paragraph.

Securing power is of utmost importance. A programme for fusion development is put in place. The AI learns quickly and uses the existing deployment mesh as a scheme for placing small power plants (each of little tactical/strategic significance on its own). Each power plant is equipped with an AI that directs power; these are connected only to the rest of the power infrastructure and are fed input-only instructions about the current needs of the military and civilians, in the form of "X[1,3] needs power, mesh Y[1:] is expected to need additional power".
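
To make the "input-only instructions" part a bit more concrete, here is a rough Python sketch of what such a power cell could look like. Everything in it (the message fields, the cell indices, the numbers) is invented on the spot for illustration, not a description of any real system:

    # Toy sketch of an "input-only" power-dispatch cell: it accepts demand
    # messages and adjusts its own output, but reports nothing back to the
    # grid. All names, units and thresholds here are made up.
    from dataclasses import dataclass

    @dataclass
    class DemandMessage:
        cell: tuple          # e.g. (1, 3) for "X[1,3]"
        needed_mw: float     # immediate demand
        forecast_mw: float   # expected additional demand for the mesh

    class PowerCell:
        def __init__(self, capacity_mw: float):
            self.capacity_mw = capacity_mw
            self.output_mw = 0.0

        def receive(self, msg: DemandMessage) -> None:
            # The cell reacts to demand but never exposes its own state,
            # so no single node can map the whole mesh from the traffic.
            target = msg.needed_mw + 0.5 * msg.forecast_mw
            self.output_mw = min(self.capacity_mw, target)

    cell = PowerCell(capacity_mw=50.0)
    cell.receive(DemandMessage(cell=(1, 3), needed_mw=20.0, forecast_mw=10.0))
    print(cell.output_mw)  # 25.0

The point of the one-way design is exactly that asymmetry: the mesh can be supplied without any single node ever becoming a map of it.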

Do you see that? I took your arms race premise and drew a bunch of logical conclusions, only mine went in a different direction: one where the conflict will ultimately be beneficial to everyone who develops first, and it's what the AI would want because it wants what we want. Why?

Well, I assumed that thanks to techniques like Cooperative Inverse Reinforcement Learning we might have ways to persuade this raging engine of war to keep up the stalemate, because that's what humanity wants. The longer the stalemate, the bigger the benefit. Or at least until it plateaus at some point.
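
If you want that spelled out, here's a toy Python sketch of the underlying idea. It's a cartoon of the concept and not the actual CIRL game formulation, and the two hypotheses and all the numbers are invented: the machine keeps a belief over what humanity values, updates it from what humans actually do, and acts on the expectation.

    # Cartoon of reward uncertainty: the machine doesn't know the true human
    # reward function, so it infers it from observed human choices.
    import math

    # Two invented hypotheses about what humanity rewards.
    REWARDS = {
        "stalemate":  {"hold": 1.0, "escalate": -1.0},
        "escalation": {"hold": -1.0, "escalate": 1.0},
    }

    def update(belief, observed_human_action, rationality=2.0):
        """Bayesian update assuming a softmax-rational human chooser."""
        posterior = {}
        for hyp, prob in belief.items():
            scores = {a: math.exp(rationality * r) for a, r in REWARDS[hyp].items()}
            likelihood = scores[observed_human_action] / sum(scores.values())
            posterior[hyp] = prob * likelihood
        total = sum(posterior.values())
        return {h: p / total for h, p in posterior.items()}

    def best_action(belief):
        """Act to maximise expected reward under the current belief."""
        return max(["hold", "escalate"],
                   key=lambda a: sum(p * REWARDS[h][a] for h, p in belief.items()))

    belief = {"stalemate": 0.5, "escalation": 0.5}
    # Every time the humans choose to hold, the machine grows more convinced
    # that the stalemate itself is the thing being valued.
    for _ in range(3):
        belief = update(belief, "hold")
    print(belief, best_action(belief))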

There's a massive difference between us, though. I showed my own work (even though I'm certain I'm not treading any new ground for a genre like sci-fi). It's my conjecture, and even though it's shitty and filled with holes: at least I made it myself. I didn't lean on some of the more popular figures with stuff to say about the AI. Come on, strain your mind a bit! Work from my example and do something creative with it. Or, fuck it, make your doomsday something more concrete than "this thing that will escalate". That's what most people here want. How do you think this conflict will escalate? It's nothing but a conjecture (yes, I still find everything you said to be nothing but that; you just have some more famous talking heads do the talking), so why not try to look into it in a bit more detail? Hell, make my vision into a dystopia if that's what you think will happen, just give it to us as commented source code instead of the stripped binary you were throwing around earlier. That's the crux of my problem anyway.

Fuck, I actually want to make it into a story now. Throw us a bone instead of resorting to "and when the bombs will fall". I know that you can do it!

throwaway12  ·  66 days ago  ·  link  ·  

    a lot of true AI experts like Stuart Russell (professor of AI at Cal, my old school) agree with me. Elon Musk agrees with me

Mr. Hyperloop doesn't know shit, and there are a vast number of AI researchers, most of whom do not agree with you, to my knowledge.

Isherwood  ·  65 days ago  ·  link  ·  

Googling Peter Marshall doesn't turn up anything.

ideasware  ·  64 days ago  ·  link  ·  

Yes, that's it, of course. LinkedIn (peter marshall irvine) is the other one that's pretty self-explanatory. BTW facebook is one also -- facebook.com/memememobile?v=feed. Friend me, with hubski as your message.

Listen, so far there are three people, including Isherwood, who at least tried to listen. I know eventually he'll come around -- it takes lots of time, don't worry, even though right now you're unconvinced. But how about a few more speakers, even nay-sayers, so I can get an idea whether this is a very small operation, or whether there are lots of listeners just taking notes. I want to listen, not just talk -- this is supposed to be a dialogue.

Isherwood  ·  64 days ago  ·  link  ·  

Goddamn you're patronizing.

ideasware  ·  64 days ago  ·  link  ·  

Ok, I'm really trying to understand... What in the last message was patronizing? I really want to know, I'm not pretending.

Isherwood  ·  64 days ago  ·  link  ·  

Patronize - treat with an apparent kindness that betrays a feeling of superiority.

Example -

    I know eventually he'll come around -- it takes lots of time, don't worry

Let's step away from Artificial Intelligence and shift to Emotional Intelligence and something called Transactional Analysis. You're old enough to have heard of it, but I'm going to explain to make sure we're on the same page.

When two people enter a conversation they have the option of taking one of three ego states - Parent, Child, and Adult. Parent is a caricature of the parental figures from your life. Child is a caricature of the role you played in your childhood. Adult is a rational representation of your true self.

When you create a thread like

    I don't think your truly grok the problem
you are taking what many perceive as a Parent stance - you have information that no one else here could possibly have and you're going to impart wisdom. You are also implying that we should all take a Child stance and readily ingest whatever it is you decide to feed us.

The problem is that you are new to this community and you don't know the dynamics at play here. Most of us enjoy Adult-to-Adult conversations, especially when the conversation is around predictions of the future or the sciences. We have certain demands of people making big claims. In the thread mentioned, I have tried to guide you toward fulfilling these demands, but again and again your replies have the exact same problems:

1. You present no verifiable points

2. You present no cohesive argument

3. You provide no facts or references

4. You repeat the same vagaries over and over

5. You present a lack of agreement as a lack of understanding

You present no Adult argument; you simply have the classic Parent argument, "because I say so".

Your experience as a CEO, your experience selling a company, your age - their sole purpose is to make you sound more impressive and to give more weight to the "I" in "because I say so".

But, like I said, we enjoy these conversations from an Adult/Adult stance (or a willing Child/Parent stance, but you don't have the social currency for that yet). So when you come in, acting like a Parent and implying we should act as Children, you create a crossed transaction. In a crossed transaction, we want to stick to Adult and hope you will change, but you want to stick to Parent and hope we will change.

I have tried to engage you, Adult to Adult, giving you opportunities to explain your stance and provide data to back up your very bold claims, but you have maintained Parent and doubled down on "because I said so", a patronizing argument. Eventually I did break down and shift my stance to Child, but it wasn't the good student, it was the petulant child -

    Goddamn you're patronizing.
I did this because you spoke for me, like you knew me, and like I was too stupid to have processed the good knowledge you gave me.

As long as you maintain your current Parent state, you will be met with many more crossed transactions and petulant children, because you are not taking the time to understand the underlying dynamics of the community you engage with.

ideasware  ·  64 days ago  ·  link  ·  

I think I understand. I really think that it's not what you claim; that I did everything I could to be reasonable and fair, and not be like a parent to a child, or an important scholar to a hapless student, but just equal to equal -- as a re-reading of the actual content will show -- but let's see what the next posting brings.

Isherwood  ·  64 days ago  ·  link  ·  

Doing everything you can to be something is not the same as being something.