comment by kleinbl00
kleinbl00  ·  158 days ago  ·  post: OpenAI's Alignment Problem

Okay fuck all this, fuck OpenAI, fuck Effective Altruism, fuck this entire line of thinking. Ockham's razor on all this shit is THERE'S NOTHING THERE.

You know Uber's business model? Undercut taxis by breaking the law. Eventually the law caught up. Price difference between Uber and a taxi is now nominal; taxis are cheaper half the time. Uber? Their profits come from bringing you take-out.

You know WeWork's business model? Lose money on subleases but make it up on volume. Softbank was all in, though, 'cuz "vision." So Wall Street hallucinated a $47b valuation and got all upset when it turns out losing hundreds of millions of dollars a year is bad for profit. Now they're bankrupt.

You know Theranos' business model? Con investors into giving them money while they struggled to violate the fundamental laws of fluid mechanics in pursuit of epidemiology. Theranos is a special case in that they fucked up cancer tests and shit like that but ultimately? They lied about impossible shit and eventually the public caught them out.

You know Enron's business model? Lie about how much money you have so you can borrow money so you can bet money so you can lose more money. Enron is a special case because they caused rolling blackouts across California to the point where eventually even the left bank grifters looked into it but ultimately? They lied about impossible shit and eventually the public caught them out.

You know FTX's business model? Lie about how much money you have so you can borrow money so you can bet money so you can lose more money. FTX is a special case because fuckin' Sequoia et al. gave a frizzy-haired autist and his friends multiple billions of dollars based on their utter obtuseness about the one thing they should fucking understand (money) but ultimately? They lied about impossible shit and eventually the public caught them out (fucking Coindesk? Fuck off with that shit).

You know what drives me batshit? Everyone wringing their hands about how Sam Altman has Skynet on a leash and if we don't give him exactly what he wants we're all gonna be Borg'd into grey goo or some shit. Meanwhile, Skynet?

We've got a friend involved in an Amazon-affiliated startup. She confides to my wife yesterday "I'm afraid that if I tell them that no one will ever sign up for their services they'll just pivot into something even stupider." I said "make sure she waits until she's grifted as much money as she can possibly get; that's what everyone else is doing."

Microsoft has comped $13b worth of cloud time to OpenAI. That doesn't mean Microsoft has invested $13b in OpenAI. This is Hollywood accounting; when you pay yourself from someone else's cut you never seek a bargain. Friend of mine wrote the movie Deja Vu. It made $180m at the box office. He never saw a dime beyond the advance because according to Disney, they spent $780m promoting it. This is like how a Skinny Puppy album can cost $11m to produce: You bring the band members to your house, let them use your studio, encourage them to hang out, and bill them $100k a day against their future profits to ensure you never have to pay them.
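The Deja Vu arithmetic above, spelled out. The gross and the claimed promotion spend are the figures from the story; the 2% back-end share is a hypothetical placeholder, since "net points" only pay out of net profit as the studio defines it:

```python
# Figures from the story above; writer_points is a hypothetical stand-in.
box_office    = 180_000_000   # box-office gross
claimed_spend = 780_000_000   # what the studio says it spent promoting the film
writer_points = 0.02          # hypothetical 2% share of *net* profits

net_profit = box_office - claimed_spend          # the film "lost" $600m on paper
writer_backend = max(0, net_profit) * writer_points

print(net_profit)      # -600000000
print(writer_backend)  # 0 -- nothing beyond the advance, by construction
```

The trick is entirely in who gets to define `claimed_spend`: as long as the payer also sets the costs, net profit is whatever they need it to be.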

Everyone is talking about how OpenAI has somehow scuttled an $86b valuation when what they've done is jeopardize the idea that someone is going to give them a billion dollars for 1.1% of one of their many, convoluted, funds-incinerating subcompanies. Meanwhile Microsoft has pulled some kind of coup by hiring the CEO even though Bill Gates himself says GPT5 is a nothingburger.

You know how this all makes sense?

1) GPT is a dead fucking end

2) Sam Altman wants to spend a billion dollars on it anyway

3) The board can't kill one of their white calves

    That left three OpenAI employees on the board: Altman, Brockman, and Sutskever. And it left three independent directors: Toner, Quora co-founder Adam D’Angelo, and entrepreneur Tasha McCauley. Toner and McCauley have worked in the effective altruism movement, which seeks to maximize the leverage on philanthropic dollars to do the most good possible.

4) the monkeys throw shit at each other

hey, how much money have "effective altruists" actually given to anything? I mean, it's full of tithing billionaires and shit, right? But they've squirrelled away half a billion across fifteen years, putting them nowhere near anything effective. "For all humanity" is TESCREAL bullshit code meaning "for absolutely no one living or breathing." It's all a LARP. It's sociopathic twitchfucks doing their selfish best to screw over us NPCs so they can pretend they're building a bug-out bunker for the betterment of all mankind.

what if I told you... it was all a grift

    But to most people, it will never matter how much OpenAI was worth. What matters is what it built, and how it deployed it. What jobs it destroyed, and what jobs it created.

What has it built? What has it deployed? What jobs has it destroyed? What jobs has it created? There's this fundamental assumption underlying all this that Skynet is just around the corner but what we get, time and time again, is a hallucinating chatbot that can't even play Not Hotdog.

This whole Sam Altman conundrum is solved in one if you assume nobody wants to be the first person to give the game away.

That's all it is.

And every. single. company. listed. above. went through the same cycle.

uhsguy  ·  155 days ago  ·  link  ·  

This may be confirmation that LLMs right now are a dead end.

Kaius  ·  156 days ago  ·  link  ·  

Certainly a lot of grifting going on, particularly in terms of the valuations and projections. Maybe OpenAI was on track to build AGI, but it almost certainly is not going to get there now. I think it's more likely to become a product arm for Microsoft, helping make PPT better at formatting your shitty slides that no one reads.

But that's the company. The technology around LLMs will be useful, is already useful, even if it doesn't progress much further than where it is today. The reason for this is that there are lots of mediocre people working jobs and churning out very mediocre content. The LLMs can churn out mediocre shit at a higher rate at a fraction of the cost. So our mediocre slide deck needs are met. Ask me how I know...

kleinbl00  ·  155 days ago  ·  link  ·  

Here's where I'm at: it's all Markov chains. Markov chains started life in probability theory, moved on to stock markets, were then applied to sentence completion and finally integrated into image recognition and code. I think it's a triumph of ingenuity to take a hundred-year-old computational technique and spin it out across such a diverse constellation of uses, but I think it's a failure of imagination to assert that we'll somehow create God with it.
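The lineage above, next-word prediction as a Markov chain, fits in a dozen lines of Python. The corpus, the order-1 chain, and the `complete()` helper are all toy assumptions, a sketch of the principle rather than anything GPT actually does:

```python
import random
from collections import defaultdict

# Toy corpus standing in for training data; a real model uses billions of tokens.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Order-1 Markov chain: map each word to every word observed to follow it.
chain = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    chain[current].append(nxt)

def complete(word, length=5, seed=0):
    """'Sentence completion' as a random walk: sample an observed next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(complete("the"))
```

Scaling this up is, mechanically, the whole trick: a bigger corpus, a longer context than one word, and probabilities learned by a network instead of counted in a dict. The machinery still only ever emits what followed similar contexts before.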

You know what the perfect application of GPT is?

Speaking as an occasional video/seasoned audio editor, there are plenty of processes that can be automated, be it in Photoshop, Premiere, Pro Tools or whatever, to turn several questionable takes into one master document. They're tedious, time-consuming and, with a little training, I could teach an eight-year-old to do them. What mostly slows you down is the interface, because the companies that have traditionally done it - Adobe, Adobe, Adobe, Avid and Adobe - really lean into making things harder than they are. Once you've got the interface down you can do this shit in a hurry. Once you've built out a dozen go-to presets you look like a wizard. So why not give that to an AI to speed-run all the presets, LUT them and put it on a chip to make your selfies better? That's classic, perfect Google. That's where they made their trillions.

And the thing is? It's Pareto Principle optimization: really, what it means for the future experts of audio and video is that the tedious, trivial bullshit we all spend most of our time doing goes away. The machine will handle the easy, obvious stuff. The hard stuff? Neither you nor the gadget knows how to deal with it, so you'll hire me. I may make less money. I may get more time for triumphant work. You'll get "good enough" for your selfies and if you've got something you need to get paid for, you'll know that you can't take it any further because the gadget is better than you. The gadget is not better than me. Never will be. The gadget doesn't know how to improvise. It doesn't know how to synthesize. It doesn't know that the barking dog is actually desirable and it never will because if it opens up the parameters enough to actually fine-tune the process that much, it's going to require you to have enough casual understanding of the process to turn into an audio editor yourself. Which, frankly, is fine with me, too! That means I get to spend my time exercising expertise, rather than pushing buttons, and the bright and shining digital future is better for all of us! Thanks, Google!

I think there's a lot of utility in the GPT approach to problem solving. Where I leave the accelerationists and the neophytes behind is where I assert that it's a better shovel, not a golden calf. The way the technology works is by finding the missing number in an equation. That's all. It writes the equation, and it writes it based on a massive pile of other equations, but if its model is built around sines and cosines it will never, ever ever throw a log function in there no matter how obvious.
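The sines-and-cosines point is easy to demonstrate: fit a log curve with a model whose basis contains no log term, and it interpolates tolerably but extrapolates into nonsense. Everything here (the basis, the ranges) is an illustrative assumption, not anyone's actual model:

```python
import numpy as np

def design(x):
    # Fixed basis: a constant plus sin(kx), cos(kx) for k = 1..4.
    # Deliberately no log term -- the model can only emit what's in its basis.
    cols = [np.ones_like(x)]
    for k in range(1, 5):
        cols += [np.sin(k * x), np.cos(k * x)]
    return np.column_stack(cols)

x = np.linspace(1, 10, 200)
y = np.log(x)

# Least squares writes the "equation": the best mix of basis functions.
coef, *_ = np.linalg.lstsq(design(x), y, rcond=None)

# In-sample, the fit is a passable imitation of log(x)...
err_in = np.max(np.abs(design(x) @ coef - y))

# ...but past the training range the sinusoids keep oscillating
# while log keeps climbing, and the errors blow up.
x_out = np.linspace(20, 30, 200)
err_out = np.max(np.abs(design(x_out) @ coef - np.log(x_out)))

print(err_in, err_out)
```

However much data you pour in, no amount of refitting `coef` ever puts a log function into the output; the basis has to be widened by hand, from outside the model.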

AGI, since the dawn of the golem, has been about synthesis. Problem solving, at a basic level, is very different than plug'n'chug. GPT is a plug'n'chug engine - and it's useful in all sorts of crazy ways. But it will never be useful in the way people want it to be, which is to create digital friends.