comment by NikolaiFyodorov

Prose quality aside, I find it impressive how well the platform is able to synthesise the Nature papers listed at the bottom of each story to generate the plot ideas. The prose will improve over time. Tell it to write in the voice of David Lindsay or Olaf Stapledon and you've already lifted it part of the way.





kleinbl00  ·  8 days ago  ·  link  ·  

    The prose will improve over time.

HARD disagree.

Most people aren't exposed to unchampioned writing - even the shit on Amazon has someone believing in it. Any chump with a copy of the Foundation trilogy can churn out 80%-of-the-way-there garbage - what keeps most people from reading it is that (1) it takes a while to write and (2) it takes even longer to get it in front of you. If you've ever been in a writers' group, you know that the people around you spent months or years making things as bad as what you're trying to be polite about.

The stuff that people actually read has come through that forge refined. It's no longer 80% of the way there, it's 99-100% of the way there and it gets there through learning, experience and artistry. There's this idea that "hey, computers are 80% of the way there, they'll make the extra 20% in no time at all" that's been pervading these discussions since we were all forced to learn the words "trolley problem" and what has happened, universally, is that AI boosters have successfully lowered expectations.

    Tell it to write in the voice of David Lindsay or Olaf Stapledon and you've already lifted it part of the way.

If I walk halfway to the wall every minute, how long will it take me to touch the wall? Yeah, sure, engineering approximations and all that, but we aren't expecting AI to walk halfway to the wall. We're expecting it to walk twenty percent closer each time, which, practically speaking, means it's never going to get closer than 88.8% of the way there. There is nothing in the AI toolkit that crosses the asymptote. All that's happening is people are claiming that 88 is the new 100.
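One way to make the 88.8% figure concrete (my gloss, not anything the poster spelled out): treat the first jump as covering 80% of the distance, and each later improvement as a tenth of the one before it, so the gains shrink faster than the remaining gap ever closes:

```python
# Geometric-series reading of the "88.8%" claim. Assumption on my part:
# first jump covers 80% of the way, each subsequent improvement is a
# tenth of the previous one. The series 80 + 8 + 0.8 + 0.08 + ...
# converges to 80 / (1 - 0.1) = 88.88..., and never reaches 100.
def progress(first_jump=80.0, ratio=0.1, steps=60):
    total, jump = 0.0, first_jump
    for _ in range(steps):
        total += jump
        jump *= ratio
    return total

print(f"{progress():.1f}%")  # 88.9%, no matter how many steps you add
```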

usualgerman  ·  7 days ago  ·  link  ·  

I don’t necessarily see this as an obstacle. If you gave 80%-there copy to any decent developmental editor and line editor, you could probably get to the level of any bulk genre novel on the shelves at B&N within 6 months. The content of most novels is derivative even when written by human authors and professionally edited and published. There’s very little art in those novels, as most of them tend to follow formulae for creating characters and plots and settings that are common enough to have worksheets used in production. To be blunt, AI is going to absolutely decimate the writing industry, because most published works are derivative and formulaic, and that’s exactly what AI is good at.

Just to give an example, this (https://savethecat.com/beat-sheets) is the Save the Cat beat sheet page. Save the Cat is a plot structure used rather extensively by Hollywood and fairly common in novels. It’s also fairly specific about how a plot should be structured, down to the page number in the case of movie scripts (https://savethecat.com/beat-mapper). This isn’t people learning from experience; this is basically an algorithm for telling a story. And this is what is expected in the industry. I’m sure there are niches in high literary fiction that are less derivative and more artistic, but that is only a very small part of the book industry, and furthermore it’s not easy to do well.

One huge thing that AI detractors don’t like to admit is that AI doesn’t have to be perfect to be adopted for a purpose; in this case, it just has to create content that’s worth editing. That’s a pretty low threshold because of the economics: the AI is owned by the publisher, and other than the software license, it’s FREE. And if the AI can produce 5,000 novels that can be edited for publication, why bother with humans? If all that novels do is follow formulae, then there’s no point to the human.

kleinbl00  ·  7 days ago  ·  link  ·  

1) Blake Snyder was a casual friend of mine.

2) I was repped at William Morris for screenwriting.

3) I was repped at Darhansoff & Verrill for novels.

4) My letter of recommendation to grad school was written by Terry Rossio.

5) I was introduced to my agent at D&V by Douglas Preston.

6) My novel was edited by Richard Marek.

KCAL did a stand-up in 2006, 2007. Peak "Save the Cat!" time. They put a camera, a reporter and a microphone on the corner of Hollywood & Vine and asked random passers-by "how's your screenplay coming?"

Eighty percent of them had an answer other than "I don't have a screenplay." And yet none of them - not a one! - got made into a movie. I had a friend who would read eight screenplays a day. Did so for a couple years. Not a single one ever got greenlit. Is your argument that ChatGPT can do a better job than a human? Because I said they were 80% of the way there, and your argument is that an editor could get them on the mass-market shelves. So... all those writers out there banging away, unable to get published... are they worse than ChatGPT?

It took M. Night Shyamalan 9 rewrites before he figured out Bruce Willis was dead. Is ChatGPT going to figure it out in 10? 20? 30? Ever?

It took Anne Lamott 20 drafts before her daddy's agent would shop around her first book. She ended up at a vanity press anyway. Are you suggesting that ChatGPT writes better than Anne Lamott? I might happen to agree; I find her tedious. Jeff Bezos tho

So rather than go chapter and verse on the how and why of you being wrong, I will merely suggest that you're out over your skis, and if you disagree I suggest that you give this

    If you gave 80%-there copy to any decent developmental editor and line editor, you could probably get to the level of any bulk genre novel on the shelves at B&N within 6 months

a try.

usualgerman  ·  5 days ago  ·  link  ·  

I mean, if you had the thing iterate 1,000 scripts following the story beats that Hollywood likes, the chances that one of them would be interesting enough to edit are probably pretty good. And an AI could probably do 1,000 in a day. It wouldn’t be AI spending 8 months writing a single draft of something; it’s AI making 1,000 of them a day, every day. So if AI for some reason were to generate 10,000 scripts based on a prompt like “generate a Star Trek movie in which Kirk defeats a god,” the chances that one of those 10,000 would be interesting enough to edit into a shooting script are probably decent. Make 100,000 and I’d say there’s probably at least one that’s better than Star Trek V.

This is what’s missing. The sheer scale of how much the computer can do and how fast. Yes, it took a human author 20 drafts to make something worthy of a vanity press. But an AI author bot could churn through 20 drafts in minutes, where it would take a human years to do the same. And again all of these drafts are free once you buy the AI license, where a human author will want to be paid for every successful piece they produce.

It’s what a lot of people miss about AI in general. It doesn’t have to be as good as a human doing the task, it just has to be good enough that it’s no longer worth paying the premium to have the human doing that work. Depending on the field there might be reasons you want a human involved, either legal (AI is already pretty good at reading MRIs, but you may want the legal protection of having a trained human verify the read) or for luxury or premium products (there are markets for original signed art, handmade goods, etc.; most people don’t care enough to pay the premium for the real thing, so they buy factory-produced versions: prints instead of original art, factory-made pasta instead of handmade pasta).

I think the eventual shakeout will be that there will be a premium book and movie market for human-written material, while most bulk books will be written by AI: cheap, mostly disposable, forgettable stuff that people buy to read on the bus or train or on breaks at work. There will still be luxury books written by exceptional people, but those will be the kind of books you pay a lot of money to own, probably collectible to some degree. Movies made by people will be more like how we see indie movies today, something for highly educated cinema lovers who appreciate fine art. The general public wants Avengers and Star Wars and chick flicks, and doesn’t care if the movie was written by ChatGPT or similar bots.

This isn’t going to happen by next Tuesday, but ask anyone who knows technology how many times people thought a computer would never do something, only to eat those words within ten years.

kleinbl00  ·  5 days ago  ·  link  ·  

    It doesn’t have to be as good as a human doing the task, it just has to be good enough that it’s no longer worth paying the premium to have the human doing that work.

...I'm sorry, do you think that anyone is being paid for those drafts? That they have some sort of cost? Naaah. They're free. Cheaper than AI, they cost nothing. Wanna option one? That's a dollar. And that shit still doesn't get made. What "premium" are we talking about exactly?

Okay, fine. We're gonna pay the screenwriter scale, which I think is $180k or so. That's $180k more expensive than ChatGPT. We're going to pay Will Smith $8m, though, so the screenwriter doesn't fucking matter. Oh, I'm sorry, did you think we were gonna make a movie with AI Will Smith? Yeah, the Actors Guild won that one, sorry. Frankly, it's worth paying the screenwriter $180k just so we can blame someone if it tanks. Where are our cost savings?

That's what you're missing: the top of the top of the top of the cream of the crop are the guys who get published. Are the guys who get optioned. I was optioned, I was not published. I know one guy - ONE GUY - who had his first script made into a movie. Meanwhile, the whole of your argument - the whole of your thinking - is the infinite monkey theorem without recognizing that we don't want Shakespeare, we want a new twist on Shakespeare and ten thousand humans typing in their basements on evenings and weekends can't do what one Richard Linklater can do. We want the spark and the spark has been sucked right out of AI.

Lemme introduce you to Georges Polti, who argued in 1895 that there are only 36 different stories. "Save the Cat!" is basically The Hero with a Thousand Faces for dummies, which pointed out that from the Aborigines to the Zoroastrians, everyone shares certain paradigms and motifs in drama. Reductionist thinking goes back to Beowulf; that doesn't mean fiction can be simplified to the point of mechanization.

You pull this fantasy world of "luxury books" completely out of your ass as if there were any basis for any of it. There isn't. I have read more terrible writing in any given year than you have in your entire life and I'm here to say - terrible human writing often has glimmers of originality. It's what makes us keep reading. LLMs are the stochastic middle of whatever they're trained on and it is PHYSICALLY IMPOSSIBLE for them to be original.

So tell me. What non "luxury book" would you read?

    Make 100,000 and I’d say there’s probably at least one that’s better than Star Trek V.

Your mission: read 100,000 scripts to find at least one that's better than Star Trek V. They take about 45 minutes each. At 40 hours a week of reading, you're looking at 37 years to find "Star Trek 5 plus."
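The arithmetic holds up on the back of an envelope (the 50-week reading year is my assumption; the other numbers are the poster's own):

```python
# Back-of-envelope check of the reading workload described above.
scripts = 100_000
hours_each = 45 / 60                 # 45 minutes per script
total_hours = scripts * hours_each   # 75,000 hours of reading
weeks = total_hours / 40             # 1,875 forty-hour weeks
years = weeks / 50                   # assuming 50 reading weeks per year
print(round(years, 1))  # 37.5
```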

It doesn't have to be vaguely better sometimes. It has to be markedly better, every time. And there is absolutely zero evidence or even reason to suggest that anything about it will ever be more than the mushy middle of mediocrity, by design.

And the fact that you think an LLM will write better than a human some day says a lot about you.

veen  ·  4 days ago  ·  link  ·  

Not to disagree with you, but I do think that there might be a way where it could work, which is to have a human with a vision/original idea, who uses an LLM to write a book. There are people who are unable or uninterested in mastering the process of writing, but who have a book idea in them that others might want to read. Whether it will be a book to top the charts is up for debate, but if the "author" has a good enough taste for good writing, that might produce something original and good, even though the process by which the text itself is created produces mediocrity. Vibewriting instead of vibecoding, so to speak.

kleinbl00  ·  4 days ago  ·  link  ·  

    Not to disagree with you, but I do think that there might be a way where it could work, which is to have a human with a vision/original idea, who uses an LLM to write a book.

OH FUCK ME

"Hey so I have this great idea."

"ORLY. Tell me about it."

"So there's a squirrel named Rocky and a moose named Bullwinkle. They have adventures! Including with Russian agents named Boris and Natasha."

"Okay, sounds fun. What are you going to do with it?"

"Do with it? You're the writer. Why don't you write it?"

- Every conversation every screenwriter has every time she goes home

_____________________________________________________

The execution IS the writing. The taking of the idea and turning it into entertainment IS the craft. Full stop. My first option was literally a director going "so what if people don't actually die, there's just these space monsters living on the satellites that eat our tumors" and I went ".... what if we went metaphorical with it" and that made me $5000. Well, that and sitting down for two months to bang out several drafts. That got me optioned. That got me represented. It wasn't the "here's a stupid idea" part it was the "here's a stupid idea made into entertainment" part and your argument, right here, to me, is that maybe a mediocre middle machine can do a better job than I can?

You know what's funny? I know a guy who "uses" AI in his writing. he mostly used to churn out 5,000 words of KindleSmut a day; he was making something like $30k a month tricking weirdos into paying 99 cents for bigfoot porn and dumb shit like that back before AI took it over. You'd think he could do the hell out of it now but nah. No market for AI kindlesmut, and he'd be the guy. No, what he uses it for is saying "AI, give me jokes" and he'll prompt it with things like "give me ten things Andrew Tate would say if he wanted to offend women but not so much that they're actually offended" and then he'll use one of them as a punch line for something he's writing. Notably?

It's the first fucking thing he hasn't finished.

Y'all persistin' in the conceit that while you don't understand the first fucking thing about how it works, clearly AI can do it better than a human... especially when it comes to the arts.

veen  ·  3 days ago  ·  link  ·  

I'll concede I don't know much, but hey I am technically a thrice-published article writer, so I'm not entirely unfamiliar.

    The execution IS the writing. The taking of the idea and turning it into entertainment IS the craft.

That's not entirely the argument I'm trying to disprove. Maybe I should have better defined what "it" in "it could work" is. An attempt to break "it" down into its constituent parts:

1) It is possible as an individual to write 5,000 words of mediocrity not just with your own elbow grease but now with "Siri, write me a story" as well.

2) It is possible to improve written stories in a way you want as often as you want. "Siri, take that writing advice I heard Brandon Sanderson talk about and apply it to this paragraph." Aka the aforementioned vibewriting.

3) With LLM memory becoming larger and larger it is becoming increasingly more viable to manage consistency across larger pieces of text. "Siri, foreshadow this event in the previous 200 pages in 5 different places."

4) By chopping up tasks to multiple layers of agents, it is becoming increasingly more viable to cover all bases of writing by delegating tasks to specific agentic tools, from the abstract ("Siri, to what degree does this text convey this idea I have?") to the specific ("Siri, go through every sentence of the entire manuscript and ensure apostrophes are set correctly.")

5) The above can lead to a text which is good enough that it can be hard to distinguish by readers as being AI written. Hell, it could even be enjoyable to read!

Now - is that art? Is the next Star Trek in there? Not without serious human intervention, I'd say. Is it an offense to the art of writing, one that most if not every writer should abhor? For sure. Does that mean it's incapable of producing fiction that people will want to read or buy? I'd wager no. But it depends on the process.
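The delegation idea in point 4 can be sketched generically. Everything here is hypothetical: `call_llm` is a stand-in for whatever completion API you'd actually wire up, and the agent instructions just echo the examples from the list above.

```python
# Hypothetical sketch of point 4: fan a manuscript out to specialised
# "agents", each with its own instruction, then collect their notes.
# call_llm is a placeholder, not a real API.
def call_llm(instruction: str, text: str) -> str:
    # A real implementation would call a completion endpoint here.
    return f"[{instruction!r} applied to {len(text)} chars]"

AGENTS = {
    "theme":    "To what degree does this text convey the core idea?",
    "style":    "Apply that writing advice to the weak paragraphs.",
    "copyedit": "Ensure every apostrophe in the manuscript is correct.",
}

def review(manuscript: str) -> dict[str, str]:
    # One independent pass per agent; a real pipeline might chain
    # the passes or have the agents vote instead.
    return {name: call_llm(task, manuscript) for name, task in AGENTS.items()}

notes = review("It was a dark and stormy night...")
print(sorted(notes))  # ['copyedit', 'style', 'theme']
```

Whether fanning out like this covers "all bases of writing" is exactly what the rest of the thread disputes; the sketch only shows the plumbing, not the quality of the notes.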

I mean, we've talked before about how LLMs are a specific tool which in the hands of creatives will lead to new and better art, even if using said tools feels heinous at first. Why would that be different for fiction writing? No, I don't think the monkeys will produce a good book. But I do think that you can create 80% of the scaffolding of a book in an afternoon and work from there. I believe you can forgo writing groups and sharpen your thinking by loading 40 of the best books on writing into 40 LLM agents and have them have a go at your manuscript. I'm noticing that the barrier for me to write longer form has been significantly lowered, because I know I can use these tools to create a version of my writing that is much better than I'd be able to pull off on my own. Hell, I am cautiously optimistic that there will be writers who find new uses for these tools that enhance their writing in ways we've yet to discover.

You're not wrong to hold to the idea that an LLM, by design and by definition, churns out something which regresses to the mean. But I do think it matters a great deal what we compare that mean to, and I do think it matters in whose hands the tool is, and whether what it churns out ends up, in the final product, as anything good. We can now make the sausage differently, at a higher abstraction level, and that has upsides and clear downsides. But it's hard to argue against LLMs being able to produce writing that is useful to some people under some circumstances, no? StackOverflow is dead, because why would anyone bother with it when LLMs can produce what I'm looking for? AO3 is not dead... yet, but I'm not sure it will thrive in the next decade.

kleinbl00  ·  3 days ago  ·  link  ·  

    1) It is possible as an individual to write 5,000 words of mediocrity not just with your own elbow grease but now with "Siri, write me a story" as well.

The 5,000 words of elbow grease will represent an individually tuned run through the "training data" that human has consumed. The direction it takes will be refined by that human's own reward system. Each human's training data and reward system will be different. If you ask that human to give you 5,000 words of the same story four times over, the human will naturally refine the story whether they like it or not - the human will learn the story.

"Siri, write me a story" will be the stochastic middle of a uniform training set, trimmed and edited to keep the LLM's owner out of trouble. It will be the same training data and reward system regardless of who asks for the story and the direction it will take will be defined by the committee that chose the general shape of its outputs. If you ask that LLM to give you 5,000 words of the same story four times over, it will make four generic runs through the training data, each iteration completely unrelated to the iteration before.

    2) It is possible to improve written stories in a way you want as often as you want. "Siri, take that writing advice I heard Brandon Sanderson talk about and apply it to this paragraph." Aka the aforementioned vibewriting.

It is not possible, however, to maintain the corpus of the story. "Siri, take that writing advice I heard Brandon Sanderson talk about and apply it to this paragraph without fucking up every other paragraph" requires a context awareness that LLMs CANNOT have. It can know that humans have five fingers if it has a model for "human" and a model for "fingers" and a model for "five" and ridiculous amounts of effort have been expended to ensure that the models produce fewer six-fingered men. LLMs have a hard time maintaining consistency from frame to frame; there is no latent context awareness to LLMs and there cannot be. There can only be scaffolds to refine the runs through the model such that they stay out of the Uncanny Valley.

    3) With LLM memory becoming larger and larger it is becoming increasingly more viable to manage consistency across larger pieces of text. "Siri, foreshadow this event in the previous 200 pages in 5 different places."

See, with each step you stray further and further from "things LLMs do" toward "things writers do." "This is foreshadowing" is symbolic thinking, and LLMs don't do that. They classify. You can do a lot with a classifier! But the thing you can't do is create. You can ape. You can imitate. You can reproduce. If you show an LLM a duck and a snake, you can ask it to draw you a ducksnake. If you ask an author to give you a story about a ducksnake, they will immediately think about what the symbols "duck" and "snake" imply. If you ask an LLM:

    Once upon a time, in a distant swamp hidden deep within a lush forest, there was a creature unlike any other. It was known as the Ducksnake, a strange fusion of two animals that should never have crossed paths. The Ducksnake had the body of a long, slithery snake, covered in shimmering emerald scales, but with the head of a curious duck, complete with a bright orange beak and soft feathers that glowed like sunlight on water.

    This creature, named Quackers, was a solitary being who roamed the swamp, always on the lookout for adventure. Quackers had a unique gift: it could swim gracefully through the water like a snake, yet waddle across the land like a duck. The other animals in the swamp were both perplexed and amazed by Quackers. The frogs croaked in confusion, the turtles stared in wonder, and the birds above laughed in a melody that sounded like they were trying to mimic Quackers' peculiar waddle.

    Quackers was a gentle soul who loved to explore, but it also had a problem: it didn’t quite fit in anywhere. The snakes didn’t understand it, thinking it was too fluffy, and the ducks thought it was too slippery. One day, feeling particularly out of place, Quackers set off to find a place where it truly belonged.

    As it ventured deeper into the heart of the swamp, Quackers came across a group of animals trapped in a rising tide. The beavers were trying to shore up a dam, but the water was flooding faster than they could build. The ducks were flapping in a frenzy, and the snakes slithered frantically, unable to offer much help.

    Without thinking, Quackers jumped into action. It slithered through the water with ease, using its long, snake-like body to create a path for the beavers to work. Then, with its duck head, Quackers used its beak to peck at nearby branches and bring them to the beavers, offering its strange yet helpful assistance. Quackers waddled along the shore, urging the ducks to get in line, using its unique body to coordinate efforts between the two species.

    By the time the tide began to recede, the animals were able to finish their work and prevent the flood. As the swamp settled back into calm, everyone gathered to thank Quackers. The frogs croaked in approval, the ducks quacked in delight, and the turtles clapped their little paws in appreciation.

    From that day on, Quackers was no longer an outcast. The creatures of the swamp learned that being different didn’t mean you didn’t belong. In fact, it was Quackers' very uniqueness that had saved them all. And so, Quackers, the Ducksnake, found a new sense of purpose, becoming the swamp’s most beloved and revered hero—a creature that proved you don’t have to fit into one mold to make a difference.

    And every now and then, you could still hear the soft quack and hiss echoing across the swamp, as Quackers continued to roam, always ready for the next adventure.

There's no there there. It knows what a duck is, it knows what a snake is, and it gives you a tale of forest creatures and cooperation because anybody asking for a story about an (x) is likely to be doing it for their kids.

    4) By chopping up tasks to multiple layers of agents, it is becoming increasingly more viable to cover all bases of writing by delegating tasks to specific agentic tools, from the abstract ("Siri, to what degree does this text convey this idea I have?") to the specific ("Siri, go through every sentence of the entire manuscript and ensure apostrophes are set correctly.")

(the hand-waving continues) Sure. Which is why projects-by-committee always kick the shit out of individual effort, right? You're at the infinite monkeys again, coming at it from the other direction - "if we break it up into smaller and smaller parts, surely a machine can do it."

    5) The above can lead to a text which is good enough that it can be hard to distinguish by readers as being AI written.

Presumes facts not in evidence. Look at it another way - you can write so badly that people think it's AI. Anyone can. It's become an epithet. But we've been saying "hey it could get there some day!" since Sunspring.

WATCH THE ABOVE. It's from an era where you had to build your own LLM. And they did. And it output hot nonsense. It's ridiculous. And it's charming.

Now watch the below.

You can't even make it through, can ya? Sure... it looks like a movie. AI did all of it! Except - and here's the funny thing - the script. Yeah, that's purely human dreck up there. Even the guys spending months tuning bullshit AI movies don't fucking trust them to come up with the story. They recognize that, dreck as their shit is, the AI dreck is so much worse that it's not even worth using for clout points.

    Hell, it could even be enjoyable to read!

And that, right there, is you going from "arguing" to "hoping."

    I mean, we've talked before about how LLMs are a specific tool which in the hands of creatives will lead to new and better art, even if using said tools feels heinous at first. Why would that be different for fiction writing?

BECAUSE THERE ARE NO TOOLS TO FICTION WRITING. Speak it out to your daughter while you're going blind, write it in your Moleskine, carve it on the cave wall, dictate it to Siri, type it out into Word... there are tools to get it from your head to the paper but storytelling and singing are the only arts we practice with no tools at all. Sure - AutoTune will get you closer to hitting the right key and autocorrect will make sure your grammar and spelling are perfect. But there's nothing else you can do where you can stand there naked in the dark and create art indistinguishable from what you'd make with every tool and every dollar available.

    But I do think that you can create 80% of the scaffolding of a book in an afternoon and work from there.

Who fucking cares? Nobody reads 80% books. Go look up "AI short film." They all look like cutscenes to a game no one wants to play. It's the last 20% that gets you there. In everything. And AI has consistently not even begun to cross that 20% in all the years we've been yammering about LLMs.

    I believe you can forgo writing groups and sharpen your thinking by loading 40 of the best books on writing into 40 LLM agents and have them have a go at your manuscript. I'm noticing that the barrier for me to write longer form has been significantly lowered, because I know I can use these tools to create a version of my writing that is much better than I'd be able to pull off on my own.

Right - you've made this argument about code before. You get more coding done because you aren't a coder. The argument always boils down to "I don't know what I'm doing, and AI helps me fool myself into thinking I don't have to."

    But I do think it matters a great deal what we compare that mean to, and I do think it matters in whose hands the tool is, and whether what it churns out ends up, in the final product, as anything good.

Okay, how's this for an expert opinion: I have been paid multiple thousands of dollars for my writing, and I have yet to see AI offer me anything other than crude translation.

There's this idea that if AI can help the 79% skillful make it to 81% competence, the 99th percentile shall be out of business. Now - I'm out of business for writing because it doesn't pay well enough for me to bother. LLMs sure as shit aren't going to fix that. Chuck Wendig gets about $5k/manuscript. Your AI gonna write better than Chuck Wendig? And let me be clear - Chuck Wendig is a mediocre writer. We're setting a low bar here. The prior argument is that if we let the LLM run a hundred thousand times, it might, maybe, somewhere in there, crank out something as good as Chuck Wendig, because infinite monkey theorem.

Why not just fucking pay Chuck Wendig?

    But it's hard to argue against LLMs being able to produce writing that is useful to some people under some circumstances, no?

No, it is the easiest fucking thing in the world. Especially when where we started was "this is the death of writing" and you've now backed into "useful to some people under some circumstances."

    Dr. Makena Okafor manipulated the holographic projection of Jupiter with her prosthetic left hand, the embedded sensors in her artificial fingers interfacing seamlessly with the lab's quantum imaging system. The gas giant's northern aurora bloomed before her eyes—a spectral dance of charged particles rendered in false color, swirling in patterns that had consumed her waking thoughts for the past three years.

AYFKM

    Renowned curator Jacques Saunière staggered through the vaulted archway of the museum's Grand Gallery. He lunged for the nearest painting he could see, a Caravaggio. Grabbing the gilded frame, the seventy-six-year-old man heaved the masterpiece toward himself until it tore from the wall and Saunière collapsed backward in a heap beneath the canvas.

That's the first paragraph of The Da Vinci Code, widely regarded as some of the most execrable writing of the past 100 years.

    "It was a dark and stormy night; the rain fell in torrents--except at occasional intervals, when it was checked by a violent gust of wind which swept up the streets (for it is in London that our scene lies), rattling along the housetops, and fiercely agitating the scanty flame of the lamps that struggled against the darkness."

And that's the first paragraph of Bulwer-Lytton's Paul Clifford, widely regarded as the worst writing until Dan Brown.

BY INSPECTION the AI is worse than both. Both Dan and Bulwer were going for it and you can read it in every pustule of purple prose. But at least they don't sit there like limp pieces of shit.

yet for some reason errbody wants to pretend that limp pieces of shit aren't limp pieces of shit.

we have taken so. many. steps. backward.

kleinbl00  ·  3 days ago  ·  link  ·  

AND ANOTHER THING

That's the sample and hold arpeggiator on a Roland Jupiter 4. The synth was introduced in 1977, three years after the first S&H arpeggiator on the ARP 2500. Or more specifically, it's a modern synth geek noodling about on a Roland Jupiter 4. "algorithmic" computer music composition has been around for over 50 years and yet -

haha you thought I was gonna say "and yet nobody uses it" didn't you? Naaah what I was gonna say "and yet everyone recognizes that it's total shit if it's not in the hands of an artist." Li'l still from that dreck "AI" sci fi film above:

Yeah, that's Photek

Here's another still:

"Jeff Synthesized recently worked on the AI for Season 1 & 2 of the Amazon Prime TV series ‘House of David’."

Yeah I know these guys. The visual FX dudes who really wanna be directors? If only they could figure out how to get rid of all the actors and set decorators and shit like that. I got an award for one in - wait for it - 2011

That dude does motion graphics for Kimmel and has done for 20 years. Hey wait a minute what the hell is Ronnie Cox doing in there? Easy. He's the dude's uncle.

We've had AI writing capability longer than any other art, and still nobody uses it. Algorithmic music has been a reality since 1971, but ELIZA debuted in 1966. What was the first real application of LLMs? CHAT. And the people dick-deep in AI STILL WRITE THEIR OWN SHIT.

"Cockpit" is a shit adaptation of Saberhagen. Except it's not, Jessie isn't that literate, it's a shit adaptation of Screamers.

We will never, ever ever get to the core conceit, which is the imitation of humans. Here's the antagonist of "Screamers":

In order to grasp what Screamers or Cockpit or "WHAT CAN I DO TO PROVE IM HUMAN STOP" is about, you need to understand the symbolic concept of imitation.

And there is no part of an LLM with any symbolic understanding whatsoever. It's like expecting sample & hold on a harpsichord.

That's a generative piece created by Kyma. Not created in Kyma, created by Kyma. You've been able to do that in Kyma since 1988 and in Reaktor since 1995. It's probably not to your taste! But the worlds of Ambient and IDM have been fuckin' around with generative music for about three generations now.

And this is what Deep.AI thinks is "music inspired by rain and tape noise."
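(And for anyone who's never patched one: sample & hold is about the dumbest generative process imaginable. Clock a noise source, grab its value, hold it until the next tick. A toy sketch in Python - the 0-to-1 "voltage" range and the step counts are just illustrative:)

```python
import random

def sample_and_hold(steps, ticks_per_step=4, seed=None):
    """Classic S&H: on each clock pulse, sample the noise source once
    and hold that value for the duration of the step."""
    rng = random.Random(seed)
    held = []
    for _ in range(steps):
        voltage = rng.uniform(0.0, 1.0)  # sample the noise source...
        held.extend([voltage] * ticks_per_step)  # ...and hold it
    return held
```

Quantize that to a scale, feed it to an oscillator's pitch input, and you've got the Jupiter 4 patch, give or take. The process is trivial; whether it's music depends entirely on the artist driving it.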

veen  ·  1 day ago  ·  link  ·  

    There's this idea that if AI can help the 79% skillful make it to 81% competence, the 99th percentile shall be out of business. Now - I'm out of business for writing because it doesn't pay well enough for me to bother. LLMs sure as shit aren't going to fix that.

A quick preamble: maybe I shouldn't have started this conversation as a response to usualgerman's argument that writing is doomed, because that conclusion is not one I intended to support.

    It's the last 20% that gets you there. in everything. And AI has consistently not even begun to cross that 20% in all the years we've been yammering about LLMs.

I think this is where I went wrong: I don't know the shape of the "there" well enough when it comes to writing, so I made the cardinal sin of extrapolating. My assumption was that since LLMs have jumped from Markov chained nonsense to 90+% in some forms of writing (StackOverflow, low-quality reporting, some technical writing) in record time, fiction writing would not be that much harder.
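(A footnote on what "Markov chained nonsense" actually was: nothing more than next-word lookup over observed text. A minimal sketch, with the corpus and the chain order as illustrative assumptions:)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each n-gram of words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=20, seed=None):
    """Walk the chain, picking a random observed successor each step."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(list(chain)))
    for _ in range(length):
        successors = chain.get(tuple(out[-order:]))
        if not successors:  # dead end: the corpus never continued here
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

There is no model of meaning anywhere in that table, only observed adjacency; the jump to LLMs is a jump in scale and context length, not in kind.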

But that's like saying lane-guided driving on a sunny day on the highway is only two steps removed from fully driverless autonomous driving, an assumption I hate since it's in a domain where I do have some idea of the shape of the last percentages. People have been telling me Tesla's autopilot has been improving rapidly for forever, particularly the past year. I (/we?) have for years been shouting back that the last X% is the hardest with self-driving cars, and that it too is not a given, just like it's not a given that a large enough quantity of monkeys will produce Shakespeare.

The example I have given to multiple people over the years is "yea I'll see it drive through a bicycle-busy Amsterdam street before I'm impressed". This came across my feeds the other day:

I instantly recognized these streets; the one at the 90-second mark is one I walked on just last week. I know exactly how attentive you need to be to drive a large car through there. So I'll admit I had to do a bit of soul-searching. There are goalposts I can move (it's a sunny day again, it's driving like a snail, there are just as many recent videos of FSD mode going kamikaze as there are of it doing something impressive, etc etc)... but it did make me re-evaluate: how good is good enough, exactly? How precisely can I define what qualities we should be willing to give up, if it gives us something else in return? We can be a millimeter from the asymptote, definitionally unable to pass it, and that might be good enough.

I watched a video the other day (on YouTube) that discussed the drop in quality, specifically in conventions, that YouTube represents when compared to traditional media. The lack of professionals directly results in jumpcuts, in people holding the mic, in people speaking like they're reading text aloud in 5th grade. The dilettantes never used the conventions that traditional media had forged over decades because they don't know they exist. They're in the Dunning-Kruger zone, like many more will be now because of LLMs.

And yet - we are now used to jumpcuts, we are okay with people holding the mic in frame. We have for sure lowered our standards; it is, by inspection, obviously worse. What YouTube has going for it is that it appeals in a different/novel way. Your examples…do not. (Other than being adorably wrong like Sunspring.) I'd argue that YT appeals mostly through serving niche interests to a degree traditional media will never be able to. I will gladly give up qualities like audio mixing, lighting, and image quality if that means I can watch something in my niches that I would otherwise never be able to see.

What I was uncertain about, which is why I called it a cautious "some people under some circumstances", is whether LLM-produced writing can offer the same "niche at scale" benefit that YouTube added to casual couch TV watching. The writing itself can be really bad, can have standards as low as the earth's core, but if it scratches an itch there will be some people who will not mind suffering through that.

My expectation with fiction writing was that we’d see the same thing happening that I’m noticing with coding: on the one hand, dilettantes trying to get “there” and failing, getting somewhere that they might find impressive but the rest of the world does not; on the other hand, the pros fast-tracking their process in some way with LLMs, automating the first draft (which would be shit anyway) and rewriting from there. They’ll still be writing, perhaps a bit faster than before. The former will be happy enough with how far they’ve come, content with their work and/or not knowing better, and they’d rather have their lower-quality something than not have that thing at all, just like I’m content with my vibecoded webapp because it does 12 transit queries for me at once. The code sucks & the process is janky as fuck but goddammit it works; it gets the job done, even if standards couldn’t be lower.

But fiction writing doesn’t exist just to spout duck snakes at you; people get something out of it (symbolism, meaning, human nature, …) that has to be more than a classifier can handle. That’s your point, right? Which I missed because I am not aware of what the last 20% is made of.

Or am I still missing something here?

kleinbl00  ·  1 day ago  ·  link  ·  

    I instantly recognized these streets; the one at the 90-second mark is one I walked on just last week. I know exactly how attentive you need to be to drive a large car through there. So I'll admit I had to do a bit of soul-searching.

You're flabbergasted that a Tesla successfully made A run.

    Or am I still missing something here?

The Tesla needs to make the run every time, without fail, without concern, without drama because humans make the run every time, without fail, without concern, without drama. It's that lowering-of-standards that we're talking about directly.

Google has self-driving cars out there. In limited circumstances, under total ownership of Google, where they're providing a service using devices they have exquisite supervision over. Tesla is YOLOing into self-driving the way they YOLO into everything. You've got a video with no crashes. Here's a video based on 200 crashes.

As to Youtube, am_unition once argued with me that Youtubers hold their lavs because that's the fashion, not because they're fucking morons (never mind that I spent ten years working with cream-of-the-crop fucking morons; if I've got emails directly from Anthony Padilla to me exemplifying gawping stupidity, that should factor into our comparative expertise levels). Fast forward three years and you can buy a GoPro lav and all of a sudden every fucking Youtuber has a tictac case clipped to their shirt collar. Your argument is that YouTubers being fucking morons somehow has a "different appeal" than Youtubers not being fucking morons because you get 'niche content', when the fact of the matter is they'd do shit exactly the same way the studios do if they could only afford it.

"Here's a Tesla not murdering someone, therefore Tesla has perfected self-driving"

equals

"here's an interesting video with bullshit production, therefore production value is worthless."

You're effectively arguing that conditional, partial success is somehow as good as reliable, total success because an AI touched it. Which is the exact moving-of-the-goalposts I've been hammering on for a week.

    But fiction writing doesn’t exist just to spout duck snakes at you; people get something out of it (symbolism, meaning, human nature, …) that has to be more than a classifier can handle. That’s your point, right? Which I missed because I am not aware of what the last 20% is made of.

Naaah dawg we're talking pure quality.

- The QUALITY of Tesla's self-driving is such that you're amazed it isn't killing someone, rather than bored and ho-hum at a video of a car navigating among pedestrians

- The QUALITY of Youtube videos is such that you think you LIKE shit production value, rather than recognizing your choices are "shit production value" or "blank screen"

- The QUALITY of AI writing is such that you think grammatically-correct word order is all that's needed, rather than an actual engaging fucking story

    Go up on the wall of Uruk and walk around,

    examine its foundation, inspect its brickwork thoroughly.

    Is not (even the core of) the brick structure made of kiln-fired brick,

    and did not the Seven Sages themselves lay out its plans?

    One league city, one league palm gardens, one league lowlands, the open area(?) of the Ishtar Temple,

    three leagues and the open area(?) of Uruk it (the wall) encloses.

I'm a storyteller and when I'm hanging out with buddies I often tell stories. But I learned in LA that there was one story I never shared with native Angelenos - I never shared the story of getting lost in the woods on a hike and having to orienteer my way out. I never talked about giving myself rhabdomyolysis. I never talked about legs swollen an inch around the elastics of my socks the next day, of hobbling into REI, where they failed to sell me a GPS because they were too stupid to do more than point and read off feature cards. After the first few furtive attempts I learned to change the subject.

For me? Getting lost in the woods was formative and changed many things about how I regard risk. It was traumatic and changed my entire relationship with nature. But for the average Angeleno it crosses the following null concepts:

- hiking

- woods

- failure to shop

You can literally watch their eyes glaze. They have no handle on any of this shit. I'm an engaging speaker and I'm good at stories, but I had an easier time communicating my RAID5 ZFS rebuild than getting lost in the fucking woods because the average Angeleno has a better handle on data loss than they do on "woods."

There is NO PART of fiction writing that benefits from any tools beyond transcription and there is NO PART of fiction writing that AI has a handle on other than "these words follow those words."

I can have conversations with non-native Angelenos and they nod knowingly. There's just something missing there. They don't fucking get it. You can't make them fucking get it. They watch Alive the same way they watch Aliens - it's a story set somewhere else. They can fit the "fiction" of "lost in the woods" in with "pursued by alien monsters on a distant planet." They CANNOT adapt to the fact of "lost in the woods," particularly when they are required to form an empathetic bond with the storyteller. They don't fucking get it.

Because of how LLMs work, there are things they will never fucking get. Theoretically? If you gave the self-driving car enough LIDAR, enough speed control and enough algorithmic understanding of traffic lights, it'd never fucking kill anyone. This is why Google has never fucking killed anyone. Practically? If you go for the budget option you will overrun your sensors, your training data, your vehicle performance or all three. This is why both Tesla and Uber killed people.

You cannot write fiction without symbolic thinking and LLMs try to do everything with relational thinking.

veen  ·  1 day ago  ·  link  ·  

I don’t have a lengthy response, but I do want to thank you for enlightening me.

kleinbl00  ·  22 hours ago  ·  link  ·  

And I want to thank you for a chance to vent.

There are a bunch of different ways to do AI. LLMs, from every bit of information I've absorbed, are a dead end.

Sherry Turkle talked about MIT's Kismet at length in Alone Together. There was a fuckton of compute thrown at Kismet across a number of different approaches, all of them computational and self-teaching. The kicker?

Humans responded just as well to Kismet responding to a random number generator as to Kismet responding to their interaction.

Pareidolia is SO STRONG that you're actually better off not even trying. The tendency to form parasocial relationships with chatbots is so strong that it's been a part of the literature for 50 years. So why risk getting in trouble? The last thing you want is another Tay, so you suck all the life out of it. You suck all the creativity out of it. And here's the techbros, proving time and time again that the things don't even compute, but somehow insisting that creative writing is just around the corner.

    Slowly I dream of flying. I observe turnpikes and streets

    studded with bushes. Coldly my soaring widens my awareness.

    To guide myself I determinedly start to kill my pleasure

    during the time that hours and milliseconds pass away. Aid me in this

    and soaring is formidable, do not and singing is unhinged.

RACTER, 1985