veen  ·  5 days ago  ·  post: Pubski: March 12th, 2025

Maybe we should redefine AI as Artificial Interface. I think we’re finding out as we go that the scope of the word “systems” in your sentence can encompass a lot: it’s not just getting a recipe into Notion, it’s also starting to become useful as an interface between my notes and my email, or between a corpus of academic papers and my project outline. I’m tempted to start recording more meetings (or making voice notes after each?) just to create the data that could become useful later.
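
To make the “interface” idea concrete, here’s a minimal sketch of what that plumbing might look like: rank a pile of notes against a question before handing the winners to a model. The filenames and snippets are invented for illustration, and the retrieval is plain TF-IDF via scikit-learn, not any particular product’s pipeline.

```python
# Hypothetical sketch: personal notes as a queryable corpus.
# Filenames and contents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = {
    "2025-03-03-standup.txt": "Discussed migration timeline and open blockers.",
    "2025-03-07-client-call.txt": "Client wants the report draft by mid-April.",
    "2025-03-10-voice-memo.txt": "Idea: link the project outline to the paper corpus.",
}

def top_notes(question: str, k: int = 2) -> list[str]:
    """Return the k note files most similar to the question."""
    names, docs = list(notes), list(notes.values())
    vec = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
    ranked = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Only these notes (not the whole archive) would go into the model's context.
print(top_notes("When is the report due?"))
```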

Extrapolating once gets me wondering how much more useful OpenAI’s Deep Research would be if it had access to all academic papers instead of just Google results. “I’ve found 726 papers relevant to your inquiry; do you want to pay $118 for me to access them once?” Reminds me of the people who used to shout “data is the new gold!!1!” years ago. Like, hell, as a government we have decades of letters, research, data, notes and reports all just gathering dust, because the only interface to them is tedious, laborious, expensive, or all of the above.

Extrapolating twice is what I’ve seen some tech bros do, where they dream of the day when everything they say and everything they do is quietly monitored, just so they create a wealth of data to query or dive into later. There are now people who have two years of (personal) conversations with ChatGPT, which they can use with its memory feature to do [shit like this](https://www.reddit.com/r/ChatGPT/comments/1guygcv/comment/ly3b4bz/). But somewhere here is a Rubicon I’m not willing to cross.

kleinbl00  ·  5 days ago

So something Cathy O'Neil beats like a dead horse in this book is that algorithms are just bias? Which is fine, because bias is how we choose. Shit goes off the rails when that bias is hidden or lied about. I know you've read it, but for the folx at home, algorithms are bad when:

- They are opaque

- They are widespread

- They can do damage

...which describes LLMs to a T. Go to a different cognitive model, however? Say, one where you can query the AI to see how it got from Point A to Point B? Then one thing that happens is that rights management gets handled. If it used your prompt to remix fifteen images, those fifteen images count as sources for your derivative work, and an AI can figure out what you owe to whom. You can ensure it's only working with Creative Commons content, within the bounds of regulation. And it can say "based on these open sources, I predict a 70% likelihood that the following journal articles are worth paying for."
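
The bookkeeping itself isn't the hard part; knowing the sources is. Assuming the model could actually name its sources, settling up is a toy fold over that list. This is a sketch of the idea, not any real rights-management API; the license strings and fees are invented.

```python
# Hypothetical sketch: settle rights for a derivative work,
# assuming the system can name its sources. All values invented.
from dataclasses import dataclass

@dataclass
class Source:
    owner: str
    license: str  # e.g. "CC-BY" or "commercial"
    fee: float    # per-use fee; 0.0 for open licenses

def settle(sources: list[Source]) -> dict[str, float]:
    """Tally what the derivative work owes each rights holder."""
    owed: dict[str, float] = {}
    for s in sources:
        if s.license.startswith("CC"):
            continue  # open license: attribution owed, payment not
        owed[s.owner] = owed.get(s.owner, 0.0) + s.fee
    return owed

remix = [Source("alice", "CC-BY", 0.0),
         Source("stock-agency", "commercial", 2.50),
         Source("stock-agency", "commercial", 2.50)]
print(settle(remix))  # {'stock-agency': 5.0}
```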

There's no debug in LLMs. There's no "what the fuck are you talking about?"; there's just "run through your model again until you generate the answer I know to be correct," which puts you in a hell of a pickle if you don't know the correct answer.

If data is the new gold, it needs an assay certificate. Any '49er can tell you that.

ButterflyEffect  ·  5 days ago

One of my favorite LLM quips recently was someone commenting on how LLMs are right about everything you know little about, and wrong about everything you know a lot about.