kleinbl00  ·  6 days ago  ·  post: Pubski: March 12th, 2025

So something Cathy O'Neil beats like a dead horse in this book is that algorithms are just bias? Which is fine, because bias is how we choose. Shit goes off the rails when that bias is hidden or lied about. I know you've read it, but for the folx at home, algorithms are bad when:

- They are opaque

- They are widespread

- They can do damage

...which is LLMs to a T. Go to a different cognitive model, however? Say, one where you can query the AI to see how it got from Point A to Point B? One thing that buys you is rights management. If it used your prompt to remix fifteen images, those fifteen images count as sources for your derivative work, and an AI can figure out what you owe to whom. You can ensure it's only working with Creative Commons content within the bounds of regulation. And it can say "based on these open sources, I predict a 70% likelihood that the following journal articles are worth paying for."
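
That bookkeeping isn't exotic, either. Here's a minimal sketch of the idea in Python; everything in it (`Source`, `attribute_royalties`, the even-split payout rule) is hypothetical, a shape the problem could take rather than anything an existing system exposes:

```python
from dataclasses import dataclass

@dataclass
class Source:
    work_id: str        # identifier of the original work (hypothetical)
    license: str        # e.g. "CC-BY-4.0" or "proprietary"
    rights_holder: str

def only_creative_commons(sources: list[Source]) -> bool:
    """Gate: confirm every input to the derivative work is CC-licensed."""
    return all(s.license.startswith("CC") for s in sources)

def attribute_royalties(sources: list[Source], pool: float) -> dict[str, float]:
    """Split a royalty pool evenly across the rights holders of every
    non-CC source that fed into a derivative output."""
    payable = [s for s in sources if not s.license.startswith("CC")]
    if not payable:
        return {}
    share = pool / len(payable)
    owed: dict[str, float] = {}
    for s in payable:
        owed[s.rights_holder] = owed.get(s.rights_holder, 0.0) + share
    return owed

# Fifteen remixed images would just be fifteen Source entries:
remix_inputs = [Source("img-01", "CC-BY-4.0", "alice"),
                Source("img-02", "proprietary", "getty")]
print(attribute_royalties(remix_inputs, pool=10.0))  # {'getty': 10.0}
```

The point isn't the payout formula, it's that the computation is inspectable at all: you can only run it if the model hands you its sources.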

There's no debug in LLMs. There's no "what the fuck are you talking about"; there's just "run through your model again until you generate the answer I know to be correct," which puts you in a hell of a pickle if you don't know the correct answer.
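
For contrast, here's what a debuggable inference could look like, a sketch assuming a hypothetical trace of reasoning steps (no real LLM exposes anything like this):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    rule: str            # the rule or source that produced this conclusion
    inputs: list[str]    # prior conclusions or raw facts this step consumed
    conclusion: str

@dataclass
class Trace:
    steps: list[Step] = field(default_factory=list)

    def explain(self, conclusion: str) -> list[Step]:
        """Walk backward from a conclusion to every step that fed it,
        i.e. answer 'how did you get from Point A to Point B?'"""
        relevant: list[Step] = []
        wanted = {conclusion}
        for step in reversed(self.steps):   # newest step first
            if step.conclusion in wanted:
                relevant.append(step)
                wanted.update(step.inputs)
        return list(reversed(relevant))     # back to chronological order
```

`explain()` is the "what the fuck are you talking about" button: it returns the exact chain of steps behind an answer, so a wrong answer points at a wrong step instead of at a reroll.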

If data is the new gold, it needs an assay certificate. Any '49er can tell you that.


ButterflyEffect  ·  6 days ago

One of my favorite LLM quips recently was someone observing that LLMs are right about everything you know little about, and wrong about everything you know a lot about.