veen  ·  7 days ago  ·  link  ·    ·  parent  ·  post: Pubski: March 12th, 2025

I’ve been enjoying cooking more lately! Found a Whole Foods-esque home delivery grocer with a spectacular assortment of good quality & organic ingredients. Finally bit the bullet and bought two carbon steel pans to add to my enameled Dutch oven and stainless steel pan. I spent a Saturday afternoon seasoning them and so far, they’ve been great at their job. Mostly I’m glad I can now cook without any nonstick pans.

As for recipes, Rukmini Iyer is fantastic. I’m also still exploring NYT’s recipes, so if y’all have veggie/vegan recipes that you love, lemme know, as I’m now trying to cook three new meals a week.

I also built an AI bot to copy all recipes (physical or digital) and throw them into a Notion cookbook I’m building. Have also been playing around with tools like Cursor and Wispr. I’m still regularly surprised at their simultaneous stupidity and genius. But I do feel AI slowly creeping into useful territory.

cgod  ·  7 days ago  ·  link  ·  

One of my favorite things to make is farro (wheat berries) with collards, sweet potato and black eyed peas.

Now I do make my collards with bacon, bake the cubed sweet potato with butter, and use chicken stock with the farro, but it would still be very satisfying without those things.

Find a good sauce for it, we like Bitchin' Sauce which I think is vegan.

kleinbl00  ·  7 days ago  ·  link  ·  

"AI" as "middleware between two disparate systems previously only connectable via laborious coding" is the future and always has been. "AI" as "Siri, but omniscient" was never going to happen.

veen  ·  7 days ago  ·  link  ·  

Maybe we should redefine AI as Artificial Interface. I think we’re finding out as we go that the scope of the word “systems” in your sentence can encompass a lot - it’s not just getting a recipe into Notion, it’s also starting to become useful as an interface between my notes and my email, or between a corpus of academic papers and my project outline. I’m tempted to start to record more meetings (or make voice notes after each?) just so I can create the data I need to become useful later.

Extrapolating once gets me to wonder how much more useful OpenAI’s Deep Research would be if it had access to all academic papers instead of just Google results. “I’ve found 726 papers relevant to your inquiry, do you want to pay $118 for me to access them once?” Reminds me of the people who used to shout “data is the new gold!!1!” years ago. Like, hell, as a government we have decades of letters, research, data, notes and reports all just gathering dust because the only interface to it is tedious, laborious, expensive, or all of the above.

Extrapolating twice is what I’ve seen some tech bros do, where they are dreaming of the day when everything they say and everything they do is quietly monitored just so they create a wealth of data to query or dive into later. There are now people who have two years of (personal) conversations with ChatGPT, which they can use with its memory feature to do [shit like this](https://www.reddit.com/r/ChatGPT/comments/1guygcv/comment/ly3b4bz/). But somewhere here is a Rubicon I’m not willing to cross.

kleinbl00  ·  7 days ago  ·  link  ·  

So something Cathy O'Neil beats like a dead horse in this book is that algorithms are just bias? Which is fine, because bias is how we choose. Shit goes off the rails when that bias is hidden or lied about. I know you've read it, but for the folx at home, algorithms are bad when:

- They are opaque

- They are widespread

- They can do damage

...which is LLMs to a T. Go to a different cognitive model, however? Say, one where you can query the AI to see how it got from Point A to Point B? And one thing that happens is rights management is handled. If it used your prompt to remix fifteen images, those fifteen images count as sources for your derivative work and an AI can figure out what you owe to whom. You can ensure it's only working with Creative Commons content within the bounds of regulation. And it can say "based on these open sources I predict a 70% likelihood that the following journal articles are worth paying for."
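The "figure out what you owe to whom" idea above can be sketched in a few lines. This is purely a hypothetical illustration (the source names, weights, and fee are invented, and no real model exposes this today): if an auditable system reports which sources went into a derivative work and how much each contributed, splitting a licensing fee is just proportional arithmetic.

```python
# Hypothetical sketch: splitting a derivative work's licensing fee
# across the source works an auditable model says it remixed.
# Source names, weights, and the fee are invented for illustration.

def split_royalties(fee, source_weights):
    """Divide a fee proportionally among sources by contribution weight."""
    total = sum(source_weights.values())
    return {src: round(fee * w / total, 2) for src, w in source_weights.items()}

# Say fifteen remixed images trace back to three rights holders,
# weighted by how many images each contributed, with a $3.00 fee:
owed = split_royalties(3.00, {"alice": 7, "bob": 5, "carol": 3})
print(owed)  # {'alice': 1.4, 'bob': 1.0, 'carol': 0.6}
```

The hard part, of course, isn't the arithmetic; it's getting a model transparent enough to produce those weights honestly in the first place.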

There's no debug in LLMs. There's no "what the fuck are you talking about"; there's just "run through your model again until you generate the answer I know to be correct," which puts you in a hell of a pickle if you don't know the correct answer.

If data is the new gold, it needs an assay certificate. Any '49er can tell you that.

ButterflyEffect  ·  7 days ago  ·  link  ·  

One of my favorite LLM quips recently was someone commenting how LLMs are right about everything they know little about, and wrong about everything they know a lot about.