veen  ·  329 days ago  ·  post: The Knowledge Economy Is Over. Welcome to the Allocation Economy

Dan desperately wants to be an auteur. He's not.

What I like about the metaphor is that it reminded me a lot of how it feels to manage interns at my job. You're gonna need to instruct (prompt) them in a particular way, they're gonna run in whatever direction seems good to them regardless of whether it actually makes sense/is true, and it's up to you to coordinate various people and make sure the right task goes to the right person (model). It's a metaphor for how to use the increasing array of different tools and models and interfaces and whatnot. I feel there's a difference between how I use a normal tool versus how I use AI tools, precisely because they're both unreliable and a way to boost creativity or to outsource easily-controllable tasks. (Like interns.)

I fully agree that managing people and, you know, their feelings & morale & motivation is what a manager's actual job is, but I find the argument Dan makes, that "we are all gonna be a bit more managerial due to AI tools cropping up in our jobs in weird ways," at least somewhat compelling.

kleinbl00  ·  329 days ago

Yeah I get it but look - you say "hey Intern give me this data"

and they come back with "here is a banana and a receipt for my Uber ride to buy bananas also I brought coffee for everyone but you please validate my parking"

and you go "well, wait. Hang on. How did bananas and coffee come into the discussion, also this wasn't part of the scope and I recognize that's my fault for not giving you a proper brief"

and the intern looks at you askance, already viewing you in terms of TikTok memes, BUT when you ask how bananas entered the chat, you'll at least get some reasoning.

AI gonna tell you bananas are data. And if you say "try again" you might get bananas, you might get oranges, you might get data. And if you use that data, you might find out later on that it's actually bananas. And if you query it enough you might come across some form of "this data is bananas," which somehow got you into the "desktop fruit" dataset, but there's nothing you can do to get fruit out of the data, because what you're doing? Is querying a black box that belongs to someone else, and they don't really know where the Markov turned left, either. And even if they did, there's nothing they can do except hand-tag "do not give bananas in response to data," which is great unless you're a grocery wholesaler.
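
To put that "run it again" roulette in code: a minimal sketch, assuming the OpenAI Python client, with the model name and prompt as illustrative placeholders. Same prompt, nonzero temperature, and sampling can walk a different path through the model every run:

    # Sketch only: assumes the OpenAI Python client and an API key in the
    # environment; model name and prompt are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Give me this data."

    for attempt in range(3):
        response = client.chat.completions.create(
            model="gpt-4o",   # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # nonzero temperature: answers vary per run
        )
        # Three runs, three potentially different answers:
        # bananas, oranges, maybe even data.
        print(attempt, response.choices[0].message.content)

Nothing in that loop knows which run was bananas; the model samples, it doesn't check.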

The reliability of interns is an easily assessed quality. You can back-test any errors. You can tune your input to maximize your output. With AI, if you don't like the output you can try your luck and run it again? But any answer you get, you're going to have to try again a different way anyway, since "factual correctness" is simply not a vector within the GPT space.
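
Which is the whole problem in one sketch: if correctness isn't a dimension the model optimizes, the check has to live outside the model. Hypothetical example, where the known-goods list and the validate function are invented for illustration:

    # Sketch only: KNOWN_SKUS and validate() are hypothetical stand-ins
    # for whatever ground truth you actually control.
    KNOWN_SKUS = {"SKU-1001", "SKU-1002", "SKU-1003"}

    def validate(model_answer: list[str]) -> bool:
        """Accept the output only if every item exists in our own records."""
        return all(item in KNOWN_SKUS for item in model_answer)

    answer = ["SKU-1001", "banana"]  # what the model handed back this run
    if not validate(answer):
        # Re-running the same prompt just re-rolls the dice; the fix is to
        # reformulate the query or constrain the output, then check again.
        print("rejected:", [x for x in answer if x not in KNOWN_SKUS])

Run it again and you might pass the check, but only the check ever knew the difference.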