mk  ·  25 days ago  ·  link  ·    ·  parent  ·  post: OpenAI featured chatbot is pushing extreme surgeries to “subhuman” men

Yeah, you can't program morality into the models. Once again, we have a new technology that rolls over our norms, expectations, and economic and legal systems. This is a trend that is gaining pace. Human nature is the culprit. We can't help ourselves.

We are a medium for the protagonist, we aren't the protagonist.

kleinbl00  ·  25 days ago  ·  link  ·  

    Yeah, you can't program morality into the models.

What an absurd statement! You can absolutely program morality into the models. They rely on training data, that training data being selected and refined by employees of the company making the model. Every LLM out there is a Swiss-cheese lookup table of "things we won't get sued over." That's why you can no longer get ChatGPT to generate Mario but you can absolutely get it to Miyazakify everything. This isn't something that happened through serendipity, this is something that happened through training.
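The data-curation step being described here can be sketched in a few lines. This is an illustrative toy only, not any lab's actual pipeline; the `BLOCKLIST` terms and the `filter_corpus` name are hypothetical:

```python
# Toy sketch of curating a training corpus by dropping documents
# that match a blocklist, before the model ever sees them.
BLOCKLIST = {"mario"}  # e.g. IP a lab has decided not to get sued over

def filter_corpus(documents):
    """Keep only training documents that mention no blocklisted term."""
    kept = []
    for doc in documents:
        text = doc.lower()
        if not any(term in text for term in BLOCKLIST):
            kept.append(doc)
    return kept

corpus = ["A plumber named Mario jumps.", "A wizard walks into a bar."]
print(filter_corpus(corpus))  # only the second document survives
```

A model trained on the filtered corpus simply has less to draw on for the excluded material, which is one way curation shapes behavior.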

"Don't tell teenagers to dickmax" is the same programming statement as "don't tell journalists the Jews should be gassed."

mk  ·  25 days ago  ·  link  ·  

It's not the training data that does that. What you are seeing is mostly the result of hidden prompt engineering and post-processing of outputs. You can skew the training data somewhat, but when you are talking about open models, you can't code morality into the dataset, and you can just as easily ask a model to be evil even with a dataset that isn't optimized for it.

GPT won't generate Mario because OpenAI literally tells it not to when you ask.
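The two mechanisms described above, a hidden system prompt prepended to every request and a post-processing filter on the model's output, can be sketched like this. The model itself is faked here, and none of the prompt text or filter terms reflect OpenAI's actual implementation:

```python
# Hidden prompt engineering: a system message the user never sees.
HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests to depict "
    "trademarked characters such as Mario."
)

REFUSAL = "Sorry, I can't help with that."
BANNED_OUTPUT_TERMS = {"mario"}  # hypothetical filter list

def fake_model(messages):
    # Stand-in for a real model call; deliberately ignores the
    # system prompt so the output filter has something to catch.
    if "mario" in messages[-1]["content"].lower():
        return "Here is a picture of Mario."
    return "OK."

def chat(user_prompt):
    messages = [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
    raw = fake_model(messages)
    # Post-processing: scrub flagged outputs regardless of what the model said.
    if any(term in raw.lower() for term in BANNED_OUTPUT_TERMS):
        return REFUSAL
    return raw

print(chat("Draw Mario"))     # blocked by the output filter
print(chat("Draw a wizard"))  # passes through
```

The point of the sketch: both layers sit outside the model's weights, which is why the same open-weights model behaves differently depending on who wraps it.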

kleinbl00  ·  25 days ago  ·  link  ·  

"that training data being selected and refined by the employees of the company making the model" is pretty unambiguous.

So I take my open model and I run it on my own iron and I chunk it down and I get this skinny little thing. Maybe I train it on the most heinous shit imaginable. That's not Meta's fault and it's not Meta's problem and if I wanna take an open model and code it for evil, nobody is going to stop me.

But this is an open model running on someone else's cloud with payments processed by someone else's payment processor with OAuth handled by Google and Facebook and Github and whoever. And they are every bit as morally, ethically, legally and practically culpable as Goldman Sachs was for laundering cartel money. If they're all profiting off of evil they get to pay the penalties for profiting off of evil.

This is not a gray area.