user-inactivated  ·  2933 days ago  ·  post: "Why Should I Trust You?": Explaining the Predictions of Any Classifier

There have been a lot of symbolic AI programs that could explain themselves, because it's relatively easy to explain what your program is thinking when your program does its thinking by constructing a proof. I'm not aware of many attempts to do it with learning algorithms, and the authors only cite three.
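(For reference, the paper's approach, LIME, explains one prediction at a time: perturb the input, query the black-box model on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients serve as the explanation. The sketch below is only an illustration of that idea against a scikit-learn style classifier; the function and parameter names are mine, not the authors'.)

```python
# Minimal sketch of a LIME-style local surrogate, not the authors' code.
# Assumes `black_box_predict_proba` behaves like a scikit-learn predict_proba
# for a binary classifier; all names here are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box_predict_proba, x, n_samples=1000, kernel_width=0.75):
    """Fit a weighted linear model around x to approximate the black box locally."""
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise to sample its neighborhood.
    samples = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))
    # Ask the black box for the probability of the class of interest.
    preds = black_box_predict_proba(samples)[:, 1]
    # Weight perturbed points by proximity to x (exponential kernel).
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # The surrogate's coefficients are the per-feature explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_
```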

kleinbl00  ·  2933 days ago

Gotcha. So is it related to the fact that a learning AI has a fluid structure? Meaning the justification algorithm has to grow along with it?

user-inactivated  ·  2933 days ago

If you're looking at machine learning as modeling a kind of thinking instead of just computational statistics (always a thing to be cautious about), it's modeling the kind of unconscious thinking you have a hard time explaining yourself. How do you know that's your friend standing in the crowd over there? You just recognize them, that's all. How do you walk without falling down? You just do it. How do you interpret a bunch of sounds as words? ...