user-inactivated · 3416 days ago · post: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
There have been a lot of symbolic AI programs that could explain themselves, because it's relatively easy to explain what a program is thinking when it does its thinking by constructing a proof. I'm not aware of many attempts to do it with learning algorithms, and the authors cite only three.
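To make the contrast concrete, here's a minimal sketch (a toy rule engine invented for illustration, not anything from the paper) of why proof-based systems get explanations almost for free: the derivation the program builds *is* the explanation, whereas a learned classifier just hands you a label.

```python
# Toy forward-chaining rule engine whose output is its own derivation.
# Hypothetical example for illustration only.

RULES = [
    # (name, premises, conclusion) -- a made-up knowledge base
    ("R1", {"has_feathers", "lays_eggs"}, "is_bird"),
    ("R2", {"is_bird", "cannot_fly"}, "is_penguin"),
]

def infer(facts):
    """Forward-chain over RULES, recording each rule application as a proof step."""
    facts = set(facts)
    proof = []
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                proof.append(f"{conclusion} because {name}: {sorted(premises)}")
                changed = True
    return facts, proof

facts, proof = infer({"has_feathers", "lays_eggs", "cannot_fly"})
print("\n".join(proof))
# is_bird because R1: ['has_feathers', 'lays_eggs']
# is_penguin because R2: ['cannot_fly', 'is_bird']
```

The proof trace falls out of the inference procedure itself; getting anything comparable out of a statistical model is exactly the problem the paper is tackling.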