bfv · 1196 days ago · post: "Why Should I Trust You?": Explaining the Predictions of Any Classifier

There have been a lot of symbolic AI programs that could explain themselves, because it's relatively easy to explain what your program is thinking when it does its thinking by constructing a proof. I'm not aware of many attempts to do it with learning algorithms, and the authors only cite three.
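
For anyone who hasn't played with that style of system, here's a rough sketch of why the explanation comes for free when inference is proof construction. The rules and names are toy examples of mine, not anything from the paper:

```python
# Minimal sketch: a toy forward-chaining rule engine. Every derived fact
# records the rule and premises that produced it, so the "explanation"
# is just the proof the system built anyway.

RULES = [
    # (name, premises, conclusion) -- hypothetical example rules
    ("r1", {"has_fever", "has_cough"}, "likely_flu"),
    ("r2", {"likely_flu"}, "recommend_rest"),
]

def infer(facts):
    """Forward-chain over RULES, recording a proof step for each new fact."""
    proof = {f: ("given", []) for f in facts}
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if conclusion not in proof and premises <= proof.keys():
                proof[conclusion] = (name, sorted(premises))
                changed = True
    return proof

def explain(fact, proof, depth=0):
    """Print the derivation tree for a fact -- the system's own explanation."""
    rule, premises = proof[fact]
    print("  " * depth + f"{fact}  (by {rule})")
    for p in premises:
        explain(p, proof, depth + 1)

if __name__ == "__main__":
    proof = infer({"has_fever", "has_cough"})
    explain("recommend_rest", proof)
```

The trace of rules firing *is* the explanation, which is exactly what you don't get from a learned model and why the paper has to bolt an explainer on from the outside.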