user-inactivated  ·  2932 days ago  ·  link  ·    ·  parent  ·  post: "Why Should I Trust You?": Explaining the Predictions of Any Classifier

There are some learning algorithms where you need to know more math than the average undergraduate just to understand what sort of objects the inputs and outputs are, and in real applications some voodoo often happens that the authors don't really understand either. We've talked about this. What these guys propose is learning a simpler model of the model learned by the complicated algorithm, one whose inputs and outputs are easy to explain (and, usually, easy to generate a visualization for), and where the relationship between input and output is also relatively simple. They propose that, if the relationship between inputs and outputs of the second model is close enough to that of the first for points near the particular input and output you care about, then a visualization, or some other hint at what's going on, of the second model can serve as an explanation for what actually happened. If that really is an acceptable use of "explanation", then we have a way for black-magicky learning algorithms to be at least a little transparent.
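If I'm reading the paper right, the core trick fits in a few lines of numpy. This is only a sketch of the idea, not their implementation: the `black_box` function, the Gaussian kernel, its width, and the sample count are all made up here for illustration. Sample points near the input you want explained, query the black box at those points, and fit a weighted linear model (the "second model") whose coefficients say which features mattered locally:

```python
import numpy as np

# Hypothetical stand-in for a complicated learned model: we can only
# query it for predictions, not inspect its internals.
def black_box(X):
    return (X[:, 0] ** 2 + np.sin(3 * X[:, 1]) > 1.0).astype(float)

rng = np.random.default_rng(0)

x0 = np.array([1.0, 0.2])  # the particular input we want explained

# Sample perturbed points near x0 and ask the black box about them.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# Weight each sample by proximity to x0 (a Gaussian kernel is one choice),
# so the fit only has to be good near this point, not globally.
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.3 ** 2))

# Fit a weighted linear model: the simple, explainable surrogate.
A = np.hstack([Z, np.ones((len(Z), 1))])  # add an intercept column
sw = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)

print("local feature weights:", coef[:2])
print("intercept:", coef[2])
```

The two feature weights are the "explanation": near `x0`, nudging either input up pushes the black box toward the positive class, which you could read off the printed coefficients or turn into a bar chart without ever understanding the black box itself.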