kleinbl00  ·  2933 days ago  ·  post: "Why Should I Trust You?": Explaining the Predictions of Any Classifier

Okay. So if I understand you correctly, what's being described here is an algorithm and a process whereby an unknowable AI model can be distilled down to a knowable one, basically by highlighting and relating the big peaks that drove the unknowable model's prediction, so that you get a relatable "slice" through the data.

So while the algorithm as presented works on "is this dot black or white" or "is 7 more or less than 10" (I looked up "sparse linear classifier"), the theory would be that this method of computation could eventually lead to "The Weather Channel predicts it's going to rain Tuesday afternoon because this pressure profile has led to rain 30% of the time, there's a wave of humidity sweeping north out of the Gulf, the jet stream is acting weird and there are half again as many sunspots as normal" out of a dataset that includes all of the above plus eleventy dozen other things.
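
To check myself in code: I picture it something like this toy sketch (entirely hypothetical, mine and not the paper's; it assumes a scikit-learn-style black box with a `predict_proba` method, and all the names and numbers are made up):

```python
# Toy sketch of the local-explanation idea (hypothetical, not the paper's
# code). Assumes a scikit-learn-style black box with a predict_proba method.
import numpy as np
from sklearn.linear_model import Lasso

def explain_locally(black_box, x, n_samples=5000, kernel_width=0.75):
    """Fit a sparse linear surrogate around a single instance x."""
    rng = np.random.default_rng(0)
    # 1. Perturb the instance: sample a cloud of points around x.
    X_pert = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))
    # 2. Ask the unknowable model what it predicts at each point.
    y_pert = black_box.predict_proba(X_pert)[:, 1]
    # 3. Weight each sample by how close it sits to x.
    dists = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a sparse, knowable linear model to mimic the black box locally.
    surrogate = Lasso(alpha=0.01)
    surrogate.fit(X_pert, y_pert, sample_weight=weights)
    # The few nonzero coefficients are the "big peaks" behind the prediction.
    return surrogate.coef_
```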

Close?

Appreciate your patience. Statistics was a long time ago...

user-inactivated  ·  2933 days ago

Yes, that's pretty much it. Sparse linear classifiers aren't as simplistic as your googling led you to believe; look at figure 4, where they carved out the superpixels of the image that led to the three classifications they got for it. Their algorithm could also use some other easily comprehensible model in place of a sparse linear classifier; as with all learning algorithms, you have to decide in advance the sort of model you're going to learn.
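
To make that last point concrete, here's a hypothetical sketch (my names, not the paper's code) of how the comprehensible model family is a choice you plug in up front, not something the algorithm discovers:

```python
# Hypothetical sketch: the comprehensible surrogate family is decided in
# advance and plugged in; the fitting step itself doesn't change.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X_pert = rng.normal(size=(500, 10))                       # perturbed samples
y_pert = (X_pert[:, 0] + X_pert[:, 3] > 0).astype(float)  # stand-in black-box outputs
weights = np.exp(-np.linalg.norm(X_pert, axis=1))         # proximity weights

# One choice: a sparse linear model, which explains via a few nonzero weights.
linear = Lasso(alpha=0.01).fit(X_pert, y_pert, sample_weight=weights)
print(np.flatnonzero(linear.coef_))   # which features mattered

# Another choice: a shallow tree, which explains via a few if/then splits.
tree = DecisionTreeRegressor(max_depth=3).fit(X_pert, y_pert, sample_weight=weights)
print(export_text(tree))
```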

kleinbl00  ·  2932 days ago

Copy copy. Thanks. I saw the dog and his guitar and learned what a superpixel was, but the math was too rigorous for me to follow along without a spotter. Last question: what is it about their approach that's novel, and why hasn't an approach like this been attempted before? "Parzen windows", whatever they are, appear to be like 50 years old, so I have to assume attempts at doing stuff like this have been around for as long as AI itself... but again, I'm a plebeian.

user-inactivated  ·  2932 days ago

There have been a lot of symbolic AI programs that could explain themselves, because it's relatively easy to explain what your program is thinking when your program does its thinking by constructing a proof. I'm not aware of many attempts to do it with learning algorithms, and the authors only cite three.
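
For flavor, here's a toy sketch (my own invention, not any particular historical system) of why the symbolic case is easy: in a forward-chaining rule engine, the proof trace simply is the explanation:

```python
# Toy forward-chaining rule engine (hypothetical sketch). The
# "explanation" is just the list of rules that fired along the way.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

def infer(facts):
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((premises, conclusion))
                changed = True
    return facts, trace

facts, trace = infer({"has_feathers", "lays_eggs", "cannot_fly"})
for premises, conclusion in trace:
    # The proof doubles as the program explaining itself.
    print(f"because {sorted(premises)} -> {conclusion}")
```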

kleinbl00  ·  2932 days ago

Gotcha. So is it related to the fact that a learning AI has a fluid structure? Meaning the justification algorithm has to grow along with it?

user-inactivated  ·  2932 days ago

If you're looking at machine learning as modeling a kind of thinking instead of just computational statistics (always a thing to be cautious about), it's modeling the kind of unconscious thinking you have a hard time explaining yourself. How do you know that's your friend standing in the crowd over there? You just recognize them, that's all. How do you walk without falling down? You just do it. How do you interpret a bunch of sounds as words? ...