- “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
> The EU will effectively require explanations of AI decisions.
There has been a little research on explaining the 'thought process' of various machine learning algorithms [1] [2], but not nearly enough in my opinion.
> Or it could investigate why high-rolling investors are given the right to understand the financial decisions made on their behalf by humans and algorithms, whereas low-income loan seekers are often left to wonder why their requests have been rejected.

The real blind spot: how is this all going to be financed? Apparently the authors of this article have a huge blind spot about how the financial world works.
> The EU will effectively require explanations of AI decisions.

This is absolutely stupid. The beauty of AI is that we don't have to understand how it works. We set up a system and the system learns how to perform some task. And it isn't what the EU is doing, anyway: they want to explain when and what decision was made, not necessarily the details of how. Although, we could make AI designed to understand AI... wouldn't that be making them sentient?
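The when-and-what distinction the comment draws amounts to an audit log of decisions rather than an account of model internals. A minimal sketch of that idea, with every name (`DecisionRecord`, `decide_and_record`, the threshold) invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Records when a decision was made and what it was --
    not how the underlying model arrived at it."""
    subject_id: str
    decision: str
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log = []

def decide_and_record(subject_id, model_output):
    # model_output is whatever the opaque model produced; we log
    # the outcome, not the weights that produced it.
    decision = "approved" if model_output >= 0.5 else "rejected"
    record = DecisionRecord(subject_id, decision)
    log.append(record)
    return record

r = decide_and_record("applicant-42", 0.37)
print(r.decision, r.made_at)  # e.g. "rejected 2016-..."
```

Whether a log like this would actually satisfy the EU requirement is exactly what the thread is debating; the sketch only shows how cheap the when-and-what half is compared to explaining the how.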
I'd argue that the benefit of AI is that it can make decisions better, or faster, or at a larger scale than humans can. AIs can base decisions on larger datasets, etc. But there's no reason to expect that AIs can't explain the factors that went into a decision, or the alternatives they rejected and why. Furthermore, plenty of researchers are interested in this for a more mundane reason: understanding how AIs work helps build better AI and helps improve AI training. DeepDream is an example of this: a way of visualizing how 'deep learning' is able to produce the results it does.
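One hypothetical way a system could report "the factors that went into a decision" is perturbation: change one input at a time and see whether the decision flips. A minimal sketch, where the loan rule, thresholds, and feature names are all made up for illustration:

```python
# Hypothetical loan model: approve only if income is high and debt is low.
# (The rule and the feature names are invented for this example.)
def model(features):
    return features["income"] > 50_000 and features["debt"] < 20_000

applicant = {"income": 60_000, "debt": 35_000, "age": 41}

def influential_factors(model, features):
    """A factor 'went into the decision' if zeroing it out
    flips the model's output -- a crude perturbation test."""
    base = model(features)
    factors = []
    for name in features:
        perturbed = dict(features)
        perturbed[name] = 0  # crude perturbation: zero the feature
        if model(perturbed) != base:
            factors.append(name)
    return factors

print(model(applicant))                     # False: rejected
print(influential_factors(model, applicant))  # ['debt']: debt drove the rejection
```

Real models need subtler perturbations than zeroing, but the point stands: this kind of factor report needs no access to the model's internals at all, only the ability to query it.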
> But there's no reason to expect that AIs can't explain the factors that went into a decision

An AI, or a deep learning neural network, is most often a series of arrays of weights multiplied together, with the weights then adjusted based on the derivatives of a cost function. Trying to turn that process into a coherent "why" is nearly impossible without very focused study and research into how the neural network operates under different conditions, and by that time there will be a new one in training that makes entirely different decisions. Yes, we can understand how AIs behave or which situations they learn most quickly, but that doesn't mean we can tell you why an AI made decision x or decision y at any point for any person.
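A toy example of the process described above, using nothing beyond the standard library: a single-neuron network whose weights are adjusted by the derivative of a squared-error cost. It learns logical AND perfectly, yet the only "explanation" it can offer is three opaque numbers.

```python
import math

# Toy dataset: logical AND, as ((x1, x2), label) pairs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# An "array of weights" (plus a bias), to be adjusted by the
# derivative of a cost function, exactly as described above.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

for _ in range(5000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # dCost/dweight via the chain rule: (p - y) * p * (1 - p) * input
        g = (p - y) * p * (1 - p)
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g

# The network now classifies AND correctly...
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)   # [0, 0, 0, 1]
# ...but its "reasoning" is just opaque learned numbers:
print(w, b)
```

At this scale the weights happen to be interpretable (both inputs push toward approval, the bias pushes against), but nothing like that reading survives once the arrays have millions of entries and many layers, which is the comment's point.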
> understanding how AIs work helps build better AI and helps improve AI training.