Because of noise and overlap. Sound is largely characterized by its harmonics, and while you can break the fundamental loose, the harmonics are forever fucked.
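To see why the harmonics are the problem, here's a toy sketch of my own (nothing to do with any actual separation tool): take a bass playing A3 and a guitar playing E4 a fifth above. Their harmonic series land on exactly the same frequencies over and over, so the energy in those FFT bins belongs to both instruments at once and there's no clean way to split it.

```python
# Toy illustration: two notes a fifth apart share harmonics,
# so their spectral energy piles into the same bins.

def harmonics(f0, n=12):
    """First n harmonic frequencies (Hz) of a fundamental f0."""
    return {f0 * k for k in range(1, n + 1)}

a3 = harmonics(220)   # A3 -- say, the bass
e4 = harmonics(330)   # E4 -- say, the guitar, a fifth above

# Frequencies where both instruments have energy simultaneously.
shared = sorted(a3 & e4)
print(shared)  # -> [660, 1320, 1980, 2640]
```

Every one of those shared bins is a place where "remove the guitar" also means "mangle the bass," and real instruments smear energy around each harmonic instead of sitting on a single bin, so in practice the collision zones are even wider.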
Sony SpectraLayers used to advertise that it could do this - it was their leg up over iZotope. However, once you break out of their carefully sandboxed demos, you discover that trying to get a guitar out of a song is a lot like trying to get piss out of a swimming pool.
Here's iZotope's white paper on their original RX algorithm. I have RX 4, and it's orders of magnitude more refined.
Digging a little deeper, it appears that the primary use of ICA (independent component analysis) is facial recognition. To use an analogy, what we need musically is "facial reconstruction." Yes, I can carve up a sound file enough to go "yeah, that sounds like a bass." I cannot, however, do it cleanly or surgically enough to go "and it's so pretty I'm going to frame it and hang it over the mantelpiece." And I'm really f'ing good at noise reduction and the strategies thereof.
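For what it's worth, here's the textbook case where ICA *does* work, sketched with scikit-learn's FastICA (my own toy, not anybody's product): two synthetic sources, two observation channels, and the counting works out. The comment at the bottom is the catch for music.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Classic two-channel, two-source ICA demo.
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)            # "instrument" 1: sine
s2 = np.sign(np.sin(3 * t))   # "instrument" 2: square wave
S = np.c_[s1, s2]

A = np.array([[1.0, 1.0],     # mixing matrix: "the room"
              [0.5, 2.0]])
X = S @ A.T                   # two observed mixture channels

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovered sources (order/scale arbitrary)

# The catch: ICA wants at least as many channels as sources, and wants
# the sources statistically independent. A stereo song gives you two
# channels of a dozen-plus sources that are all playing in the same key
# and the same tempo -- which is why you get "sounds like a bass,"
# not a bass track worth framing.
```

With this setup each recovered component correlates almost perfectly with one of the true sources; that's the "sandboxed demo" condition that a real mix never satisfies.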