This reminds me of Martin Ford's The Lights in the Tunnel, but the consequences of advancing automation and artificial intelligence are broader than that book's focus.
Thinking about how to organize a society in which traditional labor has less value is important, but perhaps more important is the existential threat hidden in the creation of artificial general intelligence. For a long discussion of that threat, see Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.
The main thrust of Bostrom's concern is that artificial intelligence is improving at a much faster rate than human intelligence, and that an agent, or species of agents, that far surpasses humans in general intelligence would have great power over the future of humanity. I think Bostrom speculates a bit too much on plausible conjectures, but this key concern seems valid to me.
I feel that the only workable strategy for tackling the control problem is either to merge the development of artificial intelligence with the development of human intelligence, or to bring cognitive enhancement technologies for humans up to speed with artificial intelligence. I feel strongly enough about this that I am pursuing a career in cognitive and neural engineering to try to make it happen.
To me it seems inevitable that humanity-as-is will be transcended. The question I ponder is whether we will be replaced completely or evolve into something new ourselves.