I agree that testability is vital, but I don't think time spent interpreting the data we have is wasted if it leads to a simpler model. A paradigm shift without any obvious testable conclusions can still be the basis of models that make new predictions (heliocentrism didn't lead to noteworthy testable predictions for decades), and the author admits the possible tests of quantum gravity they're advocating are "farfetched". Maybe experts are advocating more theorizing than testing right now simply because we don't have very promising theories yet; maybe we already have a theory that the sun is the center of the solar system, but it won't offer better predictions than epicycles until we consider that the planets' orbits don't have to be circular.
You raise a very astute point with the Copernican example. That said, the point I took from the article is that the science is suffering in part because money that could be invested in experimental testing is instead being channeled toward work on theory. The idea is that solid experimental results could help lay the groundwork for more constructive theorising (which at the moment is going nowhere).
That does seem to be the problem the author is describing, but the author presents it as a failing of the scientific method, when it seems to me more like a problem with incentives in research funding; that article would probably get a lot less attention, though. I'd read it: why are the incentives wrong? Is it risk aversion, since most potential experiments are unlikely to yield breakthroughs? Do the people who conduct the right experiments receive too little credit relative to the authors of the theories those experiments help build? Or is it just a random aberration in a system where actors receive essentially no reward for choosing a better strategy, making the best course non-obvious and the incentive to identify and pursue it approximately nil?