Ah, I think I am beginning to see his point more clearly. So can we say that a function has low Kolmogorov complexity if an infinite series expansion (a Taylor series, as in your example, though I suppose any infinite series could substitute) is relatively simple and repetitive? If I am reading this correctly, then I can at least understand his point qualitatively. Most physical systems are built on sets of equations that have relatively simple infinite expansions. Take, for example, the steady-state solutions to an arbitrary separable partial differential equation, as is common in systems whose force weakens with distance. The solutions are oscillators: sine waves, Bessel functions, etc. Even in QM, we see recursion formulae that determine the stable states of the wave function, single formulae that can describe the entire set of possible states for a particle under a given boundary condition. I suppose this is what is meant by low Kolmogorov complexity (or I am missing the point, which is equally, if not more, likely).
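To make that "simple and repetitive" idea concrete for myself, here's a toy sketch (my own illustration, not anything from the original argument): the entire Taylor expansion of sin(x) is generated by one short recurrence relating each term to the previous one. The brevity of the loop is the point; a tiny program reproduces the whole function, which is roughly what low Kolmogorov complexity means.

```python
import math

def sin_series(x, n_terms=20):
    """Approximate sin(x) from its Taylor series.

    The whole infinite expansion comes from one short recurrence:
    term_{k+1} = -term_k * x^2 / ((2k+2)(2k+3)),
    starting from term_0 = x. The function's full behavior is
    compressed into these few lines.
    """
    term = x      # first term of the series: x
    total = term
    for k in range(n_terms):
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
        total += term
    return total

print(abs(sin_series(1.2) - math.sin(1.2)) < 1e-12)
```

The same pattern holds for cosines, exponentials, and Bessel functions: each is pinned down by a one-line recurrence plus initial data, which is a very short description of an infinitely detailed object.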
There is another extension of this that comes to mind, and that is the convergence of the forms of equations that describe vastly different systems (beyond inverse-square laws, I mean). For example, the solutions to Newtonian oscillators, in some circumstances, also solve population models in certain ecological systems. It seems that every time Equation X satisfies problem 1 and (unrelated) problem 2, the complexity of the system (in a descriptive sense, anyway) has decreased. If there is some generalized string out of which these common forms of equations can be unpacked, I would be speechless, stunned, but not really surprised. Maybe there is hope for a unified field theory in information theory, instead of quantum mechanics (or the wretched string theory).
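The oscillator/population overlap can be sketched explicitly (again just my own toy example, using the standard linearization of the Lotka-Volterra model about its equilibrium, where small deviations satisfy the same equation u'' = -ω²u as a mass on a spring): one closed-form solution serves both problems, and only the frequency ω changes.

```python
import math

def harmonic(amplitude, omega, t, phase=0.0):
    """General solution of u'' = -omega^2 * u -- the single form
    that both problems below reduce to."""
    return amplitude * math.cos(omega * t + phase)

# Problem 1: mass on a spring, x'' = -(k/m) x
k, m = 4.0, 1.0
omega_spring = math.sqrt(k / m)

# Problem 2: Lotka-Volterra predator-prey cycles, linearized about
# equilibrium; small deviations oscillate with omega = sqrt(a*c),
# where a is the prey growth rate and c the predator death rate.
a, c = 1.5, 0.8
omega_population = math.sqrt(a * c)

# The same closed form describes both systems; only omega differs.
for label, omega in [("spring", omega_spring), ("population", omega_population)]:
    print(label, [round(harmonic(1.0, omega, t), 3) for t in (0.0, 1.0, 2.0)])
```

In the descriptive-complexity sense, the two problems share one "program" (the harmonic solution) and differ only in a parameter, which is exactly the kind of compression I have in mind.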