Defining an objective, picking a small set of actions to try at random, and then iterating with small mutations on the most successful sequences will obviously produce a series of instructions that achieves that goal, but saying it resembles intelligence is silly. It's not intelligence, it's brute-force trial and error. It's like they're comp sci freshmen who wrote MyFirstGeneticAlgo.java, read a bit of Heidegger, and now think they're geniuses. This NES bot is a *way* cooler example of the same basic idea. It even finds its own goals: instead of a predefined goal like "entropy staying high", it searches RAM for sets of lexicographically ordered values that get progressively higher over some example input sequences, and it "learns" to play the game against that self-defined goal, even inventing new techniques to "win".
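The goal-discovery trick can be sketched like this (a simplified, hypothetical version: Tom7's actual learnfun uses lexicographic orderings over *tuples* of addresses, here I just look for single bytes that only ever go up during the example play):

```python
# Sketch of "find your own objective in RAM": given snapshots of RAM
# from example play, keep the byte addresses whose values never decrease
# and increase at least once, and treat "make those bytes go up" as the
# bot's self-discovered goal. Data and function names are illustrative.

def find_increasing_addresses(snapshots):
    """snapshots: list of equal-length lists of byte values (RAM dumps)."""
    good = []
    for addr in range(len(snapshots[0])):
        series = [snap[addr] for snap in snapshots]
        nondecreasing = all(a <= b for a, b in zip(series, series[1:]))
        if nondecreasing and series[-1] > series[0]:
            good.append(addr)
    return good

def score(ram, addresses):
    """Score a RAM state by summing the discovered 'progress' bytes."""
    return sum(ram[addr] for addr in addresses)

# Toy demo: 4-byte "RAM" over 3 frames. Address 1 behaves like a score
# counter (monotonically increasing); the rest are noise or constant.
demo = [
    [5, 0, 7, 200],
    [3, 1, 7, 120],
    [9, 2, 7, 250],
]
objectives = find_increasing_addresses(demo)  # picks out address 1
```

Nothing here "knows" what a score or a level counter is; it just falls out of the statistics of the demonstration, which is what makes it feel like the bot invented its own goal.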
A program isn't really being predictive about a simulation when its "predictions" are indistinguishable from the real thing because they are simply continuations of the simulation itself. The Wissner-Gross bot in the paper linked by OP runs the simulation into the future and maximizes an "entropy" score calculated by an equation the researchers provided. Tom7's NES bot runs the emulator into the future and maximizes whatever memory values the example input demonstrated were valuable. Both then return to the present and choose "optimal" paths toward their objective by rejecting the choices that played out poorly. Perhaps in a simulation where the results of the input were nondeterministic, or in a fuzzy simulation of an external environment outside the program's control, it would be valid to call a program predictive.
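The shared loop both bots run amounts to something like this (a toy sketch; `step` is a stand-in for the emulator/simulation, and the action names and scoring function are made up):

```python
# "Run the simulation forward, keep what scores best": because step()
# is fully deterministic, the lookahead isn't a prediction about the
# world, it IS the world, continued a few ticks into the future.

from itertools import product

def step(state, action):
    # Toy deterministic world: state is an int, actions nudge it.
    return state + {"left": -1, "wait": 0, "right": +2}[action]

def rollout(state, actions):
    for a in actions:
        state = step(state, a)
    return state

def choose_action(state, horizon=3, score=lambda s: s):
    """Try every action sequence of length `horizon`, then commit to the
    first action of whichever sequence ends in the highest-scoring state."""
    best = max(product(["left", "wait", "right"], repeat=horizon),
               key=lambda seq: score(rollout(state, seq)))
    return best[0]
```

With this trivial score the bot always commits to "right", because the rollout that ends highest starts with it. Swap `step` for an emulator and `score` for the entropy equation or the learned RAM objective, and you have the skeleton of either paper's method.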