We tend to give resources to those with a history of success and to ignore those who have been unsuccessful, assuming that the most successful are also the most competent.
But is this assumption correct? I have spent my entire career studying the psychological characteristics that predict achievement and creativity. While I have found that a number of traits-- including passion, perseverance, imagination, intellectual curiosity, and openness to experience-- significantly explain differences in success, I am intrigued by just how much of the variance is left unexplained.
Many meritocratic strategies for assigning honors, funds, or rewards are based on a person's past success. Selecting individuals in this way creates a state of affairs in which the rich get richer and the poor get poorer (often referred to as the "Matthew effect"). But is this the most effective strategy for maximizing potential? Which funding strategy does more to maximize impact on the world: giving large grants to a few previously successful applicants, or giving many smaller grants to people of average success? This is a fundamental question about the distribution of resources, and it needs to be informed by actual data.
The article argues, based on simulations, that everyone benefits more when smaller grants and rewards are spread across a larger pool of people of average or higher talent.
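The article's exact model is not reproduced here, but the general flavor of such talent-versus-luck simulations can be sketched in a few lines. In the toy version below, "past success" first emerges from a mix of talent and random chance events; a fixed budget is then distributed either as a few large grants to the most previously successful people or as many small grants to everyone of average or higher talent, and both populations are run forward under identical luck. All parameters, probabilities, and function names are illustrative assumptions, not the article's actual model.

```python
import random

def run_phase(capital, talents, rounds, rng):
    # Each round, every person faces one chance event: with probability
    # 0.5 it is lucky, otherwise unlucky. A lucky event doubles capital
    # only if talent "converts" it (probability = talent); an unlucky
    # event halves capital. These parameters are illustrative assumptions.
    for _ in range(rounds):
        for i, t in enumerate(talents):
            if rng.random() < 0.5:
                if rng.random() < t:
                    capital[i] *= 2
            else:
                capital[i] /= 2
    return capital

def compare_strategies(n=1000, budget=1000.0, seed=0):
    rng = random.Random(seed)
    # Talent is roughly normally distributed, clipped to [0, 1].
    talents = [min(max(rng.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(n)]

    # Phase 1: everyone starts equal; "past success" emerges from
    # talent plus a large dose of luck.
    past = run_phase([1.0] * n, talents, rounds=20, rng=random.Random(seed + 1))

    # Strategy A: the whole budget goes to the 10 most previously
    # successful people.
    top10 = sorted(range(n), key=lambda i: past[i], reverse=True)[:10]
    cap_a = past[:]
    for i in top10:
        cap_a[i] += budget / 10

    # Strategy B: the same budget is spread over everyone of average
    # or higher talent.
    mean_t = sum(talents) / n
    eligible = [i for i in range(n) if talents[i] >= mean_t]
    cap_b = past[:]
    for i in eligible:
        cap_b[i] += budget / len(eligible)

    # Phase 2: run both funded populations forward under identical luck
    # and compare the total capital each strategy produces.
    final_a = run_phase(cap_a, talents, rounds=20, rng=random.Random(seed + 2))
    final_b = run_phase(cap_b, talents, rounds=20, rng=random.Random(seed + 2))
    return sum(final_a), sum(final_b)
```

Because the process is multiplicative and luck-heavy, single runs are noisy; any serious comparison would average many random seeds, which is exactly why simulation rather than intuition is needed to answer the funding question.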