What would constitute an objective standard here? By what standard does the researcher judge which automatically generated statements are meaningless? If the algorithm slings words together into something that one person finds meaningful but another doesn't (a sketch of the mechanism appears at the end of this note), who's to say the more skeptical judge is simply correct? One person might find a poem profound that another finds meaningless, and in that context we don't say one is wrong and the other right, because we understand that reading poetry is an active interplay between the reader and the poem. Why, then, declare that there is an objective standard of meaningfulness for computer-generated New Agey statements?

I clicked the bullshit generator a few times. Most of the statements strike me as pseudo-profound junk, but occasionally one pops up that gives me pause. For example: "Turbulence is born in the gap where awareness has been excluded." I read that and thought, yes, it's easy to slip into a frantic spin when you're not paying attention to your life. This randomly generated statement suggests something to me that it might not to someone else. Who's right? There's no right, of course. How you read it depends on what you bring to it.

So I'm a little suspicious of an experiment that starts from the premise "these statements are meaningless." Which isn't to say the experiment is worthless. It shows that people handle these statements differently, and each group can dismiss the other: one sees the other as credulous, and is seen in turn as lacking imagination. But if you then ask "who's right?", you should watch what standards you smuggle into that question.
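For the curious, here is a minimal sketch of what "slinging words together" can look like in practice. The word banks and templates below are my own illustrative guesses at the style; I don't know the actual generator's vocabulary or rules.

```python
import random

# Hypothetical word banks and sentence templates, loosely imitating the
# generator's style. The real site's data and rules are unknown to me.
NOUNS = ["turbulence", "awareness", "intention", "stillness", "potential"]
ADJECTIVES = ["infinite", "quantum", "hidden", "self-aware"]
TEMPLATES = [
    "{a} is born in the gap where {b} has been excluded.",
    "{a} is a reflection of {adj} {b}.",
    "only through {adj} {a} can we awaken {b}.",
]

def pseudo_profound() -> str:
    """Fill a random template with randomly chosen words."""
    a, b = random.sample(NOUNS, 2)  # pick two distinct nouns
    sentence = random.choice(TEMPLATES).format(
        a=a, b=b, adj=random.choice(ADJECTIVES)
    )
    return sentence[0].upper() + sentence[1:]  # capitalize the first letter

if __name__ == "__main__":
    for _ in range(3):
        print(pseudo_profound())
```

The point of the sketch is that nothing in the pipeline tracks meaning: the output is grammatical by construction, and whatever significance it carries is supplied by the reader.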