They should run the analysis on the U.S.
I worry that if they're using sentiment analysis, they're not correctly isolating the multitude of hidden variables involved in analyzing journalistic text: journalistic bias, extreme quotes from one viewpoint juxtaposed with moderate quotes from an opposing viewpoint, and so on. Then again, the more data you throw at a model, the better its predictions tend to become, even with terribly naive algorithms. Since they're working in the terabytes range, it's certainly credible.
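To make the juxtaposition problem concrete, here's a minimal sketch of naive lexicon-based sentiment scoring (the tiny word lists are hypothetical, not from any real lexicon). A neutral article that quotes an extreme critic next to a moderate supporter gets scored as negative, because the scorer ignores who said what:

```python
import re

# Hypothetical toy lexicons for illustration only.
POSITIVE = {"good", "strong", "promising", "reasonable"}
NEGATIVE = {"disaster", "catastrophic", "failure", "weak"}

def naive_sentiment(text: str) -> int:
    """Score = (#positive words) - (#negative words), ignoring attribution."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A balanced news report quoting an extreme critic beside a moderate supporter:
article = ('Critics called the policy "a catastrophic failure and a disaster", '
           'while supporters said the results were "reasonable".')

print(naive_sentiment(article))  # -> -2: the extreme quote dominates the score
```

The journalist's framing is neutral, but the bag-of-words score inherits the intensity of whichever side is quoted more colorfully.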
I would just point to the Black-Scholes model: it was extremely good, but it couldn't predict everything. Six-sigma events are always possible and unpredictable, and I think, as most rational people would, that this could get worrisome if it were actually used to predict foreign policy or something similar. That said, the information it could surface would be very interesting to researchers, I would think.