I'm predicting no effect until we change more things to load dynamically, because it's still loading all publications on startup. So, right now, once loaded, the only difference will be the ctags, which I made load dynamically. And those are only loaded when an individual story is displayed, and saved when a user adds a community tag.
And in fact, it already loads less. The ctags used to be part of the pub, loaded from file as the full list of ctags. But the only data actually needed to display a story is, "has the current user created a ctag for this publication?" So, instead of loading all the ctags, it queries only that: "select count(1) from publication_community_tagses where id = $1 and username = $2;" I think it will be faster, but insignificantly so.
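For illustration, here's a minimal sketch of that per-user check, assuming a Postgres driver (psycopg2, which uses %s placeholders where the query above uses $1/$2) and the table/column names from that query:

```python
import psycopg2

def user_has_ctag(conn, pub_id, username):
    """Return True if this user has already created a community tag
    for this publication -- the only fact the story page needs."""
    with conn.cursor() as cur:
        cur.execute(
            "select count(1) from publication_community_tagses "
            "where id = %s and username = %s",
            (pub_id, username),
        )
        (count,) = cur.fetchone()
    return count > 0
```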
For the same amount of data, in theory, SQL should be slower than reading raw from disk. But right now, we have to read and parse the full pub, for every pub. So, I think it will be faster once everything is changed to only fetch the data that the currently requested page needs. More importantly, it will be far more scalable.
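To make the contrast concrete, a rough sketch of the two approaches. The JSON stand-in for the on-disk pub format and the publications columns (title, url, submitted_by) are assumptions for illustration, not the actual schema:

```python
import json

# Current approach: read and parse the entire pub from disk,
# even though the page only displays a couple of fields.
def load_pub_from_file(path):
    with open(path) as f:
        return json.load(f)  # every field gets parsed, displayed or not

# Target approach: ask SQL for exactly the fields this page needs.
def load_pub_for_story_page(conn, pub_id):
    with conn.cursor() as cur:
        cur.execute(
            "select title, url, submitted_by from publications where id = %s",
            (pub_id,),
        )
        return cur.fetchone()
```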
If not, we add memcached. If hubski had significantly more traffic, memcached would be essential. But I think hubski typically sees <1 request/second (can you verify?). If hubski were getting, say, 1k requests/second, there's no way a dozen SQL queries per request would be fast enough. We'll just have to see.
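If it comes to that, a minimal memcached sketch (pymemcache assumed; the key name and 60-second TTL are arbitrary choices) that caches the per-user ctag check so repeated page views skip the query:

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def user_has_ctag_cached(conn, pub_id, username):
    """Check memcached first; fall back to SQL and cache the answer briefly."""
    key = f"ctag:{pub_id}:{username}"
    hit = cache.get(key)
    if hit is not None:
        return hit == b"1"
    has_tag = user_has_ctag(conn, pub_id, username)  # the query sketched above
    cache.set(key, b"1" if has_tag else b"0", expire=60)
    return has_tag
```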