I’m always interested in what’s going on in other fields, but can't sift through the onslaught of articles to find the interesting ones (and significant/important/citable doesn't necessarily mean interesting). There’s so much more research beyond what's in “the news” and it’s always exciting to hit upon a new topic I didn’t even know existed! So what have been some of your favorite articles over the years?
Here are two that amazed me when I first read them, to get us started:
This study looked at why a certain species of fly has evolved to live only on a single species of cactus. It appears that at some point the fly developed a mutation that left it unable to make a chemical it needs, one it had also been getting from the cactus. Once it couldn't make its own chemical, it became restricted to living on that cactus so it could still get it. Neat!
Mutations in the neverland Gene Turned Drosophila pachea into an Obligate Specialist Species (Science 28 September 2012) http://www.sciencemag.org/content/337/6102/1658
There's a general belief that you shouldn't touch brightly colored amphibians like frogs and salamanders because the brightest ones are the most poisonous. This study found that some populations of poison frogs in Central America have actually evolved to be more poisonous and less visible: their ancestors and close relatives are all brightly colored and poisonous, but these populations are dark green and even more poisonous.
Inversely Related Aposematic Traits: Reduced Conspicuousness Evolves with Increased Toxicity in a Polymorphic Poison-Dart Frog (Evolution June 2011) http://onlinelibrary.wiley.com/doi/10.1111/j.1558-5646.2011.01257.x/abstract
Yay, this is one of my favorite topics!!!! I work in psycholinguistics, specifically studying how people process sentences. I'm really interested in how people use probabilistic information about their language while processing and producing it. Here are my favorite articles:

Aylett & Turk (2004). The Smooth Signal Redundancy Hypothesis: A Functional Explanation for Relationships between Redundancy, Prosodic Prominence, and Duration in Spontaneous Speech. Language and Speech.

Levy (2008). Expectation-based syntactic comprehension. Cognition.

I always find myself linking people to this one because it's extremely useful when doing categorical data analysis, since apparently all intro statistics ever teaches is t-tests and ANOVA:

[Jaeger (2008). Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language.](http://www.researchgate.net/profile/T_Florian_Jaeger/publication/38063025_Categorical_Data_Analysis_Away_from_ANOVAs_(transformation_or_not)_and_towards_Logit_Mixed_Models/links/09e41500d7edfc3aa5000000.pdf)

And one of my favorite recent experimental papers:

Jaeger & Snider (2013). Alignment as a consequence of expectation adaptation: Syntactic priming is affected by the prime's prediction error given both prior and recent experience. Cognition.
> Jaeger & Snider (2013). Alignment as a consequence of expectation adaptation: Syntactic priming is affected by the prime's prediction error given both prior and recent experience. Cognition.

That's a really neat paper, thanks so much for sharing it! I'd actually noticed this effect in myself but never really thought much about it, and this paper was a great intro to the topic for me. The most vivid and extensive case was when I spent a lot of time reading "The Grapes of Wrath": I noticed that I was thinking, and occasionally speaking, in the dialect of the book's main characters. I've noticed it over shorter intervals in conversations with others and as a result of reading other books or watching TV shows. What an amazing thing our brain can do.

As a science teacher, I'm frequently encouraged to get students to "talk" in the language of science and to model that language so that they will use it themselves. I wonder if there's a connection here and whether this study helps explain what's going on?

PS: That stats paper is interesting too. I actually love stats and find the topic fascinating, but my advanced math skills are not that great. I did, however, find this line about ANOVAs interesting in the context of the paper: "For continuous outcomes, the means, variances, and the confidence intervals have straightforward interpretations. But what happens if the outcome is categorical?" I had never really thought about this disconnect between ANOVA and categorical outcomes in this way. It seems almost silly to use ANOVA in such a situation (there's a small sketch of the alternative at the end of this post).

PPS: The other papers are on the longer side, I'll get to them soon :)
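To make that contrast concrete, here is a tiny simulated example of fitting a plain linear model versus a logistic regression to a binary outcome. This is only a sketch: the data are made up, the use of pandas/statsmodels is my own choice, and the paper's actual recommendation is logit *mixed* models with random effects for subjects and items, which this toy example leaves out.

```python
# Minimal sketch (not from the paper) of the contrast the quoted line points at:
# treating a binary (categorical) outcome as if it were continuous vs. modeling
# it on the logit scale. Data, column names, and libraries are my own assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
condition = rng.integers(0, 2, n)                          # 0 = baseline, 1 = manipulation
p_correct = 1 / (1 + np.exp(-(0.2 + 1.0 * condition)))     # true effect lives on the logit scale
correct = rng.binomial(1, p_correct)                       # binary outcome (0/1)
df = pd.DataFrame({"condition": condition, "correct": correct})

# ANOVA-style approach: a linear model on the 0/1 outcome.
# Predictions can fall outside [0, 1] and the equal-variance assumption is violated.
linear_fit = smf.ols("correct ~ condition", data=df).fit()
print(linear_fit.params)

# Logistic regression: models the log-odds of a correct response,
# which respects the categorical nature of the outcome.
logit_fit = smf.logit("correct ~ condition", data=df).fit()
print(logit_fit.params)           # effect on the log-odds scale
print(np.exp(logit_fit.params))   # same effect expressed as odds ratios
```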
Almost everyone in my field (natural language processing) is currently working on word embeddings (and is sick of everyone working on word embeddings). Word embeddings are continuous vector representations of words, learned from the contexts the words appear in, that can be fed to a statistical model like a neural net in place of the exact word itself. So you don't have to learn how to deal with the word "hamburger" in every situation, as long as you know how to deal with words that are distributed similarly to "hamburger".

The craze was kicked off by a paper from a team at Google, which used a shallow neural net, trained on tasks like predicting a word from its context, to develop the representations. The representations they found had the property that you could perform analogical reasoning with them: roughly speaking, you could add and subtract vectors to find "paris" - "france" + "italy" = "rome" (there's a quick sketch of these queries at the end of this post).

Mikolov et al. (2013). Distributed Representations of Words and Phrases and their Compositionality.

And a paper last year found that this neural model, with its regularizer and everything, was actually implicitly performing matrix factorization on a matrix where cell (i,j) describes how often words i and j occur together (a shifted version of pointwise mutual information, PMI). This was incredible, because NLP researchers had been using matrix factorization of this sort to find word embeddings for a long time, and this fancy new technique for learning embeddings suddenly had a clear intuition for why it worked!

Levy and Goldberg (2014). Neural Word Embedding as Implicit Matrix Factorization.
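Here's the kind of query I mean, as a quick sketch. I'm assuming the gensim library and some pretrained word2vec-format vectors on disk; the file name is just a placeholder, and none of this is code from the paper itself.

```python
# Rough sketch of querying pretrained word embeddings.
# Assumes the gensim library and a pretrained word2vec-format vector file;
# "pretrained-vectors.bin" is a placeholder, not a real path from the paper.
from gensim.models import KeyedVectors

# Load pretrained vectors (word2vec binary format).
vectors = KeyedVectors.load_word2vec_format("pretrained-vectors.bin", binary=True)

# Words distributed similarly to "hamburger" end up with nearby vectors,
# so a model that has seen "cheeseburger" can generalize to "hamburger".
print(vectors.most_similar("hamburger", topn=5))

# Analogical reasoning by vector arithmetic:
# paris - france + italy should land near rome.
print(vectors.most_similar(positive=["paris", "italy"],
                           negative=["france"], topn=5))
```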
> And a paper last year found that this neural model, with its regularizer and everything, was actually implicitly performing matrix factorization on a matrix where cell (i,j) described how often word i and word j occur together. This was incredible, because NLP researchers had been using matrix factorization of this sort to find word embeddings for a long time, and this fancy new technique for learning embeddings suddenly had a clear intuition for why it worked!

This is really cool. PMI is a super useful measure, so it's nice that this model that works really well turns out to use it.
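To make the PMI connection concrete, here's a back-of-the-envelope sketch of the older count-based route the paper ties word2vec back to: build a word co-occurrence matrix, convert it to positive PMI, and factorize it with a truncated SVD to get dense vectors. This is just the generic technique, not anything from the paper, and the toy corpus is obviously made up.

```python
# Back-of-the-envelope sketch of count-based word embeddings:
# co-occurrence counts -> positive PMI -> truncated SVD.
# (Generic technique, not the paper's code; toy corpus is made up.)
import numpy as np
from itertools import combinations

corpus = [
    "i ate a hamburger with fries".split(),
    "she ate a cheeseburger with fries".split(),
    "paris is the capital of france".split(),
    "rome is the capital of italy".split(),
]

# Count how often each pair of words co-occurs within a sentence.
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for w1, w2 in combinations(sent, 2):
        counts[idx[w1], idx[w2]] += 1
        counts[idx[w2], idx[w1]] += 1

# Positive PMI: log of how much more often two words co-occur than chance.
total = counts.sum()
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts * total) / (row * col))
ppmi = np.maximum(pmi, 0)
ppmi[~np.isfinite(ppmi)] = 0.0

# Truncated SVD of the PPMI matrix gives low-dimensional word vectors.
u, s, _ = np.linalg.svd(ppmi)
dim = 2  # tiny because the toy vocabulary is tiny
embeddings = u[:, :dim] * s[:dim]
print({w: embeddings[idx[w]].round(2) for w in ["hamburger", "cheeseburger", "paris", "rome"]})
```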