What if we could automate the writing of clickbait headlines, thus freeing up clickbait writers to do useful work? That’s the question Lars Eidnes wanted to answer when he programmed a recurrent neural network to generate “formulaic and unoriginal” headlines like these:
- Top Yoga Songs For Halloween
- How To Make A Classic Cold Cheese Cake
- Are You Living Without A 5,000-Year-Old Style?
- Jimmy Kimmel And David Beckham Play A Girl At The San Francisco Comic Con
Eidnes trained the network by feeding it two million headlines scraped from BuzzFeed, Gawker, Jezebel, The Huffington Post, and Upworthy.
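Eidnes' actual model is a word-level recurrent network trained on those two million headlines; the details aren't reproduced here, but the core mechanism, a hidden state updated one token at a time, with the next token sampled from a softmax over the output logits, can be sketched as a toy character-level RNN. Everything below (the vocabulary, the sizes, the random weights standing in for trained ones) is illustrative, not his setup:

```python
import numpy as np

# Toy character-level RNN sampler: the hidden state h is updated per
# character, and the next character is drawn from a softmax over the
# output logits. Weights are random stand-ins for a trained model.
rng = np.random.default_rng(0)
vocab = list("abcdefghijklmnopqrstuvwxyz ")  # hypothetical toy vocabulary
V, H = len(vocab), 32                        # vocab size, hidden size

Wxh = rng.normal(0, 0.01, (H, V))  # input-to-hidden weights
Whh = rng.normal(0, 0.01, (H, H))  # hidden-to-hidden (recurrent) weights
Why = rng.normal(0, 0.01, (V, H))  # hidden-to-output weights
bh, by = np.zeros(H), np.zeros(V)

def sample(seed_ix, n):
    """Sample n characters, starting from the character index seed_ix."""
    h = np.zeros(H)
    x = np.zeros(V); x[seed_ix] = 1
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # recurrent state update
        y = Why @ h + by                       # logits over next character
        p = np.exp(y - y.max()); p /= p.sum()  # softmax
        ix = rng.choice(V, p=p)
        out.append(vocab[ix])
        x = np.zeros(V); x[ix] = 1             # feed the sample back in
    return "".join(out)

headline = sample(vocab.index("t"), 40)
print(headline)
```

With untrained weights the output is gibberish; training (backpropagation through time over the headline corpus) is what shapes the distribution so that samples start to look like "Top Yoga Songs For Halloween."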
How realistic can we expect the output of this model to be? Even if it learns to generate text with correct syntax and grammar, surely it can't produce headlines that contain any new knowledge of the real world; it can't do reporting. That may be true, but it's not clear that clickbait needs any relation to the real world in order to be successful. When this work began, the top story on BuzzFeed was "50 Disney Channel Original Movies, Ranked By Feminism." More recently they published "22 Faces Everyone Who Has Pooped Will Immediately Recognized." These headlines read like little more than semi-random concatenations of topics the userbase likes, and as the latter shows, 100% correct grammar is not a requirement.
After training the neural network, Eidnes concludes, “It surprised me how good these headlines turned out. Most of them are grammatically correct, and a lot of them even make sense.”
Take a look at the results on his site, Click-o-Tron, “possibly the first website in the world where all articles are written in their entirety by a Recurrent Neural Network. New articles are published every 20 minutes.”