Social media is the battleground of the selfie. Our egos live and die over likes, retweets, and comments, but until now only humans could be the judge of that sweet pic you tweeted from brunch.
Using pictures from social media, Stanford researcher Andrej Karpathy trained a 140-million-parameter neural network to gauge the ultimate selfie, based on the number of likes each picture received.
Karpathy used a convolutional neural network, a flavor of AI used most often when working with images. It was invented by Yann LeCun, who now leads AI research at Facebook. First, Karpathy showed the network 2 million photos of selfies, which he gathered by scraping the web for social media posts tagged #selfie. Selfies with more likes, relative to the size of the poster's audience, were ranked as better. The neural net then broke these images down into varying levels of abstraction and ultimately learned what a good selfie looks like.
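For the curious, here is a rough sketch in Python (using PyTorch) of that general idea: label each scraped selfie good or bad by likes normalized to audience size, then train a convolutional classifier on the images. This is not Karpathy's code; the field names, the likes-per-follower split, and the toy network (far smaller than his 140-million-parameter model) are all illustrative assumptions.

```python
# Sketch only: label selfies by normalized likes, then train a small
# convolutional classifier. Data fields and network size are assumptions.
import torch
import torch.nn as nn

def label_selfies(selfies):
    """Split selfies into bad (0) / good (1) halves by likes-per-follower.

    `selfies` is assumed to be a list of dicts with 'likes' and 'followers'.
    """
    scored = sorted(selfies, key=lambda s: s["likes"] / max(s["followers"], 1))
    half = len(scored) // 2
    return [(s, 0) for s in scored[:half]] + [(s, 1) for s in scored[half:]]

# A toy convolutional classifier standing in for the real, much larger network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two classes: bad selfie, good selfie
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(images, labels):
    """One gradient step on a batch of (N, 3, H, W) image tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design choice is that the network never sees like counts directly; the likes only decide which half of the dataset each image lands in, and the model learns purely from pixels which half a new photo resembles.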
The machine helped Karpathy realize a few things about #selfies. First, the most-liked selfies are posted by women: when the machine was asked to rank new photos, not a single man appeared in the top 100 selfies. It also favors tricks like using a filter, oversaturating the face, and adding borders.
And with the best comes the worst. The bottom-tier selfies were mostly shot too close and in low lighting. Group shots consistently ranked lower, even though they were given preference during training.
Karpathy built his AI into a Twitter bot, @deepselfie, which scores your selfie from zero to 100 percent. It's been pretty brutal so far. For instance, the only picture of me from the past five years without a beard was ranked at 44.7 percent. The score is the network's estimated probability that a photo lands in the top half of all selfies: my 44.7 percent means the model gives my picture slightly worse than coin-flip odds of being above average, while a 90 percent score would mean it is 90 percent sure the photo belongs in the top half. Try it out, and see if this AI machine thinks your selfie game makes the cut.
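How a number like 44.7 percent falls out of a two-class network is easy to sketch: the model's raw outputs for "bad selfie" and "good selfie" are squashed through a softmax, and the "good" share is reported as a percentage, i.e., the estimated probability the photo belongs in the top half. The snippet below is a toy illustration with made-up numbers, not the bot's actual code.

```python
# Toy illustration (assumed logits, not the bot's real output): a softmax over
# the "bad" and "good" class scores yields the percentage the bot reports.
import math

def selfie_score(bad_logit, good_logit):
    """Return the estimated 'top half' probability as a percentage."""
    exp_bad, exp_good = math.exp(bad_logit), math.exp(good_logit)
    return 100 * exp_good / (exp_bad + exp_good)

print(round(selfie_score(0.213, 0.0), 1))  # -> 44.7, just under coin-flip odds
```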
Updated on October 27, 2015 to correct Andrej Karpathy’s name and affiliation.