Machine-Learning Algorithm Generates Videos From Stills

It examines a photo and extrapolates what happens next


[Image: MIT video generation examples]

MIT researchers have used machine learning to generate video from still images, and the results are pretty impressive. As you can see from the image above, the movement in the videos has a surprisingly natural quality.

The system “learns” categories of video (beach, baby, golf swing…) and, starting from a still image, reproduces the movements most commonly seen in that category. A beach still, for instance, gets animated with crashing waves.
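Systems like this are typically trained as generative adversarial networks on large amounts of unlabeled video. Here is a minimal PyTorch sketch of the basic generator shape: a 2D encoder squeezes the still image into a latent code, and a 3D deconvolutional decoder expands that code across time and space into a short clip. Every layer size and name here is an illustrative assumption, not MIT's actual architecture.

```python
# Minimal sketch of still-image -> short-video generation (illustrative only;
# not MIT's actual model). Assumes PyTorch is installed.
import torch
import torch.nn as nn

class StillToVideo(nn.Module):
    """Encode a single frame, then decode a short clip with 3D deconvolutions."""

    def __init__(self):
        super().__init__()
        # 2D encoder: squeeze the 64x64 still image into a latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), # 16x16 -> 8x8
            nn.ReLU(inplace=True),
        )
        # 3D decoder: expand the code across time and space into a clip.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, still):
        # still: (batch, 3, 64, 64) -> code: (batch, 256, 8, 8)
        code = self.encoder(still)
        # Add a time axis of length 4 by repeating the code over time.
        code = code.unsqueeze(2).repeat(1, 1, 4, 1, 1)  # (batch, 256, 4, 8, 8)
        # Each transposed conv doubles time and space:
        # 4 -> 8 -> 16 -> 32 frames, 8x8 -> 64x64 pixels.
        return self.decoder(code)

clip = StillToVideo()(torch.randn(1, 3, 64, 64))
print(clip.shape)  # torch.Size([1, 3, 32, 64, 64])
```

In a full system of this kind, a second (discriminator) network would judge whether generated clips look like real video, pushing the generator toward category-typical motion like breaking waves; the sketch above omits that training loop entirely.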

But like other machine-generated images, these have limitations. The first is size: the clips above are shown at full resolution, which is as large as the program can render. Length is also an issue: it only produces about a second of video.

But by far the biggest limitation is that, up close, these videos are nightmarishly unrealistic. The golf clips, for instance, move the way an instructional overlay might when tracing someone's swing: the system captures the general shape of the motion without replicating the actual movement. Here's another example: look back up at the beach stills. You can see the estimated motion of a crashing wave, but it looks more like a '90s made-for-TV effect of an alien beaming down from a spaceship.

Google has the same problem with its machine-learning-generated images. And we're happy to encourage the first artworks by the young A.I.s… we just hope their skills improve and their output starts looking less like a bad acid trip.

[MIT via Kyle McDonald]

https://twitter.com/kcimc/status/773933808542375936