The roar of the engines is deafening. Directly in front of me, I’ve got the No. 1 car, more than 3,000 pounds of hot steel, locked in my sights. I’m right on my rival driver’s rear bumper, a supermodel-thin distance between us as my 760-horsepower Chevy bears down at 184 mph. As we go into the last turn, No. 1 offers the tiniest of openings to the inside. I go low for the pass, giving my ride everything it’s got left to pull ahead in the final straightaway . . .
And there’s the checkered flag! The No. 8 car wins the 2017 Daytona 500! Or, more accurately, I win. I inched past the real lead car and crossed the finish line first, but with a digitally rendered Chevy I drove from my couch while playing along with the race in real time on my PlayStation 5. I just flip on the TV, and instantly I can see any spot on the track from any angle I choose, get an update on my fantasy racing league on the screen, and play Nascar ’17 against actual Sprint Cup drivers during a live race.
Within 10 years, this won’t sound any more far-fetched than the first-down line superimposed on a football field. In fact, it’s the natural extension of that technology, and it will come from the same small company: Sportvision, a broadcasting-technology firm in Chicago that has developed a whole host of familiar technologies, from a graphic showing a curveball’s entire flight to home plate (plotted to an accuracy of half an inch), to that now-indispensable first-down line, a digitally generated marker that looks as natural as if it were painted on the field.
Sportvision got its start in 1996, when engineers from News Corp. developed a hockey puck that appeared to glow onscreen—an effect created by embedding an infrared emitter in the puck—to make NHL broadcasts easier to follow. Hockey purists protested (“We joke that some of our key scientists aren’t welcome in Canada,” says Sportvision CEO Hank Adams), but the company made its bones with other innovations shortly after being spun off as an independent company two years later. Its first offering, the first-down marker, became the best known; before long, watching a game without the glowing line began to seem unthinkable.
Today, Sportvision controls about two-thirds of the live sports-broadcasting-enhancement industry, working on 3,000 broadcasts a year, including NBA games, Nascar races and golf tournaments. By collecting huge amounts of data on the field of play and the participants, it has shifted the viewer's focus from the sort of perspective you could get in the stadium with a pair of binoculars to field-level views of the players. Instead of stats you could track on the back of an envelope, it offers fans information previously beyond the reach of entire coaching staffs.
Take auto racing, for example. The company has commissioned computational fluid-dynamics analyses on vehicles and combined that with mounds of positional data to simulate airflow, showing how the draft from one car affects the performance of those near it using a trail of virtual wind [see “Draft Tracker,” above].
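The on-air graphic itself is driven by a model derived from that offline analysis. As a toy illustration only (not Sportvision's actual model, and with made-up numbers), the drag reduction a trailing car experiences can be sketched as a function of its gap to the car ahead:

```python
def draft_drag_factor(gap_m, full_effect_m=5.0, fade_m=30.0):
    """Toy model: fraction of normal aerodynamic drag felt by a trailing car.

    Within full_effect_m of the leader the trailing car gets the strongest
    draft (here, 60% of normal drag); the benefit fades linearly to nothing
    by fade_m. All numbers are illustrative, not measured values.
    """
    if gap_m <= full_effect_m:
        return 0.6
    if gap_m >= fade_m:
        return 1.0
    # Linear fade from 0.6 back up to 1.0 between the two distances.
    return 0.6 + 0.4 * (gap_m - full_effect_m) / (fade_m - full_effect_m)
```

Fed with each car's live position, a function like this is enough to color a trail of virtual wind by how much drafting benefit it delivers.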
Effects like the airflow graphics represent the essence of Sportvision’s work—collecting information to deliver visual narratives that viewers care about.
Already, another technology dependent on data gathering has started to shape the way fans experience soccer on TV. Producers can take a still image of a play and shift the viewer’s perspective to virtually anywhere on the field. Software constructs an optimal view on a virtual field, extracts players from the video freeze frame, and then superimposes them. The process takes some five minutes of intensive manual photo manipulation [see “Seeing Every Angle,” next page] and depends on knowing the precise location of the camera.
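That camera-position dependency is the crux: once the camera's location and focal length are known, geometry ties each on-field position to a spot in the image, and the same math can re-render the scene from any virtual camera. A minimal pinhole-camera sketch (axis-aligned cameras, hypothetical coordinates, no rotation or lens distortion) looks like this:

```python
def project(point, cam_pos, focal):
    """Project a 3D field point (x, y, z) into a pinhole camera at cam_pos
    that looks straight down the +z axis. This is a simplifying assumption;
    a real system would also apply camera rotation and lens distortion."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    return (focal * x / z, focal * y / z)

# A player standing at (10, 5, 0) on the field, seen by a broadcast
# camera 50 units behind the origin:
broadcast_view = project((10, 5, 0), (0, 0, -50), 1000)
# The same player re-rendered from a virtual camera placed elsewhere:
virtual_view = project((10, 5, 0), (10, 5, -20), 1000)  # dead center: (0.0, 0.0)
```

The hard part in practice is not this projection but what the paragraph above describes: cleanly extracting each player from the freeze frame before re-compositing him into the new view.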
The technical hurdles of automating the process for easy use in other sports are still considerable, and may require remote camera tracking of each appendage of every player. But the potential is enticing enough that the NBA has already approached Sportvision about creating digital game reconstructions for coaches, and Sportvision scientists dream of eventually being able to watch a pass from a quarterback’s point of view in real time, enabling fans to see for themselves why he missed that wide-open receiver.
Naturally, this kind of simulation also lends itself to gaming. “Already there’s data being collected at the venue, and there’s video,” says Sportvision engineer Ken Milnes. The problem, he says, is that current set-top boxes don’t have anywhere near the processing power of gaming consoles. As boxes evolve, fans will be able to race their own ghost cars around the track during live broadcasts. The game players wouldn’t be interfering with the actual race, of course; rather, the race would be generating instant scenarios for the game. Baseball is another target, with plans to let virtual batters swing against real pitchers in live games via their Wii. Applying this level of interactivity to other sports will be tougher, since team play is coordinated (a real quarterback can’t pass to a virtual receiver).
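A ghost car like that needs little more than a stream of timestamped positions from the live feed, interpolated at the game's frame rate. A minimal sketch, using hypothetical telemetry and a one-dimensional distance along the track:

```python
def ghost_position(samples, t):
    """Linearly interpolate a ghost car's track distance at time t from
    timestamped telemetry samples [(seconds, meters_along_track), ...].
    Times outside the recorded range clamp to the first/last sample."""
    if t <= samples[0][0]:
        return samples[0][1]
    if t >= samples[-1][0]:
        return samples[-1][1]
    for (t0, d0), (t1, d1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return d0 + (d1 - d0) * (t - t0) / (t1 - t0)

# Illustrative telemetry, as it might arrive from a live broadcast feed:
telemetry = [(0.0, 0.0), (1.0, 80.0), (2.0, 165.0)]
```

With the real cars' positions resampled this way every frame, the console only has to render them alongside the player's own car.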
But Milnes is researching how data from live games can be uploaded into videogames on an ongoing basis, so the performance of players on your console will closely mirror their performance in the games you’re watching on TV. Moreover, as data collection becomes more thorough, the virtual renderings can be made more realistic. A decade from now, games might actually be holographically projected across your living room. It will be the next best thing to being there—if you can even tell the difference.
Seeing Every Angle
Sportvision is researching ways to provide a perspective from every point on a sports field by automating the process of photo manipulation. Right now, the process works best in motor sports. Because cars are easy to outfit with onboard GPS and don’t have messy moving limbs, they can be rendered from any angle as they drive.
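Turning each car's onboard GPS fix into local track coordinates for rendering can be done with a flat-earth approximation, which is plenty accurate over a track-sized area. A sketch with hypothetical reference coordinates (this is a standard equirectangular conversion, not a description of Sportvision's pipeline):

```python
import math

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Convert a GPS fix to meters east (x) and north (y) of a fixed
    reference point, using an equirectangular (flat-earth) approximation
    that holds well over the extent of a racetrack."""
    earth_radius = 6371000.0  # mean Earth radius in meters
    x = math.radians(lon - ref_lon) * earth_radius * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * earth_radius
    return x, y
```

From there, each car's (x, y) and heading are all a renderer needs to place its 3D model on a virtual track.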
A Sportvision-created feature on nascar.com, for example, lets fans follow the race through Dale Earnhardt Jr.'s simulated windshield. But in a soccer game, a player's limbs might be obscured in the original still, requiring some work to prevent him from looking like an amputee as the perspective is artificially shifted from the original view.
The company hopes to master that technology in the next five to 10 years.