What if you could watch the next World Cup play out on your coffee table in 3-D? Researchers at the University of Washington Reality Lab are hoping to make that science fiction scenario a reality by using sophisticated computing techniques to turn two dimensions into three.

Konstantinos Rematas, a postdoctoral researcher at UW, led the team that developed a program they call “Soccer on your Tabletop.” It takes a simple highlight video and maps it into three dimensions. With the aid of an augmented reality device, the video can be displayed on any flat surface and the viewer can walk around to see the play from different angles.

The driving force behind their software is a highly trained neural network. A neural network is essentially a computerized brain that scientists develop to solve complicated problems. Just as your brain is made up of interconnected neurons that work together, a neural network is made up of algorithms and functions that pass bits of data to one another.
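To make that idea concrete, here is a toy sketch, not the team's actual model, of what "functions passing bits of data to each other" looks like in code. Every name and number below is illustrative.

```python
# A toy "neural network": layers of simple functions, each passing
# its output along to the next, loosely like interconnected neurons.

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of its inputs, then a simple threshold."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation: pass positives, block negatives

def tiny_network(x):
    # Two neurons read the raw input; a third combines their outputs.
    h1 = neuron(x, [0.5, -0.2], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -0.1)

print(tiny_network([1.0, 2.0]))
```

Training a real network means nudging those weight numbers, millions of them, until the network's outputs match the examples it is shown.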

The more complex the problem, the bigger the network needs to be, and the more examples it needs to see in order to produce the right results. To teach their program to display soccer players correctly, Rematas and his colleagues played soccer video games—a lot. They figured out how to extract data from the game and use it to train their software.

Behind the 2-D image on a video game display is underlying data that records how far each object, in this case each soccer player, sits from the camera. The researchers extracted over 10,000 image-depth map pairs from 150 games played on their gaming system. They were not short of volunteers who wanted to pitch in, said Rematas.
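A hypothetical sketch of what one such training example might look like in code (the real data structures are the team's own; the field names and toy values here are assumptions):

```python
# One training example pairs a 2-D frame with a per-pixel depth map
# pulled from the game engine's hidden data.
from dataclasses import dataclass

@dataclass
class TrainingPair:
    frame: list  # grid of grayscale pixel values (the 2-D image)
    depth: list  # same-sized grid: distance from the camera at each pixel

# A 2x2 toy example: the player's bright pixels (255) sit much closer
# to the camera (small depth) than the background (large depth).
pair = TrainingPair(
    frame=[[0, 255], [0, 255]],
    depth=[[50.0, 3.2], [50.0, 3.4]],
)
assert len(pair.frame) == len(pair.depth)  # grids align pixel-for-pixel
```

Ten thousand pairs like this teach the network to guess the depth grid when it is shown only the image grid.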

“It was fun, so everyone wanted to help,” he said.

Now that the program is trained on the video game data, it can take a soccer video, determine where each player is, and recreate them in 3-D. The field and the goal are added back in at the end.
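The data flow just described can be sketched roughly as follows. The stand-in functions here are placeholders for the team's trained models; their names, return values, and the bounding-box format are all assumptions made for illustration.

```python
# A hedged sketch of the described pipeline: find each player in the
# frame, estimate a depth for them, then rebuild the scene in 3-D.

def detect_players(frame):
    """Stand-in: return bounding boxes of players found in the 2-D frame."""
    return [(10, 20, 30, 60), (100, 25, 30, 60)]  # (x, y, width, height)

def estimate_depth(frame, box):
    """Stand-in for the trained network: depth estimate for one player."""
    return 12.5  # illustrative distance from the virtual camera

def reconstruct_scene(frame):
    players_3d = []
    for box in detect_players(frame):
        players_3d.append({"box": box, "depth": estimate_depth(frame, box)})
    # The flat field and the goals are composited back in at the end.
    return {"players": players_3d, "field": "static 3-D model"}

scene = reconstruct_scene(frame="highlight_frame")
print(len(scene["players"]))  # → 2
```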

Other computer scientists see this as an important step forward.

“I think what’s missing is really the content in augmented reality,” said Angjoo Kanazawa, a postdoctoral scholar at UC Berkeley who was not affiliated with the project. To her, the field is a symbiotic relationship, where better content drives better hardware, and vice-versa.

“This [project] is really in that direction,” Kanazawa said. “Three years ago, people wouldn’t have thought we could do what we do today.”

Rendering the 3-D image is not yet instantaneous: the whole process, from feeding a clip into a computer to watching it on the table, takes about 20–30 minutes. The researchers are continuing to refine their process to cut down on this waiting time. One day, they hope, the program will be fast enough to run in real time.

It hasn’t all been fun and games, of course. There was some debate between the European and the American members of the team over whether to use “soccer” or “football” in the name of the project. And there are still key elements missing from the game—like the ball.

The ball is harder for the program to see because it spends so much time in the air. To remedy this, the researchers hope to encode laws of physics into the next version. This will help give the ball the right trajectory once it’s hit.

Despite the wrinkles, the team is optimistic. By the next men’s World Cup in 2022, they want you to be able to watch a live match using their technology.

Rematas described the future he wants to see.

“You put your glasses on and you gather with your friends, and then instead of looking at a monitor, you are looking at a hologram,” Rematas said. “Then you can argue if it was a penalty or not.”