What do you do when you have a $2.5-billion vehicle on a world so distant it would take between 4 and 24 minutes (depending on the distance between Earth and Mars) to tell it to “STOP, STOP NOW!”?
The answer is you drive carefully. So before the Curiosity rover moves from waypoint to waypoint on Mars, Ray Arvidson, a planetary scientist at Washington University in St. Louis who serves as a surface properties scientist for the mission, meets by phone with NASA engineers to discuss and plan the rover’s route. Together, the scientists pick a path that skirts deep sand, steep inclines, sharp rocks and cul-de-sacs.
Two years ago Arvidson prepared for these teleconferences by clicking through images made by cameras aboard the rover or satellites orbiting Mars. But path planning from images requires sophisticated photo interpretation skills. Foreground objects obscure background objects, and the flat images make it difficult to gauge size and distance. The planner also must mentally stitch one image to the next to reconstruct the flow of the landscape.
How much easier it would be if the planner could just step onto the Martian surface and walk around, inspecting boulders, sand pits and scree as if he or she were actually there.
And that’s exactly how Arvidson does it today. He was a beta-tester for OnSight, a system co-developed by Microsoft and the Jet Propulsion Laboratory that integrates data from the Curiosity rover to produce a 3-D simulation of the Martian landscape. The imagery is projected on a see-through screen in a head-mounted display called HoloLens, which was developed by Microsoft.
The key to the illusion of being on Mars is a changing vantage point. Artists have labored for years to turn a sheet of paper into a window pane, through which the viewer could glimpse an apparently three-dimensional world. But perspective, the mathematical system worked out for doing this, works only when the drawing is seen from a particular vantage point.
Today, fast computers and efficient computation methods have finally made it possible to track the viewer’s gaze and continually update a perspective rendering so that the viewer remains at the magical vantage point no matter how he or she moves. The result, said Phil Skemer, an associate professor of earth and planetary sciences, is a “three-dimensional model that appears to hover in the center of the room.”
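The idea of re-rendering a scene from wherever the viewer happens to stand can be sketched with a toy pinhole-camera model. The code below is a minimal illustration, not anything taken from OnSight or HoloLens: the function name, the fixed viewing direction, and the sample “rock” coordinate are all made up for the example.

```python
import numpy as np

def project(points, eye, focal=1.0):
    """Project 3-D points onto a plane one focal length in front of the eye.

    Toy pinhole model: the eye looks down the -z axis; each point is
    translated into the eye's frame and divided by its depth.
    """
    rel = points - eye             # point positions relative to the eye
    depth = -rel[:, 2]             # distance along the viewing axis
    x = focal * rel[:, 0] / depth  # the "perspective divide"
    y = focal * rel[:, 1] / depth
    return np.stack([x, y], axis=1)

# The same (hypothetical) rock corner seen from two vantage points:
# as the viewer steps to the right, the point slides left in the image.
# Reproducing that parallax every frame, as the head moves, is what
# sustains the illusion of standing in the landscape.
rock = np.array([[1.0, 0.5, -4.0]])
print(project(rock, eye=np.array([0.0, 0.0, 0.0])))  # from the origin
print(project(rock, eye=np.array([0.5, 0.0, 0.0])))  # half a unit right
```

A head-mounted display does exactly this, only with a tracked head pose instead of a hard-coded `eye`, and fast enough that the rendering never lags behind the viewer’s motion.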
Skemer got involved when Arvidson asked if he might be able to use HoloLens as a teaching tool or for data discovery. Skemer, who directs undergraduate education in earth and planetary sciences, had already done some work with virtual geology and jumped at the chance.
Skemer’s HoloLens applications differ from Arvidson’s in one crucial way. For path planning Arvidson wants the virtual landscape to be at human scale. Skemer, on the other hand, wants to be able to change scale.
“In earth and planetary sciences, a lot of phenomena are scale dependent, so being able to drill down or zoom out gives students a much better idea of how these scales are integrated,” Skemer said.
“We want to be able to look at crystal structures — atomic scale — jump to the size of a hand sample, then jump up to outcrop scale, jump again to mountain-range scale, and maybe go all the way to planetary scale,” Skemer said.
Crystal structures are a good example of the benefits of virtual geology, he said. “It’s very difficult for students to understand them. Two dimensions don’t do them justice. Even if you have a ball-and-stick model, you have to turn it around in your hands and study it from many angles to grasp its symmetry.”
“But with HoloLens plus software called CrystalMaker, we can generate a three-dimensional model of every crystal there is. And because the HoloLens headsets talk to one another, a class of students can all look at the same part of that structure at the same time. It’s perfect for shared discussions.”
“Everyone who has tried it thinks it is spectacular,” Skemer said.
“When we were in Argentina over spring break for an undergraduate field class,” he said, “we collected kilometer-scale drone footage and outcrop-scale photography of the geology and are currently stitching these images together to create more three-dimensional models. Our goal, which is made possible by an anonymous donation, is to assemble a large data set of three-dimensional structures that we can incorporate into a single app for use in earth and planetary sciences classes at Washington University and at other institutions.”