Many, if not most, scientific experiments present us with vast quantities of numbers that we cannot understand without the help of visualisation techniques to render them comprehensible.
While, in some cases, these are nothing more than relatively simple charts or graphs, scientists often need much more sophisticated data-rendering methods to gain insight into complex phenomena.
As computers grow more powerful, so does visualisation software. Currently, it comes in two main flavours: rasterization and ray-tracing. The first works by projecting the 3D model of an object, scene or person onto a flat surface, whereas the second simulates photons of light as they bounce from a light source off an object and into our eyes, following the laws of optics.
The main difference between the two techniques is that rasterization produces only a surface representation, whereas ray-tracing recreates the entire scene, interior included.
“Rasterization looks realistic from the outside, but you can’t explore beyond the surface,” explained Paul Navratil of the Texas Advanced Computing Center (TACC). Ray-tracing, on the other hand, is like a real street in a Western ghost town. “You can walk into the saloon and sit down at the bar.”
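The core of the ray-tracing loop described above can be made concrete with a toy sketch. The plain-Python snippet below (illustrative only; the single sphere, camera position and light are invented, and a real renderer would fire millions of such rays) follows one ray from a camera into a scene, finds where it hits a sphere, and shades the hit point by how directly it faces the light.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def intersect_sphere(origin, direction, center, radius):
    """Distance along the ray to the sphere's surface, or None on a miss."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(origin, direction, center, radius, light_pos):
    """One bounce: find the hit point, then shade it by the light's angle."""
    t = intersect_sphere(origin, direction, center, radius)
    if t is None:
        return 0.0                              # ray misses: background
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = norm(sub(hit, center))
    to_light = norm(sub(light_pos, hit))
    return max(0.0, dot(normal, to_light))      # simple Lambertian shading

# A ray aimed straight at a sphere in front of the camera, lit from above:
brightness = trace((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, (0, 5, 0))
```

Repeating this per pixel, and letting rays bounce recursively off surfaces, is what makes ray-tracing physically faithful and computationally expensive.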
Until now, polygon-based rasterization has been the dominant paradigm, because ray-tracing, though much more physically accurate, also requires much more computing power.
With today’s hardware, though, that limitation is quickly fading: ray-tracing software is finally making its long-awaited comeback.
Developed in collaboration by TACC, the universities of Oregon and Utah, Intel Corporation and Kitware, the company behind the visualisation software ParaView, GraviT (pronounced “gravity”) is a new computer program that automatically recognises the type of problem a researcher is working on and the configuration of the system in use, and then distributes data from the simulation appropriately across multiple computer processors, potentially thousands of them, for visualisation.
The beauty of this new software is that it requires little knowledge of visualisation on the researcher’s part, allowing researchers to get on with their work without interruption.
“Software-based ray-tracing is now viable again. To bring it into the future, so it works on current and future hardware, we need sustainable software. This work can be incorporated into different visualisation packages and into the community of visualisation tools,” said National Science Foundation (NSF) project director Daniel S. Katz.
The software was designed with the near future in mind, when scientists working on supercomputers in the cloud will be creating simulations that are simply too big to be moved for rendering, and will have to be visualised locally, even while they are still running, a process known as “in-situ visualisation”.
A beta release is currently planned for the fall of 2015. The program will extend the system’s first component, called GluRay, which the team rolled out several months ago as an open-source tool on GitHub; it allows researchers to visualise their work on distributed computers, regardless of their architecture or hardware parameters.
GluRay has already been tested on a variety of problems by geologists and astrophysicists, making the final release a highly anticipated event.
Another reason the software is likely to prove useful across the sciences is that many of the phenomena scientists study are computed in much the same way as ray-tracing.
“Whether it’s fluid flow or stellar magnetism, these problems involve tracing particles,” Navratil said. “For all of these problems, the solutions we’re developing will be a big help.”
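The analogy Navratil draws can be sketched in a few lines. The snippet below (purely illustrative; the circular velocity field, step size and step count are invented) advects one particle through a 2D flow field with forward Euler steps, the same follow-a-path-through-a-field structure a ray tracer uses to follow light through a scene.

```python
def velocity(x, y):
    """A toy circular flow field, standing in for, say, a fluid simulation."""
    return -y, x

def trace_particle(x, y, dt=0.01, steps=100):
    """Follow one particle through the field step by step,
    much as a ray tracer follows a ray through a scene."""
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
        path.append((x, y))
    return path

# A particle released at (1, 0) should sweep along the unit circle:
path = trace_particle(1.0, 0.0)
```

Because ray-tracing hardware and software are optimised for exactly this kind of path-following, solvers for such problems stand to benefit from the same machinery.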