During an emergency situation, the speed of data exchange between responders on site and decision-makers back at the office is crucial for implementing a quick and effective rescue mission.
To this end, a team of researchers at the Rochester Institute of Technology (RIT) has developed a new technique, called the Multi Node Label Routing Protocol (MNLRP), which could be used to improve the flow of information at a time of need.
“Sharing data on the Internet during an emergency is like trying to drive a jet down the street at rush hour,” said co-principal investigator on the project Jennifer Schneider from RIT. “A lot of the critical information is too big and data-heavy for the existing internet pipeline.”
Currently, emergency responders have no choice but to use the same networks as the civilian population, which means that critical information, such as mapping images, cell-phone location data, video chats, and voice recordings, has to compete with the inflow of tweets and messages, clogging the network and causing local failures.
The MNLRP works by finding an alternate path for data as soon as a link or node fails, which may allow it to recover up to six times faster than older routing protocols. It also runs below the existing Internet protocols, so normal traffic continues without interruption.
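The failover idea can be sketched in a few lines. This is a hypothetical illustration of fast local reroute, not RIT's actual implementation: each node stores a precomputed backup next hop alongside its primary one, so when the primary link fails, forwarding switches immediately rather than waiting for a network-wide route recomputation. All node names and the topology are invented.

```python
# Hypothetical sketch of fast local reroute: each node keeps a
# precomputed backup next hop. When the primary link fails, forwarding
# falls back to the backup at once, with no global recomputation.
# Node names ("incident", "n911", "relay", "EOC") are invented.

# Next-hop tables toward destination "EOC" (emergency operations centre).
primary = {"incident": "n911", "n911": "EOC"}
backup = {"incident": "relay", "relay": "EOC", "n911": "relay"}

def route(src, dst, failed_links):
    """Walk hop by hop toward dst, using backup next hops when the
    primary link is in the failed set."""
    path, node = [src], src
    while node != dst:
        nxt = primary.get(node)
        if nxt is None or (node, nxt) in failed_links:
            nxt = backup[node]  # instant local failover
        path.append(nxt)
        node = nxt
    return path

# Healthy network takes the primary path; a failed link reroutes at once.
print(route("incident", "EOC", set()))
print(route("incident", "EOC", {("incident", "n911")}))
```

The key design point is that the repair is local: only the node adjacent to the failure changes its behaviour, which is what makes recovery fast compared with protocols that must re-converge network-wide.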
“The new protocol is actually of very low complexity compared to the current routing protocols, including BGP [Border Gateway Protocol] and OSPF [Open Shortest Path First],” explained lead author on the study Professor Nirmala Shenoy. “This is because the labels and protocols leverage the connectivity relationship that exists among routers, which are already sitting on a nice structure.”
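Shenoy's point about labels leveraging the routers' existing structure can be illustrated with a toy tree-label scheme. This is my own sketch under assumed conventions, not the MNLRP label format: each router's dotted label encodes its position in a tree (so "1.2.3" is a child of "1.2"), and a forwarding decision reduces to comparing label prefixes instead of consulting a large routing table.

```python
# Toy sketch of structural label routing (an assumed dotted-label
# convention for illustration, not the actual MNLRP format). A router
# labelled "1.2" has parent "1" and children "1.2.x"; the forwarding
# decision uses only the labels themselves.

def next_hop(here, dest):
    """Pick the next hop from router `here` toward router `dest`
    by label comparison alone, with no routing-table lookup."""
    h, d = here.split("."), dest.split(".")
    if d[:len(h)] == h:                  # dest lies in our subtree:
        return ".".join(d[:len(h) + 1])  # step down one level toward it
    return ".".join(h[:-1])              # otherwise climb to our parent

# Route from router 1.2.3 up and across the tree to router 1.5.
hop = "1.2.3"
path = [hop]
while hop != "1.5":
    hop = next_hop(hop, "1.5")
    path.append(hop)
print(" -> ".join(path))
```

Because each hop is decided by a constant-time label comparison, the per-router state stays tiny, which is one way a protocol can be "of very low complexity" compared with BGP or OSPF.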
A test conducted this month, in which the team ran data across 27 nodes representing the networks of the incident control centre, the 911 control centre, and the office of emergency management, showed that while BGP took about 150 seconds to recover from a link failure, MNLRP took only 30.
According to Shenoy, the main problem with current protocols is that they were invented several decades ago and are therefore poorly suited to the network scenarios that arise on today's Internet.
The team will keep improving the protocol and attempt a real-world test in the foreseeable future.