The rise of digital devices and technologies has dramatically increased online activities for individuals, businesses, and governments. And though this accelerated connectivity brings many benefits, it also creates a treasure-trove of data to plunder — along with new forms of foul play.
For example, while traditional bank robberies have declined dramatically since 1991, cyberheists are on the rise. Among these cyberbandits is a multinational gang known as Carbanak, which reportedly siphoned as much as $1 billion from bank coffers in 30 countries over a two-year period.
“The proliferation of online devices and services means that attack surfaces continue to expand — along with the amount of valuable data that is exposed,” said Bo Rotoloni, deputy director of the Information and Cyber Sciences Directorate at the Georgia Tech Research Institute (GTRI) and co-director of the new Georgia Tech Institute for Information Security and Privacy (IISP). “In addition, attribution is difficult, if not impossible. It’s easy for cybercriminals to hide their tracks, unlike brick-and-mortar bandits who go in with guns blazing.”
An interdisciplinary research center that launched in July 2015, the Institute for Information Security and Privacy connects security experts within GTRI, Georgia Tech’s College of Computing, Scheller College of Business, College of Engineering, and Ivan Allen College of Liberal Arts. “The idea is to address very complex problems in cyberspace and conduct research that has a positive societal impact,” Rotoloni said.
By sharing research talent and support infrastructure, the Institute for Information Security and Privacy provides a mechanism to better coordinate large-scale projects, said Wenke Lee, a professor in the College of Computing who serves as the other co-director. “We will enable researchers to move seamlessly between basic and applied research,” he explained. “Information security is an arms race, and this cross-campus partnership will enable us to stay in front.”
Putting the Sting on Malware
Among cybersecurity challenges, malicious software (malware) threats continue to loom large. “In the past five years, commoditization of malware has grown because the investment is small for criminals,” pointed out Christopher Smoak, a research scientist and division chief at GTRI’s Cyber Technology and Information Security Laboratory (CTISL). “People can spend $25 to $50 to get point-and-click tools that not only build malware but also obfuscate it and make it more resilient.”
Enter Apiary, GTRI’s automated malware intelligence system, which allows members to anonymously submit suspicious files for fast analysis — as well as receive information about attacks on other organizations and how they responded.
The Apiary project began in 2010, and since then it has grown from a handful of members to a community of more than 120, including Fortune 500 companies, nonprofits, academic institutions, and government agencies. All members are carefully vetted and anonymity is strictly enforced.
The community involvement is an important piece, Smoak stressed: “Organizations are often reluctant to share information about how they got attacked because it’s akin to airing dirty laundry. And though they may not want to release information in a public arena, it’s critical from a technical perspective to help people in other industries learn from their experiences. Apiary provides a sort of crowd-sourcing threat intelligence.”
Apiary also serves as a research platform. “If someone has a new technique to reverse-engineer malware, they can run it against our repository, which has more than 140 million samples,” said Andrew Howard, CTISL’s director. “All we ask in return is that they share their intelligence.”
In contrast to other detection systems, Apiary features modular capabilities so GTRI can quickly add new technologies without needing to rebuild its analysis engine. Apiary also leverages a hardware-only analysis technique developed at Georgia Tech for transparent analysis, which prevents malware authors from knowing they’ve been outed.
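The modular approach can be pictured as a small plugin registry into which new analysis techniques are dropped without touching the core engine. The sketch below is purely illustrative: every class, module name, and heuristic is hypothetical, not Apiary's actual design or API.

```python
# A minimal sketch of a modular malware-analysis pipeline, loosely
# inspired by the plug-in approach described for Apiary. All names
# and heuristics here are invented for illustration.

class AnalysisEngine:
    def __init__(self):
        self._modules = []  # analysis modules, run in registration order

    def register(self, name, analyze):
        # New techniques can be added without rebuilding the engine.
        self._modules.append((name, analyze))

    def analyze(self, sample: bytes) -> dict:
        # Run every registered module and collect its result per module name.
        return {name: analyze(sample) for name, analyze in self._modules}

engine = AnalysisEngine()
engine.register("size", lambda s: len(s))
# Crude toy heuristic: packed/encrypted payloads tend to contain few zero bytes.
engine.register("looks_packed", lambda s: s.count(b"\x00") < len(s) // 10)

report = engine.analyze(b"MZ\x90\x00" + b"\x00" * 60)
```

Swapping in a new reverse-engineering technique then amounts to one more `register` call, which is the property the modular design is after.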
Researchers are currently working to make Apiary more robust by adding machine-learning models as well as correlation techniques to uncover similarities between seemingly disparate malware. The goal is to provide greater context about the threats.
“Looking at just one file in a vacuum isn’t helpful anymore,” Smoak said. “The big impact comes from understanding the actors behind the malware and what their intentions are.”
Into the Woods
Another tool in GTRI’s cybersecurity arsenal is BlackForest. This open-source intelligence system blends sophisticated collection and analysis capabilities to identify possible attacks — before they happen.
“Although it may be anonymous, there’s a lot of information in social media and hacker forums about attack targets or new malware releases,” Smoak said. Attackers may use Twitter or Facebook to enlist others for distributed denial-of-service (DDoS) attacks, or malware authors may post new code to announce its availability and get feedback, he explained.
To expose malevolent activities, BlackForest crawls through the deep, dark Web looking for clues; then it builds a graph database to connect information. For example, it might link personas in different chat rooms who are working together or related in some way.
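The linking step can be sketched as follows: personas observed in different forums become nodes, and a shared attribute (here, a contact handle) becomes an edge between them. The data, attribute, and linking rule are all invented for illustration; a production system like BlackForest would use a full graph database rather than in-memory dictionaries.

```python
# Toy sketch of graph-based persona linking: connect personas that
# share an identifying attribute across different forums.
from collections import defaultdict

# (persona, forum, observed attribute) -- invented example data
sightings = [
    ("persona_A", "forum1", "icq:12345"),
    ("persona_B", "forum2", "icq:12345"),   # same contact handle as persona_A
    ("persona_C", "forum2", "icq:99999"),
]

# Group personas by shared attribute.
by_attr = defaultdict(list)
for persona, forum, attr in sightings:
    by_attr[attr].append(persona)

# Personas sharing an attribute get an (undirected) edge between them.
edges = set()
for personas in by_attr.values():
    for i, a in enumerate(personas):
        for b in personas[i + 1:]:
            edges.add(tuple(sorted((a, b))))
```

Once the edges exist, an analyst can traverse them to see that two apparently unrelated chat-room identities are likely the same actor or collaborators.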
By automating the collection and monitoring of this kind of data, BlackForest enables security analysts to be more proactive. If an important persona speaks up about a piece of malware, analysts can take steps to protect their networks in advance. Or passwords, credit card numbers, or intellectual property might show up for sale, indicating that a company’s network has been breached.
Researchers are now making BlackForest even more robust by adding machine-learning models, which will enable the intelligence system to start recommending actions.
Protection beyond the Perimeter
Although passwords and fingerprints can block illegal access to cellphones and tablets, they aren’t foolproof. Consider the 7-year-old boy who gained entry to his father’s cellphone by holding the device to his sleeping parent’s hand.
“Passwords are a one-time authentication that only protect the perimeter,” pointed out Polo Chau, an assistant professor in Georgia Tech’s College of Computing. “And if you get past that gate, you can do anything you want.”
Raising the bar on mobile security, Chau and a research team that included College of Computing professor Hongyuan Zha and undergraduate students Premkumar Saravanan and Samuel Clarke have developed LatentGesture, a new approach to authentication based on “touch signature.” For example, some people touch their screens harder or hit the edge of buttons rather than the center. Others may drag their fingers across the slidebar faster or move from the lower left to the top right. Tracking these minute differences, Chau’s technology establishes a touch signature for the mobile phone or tablet owner — and then constantly compares that ID with whoever is currently using the device.
In a lab study, LatentGesture, which is supported by National Science Foundation (NSF) funding, scored a 98 percent accuracy rate for smartphones and a 97 percent rate for tablets. Currently the researchers are making the technology more efficient and investigating how different movements and environmental settings, such as walking or lying on a couch, might affect touch signatures.
“This won’t replace the password,” Chau said. “It’s a complementary security technology that provides ongoing authentication in the background. Even if someone gets past the first line of defense, we can continue to monitor the user to ensure they really are who they claim to be.”
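The continuous-authentication idea can be sketched as comparing each new gesture's feature vector against the owner's stored profile. The features, numbers, and distance threshold below are invented for illustration; the actual LatentGesture system uses learned models over many more touch features.

```python
# Simplified sketch of "touch signature" checking: each gesture yields
# a feature vector (e.g. average pressure, swipe speed, tap offset),
# compared against the owner's profile with a distance threshold.
import math

OWNER_PROFILE = [0.62, 1.40, 0.25]   # assumed stored touch signature
THRESHOLD = 0.3                      # assumed tolerance for normal variation

def is_owner(sample):
    # Accept the gesture only if it is close to the stored profile.
    return math.dist(OWNER_PROFILE, sample) < THRESHOLD

# Checks that would run continuously in the background:
accept = is_owner([0.60, 1.35, 0.27])   # gesture near the stored profile
reject = is_owner([0.95, 2.10, 0.50])   # gesture far from it
```

Because the check runs on every gesture rather than once at unlock, an intruder who gets past the password is still flagged as soon as their touch behavior diverges from the owner's.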
Guilt by Association
In another project, Chau and graduate student Acar Tamersoy have developed a scalable patent-pending algorithm that can detect malware with extreme precision. Named Aesop, after the ancient Greek fabulist’s moral that “a man is known by the company he keeps,” the technique determines a software file’s “goodness” or “badness” by analyzing its relationship with peer files.
Developed in collaboration with Kevin Roundy at Symantec Research Labs, Aesop leverages locality-sensitive hashing and graph mining techniques to quickly see how files relate to one another and establish a reputation score.
“Downloading an application, such as Microsoft Word, involves thousands of files,” Chau explained. “If a malware detection solution knew which files are related, it could label them simultaneously. Yet most current solutions don’t distinguish applications; all they see are files. To get around this blind spot, Aesop essentially reverse-engineers files to uncover their relationships, which improves accuracy in labeling the files as good or bad.”
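The guilt-by-association idea can be sketched in miniature: an unknown file inherits the majority reputation of the files it co-occurs with. The file names, co-occurrence data, and voting rule below are invented for illustration; the real Aesop system uses locality-sensitive hashing and large-scale graph mining rather than a simple vote.

```python
# Toy sketch of reputation-by-association file labeling.
# Which files an unknown file "keeps company with" -- invented data.
co_occurs = {
    "unknown.dll": ["word.exe", "helper.dll", "dropper.bin"],
}

# Known labels for some peer files.
reputation = {"word.exe": "good", "helper.dll": "good", "dropper.bin": "bad"}

def score(file):
    # Label the file by the majority reputation of its labeled peers.
    votes = [reputation[p] for p in co_occurs[file] if p in reputation]
    good = votes.count("good")
    return "good" if good > len(votes) / 2 else "bad"
```

The payoff of working at this relational level is the one Chau describes: files belonging to the same application can be labeled together instead of one by one.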
Aesop builds on previous reputational scoring that Chau did as an intern for Symantec while earning his graduate degree at Carnegie Mellon University. This earlier technique looked at the relationship between files and machines — assuming computers with good hygiene would attract fewer malicious files. Although the approach was successful, Aesop detects malicious files more accurately.
In fact, Aesop can identify 99 percent of benign files and 79 percent of malicious files at least a week earlier than current state-of-the-art techniques. In addition, it boasts a 0.9961 true positive rate at flagging malware and a 0.00001 false positive rate. Symantec is now deploying Aesop into its suite of security solutions.
Defending the Voice Channel
Among new landscapes for nefarious activity is the phone.
“Telephony used to be a closed and trusted system, but with the rise of smartphones and technologies like VoIP, telephony and Internet systems have converged,” pointed out Mustaque Ahamad, a professor in the College of Computing who serves as an external technical adviser to the Federal Trade Commission and recently won a Google Faculty Research Award to study telephony-based threats. “As a result, threats that we’ve been dealing with on the Internet side are now showing up in telephony.”
Issues range from annoying robocalling and voice spam to more malicious activities, such as phone fraud campaigns, voice phishing (vishing), and caller-ID spoofing.
To help combat these new threats, Ahamad and former Ph.D. student Vijay Balasubramaniyan developed an audio fingerprinting technology that can determine the true source of a phone call. Licensing the technology from Georgia Tech, the two researchers launched a startup company in 2011. Since then, Pindrop Security has been growing quickly and now counts more than 100 employees.
In other groundbreaking work, Ahamad and collaborators from New York University Abu Dhabi and industry recently built the first large-scale telephony honeypot — PhoneyPot — to lure voice-channel villains and study their exploitation techniques.
“Although honeypots are common on the Internet, they present greater challenges in the voice channel,” Ahamad observed. Among these: the expense of obtaining a large, diverse pool of phone numbers and routing calls, determining how best to engage callers to reveal their real agendas, and adhering to telephone recording laws.
Ahamad’s team obtained more than 39,000 phone numbers from a cloud-based telecom service provider to construct PhoneyPot. Over a seven-week period, they received 1.3 million unsolicited calls from 252,621 unique sources, and analysis of the calls revealed several abuse patterns, including debt collection, telemarketing, and DDoS attacks. Among trends, the researchers found that older phone numbers attracted more calls than newer ones.
The researchers presented a paper on PhoneyPot at the Internet Society’s 2015 Network and Distributed System Security Symposium in February. This paper, which outlines how to construct a successful telephony honeypot, won a distinguished paper award. (Several telephony honeypots now operate around the globe to collect intelligence on telephony attacks.)
Moving forward, Ahamad and Manos Antonakakis, an assistant professor in the School of Electrical and Computer Engineering and an adjunct faculty member in the School of Computer Science, are now studying an even newer phenomenon: cross-channel attacks.
Cross-channel attacks combine resources from both telephony and Internet channels. For example, a text message may trick smartphone owners into clicking a link that causes excessive charges on their phones — or lure them to a bogus website where they are conned into inputting credentials.
“This is a mutation of online abuse that now reaches our mobile devices,” Antonakakis said. “And it’s quite successful. Because mobile devices are smaller, you’re less likely to notice something fishy about a domain name or the method itself.”
Sponsored by the NSF, the research aims to gain situational awareness and develop techniques to mitigate attacks. “In addition, we want to understand how intelligence available from one channel can help us defend the other channel,” Ahamad said.
Measuring Network Security
Earlier this year Antonakakis launched the Astrolavos Lab, which specializes in network security, anomaly detection, and data mining. Among recent milestones, the researchers have created a tool that shows how companies’ technology investments have mitigated the risk of attack.
“Our metric solves a large problem in the security community,” Antonakakis observed. “Until now, the only thing available was ethical hackers — consultants who come in and try to attack existing infrastructure and then give their subjective opinion on how resilient the network is.”
Yet by leveraging large datasets and machine-learning techniques, Antonakakis’ team has been able to create an objective methodology that security officers can use to independently evaluate and score network resiliency. Currently they are testing the metric on Georgia Tech’s network.
In other security projects, Antonakakis’ team has been investigating the impact of botnets (networks of Internet-connected computers that are infected without their owners’ knowledge). Looking at TDSS/TDL4, one of the largest mass infections to hit the online advertising community, the researchers revealed financial damages of more than $650 million. In contrast to one- or two-week snapshots, the team used four years of data from a major North American ISP — marking the first large-scale longitudinal study to measure botnet abuse.
“The extent of the abuse is a key takeaway,” said Antonakakis. “This is not only important for developing network policy and remediation strategies, but also to prosecute the people behind these criminal activities. Judges must be able to see how much damage has occurred.”
The researchers are now creating a standardized unit to measure botnets and other mass infections — a project sponsored by the U.S. Department of Commerce’s National Institute of Standards and Technology. “This is important not only to understand the size of the botnet population and their impact, but also to help organizations more effectively prioritize their responses,” Antonakakis explained.
Securing the Internet of Things
Currently an estimated 15 billion physical objects use the Internet to exchange data — a number expected to reach 50 billion by 2020. Known as the Internet of Things (IoT), this includes everything from cellphones and smart watches to heart-monitoring implants and home automation.
Within the IoT community, embedded controllers are a growing security concern, especially those used in industrial control systems (ICS) to control physical processes. “Once connected only locally, these devices are increasingly connected via the Internet,” said Lee W. Lerner, a researcher at GTRI’s CTISL. “Their security lags behind general computing devices like laptops, and Internet access makes them much easier to find and attack.”
Fallout depends on the specific device. “ICS environments are a primary concern from a nation-state level because that’s how attackers can harm critical infrastructure, such as energy utilities or manufacturing processes — things that can have devastating economic impact,” Lerner said, pointing to the Stuxnet worm that infected programmable logic controllers in Iranian industrial facilities in 2010.
In response, GTRI is developing novel inspection tools and techniques to determine how trustworthy embedded controllers might be or if anything malicious has been inserted in their design.
Another IoT initiative takes a proactive approach to security by building in integrity from the get-go.
In collaboration with Virginia Tech, GTRI has developed an architecture that provides process resilience against cyberattacks on physical targets. Known as Trustworthy Autonomic Interface Guardian Architecture (TAIGA), the design ensures stability regardless of what else may be happening within a computational system. “The idea is to develop a root of trust — a core computational component that will always perform the way the designer intended without any additional functionality,” Lerner explained. “It acts as a last line of defense, much like interlocks on mechanical equipment.”
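The interlock analogy can be sketched in a few lines: a small, trusted guardian sits between higher-level controllers and the physical process, and clamps any command that would push the process outside its safe envelope. The limits and the motor-speed example below are invented for illustration; TAIGA itself is a hardware architecture, not a Python routine.

```python
# Toy sketch of a "last line of defense" guardian for a physical process.
SAFE_MIN, SAFE_MAX = 0, 100   # assumed safe motor-speed envelope

def guardian(commanded_speed):
    # Enforce the safety envelope no matter what the upstream
    # (possibly compromised) controller requests.
    return max(SAFE_MIN, min(SAFE_MAX, commanded_speed))
```

Because the guardian's logic is tiny and fixed at design time, it can keep the process stable even when everything above it in the control stack is under attack, which is exactly the root-of-trust role Lerner describes.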
Now that TAIGA has reached a level of maturity, researchers are developing lab experiments to demonstrate the design. Among these is a Johnny 5 robot whose IP address will be accessible over the Web; individuals will be encouraged to try to hack its control system. Another experiment will feature a motor in an industrial control system that receives commands from higher-level units. GTRI visitors will be able to see how the motor remains protected under attack.
Beefing up security on embedded controllers is a different ballgame from protecting networks, encrypting data, or securing the connections between servers and devices. “We’re working at the leaf node — the computational component of the system that directly interfaces with physical processes or people,” Lerner explained. “We’re focused on information that is configuring the hardware or implementing control algorithms on these devices.”
Silencing Side-Channel Signals
Even when computers and smartphones are not connected to the Internet, they can be vulnerable to hackers due to the low-power electronic signals they emit. These “side-channel signals” include electromagnetic emissions, acoustic emissions, and power fluctuations, which can be measured up to a few yards away by a variety of spying devices. Electronic eavesdroppers can learn passwords and encryption codes — and even see what someone is writing in an email or Word document.
“Although side-channel emissions are not an epidemic, they have been abused — it’s just not as well known as hacking a computer,” said Alenka Zajic, an assistant professor in the School of Electrical and Computer Engineering who is investigating the phenomenon along with Milos Prvulovic, an associate professor in the School of Computer Science, and graduate student Robert Callen.
Among other milestones, the team has developed a way to measure the strength of side-channel emissions. In a test on three different laptops, the researchers found the largest signals occurred when processors accessed off-chip memory. “It’s impossible to eliminate all side-channel emissions, so the idea is to determine which ones cause the largest threats and try to muffle them,” Zajic explained.
Building on this earlier work, the researchers are now developing algorithms to quickly evaluate spectral patterns and find system vulnerabilities in the frequency domain. For example, in one experiment, the researchers determined that the loudest amplitude-modulated emissions were generated by voltage regulators, memory refresh activity, and DRAM clocks. The research is sponsored by NSF and the Air Force Office of Scientific Research (AFOSR).
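The frequency-domain idea can be illustrated with a toy trace: correlate a captured signal against candidate frequencies and report which component is loudest. The sample rate, frequencies, and synthetic signal below are all invented for illustration; the team's actual measurements involve real electromagnetic captures and far more sophisticated analysis.

```python
# Toy sketch of finding the loudest component in an "emission" trace
# by evaluating single frequency bins of the discrete Fourier transform.
import math

RATE = 1000   # samples per second (assumed)
N = 1000      # one second of data

# Synthetic trace: a strong 60 Hz component plus a weak 200 Hz one.
signal = [2.0 * math.sin(2 * math.pi * 60 * t / RATE)
          + 0.3 * math.sin(2 * math.pi * 200 * t / RATE)
          for t in range(N)]

def magnitude(freq):
    # Correlate the trace with cosine/sine at `freq` (one DFT bin).
    re = sum(s * math.cos(2 * math.pi * freq * t / RATE)
             for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / RATE)
             for t, s in enumerate(signal))
    return math.hypot(re, im)

# Rank candidate frequencies by emission strength.
loudest = max([60, 200, 350], key=magnitude)
```

In this toy version the 60 Hz component dominates; in the researchers' experiments, the analogous "loudest" bins pointed at voltage regulators, memory refresh activity, and DRAM clocks.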
“What distinguishes our research is that we’re looking beyond breaking encryption to monitor software activity,” Zajic said. “We’re building analytic tools to understand why and how side-channel emissions occur. Once we have answers, they can be used in many ways — from protecting computers so they don’t leak to exploiting the side emissions to help with program debugging.”
Source: Georgia Tech