The deep and extensive hacking of Sony Corporation in late 2014—stealing Sony’s most precious pieces of intellectual property and embarrassing secret email conversations—shows how vulnerable every company is to cybercrime. With the growing reliance on software and communications for global supply chain operations comes growing vulnerabilities in these systems. Since 2004, five key trends have increased the risks of IT-related disruptions to companies’ operations. These key trends are: more outsourcing of IT infrastructure to cloud computing, more Internet connectivity of industrial devices, more use of personal devices on corporate networks, more personal data online, and an overall increasing reliance on Internet-connected technologies to run global supply chains.
All of these trends stem from the declining cost and increasing utility of the technology, and they increase the risk of supply chain disruptions resulting from IT or communications failures. In fact, disruptions of information systems and telecommunications consistently rank as the number one or number two most frequent source of supply chain disruptions.1 These vulnerabilities go beyond accidental disruptions caused by power outages, computer downtime, or software bugs. Intentional disruptions include theft of intellectual property, monetary and reputational damage from theft of customer data, and sabotage of products and manufacturing operations.
“Our operating system was never built for digital security,” said an oil industry executive to the US Council on Competitiveness.2 “There have been specific cases in which hackers got all the way into the digital process controls. As we’ve moved into higher levels of digital integration, creating visibility through the value chain, our systems have become electronically linked. Automating oil field production increases the level of exposure as well. And cyber-vulnerabilities create physical security problems. Physical security is enabled by digital security—all physical security-locking mechanisms are now IT controlled. Security has become a strategic issue.”
In June 2010, a Belarusian malware-detection firm found a strange new computer worm: a type of malware that spreads independently without user action or knowledge. This new type of computer worm was infecting computers in Iran, and it had a very complex design, used stolen security certificates, employed four previously unknown vulnerabilities, and behaved quite strangely.3
Named Stuxnet, the computer worm illustrated a systemic vulnerability of manufacturing systems. Stuxnet spread within networks of Windows PCs via previously unidentified weaknesses in both USB flash memory sticks and networked printers. This enabled it to spread across an organization, and even to PCs that were isolated from the Internet (so-called air-gapped computers).
But Stuxnet did not attack these PCs. Instead, it looked for any computers that were being used to manage Siemens programmable logic controllers, which are used in factories around the world.4 When Stuxnet found a connection to an industrial controller, it gathered information about the equipment attached to the controller and could reprogram the controller and turn off alarms. Stuxnet is thought to have been cooked up by US and/or Israeli spy agencies to infect and damage Iran’s nuclear weapons program by making Iran’s uranium enrichment centrifuges spin out of control.5
Given Stuxnet’s sophisticated abilities to spread without being detected, it’s not surprising that it escaped Iranian detection. After Stuxnet was first reported, Chevron found Stuxnet in its systems, possibly arriving on an infected thumb-drive. “I don’t think the U.S. government even realized how far it had spread,” said Mark Koelmel, general manager of the earth sciences department at Chevron. “I think the downside of what they did is going to be far worse than what they actually accomplished.”6 By September 2010, Stuxnet had spread to 100,000 infected hosts in Iran, Indonesia, India, Azerbaijan, Pakistan, the United States, and over 100 other countries.7
Nor are Siemens industrial controllers the only ones vulnerable to hackers. Security researchers found major vulnerabilities in facilities management devices that use Tridium’s Niagara framework. Niagara runs on some 11 million devices in 52 countries to remotely control electronic door locks, lighting systems, elevators, electricity and boiler systems, video surveillance cameras, alarms, and other critical building facilities.8 “There are hundreds of thousands of installations on networks, including [Defense Department] installations and Fortune 500 firms,” said Billy Rios, coauthor of Hacking: The Next Generation, a handbook for security experts.9 “These customers have no idea they are exposed,” Rios continued. For example, Singapore’s Changi Airport—a major airfreight logistics hub for Asia—uses Niagara to manage more than 110,000 devices and sensors.10
Residents of Great Falls, Montana, got a bit of a shock on the evening of February 11, 2013, when their TVs broadcast an official Emergency Alert System (EAS) message warning: “The bodies of the dead are rising from their graves and attacking the living. Do not attempt to approach or apprehend these bodies as they are considered extremely dangerous.” Zombies were in Great Falls! “This was a prank,” said James Barnett, a retired Navy rear admiral and partner in the cybersecurity practice at law firm Venable. “But if something was done to try and panic the public—or even worse, to interrupt communications during an actual emergency—that’s pretty serious.”11 After the hoax zombie warning, the Federal Communications Commission issued an “urgent advisory” to all television stations, requiring them to immediately change the passwords on all EAS-related equipment, place the devices behind firewalls, and check for bogus alerts.
Examples like Stuxnet, Niagara, and the EAS Zombie hoax illustrate the amount of critical infrastructure that is on the Internet. In the past, industrial and facility-related devices were relatively isolated from the external environment because they used dedicated and proprietary connections to an organization’s computers. But modern machines rely more on open Internet-based connectivity and easy-to-use web-based interfaces that create a double vulnerability. First, hackers can now reach in and access these devices to then monitor or disrupt company activities. But the more insidious vulnerability is that hackers can infect these devices. Thus, a printer can become a host for a computer worm that attacks any machine on the same network as the printer.12 Malware can even infect smartphone rechargers,13 keyboards, and computer mice.14 The problem will only get worse, because by 2015 an estimated 25 billion devices will be on the Internet in what is referred to as “The Internet of Things,”15 or, more aptly, “The Internet of Everything.”
During the first half of the 2013 holiday shopping season, US retailer Target enjoyed “better than expected sales,” said Chief Executive Gregg Steinhafel.16 But in the midst of that holiday cheer and ringing cash registers lurked a major cybercrime in progress. During the frenzy of shopping over the Thanksgiving weekend (November 29 was “Black Friday”—the day of the highest shopping volume in the United States) and until the intrusion was detected on December 12, every time a Target customer swiped his or her card, hackers swiped the card number.
This crime began months before and illustrates the security threats latent in supply chains. Extensive analysis by private and public security experts17 estimated that sometime in September 2013, criminals (probably operating in Russia or Eastern Europe) sent a phishing email to one or more workers at Fazio Mechanical Services in Sharpsburg, Pennsylvania. At least one recipient opened the email, thereby infecting one of Fazio’s computers with an off-the-shelf password-stealing program known as Citadel.18
That may not seem relevant to the Target story, except that Fazio was a vendor to Target, working on the retailer’s HVAC (heating, ventilation, and air conditioning) systems at Target stores in the western Pennsylvania region.19 As a vendor, Fazio had accounts on Target’s normal electronic billing, contract submission, and project management systems.20 The criminals’ password-stealing bot gave them access to Fazio’s accounts on Target’s systems.21
Although Target maintained a separation between the vendor portal and its credit card processing systems, the criminals seem to have found a way to breach that firewall and gain access to Target’s network of 62,000 POS (point-of-sale) terminals. Sources estimate that between November 15 and November 28 (the day before Black Friday), the attackers successfully installed data-stealing software on a small number of cash registers within Target stores.22 When the test worked, the criminals rolled out their malware to the rest of Target’s POS terminals.
Getting into Target was only half the battle; the hackers also needed to get the stolen data out of the company—an act that can raise alarms. To do this, the criminals commandeered a control server inside Target’s internal network to become a central repository for the data taken from all of the infected registers.23 Beginning on December 2, the attackers used a virtual private server (VPS) located in Russia to download the stolen data. Over a two-week period, they extracted a total of 11 GB of stolen customer information.24 The data included 40 million credit card numbers as well as nonfinancial data (names, email or physical addresses) on as many as 70 million customers.25
The thieves then began uploading blocks of 1 million credit cards at a time for sale at prices ranging from $20 per card to $100 per card.26 Bank security officials first became aware of the breach through routine monitoring of known black market sites. A bank bought some of their own cards from the thieves and discovered that the common denominator among the cards was purchase activity at Target. The bank alerted federal officials, federal officials talked to Target, and Target uncovered the breach and announced it to the public on December 19.27
The crime and the aftermath of the very public, headline-making announcement of the breach hit Target from many directions. “Results softened meaningfully following our December announcement of a data breach,” said Target’s CEO.28 Profit in the holiday quarter was nearly halved from a year earlier to $520 million, and revenue slid 5 percent to $21.5 billion.29 Other damages included: $17 million in expenses (net of insurance payout) at Target,30 $200 million in card replacement costs for financial institutions,31 an estimated $1.1 billion for fraudulent transactions on stolen cards,32 and the potential for up to a $3.6 billion fine for Target for violating PCI-DSS (payment card industry data security standards).33
Target’s problem was by no means unique in a world of Internet-connected supply chains, outsourcing, Internet-of-Things devices, and cloud-hosted IT systems. Customer data was stolen in 2014 from Home Depot, Goldman Sachs, and many others—in total, 375 million customer records were reported stolen in the first half of 2014.34 Even the US Department of Homeland Security was hit as detailed records on employees and contractors were stolen from a contractor responsible for background checks.35 “We constantly run into situations where outside service providers connected remotely have the keys to the castle,” said Vincent Berk, chief executive of FlowTraq, a network security firm.36 Cybercriminals have entered and gained a beachhead inside company networks via video conferencing equipment, vending machines, printers, and even thermostats. Estimates of the percentage of breaches caused by third-party suppliers range from 23 to 70 percent.37
On August 16, 2013, the unthinkable happened. Google went down.38 And with it went 40 percent of all Internet traffic.39 Although Google’s systems came back up in only a few minutes, the crash showed just how much of the world depends on this one company’s sprawling global network of servers. Beyond its use for Internet searches, many companies use Google’s public and corporate services for mail, file storage, collaborative document editing, mapping, navigation, Android smartphones, and more.40
Google isn’t the only Internet-based service on which people and companies depend. Another key trend affecting the safety of IT systems is the rising use of cloud computing—outsourced software services and data hosted on distributed server systems. Also called software-as-a-service (SaaS), the cloud vendor promises high reliability, low cost, and worldwide access. Although cloud-based systems use geographically distributed data centers and independent server farms to promise extremely high reliability, they can still fail. Microsoft’s Azure platform failed twice in twelve months: once over a lapsed security certificate41 and again when a relatively minor component of its cloud went down globally.42
In other instances, a minor outage cascades into something larger. In April 2011, a minor outage in Amazon’s East Coast data center cascaded when systems designed to ensure Amazon’s reliability actually clogged the network with what Amazon described as “a re-mirroring storm.” A sudden loss of access to data caused automatic data replication across the network, jamming it with data traffic.43
Cisco, like many companies, has adopted a BYOD (bring your own device) policy, in which employees use their personal devices instead of company-provided ones. “At the end of 2012, there were nearly 60,000 smartphones and tablets in use in the organization—including just under 14,000 iPads—and all of them were BYOD,” said Brett Belding, senior manager overseeing Cisco IT Mobility Services. “Mobile at Cisco is now BYOD, period.”44 BYOD lowers the cost of information technology for Cisco and enables employees to carry just one device of their own choosing. It’s a win for both the company and employees, but it’s also a potential win for cyber attackers.
“If you get an infected device or phone coming into your business, your intellectual property could be stolen,” said Chuck Bokath, senior research engineer at the Georgia Tech Research Institute.45 For example, mobile versions of the FinSpy/FinFisher malware allow attackers to log incoming and outgoing calls; conceal calls to eavesdrop on the user’s surroundings; and steal data such as text (SMS) messages, contact lists, and phone/tablet media (such as photos and videos).46
Mobile malware is growing.47 Malware can infect a smartphone and enter corporate IT systems through several means. The most prevalent vectors for infection are Trojan apps—malware apps masquerading as something the user might want, such as a popular game, a plug-in for playing media, or even a fake antivirus app. Although Apple and Google try to run malware-free app stores, the vetting process may be imperfect. Moreover, the open Android platform lets users “side-load” apps from any source they choose, which makes the platform vulnerable. And if the user “jail-breaks” his or her phone to bypass security systems imposed by the phone maker or cellular service provider, then the phone becomes much more vulnerable. For example, jail-broken iPhones are susceptible to a worm called IOS_IKEE that has the ability to accept remote commands, collect corporate information, and send it to a remote server.48
Mobile malware can come in other forms, too. For example, one attack on Android phones comes in the form of an SMS message that appears to be a link for a DHL package tracking message. If the user clicks on the link, the phone downloads malware that steals the user’s data and sends infected SMS messages to everyone on their contact list.49 Malware even came installed on the Android SD cards of Samsung’s S8500 Wave and Vodafone’s HTC Magic smartphones.50
As of 2013, Android devices were the target of 98.1 percent of these malicious programs, owing to the platform’s growing popularity and more open app-loading environment.51 Criminals do follow the path of money. As tablets begin to outsell computers, criminals are building more tools to attack tablets, too.52
In 2013, most mobile malware targeted consumers with spam advertising, theft via premium SMS services, theft of banking information, and pay-to-unlock extortion-ware scams.53 Of concern to corporate and supply chain security are three categories of malware. The first category is systems used in bank account theft, because they can also be used to breach corporate two-factor security systems by stealing SMS messages that contain the second password. The second includes cases in which Android malware was a vector for installing Windows malware: bring-your-own-device can become bring-your-own-virus. Third are more general backdoor malware systems that can be used to eavesdrop and steal mobile device data, including corporate data. “BYOD is a real problem,” acknowledged Matthew Valites, Cisco’s CSIRT manager for information security investigations.54
At the 2013 Defcon hacker conference, two security researchers demonstrated a particularly nasty proof of concept.55 The researchers showed that they could take over the steering wheel of a Toyota Prius through its lane-keeping and parking-assist software, and they could command the steering wheel to jerk sharply while driving on the highway. They could also apply the brakes or disable the vehicle at any time.
Nor was the Prius the only car they found a way to control. They also found that they could completely disable the brakes on the Ford Escape at low speed. Other hacks affected dashboard displays—changing the speedometer and the odometer, and even spoofing the GPS location of the vehicle. Fortunately, these security researchers were “good guys.” In fact, DARPA had funded their efforts to help root out security vulnerabilities in automobiles.56
Car makers aren’t the only manufacturers who may have to worry. More and more products come with smartphone or direct web integration. Besides the unsurprising range of Internet-connected consumer electronics such as big screen TVs, stereos, gaming platforms, and home automation systems, there is a growing range of connected home appliances such as ovens,57 crockpots,58 and washing machines,59 as well as children’s toys.60 These systems can be breached to allow criminals to control these devices, monitor the home, and access the homeowner’s local network. In fact, security analysts uncovered five different vulnerabilities in Belkin’s WeMo line of home automation products (light bulbs, light switches, and remote monitoring devices) that could be used for such purposes.61 Another major flaw was the Heartbleed vulnerability that allowed penetrations of widely used secure networking protocols and went undetected for two years.62 This vulnerability demonstrated that virtually any smart product might contain an exploitable flaw.
Furthermore, the issue of counterfeit chips discussed in chapter 7 implies that procurement also plays an ongoing role in product security. “As we connect our homes to the Internet, it is increasingly important for Internet-of-Things device vendors to ensure that reasonable security methodologies are adopted early in product development cycles. This mitigates their customer’s exposure and reduces risk,” said Mike Davis, principal research scientist at the security research firm IOActive.63
Finally, as mentioned earlier in this chapter, the Internet-of-Things trend is also invading freight transportation. Tractors, trailers, cargo, and even individual packages can be tracked in real time.64 Wireless monitoring of axle weight, reefer temperature, and engine performance can certainly improve operations. All these digital links and devices, however, likely introduce unknown vulnerabilities.
When researchers at MIT wanted to test the intensity of digital devilry, they simply attached a clean, brand-new computer to MIT’s network and monitored all the attempts to access or attack the virgin machine.65 Within 24 hours, they had logged some 60,000 attempts to breach the machine. Attacks had come from every country in the world with the exception of Antarctica and North Korea. Most disturbing was that many of the attacks had come from other machines inside MIT’s network, indicating that they had presumably been infected earlier.
“We are seeing some disturbing changes in the threat environment facing governments, companies, and societies,” said John N. Stewart, senior vice president and chief security officer at Cisco.66 Far more serious than bored teenage vandals or Viagra-touting spammers are the growing risks of deliberate IT disruptions and threats. Hit-and-run malicious pranksters have been supplanted by more malevolent persistent threats tied to organized crime, state-sponsored corporate espionage, and cyber warfare.
An analysis of over 47,000 IT security incidents during 2012 found that 75 percent were financially motivated cybercrimes.67 Typical targets for these kinds of intrusions include retail organizations, restaurants, food-service firms, banks, and financial institutions.68 The crimes often involve highly organized, tightly coordinated teams operating on a global scale. For example, criminals stole 1.5 million records from an electronic payments processor in 2008 and made fake ATM cards. They then used the cards during a tightly timed period to withdraw more than $9 million in 49 cities around the world.69
Malware has matured, literally. Famous computer viruses of the past such as Melissa,70 the Love Bug,71 and Bagle72 were largely the work of mischievous individuals. But what started as a loose assortment of youthful hackers has become an industry with a bona fide supply chain of tool vendors, vulnerability suppliers, data thieves, and distributors and retailers of stolen data.73 Criminals can now buy “fraud as a service.”74
Packaged attack kits sell for between $40 and $4,000, with an average retail price of approximately $900.75 Underground marketplaces and organizations cater to creating and selling vulnerabilities in popular operating systems and software (Windows, Adobe Flash, Adobe PDF, Java, and server software). A vulnerability for a financial site might retail for as much as $3,000.76 The worst, and highest-priced, vulnerabilities are the so-called zero-day vulnerabilities, which are IT security flaws in a piece of software that have no known countermeasures because no one even knew the vulnerability existed until the attack occurred. Zero-day vulnerabilities that allow a hacker access to any Windows machine anywhere sell for as much as $100,000.77 Attackers can also rent networks of infected PCs called botnets for $40 per day for launching penetration attacks, distributed denial of service attacks, click fraud, and spam.
Bogdan Dumitru, CTO at BitDefender, an antivirus firm, estimates that between 70 percent and 80 percent of malware now comes from organized, well-financed groups.78 The sites offering these services are as sophisticated as any e-commerce site. Just click on the malware product you want, pay, and it is on its way to you. There’s even the opportunity to buy optional modules, maintenance agreements, and customization from the underground vendors of malware. Of course, malware buyers might well wonder to whom they just gave their credit card details for these services….
Google uncovered a cyber-intrusion in December 2009. At first, it seemed to target the email accounts of human rights activists,79 but further study found that the attack was both deeper and broader. What became known as “Operation Aurora”80—named after clues inside the attacker’s software—emanated from China and sought proprietary data and software codes from Northrop Grumman, Dow Chemical, Juniper Networks, Morgan Stanley, Yahoo, Symantec, Adobe, and at least 27 other companies.81,82
“We have never ever, outside of the defense industry, seen commercial industrial companies come under that level of sophisticated attack,” said Dmitri Alperovitch, vice president of threat research for McAfee, Intel’s antivirus software subsidiary.83 Aurora exemplified a new kind of attacker: the advanced persistent threat (APT). Aurora burrowed deep into the networks of target companies, using several levels of encryption and nearly a dozen pieces of malware. “In this case, they’re using multiple types against multiple targets—but all in the same attack campaign. That’s a marked leap in coordination,” said Eli Jellenc, head of international cyber-intelligence for VeriSign’s iDefense Labs.84 “It’s totally changing the threat model,” Alperovitch added.85
Security firm Mandiant uncovered one advanced persistent threat (which they called APT1 and others have called “Comment Crew”) that had compromised at least 141 companies spanning 20 major industries. Unlike the Hollywood vision of hackers broadcasting zombie alerts or making laughing skulls dance on their victims’ screens, APTs try to stay undetected so they can plunder broad categories of companies’ intellectual property over a long period of time. In a report entitled, “Exposing One of China’s Cyber Espionage Units,” Mandiant stated that APT1 maintained access to victims’ networks for an average of a year, with the longest incursion lasting four years and ten months.86 APT1 searched corporate networks to steal copies of technology blueprints, proprietary manufacturing processes, test results, business plans, pricing documents, partnership agreements, emails, and contact lists.87 Even the source code of Windows 2000 was stolen from Microsoft in 2004, allowing hackers not only to sell unlicensed software but to analyze the code in order to find new vulnerabilities. In a final ironic twist, malware-infected copies of the Mandiant security report were used in a phishing campaign.88
“This is not some 15-year-old trying to hack your database to see if he can,” said Andy Serwin, adviser to the Naval Postgraduate School’s Center for Asymmetric Warfare and chair of the information security practice at international law firm Foley & Lardner. “This is a large-scale organized effort to steal your company’s most valuable information.”89 An analysis of 168 of the largest US companies found evidence that machines inside 162 of them had transmitted data to hackers.90
These cyber-espionage risks extend into commercial supply chains and include attacks on the suppliers and partners of large companies. “When we thought of espionage, we thought of big companies and the large amount of intellectual property they have, but there were many small organizations targeted with the exact same tactics,” said Jay Jacobs, a senior analyst with the Verizon RISK team. Jacobs found that cyber-espionage breaches were split almost fifty-fifty between large and small organizations. “We think that they pick the small organizations because of their affiliation or work with larger organizations.”91
Over 95 percent of cyber-espionage attacks originated from China, Jacobs claims.92 “It is fundamentally important that the American private sector wake up to the fact that dozens of countries—including China—are robbing us blind,” said Tom Kellermann, head of cybersecurity at Trend Micro and former commissioner of President Obama’s cybersecurity council.93
Some analysts allege that governments now back or support some high-profile cybercrime activities. Alleged perpetrators include China (APT1 and Aurora),94 the United States (Stuxnet, Flame, Duqu, and Gauss),95 Iran (the Saudi Aramco attacks),96 Syria (Denial-of-Service attacks on US banks),97 and Russia (using the “snake” toolkit against Ukraine).98 Unfortunately, the attacks can be difficult to trace because attackers can use botnets in any country to perpetrate, manage, or route their malicious activities.
These governments have technology resources that dwarf those of organized crime or the average credit card thief. “They outspend us and they outman us in almost every way,” said Dell Inc.’s chief security officer, John McClurg. “I don’t recall, in my adult life, a more challenging time.”99 Estimates of APT1’s current attack infrastructure include over 1,000 servers.100 Stuxnet may have involved the work of more than 30 coders working for months, if not years, on the worm.101
China vociferously denies the allegations that it was behind APT1, claiming that it has been the victim of cyber-attacks, too.102 “China resolutely opposes hacking actions and has established relevant laws and regulations and taken strict law enforcement measures to defend against online hacking activities,” said Hong Lei, a ministry spokesman.103
Yet Mandiant traced APT1’s high volume of activity and some of its people to the same neighborhood in the Pudong area of Shanghai as a 12-story building housing People’s Liberation Army Unit 61398. This building has a special high-capacity fiber-optic communications infrastructure “in the name of national defense” and Unit 61398 has specifically sought to hire large numbers of English-speaking computer security experts.104 “Either they [the attacks] are coming from inside Unit 61398, or the people who run the most-controlled, most-monitored Internet networks in the world are clueless about thousands of people generating attacks from this one neighborhood,” said Kevin Mandia, the founder and chief executive of Mandiant.105
The participation of governments marks a dangerous new phase of IT disruptions. “Nations are actively testing how far they can go before we will respond,” said Alan Paller, director of research at the SANS Institute, a cybersecurity training organization.106 Estonia suffered a country-wide Internet blackout on July 15, 2007, that is believed to have been caused by Russia.107 South Korea suffered a series of cyber time-bomb attacks on banks and broadcasters that may trace back to North Korea’s open declaration of seeking online targets in the south to exact economic damage.108 At oil company Saudi Aramco, 30,000 PCs were destroyed by the Shamoon virus, which may have been created by Iran.109 “The attacks have changed from espionage to destruction,” said Alan Paller.110
US defense secretary Leon E. Panetta warned of a “cyber Pearl Harbor.”111 He said: “An aggressor nation or extremist group could use these kinds of cyber tools to gain control of critical switches. They could derail passenger trains, or even more dangerous, derail freight trains loaded with lethal chemicals. They could contaminate the water supply in major cities, or shut down the power grid across large parts of the country.”112
In 2013, a total of 13,073 vulnerabilities were discovered in 2,289 products from 539 vendors.113 Given all the threats of malware-laden websites, zero-day OS vulnerabilities, phishing emails, infected USB flash drives, insecure suppliers, cell phone backdoors, and dodgy Wi-Fi hotspots used for cybercriminal intrusion, cybersecurity seems hopeless. Yet as leaky as the cybersecurity walls around a global supply chain organization may be, organizations can defend themselves in many different ways.
The military uses the concept of a “kill chain” to describe the steps for finding and successfully destroying a target. By analogy, cybercriminals must also accomplish a sequence of actions in order to reach their goals against their target. By understanding those steps in the cybercriminal’s kill chain, an organization can defend itself.
Cybercrime, especially the advanced persistent threats faced by corporations, involves seven steps, according to defense technology firm Lockheed Martin.114 First, the criminal uses reconnaissance methods to research, identify, and select targets. Second, the criminal creates a weaponized deliverable such as an infected Adobe PDF or Microsoft Office document. Third, the criminal delivers the payload via a phishing email, infected website, or USB media. Fourth, the criminal’s code exploits some vulnerability in order to run once inside the target’s firewall. Fifth, the criminal installs some kind of remote-access Trojan horse or backdoor. Sixth, the criminal establishes a command and control system to manage the malware’s activities inside the target corporation. Seventh, and finally, the criminal pursues nefarious objectives such as collecting and exfiltrating sensitive data, sabotaging the target’s systems, or using the target to gain access to another organization.
“We look at what they are trying to do and focus on whatever their objectives are ... and cut off their objectives,” said Steve Adegbite, director of cybersecurity for Lockheed Martin.115 Because criminals must accomplish all seven steps to achieve their objectives, an organization’s cybersecurity only needs to thwart the attacker at any one of the steps to prevent or halt the intrusion. The Lockheed Martin approach to cybersecurity is based on depth of defense. For each of the cybercriminal’s seven steps, Lockheed Martin deploys a matrix of tools or processes that detect, deny, disrupt, degrade, deceive, or destroy the cybercriminal’s attempted actions.116,117 “I can still defend the doors, but I’m not going to sit there and put all my efforts there,” Adegbite said.
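The seven-step chain and the matrix of countermeasures per step can be modeled as a simple data structure. This is a minimal illustrative sketch, not Lockheed Martin's actual tooling: the step names follow the kill chain described above, and the tool names are hypothetical placeholders.

```python
from enum import Enum

class KillChainStep(Enum):
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# The six courses of action available at each step; thwarting the
# attacker at any single step breaks the whole chain.
COURSES_OF_ACTION = ["detect", "deny", "disrupt", "degrade", "deceive", "destroy"]

def defenses_engaged(matrix, step):
    """Return the courses of action with at least one tool deployed for a step."""
    return [action for action, tools in matrix.get(step, {}).items() if tools]

# Hypothetical tool assignments for the delivery step only.
matrix = {
    KillChainStep.DELIVERY: {
        "detect": ["email gateway scanning"],
        "deny": ["attachment blocking"],
        "disrupt": [],
        "degrade": [],
        "deceive": [],
        "destroy": [],
    }
}

print(defenses_engaged(matrix, KillChainStep.DELIVERY))
# The intrusion is stopped if any step has at least one engaged defense.
chain_broken = any(defenses_engaged(matrix, s) for s in KillChainStep)
print(chain_broken)
```

A real deployment would populate every cell of the matrix; the point of the model is that an empty row (a step with no engaged defenses) is a gap the attacker can pass through unopposed.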
Reducing the vulnerability of a system markedly reduces the rate of cybercriminal attacks on it. As of mid-2013, Android devices outnumbered Apple devices by 900 million118 to 600 million.119 Yet Android malware outnumbered iOS malware by more than 750 to 1 in 2013.120
Security analysts cited two factors that make iOS a harder target. The first is Apple’s dictatorial “walled-garden” app model, whereby users aren’t permitted to load apps on their iPhones and iPads except via Apple’s curated store.121 In contrast, Android users can freely visit a variety of open marketplaces and “side load” any apps they choose, but they run much higher risks because cybercriminals put infected copies of popular apps on these third-party marketplaces.
The second is Apple’s vertical integration and ability to push updates (including security updates) to iOS users.122 In contrast, Google leaves the updating process to Android device OEMs and cellular service carriers, which leads to delays in updates.123 In 2014, nearly 90 percent of iOS users were on the latest version of Apple’s software, while less than 10 percent of Android users were on Google’s latest version.124 The issue of missed or delayed security updates is far more serious than it seems, because hackers can analyze each security update to learn about vulnerabilities in computers or phones that aren’t running the latest version.
The walled garden model applies to corporate IT systems as well. A number of technology advances give companies more control over workers’ computers and smartphones, such as limiting apps’ access to sensitive files125 or enabling a remote-controlled erasure of stolen or lost devices.126 Companies can create their own internal app stores that contain only the safest subset of carefully vetted apps. Ongoing efforts are aimed at creating more secure operating systems, application installation systems, locked-down boot systems, email filters, and software mechanisms that “sandbox” potential malware in a limited part of the computer.127
Dr. Zaius is one cute cat, a Turkish Angora with a purple Mohawk hairstyle. He was the lead image in an email sent in 2013 to some 2 million people. The email promised more feline photographic foolery. But recipients who clicked the link—and 48 percent of them did—were disappointed, even chagrined, when all they got was a warning from cybersecurity firm PhishMe Inc. on behalf of their IT departments about the dangers of phishing.128
Phishing emails attempt to lure a victim to click on emailed links or attachments with promises of titillating images, hoax messages from package delivery services, threats of discontinued bank services, and the like. Employees remain on the front line of a company’s IT security. As software vendors work to patch vulnerabilities in operating systems and software packages, attackers must rely more and more on the users agreeing to open a message or download an app that contains malware.
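The lure-and-link pattern described above lends itself to simple heuristics. The sketch below scores an email on two cues that phishing filters commonly check: urgency language in the subject and links whose visible text names a different domain than the actual destination. The word list, scoring weights, and example addresses are illustrative assumptions, not a production filter.

```python
from urllib.parse import urlparse

# Hypothetical keywords typical of pressure tactics in phishing lures.
URGENCY_WORDS = {"urgent", "suspended", "verify", "immediately"}

def phishing_score(subject, links):
    """Crude heuristic score for an email.

    `links` is a list of (visible_text, href) pairs extracted from the body.
    Higher scores mean more suspicion.
    """
    score = 0
    if any(w in subject.lower() for w in URGENCY_WORDS):
        score += 1
    for text, href in links:
        shown = urlparse(text if "://" in text else "http://" + text).hostname
        actual = urlparse(href).hostname
        # Visible text that looks like one domain but points to another.
        if shown and actual and "." in shown and shown != actual:
            score += 2
    return score

print(phishing_score(
    "URGENT: your account will be suspended",
    [("www.mybank.com", "http://evil.example.net/login")],
))
print(phishing_score("quarterly newsletter", [("example.com", "http://example.com")]))
```

Real filters combine many more signals (sender reputation, attachment types, known-bad URLs), but the domain-mismatch check alone catches a large class of lures.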
Over the 2014 Labor Day weekend in the United States, naked and seminaked photos of celebrities including Kirsten Dunst and Kate Upton appeared online. Many media outlets put the blame on Apple’s iCloud storage service, yet iCloud itself was not hacked. Instead, a combination of phishing emails and possibly user names and passwords “sniffed” over open Wi-Fi networks allowed hackers to penetrate these celebrities’ accounts on Apple’s cloud storage service and download the private photos.
Brian Fees, chief financial officer of CedarCrestone, a consulting and managed services company, knows all about phishing. He even hired MAD Security to conduct quarterly red team test attacks to train CedarCrestone’s staff. One day, Fees received an urgent email from CedarCrestone’s CEO mentioning one of the company’s key clients. He immediately opened the attachment and realized he’d made a big mistake. The email was actually a hacker’s ruse designed to infect the CFO’s computer. Fortunately, the ruse was perpetrated by MAD Security using the newer phishing strategy known as spear-phishing. As MAD’s managing partner, Michael Murray, explained, “We went through their website and figured out who one of their key clients was, and then set up a fake email chain.”129 Red team services such as those offered by PhishMe Inc. and MAD Security help train employees to be aware of the potential untrustworthiness of the emails they receive and to report them immediately.
In spear-phishing, the attacker uses public data (e.g., the web and social media) to construct a realistic and urgent spoof message to the victim. As people share more and more data about their careers, their lives, and their plans via LinkedIn, Facebook, Twitter, and other sites, it becomes easier for determined criminals and cyber-espionage attackers to create realistic messages that seem to come from trusted friends, family, and colleagues from the victim’s social network.130 In 2011, IBM found that spam was declining but spear-phishing was increasing.131 A June 2012 study by Deloitte and Forbes Insight ranked social media “as the fourth largest source of risk over the next three years,” following the global economic environment, regulatory changes, and government spending.132 Whereas consumer identity theft is a numbers game relying on hit-or-miss generic phishing, corporate cyber-espionage is more likely to target specific individuals in specific companies with spear-phishing.
“Chaos Monkey” and “Chaos Gorilla” are two mischievous bits of software that try to wreak havoc in Netflix’s video distribution systems.133 But they aren’t the product of evil hackers or unfriendly governments. Instead, Netflix intentionally attacks its own systems to find and prevent larger vulnerabilities. Chaos Monkey randomly disrupts part of Netflix’s network to ensure the company can quickly respond to outages. Chaos Gorilla disrupts entire regions to test that Netflix’s systems automatically rebalance the load without user-visible impact or manual intervention.134 Testing the system with many small, controlled disruptions helps Netflix find weaknesses and prevent larger, uncontrolled disruptions.
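The core idea behind a Chaos Monkey-style tool, randomly terminating a small fraction of running instances so that failover paths are exercised continuously, can be sketched in a few lines. This is a toy illustration, not Netflix's actual implementation; the instance names and kill fraction are assumptions.

```python
import random

def chaos_monkey(instances, kill_fraction=0.1, rng=random):
    """Pick a random small fraction of instances to terminate.

    Returns (victims, survivors); the real tool would call the cloud
    provider's termination API on each victim, then verify that the
    surviving fleet absorbs the load.
    """
    n_victims = max(1, int(len(instances) * kill_fraction))
    victims = rng.sample(instances, n_victims)
    survivors = [i for i in instances if i not in victims]
    return victims, survivors

# Hypothetical fleet of 20 web instances.
instances = [f"web-{n}" for n in range(20)]
victims, survivors = chaos_monkey(instances)
print(len(victims), len(survivors))
```

The `max(1, ...)` floor guarantees at least one failure per run: the point is that some disruption always happens, so operators can never let recovery procedures go untested.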
On October 4, 2012, Europe attacked Europe.135 Yet the attack on Europe’s online e-government and financial services was an example of ethical hacking,136 in which one group is authorized to attempt to breach or disrupt a voluntary target to test the target’s ability to withstand some form of malevolent attack. “This was a collective effort with members of the organizations working with a friendly botnet to strike the services of members and point them in the right direction,” said Paul Lawrence, VP of international operations at Corero Network Security.137
Stress testing can assess the resilience of an organization’s infrastructure under DDoS (distributed denial of service) attacks by a nation-state, terrorist group, or extortion group, as well as so-called hacktivist CSR-related attacks. This kind of stress testing uses one or more methods that attempt to disrupt a company’s web sites, email, or other Internet-mediated activities by flooding the company with spurious requests, bulky data, or computationally costly commands.138 “Working together at the European level to keep the Internet and other essential infrastructures running is what today’s exercise is all about,” said Neelie Kroes, vice president of the European Commission and European Commissioner for Digital Agenda.139
“Consumer-grade antivirus you buy from the store does not work too well trying to detect stuff created by the nation-states with nation-state budgets,” said Mikko Hypponen, chief research officer at Finland’s F-Secure Oyj.140 File signatures (a kind of digital fingerprint used to detect computer viruses) can change often or be previously unseen in the case of well-funded or state-sponsored attacks. Moreover, advanced types of malware can disable or hide from antivirus software via a variety of tactics.
Yet from a kill chain perspective, the attacker’s installation of a piece of malware is not the final objective. Although malware might remain hidden, its behavior in a computer or on the network may be quite visible, detectable, and thus defeatable. That’s why companies such as IBM and Sift Science are working to automate detection of unusual and potentially malevolent patterns of activity in computers and networks. “You can spot patterns the naked eye would never notice,” said Sift Science cofounder Brandon Ballinger.141
Defense contractor Lockheed Martin caught an attacker in 2011 by watching its behavior. The intruder had a valid security token—possibly stolen from the token provider—from one of Lockheed’s business partners. “We thought at first it was a new person in the department ... but then it became really interesting,” said Lockheed Martin’s Steve Adegbite. What tripped Lockheed’s alarms was that the intruder was attempting to access data unrelated to the work of the user he or she was impersonating. “No information was lost. If not for this framework [Kill Chain], we would have had issues,” Adegbite said.142
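Behavioral detection of the kind that caught this intruder can be approximated with a per-user baseline of normal resource access: any access outside the baseline raises an alert for human review. This is a minimal sketch with hypothetical user and resource names, not any vendor's actual product.

```python
from collections import defaultdict

class AccessMonitor:
    """Flag accesses to resources outside a user's historical baseline."""

    def __init__(self):
        self.baseline = defaultdict(set)

    def train(self, user, resource):
        """Record an access observed during normal, trusted operation."""
        self.baseline[user].add(resource)

    def check(self, user, resource):
        """True means out-of-pattern: an unseen user or unfamiliar resource."""
        return resource not in self.baseline[user]

monitor = AccessMonitor()
# Baseline built from the legitimate user's routine activity.
for r in ["timesheets", "dept_wiki", "payroll_portal"]:
    monitor.train("alice", r)

print(monitor.check("alice", "dept_wiki"))         # routine access
print(monitor.check("alice", "radar_schematics"))  # out-of-pattern access
```

A stolen credential passes authentication checks by definition; what it cannot easily mimic is the legitimate user's habitual pattern of access, which is exactly what tripped Lockheed's alarms.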
Some of the most vexing risks emanate from within the organization—disgruntled employees intent on harming their company or enriching themselves by stealing data (à la Bradley Manning, who gave stolen data to WikiLeaks, or Edward Snowden, who shared his stolen NSA data with media outlets). Yet, as the risks of cyber-attacks proliferate, so do defensive methods. Companies can now easily monitor when any employee attaches a USB device to a computer that has network access or downloads any file, especially “tagged” documents, which trigger real-time alarms.
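Such insider-threat monitoring amounts to scanning an event stream for a small set of trigger conditions. The sketch below flags USB attachments on network-connected machines and reads of "tagged" documents; the event format, user names, and document names are assumptions for illustration.

```python
# Hypothetical set of sensitive documents marked for real-time alerting.
TAGGED_DOCUMENTS = {"merger_plan.docx", "source_code.tar"}

def audit(events):
    """Scan an event stream and return alerts for risky insider actions.

    Each event is a dict with at least a "type" and "user" key.
    """
    alerts = []
    for e in events:
        if e["type"] == "usb_attach" and e.get("network_access"):
            alerts.append(("usb", e["user"]))
        elif e["type"] == "file_read" and e["file"] in TAGGED_DOCUMENTS:
            alerts.append(("tagged_file", e["user"]))
    return alerts

events = [
    {"type": "usb_attach", "user": "bob", "network_access": True},
    {"type": "file_read", "user": "carol", "file": "lunch_menu.txt"},
    {"type": "file_read", "user": "bob", "file": "merger_plan.docx"},
]
print(audit(events))
```

In practice the event stream would come from endpoint agents and the alerts would feed a security operations console, but the trigger logic is no more complicated than this.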
Cooperation among victim corporations aids cybersecurity via improved detection, characterization, and elimination of cybersecurity threats. A new generation of detection systems benefits from network effects—the more companies that share intrusion data or allow joint monitoring, the sooner these systems can spot infections and mount preventative countermeasures. “Our biggest issue right now is getting the private sector to a comfort level where they can report anomalies, malware, and incidences within their networks,” said Executive Assistant Director Richard McFeely, head of the FBI’s cyber-crime efforts. “It has been very difficult with a lot of major companies to get them to cooperate fully,” McFeely added.143
The idea is to arrive at the same level of cross-learning that systems like the aviation industry’s “near miss” tracking provide. Whenever two planes pass too close to each other, the pilots and air traffic controllers involved report the occurrence to the Aviation Safety Reporting System of the US Federal Aviation Administration. The incidents are investigated, and enhanced safety procedures are distributed worldwide. The US system is voluntary, confidential, and run by NASA, which has no enforcement power, contributing to the high rate of reporting. Similar systems are operated by relevant agencies in other countries, such as Canada’s Transportation Safety Board, or the UK’s Civil Aviation Authority.
The US National Cyber Forensics and Training Alliance (NCFTA) offers a neutral third-party venue for sharing of cybersecurity events, threats, and knowledge. As a neutral venue, NCFTA lets subject matter experts from the private sector, academia, and government collaborate freely. NCFTA also addresses other kinds of illegal activities that have a significant online component, including online sales of counterfeit merchandise and illicit drugs. In Europe, the European Union Agency for Network and Information Security (ENISA) plays a similar role.
Software vendors and IT security firms continue to find and close avenues of attack. For example, once Stuxnet was detected, its spread was halted by revoking the stolen security certificate used by the worm to masquerade as trusted software.144 Microsoft then issued a series of patches to close the zero-day vulnerabilities used by Stuxnet to further prevent Stuxnet variants from spreading.145 At Microsoft, the “Security Development Lifecycle (SDL) was built on the concept of mitigating classes of potential exploits rather than specific exploits, and reducing vulnerabilities to help provide protection against unforeseen threats,” said Dustin Childs, group manager, response communications, at Microsoft Trustworthy Computing.146
Beginning in 2003, Microsoft designated the second Tuesday of each month as “Patch Tuesday” so that corporate IT security people could plan their testing and deployment efforts for reducing the numbers of vulnerable PCs on global networks.147 In 2013, Microsoft released 96 security updates (up from 83 in 2012148) that covered 330 vulnerabilities.149 Other software companies such as Adobe, Mozilla, and Oracle also use periodic, scheduled security updates.150 Yet “patching” brings its own risks of disruption.151 On more than one occasion, the patch itself has disrupted IT systems by degrading, crashing, or damaging users’ PCs.152,153
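Because Patch Tuesday is defined simply as the second Tuesday of the month, IT departments can compute the schedule years ahead when planning test-and-deploy windows. A small sketch using only the standard library:

```python
import datetime

def patch_tuesday(year, month):
    """Return the second Tuesday of the given month."""
    first = datetime.date(year, month, 1)
    # Days from the 1st to the first Tuesday (weekday() == 1), then one more week.
    offset = (1 - first.weekday()) % 7
    return first + datetime.timedelta(days=offset + 7)

print(patch_tuesday(2013, 10))  # 2013-10-08
print(patch_tuesday(2014, 1))   # 2014-01-14
```

The same helper, shifted by a planning buffer of a few days, gives the deployment calendar for any vendor on a fixed monthly cadence.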
Prevention of shifting threats depends on detection: finding new malware infections and vulnerabilities before they can inflict significant damage, and adjusting defenses to prevent further infections. “In response, attackers continue to evolve their techniques to find new avenues into an organization,” said Tom Cross, manager of threat intelligence and strategy for IBM X-Force.154 “When you know you’re the target and you don’t know when, where or how an attack will take place, it’s wartime all the time,” said Arabella Hallawell, vice president of strategy at Arbor Networks, a network security firm. “And most organizations aren’t prepared for wartime,” she said.155
Responsibility for cybersecurity cannot lie with the IT department alone. Most other corporate functions have a role in ensuring cybersecurity. Procurement departments, working with legal professionals, must ensure that supplier contracts include cybersecurity measures, allow for auditing supplier IT security processes, and manage the authorization and de-authorization of supplier employees on the company’s networks or portals. Finance plays a role as a result of its expertise in risk mitigation measures and because any disruption is likely to have financial consequences. Human resources must vet employees, train workers on cybersecurity issues, secure employee data, and ensure safe processes when using corporate email, databases, and other resources. Sales, marketing, and investor relations should prepare for communicating any intrusions with customers and investors. To coordinate cyber-defenses across the enterprise, several companies have created a multifunctional council chaired by a chief operating officer or the CEO to oversee, coordinate, and enforce company-wide defenses.
Ultimate responsibility for cybersecurity belongs at the highest level of the organization—those who provide direction and governance of the organization. Thus, the board of directors has a special responsibility to ensure that the company is as protected as possible. Unfortunately, while many board members at large companies are versed in finance, marketing, law, and operations, very few board members are versed in advanced information and communications technologies. A 2014 survey by the Institute of Internal Auditors found that 58 percent of audit executives said the board “should be actively involved” in cybersecurity issues, but only 14 percent said the board was actively involved in these issues.156