The holiday season, normally spent in activities with friends and family, was anything but festive for electrical power authorities in Ukraine, who struggled with extensive outages in the country's Ivano-Frankivsk region. The outages were caused by malware-infected computers.
The power failure cascade began on December 23, quickly spreading as the malware disconnected substations from the grid. Cybersecurity researchers were able to collect samples of the malware and quickly identified it as “BlackEnergy,” which was first seen in 2007 and subsequently updated by hackers in 2013.
The “new and improved” version of the malware contains two particularly troubling components. The first, called “KillDisk,” attacks computer hard drives, permanently damaging critical elements while simultaneously seeking out and significantly damaging industrial control systems. In Ukraine, this was accomplished through the ELTIMA serial-to-Ethernet connectors used in industrial control systems.
The other feature causing concern is that the malware now includes a backdoored Secure Shell (SSH) utility, which gives hackers permanent access to computers once they have been infected. In short, if the malware does not destroy your computers, it retains permanent control of them and can shut them down at will.
Controlling the computers that control the industrial control systems means those systems can be monitored, attacked or manipulated remotely at will by adversaries. Put another way, imagine driving down a highway at 70 miles an hour and then suddenly having the steering wheel disintegrate in your hands. This is the level of trouble we are talking about here.
Where did the malware come from and how was the Ukrainian power grid infected?
Cyber experts differ somewhat in their opinions, but the general consensus is that BlackEnergy likely originated with a group called the “Sandworm Gang,” which appears to have at least some connections to Russia. It is important at this point to emphasize that no accusation of Russia’s culpability in the attack is being made or implied here. “Connections to Russia” do not equate to prima facie evidence that the Russian government was the originator of the malware, only that hackers—some of whom reside in Russia—are associated with the malware.
The malware has been used before, including for attacks on the North Atlantic Treaty Organization (NATO), Polish government agencies and several European industrial organizations. In the Ukrainian case, it appears highly likely that systems were first infected through social engineering ploys, using Microsoft Office documents booby-trapped with embedded macros and opened by unsuspecting recipients. The level of sophistication in the social engineering efforts is not yet understood, but cyber adversaries often use simple ploys. Often, system security is only as good as the user’s level of competence and skepticism.
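One cheap, technical precaution against the macro-laced documents described above is to screen incoming attachments before a user ever opens them. Modern macro-enabled Office files (.docm, .xlsm and the like) are ZIP archives that carry their macro code in a vbaProject.bin entry, so a mail gateway can flag them with a few lines of code. The sketch below is illustrative only; the function name and file paths are assumptions, and legacy binary Office formats would need a separate check.

```python
# Minimal sketch: flag OOXML attachments that carry VBA macros.
# Macro-enabled Office files are ZIP archives containing a
# vbaProject.bin entry; its presence is a cheap first filter.
import zipfile

def contains_vba_macros(path):
    """Return True if the file is a ZIP-based Office document
    that contains an embedded VBA macro project."""
    try:
        with zipfile.ZipFile(path) as z:
            return any(name.endswith("vbaProject.bin") for name in z.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.doc, .xls) are not ZIPs and
        # require a different inspection method.
        return False
```

A gateway using a filter like this would quarantine flagged attachments for review rather than deliver them; it does not replace user training, but it removes the most common bait before the social engineering can work.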
What are the implications of this malware for the food industry?
The implications of this malware are actually quite serious. Cybersecurity experts are labeling this attack an “escalation,” characterizing it as something categorically different. Whether this is the first attack on a power system is debatable and frankly not the important issue. The most important implications are that the malware can destroy cyber-based control systems, and that this malware has now been released to the world.
Actually making the malware is the hard part, in this case requiring substantial expertise. Distributing it is as easy as posting it on hacker forums. Once the malware is out there, anyone with the competence to find the forum can obtain it and point it at any target they desire; a significant level of technical competence is not necessary. Hacker forums serve a second purpose for cyber adversaries: they are places where “improvements” can be solicited and perfected while maintaining anonymity.
In the past, discussions in food defense circles have focused on how malware intrusion might affect industrial control systems in food processing. One possible scenario discussed was how food-processing components could be made to appear to be functioning normally despite tampering. For example, the cooking temperature could be changed without the knowledge of an operator, who would see the correct, normal cooking temperature. The obvious solution was to use independent thermometers, creating a fail-safe to verify the process. The threat of cyber-based attacks notwithstanding, redundant systems monitoring is an essential part of any well-designed business continuity plan and now pretty much standard practice across much of the food industry.
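The independent-thermometer fail-safe described above amounts to a simple cross-check: trust neither reading alone, and alarm when they diverge. A minimal sketch follows; the function name, units and tolerance are illustrative assumptions, not a prescription for any particular process.

```python
# Minimal sketch of the "independent thermometer" fail-safe:
# compare the control system's reported cook temperature against
# an independent, air-gapped probe and flag any disagreement.

def verify_cook_temp(reported_c, independent_c, tolerance_c=2.0):
    """Return True when the two readings agree within tolerance.
    A False result should trigger an alarm and a manual check,
    since the control-system display itself may be compromised."""
    return abs(reported_c - independent_c) <= tolerance_c

# The HMI shows the normal 74 C set point, but the independent
# probe reads 61 C: the check fails and the operator is alerted.
print(verify_cook_temp(74.0, 61.0))  # False
print(verify_cook_temp(74.0, 73.2))  # True
```

The design point is that the independent probe must not share the control system's network or power; a verification channel that an attacker can reach through the same intrusion provides no redundancy at all.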
Another possible scenario was an attack on the power grid used to power processing facilities. Turning the power off at a food processing facility quickly causes the facility to shut down. This also rapidly causes a backup in the integrated system, particularly if alternative facilities are not available or able to handle the surge. Again, the food industry found simple solutions, including the use of independent power sources such as gas- or oil-powered generators—another good business continuity practice.
This Ukrainian attack, however, was possibly something very different. Would it be unreasonable to extrapolate and envision a future scenario for the food industry? The malware can now not only impair an industrial control system but, in a worst-case scenario, may permanently disable it. An uninterrupted power supply does little good if the system it powers is destroyed. Imagine the effect on a brand if the facilities producing its products can no longer function without a complete rebuild of the industrial control system. Costs would rapidly soar to staggering levels.
Are there solutions and, if so, how practical are they?
There are solutions, although unfortunately some range from rather costly to very costly. Companies are going to have to do a cost/benefit analysis to determine what can and cannot be done. In the future, government or insurance carriers likely will mandate many changes. Industrial control systems must be closed loops, with no possibility of their ever touching other corporate cyber systems such as email, web access and customer interfaces. When we perform cyber defense surveys, operators usually say their industrial control systems are not connected to other cyber systems. But are they? That depends on the expertise of their “cybersecurity experts,” whether those experts work for the company in its own IT department or whether the company relies on consultants.
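Claims that a control network is a closed loop can be spot-checked rather than taken on faith. One simple audit is to export the subnets reachable from the corporate network (from firewall rules or routing tables) and flag any overlap with the address space reserved for the industrial control systems. The sketch below shows the idea; all subnet values and names are hypothetical.

```python
# Minimal segmentation-audit sketch: flag corporate-reachable
# networks that overlap the subnets reserved for industrial
# control systems (ICS). Subnets here are illustrative only.
import ipaddress

def find_crossover(reachable_networks, ics_subnets):
    """Return (network, ics_subnet) pairs where a network reachable
    from the corporate side overlaps an ICS subnet, i.e., where the
    'closed loop' claim is violated on paper."""
    crossings = []
    for net in map(ipaddress.ip_network, reachable_networks):
        for ics in map(ipaddress.ip_network, ics_subnets):
            if net.overlaps(ics):
                crossings.append((str(net), str(ics)))
    return crossings

# The ICS segment 10.50.0.0/16 should never appear in the corporate
# reachability list; here a /24 inside it does, so the audit flags it.
corporate = ["10.10.0.0/16", "10.50.4.0/24"]
ics = ["10.50.0.0/16"]
print(find_crossover(corporate, ics))  # [('10.50.4.0/24', '10.50.0.0/16')]
```

A paper audit like this only catches configured paths; it is a complement to, not a substitute for, the hands-on penetration testing discussed next, which also finds undocumented bridges such as dual-homed laptops and forgotten modems.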
How can corporate leadership know for sure? The only real solution is to test all cyber systems through “Red Team” penetration testing. Red Teams work to find the chinks in the corporate armor, probing the systems and the corporation as a whole. Many times, they find surprises. Systems thought to be secure might not be. When first in place, Red Teams always win, meaning they find ways around corporate security. After the problems found in the exercise are dealt with, the Red Team again performs penetration testing. Social engineering tactics are often the most effective means of penetrating a system, even among the IT professionals who maintain corporate systems. This is one reason why IT personnel are the most frequent target for adversaries.
After each Red Team exercise, systems will be more robust. More effort will be required to overcome defenses, making it harder for “amateurs” or even moderately skilled adversaries to penetrate corporate systems. This brings up two uncomfortable issues, however. First, it is very difficult to stop an attack from a highly skilled adversary, such as a nation state. If a nation state with its government- and military-grade cyber capability has an express desire to penetrate corporate cyber systems, it will be exceedingly difficult (read very expensive), though not impossible, to stop the attack, depending on which nation state is involved. Nation state attacks do occur, but for now they are largely relegated to penetration for the purposes of gathering proprietary information.
Foreign governments most definitely do want your proprietary systems information, enabling them to incorporate it into their own systems, minus the necessary investment. The U.S. government will not really help you much in keeping bad things from happening, leaving that responsibility largely to the corporations themselves. However, Information Sharing and Analysis Centers, so-called “ISACs,” have been established to act as conduits between critical infrastructures (food is one) and the government to share information and hopefully relay solutions.
Executives from companies involved in critical infrastructure industries now complain about unidirectional information flow (the so-called “black hole effect”) as the ISACs have become more quasi-governmental. The reality is that the National Security Agency and CYBERCOM, the entities charged with gathering and analyzing cyber intelligence, are not going to expend their capabilities to protect corporations. Their cyber-war capabilities are reserved for just that—war.
The best judges as to whether a corporate cyber system can be penetrated and damaged are exactly the kinds of people who corporations want to keep out of their systems. In other words, food corporations need to hire their own hackers—“ethical hackers” who meet strict behavior standards and have technical knowledge commensurate to the level of the threats likely to be encountered. Often called “White Hats,” ethical hackers are worth their weight in gold for food corporations and should most certainly be on the payroll, whenever possible. They do not come cheap, but the really good ones can prevent a lot of corporate headaches.
This brings us to the second uncomfortable reality. The truth is that systems only have to be made robust enough so that most cyber adversaries can’t be bothered to expend the effort needed to penetrate the system. Put more bluntly, you need to make your security robust enough that adversaries choose to go after your neighbors instead.
That sounds very mercenary and Machiavellian, but it is the reality in which we as individuals, corporations, industries and nations operate. The world can be an exceedingly bad place, but if a corporation is strong, well-armed in defensive capabilities and has the will to do whatever is necessary to survive, chances are that adversaries will go elsewhere and pick on someone else. Something to think about.
Robert A. Norton, Ph.D., is a professor at Auburn University and a member of the Auburn University Food System Institute’s core faculty. A long-time consultant to federal and state law enforcement agencies, the Department of Defense and industry, he specializes in intelligence analysis, weapons of mass destruction defense and national security. For more information on the topic or for more detailed discussions about specific security related needs, he can be reached at firstname.lastname@example.org or by phone at (334) 844-7562.
Major attacks on industrial control systems have been seen before, one of the most spectacular being the attack, likely of Iranian origin, on Saudi Arabia’s oil company Saudi Aramco. Taking place in August 2012 during the Muslim holy day of Lailat al Qadr, the malware destroyed data on three-quarters of the company’s PCs, replacing the data with the image of a burning American flag. In this case, oil production was not affected. Another, more widely known attack was caused by the Stuxnet virus, allegedly jointly developed by the United States and Israel, which infected industrial control systems and damaged Iranian nuclear facility centrifuges.