Because all countries engage in espionage, intrusions like Russia’s latest data hack are devilishly hard to deter.
By Amy Zegart / The Atlantic
The recently revealed SolarWinds hack unfolded like a scene from a horror movie: Victims frantically barricaded the doors, only to discover that the enemy had been hiding inside the house the whole time. For months, intruders have been roaming wild inside the nation’s government networks, nearly all of the Fortune 500, and thousands of other companies and organizations. The breach—believed to be the work of an elite Russian spy agency—penetrated the Pentagon, nuclear labs, the State Department, the Department of Homeland Security (DHS), and other offices that used network-monitoring software made by Texas-based SolarWinds. America’s intelligence agencies and cyberwarriors never detected a problem. Instead, the breach was caught by the cybersecurity firm FireEye, which itself was a victim.
The full extent of the damage won’t be known for months, perhaps years. What’s clear is that it’s massive—“a grave risk to the federal government … as well as critical infrastructure entities and other private sector organizations,” declared DHS’s Cybersecurity and Infrastructure Security Agency, an organization not known for hyperbole.
The immediate question is how to respond. President-elect Joe Biden issued a statement vowing to “disrupt and deter our adversaries from undertaking significant cyber attacks in the first place” by “imposing substantial costs.” Members of Congress were far less measured, issuing ever more forceful threats of retaliation. It was a weird bipartisan moment in which liberal Senate Democrats sounded like hawkish House Republicans, with statements about “virtually a declaration of war” and the need for a “massive response.”
All this tough talk feels reassuring, especially with crickets coming from the White House. But to assume that punishing Russia now will stop Russia later would be a mistake. Cyber deterrence is likely to fail.
The only thing universal about deterrence is the misguided faith in its applicability. In reality, deterrence works in very limited circumstances: when the culprit can be identified quickly, when the act crosses clear red lines defining unacceptable behavior, and when the punishment for crossing them is credible and known in advance to would-be attackers. These conditions are rare in cyberspace.
Breach attribution is often difficult and time-consuming. Defining red lines is vexing: When a North Korean cyberattack on a Hollywood movie studio is called an act of war but Russian meddling in a presidential election doesn’t trigger much of anything, it’s fair to say the lines aren’t nearly clear enough. And because America’s cyberweapons—hacks, viruses, and other tools that exploit network vulnerabilities—can become useless once they’re revealed, credibly threatening tit-for-tat punishment to strike fear into the hearts of hackers isn’t feasible. To be sure, a country can respond to cyberattacks in other ways. But if you’re figuring out what sanctions you might impose or how many diplomats you might expel after the fact, you’re not deterring. You’re just responding. For deterrence to work, bad actors have to know what punishment is coming—and fear it—before they act.
What’s more, so far the recent hack looks like the least deterrable type of breach—cyberespionage. Although some spying in cyberspace is the opening act for more aggressive behavior, early indications are that the SolarWinds operation was an intelligence-gathering effort, not a cyberattack meant to disrupt, corrupt, or destroy. Espionage is nearly impossible to deter in cyberspace for the same reason it can’t be deterred anywhere else: Everyone does it. All nations spy. Espionage has never been prohibited by international law. For 3,300 years, ever since people in the Near East inscribed the first known intelligence reports on clay tablets, spying has been considered fair game.
The United States engages in cyberespionage on a massive scale all the time. In 2015, after China hacked the Office of Personnel Management and stole highly sensitive security-clearance records on some 22 million people, James Clapper, then the director of national intelligence, declared, “You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don’t think we’d hesitate for a minute.” It’s hard to set convincing red lines against espionage when every country has been crossing them forever.
Understandably, American officials face intense domestic political pressures to talk tough now and figure out the details later. But hollow threats can undermine credibility with adversaries in the future. As former Secretary of State George Shultz likes to say, he learned in the Marine Corps never to point his rifle at someone unless he intended to shoot.
A more effective approach for the incoming Biden administration is to get back to basics and focus on preventing cyber intrusions and bouncing back more easily from the ones that inevitably get through. Although cybersecurity efforts have gotten much better in the past decade, they’re still underpowered, underresourced, and overly fragmented. Many government agencies are still struggling to meet basic cyber-hygiene and risk-management standards. The fledgling Cybersecurity and Infrastructure Security Agency has enhanced the coordination of public- and private-sector cybersecurity (including protecting the 2020 election). But the agency is just two years old and has only 2,200 employees to help secure vital American networks. The National Park Service, by contrast, has nearly 10 times more people to secure America’s vacation destinations. Perhaps most important, the cyberdefense buck currently stops nowhere: The Trump administration eliminated the White House cyberdirector’s office, a move so ill-advised that a bipartisan commission and a recent bipartisan vote of Congress called for reestablishing it.
Better cybersecurity also requires upping America’s own intelligence game. This includes prioritizing counterintelligence efforts to penetrate adversary nations’ intelligence services and their cyberoperations—to better understand how they work; to hobble their activities; and to make them doubt the trustworthiness of their own people, systems, and information. Success requires not just technology but talent—operatives who can persuade foreigners to betray their country to serve ours. The SolarWinds malware didn’t just make itself. Humans created it. And wherever there are humans, human intelligence can make a difference.
Intelligence history also suggests another approach to handling the Russians: creating a cyber version of what the CIA veteran Jack Devine has called “Moscow rules.” A product of the Cold War, these were informal, mutually accepted norms that Soviet and American spymasters gradually established for dealing with each other. Moscow rules didn’t stop spying or conflict. But they kept tensions from escalating and triggering nuclear war.
When CIA officers posing as U.S. diplomats were caught in the Soviet Union, they weren’t executed or sentenced to life in the gulag—actions that could have turned the Cold War hot. Instead, they were “PNG’d”—declared persona non grata and forced to leave the country. The same thing happened to Russian intelligence officers posing as diplomats in Washington if they were caught engaging in espionage. Moscow rules also involved occasional spy swaps, in which each side released people it had caught working for the other. The last time this happened was in 2010, when the U.S. traded 10 deep-cover Russian “sleeper agents” discovered in the United States for four American and British assets. Moscow rules certainly weren’t perfect and weren’t always followed. But over the course of the Cold War, the rules made a difference.
Notably, Moscow rules didn’t require any formal declarations of norms, treaties, or summits. These were quiet arrangements, not loud pronouncements. They involved just two nations, not multilateral institutions. And they were shaped by hard incentives, not wishful hopes. Each side knew that it stood to gain if both observed the rules and stood to lose if they didn’t. Because spying was constant, everyone knew they were playing what decision theorists call a “repeated game”; if one side violated Moscow rules this time, the other could reciprocate in the future, and the whole thing could unravel.
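To make that repeated-game logic concrete, here is a minimal sketch in Python. It assumes a standard tit-for-tat rule and purely hypothetical payoff numbers; none of the names or values come from the article or from any real measure of cyberconflict, they only illustrate why repetition gives informal rules their teeth.

```
# A toy repeated game with hypothetical payoffs. Each side plays tit-for-tat:
# observe the informal rules ("cooperate") unless the other side broke them
# in the previous round.

PAYOFFS = {  # (my move, their move) -> my payoff for that round
    ("cooperate", "cooperate"): 3,  # both observe the rules
    ("cooperate", "defect"):    0,  # I show restraint, they violate
    ("defect",    "cooperate"): 5,  # short-term gain from a one-sided violation
    ("defect",    "defect"):    1,  # mutual escalation, both lose
}

def tit_for_tat(opponent_history):
    """Cooperate first; afterward, mirror the opponent's previous move."""
    return "cooperate" if not opponent_history else opponent_history[-1]

def play(rounds, force_defection_at=None):
    """Run the repeated game; optionally force side A to violate the rules once."""
    a_history, b_history = [], []
    a_score = b_score = 0
    for r in range(rounds):
        a_move = "defect" if r == force_defection_at else tit_for_tat(b_history)
        b_move = tit_for_tat(a_history)
        a_score += PAYOFFS[(a_move, b_move)]
        b_score += PAYOFFS[(b_move, a_move)]
        a_history.append(a_move)
        b_history.append(b_move)
    return a_score, b_score

print(play(10))                        # sustained cooperation: both sides score 30
print(play(10, force_defection_at=3))  # one violation invites retaliation; both score less
```

The only point of the toy model is the one the Cold War spymasters grasped intuitively: because the interaction never ends, a violation today invites reciprocation tomorrow, and both sides end up worse off than if they had kept to the quiet arrangement.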
In today’s world, Russians and Americans don’t share a strong interest in managing all their potential cyberconflicts. But one area stands out: computer systems related to nuclear weapons. Hacks that penetrate any such systems could change how they operate, making nuclear accidents more likely. And even if hacks didn’t change anything, the other side could never be sure. Simply finding evidence of a breach might undermine confidence that nuclear systems will work as intended, making miscalculation more likely and giving the breached country stronger incentives to build more weapons and strike first—just in case. A cyber-era Moscow rule to put nuclear-related networks and systems out of bounds for any outside intervention—including cyberespionage—is a promising place to start.
Cyberconflict is here to stay, and policy makers need to be clear-eyed about what steps will actually make us safer. Sounding tough won’t. Acting tough will—through stronger defense and resilience, better intelligence, and, where possible, informal rules of cyber engagement to keep tensions from spiraling out of control.
This story was originally published by The Atlantic.