REBECCA SLAYTON: And today, we're very pleased to be hearing from Jon Lindsay, who is an Assistant Professor of Digital Media and Global Affairs at the Munk School of Global Affairs at the University of Toronto. He holds a Ph.D. in Political Science from MIT and a master's in Computer Science from Stanford. He has also served in the Navy. His research focuses on the impact of technology on the world of security. He's one of the few scholars I know who has published both in high-impact political science journals like International Security and high-impact history of technology journals like Technology and Culture.
And he recently co-edited a book on China and cybersecurity with Tai Ming Cheung and Derek Reveron. He's currently completing two book projects-- one with Erik Gartzke on cross-domain deterrence, and another, entitled Shifting the Fog of War: Information Technology and the Politics of Control, on the strategic and organizational dimensions of military networks, drone warfare, and cybersecurity. And I think we'll be hearing a little more about that book. So please join me in welcoming him. Thank you.
[APPLAUSE]
JON LINDSAY: Great. Thanks very much for that kind introduction and invitation to join you here. It's great to be here in this really high-tech room in this really high-tech building. And I'm not going to use any technology at all in a talk about technology. So I hope that's OK with everybody here.
So it's great to be here in an interdisciplinary forum here at the Einaudi Center, in a program that is sponsored by both science and technology studies on the one hand and computer science on the other. The computer science aspect of cybersecurity is critical, but the STS dimension emphasizing the social, political, and economic context of technology is fundamental to, I think, the practice of cybersecurity and our emerging theoretical understanding of it. And it's also great that it is here in an international relations forum. This is an emerging topic, maybe a subfield, in IR. But while Donald Trump's 10-year-old son may be really comfortable with the cyber, I think international relations is still struggling to get its arms around what these new technological, social, and economic developments really mean.
So as Rebecca mentioned, this talk will draw a little bit on a book that I just finished and am currently revising called Shifting the Fog of War-- Information Technology and the Politics of Control. And this book has been gestating for quite a while. In fact, its intellectual and very interdisciplinary origins go back about two decades to my college experience at Stanford-- here with Anne Bracy in the second row-- in the symbolic systems program, where I was very interested in questions of artificial intelligence, the nature of knowledge. Is machine knowledge like human knowledge? If not, why? How might we think about that?
And then its practical origins come from my experience in the United States Navy, and in particular, my first wartime experience. And I almost want to put that in quotes, although it was a very real war in 1999. This is the NATO air war over Kosovo, where I served as a targeting officer in Naples. And this was a very strange experience for me, having grown up on my father's stories. He served as a helicopter pilot in Vietnam and had many more traditional experiences that you might associate with war, and that war in particular.
And yet, here I was, staying in an Italian hotel built on an old Roman bath, eating mozzarella di bufala in the morning, driving to work. We would do our intelligence analysis, present our targeting briefings. We would have digital teleconferences. And then we would watch things blow up on CNN. Now this is a very strange way to fight a war. Look at all the technology that's mediating what's going on. And I started thinking about how we think about knowledge when it is both highly consequential and highly mediated by increasing layers of technology between the individuals and organizations that are perceiving the world and the world that they're actually trying to perceive or act on.
Now that war was interesting for a couple of other reasons. It was America's first war in which no Americans were killed. This was a war that ended with no NATO casualties. A couple of jets were shot down, pilots were recovered. And it was a very, very high-tech war. Lots of [INAUDIBLE] sensors, lots of precision weapons. And it really seemed that the United States was a beneficiary of what people were talking about in the 1990s as a revolution in military affairs-- the idea that more precise sensors networked together with more precise weapons would allow warfare to be fought more efficiently, more quickly, and would be able to do away with the uncertainty that had been fundamentally associated with war for thousands and thousands of years. What the Prussian general and scholar Carl von Clausewitz called the fog of war might finally be lifted by information technology.
But of course, that same war also had some spectacular disasters, the most iconic of which, of course, would be blowing up the Chinese embassy-- a perfect storm of bureaucratic noncoordination and opportunistic targeting by various government intelligence agencies, with the result that bombs went exactly where they were supposed to-- high-precision GPS munitions delivered from a B-2-- but those coordinates just happened to mark a building which wasn't the building that the US was targeting. In fact, it was a building that the Chinese had just moved into a year before. The State Department knew it, didn't let DoD know. People were very surprised, especially when it happened to hit the intelligence cell of the embassy-- the kind of mistake that you just can't make up. But investigation after investigation has validated that this was indeed a mistake. Now that target, prior to the strike, showed up on a PowerPoint slide which looked exactly like many other targets, had been just as well-vetted-- in this case, by the President himself-- and yet went through.
So this really got me thinking about, OK, if we've got this increasingly distributed socio-technical system for focusing people's knowledge about places that are far away, enabling them to intervene in a very precise way, this is not only amplifying knowledge, but it also has the potential to amplify misperceptions about the world. So the problem is that, in the process of lifting the fog of war, we're also shifting the fog of war into the infrastructure and institutions that make that knowledge possible.
So a lot of the triumphalism that is associated with this discussion of the revolution in military affairs really needs to be tempered by the fact that war has not turned out to be quick, cheap, and easy. If anything, the most technologically advanced military in the world went into a war with the same stuff they had at Kosovo, but got ground down for a decade of really ambiguous, difficult fighting against insurgencies in Iraq and Afghanistan.
So the RMA-- the Revolution in Military Affairs-- didn't quite work out. And yet, there are places in which it did work, and you did see incredible efficiency gains in the way that smaller units were able to act. And when you actually opened up the black box of operations and looked at what was actually happening, what you would find is that servicemen and women were improvising with the tools that they had on hand. They were using things in ways that contractors never intended, with doctrine that they hadn't worked out before their deployment-- in many cases, actually writing scripts if not entire applications, ersatz targeting programs-- working with the things that they had, basically hacking the system to extract the value that they could.
This has implications for cybersecurity that I want to talk about today. The same things that made the RMA difficult in a military context should also give us pause when we start thinking about the efficiency of cybersecurity on the offense. And there are maybe some defensive attributes that, if we appreciate the socio-technical context of cyber operations better, might give us a little bit more optimism when we think about these things.
Now the theme I want to stress is a bit of a paradox. And the paradox is embedded in the title of this talk. And that is that the cyber domain that brings us all together-- that has created cybersecurity commands and study programs and research grants-- is becoming more dangerous and more complex, but it's happening in a world that paradoxically is becoming less dangerous, where war is less likely. And I want to argue that there is an important relationship between these two opposite trends.
So I'll try and explain the title of this talk in five parts. I want to begin by reviewing the debate in international relations, maybe giving some of you a different perspective on thinking about cybersecurity, and talk about why I don't find it very satisfying. So that's the cybersecurity part. And then I want to talk a little bit about how we might think about cybersecurity as a set of institutions, a set of cooperative practices. OK, this is what I mean by design. I'll talk a little bit about why offensive cyber operations in this context are difficult, and how that difficulty tends to incentivize restraint in planning those operations. That's the restraint part. Then I'm going to talk about, given that understanding of cyber operations, where we would expect it to be useful. This is the attenuation of war part. Then we can conclude and open up for discussion, which I'm really looking forward to.
So the cybersecurity, cyber warfare debate in international relations, both on the academic side and the policy side, has sort of emerged as this stylized debate between what you might call revolutionaries and skeptics. And it really kind of resembles this 1980s Miller beer campaign where one side would yell "tastes great" and the other side says "less filling." Tastes great, less filling, they go back and forth-- of course, they're talking about the same thing.
So the "tastes great" side of this debate, much more common in policy circles-- we have arguments that cyberspace is a fundamentally new domain of conflict. And there are a couple of arguments that come up again and again in two categories. Some arguments in each of those I'll go through. But first, there are a few inductive, empirical arguments. People walk around and say, wow, cyberattacks are really on the rise. Look at ways of industry companies, stealing their stuff, interacting in ways that we never thought was possible before.
Second, firms are penetrated, they are hemorrhaging intellectual property. Keith Alexander, the former NSA director, described this as the greatest transfer of wealth in history. Third, when we look at major governments, they are really worried about cybersecurity. They are writing policy documents. They're standing up cyber commands. They're spending money on cybersecurity. Clearly, there's something going on here.
In trying to explain what that is, there are a series of deductive arguments that try to reason from the supposed nature of the technology to its military or strategic properties. The first is that the technology is fundamentally, categorically offense-dominant. This is a technology which really advantages attackers against defenders, because the defenders have an expanded attack surface as more and more societal functions move online, or critical infrastructure is dependent on digital technology and connected to the internet like it was never intended to be before, while attackers can learn about techniques and download code off the internet at fairly low cost.
The second argument is that, because of this offense-dominance, which includes the ability to maintain anonymity and hide oneself, weak hackers have a particular advantage. Strong hackers like the United States, Great Britain, and Canada are heavily wired. If you're heavily wired, you have a large attack surface. You have a lot to lose. You have difficult collective action problems coordinating industry firms and government organizations, especially in a democracy where people are supposed to be able to do their own thing. This makes defense very, very difficult, while weaker actors have particular strengths and little to lose.
Third, attribution is often thought to be a major problem in cybersecurity-- very difficult to understand who is behind an attack. It might be redirected through a different jurisdiction. It might be a false-flag operation. How can you figure out who's really responsible? And if that's true, then deterrence becomes really, really difficult. How do you respond when there's no return address to the attack? How can you credibly promise that you're going to retaliate to no one in particular? So these two categories-- that's six arguments altogether-- come up again and again.
On the "less filling" side of this debate-- and this is a lot more common, I would say, on the academic side, whereas the "tastes great" revolutionaries are more on the policy and military professional education literature. We have a set of counterarguments. But first are some inductive counterarguments that say, yeah, OK, there's a lot of stuff going on, but it's really mainly crime and espionage. There's a lot of crime going on, but it's fairly marginal compared to the great benefits that the information economy is enabling. There's a lot of espionage, but there's always been a lot of espionage. We're just not seeing the digital Pearl Harbors or the cyber 9/11s that the revolutionary crowd has been talking about.
Second, while firms are hemorrhaging their intellectual property, they're not hemorrhaging the stuff that really makes those firms valuable. You can steal text, but you can't steal context. And the more sophisticated your widget, the more dependent you will tend to be as a firm on tacit knowledge, on interactions with industries. And that is very difficult to steal remotely, unless you already understand what that firm is doing so well that you probably wouldn't need to steal it in the first place.
People that are making these arguments will often point to the problems that many countries have when they try to reproduce the Silicon Valley or the Route 128 miracle somewhere else, and they just can't quite get the special-sauce combination together of venture capital, and recreational opportunities, and academia, and startup culture, and all of the things that exist, but exist in an ecosystem.
Lastly, yes, governments really are worried and they're doing a lot, but government and cybersecurity industry actors have significant incentives to inflate the threat, especially in the last couple of years when we saw waning Iraq-Afghanistan [INAUDIBLE], especially after the financial crash. Defense budgets were starting to go down. If you want to arrest that, you need to argue that there's a big threat-- that there's a reason to continue spending-- and cybersecurity looks good for that. Cybersecurity also has a brand new set of constituents within the Department of Defense that cut across military and intelligence agencies. The more you can talk up a coherent reason for being, the more reason you have to exist.
And of course, on the cybersecurity industry side, we know that a lot of the talk about threats might be reliable, but it also needs to be thought of quite literally as advertising. If these cybersecurity firms understood security well enough to be able to say, this is our product, we're really good at it, buy some security, and the customer could say, I'm going to buy that much security, you wouldn't need to spend all of this money on really good analysis and give it away for free. But you want to give it away for free because you want to advertise: we were smart enough to reverse engineer Stuxnet. Just imagine what we could do defending the network. So you've got these kinds of [INAUDIBLE] inflation arguments.
On to the deductive side-- and these foreshadow a couple of the things that I'll talk about. I've made a lot of these arguments myself on this side of the debate. The offense-dominance argument really overlooks a lot of the things that need to be done to actually plan and execute a sophisticated attack. And Stuxnet is a wonderful example here, because it's sometimes held up as the revolutionary example of how you can use code to break something-- in this case, to degrade the performance of uranium-enriching centrifuges in Iran.
Well, that case also shows that we have an actor that's spending a great deal of time-- in fact, several years-- cautiously doing the intelligence preparation, mapping the facility in Natanz, trying to understand it, actually mocking up facilities that look very much like Natanz with the same kinds of centrifuges that the United States was able to acquire from Libya. Because everybody in here that writes code knows that you don't write the perfect code the first time. And if you don't debug it and test it, you're probably not going to be able to convince the President of the United States to endorse a covert action against Iran. So a lot of effort goes into that.
Now how much effort is Iran putting into defense? Iran pays nothing for the reverse engineering of Stuxnet [INAUDIBLE], pays nothing for the patches that Siemens and Microsoft start to initiate. The initial discovery of Stuxnet is fascinating. Stuxnet is very sophisticated-- [INAUDIBLE] zero days, the vendors that it's exploiting. One of the things that it does is it looks to see which antivirus product is defending a particular machine. It says, well, if you're ESET, I'm going to behave in this way. If you're McAfee, I'm going to behave in this way. If you're Symantec, I'm going to behave in yet a third way.
But who would've looked for a little company called VirusBlokAda from Belarus? Because what kind of market share do they have? Well, Iran's under a great deal of sanctions, and they happen to be buying this AV from Belarus. And that happens to get into a conflict with Stuxnet, which starts a reboot loop on one particular computer, which then compromises the entire operation. So here you have the best attackers in the world, the National Security Agency backed up by Unit 8200 out of Israel, making a mistake that compromised that operation and basically started to unravel years and years and hundreds of millions of dollars of work that went into this particular operation.
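To make that antivirus-fingerprinting logic concrete, here is a minimal, hypothetical sketch in Python-- not Stuxnet's actual code, and the product names and branch labels are purely illustrative assumptions-- of how malware might key its behavior to whichever defender it detects, and why an unanticipated product falls through to an untested path.

```python
# Hypothetical sketch of the AV-fingerprinting behavior described above.
# Product names and branch labels are illustrative, not taken from Stuxnet.

KNOWN_AV_BEHAVIORS = {
    "eset": "branch_a",       # one evasion routine for this product
    "mcafee": "branch_b",     # a different routine for this one
    "symantec": "branch_c",   # yet a third routine
}

def choose_behavior(installed_products):
    """Pick the evasion branch keyed to whichever known AV product is found.

    An unrecognized product (like VirusBlokAda in the talk's example) falls
    through to a default branch -- exactly the kind of untested interaction
    that can expose the whole operation.
    """
    for product in installed_products:
        branch = KNOWN_AV_BEHAVIORS.get(product.lower())
        if branch is not None:
            return branch
    return "default_branch"  # behavior against an unanticipated defender

if __name__ == "__main__":
    print(choose_behavior(["Symantec"]))      # branch_c
    print(choose_behavior(["VirusBlokAda"]))  # default_branch
```

The point of the sketch is the last case: the attacker has carefully tested the named branches, but the default path against an obscure defender is the one that gets exercised in the field.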
This discussion is also interesting because it kind of gives the lie to this question about asymmetry, which was the argument that weak actors have the advantage over the strong. Well, this was exactly backwards, right? This was the strong having advantages against the weak. And in many of the major events that we think about, whether it's China exploiting firms, China exploiting human rights organizations, Russia exploiting Georgia or Ukraine, or the US and Israel exploiting Iran, it is the stronger and more sophisticated actors that are using cyber techniques as a force multiplier for the power advantages that they already have.
Further, attribution is feasible, especially where it matters. Because the more sophisticated the conspiracy, the more likely it is that somebody is going to leave a clue, the more careful the parties need to be to not leave things, and the more likely it will be that the aggrieved party is going to initiate an investigation, especially if they have the resources to look at things other than computer forensics. They've got other intelligence streams that they'll be able to bring to that attribution problem. So ironically, while attribution is very difficult at the low end, where people are less motivated to try and work through the vast number of combinations you might have, at the high end, with high-value targets, you have a smaller number of potential perpetrators and more motivation on the political side to do it. And if attribution is feasible, then so is deterrence.
The bottom line on these arguments, I think, is that we need to think about the logic of technological possibility together with the logic of political utility. And by that, I mean it's not enough to say, given the configuration of this machine, this infrastructure with these vulnerabilities and these potential ways to access and exploit those vulnerabilities, X might happen. That's the logic of technological possibility. When you show that you can hack an airplane or a car or what have you, that's what you're demonstrating.
But you need to be able to take that possibility and weaponize it. If you're going to weaponize it, you have to tell a commander, it's going to create this kind of effect at this particular place at this particular time. Or if you're not time-sensitive, it's going to, you know, 1, 2, 3. And you have this risk of retaliation or of blowback or collateral damage or what have you. OK, so there's got to be a story of what kind of political or economic gain is going to be realized from that particular operation.
So I find the skeptical angle generally convincing in theory, but I've been struck by how unsatisfying it is in practice. People that are in the practical cybersecurity business, whether they're on the corporate side or on the government side, almost have this sense of indignation where they say, what do you mean, there's not a threat? What do you mean, the cyber domain is not an active domain of novel conflict? Step into my shoes. Sit in my chair. Every time I kick the Chinese out, they're in somewhere else. APT28 seems to have this incredible Swiss army knife toolkit. How can I possibly compete with that?
Look at the news, right? It's this litany of ongoing events. And these events are-- they're creative. They're often surprising. I mean, the only thing unsurprising about cyber is that we're surprised that we're surprised about the new dimension that's been opened up. And whether it's the Snowden revelation that the National Security Agency was willing and able to have a much broader scope of exploitation than we expected, leveraging US companies on US soil like Google and Yahoo. That's a surprise.
Whether it's North Korea being willing to hack a Japanese company on US soil to prevent them from releasing a movie, failing to do so. Again, that seems strange. That's a surprise. Whether it's China hacking the Office of Personnel Management, exposing-- depending on who you believe-- somewhere between two million and 30 million government employees, all of their personal information, leading many to believe, hey, this isn't just espionage. This is espionage on such a scale that it's really starting to look and feel different. To, of course, the ongoing drama with Russian interference in US electoral infrastructure.
These continuing creative, but inevitably drawn-out and ambiguous, provocations make it reasonable to conclude that all conflicts do now have, and will have in the future, a cyber component. And when you start thinking about more high-end, complex scenarios with China and Russia, some of those can get particularly scary.
So we've got this "tastes great, less filling" debate that goes back and forth, but we've got all this activity that looks a little bit different than either side is characterizing it. What should we do to make sense of it? So I think we really have a problem here, not just of the facts-- though there are certainly plenty of things we don't know, and we could learn a lot more if we had better data-- but a problem of really understanding what this domain is and what profit in it looks like.
So I want to argue something that might seem a little strange, but I want to argue that cyberspace is an institution. Not just that it has institutions like [INAUDIBLE], but that it literally is one, and that's political science speak for saying it is a set of coordinated relationships that depend on cooperation. It's not the way we normally, in pop culture, think about the internet.
Eric Schmidt, of course, described the internet as the greatest experiment in anarchy ever. General Hayden, former NSA, former CIA director, said, well, the internet's kind of like Mogadishu. And I want to argue, no, this is exactly the opposite. In fact, the internet might be the greatest experiment in hierarchy or institutions ever.
We should actually listen to the word that we use to describe this melange of practices and technology. Cyberspace, of course, is not just a 1984 Canadian science fiction writer's fantasy. It comes from the cybernetics movement, and when this word was coined, it was coined from a Greek word meaning steersman, kybernetes, which is actually the same root as the word government. And the intuition was that cybernetic systems-- naval gunfire control systems, digital computers-- were somehow doing something very similar to what governments were doing.
And if you look at the bureaucratic nomenclature of computer science, it's overwhelmingly about files, processes, procedures, protocols, methods, routines. This is the language of being in a large bureaucratized organization. And when you look at the history of the development of computing machines, it's overwhelmingly in corporate and government bureaucracies, to expand the scope and precision of control of these organizations beyond the capacity of the human [INAUDIBLE].
Now, in the discipline of international relations and international political economy, when we think about institutions, I like to use the definition of Douglass North. He won a Nobel Prize for thinking about the role of institutions in economic history, and he defined institutions, colloquially, as the rules of the game, or more formally, "the humanly devised constraints" that make human collective action possible. Institutions are overwhelmingly defined by specialization in the functions that people use to implement them, as opposed to truly anarchic systems, where every individual in that system is helping themselves. They implement functions like measurement, to improve flows of information from the outside world into an organization; coordination amongst the actors in the organization; articulation or enforcement of the intentions or goals of that organization-- all to provide joint benefits to the actors that are cooperating to realize them.
Most institutions rely on technologies. It's really hard to think of institutions that don't have technologies. Maybe handshakes or bows might be very simple ones, but any real institution, like a government or a university or a military organization, if you think about it, has technology along with it, and of course, technology has associated institutions. And so if we take North's sporting metaphor seriously, that institutions are the rules of the game, we can think about technology as the equipment and the playing field, and all of these have to go together.
All that is preamble, my way of saying that computers become particularly interesting in this framework because computers are literally tools with rules. Those rules-- the procedures, programs, all the things we talked about-- this is the language of institutions, and if we take that quite literally, we might expect large-scale information systems to be subject to many of the same dynamics that affect any kind of institution anywhere. Namely, we should expect to see struggles over the definition and adaptation of these institutions, because one standard will empower one group over another, so people tend to argue about who wins and who loses in these definitions. There will be fundamental tension between the regulation or decentralization of the institutional rules of the game that you're using to coordinate interaction. And you'll expect layers of abstraction to have different principles of organization depending on where you look.
No market economy is truly disorganized. It would be total anarchy if so. You still have to have common rules of the game, common currencies, common courts that allow you to do things. So even decentralized institutions still have these [INAUDIBLE] aspects.
Institutions tend to have their structures get locked in historically. They have all kinds of inefficiencies if they're not put together in the right kinds of ways, and this allows us to start looking at computing vulnerabilities through an economic lens. And there has been, I think, a burst of really creative activity in the last 15 years looking at the economics of cybersecurity and characterizing things in terms of the economic incentives and disincentives which create vulnerabilities-- for example, the failure to build security into the early internet because nobody took security seriously. They didn't have to. It was a culture of scientists that largely trusted one another. They didn't expect large-scale exploitation because they didn't think about what it would mean to have millions, if not billions, of users.
Firms rushed to market to sell functionality rather than build in security. You see this repeated time and time again. We're going through this again with the Internet of Things.
Oh, my gosh, again, they didn't think about security. They just thought about functionality. Well, security doesn't get rewarded, and you want to be the first to market because there are huge network effects if you are the first one to market.
[INAUDIBLE] third-party externalities-- that means a cost or benefit that's not borne by the parties to the exchange. So if your computer is infected with a botnet, but you get to keep using your computer because you have plenty of bandwidth, you actually don't really care, but it might be attacking somebody else's computer. That's an externality of you not patching, which Microsoft has now fixed. The Internet of Things has not fixed it, and so now we're seeing toasters starting to attack websites. People that own the toasters still get to make toast, and unfortunately, someone somewhere else in the world is getting attacked by that toaster.
So in many ways, we can think about this institution, which allows unprecedented improvement in control in the economic, governmental sphere-- an unprecedented global institution for improving control and efficiency through common standards and protocols that are still not quite perfect, and these inefficiencies now allow all kinds of exploitations. They allow exploitations in the same way that inefficiencies in other kinds of institutions do. So a yellow light is a design feature that's supposed to slow everybody down when you have something going from green to red. It makes traffic work more efficiently.
But of course, some people say, well, not only is it almost not green anymore, it's still not red, and so we'll speed up. And the world becomes a little more dangerous. We all now have to deal with people exploiting yellow lights in this way by looking around a little bit more, so you're piling a social institution on top to try and deal with this imperfection that's in the yellow light.
Anytime we talk about legal loopholes, it's somebody exploiting something in the law that the law never intended-- because maybe the law wanted you to carry forward some losses that a business had that were in the range of a million dollars, and nobody ever thought that you would take a billion-dollar loss and spread it over five or 10 years and not have to pay any taxes at all. That's a flaw in that legal regime which is being exploited. You can also think about the way that criminals or insurgents or terrorists operate within the society that is hosting them. They're not openly resisting or rebelling against institutions. They have to work within them in order to not be discovered, not be found out. So all of this kind of cheating within an institution fundamentally depends on a degree of cooperation amongst the very people that are exploiting it.
So part three-- so if you buy this idea that cooperation, reliance on common standards and protocols, common abstractions-- which are generally good and improve efficiency for everybody-- is necessary for making information technology useful, and that exploiting this is actually a form of cheating within these institutions, why might reliance on cooperation in the very means of conflict incentivize restraint? The kind of exploitation I've talked about does rely on a degree of deception, and this is profoundly true in most offensive cyber operations. Offensive cyber operations are not kinetic operations to force your way through a door. They depend on somebody leaving the door open for you, or convincing somebody to open that door.
There is no brute force in cyberspace. A so-called brute force attack is actually just trying a bunch of keys. It's not kinetic brute force. Martin Libicki likes to say there's no forced entry in cyberspace, which means we're relying on technical exploits, and a technical exploit means I'm going to use your protocol, your machine, your standards in a way that you, the designer and the firm, did not intend. There's more variety in the world than you're able to handle, and I'm going to present you something that's going to look normal to it, but of course, is now going to realize an intention which is not in the interests that you had conceived.
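As a concrete illustration of the point that "brute force" in cyberspace is just systematic key-trying rather than kinetic force, here is a minimal Python sketch; the target password and search alphabet are made-up assumptions for the example.

```python
# Minimal sketch: "brute force" as exhaustive search over candidate keys.
import hashlib
import itertools
import string

# Pretend this is a hash the attacker has obtained; the plaintext is assumed.
target = hashlib.sha256(b"acb").hexdigest()

def brute_force(target_hash, alphabet=string.ascii_lowercase, max_len=3):
    """Try every lowercase string up to max_len until one hashes to the target."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None  # search space exhausted without a match

print(brute_force(target))  # prints "acb" after trying candidates in order
```

Nothing is forced open here; the attacker is simply enumerating possibilities the system is already willing to accept, which is the sense in which there is no forced entry.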
We need to engage in social engineering, which does the same thing. We want to take advantage of gullible users. Make them believe that we're working together.
All con jobs fundamentally rely on the cooperation of the mark. You believe you're in a safe place, or you want to believe that you can get a cheaper house or a cheaper taxi ride. You participate in the exploitation. That's what makes the con possible, and it's fundamentally different than a brute-force mugging. And if there's no cooperation, there's no connection; and if there's no connection, then there's no target.
But deception is inherently self-limiting. Deception can do a lot of things, but the more sophisticated that deception, as we spot [INAUDIBLE], the more likely it will be that there will be mistakes made that will compromise that activity. Step out of cyberspace, again, and think of other places where deception figures heavily. We see a lot of deception on the battlefield, in traditional military operations, but most of it is tactical-- ambushes, feints, diversions.
Deception on a large scale-- something like the deception before D-Day to try and convince the Germans that the landing was going to be at Calais and not Normandy-- is very rare, because it is very difficult to pull off. You need to handle all of the information channels that the enemy might have. You need to show them something that's not there and hide the things that are there. And these kinds of gambits have many, many ways to fail.
So more sophisticated compromises are more likely to fail, and also defenders can be deceptive as well. So if there's an expanded potential for deception, those benefits don't just accrue to the attacker. Those benefits can accrue to the defender, too, if you're willing to use it.
Think about a minefield. A minefield works. It actually provides deterrence because you tell somebody there is a minefield here, but they don't know where the particular mines are. They are camouflaged, and it's going to take you a tremendous amount of work to actually walk through this 100 yards. It would be very easy to walk through if you didn't have to deal with [INAUDIBLE] deception along the way.
How much more difficult if you actually have to deal with a counterintelligence operation, where you're being lured into a honeypot, a honey net, or something else, where you're being allowed to be in the system. You're being watched. You're being given bad data because they want to learn about your methods.
They want to learn about what your attack platform looks like. They want to see if there's any ability to start exploiting what you're actually doing. This is active deception against the deceivers. And both on the commercial side, and certainly on the government side, the increasing discussion of active defense, of hack back, I think starts to suggest that this more deceptively oriented defense is becoming more and more common.
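As a toy illustration of that kind of deceptive defense, here is a minimal, hypothetical honeypot sketch in Python; the port number, the fake banner, and the logging behavior are all assumptions chosen for the example, not a description of any real product.

```python
# Minimal honeypot sketch: a fake service that logs who probes it and
# feeds back fabricated data so the intruder reveals their methods.
import datetime
import socket

def run_honeypot(host="0.0.0.0", port=2222):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:  # runs until interrupted
            conn, addr = srv.accept()
            with conn:
                # Record the connecting address and time for later attribution work.
                stamp = datetime.datetime.utcnow().isoformat()
                print(f"{stamp} connection from {addr}")
                # Hand back a plausible-looking but bogus service banner.
                conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")
                data = conn.recv(1024)  # capture whatever the intruder sends first
                print(f"  first bytes: {data!r}")

if __name__ == "__main__":
    run_honeypot()
```

The defender here is doing exactly what the talk describes: letting the attacker connect, watching them, and giving them bad data, so the deception now cuts in the defender's favor.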
So if cyber exploitation relies on defense-- excuse me, deception-- and deception has some of these issues, where is deception going to be useful?
In international relations, when we think about conflict, we think about two abstract classes of conflictual behavior where conflict is useful. We think about conflict as a form of bargaining. We're negotiating something. It's just that part of my negotiating strategy might involve using force to coerce you in a certain way.
And force can be useful in various ways. The most obvious way to use force is to take something away from somebody, or to break their ability to resist or to fight back. You can change the balance of power, or you can change bargaining power.
The second way that you can use force is to communicate information to the other side about what you're willing to do, what you're willing to fight for, how far you're willing to go-- that you might be willing to escalate and go further. The difference is that the first form, brute force that changes the balance of power, is working right now in the present, whereas the communication of information is saying, this is what I care about, and this is what I'm willing to do for it. So deterrence, which is trying to stop something from happening, or compellence, which is trying to set something into motion-- both of these are communicating.
So how useful is deception for both of these? Well, deception has its limits when we start thinking about coercion, using deception to communicate. There is an intrinsic contradiction here.
A lot of the pessimism about deterrence in cyberspace comes from exactly this problem. How can I credibly threaten to retaliate if, by revealing what I'm doing, people can just patch the vulnerabilities or close the vectors that I'm going to rely on? If I'm just saying something might happen-- well, that's not as credible as saying this particular thing will happen. Which means it's also increasingly difficult to coerce somebody to do something using deceptive means.
Now, there's lots of caveats and asterisks on this. Ransomware is, I think, a really important exception which proves the rule. In ransomware, the attacker cripples your machine and says, hey, you want to get access back to your data, you need to pay me some bitcoins to this address. If you do, then maybe they'll unlock your computer. The fascinating thing is they often actually do.
But there are real limits to how far that can go because if the ask is too high, you as an individual user might say, well, I already backed up my data. I don't really need it. I'll just go buy a new machine.
If you're a corporation or a government, you might say, well, I'm going to start an investigation and call the FBI and get things in motion, and now the perpetrator is starting to have to worry. Did they leave clues? Is there something else that's going to come out? All right?
Relying on anonymity to make threats is really difficult because how do you surrender to no one in particular? So cyber is not very useful for communicating information, which is very ironic because this is an information technology, and this is one area in which its informational uses are not very pronounced.
Now, on the other side, the brute force, or what I like to call revision-- revision of the balance of power-- this is where cyber really comes into its own. This is crime where you're looking to steal some resources, or it's intelligence where you're looking for an informational advantage by learning about somebody else's capability or their bargaining position. It might be manipulating narratives, propaganda, activism, or it might be actually using them in a war fighting mode to [INAUDIBLE] capabilities.
But as we go through that list of things that you might be able to do, there are increasing operational barriers to getting those right, and I won't go through all of those in depth. We've already talked a lot about them. But basically, the more you expect to get out of a particular attack, the more you're going to have to put into it. There's no free lunch anywhere, as we'd expect. If you want to break a cyber-physical system without getting caught, without facing retaliation, it's going to take a great deal of preparation [INAUDIBLE] capabilities to make that happen.
Even on the low end of crime, we know that if you actually want to turn that hack into money, that takes a great deal of effort. And this is why you can buy all kinds of perfectly good credit cards and bank accounts in online hacker forums that are being sold for pennies on the dollar. Here's a credit card. It's got a $10,000 limit, and I'm going to give it to you for $5. What the heck is going on there?
Well, there's $9,995 that are being discounted because actually translating that into money is the hard problem. You've got to come up with some sophisticated laundering scheme that depends on mules and ATM cards or little creative deposits that go somewhere else, and that's really the hard part. So all of these have additional barriers-- barriers to operation and monetization and political usefulness-- that are added on as we increase the scale of the attack. So I think in many ways, this is kind of why we have this "tastes great, less filling" debate: when we focus on the low-end attacks, the low-end scale, we do see a great deal of offense dominance, if you will, a weak-actor advantage, a lot of creative activity. But when we look at the most worrisome scenarios-- the cyber 9/11, a digital Pearl Harbor-- we see both strategic disincentives in the form of deterrence, and we see operational barriers to actually getting something done.
So just to wrap up here in the next 10 minutes: where would we expect to see cyber operations? Where would cyber operations be most likely? I've talked a lot about the growth of cyberspace and the political cooperation that makes it possible, but the other big trend in recent decades is the gradual lessening in the frequency of armed conflict. That may seem a little insane when we look at the Middle East melting down and some horrific things happening in Syria, but the big picture is pretty clear. You have far less interstate conflict than we ever had before.
A lot of it has been replaced with civil war, internal war, insurgency. Even that is dropping down. Steven Pinker is famous for writing about this, but this data has been interrogated and reproduced in other kinds of studies. We have this general attenuation in the frequency and intensity of warfare.
Now, there's a lot of different explanations for this. These explanations may have little to do with cyberspace. They may have a lot to do with cyberspace. It might be the case that greater US conventional and nuclear power just reduced the attractiveness of war to most actors that would have the power to get involved.
It might be that better monitoring of the balance of power gives people a better idea of what's at stake and fewer places to hide, and therefore, if you can't hide, there's less incentive to go to war rather than make a deal. It might just be that there are better alternatives to war for getting the kinds of things that people used to fight over. [INAUDIBLE] economic interdependence on a global scale just means it's easier to trade for stuff than to fight for arable acres to make it yourself. There might just be more robust cosmopolitan civil society. Some people like these ideas.
I'm not arguing that cyberspace causes any of these, although it's interesting to note that cyberspace is correlated with all of them, correlated with increased US power, correlated with economic interdependence, correlated with the ability to talk to people around the world. But for whatever reason, as traditional war becomes less attractive, war in cyberspace becomes more attractive. Politics doesn't stop. People have fewer incentives to actually resort to force. They are more involved in cooperative institutions, so what's left is the complexity of that institutional landscape and the imperfections in that institutional landscape that can be exploited for small marginal gain.
So here's the paradox: as war becomes less attractive, cyber conflict is becoming more attractive. And in many ways, this recalls this old idea from the Cold War of the stability-instability paradox. This was the idea, first advanced in the '60s, that nuclear war is bad. Both sides understand it's bad. They both watch "Dr. Strangelove." They don't want to end up like that.
Nuclear weapons are clearly destructive, dangerous, ghastly. They can't defend against them. Everybody knows the other side has them. You have mutually assured destruction, which means it's not rational for either side to go to war. But for the same reason, you can't credibly threaten to commit suicide just because somebody is doing something annoying in El Salvador or Vietnam.
So you have a proliferation of small-scale conflicts below the threshold of nuclear retaliation. You can have nuclear stability and conventional instability. I would argue that, because of several changes in military power, we now have general conventional and nuclear stability, but a great deal of gray-zone, cyber, and other forms of instability, which allow these more minor, less extreme forms of conflict to really proliferate.
So in conclusion, cyber conflict is not a new form of warfare. In fact, it occupies a more middling position between peace and war, between military and civilian affairs, between hierarchy and anarchy-- like intelligence, like irregular operations, like coercive sanctions. And in that world, states become more important, not less, because they're the trees around which these institutional systems are built. Conflict looks a lot more like the kinds of conflict we see in trade regimes-- arguments about how institutions are designed, who benefits from the distribution of given rule sets. It doesn't look like a conflict in [INAUDIBLE], where you have two opposed autonomous hierarchies going at it.
Cyber in international security is more in the category of irregular operations. It looks like intelligence. It looks like counterintelligence, but on an unprecedented scale-- again, this middling position between war and peace, between anarchy and hierarchy. There are a couple of interesting caveats to this in the world of cross-domain military operations, which we can discuss in the question period, but I won't go into now.
In general, because of the downward vertical pressure on conflict, on large-scale warfare, you have this ironic, increasing horizontal proliferation in the variety and the creativity of different kinds of conflict. So I think this allows us to move beyond this light-beer "tastes great, less filling" debate, maybe to a more satisfying pitcher half full of some quality craft brew. The half-empty part is, yes, cybersecurity is dangerous, frustrating, complex.
It's a problem that's going to be very difficult to solve, maybe unsolvable, but the half-full part is that it's predicated on things going pretty well on a civilizational time scale. So you've got conflict looking more complex, but not necessarily more dangerous. And with that, I'll wrap it up. I'm really looking forward to your questions.
[APPLAUSE]
SPEAKER 2: Thank you very much for a great job. So I have a question on the stability-instability paradox that you brought up, and I'm giving myself [INAUDIBLE] grad student here. So with regard to the idea that stability at a nuclear level helps proliferate instability at a lower level, which, at least from what I understood, is the cyber level-- I've been sort of reading up a little bit on the United States's cyber doctrine, and there's a task force report from the Pentagon in 2013 which says that if there's a cyber attack, it should be regarded as any other attack, and that the United States needs to engage its deterrent force, including the threat of a nuclear strike against would-be cyber perpetrators. And I'm wondering here whether the line that we are clearly drawing between the nuclear level and the cyber level, separating them, is in fact a lot more blurred than it appears to be. And so I'd like your thoughts on that.
JON LINDSAY: Yeah, great question. The argument is not that there's a super clear line. It's that there's a boundary that is put in place because of the clarity of deterrence at one level, which is not credible at another level. The only reason that you would want to have ambiguity in a declarative deterrent policy is because you have a credibility problem.
If you tell your kid, hey, please stop beating up Jenny, or maybe I might think about possibly taking away your dessert-- well, Bobby's going to say, well, I'm maybe going to get the dessert anyway, and keep beating up Jenny, so you're totally undermined there. But if you can say, absolutely, hey, no dessert if you don't stop beating up Jenny-- great, Jenny's going to be OK. Everybody gets dessert. We're all happy.
The problem is, how can you say if there's a cyber attack, there's going to be a response? Nobody can say that, because nobody's quite sure about that threshold or wants to commit to that threshold, for a number of reasons, including that the US wants to be able to do its own cyber attacks. So you're going to introduce ambiguity to deal with your own credibility problems, and this was the same in the Cold War. The fundamental puzzle of nuclear weapons is that they're unusable. You can't threaten to commit suicide for anything, because it's worse than any possible benefit. So what you have to do is threaten to put things in motion that might be really, really bad-- [INAUDIBLE] talks about starting to rock the boat.
If you tip the boat over, it capsizes with both passengers, but if you're just rocking the boat, you're just upping the risk, and that risk, essentially, discounts the severity of the threat. So that's what ambiguity, in this case, is doing. If a cyber attack is really, really bad, somewhere up there, there's something that might happen. You're trying to deal with the fundamental noncredibility of dealing with that.
The way to separate stability and instability, in its classical or its current formulation, depends on where the clarity of deterrence lies. If it lies at a higher level-- the nuclear realm in the Cold War, [INAUDIBLE] now even in the conventional realm-- but because cyber is really useful for all of these low-level revisions of the balance of power and moving benefits around, and not useful for deterrence, you've got a blossoming [INAUDIBLE]. So it's constrained, not just by deterrence, but also then by [INAUDIBLE].
SPEAKER 3: So [INAUDIBLE] to ask [INAUDIBLE], talked about some [INAUDIBLE] hasn't talked about in the talk, but you did have a sentence in your chapter that you gave us, which I want to ask you about. Most of the sentence summarizes the way you concluded, where you say limits on the severity of conflict brought about through major improvements in the military control capacity of leading states will tend to encourage an expansion of the complexity and subtlety of conflict. That, I take to be another way of saying what you said in your conclusion, but there's a minor clause at the end of the sentence: comma, "together with ambiguous, if not outright marginal, political-economic effectiveness." You didn't talk about political and economic effectiveness. You certainly didn't talk about economic effectiveness. Could you say something about that, or would you want to refer me to the book when it comes out?
JON LINDSAY: I refer everybody to the book when it comes out. Gotta get there myself, which is why this conversation is really, really useful. What I was alluding to there, actually-- as [INAUDIBLE] the previous question-- is that I want to argue that this interesting cyber paradox we have is not simply a function of deterrence, as in the stability-instability paradox, which is: just because there are no consequences for going to a certain level, I'm going to go to a lower level, and that's just fine. I think that does apply, but there is this additional reliance on cooperation and the gains to be had through connectivity and all of the positive, pro-social uses of the internet, which also impose a restraint.
So you've got kind of a doubly restrained environment, where you're both deterred from [INAUDIBLE] higher level of intensity, and there are benefits from continuing to interact, which then moderate the level of coercion that you're using.
SPEAKER 4: So the Russians tried hacking the Democratic National Committee's emails, and they came up with some not very important stories that have a short-term effect on the American election. Does the story end there, or in responding to that, does President Obama or his successor, whoever she may be, [LAUGHTER] respond by saying, we're going to up the ante on those lousy Russians? We're going to make sure they understand that there are consequences to what they do?
And that may create a feedback loop to the beginning of your theory. That is to say, it may have an effect on the severity of conflict. It may be that in responding to these subtle and complex signals, governments or actors in general may up the ante, and then you may increase the severity.
JON LINDSAY: Yeah, OK. So there's at least three different pieces to that. One is, what are the Russians doing, and does it make sense? Two is, what could be done in response? And three is, does that start to increase mistrust and create a conflict spiral where you would eventually resort to more kinetic means? Is that right?
So this is a fascinating case, and I really like looking, from east to west, at the four big episodes of Russian cyber activity-- Georgia, Ukraine, Estonia, and the United States, a little bit out of order temporally there. In all four of them, you have a vigorous contest of propaganda. In three of them, you have some fairly large-scale denial of service attacks. In two of them, you have the use of clandestine special operations, and in one of them you have full-scale military invasion.
So in the place where Russia cares the most, and is most confident that there will be no response, hey, they use all the force they need to get what they want done. That's Georgia. Ukraine, a little bit less, because you're a little bit closer to the NATO boundary. You still care about it more than NATO does; you're just a little bit more restrained. Estonia, actually part of NATO-- you're pulling your punches even more.
United States-- you're doing some very annoying and controversial things, but in the context of a great deal of existing US-Russian tension, existing US sanctions, and verbal abuse of Russia for its activities in Ukraine. So I think Russia is fairly confident that it can do these things and get away with it-- basically saying, United States, what are you going to do?
Now, does that feed a complex spiral that can eventually end up in some Baltic scenario? That is certainly possible. But any Baltic scenario, or on the Chinese side, the South China Sea scenarios-- these would certainly have some cyber involved, but it's cyber between powers that are already willing to risk military confrontation, and cyber is just interjecting a little bit of uncertainty in a particular way.
So that's the [INAUDIBLE] main caveat that I'm talking about, where these restraints that I think are very real within the cyber domain, when combined with other domains and a willingness to risk real military conflict, could have some different effects. We can talk about that really specifically if you want.
SPEAKER 5: Thank you. I'm [INAUDIBLE]. So, really fun talk, I really enjoyed it. But so far we've been talking about state actors. When we talk about the stability-instability paradox, one of the primary means of fighting it out in the Cold War was the use of third parties and paramilitary organizations to act as proxies. So where are our third parties, where's WikiLeaks in our story right now, and how does that affect the possibilities for continued glass-half-full stability in the cyber realm?
JON LINDSAY: The proxies are everywhere, for a start. But the reason that we're having all these surprising interactions is because there's some third or fourth party that is now getting co-opted, exploited, involved in a way that wasn't quite appreciated, that wasn't under an explicit or implied protective umbrella, right?
This is why North Korea is willing to start risking it with Sony-- still very tentative, still trying to obfuscate its identity. But it's sat there, and it's watched China hammer Fortune 500 firms over the past four years, and nothing's happened. The US government hasn't even called out China by name. So it's feeling that if it hides, and we're only talking about a movie, maybe it can go ahead and risk that. WikiLeaks is another great example.
So I think you have actors looking at possibilities that are not well defended, and they're not well defended precisely because nobody has decided to clarify, either implicitly or explicitly, what they're willing or able to do. So the glass-half-empty part: increasing complexity, increasing actors, increasing interdependence. Proxies are a huge part of this. So we should expect to keep getting surprised. We'll all have lots of good employment in the future.
SPEAKER 6: Adding on to the discussion about non-state actors, recently, with China and America, President Obama and the premier got together. They had a little chat, and they said, OK, let's stop the cyber attack thing. But given the independence of a lot of these proxy actors, how much control do you think these states actually have over the proxy non-state actors that conduct these things? For example, how much influence does Russia actually have on WikiLeaks? Can they control that? When state actors promise, we don't want to do cyber attacks anymore, can they actually enforce that on their underlings?
JON LINDSAY: In China's case, there's a great deal of control, especially since a lot of that is state sponsored, conducted by the Chinese People's Liberation Army. Some of it was moonlighting on the side. And there's certainly an ongoing military reorganization that only partly had to do with these negotiations in September of last year. A lot of that has been cracked down on.
Getting governments to agree not to spy on one another is like promising somebody you're not going to lie to them. It's a really difficult proposition. It's important work, though, because it's the first time that we've ever had anything that starts to approach a norm about espionage, even though it's about only a really small part of espionage, corporate commercial espionage. That's the first time it's ever happened. There's never been international law on even this little slice of espionage.
But what appears to have happened is not that it's gone away, but that China has switched to more technically capable actors. Instead of having 3PLA, which is the Chinese NSA, doing lots of stuff to lots of people, a lot of that activity is switching to the Ministry of State Security, which is kind of like the Chinese CIA. It's an important intelligence organ, and it's got some really good technical sophistication. Those are the guys that Microsoft turned over its Windows source code to.
So they are still very active. China has been forced to up its game and not get caught as easily. Remember, they only caught the NSA because of Ed Snowden and a couple of other things. If it weren't for Edward Snowden, there'd still be a lot of stuff that wasn't known, because it just wasn't as noisy. So I think this norm against espionage has just forced people to get better at espionage. Don't get caught is the message. There was another part of your question?
SPEAKER 7: Yeah, I'm just more worried that we have this idea of a boundary, as you said, between total escalation and the lower end, like lower-intensity conflict. I'm just worried that, at least in the case of China, as you said, these state actors do have control. But in cases where the state has, at best, tenuous control of the non-state actors conducting these things on their behalf, is there a real risk that these proxies--
JON LINDSAY: Oh, this is where the operational barriers really come into play. Those non-state actors that are motivated, for whatever reasons, have the motivation but not the ability. Those that have the ability, because of all of the planning and intelligence it requires, don't have the motivation. That doesn't seem to be [INAUDIBLE]
SPEAKER 8: For China.
SPEAKER 9: My understanding is that the way mutually assured destruction is argued, it depends on both sides believing the other side has sufficient control so that it could launch a retaliatory attack under almost any circumstances. That's why we have those submarines out there.
Yet if it turned out that the control systems for these counterattacks were heavily dependent on networked information systems, then successful cyber attacks would compromise either your belief, or your belief that the other side believed you believed. And the whole argument for mutually assured destruction goes away. Richard Danzig calls this mutually unassured destruction, or MUD, in Too Deep.
It seems, therefore, that there is an interaction between cyber and what used to be a nice world order that is rather disturbing. Maybe you could tell us where that fits in--
JON LINDSAY: I'm very glad that you brought this question up. I mentioned a couple of times that there are some caveats and asterisks, and this is a really big one. This is another one of these exceptions that prove the rule, where from the same argument, we can get into a really bad place. Actually, my colleague Erik Gartzke and I have a paper called Thermonuclear Cyberwar that's coming out. Both of us have been on the skeptical side: hey, cyber war is overblown.
Then we got together and said, but there is this one exception. And the one exception is hacking nuclear command and control. And the reason that's a huge problem goes back to this question: why do nuclear weapons work? They work because of transparency, right? I know and you know that these are terrible, ghastly weapons that kill lots of people in ways that are very difficult, if not impossible, to defend against. So you can parade them through Red Square, you can advertise them. And that advertisement helps to make the [INAUDIBLE] stable. And yes, you need to secure a second, retaliatory strike, et cetera.
Cyber is exactly the opposite; it depends on deception. You can't reveal it. If you reveal it, you lose it. Especially with nuclear command and control: if you ever told somebody with nuclear weapons that their command and control was compromised, they wouldn't just sit there and do nothing about it; they would fix it. So you can't mention it.
So now here's this terrible situation, where I know that your nuclear command and control is compromised, but I can't tell you, because then you would fix it. Which means you think you have a deterrent. Because you think you have a deterrent, you're willing to run some significant risks, believing that I will back down. But I know that you don't have a deterrent, so the balance of power is in my favor, and I'm willing to run risks that you're not willing to run. And so we're both rocking the boat, each believing that our resolve is greater than the other's.
This is a huge problem, because it gets at exactly this tension: nuclear weapons are useful as political signaling weapons, while cyber is useful for changing the balance of power. Most of the time, you would expect that to put a bound on the intensity of cyber. But if you're already thinking about starting a nuclear crisis somewhere in the world, that's a huge problem.
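[Editor's note: the boat-rocking logic Lindsay describes here can be illustrated with a toy expected-utility calculation. This is only a minimal sketch with hypothetical payoff numbers, not anything presented in the talk; it just shows how one side's private knowledge that the other's deterrent is compromised can leave both sides expecting escalation to pay.]

```python
# Toy expected-utility sketch of the compromised-deterrent scenario above.
# All payoff numbers are hypothetical and purely illustrative.

def escalation_value(p_other_backs_down: float, win: float, lose: float) -> float:
    """Expected payoff of running escalation risks, given the perceived
    probability that the other side backs down."""
    return p_other_backs_down * win + (1 - p_other_backs_down) * lose

STATUS_QUO = 0.0  # payoff of not escalating

# The defender believes its nuclear command and control is intact, so it
# expects the attacker to back down and rarely fears full retaliation costs.
defender = escalation_value(p_other_backs_down=0.95, win=10, lose=-100)

# The attacker secretly knows the defender's deterrent is compromised, so it
# expects the defender to fold and discounts the cost of retaliation.
attacker = escalation_value(p_other_backs_down=0.90, win=10, lose=-20)

print(f"Defender runs risks? {defender > STATUS_QUO} (EV = {defender:.1f})")
print(f"Attacker runs risks? {attacker > STATUS_QUO} (EV = {attacker:.1f})")
# With these illustrative numbers both expected values are positive, so each
# side rocks the boat believing the balance favors it -- the instability
# described above.
```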
And it's not total fantasy, because you can look at this program from the Cold War in the 1980s, which we found out about after the Wall came down: the East Germans had penetrated NATO, and they had found this program called Canopy Wing. Canopy Wing was this massive, multi-pronged electronic warfare program aimed at Soviet command and control. It was going to use electronic warfare, the Voice of America, and all these things to confuse, to put in disinformation, and to try and create these counterforce capabilities. So you can only imagine that in the digital world, you would do the same kinds of things.
The people at US Cyber Command tell me I shouldn't worry as much about that, because there are lots of redundancies, and the Russians are still analog anyway, with big 12-inch floppy disks. But the theory of deterrence is certainly undermined if you're hacking into command and control.
SPEAKER 10: Yes, but there's another way to look at it, which is equally depressing. If you know that you've penetrated your adversary's command and control, then if you think about it for a second, you realize that you might yourself have been penetrated. Because after all, they're pretty smart, and you're smart.
And now you're living in a world where you believe that you don't have this control, and you realize that they'll go through the same thoughts and realize that they don't have this control. And now nobody's prepared to depend on automatic retaliation, and you're tempted to make preemptive first strikes, because that's the only option you have other than being annihilated.
So there are at least two possible instabilities once cyber penetration is possible. It's hard to know how people are going to think, but it seems to me this is a very broad undermining of-- I mean, isn't that existential threat level? And it undermines all of our thinking about deterrence with nuclear weapons.
JON LINDSAY: Yeah, so I mean, that could go a lot of different ways. If you suspect you're penetrated, then you know that you're weaker than you thought you were, so you're not willing to run as many risks. If you found out something was going on, that might make you a little more cautious. But if something was going on and you still had control, but you thought you were going to lose it, you might then have this use-it-or-lose-it mentality, which becomes a problem.
So yeah, I think this is a major problem. We don't even have to talk specifically about the nuclear side. Take high-intensity conventional conflicts with cyber means in there. That can also accelerate your willingness to go first, because you believe that if you're going to use cyber to blind the other side's radar and command and control, you need to use it quickly, and it's only going to create a temporary window. Once you've got that window, you need to jump through it, so you may be acting faster than you would otherwise.
Both sides believe this. As we know, both the Chinese and the Americans are really excited about network warfare, so there could be incentives to move fast in a crisis. But both sides are already thinking about militarized conflict, so again, this falls into that cross-domain aspect that I was talking about. We're talking about cross-domain interactions between militarily capable competitors. I have lots of worries there, and so does US Strategic Command. I think they should. But that's very different from the exciting debate that animates a lot of other discussions.
REBECCA SLAYTON: [INAUDIBLE]
SPEAKER 11: Well, first of all, thank you for a really interesting talk. I'd like to ask you about the role of the public in assessing or thinking about the severity of the cyber threat, in that you can have a low-level cyber attack that sows public fear and confusion, like we had with the OPM hack, and then that puts public pressure on governments to respond. So does public perception and confusion, a lack of familiarity with cybersecurity, make the ambiguity of cyberspace more dangerous than if we only think about governments signaling to each other, thinking [INAUDIBLE] to each other?
JON LINDSAY: So far we haven't seen any public pressure to respond to that kind of activity. I mean, what is that activity saying? Is it saying, hey, this is the beginning of something more that will follow, or is it saying, this is as much as I'm willing to risk right now, because I don't really want to go further? So when you see that kind of activity, is it provocative, or is it actually signaling a certain amount of weakness, an unwillingness to go further? So there's that ambiguity that I think we have to take into account.
But there just hasn't been much public pressure, because people don't feel the impact in any direct way, and there are a lot of good things that they're getting from the relationship with whoever the perceived actor is, so there just isn't a call to do something, other than in that kind of infotainment sense.
REBECCA SLAYTON: Does that answer your question? OK. I have myself next.
JON LINDSAY: I wrote you the check.
REBECCA SLAYTON: So you've talked a little bit about deterrence, and you're skeptical of it. You're skeptical that cyber deterrence is going to work because of attribution problems, and because of the argument you mentioned, the use-it-or-lose-it argument: that once you've used a weapon, it can be reverse engineered, and then it's no longer useful. So by the time you've proven your weapon, it's no longer actually a threat.
And I'm wondering, on the other hand, your talk also emphasizes that for any high-value target there's actually a very labor-intensive and very skill-intensive process going on. So if we were to refocus on the skills as the weapon, rather than the technology as the weapon, does deterrence become more plausible? Does that make sense?
JON LINDSAY: Yes, yes it does. So my response to that question is that there are two different kinds of deterrence, and one of them really matters here. There's this concept of general deterrence, which is a way of preventing people from challenging you in any way. And then there's immediate deterrence, which is getting someone to back down from a challenge that they've already made.
I think that this kind of dependence on skills and institutions is fantastic for general deterrence. I think the Snowden revelations are fantastic for US deterrence. Now, a lot of US officials don't like it when you point this out. You can see glimmerings that Admiral Rogers actually agrees with this, even though he will point out immediately, we lost all this tradecraft, and that's bad, bad, bad. Well, he's an intelligence officer; of course that's what he's going to say. Intelligence officers want to spy, they don't want to deter, because deterring is a very public thing.
But Snowden basically said, here's a lot of stuff we did, and maybe now it's all gone. But there's more where this came from, and the smart gals and guys that made this stuff still work for us, and they're still putting this stuff together. So beware: that capability probably still exists. Stuxnet said something similar, right? Oh my gosh, there is a country that is willing and able to put all of this stuff together. You can't be sure that it's not going to be used. It's going to be out there.
But in the immediate situation, it's like, OK, you have provoked me, and I'm going to explicitly say, hey, no further, or else X. That "or else X" is still going to have a technological component that's going to include, in order to make it credible, some vulnerability, some vector, which is going to give you some information about how to shut this down. This is why the Japanese couldn't announce where they were coming from when they attacked Pearl Harbor. They might want to use it as a threat, of course, but you can't, because it needs to be a surprise. That's still dependent on a great deal of skill; carrier warfare is incredibly difficult. But in the immediate situation, they could use it.
And by the way, I would say that the general deterrence point, and the value of the Snowden and Stuxnet revelations, we can really see that reflected in the pivot that is happening right now in Chinese military cyber doctrine, right? It's very clear that they've gone from this doctrine of, net warfare is the awesome enabler of the weak, to, holy crap, we are really vulnerable to the United States.
REBECCA SLAYTON: Thank you. Well, I don't think I see any further questions, so I think it's time to thank you--
JON LINDSAY: I thought Matt had a question.
REBECCA SLAYTON: There is somebody.
MATT: It was actually a follow-up to something you said in response to the question on public opinion, and I wonder if you underestimated that, maybe because you're lucky enough to live in Canada and aren't paying quite as much attention to what's going on in our political debate. But we did have the president declare that there would be a response to Russia's cyber intervention in the US electoral process, and we had a Democratic candidate accuse her opponent of being a Russian stooge for celebrating that intervention, in a way that makes us think she thought the public would react favorably to that. She mentioned it several times in the debate.
So when you suggest that you don't see the public responding in the way that Naomi thought they might, and potentially exacerbating the situation, is it just in the minds of our politicians, then, and you really just don't see the public caring about that? And if they did, how would that complicate the situation?
JON LINDSAY: So my answer to her question was trying to take the state out of it. This is a situation where the state is very much in it, and it's the state that is making this so egregious and difficult. I'd say the difficult and frustrating aspects of these interactions are still within the realm of that glass-half-empty picture I was talking about: propaganda, disinformation, intelligence, and counterintelligence on an unprecedented scale. So I would still, unfortunately, expect to see that kind of stuff, and expect to see more of it.
So this brings us into the state-to-state realm. What is the response going to be? Now, it'll probably be proportionate, and it will be sub rosa. Maybe we'll hear about it, maybe we won't. It'll be kind of the same sort of stuff that happened with Korea, where you'll be like, well, we may or may not have shut off your lights. And we've sanctioned a few of your officials, and I understand you're already the most sanctioned regime in the world, so this isn't necessarily going to hurt, but we've at least shown that we're doing something.
So the response to Russia, which has had lots of hemming and hawing associated with it, will be sort of similar, because there's only so much that you can do to Russia, and there's a lot of stuff that we're worried about Russia doing. So something will be done, but it will still be very restrained, because we want to say, hey, we're not just going to roll over and take it, but we're also not going to push back that hard.
REBECCA SLAYTON: I apologize. Did I miss anybody else? OK. Was that a yes? No, not a question. OK. Well, with that, I want to thank you very much for a very good talk.
Political scientist and U.S. Navy veteran Jon R. Lindsay discusses the military implications of cyber weapons Oct. 26, 2016 in the second of three Einaudi Center Distinguished Speaker Series talks on international aspects of the cybersecurity challenge. His lecture was presented by the center’s multidisciplinary Cybersecurity Working Group, whose members come from across the university.
Lindsay argues that we should think about cyberspace as a global institution, not just a technological infrastructure. As conventional war becomes less likely for the stakeholders in this institution, he believes cyber conflict becomes more likely.
Jon Lindsay holds a PhD in political science from the Massachusetts Institute of Technology and an MS in computer science from Stanford University.