Recently, we chatted with Dr. Tim Stevens, a lecturer in Global Security at King’s College London. His most recent publication, ‘Cyberweapons: an emerging global governance architecture’, discusses the already-existing structures in place that oversee the use and regulation of offensive cyber capabilities. Our Communications Manager, Kate Dinnison, discusses with Dr. Stevens what constitutes ‘weaponised’ computer code and the Chinese view of internet sovereignty, among other topics. You can follow him on Twitter @tcstvns and read his blog at https://assemblingsecurity.wordpress.com.
KD: Firstly, tell me a bit about how you found your place in academia and how you would define your field of research, because I don’t want to try to define it for you.
TS: I came to academia through a rather circuitous route. I had a previous career as an archaeologist; that was my first degree. I worked in archaeology for 10 years in the field. I was a field archaeologist doing excavations both in the U.K. and abroad. I was also a stone tool technologist, so I used to look at flint tools and artifacts from thousands of years ago. That instilled in me an interest in technology, in ancient technologies. But I also got very interested in information technologies, so I decided I wanted to go back to college to study that more extensively. I got caught up in the security and conflict relationship between information technology and politics. I did a PhD at King’s and ended up teaching at King’s as well, and for the last ten years I’ve been looking at cybersecurity. What I do now really is think more about the global aspect of cybersecurity rather than the technical aspect. And by the global, I mean international politics: how information technology and security affect the way that states interact, the way that global governance operates in that space with respect to the internet, and lots of issues surrounding those two main areas of research.
KD: That leads perfectly into my second question which is related to the article you recently published. So obviously all eyes are generally on Russia when you’re talking about changing the current, as you put it, global internet sovereignty architecture. But you just published an article on China’s view of Cyber Governance in Politics & Policy. And I was wondering if you could explain a bit about your assessment of their views and their intentions.
TS: I think what we’re seeing at the moment is potentially the beginning of what a lot of scholars have been suggesting for years, which is that when we talk about the global internet, we shouldn’t get too excited about the idea that it’s going to flatten traditional political hierarchies, or that it’s going to produce some form of transnational governance automatically, just because the internet exists. These same scholars have argued for a long time that what we may be witnessing, what we’re about to witness, is a fragmentation of the global internet, roughly along national, sovereign, territorial lines. The recent resurgence of this term internet sovereignty, or what’s sometimes called cyber sovereignty, is exactly what these scholars have been suggesting: we’re seeing countries attempting to throw up borders in cyberspace, if you like, roughly coterminous with their territorial borders, and therefore exert control over the internet in a much more complete and total sense along sovereign lines. So when the Chinese talk about internet sovereignty, we’re not entirely sure quite what it means yet, which is the point of the article, but it’s very much about trying to exert sovereignty at particular points and lines in the internet that don’t actually exist in a physical fashion. The internet does cut across borders, but the Chinese are trying to develop the idea, as are the Russians, as indeed are many Western countries as well, of how to exert control over the internet within their borders: how to control what comes in, how to control what goes out, how to control what happens within national cyberspace.
KD: Again, that’s a perfect segue to talk about the current debate in the U.K. After the attacks of last month, Theresa May came out with a statement saying the internet must be regulated and we must find a way to get rid of these safe spaces for terrorists to communicate, etc., etc. What nuances is the U.K. debate missing, for those who read this in the Daily Mail and don’t really understand the opposing sides? And going off of that, should the brunt of the responsibility, as she said, be placed on these social media companies, or should it be somewhere else?
TS: There’s a lot of issues packed in there. The first thing to say, clearly, is that any regulation of the internet is difficult. The internet developed primarily as a fairly lightly regulated space, driven mainly by private actors, corporates and the like who, by nature of being high-tech companies, have tended to be lightly regulated, because governments don’t always quite know how to regulate them, or because they’re seen as great economic drivers and therefore we don’t want to regulate them. And the tech companies know this. The other interesting thing about the U.K. example is that in 2008, then Labour home secretary Jacqui Smith said precisely the same thing when she said that the internet is not a no-go area for governance. This is not the first time we’ve been here in the U.K. Theresa May is articulating the same thing that Jacqui Smith, one of her predecessors as home secretary, was. Jacqui Smith’s comments were in the context of exactly the same debate – about terrorism, about online radicalisation, about internet terrorism, if you like. And we haven’t progressed an awful lot further. It is not clear precisely how you would go about these measures short of really cracking down on any form of internet content or expression that you deem problematic. And that notion in itself is problematic in a democracy.
Ostensibly, we do have, if not in constitutional terms, at least in international legal and human rights terms, the right to freedom of speech and expression on the internet or wherever it happens to be. And while I understand the impulse to put the onus of responsibility on social media companies, because these days they are effectively publishers of content as much as they are just platforms for content, I think it’s going to take a much more cross-government, cross-sector approach than that. And what worries me about this is that there’s actually not a lot of public debate about the issue. Maybe that’s because everyone knows intuitively that the internet is so difficult to regulate that whatever governments suggest social media companies do simply won’t work. Or maybe people don’t care. I really don’t know. Lots of these supposedly technical issues tend not to attract much public attention, but this is not just a technical issue, it’s a political issue. If you can choose to restrict freedom of expression by one group of people, how do you stop it being applied to another? I think it’s a really thorny issue for government, and I haven’t seen an awful lot of public consultation on it. All governments like to think big and talk big, but it remains to be seen what sort of concrete measures the U.K. government is actually going to put in place.
KD: The next one is related to the recent assessment of Crash Override. I was reading your article on cyberweapons a little bit, and I was wondering if any of these recent attacks, in your mind, qualify as such, given that idea of intentionality and harm?
TS: The whole term cyberweapons is absolutely fraught, because when you use it, it brings connotations of military hardware, of national intent, of them being somehow strategic. So I try to use the term very, very sparingly, but it is a term that’s being used. In direct answer to your question, no, I would not. I would suggest that this is malware. The WannaCry example may well be something to do with North Korea; a lot of people think it is. In which case, there is intent there, in terms of creating disruption. But what’s the strategic goal? There doesn’t seem to be any kind of clear political aim to releasing malware like that on the internet. It may have just been a test. It may have been to disrupt. We simply don’t know. But I hesitate these days to call many things cyberweapons unless there’s a military, perhaps intelligence, context.
KD: I got the feeling that many people were comparing Crash Override to Stuxnet, saying that this was the second public occurrence of something of this kind, of something this far-reaching. They’re saying it’s a dry run for something larger, perhaps an attack on American infrastructure.
TS: It really depends on how you define weaponry. And there is no international legal definition of weaponry. When you use the term weapon, it comes loaded with all manner of connotations and resonances with conventional weapons, and of course nuclear weapons too. I think there would be a case for calling targeted malware weaponry, but whether I agree with it or not is moot. I’m not going to stake my house on it.
KD: This is more for my personal curiosity. In doing some research for Demystify, and for your course as well, there are so many fantastic code names, operation names, hacking aliases and things like that. Do you have a favourite you’ve come across over the years?
TS: I’m quite a fan of Moonlight Maze. I know that Thomas Rid has done an awful lot to unpack precisely what happened and so on, and he’s done brilliant work. But it still has this mysterious quality, this early history of internet war, if you like, of espionage, of intelligence. And when I hear that name, it resonates in so many different ways when we look back 20 years. And we’re thinking now again about the Russians and precisely what they’re up to. And what the Americans are doing, because that quite often drops out of the conversation. It’s all about the darn Russkis, when we’re forgetting of course that the main centre of expertise in cyber operations is American, not Russian. All these things come to mind when I think about Moonlight Maze.
KD: And our tools are coming back to bite us!
TS: Yes they are, and you can thank the Shadow Brokers for that.
KD: Last one – what are your go-to blogs, sites, podcasts and Twitter pages to keep up to date on all of these cybersecurity matters?
TS: I think the best one, even if I don’t agree with him all the time, is Stewart Baker.
KD: Is that the Steptoe Cyberlaw Podcast?
TS: Yeah, it’s that one. I think it’s because they have such a weight of expertise when they’re discussing these issues. And they’re deeply embedded in the security establishment as well. They can get anyone they want to talk about anything they want; they have that kind of draw. They’re quite hawkish in many respects, but they really cut issues open and analyse them forensically, and sometimes come to rather surprising conclusions. It’s great to hear people doing that kind of forensic, very intellectualised, but practically focused work. So I’d definitely recommend the Steptoe blog and podcast.