Tackling the terrorist use of the internet


Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Terry Pattar.

Terry Pattar: Hello, and welcome to this episode of the Janes podcast. I'm Terry Pattar, I lead the Janes Intelligence Unit. I'm joined on this episode by Adam Hadley, the CEO of QuantSpark. Adam's also involved in a really interesting initiative that I wanted to talk to him about on this podcast, the Tech Against Terrorism program. Adam, I'll get you to introduce yourself, talk a little bit about your background, how you got to this stage, and what you're doing with open source intelligence, and then we can come on to talk specifically about Tech Against Terrorism, because I think it's really something that a lot of people who could get a lot of benefit from it probably aren't aware of. So, to start off with, how did you get to where you are now, and how do your two roles dovetail, working at QuantSpark but also working with Tech Against Terrorism?

Adam Hadley: Hi Terry, thanks so much for having me on the podcast today. Really delighted to contribute. So I run a data science consultancy called QuantSpark that's focused on commercial problem-solving using data and data science. But I also run a not-for-profit initiative called Tech Against Terrorism. My interest is in applying data in practical ways and having impact, and Tech Against Terrorism is a really good example of this. Tech Against Terrorism came out of 2016, actually, when the UN Counter-Terrorism Executive Directorate started a very small project looking at the terrorist use of the internet, and the UN wanted to focus on the opportunity to develop a public-private partnership. So Tech Against Terrorism is a public-private partnership, and what this means in practice is that we focus on working with democratic governments, the tech sector, and civil society, because we recognize that terrorist use of the internet isn't purely the responsibility of one type of organization. Long gone are the days when government alone could solve these challenges, and similarly, while the tech sector has lots of capability in many ways, the tech sector alone can't tackle this either. So primarily our job at Tech Against Terrorism is to help bridge that divide and to focus on practical, pragmatic approaches so that together we can disrupt, even eliminate, the terrorist use of the internet. But of course, we've got to recognize that we're only really focusing on the online element of this, and terrorist content is a function of terrorists and violent extremists in real life who are producing this content. So we see ourselves as disrupting terrorist use of services, whether that be social media or messaging apps or file-sharing sites, whatever it might be. We really roll up our sleeves, and we pride ourselves on focusing on disruption in a way that sometimes governments can't do and the tech sector doesn't know how to. And we support the tech sector, big and small companies alike, to figure out how to go about this. Because it's so complex, isn't it? The definition of terrorism and terrorist content. It's a wicked problem. It's really hard to define what this problem is, and it's even harder then to come up with solutions. And certainly we don't lay claim to knowing how to solve all of it, but what we want to focus on is a really small element, which is understanding how terrorists and violent extremists use online services. What can we do to stop them, or to make it more difficult for them, to increase the friction? Because certainly what we don't want is terrorists thinking that they can get away with using the internet in any which way they want. That's a victory for them: boasting about taking over a small platform, or boasting about posting content on a particular large social media platform. For them, it's a battle. It's a virtual battle. And thank goodness, no one dies in this virtual battle, but it's certainly one that we've got to engage with.

Terry Pattar: That description of it as a wicked problem really hits the nail on the head, because I think that's really misunderstood. Whenever you see the news coverage, and also members of parliament talking about this issue, it sounds very oversimplified. They seem to think that tech companies should be fully aware of everything that's going on on their platforms and should be able to get rid of it, which, in my view, is just not understanding the scale of the problem. I guess that's also reflected in the interactions you've had with some of those tech companies: it's so hard and so challenging because all of those platforms are set up and designed for the free movement and sharing of information, and they're not necessarily designed for getting rid of information. So how do you see that developing, in terms of those platforms understanding how to identify this content and get rid of it before it enables those groups to celebrate the fact that they're sharing it online?

Adam Hadley: Well, certainly there are two kinds of extremes here [inaudible]. One extreme is: oh, it's too complex, there's nothing we can do, it's impossible. And the other position is: oh, well, tech is amazing, it can do everything, AI this, AI that. The answer [inaudible] is obviously in the middle, right? So it's about breaking the problem down. If we're thinking about this as a wicked problem, the only way we're really going to solve it is by breaking it down into small chunks, and crucially then having a measurement and evaluation framework so that we can demonstrate success. This is not to say that we only do things we can measure, but it is really helpful. And I think it's important that all of us in the counter-terrorism community ensure that we have very clear objectives and key results so that we can actually say: look, this is successful, this isn't working, let's switch focus in this particular way. So really, what we focus on at Tech Against Terrorism is, I guess, more methodological, in the sense that we want to focus on really practical things that will make a difference. Because if we try and lean into the whole debate of how we define hate speech, well, there are some amazing minds working on this, but are we ever going to solve that? Probably not. Only this week I saw a survey of members of the British public asking what people feel the tech sector should do more about. The resounding answer was that the tech sector needs to do more about online abuse. The problem with this, of course, is that everyone thinks abuse is different things, and probably thinks everyone else is being abusive. So there are some issues here. So instead of tackling that, which is certainly beyond our brains, what we want to focus on is content and activity that's blindingly obviously affiliated and associated with terrorists and violent extremists. The perfect is the enemy of the good in this case, and the policy community can sometimes get really excited about the complex, academic, intellectual, ambiguous problems. Because, well, of course it's interesting, right? To think about all of that complexity. But in practice, this is about focusing on things where we can make a difference. So for example, we know that terrorists use all forms of technology: for strategic communications, for operations, for really tactical things as well. There's a variety of use cases of various technologies. So for us, what's key is trying to map and understand that, and then establish where we and others can make a difference. And as I said right at the beginning, for us this is a battle, right? We see this as someone needing to take the fight to terrorists, so that they know that someone is on their case, that someone is monitoring what they're doing and, crucially, intervening and trying to stop it. Because of course there's always some complexity here in terms of the ethics, right? Because OSINT typically doesn't seek to influence the adversary. I mean, that's probably a key element of intelligence, right?

Terry Pattar: Yeah, it's about being observational and seeing what's out there, ideally without them knowing that you are observing them.

Adam Hadley: Sure, and that has so much value, of course. And there are loads of examples where it's absolutely critical that OSINT, and all sorts of analysis connected to it, is secret and confidential. However, it's also important that we do something about it, and striking that balance is really important, because there is this sort of intelligence dilemma here, isn't there? The extent to which you want to collect information versus then intervene. And when there are thousands of researchers from all manner of institutions with different agendas and priorities, it can become really confusing. And we see a number of strange phenomena where academics might share enormously detailed screenshots or documents about how terrorists are doing this, that, and the other, and that has some merit in some cases. But a lot of the time, terrorists then pick this up and will use that information. So the key thing we always try to focus on at Tech Against Terrorism is ensuring that everybody who participates in this understands that we're dealing with a really sophisticated adversary, and I daresay there could well be terrorists and violent extremists listening to this podcast. Hopefully not many, but, I mean, it's really [crosstalk].

Terry Pattar: Yeah, it'll be out there. So yeah, potentially.

Adam Hadley: We've got to assume that the environment we're operating in is one where we have really sophisticated adversaries who will shift and change their approaches. And we see this with big tech getting better, a lot better, at automated removal of certain types of content. Certainly there's always room for improvement, but there's been a marked shift in the way big platforms deal with this since the initiation of Tech Against Terrorism five years ago, when there were really woefully inadequate measures in place for automating removals, and also for processing referrals and content removal requests. We're really in a different position now, but as a result of that, terrorists and violent extremists have shifted, and hopefully that's something we can come on to in a few moments, in terms of the fact that actually it isn't [inaudible] as such. But what we obviously see is adversarial shift, because unless we're dealing with the root causes of terrorist use of the internet, that activity is going to move elsewhere, to some extent.

Terry Pattar: Yeah, that's really interesting, and I definitely want to come on to that. But just before we do, I wanted to ask a little bit more about how Tech Against Terrorism does what it does. How do you do that research, and who's involved? I take it it's a large collective, and you're getting a lot of collaboration between different organizations.

Adam Hadley: Tech Against Terrorism is a pretty small team, to be honest. At the moment we're six, and we're trying to scale to about 10. We've got three areas of focus. The first is OSINT: developing a forensic understanding of how terrorists are using various services, generally to inform our understanding of this, but then in specific detail to figure out which platforms are being targeted, how we can get in contact with them, influence them, and support them in dealing with this. That OSINT is the foundation of what we do, along with academic literature reviews and so on, because this is a really intellectually challenging area and things are changing all the time, as I'm sure you've experienced as well, Terry. The second is best-practice knowledge sharing, mainly with platforms, because there's an assumption sometimes that tech platforms will know the difference between this terrorist group and that terrorist group. Well, why would they? This is quite niche knowledge. So we focus on that kind of knowledge sharing and coaching and mentoring. And thirdly, we recognize that, given this is a tech problem, if we're just considering terrorist use of the internet, the only way we're going to scale a response is by building technology. So we have our own dev team, our own team of developers and software engineers, and they focus on building tools, building datasets, and providing that kind of technical assistance. In terms of the open source intelligence we do, and I'm sure this is familiar to many of your listeners, this is about first of all understanding the precise details of which platform is being used and why, recording this in various ways, and ensuring that we're on top of what's happening. So this is both structural monitoring, looking at designated groups and generally how they're using the internet, but also, after various offline attacks and events, having the capacity to scale up and do deep dives into particular things, because in many cases there's significant threat to life, and we see it as our responsibility to report that to the authorities when we think there are threats to life. The point to stress here is that over the past few years we've developed a tool called the Terrorist Content Analytics Platform, the TCAP. The first phase is now live and working, and essentially this tool helps our OSINT analysts scale their work. We search for activity relating to designated terrorist organizations on the internet. We assess the extent to which we believe that that group, that activity, that channel belongs to a designated organization, and then the TCAP will scrape and crawl, pull that content in, and alert the platform that's being used, and this is all automated. So, if you like, it's a sort of hybrid system that combines the best of human open-source intelligence analysis and assessment with a tool that then scales it. And as a result, we are alerting platforms to tens of thousands of pieces of content, and the vast majority of them then remove that content within a week or two of being informed. And it's that type of solution that we really want to advocate for because, as I was saying before, the perfect is the enemy of the good. Let's just focus on really clear things that we can do, but cognizant, of course, of adversarial shift.
And we're seeing a lot of this in terms of the use of various platforms. It's therefore important to stress that we've got to understand why this is happening in the first place, what the drivers of the activity are, in order to anticipate what will happen next. The terrorist use of the internet isn't just big platforms, which I think is a misconception sometimes. Certainly terrorist groups want to get their message out there, and of course they will always be drawn to big platforms that have a really broad audience reach: Facebook, Twitter, whatnot. And when they get content on there, they're like, great, we've beaten Facebook, and they see keeping content up there as a victory. But in reality, there are a number of layers beneath that, and small platforms are a really important part of the ecosystem for terrorist actors. Usually, because they're so small, they're not aware of that activity, or they don't have the capability or the capacity. And that's where we come in, in terms of providing practical support.
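
To make the "human assessment plus automated alerting" idea concrete, here is a minimal sketch of what such a pipeline could look like. This is purely illustrative, not the actual TCAP implementation: every name, threshold, and URL below is hypothetical, and a real system would also archive content and handle platform-specific notification channels.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ContentItem:
    url: str                   # where the content was found
    platform: str              # hostname of the hosting platform
    group: str                 # suspected terrorist organization
    analyst_confidence: float  # human assessment (0..1) that the item
                               # belongs to a designated organization


# Hypothetical designation list; in practice this would be drawn from
# the UN, US, UK, and EU designation lists.
DESIGNATED_GROUPS = {"Islamic State", "Al-Qaida"}

# Hypothetical threshold: automation only acts on strong human assessments.
ALERT_THRESHOLD = 0.8


def should_alert(item: ContentItem) -> bool:
    """Gate automation on the human judgment call: only designated
    organizations, and only at high analyst confidence."""
    return item.group in DESIGNATED_GROUPS and item.analyst_confidence >= ALERT_THRESHOLD


def build_alert(item: ContentItem) -> dict:
    """Build the alert payload that would be sent to the platform's
    abuse contact (by email or API in a real system)."""
    return {
        "url": item.url,
        "platform": item.platform,
        "group": item.group,
        "alerted_at": datetime.now(timezone.utc).isoformat(),
    }


def run_pipeline(items: list[ContentItem]) -> list[dict]:
    # The scaling step: everything after the analyst's assessment is automated.
    return [build_alert(i) for i in items if should_alert(i)]


if __name__ == "__main__":
    queue = [
        ContentItem("https://smallhost.example/post/123", "smallhost.example",
                    "Islamic State", 0.95),
        ContentItem("https://smallhost.example/post/456", "smallhost.example",
                    "Unknown", 0.40),
    ]
    for alert in run_pipeline(queue):
        print(alert)  # only the first item clears the gate
```

The design point the sketch tries to capture is the hybrid split Adam describes: the attribution judgment stays with a human analyst, while the repetitive work of packaging and dispatching alerts at scale is automated.
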

Terry Pattar: That's really interesting, and it actually leads us on to what I wanted to ask about next. I read the recent report that Tech Against Terrorism put out, the Q1-Q2 report, one of your quarterly reports that come out regularly, in which you talk about some of the highlights of the activity you're seeing. And one thing that really struck me was this dynamic you just touched on there, which is the shift, perhaps, from a lot of activity being on bigger platforms to now using smaller platforms, or even the distributed web, the dWeb; we've talked about that on this podcast in the past, and it certainly seems to have grown. It'd be great to get your thoughts and a bit more insight on this, but you seem to identify that there's a lot more activity on individual websites that these groups are setting up and running for themselves. That really caught my eye, because when I first joined Janes in 2008 and started looking at online extremist activity, we were just on the cusp of the social media platforms really taking off. Up to that point, a lot of the activity was on individual websites, and so we were going around looking at all of these different websites, blogs, and discussion forums, and that was where the activity was concentrated. And of course in the last decade we've seen the huge rise of the social media platforms, the ones you've mentioned, and that kind of content has grown on those as well. But maybe you can talk a little bit about what you're seeing currently, what shifts and changes are happening now, and perhaps what you see happening next.

Adam Hadley: I think the key thing to stress here is that if we're going to do a good job with this, we've really got to be thoughtful about understanding the drivers of the use of the internet by terrorists in the first place. So why do terrorists want to do this? There are a number of use cases we put forward. One of my favorite books in counter-terrorism, actually, is What Terrorists Want. I don't know if you've read it; it's really good, and it focuses on the rationale for terrorist activity in the offline world, but I think a lot of it applies online as well. So the first thing I would ask is: what do terrorists hope to achieve by using the internet? And there are a number of things, of course. There's strategic communications, as I said earlier, operations, and tactical activity, and the choice of technology is driven by that objective. I think that's a really effective frame through which to try to understand this. In terms of what we're seeing regarding the shift from platform to platform: in the early days, on platforms like Facebook and Twitter, it was really pretty easy to upload content, and it would stay there for a very long time and no one would notice. But of course over the years those big platforms have got much better at this. That's partly about policy, partly about enforcement. And there are a number of contentious areas here, of course, in terms of some platforms deliberately not having policy in particular areas. Twitter and the Taliban is a really good example. Enforcement-

Terry Pattar: Very tightly, yeah.

Adam Hadley: Yeah. I mean, enforcement is a separate issue to that, but certainly over time we've naturally seen this: it's rational for terrorists to move to smaller platforms and to share content in parallel over as many of them as possible. And what we see is that terrorists, for a number of years now, have essentially tried to share content over as many smaller platforms as possible simultaneously in order to evade content takedown. Essentially, if you think of this in terms of network analysis, they don't want too much centrality, because if there is, the chances are that the content will be nipped in the bud and won't be able to propagate across the internet. So parallel sharing is quite a rational response from terrorists and violent extremists. I'd say most of what I'm saying now applies to violent Islamist extremists, incidentally; the extreme far right have a very different approach, and their TTPs differ quite significantly. But certainly, it is a rational response from a violent Islamist extremist to try to do parallel sharing. What we're seeing now, of course, is that the TCAP was essentially designed to automate and accelerate the process of alerting smaller platforms to this content. And I'm sure there will be adversarial responses to this, in terms of having the content put out more quickly over a larger range of platforms. But I think more worrying is that in some ways we're seeing terrorist use of the internet going back to the late nineties, in terms of what we call terrorist-operated websites. We're aware of hundreds of these, literally hundreds. It seems absurd for so much effort to be focused on removing a few pieces of content on the smaller and larger platforms when terrorists of all persuasions find it so easy to put up gigabytes of material on their own websites. I mean, like isis.org. I'm joking, I don't think isis.org exists, but you know what I mean. There are loads of examples of these, and I'm kind of loathe to reference them, right? Because actually [crosstalk] done about them-
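
As a purely illustrative aside on the network-analysis point above (hypothetical data; assumes the third-party networkx library is installed), this small sketch shows why parallel sharing across many small platforms defeats a single takedown, whereas a single centralized host does not:

```python
import networkx as nx

# Model content-to-platform hosting as a graph: edges link a content
# item to each platform hosting a copy of it.
centralized = nx.Graph()
centralized.add_edge("video_v1", "big_platform")  # one copy, one platform

parallel = nx.Graph()
parallel.add_edges_from(
    ("video_v1", f"small_platform_{i}") for i in range(10)
)  # ten simultaneous copies across small platforms


def copies_surviving_takedown(graph: nx.Graph, removed_platform: str) -> int:
    """Number of platforms still hosting the content after one takedown."""
    return sum(1 for n in graph.neighbors("video_v1") if n != removed_platform)


print(copies_surviving_takedown(centralized, "big_platform"))   # 0: content gone
print(copies_surviving_takedown(parallel, "small_platform_0"))  # 9: content persists
```

The low-centrality strategy means no single node is a choke point, which is why alerting many small platforms quickly, rather than one large one, matters for disruption.
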

Terry Pattar: Don't necessarily want people to go and look at them.

Adam Hadley: No, definitely not. Definitely not. And that's one of the challenges in communicating this and explaining why we think it's a threat. Terrorist-operated websites are a real problem right now, and very few people are talking about them, certainly in the policy world. The reason for this is that governments don't know how to deal with them, because of the legal complexity around the infrastructure and also understandable concerns about human rights and freedom of expression. Actually, it's quite a severe thing to take an entire website down, and it's right and proper that there are checks and balances in place. Having said that, if a website is actually run by a designated terrorist organization, and they've been paying to host it and paying for the domain name registration, then there's all sorts of legal liability for platforms there, and there's no international consensus on how to deal with this. What's more, there doesn't seem to be much political will to find that consensus, either.

Terry Pattar: Does it also, though, make it harder for those in your position who are trying to track and understand this activity, given that dispersion across all these terrorist-operated websites, rather than that centrality around a handful of large social media platforms, or even a few smaller ones?

Adam Hadley: Well, it certainly does. And I guess there's a conflict of interest here, isn't there? We all want to learn as much as we can about how terrorists are using the internet, and therefore there's a tendency to not want the content removed. And that's an ethical challenge, isn't it? Because we're all, and I'm sure your teams as well, exasperated when you've spent months trying to track down a particular group and you've got loads of great insight into their thinking, and then all of a sudden the content disappears. I'm not really sure what the answer is there. It's a real dilemma.

Terry Pattar: I think the worry there is always: where has it gone, and where does it go next? And trying to stay on top of it, because ultimately that content is often designed to influence people and to help them generate more publicity, all those things. You mentioned strategic messaging, all those aims you talked about, and they'll just keep on going; they'll keep trying and trying. And it seems, and I guess this goes for any field or any type of activity, that it's a lot easier to set up a bunch of websites than it is to identify them and where they are. We're talking about the kind of content you mentioned, which isn't near the line of debate around whether it's terrorist content or not, but well across that line, content that really shouldn't be online. Actually identifying and removing all of that is a lot more laborious and time-consuming than creating and standing up that content again somewhere else.

Adam Hadley: Hmm. Well, exactly. So I'd say the question shouldn't be, can we eliminate terrorist use of the internet, but to what extent do we want to do so? What's the equilibrium we want to find, and what's the threshold of concern? Because if we go too far, we could make the problem much worse. So what is that stable equilibrium point? There isn't a lot of discussion about this: in terms of tackling the terrorist use of the internet, just how far do we want to go? Government policymakers will say, we don't want any of this. Well, actually, that's not going to be possible, for all the reasons that we've gone into. So it's an interesting challenge, and it's often extremely difficult to figure out what to do for the best. But my recommendation is always to ensure that we are really thoughtful about the OSINT we're doing, and about when we should actually be reporting content, when we should be referring it to the police, for example. So I think there does need to be a conversation about what those protocols are. And a lot of this is about collaboration. The challenge with OSINT, in my view, is that it's not really clear who is responsible for acting on it, and essentially we've got a really confusing mix of actors doing OSINT. We have institutions like yours, we've got governments, private companies, NGOs. It's a real mess, and there's obviously no coordination here. So in the absence of coordination, sometimes nothing happens. And then you have a really big problem, like ISIS taking over a small messaging app, which has happened. I won't name the app; you're probably familiar with the one I mean. At one point we estimated that almost two-thirds of the app's user base was posting IS content. And because it's hard to coordinate, no one knew who was going to handle it, who was going to deal with it. So we often try our best to lend support, to reach out to platforms, and just get stuff done. But it is tremendously difficult because there are so many different people focused on this.

Terry Pattar: And like you said, some of these smaller platforms may not have the resources either to really investigate and stay on top of all of this for themselves, or even to respond, if they're being inundated with people telling them there's all this content on their platform and that they're not acting on it.

Adam Hadley: Yeah [crosstalk]. Well, this is it, it's really laborious. So when thinking about how we can support small platforms, it has to be about thinking through the challenges they have. They've got really limited time, they probably want to focus on developing new features, and they don't want some weirdo talking about ISIS on their platform. They just don't want to hear it. It's bad news, and it's disruptive to the growth of their technology. It's never a profit motive, by the way, that stops or hinders tackling the terrorist use of the internet. It's usually that small platforms are focused on developing the tool, and they're not necessarily thinking about various nefarious uses of it. Having said that, increasingly the extreme far right is building its own apps and websites, in particular where we're talking about video content. YouTube has been pretty strict in removing quite a lot of offensive, potentially harmful content that resonates with the extreme far-right community. As a result, quite a lot of this content has drifted to alt-tech, to newer video-sharing platforms. I won't name them, but we all know which ones I'm talking about. And then what we see is that platform A kicks off lots of these accounts, and then they go to platform B, and then a new platform pops up trying to monetize this. So with the extreme far right, there's a very different dynamic, and in terms of engaging with small platforms, I think it's important to differentiate our approach. In those cases, what we often do at Tech Against Terrorism is say: look, we're not fighting a culture war here, we just want designated terrorists off your platform. And we refer to the UN list and the US, UK, and EU lists to help us with that. So again, this goes back to our guiding principle, which is that the perfect is the enemy of the good.

Terry Pattar: Yeah, that's really interesting. And I realize we're almost up against time, but I wanted to just sort of maybe ask you, how can people access the work you're doing at Tech Against Terrorism, and who is it for? Because obviously you don't necessarily want to become part of the problem and give access to it to everybody in terms of all the content you're aggregating and the information you're seeing. So how can the analysts who are maybe working in government agencies or in tech platforms, et cetera, who might want to get access to it, how can they get access to it? And where should they go for that?

Adam Hadley: Well, we have a weekly newsletter with quite a lot of information, typically summarizing the changes we're seeing in terms of regulation and the trends regarding terrorist use of the internet, so that should be the first port of call. If listeners would like more information about the detailed work we do, I'd ask them to get in touch with us directly: email us at contact@techagainstterrorism.org, or message us on Twitter, and we'd be very happy to support. We have a regular rhythm of pretty detailed reporting that we share with tech companies and with democratic governments to inform the threat picture. We also do a number of specific reports and research, usually for larger tech platforms, to help them understand the specific problem they're facing. And in our team, we can stand up quite a lot of teams focused on those sorts of projects. So if anyone would like more information about what we do, please do get in contact; we don't broadcast this sort of stuff. We get up to quite a lot, to be honest, and we tend not to tell many people about it unless it's necessary. We'd be really keen to hear from you, and we're here to help. We see it as imperative that we're all trying to find the right balance between understanding the terrorist use of the internet and stopping it, and this requires a lot of conversation and discussion. We also appreciate that many of these things are secret and confidential, and you can't always explain things because of obvious operational equities that might exist. But nevertheless, we'd always encourage a degree of openness, and that might mean sharing information for deconfliction. Because what we wouldn't want is to have an entire site removed by speaking to the domain name registrar, or the host, or whatever it is, if actually that could have quite significant operational impact. Certainly, though, there's no mechanism at the moment to share this information, and even if there were, it would be fraught with ethical challenges. So I think this has to be about trying to talk to one another and connect, and we'd really love to hear from you. We'd be very happy to share the research and analysis we've been working on.

Terry Pattar: That's superb. Yeah, thanks, Adam. I think that'll be really useful for a number of people in our audience. Thanks again for your time, Adam. It's been great.

Adam Hadley: Thank you.

Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts, so you'll never miss an episode. Uncover the threat landscape with assured and interconnected threat intelligence from Janes, covering military capabilities, terrorism and insurgency, country risk, and CBRN. Support your threat and capability assessments and enhance your situational awareness with Janes threat intelligence solutions. Find out more at janes.com/threat.

DESCRIPTION

In this episode we speak to Adam Hadley on understanding and countering terrorist use of the internet.

Adam Hadley is the CEO of London-based data science consultancy QuantSpark and Founder of the Online Harms Foundation which implements Tech Against Terrorism, a public-private partnership launched by the global tech sector and the UN in 2017. Adam is a leading commentator on the role of analytics and data science in society and business, digital transformation, social change through technology, and supporting the tech sector in tackling the terrorist use of the internet. 

Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


Adam Hadley

CEO of QuantSpark