Coming of Age for OSINT Technology: A Conversation with Emily Harding


Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode, with your host, Harry Kemsley.

Harry Kemsley: Hello. Welcome to this edition of Janes' World of Intelligence. My name's Harry Kemsley. I'm the president of Janes. And I have, as usual, my co-conspirator, Sean. Hello, Sean. Thanks for joining me.

Sean: Hi, Harry.

Harry Kemsley: I'm also delighted to tell you that I have Emily Harding with us. Hello, Emily.

Emily Harding: Hello.

Harry Kemsley: Thank you for joining us, Emily. It's great to have you here. For those that don't know Emily, Emily is the Deputy Director of the International Security Program at the Center for Strategic and International Studies, CSIS, where she oversees rigorous scholarship on defense, intelligence and technology issues. Before CSIS, she spent two decades leading responses to a wide variety of national security crises at CIA, ODNI, the White House and the United States Senate Intelligence Committee. Her research now focuses on the intersection of intelligence and technology. Emily, it's a delight to have you here. Thank you so much for joining.

Emily Harding: So happy to be here. I'm a big fan of the podcast.

Harry Kemsley: Somebody had to be. Thank you. So, Sean and I, in recent podcasts, have been talking quite a lot about the fact that we believe, as others do, that OSINT has come of age. We've talked in the past about the fact that open source intelligence can provide a unique context. It can provide indicators and warnings, novel insights, et cetera, et cetera. All of those things I think are now apparent. And I think it's fair to say that technology has been one of the key developments that's enabled this information and intelligence revolution. And it's in that realm that your paper, which I'm going to jump on to in a second, has focused my mind. I mean, for example, just look around right now, as we speak, at the incredible wealth of information, not all of it entirely true, in the public domain around the conflict in Ukraine. To be frank, go back even just a few years, and much of what we have seen in terms of satellite imagery, drone imagery, et cetera, would've been the exclusive domain of classified analysts and their exquisite intelligence capabilities. And yet here we are, watching it almost in real time. Now, Sean and I have spoken for some time about the advantages of open source intelligence and the power that could be exploited. So why hasn't the intelligence community embraced them fully? What has caused them to be so slow? We can come back to those points later, but we've summarized them in previous conversations as the challenges of culture, the challenges of policy, security policy particularly, and also the capability to do it, because we've discussed in the past that open source intelligence is not like other forms of intelligence. There are some unique challenges to be overcome. So capability can be a challenge. And then in January of this year, I, as so rarely happens, picked up a paper called Move Over JARVIS, Meet OSCAR, OSCAR being Open Source, Cloud-based, AI-enabled Reporting for the intelligence community, a paper that you published in January, Emily. And it was one of those rare moments where I read a paper and at every turn of the page I was thinking, "Yes, exactly. Exactly. That's right." And the reason it was right is that it was doing more than just giving me a series of challenges and reinforcing my understanding of the opportunities. It also put forth a slate of actionable recommendations that, as you put it in the paper, could break the log jam, that set of resistances we have to overcome. So, Emily, with that introduction, you'll not be surprised to learn that I am a huge fan of the paper. And by the way, for those listening, I am going to attach the paper to the podcast, so that if you want to read it, and you really should, you'll be able to find a copy to read. But for those terrible people that haven't yet read it, could you do us the privilege of giving us a precis of what it is that you believe could be achieved? And then perhaps we can go from there to find out just how far away we are from actually achieving what you've described as OSCAR. Emily.

Emily Harding: Thanks so much, Harry. I appreciate it. And thank you for continuing to turn the pages there. There are quite a few pages, so I appreciate you actually making it through all of them.

Harry Kemsley: Couldn't stop myself.

Emily Harding: For those who are on the too-long-didn't-read train, there's also an op-ed version on The Cipher Brief. You can get the bottom line in about five minutes. So, to talk about what the paper actually says, it was sparked by an idea of three revolutions in data. The first revolution is the massive amount of data that's available now that didn't used to be. The intelligence community has struggled for 80 years with the idea of open source intelligence, and with what the difference is between the New York Times and something that is open source intelligence. Today, I think, it is the clearest it has ever been that open source intelligence is a wide variety of information. It's everywhere. And if you can harness it, if you can use it properly, it can be revolutionary in the way that you think about intelligence and providing insight. The second revolution has to do with the availability and the security of the cloud. In the past few years, we've gotten to the point where people take the cloud for granted. It's, "Oh, it's in the cloud." That's amazing in its own way, that the cloud exists, that it is as secure as it is, and that it's this wonderful resource that exists out there for processing huge amounts of both classified and unclassified data. The third revolution is AI/ML, of course, and this is a revolution that is underway. There have been wonderful advances already in using AI and ML. And I think that those applications to the intelligence world are still emerging, but the promise is great. And a combination of this huge amount of data available, the availability of computing resources on the cloud, and then AI/ML can really lead to a new way of thinking about intelligence, about insight, and about what is available to both the government and to private sector open source analysts. So, keeping in mind those three revolutions, we went about thinking, what is the vision? What could this be? And of course, the thing that popped to mind for me, I'm a huge Marvel comics fan, is JARVIS, who is Tony Stark's virtual assistant, Tony's right-hand man really. JARVIS understands what Tony needs and when he needs it. JARVIS understands Tony's particular sense of humor, which is a dream but still far out there in the future. My Siri personal assistant still can't understand me when I curse, much less when I ask her to try to find something for me that isn't easily available in text. But if you could take JARVIS, that idea, and adapt it for the current needs of the intelligence community, you could do truly amazing things. I was an analyst for a long time. I was a manager and leader of analysts for a long time. And every time I had to go to an analyst and say to them, "In the next two hours, three hours, I need you to write me a PDB, the President's Daily Brief, on the latest thing that happened in X country," you could just see their face go white, because they knew how much they had to read through to get to the gems of information that were critical for providing the president what he needed in order to make a difficult decision. And the timeframes are so short, and the quantities of data are so huge and growing, that we definitely need some kind of personal assistant, some kind of JARVIS, who can sit next to us and weed out all the chaff and say, "No, pay attention right here. This is what you need to actually focus on." So we went with OSCAR, as you pointed out before: Open Source, Cloud-based, AI-enabled Reporting for the intelligence community.
If you give it a friendly name, it's a little less scary and amorphous. I've already heard many, many folks, my friends in the intelligence community, saying, "Oh yeah, OSCAR. We definitely need OSCAR." And it makes it approachable, and it makes it less of a tall mountain to climb and more like a buddy that's sitting next to you at your computer.

Harry Kemsley: Right. Right.

Emily Harding: So we wanted to go, as you said, with the actionable recommendations. It's far too easy in Washington, and also in London, to admire the problem. "Yep. That's a problem."

Harry Kemsley: We're good at that.

Emily Harding: And then never quite get to the point where you break through the log jam and actually come up with the levers that need to be pulled in order to make this thing a reality. And that's where we wanted to go.

Harry Kemsley: So, I'm intrigued. There's lots you just said there which we've talked about in the past, so I'm not going to repeat back at you what you've just described, and thank you for that precis. But I'm intrigued to now establish just how far away we are from OSCAR. By the way, you should probably trademark that name for when some tech company decides to create something they want to call OSCAR, and it'll be based on your paper. Just saying. I think for me-

Emily Harding: Good advice, pretty much.

Harry Kemsley: For me, understanding the elements of your paper means understanding the actual recommendations you made, so let's go through them. We don't have to label any one of them, but let's go through them and understand how far away we are from actually delivering that. So, if you can, take us through some of the key recommendations in your slate. As we go through, we'll pause between them. And, Sean, we'll talk about how far we think we are from other perspectives that we've seen, perhaps outside the US, in the UK, for example, or perhaps in NATO and Five Eyes countries. Because what I'm keen to understand is, and I got the sense as I was reading it, OSCAR feels reachable. Not entirely, not understanding nuance in jokes and sarcasm, for example, as per JARVIS. But in some of those areas, it felt reachable. And I really want to get a sense of how far away we are. Because if it's not that far, then maybe we should be pushing harder to make some of those things become reality quicker than they might otherwise by themselves. We'll come back to those. So, let's go through the recommendations, Emily. I'll leave it in your hands to take us through in the order that you want. But let's go through those and then agree how far away we really are from them.

Emily Harding: Sure. So, I don't think we're very far. I think that a lot of the technology that could get us 80% of the way there is sitting on the shelf right now. There are two companies that I'll pick on real quick, and they are already looking at natural language processing in a way that is very close to sort of an initial operating capability for OSCAR. They can take a huge amount of information and boil it down to key points. They have a thing that can auto-write a short article. And even though it's not going to be perfect and it's not going to be insightful, it'll give you the bones of the basic information that you need, which you can then take and wrap into a PDB or another intelligence product. There's another company doing something very similar, where they have figured out how to look across huge amounts of data. They started by focusing on the CT realm, drawing data out of multiple different formats into a natural language processing format, and then using that to recognize patterns. They have a feature as well where they can take all that information and auto-generate, basically, a summary. So, in theory, you could take somebody who's brand new to an issue and say, "Read this. Here is the latest two weeks of information boiled down to a 20-page paper." So you spend half an hour reading in instead of two hours reading in, or a week reading in.

Harry Kemsley: Yeah.

Emily Harding: So that's kind of a very initial step towards what we're going for. As far as the recommendations go, to get from where we are now to where we really want to be, where we have a comprehensive capability for intelligence analysts, I wanted to talk a little bit about the culture problem, the risk and security problem, and then the acquisition problem as well.

Harry Kemsley: Yeah.

Emily Harding: As a researcher, I love my methodologies. And one of my methodologies for this paper was that I invited in a group of experts from across academia and tech and government. And I gave them a survey and said, "Which of these problems is really the core of what's getting in our way of making this a reality?" And then I made a key mistake, which was that at the end I put, "It's an amorphous culture problem, and none of these actually captures the whole thing by itself." And of course, the majority of people picked the amorphous culture problem. But that then gave us an opportunity to really dig deep on what culture means. And it turns out that, when we talk about culture, what we really mean is the demand coming from the people on the ground and where that meets the policies that the organization has put in place. And so we took those two things apart. On the culture and demand problem, I mean, we have all been government professionals. And when you're in the government, you know that the press of business reigns supreme. You don't ever have time to really learn or experiment with a new kind of tool that you could use. What you're doing is just trying to answer the mail, and answer it today. So, there's a culture problem. And as a result, we don't have the demand signal coming from below that I think would change some of the policies that are getting in the way. So what are those policies? I don't like picking on the security guys, because they're the security guys and they rule supreme. Also, they are professionals who are working with the best possible intentions to protect our governments and their secrets. However, if you look at their approach, it is very much focused on minimizing risk. Nobody ever got promoted in the security realm for accepting big risks and taking a chance. Cloud can be scary because it's out there. It's not in here with us, it's out there. Somebody else is responsible for having their hands on the controls. The other thing that's scary is the word unclassified. Nobody in the security realm likes to hear that something is unclassified, because that means it's available to everybody and all of the holds are off. And when security folks hear that you want to let a whole bunch of analysts loose on some publicly available data, and let them run queries, what they hear is, the adversary then figures out our priorities. Well, maybe. But that's not necessarily a bad thing. Some of our priorities are so obvious that it doesn't really give anything away for those queries to be happening. There are also ways to obfuscate those queries so that it's not as much of a problem in a security breach as it might appear at first. So that's risk and security. One of our recommendations there is to explicitly accept risk, something like the leadership of an intelligence agency saying, "If 20% of your projects aren't failing, then you're not trying hard enough." Or saying, "We know that there will be some security risk in what we're about to do. We want you to do it anyway." And that provides the explicit top cover that people are going to need. And then the third thing that I wanted to raise is really the wonkiest and the most inside-the-Beltway DC complaint, and that is to fix the acquisitions process. I mentioned a couple of companies earlier who are playing in this space. There are many of them.
And every single one of them, when I talk to them about what they're trying to do, points to the ridiculous hoops they have to jump through in order to contract with the government as the major impediment to pushing these kinds of things forward. The government likes to think, if it wasn't built here, then it's probably not good enough. It also likes to say, "Well, these are the exact specifications on the exact timeline that I need." And the private sector just doesn't operate that way. What we need to do is move to a much more flexible approach, where the government can say, "These are the capabilities that we need," and, as for how you want to go about providing them, we're agnostic. Also, speed. Take the FedRAMP process, the whole hideous process that the government has put in place to verify cloud service providers: 18 months is an extremely rapid timeline for getting through it. And that's ridiculous. Technology moves way too fast to be putting up with that. So, my goal with these recommendations was to shift the mindset from, here are all the things that we can't do, to, here are the things that we can, with really some pretty small tweaks and some courage.

Harry Kemsley: Yeah. Great. So, if I were to precis that, I think what I've heard is, there's a bunch of technology that's ready to go; we have to get our heads around using it. There are a number of policy things where we might need to take some very careful steps and do some explicit top-down thinking. And then there's a bunch of culture issues. And, Sean, I'm going to start with culture, because it's one you and I have talked about before, in terms of how far away we really think we are from changing the culture. Just to get you started, one of the things that I'm worried about with culture is that we have "indoctrinated" our analysts to believe that unless it is assured and it is classified, it isn't usable. We have, for too long, in my opinion, allowed analysts to believe that the only thing that matters is what they see on that TS system they're working with, rather than what they could get very, very quickly and easily, oh, and by the way, that they could share subsequently, another topic for another day, from the open source environment. So, Sean, culture. What's your view? How far away are we from getting the culture thing correct?

Sean: Well, first thing I'll say, Emily, is that I love your paper too. I even take it fishing with me. It is that good. And the reason it's that good is because it actually-

Emily Harding: Wait a minute, Sean, do you wrap the fish in the paper or-?

Sean: No, no, no, no. I read it because I don't get many bites, so it's a waiting game. Anyway, the great thing about it is that it does make some recommendations. We are very good, as somebody said earlier, at admiring the problem, and this actually starts to get somewhere. But back to your question, Harry. On the culture issue, we've got a real tension here between the imperative to act and the ability to change. The imperative to act now on OSINT is there because we are seeing it come of age, like it or not. The Ukrainian crisis, or the really bad things happening there, has catalyzed the commercial world to really adopt some of the techniques and procedures. And the result of that is that we're seeing narrative now in some of the big media which is far more sophisticated and well-informed than it's ever been before. And that, of course, permeates through to the policy makers and the decision makers, who think, "This is really good stuff, why do I need my intelligence organization?" So there's a real imperative in that the intelligence community cannot become irrelevant now. If that's not a catalyst to act, then nothing is. But against that, you've got what's quite a complex issue, really. And I love your three revolutions, Emily, as well: huge amounts of data, the availability of the cloud, and then the AI/ML, where I think there's a little bit of a red herring, because I don't think we're necessarily as positioned as we could be to actually benefit from it. But we'll park that for a minute. So, it's the ability to change versus the actual need to change. Now, I've talked, as you know, about culture till the cows come home, sorry, that's a British phrase. And I always think of it from an analyst perspective. When I was in DIA, trying to inculcate a culture of intelligence sharing, in the back of my mind was always, what's in it for me as the analyst? So it has to give you something. It has to give you an improved product, or make things faster or more efficient. And there has to be a reward for it as well, because, exactly as you said, Emily, it's very easy to get criticized and lose your career for doing something wrong. And therefore, the incentive to innovate and do something different just isn't there. But if you, A, get rewarded for it, and, B, it makes your job easier and better, then you've got a chance. And that comes down to many, many things that you've talked about: policies, of course, and the fact that people do hide behind policies. I've got some friends who are senior security officers as well, and they really know their policies. And nobody takes on policy. It was one of the things that was quite remarkable when I was out there saying, "Right, we need to change the policy." People used to look at me like I had two heads, like, "You can't change policy. Policy is policy." So there's always a cyclical thing: the culture doesn't really want to change, therefore we're not going to try and change the policy, but it's the policy that needs to change to make it happen. So, the culture comes from the top down and the bottom up. I agree with you, Emily, that we need senior leaders not just to say the right words, but to act the right words and take that element of risk, saying, "No, we are going to make this happen." But it's got to come up from below as well, from those very clever young analysts, who are the ones who tend to leave because they're not being given their head to innovate and to adopt new policies.
And, as you've heard me say many times, Harry, the problem we've got is the frozen middle: people who are very, very good at their jobs, who have always done it a certain way because that's how they've been indoctrinated and how they've been trained, and who are not for changing. And they're the target audience that you have to get to, because if you make them really want to do it, and make them champions of it, then everything happens from there. So, I mean, there are lots of other elements to the cultural piece, but that's a really big one. And of course, finally, you've got to get to the person who can say yes and make it so, however high up that is. And of course, that's incredibly difficult, particularly within any big government organization.

Harry Kemsley: I would like to spend more time talking about this culture piece, but I'm not going to let us do that today, because I know we want to talk about other things. But for me, just to finish off on the culture piece: culture, if it's about belief, if it's about making people start to see things differently, we know it's going to take a long time to change. But I really, really want to understand, what are the elements of change that we can pull levers on to start that culture change we talked about? I'm going to park that question for another day, though. Let me move on to something related. You mentioned a couple of times, Emily, in your precis of the paper but also in your discussion of those points, the risk flip, the ability to say, "Yes, there is tangible risk in operating in an unclassified cloud-based environment, but there is huge opportunity. And to not exploit that opportunity is an opportunity cost." I'm precising again what you said earlier. Sean, how far away do you think we are? I mean, Emily has offered a very tangible example there of how we could take security officers to sit alongside counterparts in civilian life. How far away do you think we are, really, from the security community being allowed to do what they should be doing? And then, Emily, I'll come back to you in terms of what specifically they would be looking for if they're going to be sitting in front of a civilian counterpart. What are they looking to find from that conversation? But let's start, Sean, with your view about how close the security community is to actually wanting to do this at all.

Sean: I personally think this is the biggest hurdle, actually, because there are interpretations of the policy by security people who are, by nature, risk averse, and you can't blame them, because it's their job. I think the point you made right at the start of this little section was, you've got to make taking risk a positive thing. So if you say, "Right, we'll change the policy. Don't worry about that," and, by the way, by overtly saying, "We are now taking risk," that puts it out there. Then the people who are making security policy, whether that's accreditation or whatever it happens to be, can go, "I've now got a canvas on which to work," rather than just the old ICDs, or whatever it happens to be, saying, "Look, I'd love to do that, but it's not going to happen." Now, there are more practical elements, and the technical risk, and I think Emily's paper actually mentions this, is nowhere near what people think it is. But as soon as you say the word cloud, people think, "Oh, it's accessible to everybody in every way." Now, technologically, that's just not the case, and closing ourselves off is certainly not the answer. So, I do think it's a big impediment, and it is unfair to lay it onto the security people themselves, because they are only enacting the policies they're given. So it's more the security policy side, but I don't think we're anywhere near that. And when it comes to just sharing between the community, everyone has different protocols. Everyone has different accreditation rules. So there has to be a commonality throughout that community, which in the US would certainly be ODNI. Here, I don't think we have that same unifying organization that you can do it through. Maybe the Cabinet Office, I'm not sure. But that's the level that needs to be addressed.

Harry Kemsley: Yeah. So, Emily, you talked about explicit activity for our security personnel to sit alongside their civilian counterparts. What is it specifically you think they're looking for when they're doing that? I think it's a great idea, but what specifically are they going to go and learn, that would help them understand the opportunity cost of not taking the risk to operate in cloud environments?

Emily Harding: I think it's really a process of demystification. When you are a security officer and you are asked to evaluate the risk of something that you don't understand, I think the very natural human tendency is to be risk averse. If you're being asked to embrace something new and different and maybe a little bit scary, then really understanding what's behind it, what security protections are in place, makes it a lot easier to say yes, instead of just being in a place where it's a lot easier, a lot safer, and probably better for your career to say no. One of the contributors to the report, Sean Roche, who now works at Amazon Web Services but used to be a longtime IC professional, had this great quotation about how the cloud is like sitting in the cockpit of an Airbus A380 or a Boeing 737, as opposed to sitting in a Cessna with a good friend of yours. You'll get in that Cessna with a friend who you trust and who you know knows how to run that Cessna. You'll also get into a 737 with a total stranger, because you know that the capabilities in that plane are excellent, that the professionals who are flying it have been well-trained to handle any number of eventualities, and that you're going to be safe. So, we need to get to a place with the cloud where it's like the cockpit of that 737, where the people who are evaluating risk can say, "Okay. I understand the way that this technology works. I understand the training of the people who are running it. I understand the security protection measures that are in place. I'm willing to board this plane, and I'm willing to take that next leap forward."

Harry Kemsley: Yeah. I agree. I think whether we send our security colleagues out to meet with the cloud providers and their civilian counterparts, or whether we import the commercial cloud provider and their security personnel to sit alongside our security personnel on our systems, is an interesting point to debate. But I agree. That familiarity that you talk about is key, and I do like the analogy of the Cessna and the Airbus. We do climb into things all day long, every day, trains, all sorts of things, that we trust will work. But there's something about the narrative around cloud that, for too long, has been negative enough for people to not trust it. Incidentally, I think there's also an element of blame for the technology community. I have been the victim of technology through my career, many a time, with promises made to me by technology that have been woefully under-delivered. And I think there's therefore a fair amount of skepticism about whether technology can actually do the things that it claims to be able to do, particularly when it comes to security. We fill the news channels with horror stories about cyber crime, and that cannot help our trust in cloud. Nonetheless, cloud does operate at a level that we cannot replicate on our own systems, and we must address the opportunity cost that we are incurring by not using the capability that is out there. Now, there are other things in your paper that I would like to dig into, but with time in mind, I'm going to turn to just one more, which is the urgency for change. Sean and I have spoken with many colleagues from our past. We've had great guests on like yourself, Emily. And the fact that there is a need for change, I don't think, is in debate. I think that's a matter we can be confident is now agreed almost every place we've been. But the rate of change towards where it should be appears to be slower than it should be; there seems to be less urgency than there should be. What can we do, in your mind, Emily, to address that need for urgency and speed? We need to make these changes. What is it that's going to help the decision makers understand the need for that urgent change, and then drive it from the top down?

Emily Harding: Right. Well, you always hope that it's not going to be a massive national security crisis that actually pushes you towards change. Those are, of course, the most effective way to go about it, but sadly not the consequences that we want. I think what's going on in Ukraine right now is actually a case of, never let a good crisis go to waste. This is an opportunity to point out, as you said at the top of the show, just how applicable this kind of technology is, and just what a game changer it can be. I think it's also a great opportunity to point out that there's good OSINT and then there's bad OSINT. There are plenty of armchair intelligence analysts who are putting out really bad hot takes about what's going on on the ground in Ukraine, because they watched a video or they looked at some pictures online, and suddenly they think that they can do what Bellingcat does. There's something to be said for establishing credibility in this space, and really proving your methodology, proving that the information that you're taking in is not only accurate and authentic intelligence, but that you are incorporating it in the right way. You're evaluating it in the right way. You're being rigorous about the way that you put it against other information that you have, and asking questions about what you're not seeing in addition to what you are seeing. And that's a place where the intelligence community has really made its bones. I mean, anybody can watch CNN, but you don't get to be an analyst in the intelligence community just by walking in off the street. There's very rigorous training in how to think, how to question, how to try to set aside your biases, so that you can give the best possible information to the policy maker. And the guys at Bellingcat, I mean, they do amazing work, and they too have built up all of this credibility because they show their work. They demonstrate where they got their information and how they got from point A to point B to conclusion. So I think that this is a real moment to say, "Okay, it's a pivotal time to not only prove the capability, but show the value-add that the tradecraft has."

Harry Kemsley: Yeah, I think that's right. And, Sean, I'll come to you in just a second on that point Emily made there about mis- and disinformation and how we deal with it, briefly. But for me, there are three things you said there which trigger thoughts for me, Emily. First of all, the fact that there is a great deal of information out there that isn't all good. We need to be able to understand what looks like good information and what looks like bad. And that talks to a point we've made in the past, Sean, about the data literacy of the audience, as much as it does about the responsibilities of the open source intelligence provider. Second, you talked about the fact that you should be auditable. You should be able to prove how you came to the conclusions that you have reached. And of course, that is the tradecraft of intelligence in the classified environment. There's no reason why it shouldn't be, and shouldn't be demonstrably so, in the open source, unclassified environment. And third, you talked about, in effect, the fact that if we don't get this right, if we don't allow everybody to see the real power and truth available in open source intelligence, they're going to continue to under-utilize it. And that, for me, would be the greatest travesty. Sean, just before we start to close, can you give us a few moments of your thoughts on dealing with and mitigating mis- and disinformation? We're seeing a great deal of this, of course, in the information campaigns around the Ukraine conflict. I recognize that's a massive topic, but what are your thoughts about how we deal with mis- and disinformation in the open source environment?

Sean: Yeah. As you say, Harry, there's a whole new podcast there, and you hit one of the primary words that I was going to use, and that is developing tradecraft. You develop your credibility by having processes that are repeatable and scalable, but also, as Emily said with Bellingcat, by being able to show your working. So we have a duty to the community to be able to do that. You do see a lot of one-shot wonders, shall we say, that sensationally see something, extrapolate something from it, and get it completely wrong. Now, that's not just within open source commercial intelligence. It happens in the intelligence community too. But you don't last too long that way. So, you develop your reputation and credibility by being all those things, but also by showing your working. Now, we are seeing open source intelligence being used for information and disinformation. But the difference, I would respectfully assess, between ourselves and the Russians is that the Russians will just throw out anything, absolute lies, whereas we will not do that, either commercially or within the intelligence community, because the information is supported by the evidence. That's the tradecraft that exists within the community: you use as many sources as you possibly can, you cross-correlate, you integrate, and you provide your best assessment. That doesn't mean to say it's a hundred percent right all the time, but it means you're going on your best sources. Now, if you use it for information, then that's fine. It's a slightly different use of the same intelligence, but it's still very legitimate. Coming back to the future for OSINT, and to that credibility, you start to then look at the education and the training part of it. One of the debates we used to have in DIA when I was there is, what do we want from the future analyst? Do we want somebody who is very tech-capable, who can code even, who can develop, or who at least understands the power of, or the potential for, artificial intelligence, machine learning, et cetera? Or do you want somebody who's an expert on the geopolitical situation in country X or country Y? Well, of course, if you could have both in one person, and there were a few very talented people like that, but only a few, the answer is clearly both. Now, that comes with training right at the start as well, to make people aware of what the utility is. And then, when you're talking about three to five years down the line, as per Emily's paper, you start looking at, at what stage of the automated loop do you bring the human in, to validate, QC, and make the assessment, the "so what" pieces you've heard me mention millions of times, that add value to the automated stuff. Now, we're a long way from there yet. And I think that's part of the reason this appears so difficult. Are we trying, to use your aircraft analogy, to fly the aircraft while we're still trying to build it? That's a real danger as well, I think.

Harry Kemsley: Well, thank you.

Emily Harding: We're always trying to fly the aircraft and build it at the same time. That's just the way the IC rolls.

Harry Kemsley: Yeah. Whether that's an Airbus or an assessment, that's true. We have to fix the aircraft while flying it. So, Emily, Sean, thank you so much for taking the time. We are out of time and I need to move us on. So, Emily, let me start by thanking you again for, A, writing, and then, B, publishing the paper, Move Over JARVIS, Meet OSCAR, the Open Source, Cloud-based, AI-enabled Reporting for the intelligence community, and for giving Sean something to read while he's fishing, which is always worthwhile, because he's not always catching fish. Thank you for the time in this podcast and your contribution. As I so often do, I'm now going to ask you both to give me one takeaway from this session. And to give you time to think about your one-line takeaway, let me give you mine. For me, of all the things we talked about, I'm looking for something that drives urgency. I'm looking for something that might help us improve one topic we didn't get to talk about, which is the woeful acquisition process we have to labor through to get these things done, which you touched on, Emily. What's going to drive that, from where I'm sitting, is a recognition of the opportunity cost, the things that we're missing, by not engaging with open source. And one of the missions, I guess, that Sean and I have been trying to drive for the last couple of years, through this podcast and other work we are doing, is to demonstrate the value that can be derived and exploited from the open source environment. So for me, the one-line takeaway is, we have to address the risk. We have to take that on head-on, to really see the opportunity. That's my takeaway. Sean, I'll come to you next. And then, Emily, if you'll allow me, I'll ask you to finish us off with your one-liner. So, Sean, what's the one thing you want the audience to take away from this session?

Sean: You always do this to me, because I wrote down the word urgency. However, because that's now taken, I would say direction. Somehow or other, we need to get to the level that can say yes and drive things forward, to make it a priority and say, "Make it so." There are lots of champions, armchair champions, and I'm not trying to be pejorative there, people that do get this at very senior levels within the intelligence community and at the policy level, but for whom it's not a priority, because they're fighting the daily fight and everything that goes with it. So we need a champion who can say, "Make this so," and then, eventually, it will happen.

Harry Kemsley: Yeah. Thanks, Sean. Emily, the final word?

Emily Harding: Now, OSCAR is the parachute, and it's right there. You can reach out and grab it. But I think that it's easy to forget, when you're sitting in the bubble of the intelligence community, that you do have competition, and that the competition is all around you. And the platform of credibility, the preserved place of respect, that the intelligence community is standing on is burning and shrinking as the open source intelligence world grows. And I think it's one of those times where you've got to look around and say, well, we could keep sticking our fingers in our ears and saying, "We're still the IC and nobody can do what we can do." That's true. But you also need to be embracing all of the tools that are available to you, so that you're providing the best possible insight to the policy maker, because that is the mission. So, embrace the open source world, jump off the burning platform and into the waiting arms of this technology that is there to help.

Harry Kemsley: Perfect. Thank you so much. Emily, Sean, I'll draw a close there. Thank you very much for your contribution. An excellent, excellent session, really thoroughly enjoyed it. Thank you so much.

Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify or Google Podcasts, so you'll never miss an episode.

DESCRIPTION

In the latest episode of The World of Intelligence podcast, we speak to Emily Harding, Deputy Director and Senior Fellow, International Security Program at the Center for Strategic and International Studies (CSIS), about the latest technology in OSINT. In particular, we cover the recent report "Move Over JARVIS, Meet OSCAR: Open-Source, Cloud-Based, AI-Enabled Reporting for the Intelligence Community", which is available to download here: https://www.csis.org/analysis/move-over-jarvis-meet-oscar

Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


AVM (ret’d) Sean Corbett CB MBE MA, RAF

CEO and Founder, IntSight Global Limited

Emily Harding

Deputy Director and Senior Fellow, International Security Program, CSIS