Mis/Disinformation in Open Source Intelligence

This is a podcast episode titled "Mis/Disinformation in Open Source Intelligence." The summary for this episode is: In this episode we explore the impact of mis- and disinformation in open source intelligence with Di Cooke, CSIS International Security Program Visiting Fellow and KCL War Studies Doctoral Candidate.

Speaker: Welcome to The World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.

Harry Kemsley: Hello, and welcome to this edition of Janes World of Intelligence. Sean, you and I have spoken on numerous occasions about coming back to topics that we've discussed in previous episodes, and one of them, mis- and disinformation in the open source environment, is frankly well overdue for a review. So Sean, thank you for joining us as ever. Today I think we're going to go back and actually start answering many of the questions that we've generated around mis- and disinformation.

Sean: Good to be here, Harry, and as you said, really important subject that we cover almost on every single podcast we've done so far.

Harry Kemsley: Yeah. How many times have we said we'll come back to that and never have? So today we're going to do just that. So today our guest, Di Cook. Hello Di.

Di Cook: Hello. Thank you for having me.

Harry Kemsley: Pleasure as always. Di Cook is a research fellow in the international security program at the Center for Strategic and International Studies focusing on emerging technology. Given her role, not surprisingly, her areas of expertise include emerging technology policy within a defense and intelligence context, AI governance and risk, open source intelligence, and deception and counter-deception efforts in the digital space, which is our focus for today. Di has worked on policy-relevant research and activities at the intersection of technology and security across academia, government and industry. For example, most recently Di was seconded to the UK MoD to engage in AI policy development, where she built the MoD's assurance guidance materials to direct and inform its approach to AI operationalization in accordance with its AI ethical principles. A matter some listeners may remember we looked at with Dr. Amy Zegart recently. Di also has considerable experience in academic research with Cambridge, St Andrews and King's College London, where she's currently undertaking a PhD in war studies. I wonder, Di, where you find the time?

Di Cook: Ah, thank you. It's a pleasure to be here.

Harry Kemsley: So Di, what we'll start with is just a couple of definitions. I'll get Sean to remind us all what we mean by open source intelligence. I think that's important. Then I'll come to you, if I may, to give your interpretation, your definition of what we mean by mis- and disinformation. So Sean, your four points please for the open source intelligence definition.

Sean: Yeah, thanks Harry. I know we've covered this several times before, but I think it is important. There's a lot of work going on within the intelligence community right now to define what they mean by OSINT, and there are as many different definitions as there are organizations, which in itself is illustrative because it reflects the variety of different sources and applications of OSINT in the community. But for me, and I think we agreed for Janes as well, I think we've agreed that it really has to include four elements. The first of those is that it has to be derived from information that is freely or commercially available to all. Secondly, it has to be derived from legal and, as we discussed previously, ethical sources and techniques. And then the final two really are common to all intelligence capabilities. One, that it must be applied to a specific problem set or requirement. And finally, and probably most important, it has to add value, the so what.

Harry Kemsley: Yeah, perfect. So Di, during many conversations around open source information from which we derive intelligence insights, one of the most frequent concerns about the use of the open source or publicly available domain is that it is littered with mis- and disinformation, which, as we've said, we're going to talk about today in terms of how we might mitigate it. But how do you define, Di, what we mean by mis- and disinformation?

Di Cook: Yeah, Harry, that's a great question. So both mis- and disinformation consist of false content or false information, but the key difference between the two is the intent behind it. So misinformation you would define as false information that is created and spread without necessarily an intent to harm or deceive. So there's not a specific intent to do something malicious with it, whereas disinformation has that malicious intent included. So that's sharing false information with deliberate intent to cause harm in some manner, whether it be physical, mental or otherwise.

Harry Kemsley: That's very good. So Di, is it fair to say that both mis- and- disinformation can have the same effect? The outcome could be the same in that somebody is not understanding things as they actually are, but actually one is driven by a specific intent to create that outcome, whereas the other is a mistake. It's almost a secondary effect of somebody just not understanding something.

Di Cook: Yeah. And that's actually a really key point as well: even though the intent behind how mis- and disinformation might be shared is different, both can have significant impacts on public opinion in things like political elections, crisis response, conflict and healthcare considerations, and both can cause significant harm whether or not it's on purpose.

Harry Kemsley: Yeah, sure.

Sean: Yeah, that's a great definition, the intent. I just wonder, Di, and I'll be really interested in your views here, is there almost a sliding scale there? Because you might intend to change people's behaviors, but you might think that's for the right reasons even though you are not actually telling the truth. And the classic case on that might be the government response to the COVID crisis, where for altruistic reasons it wanted to change the dynamic of the population, or certainly the way they behaved, but wasn't necessarily being as open with the facts as it could have been. So for me, I used to look at disinformation and misinformation as completely different, but is there almost a sliding scale? It's quite a complex and gray area.

Di Cook: Yeah, I would completely agree. I think, especially if you're looking at the impact, there's definitely a sliding scale there. So examples of misinformation that can cause significant harm would definitely be the misinformation around the height of the pandemic in terms of ignoring vaccines and thinking they're not necessary or that they're a conspiracy. Another example would be misinformation spread after the Boston Marathon bombing, accusing a number of missing individuals of being responsible. And so in many cases, misinformation can be equally as harmful as disinformation. On average, though, because disinformation has that intent to harm and that focus on trying to do harm, you could argue when comparing the two that disinformation is more likely to cause harm in the end because of that, but it's not a binary categorization.

Harry Kemsley: Yeah, I suspect we could probably spend the next 25, 30 minutes talking about the definition, but we'll move on. Let's accept for now then that the basis of our definition is that misinformation is a misunderstanding that's created without intent, whereas the intent with disinformation is to actually achieve an outcome of misunderstanding by an audience. And that those two definitions pivot around the intent. One is accidental, the other is more purposeful. Is that fair?

Di Cook: Yep, I would definitely agree with that.

Harry Kemsley: All right, so let's pick up then, Di and Sean, two parts of how we might approach mitigation. Now, traditionally, in years gone by, Tradecraft, the intelligence Tradecraft, was there to ensure that we did the very best we could to find "the truth" of a situation, to enable decisions to be made based on what we provided the decision maker in the analysis we were doing. So Sean, I'll come to you in a second in terms of the Tradecraft that might enable us to mitigate mis- or disinformation, but Di, with your expertise and background, you'll expect me to then pivot back to you and say, so how do we use artificial intelligence and other advanced technologies to help us mitigate this real problem of mis- and disinformation?

Di Cook: Of course. Before we delve into how technology is serving counter-disinformation efforts, it's worth first exploring how it is also being employed to enhance disinformation capabilities. One type of technology that I would like to explore in particular during this conversation is artificial intelligence, in this case specifically machine learning used to create fake digital media. You'll probably know it better as deepfakes, and this is AI-generated or synthetic media: images, audio or video of situations or things that didn't happen or don't actually exist. And this could be anything from creating a photograph of someone who isn't real to cloning the voice of a real person and making them say something they never actually said in real life. Most people have likely heard about deepfakes of celebrities or famous figures, such as the 2018 deepfake video of Obama that was widely circulated or the more recent deepfake videos of Tom Cruise on TikTok. However, as synthetic media becomes more sophisticated and easier to use, we're seeing an increasing number of instances of it being employed for disinformation purposes specifically, many of which look to be state sponsored. For example, we've seen a number of instances of deepfakes popping up around the invasion of Ukraine. The most well known one so far has been a deepfake video of President Zelensky ordering Ukrainian troops to surrender, which was circulated all over social media last March. It was debunked relatively quickly as it wasn't a particularly good fake, but that's not always the case. Later, in June, mayors of European cities were duped into believing they were holding video calls with Mayor Klitschko because of a much more sophisticated live deepfake that an impersonator was using during the conversation. And many of them admitted that they didn't realize it wasn't Klitschko until the impersonator actively started behaving in ways that just didn't make sense, and so that's a much more sophisticated deepfake. And as this technology continues to get better and the barriers to employing it get lower, we expect to see synthetic media being employed much more widely in disinformation campaigns in a variety of ways, which is pretty concerning.

Harry Kemsley: So Di, you said in your piece there that machine learning algorithms driven by data give an ability to create artificial media of all different types. And you mentioned that there was an ability for it to create images where the human eye couldn't tell the difference between the synthetic and the real. Is that true?

Di Cook: Yes, and that's increasingly becoming the case. So while this technology really is less than 10 years old, and has been commercially or publicly available for roughly less than five years, we are now reaching the point where there are types of synthetic media that humans can no longer discern from real, authentic media, such as human faces. And that has been proven in recent academic studies, and we are just seeing increasing sophistication in this type of technology.

Harry Kemsley: So I'm wondering, Sean, before I let Di finally answer the question I asked a minute ago, which is how do you actually mitigate this problem? When we come to talk about Tradecraft in a second, if a human eye can't see the difference in images, what has Tradecraft got to do to detect the anomaly, which is actually a synthetic image? Answer me that in a moment, but let me get back to Di and the actual question we did ask a few minutes ago. So Di, given what you've said in terms of the definition and the development of the technology to create this synthetic image, this deepfake, how do we mitigate that? How do we deal with it? If a human eye can't see it, how do we deal with it?

Di Cook: This is a great question. We can generally break down technological responses into prevention, detection, and mitigation. Most currently fall under detection; these would be called automated machine detection models, which examine the image, video or audio fakes to detect anomalies called artifacts, such as an accidental third arm or no face at all. Or there can be more subtle artifacts such as minor warping, texture or out-of-sync movement, which cannot be seen with the human eye. Experts in industry, government and academia are all working on trying to improve automated machine detection models, from DARPA's media forensics program to the 2019 deepfake detection challenge that was jointly run by Facebook and Amazon along with a number of universities. The second category, prevention, also has some technology-based responses. The predominant one would be AI adversarial attacks, where AI models are used to hide noise in existing digital media that's invisible to humans but that will disrupt the efforts of AI technology so that it cannot use that information to generate synthetic media. The final category, mitigation, has broadly two principal technological responses that seek to mitigate the potential harm or impact a deepfake might have. So data provenance, also known as content provenance, focuses on ensuring the authentication of real digital content rather than detecting fake content. And it does this by recording the content's origin and relevant metadata, ensuring this information is accurately updated as the content is shared or altered. So you can think of it like blockchain for your digital media. Meanwhile, we're already seeing widespread use of AI or digital technology tools to identify synthetic media after it's been shared online, such as automatically tracing its source and spread across platforms, cross-referencing against other data and so on. Really these tools aren't specific to synthetic media, but are utilized more broadly for any kind of disinformation content. So as you can see, there's a myriad of different approaches being taken to try to combat the use of synthetic media for disinformation purposes.
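
For readers who want the "blockchain for your digital media" analogy made concrete, here is a minimal sketch of the content provenance idea, assuming nothing beyond the Python standard library and not depicting any specific provenance standard: each time a piece of media is created, shared or altered, a record of its hash, the action and the actor is appended to a chain of records, so anyone downstream can check that both the content and its history are intact.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_hex(data: bytes) -> str:
    """Hash the raw media bytes so any alteration changes the fingerprint."""
    return hashlib.sha256(data).hexdigest()

def append_provenance(chain: list, media_bytes: bytes, action: str, actor: str) -> list:
    """Append a provenance record tied to the previous record, so the full
    creation/share/edit history of the content stays verifiable."""
    prev_hash = chain[-1]["record_hash"] if chain else None
    record = {
        "content_hash": sha256_hex(media_bytes),
        "action": action,            # e.g. "created", "shared", "edited"
        "actor": actor,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_record_hash": prev_hash,
    }
    # Hash the record itself (including its link to the previous record),
    # chaining the entries together like a lightweight blockchain.
    record["record_hash"] = sha256_hex(json.dumps(record, sort_keys=True).encode())
    return chain + [record]

def verify_chain(chain: list, media_bytes: bytes) -> bool:
    """Check the chain is internally consistent and matches the current media."""
    prev = None
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        if rec["prev_record_hash"] != prev:
            return False
        if sha256_hex(json.dumps(body, sort_keys=True).encode()) != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return bool(chain) and chain[-1]["content_hash"] == sha256_hex(media_bytes)
```

In practice the records would also be cryptographically signed and anchored by the capture device or publishing platform, which is exactly why Di stresses that platforms and tool vendors have to participate for provenance to work at scale.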

Harry Kemsley: Okay. Well, it sounds to me, Di, like some of these technological detections of deepfakes, or indeed, as you described, the authentication methods across different platforms and so on, have the tenor of being emerging rather than well established, and that therefore the confidence that technology can detect what technology has created isn't an absolute. It isn't a given that we can detect these deepfakes, these synthetic media, by other technology. Is it fair to say that we're nowhere near a hundred percent? Or are we actually getting quite close to being pretty good at detecting these things?

Di Cook: I'd actually say we're getting worse. Just as it's getting harder for humans to detect and discern between what is real and what is fake, because these AI models are becoming so much more sophisticated, easier to use and more ubiquitous throughout the digital environment, it is becoming harder to use automated detection effectively. That's both because the models used to develop the detection systems are either becoming out of date or not applicable in certain circumstances, and because it's really hard to convince people, and to convince, let's say, platforms, to integrate them into their systems across the board. So where one social media platform, for example, might have a more robust detection regime, another might have zero. And you can see how disinformation could slip by on, let's say, the second platform and then move on to spread through the wider environment from there.

Harry Kemsley: Wow, okay. So we're getting not closer to the solution, but actually further away. So Sean, let's go back 10 years, before some of these technologies Di tells us about were starting to emerge and "all we had available to us" was Tradecraft. What are your thoughts then about what you just heard from Di in terms of the synthetic media, the disinformation with intent that's permeating into the open source? How do we deal with that from a Tradecraft perspective, in your experience?

Sean: Well, this is going to be an extreme challenge. I think that's a given. But particularly if you look at Tradecraft, one of the things that we always, always bang on about is the efficacy of the source. So the source material, making sure that it's assured, as we talk about. And of course you never, or you try not to, rely on one source; you try to cross-refer between two, three, four, as many as you've got basically, to validate what it is you're looking at. Now, if you haven't got the confidence that the source is actually true, then that takes one away from you. So I think the way to resolve this, and Di has mentioned it herself that we're nowhere near yet, is to develop the Tradecraft to accommodate and to actually utilize this AI and the machine learning to say, you cannot trust this source, or you've got to weight the source and say, "Okay, in that case, what else do we need to look at?" Now, of course, then you get into the challenges, as we've always had, of circular reporting, of cross-referring, and we used to get it all the time, where you get two different sources and you work out they're actually the same because it's been repeated or somebody has quoted somebody else and it comes around. So you've got to get to the origin of that source as well. So bringing it forward to today and the challenges that Di has articulated: first of all, find out where that source is from. So if the deepfake is emanating from China or Russia, then you've probably got a good start that what you're looking at is not necessarily true, but it might be. And then you've got to look at the data itself and see if it correlates with other data that you've got, and then you've got a chance. But it's back to the difference between information and intelligence. Intelligence is the best analysis you can come up with against the information you've got available, but you've got to weight that information. And as we get more and more of this disinformation coming across, and we're seeing it almost exponentially increasing, we are going to have to rely on the AI as it becomes mature. So it's absolutely critical to not only develop the AI so it becomes mature enough to trust it, but also to formally include it into Tradecraft and data analytics and the standards, to make sure that we've A, considered it, and B, if the information is not accurate, discarded it as well. So it's incredibly challenging and I don't think we're anywhere close to resolving it yet.
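
To make the weighting and cross-referring Sean describes a little more concrete, here is a deliberately simple, illustrative sketch (not Janes' or anyone's actual methodology): reports are first collapsed to their ultimate origin, so circular reporting can't masquerade as corroboration, and the remaining independent sources' assessed reliabilities are then combined into a rough confidence score. The reliability figures and the independence assumption are placeholders for analyst judgment.

```python
from dataclasses import dataclass

@dataclass
class Report:
    origin: str         # ultimate origin of the reporting, once traced back
    reliability: float  # analyst-assessed probability (0..1) the source is genuine

def corroboration_confidence(reports: list[Report]) -> float:
    """Toy fusion of multiple reports supporting the same claim."""
    # Collapse reports that trace back to the same origin: repeats of one
    # story are circular reporting, not independent corroboration.
    best_per_origin: dict[str, float] = {}
    for r in reports:
        best_per_origin[r.origin] = max(best_per_origin.get(r.origin, 0.0), r.reliability)

    # Treating the remaining origins as independent, the chance that every
    # one of them is wrong is the product of their individual failure chances.
    prob_all_wrong = 1.0
    for reliability in best_per_origin.values():
        prob_all_wrong *= 1.0 - reliability
    return 1.0 - prob_all_wrong if best_per_origin else 0.0

# Two genuinely independent origins versus three copies of the same wire story.
independent = [Report("satellite-imagery", 0.8), Report("local-stringer", 0.6)]
circular = [Report("wire-story", 0.6)] * 3
print(corroboration_confidence(independent))  # ≈ 0.92
print(corroboration_confidence(circular))     # 0.6
```

The point of the toy is Sean's: a claim repeated three times from the same origin scores no higher than a single report from that origin, while genuinely independent sources compound.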

Harry Kemsley: Hold that thought there about Tradecraft for some discussion in a minute about the so what for the intelligence community. We'll come back to that. Di, what I think I'm hearing is that we've seen the emergence of synthetic media, deliberate mal-intent disinformation put into the open source environment. We've seen technologies being developed to try and detect that. And it sounds like we've got into a sort of counter-counter, counter-counter situation where things are developing in a sort of war of AI, trying to find the best technology to counter the current developments of the most recent AI and so on. It feels to me like we're in a bit of an arms race there in terms of disinformation AI and disinformation detection AI. Is there any light on the horizon that gives us some hope that we might actually get to a place where intelligence analysts, for example, will be able to use AI to detect, and reliably detect, deepfakes and disinformation? Is there any light at the end of the tunnel for that?

Di Cook: So I would love to say that there's a light at the end of the tunnel, but really, with the direction technology is advancing on both the deception and counter-deception sides, it's highly unlikely that technology by itself is going to be reliable enough to be the sole solution for authentication of data. Just to be clear, I am absolutely advocating that we continue to pursue technology-based efforts like automated detection, as they might yet lead to a more effective solution in the future that we've simply not thought of. An alternative metaphor we could use is that, increasingly, detecting AI-enhanced disinformation like synthetic media is going to be like climbing Mount Everest. If you just walked out tomorrow with only the shirt on your back and started climbing, you're not going to succeed. You absolutely require the fitness and skills, or the Tradecraft, to even start climbing, and the specialized equipment, or the authentication technology, will certainly help, but even then it's going to be incredibly hard to make it only under your own steam. You're going to need the support of others to get to the top. And it's this last part, the analyst being supported by other experts and stakeholders, that I really do want to emphasize as being especially key for countering disinformation like deepfakes. Analysts are not going to be able to do it all on their own, nor should they bear the burden of responsibility by themselves. Any kind of effective or sustained response is going to require other actors becoming more active participants: the AI tech companies, social media platforms, government, and even digital consumers themselves will all need to take more responsibility for their roles in enabling the spread of disinformation, if we hope to combat this effectively.

Harry Kemsley: I definitely hear the sort of ecosystem of players, stakeholders, that would need to come together. My worry is that the incentive for them to do so is not yet evident, well, not to me anyway. But Sean, I think where I'm sitting, from what I've heard from Di, is that this is a very challenging environment. The arms race of AI is ongoing. We're not doing very well. In fact, I think you said a moment ago, Di, that the artificial intelligence technology solutions are going backwards in their efficacy, not forwards. So Sean, it seems to me that the blend of whatever technologies we do have at our fingertips as well as good Tradecraft, the combination of those two things, is probably our best defense, our best mitigation against mis- and disinformation.

Sean: Sure. Yeah, I think that's right. And it goes back, though, to the use of multiple sources: that's both the source where it's come from, but also the type of source. Imagery intelligence is a little bit harder to spoof, if you like; I'm talking about satellite imagery, that sort of stuff, SIGINT, RF and all the rest of it. So you've got to layer on those different sources of intelligence. But as I said previously, to formally include that in the considerations I think is important. Deepfakes are obviously a real challenge, but we wouldn't always go to social media and videos to find the source of anything. So I think we've got to balance that. And I would say this, wouldn't I? But I don't think we should underestimate the experience and the knowledge of the analysts themselves. The human brain can still do an awful lot that artificial intelligence can't. And over time you get to feel that there's something not quite right there, particularly if you've already been down that route and you've proved it by saying, "Well, we looked at that source last time, we got it wrong, why did we get it wrong?" So actually, and this is as appropriate to algorithms as it is to the intelligence process, if you've got a good algorithm, over time it will prove itself and you'll trust it. If you have a bad algorithm, you'll discard it straight away. And it's the same with the intelligence process. If you've got a source that clearly isn't turning out to be right, then you're going to discard it.

Harry Kemsley: Yeah, just as an aside, although I think it's probably slightly relevant, I do sometimes worry that we spend an awful lot of time talking about machine learning and, if there is effective mis- or disinformation out there, I worry that we don't spend enough time talking about machine unlearning. In other words, when we've found something to be not true, we need to unwind that out of all the systems it's been percolated into. But maybe we can park that to one side. Di, I'm curious to know, given your experience and expertise in this area, have you seen any evidence of this sort of ecosystem of actors coming together to start to battle with this problem? Because I think what you've said to me, in short, is this is really, really tough. We're going backwards in efficacy, but if we could bring together multiple stakeholders, it's possible we could build a technological and Tradecraft-based system across many different platforms, different stakeholders, to start to defeat some of this. Do you get any sense that's actually happening? Have you seen any evidence of that starting to come together at all?

Di Cook: Yeah, so there's a lot of really interesting counter-disinformation work coming from a number of non-state actors in the intelligence ecosystem. But I'd say it's worth highlighting some of the collective efforts being conducted by civil society and the civilian OSINT community in particular. That these communities are especially good at collaboration isn't surprising, as it's always been a central part of their ethos. They've engaged in a number of different initiatives, including efforts to archive data being recorded in violent conflict, such as video, images or audio, so it can be used for referencing later on and to debunk disinformation on these topics. There are live fact-checking efforts via crowdsourcing as disinformation is being spread across the online environment. There's been a huge effort around the sharing and education of Tradecraft skills amongst open source researchers to improve investigations, as well as the development of a number of different publicly maintained automated tools to help support verification of information in various ways. And you can really see how the initiatives by these communities could be directly beneficial to an analyst's work, such as helping them verify a piece of open source information by being able to compare it against publicly maintained databases or being able to run it through some of these openly available authentication tools. At a strategic level, the initiatives by these and other non-state actors are worth reviewing by the intelligence community as a whole when considering its own role in counter-disinformation activities. For example, some worthwhile questions might be: what lessons could it learn to better inform its own approach to detecting and mitigating disinformation? How might it be able to leverage the ongoing work of others to supplement or complement its own efforts? Or how could it position itself to support the work of others that it might not be as well suited to accomplish? So there's a lot to unpack there, but that's, I think, a topic for potentially another episode.

Harry Kemsley: Yeah, Sean, that just shrieks at me, the analyst needs to come out of their vault occasionally and step into the open source environment, that ecosystem that I just talked about. Sean?

Sean: Yeah, I think it's even more than that, and Di hit the nail on the head there. I think this is an area where the commercial world, industry, actually can and has to take the lead because of the agility it has, because of the ease, and I'm not saying the intelligence community and defense doesn't do this, but the ease with which it can bring in new technologies, and of course for its survival. I mean, you might argue that it's the same with the intelligence community: you've got to be relevant to survive. But we all know open source intelligence organizations, and there are a few out there right now who are maybe more prominent than they should be, that are just spouting absolute rubbish because they've read it in the newspapers. Now they won't last, they won't survive. Maybe with some of the mainstream media they will, but they just don't have the efficacy to do that. So it really behooves organizations such as Janes to say, "Right, we have now applied all the rigor, all the technology we can, to create the assured piece of it." And I don't think the intelligence community can do it on its own. It definitely can't, because it's just got too many other things to focus on, and the nature of the way it procures things. So I think there's not just a role for industry; I think it will have to take the lead on this.

Harry Kemsley: So the intelligence community is going to need to be receptive to that lead from the commercial sector. And that's not always easy. I mean, it's not always easy to step into the outside world and listen when you are hemmed in from doing so. Di, I'm going to squeeze in one more question just because I'm dying to ask it. It feels like a question that needs to be asked, and then we'll start to wrap up. Is there a difference between our ability to detect what we've been talking about mostly today, things like deepfakes, synthetic media, versus disinformation that's perhaps more text-based or more based on things that we can actually attack with other tool sets? Is there any difference in the media types in terms of our efficacy to detect and beat them?

Di Cook: So I would definitely say that it's harder for people to recognize AI-generated fake text than fake images or audiovisual media. I'm sure you've seen how realistic the text generated by the new AI chatbot ChatGPT has been. And if you haven't, you should definitely go check it out. Currently, automated detection tools to identify text created by ChatGPT are actually pretty accurate. But as the underlying technology continues to advance, and as new models are created and used, machine detection of this text is likely to become increasingly difficult, in a very similar way, actually, to how the automated detection of synthetic media has. And so at the end of the day, it's really going to come back to good Tradecraft techniques for authentication and approaching AI-generated fake text in the same way you might approach non-AI-generated text.
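
For readers curious how such text detectors tend to work under the hood, one common (and imperfect) signal is statistical predictability: text sampled from a language model often looks unusually "unsurprising" to a similar model. The sketch below, which assumes the Hugging Face transformers and torch packages are installed, scores a passage's perplexity under GPT-2; it illustrates the general idea only, not the method any particular detector actually uses, and low perplexity on its own is weak evidence.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small open language model to act as the "judge" of predictability.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average how surprised GPT-2 is by the passage; machine-written text
    often (though not always) scores lower than comparable human prose."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

sample = "Open source intelligence draws on publicly available information."
print(f"perplexity: {perplexity(sample):.1f}")
```

Detectors built on signals like this degrade quickly as newer models appear, which is exactly the trajectory Di describes for synthetic media detection.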

Harry Kemsley: Well, if that's the case, I think there's a couple of takeaways, Sean, we want to get to in terms of the so what for the intelligence community. And just while you collect your thoughts, Sean, for the intelligence community message: Di, the question I always ask at the end of these podcast episodes is the one takeaway you want the audience to take away from this topic. Whilst you get your thoughts collected around that, before I go to Sean for the so what for the intelligence community, my takeaway from this, the one takeaway that I would like the audience to remember, which I hope isn't going to steal the thunder from either of you by the way, is that I had assumed that technology creating disinformation, or even misinformation for that matter, could be relatively easily detected, and that if it wasn't detectable by the human eye, as with synthetic media, it could be detected by machines. The sense I've got from this conversation is that that's actually far from true, and worryingly so, which for me means we really, really do need to be very, very open to a multi-discipline approach, a large stakeholder group, to ensure we get to "the truth", for fear that we otherwise focus on what is not actually true. So Sean, again, I'm going to come to you last. I'm going to go to Di. What is your one takeaway you'd like the audience to have taken from this session around mis- and disinformation, Di?

Di Cook: Harry, I think you stole my takeaway.

Harry Kemsley: I'm sorry.

Di Cook: It's an incredibly important takeaway though, so I just want to re-emphasize that. I mean, this technology is here, it is rapidly advancing so much more than expected, it is incredibly hard to detect, and it's only going to get harder from here on out. And so I cannot overemphasize how important it is for us to, A, recognize just how sophisticated this technology now is, and then B, also understand that, as you were saying, a multi-stakeholder approach is really going to be the only effective solution here. And therefore it's key for all stakeholders within the broader intelligence ecosystem to think about what advantages they have to bring to the table and how they can work with others to try to collectively push back against disinformation.

Harry Kemsley: Perfect. Thank you, Di. Sean, the final word?

Sean: Unusually, I'm going to make two points, and I think the first point to make is that this is not just an IC-specific problem. We live in what I call a post-truth world. There's no question about it. So every element of national power, national security, will have to consider this, because the only way you can support decision-making is by getting as close to the truth as you can about what's actually happening and what's going to happen next. And that's the segue to the second piece: if the intelligence community wants to remain relevant, in line with all the other challenges it's got to deal with, it's going to have to really focus on this disinformation piece specifically, because otherwise the intelligence it provides, the so what, is just not going to be trusted, and people will go their own route and make their own minds up against information that is clearly not assured. So it's a really important one, this.

Harry Kemsley: Yeah. Well look, Di, what can I say other than a huge thank you for helping us revisit a topic that's been enduring through many podcast episodes: the topic of mis- and disinformation, which you've certainly shone a light on for me, not necessarily a bright light, one that I'm a bit worried about, I won't deny. But thank you so much for your contribution and your time today. It has been a very, very interesting session. Thank you.

Di Cook: Thank you so much for having me. I really enjoyed our conversation.

Harry Kemsley: So did I. Sean, thank you as ever, and thank you to the listener. We'll speak again soon. Thank you. Goodbye.

Speaker: Thanks for joining us this week on The World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts, so you'll never miss an episode.

DESCRIPTION

In this episode we explore the impact of mis- or disinformation in open source intelligence with Di Cooke, CSIS International Security Program Visiting Fellow and KCL War Studies Doctoral Candidate.

Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


Sean Corbett

AVM (ret’d) Sean Corbett CB MBE MA, RAF

Di Cooke

CSIS International Security Program Visiting Fellow and KCL War Studies Doctoral Candidate