OSINT Ethical Considerations with Amy Zegart


Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.

Speaker 2: Hello and welcome to this edition of the World of Intelligence by Janes. First of all, welcome to Sean, my usual co-conspirator. Hello Sean.

Speaker 3: Hi Harry.

Speaker 2: Good to see you again today. I thought we would move the conversation about open source information, and the intelligence we can derive from it, forward by looking at a fascinating topic, and one that I believe is increasingly important: the ethics of using the open source information environment for deriving intelligence. Now, before I introduce our esteemed guest, let me start by saying that today's topic, the ethical considerations in open source intelligence, is like so many others we could tackle, one of those where we might achieve little more than skip like a stone across a pond over the real issues to be discussed. But if we achieve nothing else in the next few minutes than to help you, the listener, consider the matter of ethics and encourage further study, then we will have achieved a great deal. So, ethics in open source intelligence. Today I am absolutely delighted to welcome our guest, Dr. Amy Zegart. Hello Amy.

Speaker 4: Hi Harry. Thanks so much for having me on.

Speaker 2: It's an absolute delight to have you on. Thank you for coming. For those that don't know Amy, she is the Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution and Professor of Political Science at Stanford University. She has a number of other roles: she is also a senior fellow at Stanford's Freeman Spogli Institute for International Studies, Chair of Stanford's Artificial Intelligence and International Security Steering Committee, and a contributing writer at The Atlantic. As I suspect you can guess from those titles, she specializes in intelligence, particularly US intelligence, emerging technologies, national security, grand strategy, and global political risk management. She has been featured by the National Journal as one of the 10 most influential experts in intelligence reform. She is the author of numerous papers, journal articles, and five books, the most recent of which is Spies, Lies, and Algorithms: The History and Future of American Intelligence, released earlier this year. Drawing on her decades of research and hundreds of interviews with intelligence officials, Amy provides a history of US espionage from George Washington's Revolutionary War spies to today's spy satellites, examining numerous fascinating insights, including how fictional spies, and I guess James Bond might be one of those, are influencing real officials. A fascinating, fascinating bio; I'm looking forward to reading the book. Amy, again, welcome.

Speaker 4: Well thanks. But I have to say, when you say decades of research, it makes me sound like I actually lived through George Washington's use of intelligence.

Speaker 2: Yeah, I'm not that old, but some people would wonder if I was. So Sean, let us get started as usual by making sure our listeners are clear about what we mean by the open source information environment and how we define it and then how we derive intelligence. So let's talk about open source information first. Can you help us define that?

Speaker 3: Yeah, of course, Harry. So for me, open source intelligence, and I know it means different things to different people, but predominantly, and certainly the way that you look at it through Janes, I believe, has four components, two of which are specific to open source, and the other two apply to any intelligence, frankly. So the first is that it has to be derived from information that's freely or commercially available to all. So anybody can get hold of it if you've got enough money, or even if you haven't got enough money, but you've got to be able to get hold of it freely. The second, which is really the hub of what we're going to talk about, is that it's got to be derived from legal and ethical sources and techniques. So that straightaway, for me, rules out false personas, going into the dark web, et cetera, et cetera. But we might want to explore that one a little bit. And then the other two elements are that for it to be intelligence at all, it's got to be applied to a specific problem set or a requirement that has been set, and it has to add value, the so what that I talk about so frequently. So in a nutshell, that for me is what open source intelligence is all about.

Speaker 2: Great, thank you Sean. So I think, as we've discussed on previous podcasts, the fact that technology has really enabled access to the open source environment, and its exploitation, is actually the underpinning of this conversation. Because what I've seen over the last period of years is that we've become used to the idea that open source might be foundational, that it might be able to provide us with context, for example; those things are not necessarily "actionable". They are a foundational understanding, they give us the context, as I've already said. But now you are starting to move into the realms of increasing exploitation of open source, so you start moving towards the more actionable, more operational perhaps, use of open source. And with that, I posit, you get closer and closer to the need for a more ethical perspective on your use of that intelligence source, in the same way as, for example, with human intelligence, where we have a series of ethical governance rules that we need to understand. So Amy, from the research that we've seen, it appears that there is an increasing recognition that ethics is important, but it seems to me that there is far less clarity on what specifically the issues are and how they could be addressed. But before we get to that, perhaps I could ask you to explain what you think ethics means in the intelligence context, and then perhaps we can start from there, going into the nuances of what that means for the open source domain, Amy?

Speaker 4: Sure. So Harry, I think first about what ethics isn't, and I think it is helpful to put that aside, to distinguish ethics from other ways of thinking through difficult problems. Ethics is not just following the law. There are many laws that are not ethical: slavery was the law in the United States, and it wasn't ethically or morally correct. Ethics is also not just following societal norms. There are a number of societal norms that are also ethically problematic: patriarchy and misogyny, for example, have been accepted norms in societies for hundreds of years. So ethics is distinguished from that. That aside, what I think ethics is, is the application of moral principles, in a deliberate process, to a problem set, to identify the best course of action for good, not just the expedient answer. That's a long way of saying, number one, you have to apply what you think is the right thing to do. And number two, the process piece is really important. Ethics doesn't just stand out there in the universe; it's the deliberate process that makes it important.

Speaker 2: So within the process piece then, as you've described it: intelligence communities often talk about trade craft, the craft of their trade, how they do what they do; they collect, they collate, they analyze, they report, et cetera. I've already indicated, and we'll come back to it perhaps later, that technology has stepped into that trade craft, in that it has enabled it. Where do you see the difference between the process, the trade craft and the technology that's supporting it, and the ethics? Where does ethics fit into that, in your estimation?

Speaker 4: Well, I think ethics has to be baked into trade craft. It can't be bolted on at the end. You can't just do your analysis and then ask, is it ethical or not? It has to be part of the process holistically, from the beginning. And I think trade craft and ethical considerations are intertwined in ways that many people may not think about. The analyst has to think about the ethical responsibility to admit uncertainty, to understand what other possibilities or explanations of the data there might be, to recognize when they're wrong. That's all part of being an ethical analyst: understanding failure or alternative explanations, not just promoting your own analytic judgment.

Speaker 2: And I think the human aspects of what you just described, the psychology of it, would be a fascinating topic in itself, maybe a podcast for the future, because there is a real human psychological dynamic at play there, isn't there? If I have done something wrong in my analysis and I have created a bad outcome, ethically I should step forward, make clear that I have recognized that error, and learn from it. That's not necessarily what we want to do as professionals, though, in terms of our perception of self, or indeed others' perception of us. So Sean, maybe I'll bring you in at this point. Ethics in the open source environment, ethics in the trade craft of intelligence: how does ethics play out, and what do you see ethics as being for you, given your experience and background in intelligence?

Speaker 3: Yeah, there was a lot to unpick there from what Amy said, and just to touch on a couple of those very quickly: I like the idea very much that you've got to follow a process. Ethics can't just be out there, something ethereal; it has to be something you follow. Now, does it get trained specifically as part of our trade craft? I don't think it does, but it's implied. You've got to do all the things that we are talking about: you've got to be accurate, you've got to cross-refer, you've got to make sure that a rigorous analytical process is followed. That's slightly different, though, from what I call the moral component and moral courage, where, particularly in the military, you might be a dedicated intelligence analyst who has come up with an assessment, and your commander does not like that assessment because it doesn't fit their narrative. You then have to make a moral judgment to say, actually, sir or ma'am, this is what we think and why. That's happened to me several times. And the problem with that, of course, is it could be career limiting. But I think as long as you've got trade craft that is actually followed and written down and understood, there's an education element to this, particularly with the receiver of the intelligence, which I think is quite important. Just going back to what you were saying, Amy, about it not being just societal norms, because it's got to be what's right: the question I'd have on that, and it's a philosophical question, is who is it that defines "for good"? Because if I was a Russian intelligence analyst right now, I'd probably have a very different perspective on what intelligence I was providing for good and what good meant. So I guess there is a little bit about societal norms there, but this is where, without getting into it right now, the alliances and partnerships come in. If you can use a level of trade craft that you all understand, and even standardize it so that it is trusted by all, then you've got a better chance, I think, of not only getting good trade craft but making it ethical as well.

Speaker 4: Well Sean, I would say that's such an important point about who decides what's good. And there, I think it depends on the organization. When we're talking about organizations outside of government agencies, and about how we can have an ethical process for open source intelligence, I think each organization needs to be very explicit about what their goals are, who they are serving, who decides and how. So I think that's one of the challenges of the open source world: it's less clear, because you're often not serving one government in the open source world. And many open source intelligence organizations, like Bellingcat, say they serve the world. So how do you operationalize ethics in that context?

Speaker 2: Yeah, and of course we're not just talking about organizations now, because frankly, I can pick up some shareware off the internet in five minutes and go and do "open source" information gathering and intelligence. I can be an intelligence analyst in the open source environment as much as the next man, because technology enables that. So the ethics of open source actually goes outside the boundaries of government organizations, or even non-state organizations such as the international organization community. But I fear that we would dive into that rabbit warren and never reemerge. Let me move us on then, to a related question. It is a follow-on question, so let me posit this as a contentious question, or a contentious assumption, probably better said. There is an assumption that if a piece of information is in the public or even commercial domain, it is ethical to use it; it is reasonable to use it, which is probably a more accurate statement of that assumption. That if I have left my LinkedIn or my Facebook account unprotected, with no privacy settings, so anybody can see it, then I'm allowed to use it. That's a very controversial statement, and deliberately so. What's the view of that assumption? Let me start with you, Amy. How do you feel about the assumption that I can take as available information anything that I can get access to without the need for any underhand or nefarious techniques? I can just go and pick it up off the internet, for example.

Speaker 4: So I would say, Harry, I'm uncomfortable with that assumption, and I'm going to give you a reason why that's going to be very unsatisfying, which is that it depends. It depends on the stakes of the situation. It depends on the benefits that you could derive from the information that you're going to access. It depends on the harm that you might inflict, either deliberately or inadvertently, in using that information. So for example, if information on the internet is used in open source analysis and it could identify an individual who could then be subject to imminent harm, that's deeply problematic. Now, it depends on what the goal is: do the ends justify the means? But the flip side is that individuals often make a lot of information public and really don't care how that information is used. So often you hear in this conversation, well, individuals should have autonomy over their data and they know what they want used or not. But actually, some incredible research that my colleague Susan Athey at Stanford did at MIT showed that undergraduates would give away the email addresses of their closest friends for a pizza. So when we hear about stated preferences for privacy, and then you offer a very small incentive, in this case one pizza, you'd be amazed at how often the revealed preferences of individuals suggest that they're actually not so concerned with their privacy. So that's important to bear in mind.

Speaker 2: I have to admit that I may well have been persuaded by a similar offer, had I ever been offered a pizza; my waistline is evidence of that. So Sean, before I come to you: I think what we're saying, from what Amy has just said, is that it does depend, and that's of course the worst possible answer, but it's exactly the correct answer. It depends on so many variables. So when you are facing a question and you perceive an ethics issue, where do you draw the line? You've got decades of experience in intelligence. How do you begin to understand where to draw the line in the absence of any governance that might be in front of you?

Speaker 3: Yeah. And you won't be surprised to know that I pretty much agree with Amy on this one. I would always ask two questions about that intelligence. First, to what end? What are you trying to achieve with that intelligence? If it's to increase your understanding or build a baseline, that's an entirely different thing from causing harm to anybody or impacting anybody. And second, what are the consequences? So the two are linked. What I would say is that this is where the debate gets really interesting, because there is a gray area: how much impact, and what degree of consequences, are acceptable, and how much isn't? If you are having an impact that is, say, psychological, or you're able to shape a potential enemy in terms of their behavior, which might lead to something in the future, that's a different thing from being able to deliver kinetic or other effects actually on them. So the question then becomes, at what stage do you have that graticule that says that's okay and that isn't? And that will of course, as Amy said, depend on the situation at the time, the amount of peril you're in, or the real requirement of what you are trying to achieve. So I don't think there is a particular answer to that question.

Speaker 2: No. Well, my view is that eventually somebody has to make a decision, so long as that person has been given the best available information upon which to make it. For example, a point you made, Amy, about the "I didn't mean to do this" effect: that's not what I intended, I didn't realize that would be the effect. That sort of thing is impossible to predict, but somebody has to be accountable for the decision. And certainly in an organization, you'd expect that to be a competent individual provided with the right information to make it. For me, the ethics of this particular question, about whether the information should be used, is determined by the ends. It is something that the decision maker has to make a decision about and has to be held accountable for. Therefore, there has to be due process, which goes back to the trade craft. But we'll come back to that point in a second; another rabbit warren is looming, I suspect. I'm going to push us on. So what about the arrival... I mentioned this in my introduction, Amy, the arrival of technology that truly started to enable the realization of the potential of open source information and the intelligence value of it. Should there be limits on what we are able to collect, and should we be more worried about the use of advanced technologies, there are others, but for example, artificial intelligence? Should there be things that concern us about the use of AI in the open source environment, that should limit the way we're employing those kinds of techniques?

Speaker 4: So I think new technologies of any stripe require deep ethical thought, because they're always used in ways that we can't foresee. And with AI, we know that there are weaknesses, deep inherent weaknesses. Part of the ethical responsibility of using AI is first understanding how AI can fail, how it can lead analysts astray and, as a result, leave individuals vulnerable. So for example, we know that AI is only as good as the training data on which it is based, and so facial recognition technologies are very good at identifying light-skinned faces and less good at identifying dark-skinned faces. So you have, in law enforcement, the misidentification of suspected criminals, people wrongfully arrested because of faulty facial recognition algorithms. So we know that training data is crucial, and understanding that weakness is one consideration. The second real weakness with AI is explainability. Analysis in anything is not just an act of analytics, it's an act of persuasion. If you go to a boss and you say, "Well, we think China's about to invade Taiwan," and the boss says, "How do you know?" and you say, "The AI told me," that's not very persuasive. AI isn't explainable, at least yet, and so how an algorithm arrives at a conclusion is problematic because it's a black box. And then I think there's a third risk, which is that AI, although an incredibly powerful tool, and I think intelligence agencies need to utilize it more, can distort the analytic process, because it leads you to count what can be counted. Often the most important parts of intelligence analysis, as you all know, have to do with intentions. They require creativity, not just analytics. And so if you're relying on AI more and more, because it's this wonderful tool, you're not spending your time on the squishy, hard-to-quantify variables that are often more important, as we're seeing right now in the war in Ukraine. If you just count weapons, you think the Russian military is far more powerful than it turned out to be and that the Ukrainians were likely to lose. Well, it turns out those non-quantifiable variables, like morale and the ability to do combined arms operations, are far more important than intelligence analysts anticipated.

Speaker 2: Right. Yeah. Sean, I almost know what you're going to say, but I'll ask you anyway. Your view then, on the limits of collection and the use of AI in these kinds of conversations?

Speaker 3: Yeah, I think this is a really important one, and we could spend a lot of time on it, because there's a wider intelligence issue here. Why do we always get it wrong in the intelligence community? We can see, exactly as Amy said, exactly how many tanks and aircraft and all the rest of it, but discerning the intent, which is what half of the threat matrix is all about, and the predictive intelligence, is the really key thing, which is why you employ clever people to be intelligence professionals. But we invariably, well, not invariably, we do get that wrong sometimes, and why is that? So I think there is definitely a role for AI, of course there is. But for me, the role is making it easier to sort, wrangle, as you've taught me that word, and manage the intelligence that's provided, to provide that baseline. But I do love the phrase that you used, that AI will let you count what can be counted, because it's got to be more than that. So what I was going to say originally is, where do you put the human in the loop for that? It's got to be useful, but it's a tool just like any other intelligence tool. And I've had a lot of debates, as you know, about AI. You've got the intelligence community slightly scared of it because they think they're going to lose their jobs. That would be an incorrect application of AI. But equally, how do you use it to maximize what you do? And again, I agree 100% with Amy about the training data. You've got to have enough training data to start trusting the AI. It's like any intelligence analyst, certainly when I was around: you've got probably three attempts to brief the boss on what you think's happening. If you've got it wrong three times, they're never going to use you again, and they'll move on to something else. It's the same with the AI. If your algorithms are, in hindsight, proving to be really strong, you've probably got good algorithms there, and you're going to use them again. And of course, you can test and adjust. So AI is definitely an important part of this. In terms of how it meets the ethics, it's down again to what Amy was saying: if it's not explainable, how do you know? I mean, can AI be ethical or non-ethical, or is it just ones and noughts? That's the really tricky one, which is quite scary to even think about.

Speaker 2: Yeah. Sorry, Amy, go ahead.

Speaker 4: I was just going to say, Sean, I think we bake our ethics into algorithms without even realizing it. How we structure the algorithms, the assumptions we make, the data we use, those are imbued with ethical ideas and they reflect societal norms.

Speaker 2: Yeah, exactly what I was about to say. One thing we didn't touch on in that question was, where are the limits? For example, we can plunder from open sources anonymous phone records: where are those phones in the world, what are they being used for, for advertising purposes, for example? We can access an incredible amount of data, but should there be any limits on what we should be able to collect? Should there be more governance over what information is available in open source, i.e., make some of it closed source, only available to people with the appropriate authority or the appropriate authorizations? Is there a limit to the open source environment that we should be imposing? Amy?

Speaker 4: I think there should be some limits. And of course I stand here in the United States, fully recognizing that we are really behind Europe when it comes to data protection and data privacy regulations. So there's all sorts of information available about me online that is free and open for our adversaries to use, but is not free and open for the US government to use. Now, I'm not advocating that the US government use all this information; there have to be guardrails against that too. But my point is that the playing field is not level. And so not just from an ethical perspective, but from a national security perspective, this is not the optimum environment in which we want to operate. So I think there do have to be some thoughtful limits. And the question is how you get to thoughtful limits that keep pace with technology, about what information is available about you and how it can be used. And I think we're nowhere close to being where we need to be in the United States.

Speaker 2: Yeah, I don't think that's unique to the US, by the way. I think that's a global problem that we've created for ourselves with the emergence of this technology, which allows us to distribute data in an unbelievable way, terabytes of data per second being generated. And that's all "available" for various uses. Sean, if you'll forgive me, I'm going to move us on to what is probably the last question we have time for in this session. We've moved through so many huge topics, I feel remiss in not having given them more time, but we don't have enough of it. Let's talk, then, about how we start to mitigate some of these challenges. Again, we're not going to have time to talk about this in great detail, but what are the big macro things we should be thinking about in this challenge that we face over the ethical use of the open source environment? What kinds of approaches should we be using? If I start with you, Amy, in terms of some general principles that you might have; Sean, I'll come to you afterwards, to give you some chance to think about what that means in the military intelligence environment. What do we need to be thinking about in that very specific community, one that certainly Janes spends a huge amount of its time supporting? So Amy, coming to you first, generally speaking, ethics in the open source environment: what are some of the ways we can mitigate the challenges we've at least touched on in the last few minutes?

Speaker 4: Well, Harry, I can think of three. I mean, I could have a long list, but I'll pick my top three.

Speaker 2: Thank you.

Speaker 4: Number one, each open source intelligence actor needs to have an explicit guide; so putting things down on paper, being explicit, is a really important part of the process. Back to Sean's important point: to what end, and what are the consequences? How do they reason through what their ethical guidelines are? The act of actually writing that down is very useful and, especially as individuals and organizations grow, can induct new members into the values of the organization. So being very explicit about that; that's guide number one. Number two, back to our conversation about AI: understanding the weaknesses of the data and the tools that they're using. And not just weaknesses in terms of getting to the wrong answer, but weaknesses in terms of exposing innocent people to harm. So really understanding the tools and the data you're using and what the consequences could be. And then number three, and it's related to the first two: understanding your red lines. What won't you do, no matter what the circumstances are? Where are your individual moral red lines? Where are your organizational red lines? Where are your national red lines? Even if it's expedient, you're not going to do it, and why?

Speaker 2: Yeah, that's great. I want to come back on that, but I won't. Sean, what are your thoughts on that in the context of the more military-related intelligence analyst?

Speaker 3: So the military side of that really translates beautifully from what Amy was saying. Firstly, I think there's the education process, and we're starting that by having this discussion now, but that has got to get into the military purview as well. I know there are some very rudimentary discussions that have been happening, but how far have they got? And we're only simple folk in the military. So, writing stuff down, which is your orders, that is what you will do. Doctrine is what I'm really talking about at this stage, that which is taught. But it's got to be more than just doctrine, because as we know, we just put doctrine on the shelves and never refer to it again. It's got to get right the way through into the training environment. And in this day and age, when you've got some very, very capable junior analysts, quite often in some difficult situations, that understanding has got to go right the way down. It's not enough now for the commander to say, do this, because it might not be a legal order, and they might not know it's an illegal order. So that education in particular has got to go right the way down. I mean, it's an extension of what we do anyway through the law of armed conflict and international humanitarian law, all those things. But it's got to be translated into the manual that says you can do this and you can't do that. The only nuance I would add, and nuance is the right word, is that because this is such a complex subject and there are gray areas, you've got to train with it. So just like we used to do in the targeting cycle, you go through lots of different scenarios to see, is that proportionate, distinctive, et cetera, et cetera. I think in this case you'd need to as well. And the example I was thinking about the other day was, you'll be aware that the Russians have been trying to target the HIMARS, a very, very capable artillery system which is making a difference on the battlefield, and so the Russians have been wasting some of their very highly capable missiles on decoys. Now, had that come from the open source domain, which is something that an organization like Janes could legitimately have identified, would they or would they not be able to actually share it? The fact that it actually came from the Ukrainians themselves, I'm quite thankful for. But that's the sort of gray area that you'd need to almost red team and work through, saying, okay, if we've got this, could we use it? Would we use it? So I think it's more than education, it's actually doing some scenarios.

Speaker 2: So that internal training, that standardized approach within trade craft that we talk about so frequently in the intelligence community, does need to be imbued with this ethical thread. It needs to be understood from beginning to end. Now, because time is short, let me give you both a moment to think about the answer to the next question, which is the same question I ask at the end of every podcast: what is the one takeaway you want the listener to take away from this conversation about ethics in the open source environment? We've spoken about the fact that ethical issues are coming to the surface in the open source information, and therefore intelligence, community, because open source is being exploited ever more capably by technology that's allowing us to do it. And that, as I described at the beginning, is moving us closer to that actionability point where things could be done as a result of what's being found in the open source. That's not the only reason ethics is increasingly important, but it is certainly, I think, a catalyst to this conversation about the need for the ethical debate. We talked about the assumptions being made and how they make us feel uncomfortable, but we also talked about the kinds of technology that might need to be carefully used; the AI question we talked about for a few minutes there. But if you had to leave the listener with one question, sorry, one statement, or even one question I suppose, Amy, what would that be? What would you want to leave them with?

Speaker 4: I think I'd want to leave them with the idea that ethics is a dynamic process, and I emphasize the word process, because technology is changing so fast, and the conditions of its use are changing so fast, that you can't just say, here are my ethical principles, and call it a day. It is a constant process that individuals and organizations need to be engaged in.

Speaker 2: Sean?

Speaker 3: Just in this very swift, as you said, skipping-stone treatment of the subject, it just proves that this is an embryonic but very complex issue, and we've got to do more on it. What I'd say is that underpinning it all is good trade craft. If you get the trade craft right, then you're probably going to do all right on the ethics as well. So you must consider the ethical element of trade craft.

Speaker 2: Yeah, I think if I was going to answer that question, and I'm going to now, it would be this. If you've listened to this podcast and you haven't at some point felt just a little uncomfortable because you don't really understand the ethics of what you're doing in the intelligence world, stop and think about what you're doing and how you're doing it, and have a conversation with your colleagues, your peers, your seniors, and just see what you could begin to form as an ethical decision-making process, or support to one. Because the lack of doing that is where you get the inadvertent effects that you mentioned in your narrative earlier, Amy, when you talked about the fact that you've got all of these things going on, some of which you can predict, many of which you can't, and at the end of the day, as you said, we end up with inadvertent effects, and that is really the bit that I'm worried most about in the open source environment. So sadly, I do have to bring this conversation to a close, because I'm afraid we're going to run out of time. Amy, once again, a huge thank you for your participation in this. We'll pass all the difficult questions on to you; we'll keep the easy questions for ourselves. And I'm sure this is a topic we need to come back to, Sean, so let's make sure we get some additional time in Amy's diary as soon as possible to take up some of the extra points. Amy, thank you so, so much for coming.

Speaker 4: Oh, it's such an honor and delight to be with you both. Thank you.

Speaker 2: Thank you. Sean, thank you.

Speaker 3: Thanks both. That was a really good discussion.

Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts, so you'll never miss an episode.

Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


Amy Zegart

Stanford faculty, Senior Fellow at the Hoover Institution and FSI, contributing writer at The Atlantic

AVM (ret’d) Sean Corbett CB MBE MA, RAF

CEO and Founder, IntSight Global Limited