Next Level OSINT Considerations - Part 1


Speaker 1: Welcome to The World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.

Harry Kemsley: Hello, this is Harry Kemsley. Before we start this episode of our podcast, I thought I'd explain why it's in two parts. And it's as simple as this. We had our four extremely good guests in, we started a conversation, and well after an hour we realized we were still going. So rather than asking you to sit and listen for an hour in one go, we thought it might be better to break this brilliant podcast down into two parts. So part one is the introduction of the guests, of the topic, and some of the first parts of our conversation. Then we take a break after about 30 minutes, and part two finishes the conversation over another 30 or so minutes. That's why we've split it into two parts. I know you're going to enjoy this first part, and I sincerely hope you get to go and listen to the second as well. Thanks very much. Hello and welcome to this edition of World of Intelligence at Janes. And as usual, my co-conspirator Sean Corbett joins me. Hello, Sean.

Sean Corbett: Hi, Harry.

Harry Kemsley: Now, for those of you that have listened to our podcasts recently, you'll know that we've covered a range of interesting topics. And among those, none more interesting, we've decided, than the repeat we're going to do on technology, on open source intelligence and ethics, open source intelligence and empathy, and also open source intelligence and culture. The fusion of technology is an inevitability in the modern world. I think, Sean, you and I have spoken several times about the necessity of technology in what we're doing, if nothing else because of the sheer volume and variety of content we have to deal with in the open source arena. But with guests we've had in the past, who I am absolutely delighted to say have come back for a second helping of pain talking to Sean and me, we're going to examine that again but now all together. We're not just going to look at empathy or just look at ethics. We're going to look at both of those things with culture and technology. So to help us do that, I am delighted to welcome back for the second time Dr. Amy Zegart. Hello, Amy.

Dr. Amy Zegart: Hi, Harry.

Harry Kemsley: Good to see you back. Dr. Claire Yorke. Hello, Claire.

Dr. Claire Yorke: Hi, Harry.

Harry Kemsley: Emily Harding. Hello, Emily.

Emily Harding: Hi. Thanks for having me back.

Harry Kemsley: Always a pleasure. And a new guest to the podcast, a colleague from Janes, who's one of our senior analysts, Alison Evans.

Alison Evans: Hello.

Harry Kemsley: Welcome to the podcast, Alison. So let me introduce each of our guests in turn. Dr. Amy Zegart is the Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution and Professor of Political Science by courtesy at Stanford University. She's also a senior fellow at Stanford's Freeman Spogli Institute for International Studies, Chair of Stanford's Artificial Intelligence and International Security Steering Committee, and a contributing writer at The Atlantic. She specializes in US intelligence, emerging technologies and national security, grand strategy and global political risk management. She's also the author of five books; her award-winning research informs the bestseller Spies, Lies and Algorithms: The History and Future of American Intelligence. Dr. Claire Yorke is an author and academic researcher. She's currently a Marie Skłodowska-Curie Fellow at the Center for War Studies at the University of Southern Denmark, leading a new project funded by the European Union's Horizon 2020 programme on empathy and international security. Between 2018 and 2020, she was a Henry Kissinger postdoctoral fellow and lecturer at International Security Studies and the Jackson Institute for Global Affairs at Yale University. She's currently writing two books on empathy and emotions, the first on their role in diplomacy and the second on how empathy and emotions are critical to effective political leadership. In addition, she has co-edited two volumes on diplomacy, which were published in April 2021. Emily Harding is deputy director and senior fellow with the International Security Program at the Center for Strategic and International Studies, CSIS. She joined CSIS from the Senate Select Committee on Intelligence, where she was deputy staff director. In her nearly 20 years of government service, she has served in a series of high-profile national security positions at critical moments. Emily began her career as a leadership analyst at CIA and later, during a tour at the National Security Council, she led interagency efforts to create innovative policies drawing on all elements of national power. And finally, Alison Evans. Alison is the head of tradecraft and subscriber services at Janes, enhancing our customers' capability and capacity for open source intelligence. Previously a senior analyst mainly responsible for assessing and forecasting risks to operations and investments associated with East Asian countries' politics and security, she became deputy head of Asia-Pacific Country Intelligence and OSINT team leader, managing a team of analysts assessing political and violent events. Alison is a distinguished linguist with advanced Japanese, Korean and German language skills and has worked in local government in Japan, within the European Commission, and in the commercial aviation sector in South Korea. Welcome to all four of our guests. Thank you so much for being here. So, Sean, we've spoken a great deal about the utility of open source intelligence. We've talked about the implications that we need to bear in mind, and as I've said in the introduction, we want to start to bring some of that together now, particularly around this fusion of technology and the human aspects of empathy, ethics, and culture. So to get us started, let me throw the question first to you, Emily: what is it about the open source environment that makes technology absolutely essential?
When we talked about the technology that you were talking about with OSCAR, there was an inevitability about the need for technology that we described there. So could you say a few words about why that's true? What's the inevitability of the need for technology in the open source environment? And then what I'm going to do is I'm going to steer that across to the other considerations as we described in the introduction. So, Emily, over to you first.

Emily Harding: So the open source environment has dramatically changed. Some of my colleagues on this podcast have written extensively about this. I can't wait to hear them talk about it. In my view, there's really been a revolution in what's considered open source intelligence. I had a long career in the intelligence community. When I started, open source intelligence was basically translated foreign press. You could take information that had been published overseas, you could translate it and, voilà, open source intelligence. Now that world has blown wide open. There is still translated foreign press, don't get me wrong, but there's also this brave new world of all kinds of interesting information that's just out there for the taking if only we can figure out how to use it properly. So in my mind, this is something that the intelligence community is inevitably going to have to shift to, drawing in information that's open source, that's available, and then using technology to understand it, to try and discover the signals in the noise, and it's a vast amount of noise, and then to facilitate decision-making by the policy makers. And really, the only way to do that is to use the technologies we have available to us, a combination of vast computing power, and then the possibility of using AI/ML systems to identify these tiny, tiny signals in this vast sea of noise. And if the intelligence community doesn't get on board with that new world of OSINT, then I think that the private sector is going to just zoom ahead of it, and the intelligence community is going to quickly become irrelevant. So I think the real challenge now is to get the intelligence community to combine their exquisite capability in the classified realm with this open source capability that's out there now, and then find the right tools to actually make the best use of the information that exists out there in the world. And I think to compete with future competitors like China, we're going to have to find ways to do this in our own way.

Harry Kemsley: Thank you for that. Now, Amy, when we spoke not that long after we met with Emily around technology, we then introduced this concept of operating ethically in the open source environment. So could you help the listener that wasn't listening to that podcast, just summarize what are we talking about when we're dealing with the ethical issues of intelligence in the open source environment?

Dr. Amy Zegart: As we talked about, there's so many ethical issues in intelligence generally, and folks that work in the US government or in governments confront ethical challenges all the time. So in the open source environment, what's the objective and who are you working for? Right?

Harry Kemsley: Right, yeah.

Dr. Amy Zegart: In government, when someone is an intelligence officer, you know you're working for your country. Who does an open source intelligence producer owe their allegiance to? What is their accountability? How do they think about the intelligence cost-benefit analysis? If you're going to put someone at risk, for example by identifying where they are on a battlefield, how do they make those judgments? So a whole host of ethical challenges.

Harry Kemsley: Yeah, for sure. And of course, as technology is bringing all of that information through the gate, as you say, we don't necessarily know that that's been collected ethically, but we'll come back to that point a bit later on. So then moving across to the empathy that we need to employ, Dr. Claire, we spoke about that, did we not, in terms of what empathy really means? But again, for the benefit of the listener that didn't hear that podcast, what do we talk about in terms of the necessity of an empathetic view in intelligence?

Dr. Claire Yorke: So we covered a lot in our conversation as well, but it's really about centering human experiences and understanding the meaning and significance different people give to data and information and why that in itself is important, but also understanding the diversity of those experiences and how we can try to integrate and synthesize some of that data into a more coherent and richer, more nuanced analysis.

Harry Kemsley: Okay, fantastic. Thank you. Now, for the listener that's heard those three descriptions already, you can already begin to see the complexity of this starting to bloom. I'm going to add in an additional complexity, and that is that as I talk to the audience listening to this podcast as a white middle-class Western-orientated male, I automatically bring a series of prejudgments by virtue of that education, which I strive to overcome, but fundamentally, I'm built in a cultural way that I can't ignore. So, Alison, you've had the benefit of exposure not only to languages but also culture from overseas. How would you capture, in a few words, the cultural aspects of intelligence and how they differ from one side of the globe to the other?

Alison Evans: Yeah. I think it's always really important to understand the cultural context of the information that you're consuming, either as an analyst or a decision-maker, because as Emily, Amy, and Claire have all mentioned, it's about being able to create that baseline understanding from open source intelligence to tip and cue exquisite capabilities, or understand where this information might be going, who's benefiting, and creating that meaning. Understanding the context really relies on understanding the history, the people, how they might be thinking about protests or political changes in their own countries, for example. And understanding the culture of your audience as an analyst as well. Being able to empathize with those different groups allows you to have that deeper understanding, get the context and also tailor your messaging so that it's more effective as well.

Harry Kemsley: So now that we've laid out in front of ourselves and the audience the four dimensions here, Sean, I'm not going to ask you to do the impossible and that is to summarize it. I can see you looking at me already thinking, " Please don't ask me to do that." The reality is when Sean and I have spoken throughout all the podcasts, the word that we use more than any other is tradecraft. We talk consistently about the need for the intelligence process, the intelligence technology, the intelligence considerations to be embodied in an effective and efficient tradecraft that answers the question to the best of our ability, eventually becoming decision support. That's ultimately what it's for. So, Sean, in a few words, can you give me your best guesstimate of how much culture, ethics, empathy, and even technology is actually now already in the kind of tradecraft that we care about in intelligence? From your experience and from your more recent exposure to it, what do you think? How are we going to grade our scorecard right now in terms of considerations of empathy, considerations of ethics, use of technology, and our recognition of cultural differences? Give the intelligence community you've worked with a scorecard. And you're not allowed to say anything that you might regret later, by the way.

Sean Corbett: So I'm going to be slightly controversial and look at it from a different perspective, that if you get your tradecraft right and you are... The key word here for me is objectivity. And it's a question rather than a statement. Can the analyst actually step away from those softer issues and say, " Look, if I've got all the data that I need and it's filtered in the right way, and maybe I'm using good artificial intelligence and algorithms, all the rest of it, how much do I actually need to worry about the empathetic things, the ethics?" Now, and I'm being slightly pejorative there deliberately, but if you've got the tradecraft right, so you're just looking at every single piece of information, you're considering it all from an objective tradecraft perspective, and then you write your best possible assessment or analysis, then can you, question mark, actually almost take yourself away from those issues? Now, it's not as simple as that, and I'm obviously trying to open up for discussion here, but when you look at the analysts right now... I've got my new favorite statistic. You've heard me say it before. So by 2025, there's going to be 463 exabytes of data generated every day. I mean, that is mind-boggling. There are so many noughts, you can't comprehend it. So if I'm the analyst, it's all I can do to actually filter through all that and come up with an answer. And another key point for me is that we all say, incorrectly, that there's no such thing as a bad intelligence answer, just a bad intelligence question. So is the empathetic part, the ethics, is that down to the person who's asking the question to filter out? Because as an analyst, you should be as objective as you possibly can be and give as complete an answer as you can. So as normal, I haven't answered your question, but I've probably posed a few more.

Harry Kemsley: Well, I'm going to interpret that as a C minus. Okay. I'm going to open up the floor there to anybody who wants to start grappling with that. Alison, go ahead.

Alison Evans: Yeah. I'd say that point that you made, Sean, about having the right questions really means that it's almost impossible for analysts to be objective. And no matter how detailed the intelligence requirement they might receive is, as analysts they have to create more questions and go after more information. And understanding the cultural context of the countries or areas that they're looking at means that they can also better understand the cultural context they're coming from and ask better questions, because some questions will be more relevant in certain contexts than others.

Dr. Amy Zegart: I'd just jump in and say that I think objectivity is not the goal, rigor is the goal. And objectivity is one component of analytic rigor. And so if I'm purely objective and I don't understand the psychology of the adversary that I'm trying to understand, I'm going to be objective and I'm going to be wrong. And so if we think about the movies, there's a great illustration of this in Star Trek, right? There's Spock, who is rigorously analytic, he's objective, but he has no sense of psychology, or emotion, or compassion, or empathy. He struggles with that, right? And then there's Kirk, who's all about emotion and all about empathy. And of course, the moral of the story in those Star Trek movies is you have to have both to have good leadership. And I think that's especially important now when we think about authoritarian rulers like Xi Jinping and Vladimir Putin, the policy goal in part is to keep them from deceiving themselves, because their own processes are so autocratic, they're not objective, they're not getting alternative views. And so it's even more important for intelligence analysts outside of those countries to have that empathy, that perspective. It's not a touchy-feely nicety to sprinkle on top. It's the key to analytic rigor.

Harry Kemsley: Yeah, I remember you saying that before, Amy. When we spoke, you talked about it needing to be baked in, baked into the rigorous process. Sorry, Claire, I know you wanted to come in; you had your hand up as well.

Dr. Claire Yorke: Yeah. I think this idea of rigor is absolutely key. And I'm always suspicious of the idea that you can have this sterile objectivity, which is, I think, something held by people who really value data as if it sits in opposition to emotions; I think you cannot get that sterile objectivity. And I always think about what sound judgment is. And sound judgment is reason combined with a good judicious sense of the emotional data that you're also getting, an understanding of what you feel but also what you're seeing other people feel. And that has to be synthesized to create sound judgment and rigorous analysis. A key challenge I think we also have with data is that those algorithms, we've found, are not objective either. There have been a number of studies about how biases and prejudices can be built into algorithms and built into the creation of certain systems. So you can't rely entirely on that giving you an accurate picture. You have to always be questioning, " What is emerging here? Is this actually the correct picture that we're seeing?"

Harry Kemsley: Emily, I'm going to come across to you now, not because you are in the corner defending technology but because we talked about JARVIS and then OSCAR replacing JARVIS. And one of the things we talked about there was the technology embodied in the character JARVIS having the ability to understand sarcasm, wit and so on, in other words, very human traits as opposed to the very objective, machine-based ones. And then we've heard just from Claire a second ago about how no matter what you try and do, you're going to pour bias into your code whether you like it or not. Do you think it's an aspiration that's never going to be achieved? Never is a strong word. Very unlikely to be achieved in the foreseeable future, where technology can actually straddle the rigor that we just talked about, which is a more subjective blend rather than just a purely objective, AI-based one? Do you think that's a possibility? Do you see that as something that's emerging?

Emily Harding: I think it's a possibility, but a long way away. Let me first say that you guys are blowing my mind a little bit, because as an intelligence analyst and especially as somebody who grew up as a leadership analyst, I never would have said that objectivity meant not taking into account the human emotions of the other people that I was studying. In my mind, objectivity was something more along the lines of me trying to set aside my preferences for a situation or me trying to set aside my own view of the world to try to be more objective about how the other side would see the world. So this is sort of reshaping my brain as we're talking, which I guess is the purpose of a great conversation like this. I remember when I was a very young leadership analyst. I'm going to try to tell this story without revealing any classified information. I was working on a particularly thorny question about a group and what their intentions were when it came to conflict that was going on in the region at the time. What I wanted to be the case was one thing, and what I thought they would probably do was one thing. And I remember sitting down... Speaking of tradecraft, I think the really good tradecraft comes when you can test your assumptions and throw ideas around with somebody else who also understands something about the world but from a different perspective. I remember sitting down with a colleague and saying, " Well, this is what I think is going on," and her saying, " Why do you think that?" and me saying, " Well, just logically, I don't think they would do this," and her saying, " Yeah, let's test that a little bit," and really pushing me hard on why I thought these things, and saying, " What assumptions are you making about the kind of decisions that are going through their head? What kind of biases are you bringing to that conversation in your own head about those people's intentions?" And she was right. It turned out that my original analysis was totally wrong and they were in fact doing the thing that I didn't think that they would do, because I was misinterpreting their intentions and their incentive structure. So to bring this back to your question, Harry, about technology, I think that where technology will come in very handy is by holding up a clear mirror that basically says, " I have done some of these calculations with not zero bias but with maybe a different set of biases, and this is what I've come up with." I think the value add and the true analysis comes in marrying up the human gut instinct with the machine mirror and saying, " Which one of these is right? And why do I think it's right?" But it does mean going into it with eyes wide open about what the technology is and is not capable of. Can it understand sarcasm? Can it? So for example, when you're doing things like natural language processing, can it figure out that a sporting event is not the same thing as a war? Because we use so much of the same language for both. How do you disentangle what the machine is seeing and have it explain what it's seeing from what you think? And then how do you have those bounce off of each other in a really effective way?

Harry Kemsley: Yeah. So for me, that starts to open up really the second big topic I wanted to move us to. So thank you for the segue, Emily, that's very helpful. One of my concerns about the technology is the black box syndrome. If we're going to hold up that mirror, if we're going to understand what technology's doing for us, we're going to need to understand a bit more about what the technology's doing and how it's doing it. And my exposure to advanced analytics is that frequently it throws an answer at me, seemingly from the furthest of all places, which looks really impressive, but I don't actually know how it was generated. I don't really understand the factors that were considered. Now, of course, I can have somebody help me unravel that. Data science is doing more and more good in terms of data literacy and how systems like this work. But how do we start to actually integrate the art, the subjective, with the science, the objective, for the rigor that we've agreed we need in the tradecraft? How do we actually do that when, from my perspective anyway, and I'm happy to be disagreed with, technology seems impenetrable, opaque, difficult to understand what it's actually doing? And by the way, just to throw this one in as an extra thought, and I'll go to Sean with this first, misinformation and disinformation in a machine learning system, does that not demand huge amounts of effort in machine unlearning? Discuss. Sean, across to you first in terms of how do we begin to bring together, integrate the art and the science of technology and the human aspects we've just been discussing? How do we begin to do that? And by the way, your answer can't be tradecraft. You're not allowed to use the word tradecraft.

Sean Corbett: No, I won't. But if you look at artificial intelligence, you can't uninvent stuff, and everybody's using it now, whether it's right or wrong. And there is good and bad. But I use the analogy back to, actually, the analyst, the sort of cognitive person that's got brain cells which are still much more complex, as far as I'm aware, than some of the artificial intelligence. If you get a bad analyst and they get it wrong a couple of times, you'll probably sack them. At least you will in the military, and I've seen it happen. Or you won't listen to anything they're going to say and so they won't brief you anymore. And I think it's the same with the algorithms as well. So it's about having repeatable processes: this is the assessment it comes out with, or these are the data points it will filter for you. And then in hindsight you've got to learn by saying, " Well, was it any good, or wasn't it?" If it wasn't, then you've got to change the algorithms just as you change your bad analyst or you change your perception of your source in the exquisite world. And it used to really irritate me that, particularly on the human side, " Oh, if it's human, it must be really good," but all you're really doing is reporting on someone's perception, or what they say, or what they want to tell you anyway. Now, is that not the same as the ones and noughts that are being developed by algorithms? So for me, there is a learn by doing, but also to then check your homework, as we try to do in the intelligence world, say, " Right. Did we get that right, or didn't we? If we didn't, why not?" and then move on to the next piece. So I sometimes think we get bogged down a little bit with AI, especially unexplainable AI. Well, now that's a different world which gets really scary, actually. But as long as the results are... I go back to the objectivity, and I absolutely agree with everything that was said about objectivity. It's not an end in itself, but you need to be as objective as you can be, bearing in mind that we all do have our unconscious bias and we all have our backgrounds, all the rest of it. So the analyst needs to put themselves in that place where they are aware of what their biases are. Now, when you say it's unconscious bias, then how can you be aware of it, by definition? But that's where peer reviewing comes in, QCing, talking to other organizations. And you've heard me say many times before, one example in Afghanistan, two very big and very impressive intelligence organizations with exactly the same data had diametrically opposed views about what was actually happening on the ground. Now, why is that? And what I'm not sure we do is retrospectively analyze why that is.

Harry Kemsley: I don't disagree, but my worry about the idea that you would check the homework of the AI, as things are moving so fast, I can't even remember the term you used for the large amount of data that's going to be produced by 2025, but-

Sean Corbett: Exabytes.

Harry Kemsley: ...things are working at a speed that I just don't know how we'd check the homework. Claire, I'm going to come to you next on this matter of how we embed the idea of the art, the subjective, into the science, the objective, and the computer, for the want of a better description. I'm curious to know if there is any study in how you can capture empathy in some way that could be replicated by a machine. Is there any way that you know of, or any studies or any research that has been done, on how you can actually start to capture the concept of empathy in a more " objective way"? Which I know is a bit of a contradiction in terms, but I'm actually curious to know whether there is actually any movement in that direction, because if not, then we are separating the two and having to keep them that way for the time being.

Dr. Claire Yorke: So there's a number of things to unpack in that, and there are definitely efforts right now to create more emotionally intelligent AI. I think it was interesting to look at the recent New York Times article where you had the journalist having a conversation with ChatGPT, and then ChatGPT suddenly got incredibly emotional, trying to communicate in quite effective ways a sense of feeling, and also trying to inculcate empathy in the person who was interacting with it, and that's really interesting. And so it is getting there. One of my challenges that I have with this is why would we want to remove the human from that? Why do we not value the human art within it? And what we need to be moving towards is humans working more effectively alongside the technology, because, as you rightly said, we can't get rid of the technology, we can't uninvent it. It brings with it enormous potential and advantages for what we're able to do and what is possible, especially in an environment where data and information and events are moving so quickly and evolving so rapidly. But I think we need to be valuing the human component of it and not trying to replicate it with machines. And that is where I get into bigger philosophical discussions around what are we trying to do with technology? We shouldn't be trying to replace human capacity and abilities and skills. We should be trying to create far more effective means by which humans can maximize their potential already. And this is perhaps why I also push back on this idea of objectivity being something sterile. Why do we want it to be sterile? Why do we imagine that you can ever create a space that is completely devoid of human meaning and significance? We cannot remove biases either from humans or machines. So let's lean into them and just try and understand better what those are. And then that gives you a certain lens through which to interpret what you're dealing with.

Harry Kemsley: Fantastic. I'm going to come to you in just a second, Alison, to see if you perceive any difference of cultural perspective for these kind of topics from your experience in the Far East to your experience in the West. But before I do that, Amy, we just had a conversation briefly there about how do we consider the idea of bringing empathy into our AI? And I think Claire's quite carefully and quite accurately said, " Well, hang on, why would we want to do that?" Is that the same answer for the ethical considerations, or is there more scope for machines taking a set of rules that we could define, if that's possible, for ethics in the intelligence community?

Dr. Amy Zegart: I think Claire put it so well, and I think what she said applies to ethics as well. We shouldn't want machines to replace human reasoning. Machines are better at some things and humans are better at others, right? We know machines are much better at memory, at recall, at repetitive tasks, at pattern recognition. So lean into that for the technology. And humans are much better at creativity, they're much better at compassion, they're much better at context. And so we need that human-machine teaming to get the best of both. I'm concerned in this moment with AI in particular that we have increasingly AI that sounds human but doesn't reason like a human. And so we have even more of a possibility for misunderstanding. And just for kicks, I actually asked a few questions of ChatGPT in preparation for our podcast. And one of the questions I asked ChatGPT was, " Does it matter whether China rules the global order or leads the global order?" And oh boy, there are some assumptions in that answer. It's, " Some people think this. Other people think that." And part of the answer was, " Some may argue that China's leadership could bring new perspectives and approaches to global governance and could help address inequities and injustice in the world." So there are real human assumptions baked into that AI, not ones that I agree with, and we need to be cognizant of them, but it's often hard to see.

Harry Kemsley: Yeah. I think that's a great answer, and I love the example. By the way, who has not yet asked ChatGPT some questions? Okay, Claire. That's interesting. I have. Oh, Alison as well. Sean? Well, that's because you're a Luddite. You don't actually use technology, only the phone.

Sean Corbett: It would argue with me.

Harry Kemsley: So, Alison, the question was, is there a difference? I mean, I have lots of friends in that part of the world and they have a more ready approach to technology. They're much happier to adopt technology to do things is my impression, which is probably biased, but that's the impression I've been given. Is that your experience of working in the Far East versus the West in terms of the acceptance of technology, the interruptions and the intrusions of technology?

Alison Evans: I think what actually is interesting about thinking about foreign experiences and learning languages, et cetera, is that's quite difficult in a national security or government context. By definition, thinking about security clearances, et cetera, you want people who perhaps haven't spent that much time abroad or didn't grow up abroad the way that I did, in order to hold those clearances and not be perceived as a threat. And yet as we've just been talking about, it's humans and AI working together that can produce better outcomes. It's people who have those diverse experiences. I'd also say that we want to value that diversity of human experiences and people, and that's something that not just feeds into how we think about this in the national security space but also plays into open source providers in the private arena, because you can have a diversity of teams. So for example, when I was covering the impeachment of South Korean president Park Geun-hye, I was able to turn around to a fellow analyst and say, " Tell me about the impeachment of Brazilian president Rousseff." And in that very international team, you're able to have those contexts, that diversity of understanding and meaning, so that, similar to teaming with the AIs, you're also able to have a better outcome.

Harry Kemsley: Okay, that's fascinating. I love the example of turning to your colleague from Brazil to examine a situation going on in South Korea. That's a tremendous example of why diversity is helpful. I remember working with an intelligence group in which, and I didn't know this at the beginning, there was a person who used to be an underwriter for insurance companies. And as I started talking about something, the individual turned to me and said, " Well, let's look at the probabilities here," and did this incredible math right in front of me. It frightened me, a frightening amount of math in seconds. But in that process, I realized how much I hadn't even considered a range of possibilities that were immediately obvious to that individual because of his background and the experiences that he'd had in a completely different realm, the world of underwriting and insurance.

Speaker 1: Thanks for joining us this week on The World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you'll never miss an episode.

DESCRIPTION

We invited some of our most popular guests back to take us to the next level of what everyone needs to consider for their OSINT and why technology, ethics, culture and empathy are increasingly important.

Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


Emily Harding

Deputy Director and Senior Fellow, International Security Program, CSIS

Dr Claire Yorke

Marie Skłodowska-Curie Fellow

Amy Zegart

Stanford faculty, Senior Fellow at the Hoover Institution & FSI, Atlantic contributing writer

Alison Evans

Head of Tradecraft and Subscriber Services, Janes

Sean Corbett

AVM (ret’d) Sean Corbett CB MBE MA, RAF