In this podcast Harry Kemsley and Sean Corbett are joined by Amil Khan, the founder and CEO of Valent Projects, to delve into the implications of misinformation and disinformation for open-source intelligence. They identify the difference between misinformation and disinformation and how we can overcome these challenges to support open-source intelligence.
Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.
Harry Kemsley: Hello and welcome to this edition of World of Intelligence at Janes. As usual, your host, Harry Kemsley, hello, and my co-host, Sean. Hello, Sean.
Sean: Hello, Harry. It's good to be back and happy New Year.
Harry Kemsley: Yeah, happy New Year to you as well. So last year, Sean, we had a number of great topics, among which we discussed mis- and disinformation, a topic that came up frequently through a number of the episodes of the podcast. We had a great session with Dai Cook, in which we looked at it through a particular prism. As you might remember, Sean, we looked at it principally through video deepfakes and the technology going on around that. But I felt as though that needed a revisit. We talk about mis- and disinformation so frequently in these podcasts, I don't feel as though we've given it enough time, enough breadth in conversation. So I wanted to revisit it, and I'm absolutely delighted that we've brought in somebody who knows a great deal about this topic, Amil Khan. Hello, Amil.
Amil Khan: Hi, Harry.
Harry Kemsley: Amil Khan started his career as a Reuters foreign correspondent, followed by a stint at the BBC as an investigations reporter. Working in the early 2000s, and having studied Arabic and Farsi, he covered breaking news in Iraq, Lebanon, Palestine and Israel, while also conducting investigations and embeds with militias in Darfur, Chad, Libya and elsewhere. Sensing that the information landscape was changing, Amil left journalism and started consulting for governments, political groups, and civil society organizations. In the aftermath of the use of chemical weapons in Syria, Amil noticed that the online space was being manipulated, not just by false information, but by thousands of coordinated fake accounts. As most of the attention was being directed at the nature of the false information and not its dissemination, Amil set about investigating the misuse of social media platforms and wrote one of the first studies for the UK [inaudible] office on technical manipulation. After a couple of years at Chatham House, Amil set up Valent after realizing that the problem was only going to grow, and few organizations maintained the mix of skills (social media strategists, political analysts, and data engineers) needed to address the threat it posed. All right, let's get started then. So Amil, perhaps you can help the three of us, for the audience, define what we mean by misinformation, and then disinformation. And Sean, I'll come to you as well in a second in terms of any nuances the military or defense industry might have for the same. But Amil, can we start with what you understand by misinformation versus disinformation?
Amil Khan: In this space at the moment, the generally accepted definition really comes down to intent. Misinformation is sharing wrong information believing it to be true, in good faith essentially. Whereas disinformation is creating, sharing, or promoting false information knowing it to be false.
Harry Kemsley: So it's a deliberate act as opposed to an inadvertent one. Sean, anything to add to that?
Sean: Not really. I think the intent piece is absolutely right. You'll not be surprised to know that there are various different definitions; you can go anywhere and they're all different. But it's the intent piece. There are several nuances to that though, and I like the "in good faith" thing. For misinformation, people could just be ignorant, they could be repeating other people, or they could just be careless. But then there's that blurry area that I find, and again, without being political, as you know I sometimes am: what is spin? What is trying to put a view on something that might not be 100% true? So it's not quite disinformation, but it's not misinformation either. And that, I think, is starting to get into a dangerous area, where people talk about their own truth. Basically it means, I'm going to believe what I want to believe and I'll find a source for it somewhere. So, we might get into that a little bit.
Harry Kemsley: Yeah, and I think, Amil, we'd agree that a lot of people are now talking about the post-truth world. The idea that you can create any truth you want, and if you get enough people to copy and paste your truth into the various channels that are available to us, it eventually takes on a life of its own. But do you see what Sean's talking about as well, Amil, this blurring from... Not necessarily from misinformation to disinformation, but the two becoming less distinct. Do you see that as well?
Amil Khan: Yeah, there's definitely bleed-over from one to the other. And there are also other nuances where it starts getting complicated, which revolve around how deep you want to get into the idea of purposeful manipulation. So it's purposeful, but how is that purpose exercised? Which is where a lot of my work, and the company that I run, comes in. We actually tend to go with "manipulation" rather than "disinformation", because we look at cases where you're not actually saying something, but you're running armies of bots, for example, or something like that. Some people just say, "Well, it's all disinformation," which is fine. But I think, to be more specific, it's manipulation.
Harry Kemsley: Yeah. Now, Sean, I'll come to you in just a second. I remember seeing a while ago a documentary on TV that talked about how essentially the marketing activity, the advertising algorithms around various, particularly social media channels were driving echo chambers, were driving certain manipulations of what otherwise might have been good intent. Is that the thing you're talking about? The gaming of the system for specific purposes. Perhaps, initially for economic reasons and economic incentives. But in reality there is a more difficult problem coming out. Is that the kind of thing you're referring to, Amil?
Amil Khan: That thing on steroids. And I think it goes back to what you were saying about the bleed-over between misinformation and disinformation. Because, clearly, in the world of advertising and social media marketing, for example, if a content creator realizes that having a yellow background in their video is prioritized by YouTube's algorithm and will push their content higher up the rankings, then having a yellow background is not a huge deal. It's just quite smart, really. But of course there are other ways of manipulating the algorithm, including some which are totally not allowed under the terms of reference of the different platforms. So that's easy, because you can say, "Well, they're not allowed." But then there is again a gray area where you're not sure exactly what is allowed, and the use of certain phrases. For example, say you're a political actor and you have voters, and you say, "Hey guys, my video's going to drop. Could you like it?" Nothing particularly wrong with that. But setting up 100 fake accounts to do that is clearly wrong. And there are gray areas in between as well.
Harry Kemsley: Yeah, yeah. Well, we'll come back to how we might govern this problem perhaps a bit later on. But Sean, before we go on to the next topic, what did you want to add on that thought?
Sean: No, I think this is really important, to spend some time identifying what we mean, because it's prevalent in everything we do. Look at polling data, for example, which is notoriously inaccurate. And the reason it's inaccurate is that people being polled will sometimes say what they think is expected of them. I'll always come back to the Brexit thing: no one admits to supporting leaving the EU, and yet more than half the population did. No one admits to supporting Donald Trump, and yet almost half the population did there. So there's a nuance to that, and that's where you start to get, and Amil, you'll understand this better than me, into the algorithms. There's a great book, actually, called Everybody Lies, which is worth a read. It talks about how, if you interrogate the Google searches that people do rather than the polling data, what they say, you get a very, very different perspective. But what I was going to say, really, from a military perspective, and I know Amil, you've got experience of this as well, is what I would call information operations. Is that a subset of disinformation and misinformation? Well, it shouldn't be, because IO, as we call it, to be effective should have a lot of truth in it. It can't be just lies and it can't be wrong. But you can skew what you say, how much you say, and the context you give it in. And then you've got the political element, which is propaganda, which is a little bit more insidious, and it's literally, deliberately intended to influence populations or their way of thinking. So it is really quite a complex thing. But the reason I think this is so important to talk about is, as I said, it's prevalent in everyday life now. It doesn't matter what you're doing. And a good example is that right now, of course, all the apps are showing dozens and dozens of videos on how you get a shredded body for the new year.
And they're all contradictory.
Amil Khan: Oh, you're getting those too? It's not just me.
Sean: Yeah, I get them all the time. No, no-
Harry Kemsley: Just for the record and for the listeners, I don't get any of those. They've long since given up on my body.
Sean: So it's like, "Do cardio, don't do cardio. Eat carbohydrates, don't eat carbohydrates." Every one of them is different. And you could be really confused. Now, are they deliberately trying to misinform? Sorry, disinform. Well, clearly people are trying to sell their own apps or their own programs, whatever. But which is right? I've absolutely no idea. I'll just do what I normally do and not [inaudible] much.
Harry Kemsley: That's something I'd like to just pause on, if I may, Amil. Because, for the listeners out there that work in this arena, perhaps in defense intelligence, this is something they grapple with in what we've just described as information operations, perhaps. But for the other listeners who perhaps are not directly involved in this activity, there is this huge sense of, "So what do we do about it? What can I begin to understand about what I'm doing to myself?" This idea that I've created an echo chamber around myself. The algorithms we talk about, we seem to have little or no control over. What does a listener do about this technical manipulation? How do we recognize it, and how do we begin to do something about it?
Amil Khan: I think it starts, as with most problems of this nature, with awareness. Being aware of that echo chamber. Are you in an echo chamber? And if you are, what's the nature of that echo chamber? In a lot of the operations by hostile countries or dictatorships or whatever, a lot of the work that we do, we've noticed that the activity is not always as straightforward as trying to make a population vote or not vote, or riot, or something like that. It's often actually aimed at deceiving decision makers. So for example, in one case that we saw, the aim was to make a government deploy the army because they thought 100,000 people were turning up with weapons in a January 6th-type situation. When actually that wasn't true: it was only going to be 3,000 people, and they were unarmed civilians. But given the nature of that scenario, there's a very high likelihood that people would have died. Not a particularly disciplined army, not great command and control, all those issues; they would have started shooting at people, armed or not armed, 3,000 or 100,000. And then the actors behind it have their own talking point to run with. It was like Maoist insurgency stuff. So I would say, be aware of what that is, what you're seeing. A lot of the time, the actors being targeted don't have any decent capabilities, tools, or people on staff who can help them, and they're literally just scrolling their phones. So there's a very high chance they're going to be manipulated. Even in a cabinet... So a cabinet of, say, 10 people in government sitting around making decisions, all of them are just on their phones going, "Oh my God, look at this. Look what's happening."
Harry Kemsley: Yeah, yeah. So I think there's probably a conversation to be had, maybe not today, about the symptoms to look for: how do I know I'm in an echo chamber, for that awareness piece? But we'll come back to that perhaps another time. So Sean, one of the things that we've spoken about on almost every episode of this podcast over recent years has been the necessary reliance on good trade craft. Understanding what is best practice. Identifying what appears to be an outlier that needs to be looked at. Maybe it's an outlier because it's not true, it's not credible, or maybe it's an outlier because it's something we need to look at in more detail. The potential for creating open source intelligence standards, OSINT standards, is also something we've talked about. But I sense from the conversation we've had so far, and Amil's given us some really good examples of why this is so important, that the need for these open source intelligence standards, best practice, and awareness is something that we've really got to get right, is it not, Sean?
Sean: Yeah, it is. Absolutely. And I can't believe you mentioned the trade craft word before I did. But it's absolutely critical. I mean, we've talked about this before a little bit, the Wild West world of "open source intelligence", where you get everybody from well-meaning talking heads who've got unconscious bias to, and again, being slightly impolite, those who read the Times and then repeat it on whichever form of mainstream media they're on. But all the way up to, obviously, people that are doing it deliberately. The key for me, within the trade craft context, is making sure that you have reliable data that has been cross-referred. And there's a time element to that. The first, and you've heard me say this before as well, the first information you get on any event tends to be the wrong one. So you've got to validate it. And that means second and third level analysis. You can't just look at the reporting. The classic case is the Al-Ahli Hospital in Gaza, where it was attacked, and then very soon afterwards you just had to look at the video to see that it wasn't quite right. And then the truth unfolds. So you can't go on the first thing; you've got to correlate the information and weigh it as well. And we don't often talk about this. It's easier to weigh positively things like imagery, where you've got metadata in there, so you can work out quite quickly whether it's actually of the event in terms of time and place. And it's a big thing that's happening in Gaza right now: people are using quite stark imagery from other conflicts, like Syria, to support their hypothesis. And then you quite quickly have a look at the metadata and go, "Well, actually, this is from well before that happened." So that's easier. When you're talking about an eyewitness account, though, you've really got to understand what the motivation and the potential unconscious bias is, and cross-refer it to other eyewitnesses.
So, it's both a positive and a negative now that social media is so prevalent, because you can actually do that cross-referring as well. And then there's people like ourselves who do responsible open source intelligence. And it's got to be about objectivity. You've got to see what you see, as opposed to what you think, and only then make the analysis afterwards based on all the available information and intelligence you've got. And the problem with all of this, of course, is that it takes time. It always takes time to do that. And with the society we've got today, where everything is soundbites and needs to happen now, even if you get to the truth, it's probably too late. Everybody's moved on, and the people that are absorbing this anyway have got it in their heads that a certain event happened. And it doesn't matter that you send something out in more detail afterwards that says, "Right, actually, this didn't happen, or the nuance is that." So it's really complicated, but you're right, it is about responsible trade craft, where you go through a standardized methodological process so that you can show your working. It's absolutely critical in our world.
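Sean's point about weighing imagery by its embedded metadata can be sketched as a very simple check. The function below is purely illustrative, not a Janes method: it assumes the EXIF capture timestamp has already been extracted by a separate tool, and the two-day tolerance is an arbitrary, hypothetical choice.

```python
from datetime import datetime, date

def metadata_matches_claim(exif_datetime: str, claimed_event_date: date,
                           tolerance_days: int = 2) -> bool:
    """Check whether an image's capture time is consistent with the event
    it allegedly shows.

    exif_datetime: EXIF-style timestamp string, e.g. "2023:10:17 19:05:00"
    claimed_event_date: the date of the event the image is claimed to depict
    """
    captured = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S")
    # An image captured years earlier (e.g. recycled from another conflict,
    # as in the Syria-to-Gaza examples above) fails this check immediately.
    return abs((captured.date() - claimed_event_date).days) <= tolerance_days
```

Note the caveat implicit in Sean's remarks: metadata can itself be stripped or forged, so a check like this is one weighting signal among several, never proof on its own.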
Harry Kemsley: So Amil, let me flip that question to you, but in a slightly different way. Forming a credible baseline, understanding what looks to be real, what makes sense, what's within a common sense, feasible picture. These are things that we might or might not be fairly good at in the government space. We may or may not be very good at that in the commercial space. But again, for the casual observer who frequently is flicking through their phone in nanoseconds, how do we begin to address the reality of misinformation or disinformation in that environment? How do we begin to build a credible baseline? That to me, seems to be ultimately the problem we're trying to solve here. It's not just that there is misinformation, disinformation. It's what do we do about it?
Amil Khan: Yeah, I get asked this question a lot, just by friends and family. When I think about how I look at it: early on in my career I was a journalist, and there was a period of training that we went through, at least. Some of that has stuck with me, just in terms of how I think about information, and I still apply it. And it starts with noticing when something really appeals to you. If you're getting information that makes you go, "Yeah, I get that, totally, I knew it," then you need to take a step back, at that point, straight away. If I had to do one of those media critical analysis courses that you see people offering, mine would be that: if a bit of information makes you feel good, then think about it twice. That would be it. I would walk out of the room after that. "That's all you need. Do that to begin with. That's your baseline." And then there are specific things. I saw one recently, I think it was COVID-related, and you get the classic thing: it said something about George Soros, who owns the WHO or something. Have a look at those details, those classic things where you just know it's not true. BlackRock has shares in the British government; I mean, governments tend not to give out shares. If anything just pops out at you like, "Oh, well, that's a bit odd," then yeah, the red flag should be up.
Harry Kemsley: Yeah, I like that. It's a very, very nice idea that you just look at yourself, reflect, stand back and watch yourself and your reaction to things. I like that a lot. Sean, we'll have a conversation about that offline. I think there's probably another podcast in there about practical advice to an audience on how to deal with these issues.
Sean: Almost certainly.
Harry Kemsley: Let's move this conversation on to the next, but related, topic. So Janes works in an open source intelligence environment; that's what we've been doing for many, many years. And, like your organization, Amil, we are trying very, very hard to be objective. We're trying very hard to be true to the open sources that we can access, that in theory anybody could access, by the way; that's what makes them open. And we derive intelligence insights from those. What I've noticed is how frequently I see the phrase, or the acronym, OSINT, open source intelligence, assigned to almost everything, whether it's actually just raw data or not. But I think there's a slightly more nefarious use of that OSINT term, which is that it's often used as a blanket across things that are not open source. You, I know, have experience of that. So I'm very keen to understand your perspective on this, frankly, slightly more nefarious use of this blanket term, OSINT. But I think it's also, Sean, I'll come to you with it in terms of how that then gets inside the propaganda loop, inside the disinformation loops. So first of all, this use of OSINT, let's just start there, Amil, from your experience. How often are you seeing the word OSINT used for all kinds of things?
Amil Khan: All the time, basically. I think we live in a world now where that word, OSINT, and the phrase "open source intelligence" have earned a cachet, for a lot of good reasons. But the impact that's had on regular people is they think, "Oh, it's a thing. I don't really know what it is, but it's a thing and it's a credible thing." In the way that perhaps 20, 30, 40 years ago, people would have said, "I saw it on the BBC," and it was just a byline for, "Oh, it's from a trustworthy source," or, "I read it in the Times," or whatever. Now I think that's been replaced, and friends and former colleagues who are at the BBC will hate me saying this, but we are in a world where OSINT is probably a more trustworthy phrase than the BBC. Which then, of course, opens it up for manipulation in itself. And the specific manipulation that we come across arises because it's a hazy phrase. I mean, it's quite specific, but different people understand it in different ways. So information that is, by its nature, not open source (information taken from hacking, from physical surveillance perhaps, or just false information) will be presented to the audience as, "Oh, OSINT analysis showed us," or its close cousin, "metadata said blah, blah, blah." And that's the most common thing that I see. We have had a few cases in the UK where people were hacked and that information was used in the media in a disinformation scenario, and OSINT and metadata were all over it. I mean, in every paragraph it was mentioned twice.
Harry Kemsley: Yeah. Not to trivialize it at all, but I do recall a time in previous years where I was presented with operational analysis that had four- or five-decimal-place percentages of certainty about certain things. It's very hard not to feel as though that's somehow been analyzed to an incredible degree of accuracy, one that justifies three or four or five decimal places. By the way, side note: as I dug into it, it turned out it was completely made up and therefore was not analysis at all. But that's another story for another day. Sean, your experience of information operations, where we are seeking to convey a certain message to a defined audience, is, I know, quite considerable. How often do you think, these days, that has become somewhat overtaken, to Amil's point, by this OSINT, this open source intelligence? Do you feel as though it's becoming a problem, this OSINT term?
Sean: That's almost a whole podcast in itself, actually.
Harry Kemsley: Well, let's get started with the first answer then.
Sean: Yeah, indeed. But yeah, OSINT does. As Amil said, as soon as you say, "Oh yeah, I do open source intelligence," everybody goes, "All right, you have credibility there. You know what you're talking about, et cetera." And you just need to say that. But it doesn't mean that it's true. Getting onto the more nefarious side, as much as artificial intelligence is a challenge for us in this area, and Amil, you mentioned bots and other capabilities as well, I think it can also be a help in terms of identifying where stuff came from, whether it's actually true or not. But also getting into the deeper stuff as to where the source of some of this comes from. Without getting into any detail, you've got cyber command and organizations like that whose job it is to find out where these things come from and why it's being done. But turning to the point of your question, I just think with information operations we have to be very careful to be as close to the truth as we possibly can be. Because otherwise you lose credibility, and once you've lost credibility, that's it. So information operations, to me, is really about enhancing something that's true, in terms of what it is we want people to know, or how we influence them with stuff that is true. And you'll see, just in terms of the daily slides that are produced by the MOD on the Ukrainian thing, that nothing that goes out there is not 100% true. But there's only [inaudible] three bullet points, and they choose which three bullet points they want to actually get out there. So that's more what I would call information operations, because it's just enhancing what you want people to hear out of the truth, and the truth is really important.
Just getting back to some of the details of what we were talking about before, though: being responsible, and I use that word advisedly. As an organization, and I know we've talked about this, we do it a lot actually, we've got strict legal and ethical constraints on how we work through things. It's not just about what the law allows; it's about whether what we do is right. So we would never even spin, I would suggest, and I'm always very careful of this, any of our analysis. It is what it is. You've got to give that objectivity. If you haven't got a high level of confidence, you give assumptions and you give alternative analyses. So there's a lot in that. There's something, just within the context of what you're saying, that we might explore if you've got time: what role does the state have in controlling information? This is a really big debate, the Metas of this world, the debate that's happening over whether you can control it. Because there be dragons. I'm not sure if we want to get into this now, but very happy to if you do.
Harry Kemsley: Actually, I do want to turn that to you, Amil, for an insight on this topic. Sean spoke quite eloquently there about information operations. He alluded to the fact that it had certain controls and so on, and that we were doing this to the best of our ability for the right reasons. But the flip side of that coin is that one man's propaganda is another man's information operation, and vice versa. So, is there a place for organizations like Valent? Is there a place for the commercial sector to be policing some of this? To be trying to give, frankly, ground truth where governments may have lost demonstrable legitimacy, may have lost good faith, where people don't believe them anymore. Is there a place for the commercial sector to step in and fill that void and give ground truth? Or are we all tarred with the same brush? Nobody believes anybody because everybody's lying.
Amil Khan: I think that, whether for good or bad, private organizations, big companies, and non-governmental organizations are now part of the information environment, and will be seen in a particular context. So it's context-specific. Say something's happening and a company's involved; to take something that's been through the news recently, there was a plane fire in Tokyo. Of course, then, the airline is going to be an information source, and that airline has to be able to put out its information well and competently, and is going to be judged on that. But next week maybe we'll go back to normal and nobody will want to know about that airline again. So I think for companies, if they're doing something that is in the information space, or could be, they have to be good at putting their information out truthfully, competently, clearly; that goes without saying. But at the same time, there will be people who will try and game that for economic, financial, or political reasons. Where organizations that are new media players, new online outlets doing investigations, OSINT investigations, play a really important role is, if they do it well, in saying, "Yeah, we are now the filter of this." Because people still want that filter. I don't believe that we've given up on having that filter. There's so much information; we as people want to go somewhere where we can get it. Then we get into the problem of it getting gamed. What an organization like ours does is very specific, because disinformation is such a broad area. We look at technical manipulation. So we get asked all the time, "Oh, you guys work in disinformation? Whose disinformation? Whose truth, and whose untruth?" Totally legitimate question. For us, though, the question is: is there technical manipulation going on? And if there's technical manipulation going on, even if the basic information is truthful, then that's still manipulation, and that's still something we'll flag.
That's still something that, depending on what project we're working on, we'll either publish, or take to a regulator and say, "Look, this is happening," or go back to the platforms and say, "Guys, look, this thing is happening." And I think there is an evolution now in the world of dealing with disinformation, because it's evolving so quickly and becoming such a big problem that people are focusing on specific bits of it, which I think is healthy. Instead of saying, "We do disinformation writ large," which just invites, "What do you do in that? Do you fact-check?" So our bit is the technical side, and I think that's been overlooked quite a lot. I still find that when we talk to people, they don't know it's a thing.
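One common form of the technical manipulation Amil describes, armies of accounts amplifying the same message in lockstep, can be surfaced with a simple timing heuristic. The sketch below is purely illustrative and is not Valent's actual methodology; the data shape, window, and threshold are hypothetical choices. The intuition is that organic sharing is usually spread out in time, while bot networks tend to post near-identical content in tight bursts.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated_texts(posts, window=timedelta(seconds=60), min_accounts=5):
    """Flag texts posted by many distinct accounts within a short time window.

    posts: iterable of (account_id, timestamp, text) tuples.
    Returns the set of texts that look coordinated under this heuristic.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    flagged = set()
    for text, events in by_text.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            # Distinct accounts posting this exact text within `window` of `start`.
            burst = {acc for ts, acc in events[i:] if ts - start <= window}
            if len(burst) >= min_accounts:
                flagged.add(text)
                break
    return flagged
```

In practice a signal like this would be combined with account age, content similarity, and network features; on its own it produces false positives (genuine breaking news also travels in bursts), which is exactly why Amil's team treats technical manipulation as a specialist discipline.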
Harry Kemsley: Yeah. I wonder, Sean, if there is scope, actually, to get to a place where you read an article and in the bottom right corner, or top right corner, just like we have in intelligence briefs, there is a reliability-of-source marker. How much analysis have we done? Is there space on a screen for a little box that says, "High probability of technical manipulation"? Something that allows the audience to assess what they're reading without having to understand what technical manipulation means; they just know it's not a good thing. Is there scope for that, do you think, Sean? I know that we do that in a military context.
Sean: Yeah, I think it's really important that we do it. I mean, to an extent some of the commercial companies do it: "This has not been verified by independent means." But the problem, again, and I talk about the short attention span people have these days, is that if it's up there all the time, people will just ignore it. Now, within a government context, we do that anyway because, as you said, we already say, "What is your level of confidence in this?" Et cetera, et cetera. But even then, I've got to say that when I'm briefing, or have briefed, very, very senior military officers, they'll either believe you or they won't. Based sometimes on generally good stuff, but sometimes, back to the unconscious bias thing. So really, I think it's important that we do it, just to say that we've gone through that process and to give that level of assurance, back to that assurance word. But how effective it would be, I don't know.
Harry Kemsley: Yeah, I don't know either. All right, because time has started to evaporate on us, let me give you a moment to think about this question, Amil. If you wanted the audience to walk away from this podcast with one thing that you'd want them to remember, what would it be? I'll go to Sean first, because I always give Sean no time at all to think about it, and I'll come to you in a second. So Sean, what would be your one takeaway from this session?
Sean: For me, I think this is really real, and it's now, and it's at scale. Unless we take this whole disinformation, misinformation nuance seriously, then particularly in the West anyway, we could get into a really serious situation, which you could argue that we are in now, because society is polarized enough. And all you're going to do is polarize it even more. And you name whatever subject you want, whether it's your views on COVID, on politics, on... You name it. Right now, you can go to wherever you want to, to support what your worldview is, the echo chambers, et cetera, et cetera. And so somehow, we've got to grasp it and counter it, as we've been talking about. Which is not going to be an easy thing to do. So, this is one area that I'm really worried about.
Harry Kemsley: Thank you, Sean. Amil?
Amil Khan: I would want people to realize the nature of the threat that we are living with. Which is that information being manipulated undermines every other facet of our lives. And that is political, as Sean was saying, and also economic, also social, also everything. Because actually the basis of our systems is predicated on the idea that we get decent information and make decisions. We're an empowered public, we vote, stuff like that. And even if you live in a country where you don't vote, you have some sort of political voice one way or another. So, if that's the case, and it is the case, then us as individuals, what's our responsibility in dealing with this? And I think you have to, as an individual, take individual responsibility for the information that you consume and the way that you analyze that information. And I think that should just be baked into all of our psyches going forward. We can't leave it to others and just be like, "Oh, well, it's this organization or this paper or this TV station or this influencer that I really like listening to." You should always have a filter that says, "Why are they saying that? What are the specific points they're making?" And often with the worst, or even the worst stroke most effective, forms of manipulation, when you look at the content, it's actually very easy to pick apart. Whether it's pictures that are not meshing right, those old AI things with six fingers or whatever, or just the basic things they're saying don't make sense. But as human beings, we just skip over it because of that emotional sense of, "Oh, I like it. So I'm not going to focus on the details." I think we should always focus on the details.
Harry Kemsley: Yeah, cognitive dissonance. So for me, the one takeaway I've got from this, which I would want to underscore for the audience, is that moment a few minutes ago, Amil, when you said, "Look at yourself." If you are reading something and you think, "Yeah, I totally agree with that," step back and ask yourself, "Well, hang on a sec. Are you just creating an echo chamber?" I really, really like that. I think there's a very, very tangible takeaway for the audience. Watch yourselves. Watch what you react to and how you react to it. Because there's probably a clue in there about your own subconscious bias, maybe? Or maybe just that you've been persuaded by some technical manipulation that this is the truth that you should be following, ignoring everything else. So, thank you. That was a really good insight for me. Well, as ever, I am left with more questions than I started with. But that's the nature of a great conversation, I think. And it gives us opportunities to come back and do this again another time. So Amil, thank you very, very much indeed for your expertise and your commentary. Really, really appreciate your time at this early part of the new year. Sean, as always, thank you for your contribution as well. And for the listeners, as we've said before, if you have any questions, any points you want to raise, please let us know. And if there are any topics we haven't covered that you'd like us to cover, let us know that too. Amil, thank you.
Amil Khan: Thank you.
Harry Kemsley: Thank you, Sean.
Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you'll never miss an episode.