Knowledge to understanding and how to get there - part two
Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.
Harry Kemsley: Hello and welcome back. For those of you who listened to part one of this podcast, you'll know that we're about to pick up the second part now. Thank you for listening. I'm going to pivot us back into the question I was coming to, Mike, which is this data tribalism concept. So back to you on data tribalism.
Mike: Yeah, so one of the unfortunate artifacts of humans is that they're incredibly tribal, right? And anybody who's worked in a bureaucracy knows that. And so one of the issues with data tribalism is, if you have a large enterprise and you have lots of different types of data in there, unit A has this whatever type of data and they use it all the time and they make decisions based on video graphics or video data or image data. And then there's another element in the enterprise that uses financial data. And when those two want to integrate that data, that's where the value is: if we had manufacturing data and market data integrated in the same environment, then we could really cook with gas. We could sell stuff, we could sell it fast and we could sell it cheap. But unfortunately, just in the human world that we live in, our tribalism gets in the way. And when you start talking about large bureaucracies like the Department of Defense or a large intelligence agency or what have you, this tribalism becomes a real drawback, because now, certainly, you have classification levels, and things are appropriately protected from wandering eyes. In many cases, the data that you need to really understand an environment is not the data that you're familiar with; it may be data that you don't even know about. And so in a bureaucracy, or a bureaucratically built data environment where there are shields and doors and locks and keys, you won't gain the benefits of that integrated force data. And I think the magic of data applied in a military context is that you have the entire force with a unity of command and a unity of understanding, and you always have sensors looking for things to change, but you have that idea that you want to prosecute. And today the real challenge is that humans won't do that. The air guy will walk into the room and the ship guy will walk into the room and a ground guy will walk into the room and say, "Well, yeah, we could do that, but we're not going to do that because it would be better for us to do this." And the air guy says, "Yeah, but I need to have three jets on the ramp and so I'm not going to be able to do this, or I don't know if I'm going to be able to get there. And that target, yeah, we looked at that yesterday, we didn't like it." When you're dealing in the human environment, you can kind of negotiate your way to an operation. When you're dealing in a digital environment where somebody just says, "No, actually you are going to have to make that decision uninformed of what I know," then you're setting yourself up for failure, obviously, at worst. At best, with data tribalism, if the Air Force has a sensor and it's really sensitive and they don't want other people to have that data, well, they won't let you use that data. Therefore, your integrated AI solution will not have that data. If the Army has data about the materiel readiness of their ground forces, they don't want people to know that. Even inside the Pentagon, where everybody's on the same team, presumably, you still have these tribal boundaries that will not allow it; little bureaucrats all over the place will say, "Nah, I really don't want you to use that data. I'm not comfortable with that." And so then you've got to spend another month going up the ladder to try to find somebody who actually gets what we're trying to accomplish and then put that into practice. I used to get frustrated by that, but again, human tribalism is rife in our society.
It's what we are as humans, right? And so you can't squeeze that out of somebody, nor can you logic somebody out of something that they feel so strongly about. And so, in those human environments, why is AI so hard? It's because of those humans. That's what we're wrestling with. We're trying to build a logical, integrated environment that's enabled by machines. Humans are not logical, humans are not integrated, humans are not machines, and so there's a natural tussle there that you have to work your way through. And the best way to do that is to build trust. So you have to be a relationship builder before you need the relationship. That is so critical. And it's true in business as well as it is in artificial intelligence or combat. You have to lean out and say, look, I want to extend a hand here to make this problem that we all have an element of... That's what leadership is all about. And so for the leaders that are listening here, thinking, yeah, I'm not sure about AI: okay, well, get a good set of humans to help you understand it. When you can balance the way a machine thinks and the way a human thinks, then you are going to be successful in this environment as it evolves, because you need to be good at both.
Harry Kemsley: Yeah. One of the things you've said there, Mike, that really struck me was the fact that humans are not logical, they don't like to be integrated, they're very tribal. And I think that is a real impediment to the idea that the AI would collect all of the available data from all of the necessary sources to come up with its view about what it needs to help the commander understand. I totally, totally agree with that. What that probably does in the meantime, while we're trying to get everyone to the same level of trust, is it starts to push the decision-making higher and higher and higher up the system, to the point where the only place where everything's available, because they have sight of everything in theory, will be the highest level of command, the furthest away from the tactical environment. And to use my war story from earlier, that can be very, very tricky. You should be, as a very famous British general said in a very, very good piece he did some time ago, in command and out of control. I would love to see the time when we trusted ourselves, let alone the AI, well enough to share the information we should share. We've done it in our history. We have had times in our history when somebody had a piece of intelligence that they weren't going to share with you but that meant you had to do something, and that meant you had to just trust that person, either because of the rank on their shoulder or because you actually did know them well enough. And I guess that brings us right back, does it not, to that need for constant trials, constant exercises, constant implementation practice to get to a place where we do understand it. Without it, we're never going to overcome the tribalism or indeed integrate the machines. Sean, just before we step off this, we haven't used the T word at all yet in this conversation. Tradecraft, Mike, is what I'm referring to there. I don't think we've ever had a podcast where we didn't. Tradecraft, fundamentally, in Harry's words, is a combination of three things: best practice in terms of process; great judgment, driven by experience and the understanding you're getting from the environment you're in; and then, increasingly these days, a good grasp of how to enable the first two with technology. So great process, great judgment, great technology, the combination of those three things is what we aim for in terms of great tradecraft. What I think I'm hearing, Sean, after that intro is that tradecraft is still tradecraft. You still have to have it, but increasingly now we need to understand more about the third part, because the third part, the technology, can help us to some degree with the first and the second parts. If I've got great AI working for me and I trust it, then I can rely more on it than I needed to before, when I might have needed to have great process and great judgment. I can democratize my process to some degree because the AI is enabling me to do that. How comfortable are you feeling with that statement?
Sean: Pretty comfortable, actually. This is the exciting bit for me, because if we can start to trust the AI (how many times have we used that word, trust, in this podcast, and many others actually), and it's accepted that we have a data lake, from whatever source, that is available to all, then we can focus on what I think the important parts of the tradecraft are. You've heard me say this before: the so what and the what if. But really doing that analysis means checking your assumptions, making sure your working out is right, cross-referring to what the exam question actually is. Am I answering the right question, as opposed to just answering what I know about something? That releases the analyst, who right now spends most of their time, as far as I can tell, still working Excel spreadsheets and doing what I've just done, going all over the internet on various different sites, et cetera, trying to find stuff. Because if it's all there and you can trust it enough to say, right, okay, I've got my database, now what does it mean? Then the nuances of tradecraft become even more, I was going to say easy, it's never easy, but more accessible, and you have more time to do them. We've talked about saving time before, which is a really key element.
Harry Kemsley: Isn't that the key point though, that the technology is really just giving you back time to spend on the analysis itself? When you're sitting there running the G2 section, Mike, and you've got all this data swimming around you, you don't know what it's trying to tell you, it hasn't been organized, collated, summarized, and it's just a flood of data in front of you. Don't you spend the next N hours just trying to sift through it, trying to work out what the hell it's telling you? When that's what the machine does for you in the first instance, it says, "Mike, this is what I think this data means. This is what I'm starting to see as a pattern." And then you can step from that straight into the next part of your process. Isn't that what the machines are doing for you? It's accelerating, enabling.
Mike: It is. I think it's more than that though, honestly, because the machines are suggesting. So I do think Sean's comment about tradecraft is really important. What does tradecraft become? And it's going to be different than what it was. If the core data environment is sort of a given and shared, then the human analyst doesn't have to worry about that anymore, because they're enabled by that core data environment, and that core data environment can be very dynamic if you're hoovering up news services and things from other languages and all these other things, and machines get really good at helping you understand, ah, we've seen this before, remember? That kind of stuff. And machines can do that stuff now. Your tradecraft becomes much more about the data curation, because the machines can do a lot of the analytical synthesis at a scale that humans, I don't think, will ever be able to match, right? I mean, you only have two eyes, you can only look at 10 screens at once. So you're going to have to have machines to do some of that stuff. We can't be afraid of that. So I think that's really important, because that bottom layer of understanding and filtering data is kind of table stakes again. You have to have that environment, and if there's a bias in this Taiwanese report or whatever, then you have to know that, and your algorithms need to know that too. Like, yeah, you know what, this source, whatever that is, has had a past of this kind of reliability or unreliability. All of that's possible in a machine environment too. I think one of the really important points here with the data environment is that it is an augment to what you have today already. People think, "Well, I'm not sure if I can trust this." Well, can you trust Skippy, who's reading that article downstairs in the basement of the intelligence agency? Can you? Does that guy know? Does he know the environment he's dealing with? I think that certainly from a global perspective, if you are reading things across the globe, you have to understand context across the globe. If you read something that happens in Bolivia, it's different than something that happens in the Czech Republic. And so the cultural nuances that bias data are things that can readily be handled by a machine, because a machine can tell you that. So I think that's really important. One of the most important points, once again, is that you go back to the human relationships, but you also have to go back to how you build the machine environment, because I think at very senior leadership levels, they think, "Oh, well, we'll just bring some tech bros in and they'll code up this decision process we have and we'll be working by tomorrow afternoon." And you're like, oh, you poor child, my sweet summer child, you have no idea of the complexity of this ecosystem. And I saw this all the time when I was the JIG director: oh, well, these tech bros, here they are, see their parachutes, they're coming in, they're going to have this all done by this afternoon. And dude, they know nothing about warfare, they know nothing about operations, they know nothing about the restraints and the constraints. So it is so important, when you're building artificial intelligence environments, that real operators or real analysts are part of that conversation. They have to be the core of it.
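Mike's point about algorithms knowing a source's track record can be sketched in a few lines of code. The snippet below is a minimal, illustrative sketch only, not a description of any system discussed on the podcast: the SourceProfile structure, its field names, and the smoothing rule are assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SourceProfile:
    """Hypothetical running record of how a source's past reporting has held up."""
    name: str
    confirmed: int = 0  # past reports later corroborated
    refuted: int = 0    # past reports later contradicted

    def reliability(self) -> float:
        # Laplace-smoothed estimate so a brand-new source starts at 0.5, not 0 or 1
        return (self.confirmed + 1) / (self.confirmed + self.refuted + 2)

def weight_report(content_score: float, source: SourceProfile) -> float:
    """Discount an analytic score for a claim by the historical reliability of its source."""
    return content_score * source.reliability()

# Usage: the same claim, scored 0.9 on content alone, lands very differently
# depending on who is reporting it.
shaky = SourceProfile("open-press outlet A", confirmed=3, refuted=7)
solid = SourceProfile("established feed B", confirmed=40, refuted=2)
print(round(weight_report(0.9, shaky), 2))  # 0.3
print(round(weight_report(0.9, solid), 2))  # 0.84
```

The point is not the arithmetic but the plumbing: once source history is recorded in a machine-readable form, the discounting Mike describes can be applied consistently across every incoming report, which no single analyst can do at scale.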
The tech bros can parachute in, but the tech bros have to ask, "Hey, Marine, why are you doing that and why are you doing it that way?" And when you do that and you start to expand that, you build it from the bottom up. The tech bros can help build it. They're great, they make a lot of money, so they want to do this, but it's so important that functional expertise is the primary capability that we're exploiting. Functional expertise: hey, you're a pilot, you know something about flying, right? Let's talk about that and let's get the machine to understand that environment. And just to extend that a little bit, I mean the machines are so capable now. Here, let me give you a great example. So I'm a Marine, right? I don't know what an accent wall is, I don't know how to decorate a house, I don't know any of that stuff, but my wife said, "Hey, I want an accent wall." So I said, "Okay, sure." So I whipped out an LLM and I took a picture and I said, "Hey, what do I do here?" And the machine, now think about this, the machine's looking at this and saying, "Well, given that pillow there and that picture on the wall, you probably want to lean toward this and you want to have this kind of texture, and if you add this chair rail, it'll match that picture or that stained-glass window or whatever." Holy cow, machines can do all of that stuff now, way better than this human anyway, and so let's take advantage of that, right? Let's build the algorithms that say, yeah, in this environment, in this country, that's happened three times and each time as a response to this... It's the same flow, right? It's the same muscle movement, in the image environment, in the language environment. Can you actually code that in a way that makes sense? Can you curate that over the long term, because things change? And how do you do that, right? That is so critical.
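Mike's "in this country, that's happened three times and each time as a response to this" example amounts to matching a new observation against a store of prior events. The sketch below illustrates that idea only; the Event fields, the toy history, and the exact-match rule are assumptions for the example, not a description of any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Event:
    country: str
    event_type: str   # e.g. "force movement", "exercise", "missile test"
    trigger: str      # what was observed immediately beforehand
    outcome: str      # what followed

# Toy, invented history purely for illustration.
HISTORY = [
    Event("countryX", "force movement", "sanctions announcement", "border exercise"),
    Event("countryX", "force movement", "sanctions announcement", "border exercise"),
    Event("countryX", "force movement", "election", "no follow-on activity"),
]

def precedents(country: str, event_type: str, trigger: str) -> list[Event]:
    """Return prior events matching the new observation on country, type and trigger."""
    return [e for e in HISTORY
            if e.country == country
            and e.event_type == event_type
            and e.trigger == trigger]

# Usage: a new force movement is observed shortly after a sanctions announcement.
matches = precedents("countryX", "force movement", "sanctions announcement")
print(f"{len(matches)} precedents, prior outcomes: {[m.outcome for m in matches]}")
```

A real system would match on far richer features and fuzzier similarity, but the curation question Mike raises is the same: someone has to decide what counts as "the same flow" and keep that store current as things change.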
Harry Kemsley: Well, I'm going to come to you in just a second, Sean. I want to start bringing this human element to a close to conclude this podcast, but I have to ask, Mike, when you presented your plans for the accent wall to your wife, did she declare who are you and what have you done with my husband?
Mike: Yeah, exactly. Like I said, an accent wall? I don't know, what does that mean?
Harry Kemsley: By the way, shortly after recording this podcast, I'm going to go and find out what an accent wall is, probably by checking with an LLM somewhere. Sean-
Mike: Absolutely. Use an LLM and you'll stay out of trouble.
Harry Kemsley: Yeah, it'll give me a good answer, no doubt. Let's start bringing this together then at the back end. I said at the beginning of the podcast, we would talk about how this all starts to come to an important crossroads. If we're going to get inside the decision-making cycle of our adversary, it almost always means we're going to make decisions better and faster than they do. And the implication, therefore, is accuracy and speed. Well, doesn't that run against the equally important, in my opinion, need to build up wisdom, judgment, and understanding? And those, in the human environment anyway, take time. So what I'm looking for here, Sean, is your initial thoughts on how we balance this. How do we balance this need for, in quotes, instant situational learning or understanding against the need to build up wisdom and judgment? How do we do that in the modern era?
Sean: This is the one that... Sorry.
Harry Kemsley: I know before you even start to answer, that's an impossible question to answer.
Sean: Yeah, no, I appreciate that.
Harry Kemsley: That's where the two things come together for me in this conversation.
Sean: It is, and it's a really difficult one, particularly if you take it in the strategic context. The more information somebody has doesn't necessarily mean the more wisdom they have, but what it does mean is the more they want to make decisions. We've already seen a world in the security environment, and as you know, I'm a targeteer by background, where we would choose a target against the designated intent, the aim that we were trying to achieve; but it was more than that, okay, this is what we're trying to achieve, therefore derive targets from that. We'd have rules of engagement, we'd have a targeting directive, it was all there. So we knew about legality, proportionality, et cetera, et cetera. I'd have a lawyer literally sat next to me in the targeting board as well as all my analysts, and yet on 99% of the occasions, I would still have to send a target all the way up to the very senior levels in Whitehall to ask, "Can we hit this target?" Absolutely ridiculous. Now, I'd like to think that things have developed significantly since then, although I'm not entirely convinced, but at what stage does that trust say, right, no, go for it. You've got the guidance, you've got the authority, now just go for it. And this is in all sorts of areas, so there's a cultural issue, but there's also a legal issue. In the West, we are so constrained by our own legalities. No, that's a very bad way of saying it. Of course we should be constrained by legalities, but we should understand them well enough to know the difference between risk, proportionality, all these other things, so we can just do things. We don't, and we're struggling to get there. You look at the adversary though, which is really the important thing, and they don't care about that. You look at the way that large parts of Ukraine are being rubblized by Russia, and the way that China are acting in certain areas; they don't have the same understanding or the same care that we take. Now, the problem with that, regardless of whether you use AI effectively or not, is that you are getting outside of your own OODA loop, if there is such a thing, and the enemy can react first. So you might have the best information earlier, but if you're not prepared to act on it and trust it, then you're in a bad place. That's the concern that I have right now: I don't think we're there yet.
Harry Kemsley: Right, so this decision-making moment, let me just give you a counterpoint. I remember a situation where, looking at synthetic aperture radar images, which to my eye looked like Rorschach ink blots, I could have seen a pig, a cat, a dog with a hat on. I didn't know what I was looking at. The decision about whether this was a legitimate target, ultimately, in my opinion on that day, came down to the young tech sergeant who was staring at it and saying, "Yep, that's definitely the target we need." I couldn't interpret the image at all. And by the way, even if that image had been sent up to Whitehall or even beyond, it would not have been understood unless they were SAR imagery analysts. And so the counterpoint would be: where does the decision actually get made? Sometimes it can be at an ultra-tactical position or it can be at an ultra-strategic one. But isn't that, Mike, where the balance needs to be struck for this mind on a pedestal? They now understand what they need to understand, they can make a decision that's going to be decisive and effective, getting inside the OODA loop of the adversary, but they just can't take it. They just can't take that decision.
Mike: Yeah, exactly. Exactly. In a human-machine team, humans and machines both have their role, and you cannot get by with only one of those, right? I mean, humans can do a lot by themselves, and machines can do a lot too, but what we're talking about here is now optimizing that relationship between humans and machines. And so you have to think of it that way. If you look at artificial intelligence at arm's length and say, "Well, I heard that it was bad, or I heard that it sometimes misspelled a word," or what have you, if that's where you're stuck, then you need to unstick yourself. You need to keep learning until you really do understand the advantages and risks of artificial intelligence solutions and the right way to interact with those systems and how you interact with the outcomes. Understand the way to do it, understand the risks that you're taking, understand the nuances of your application environment so that you know if something's wrong or it's not wrong. That is really important, and I think more and more people are on that journey and they're doing well. The fact that you can operate a large language model on your phone, wherever you are, seven days a week, that I think has really helped. Because people are starting to see the accent wall use cases, or the which car to buy use cases, or all of the other things that are very mundane and very practical, but you need help with those decisions if you want to optimize them. Optimizing decisions is not only for military environments, it's just that they're really important in those environments because the consequences are so grave.
Harry Kemsley: But it's from those non-critical environments and the use of AI and large language models that people are starting to become more comfortable; I think, to your point earlier, that trust and understanding of the strengths and weaknesses is growing. And I think, by the way, Sean, you said earlier we've used the word trust a great deal. We've also used the word understanding in this conversation a great deal as well. The two are very closely linked, are they not? In order to trust something, you have to get to a degree of understanding, and that understanding I think is growing by the fact that, as you say, I can walk around with my telephone in my pocket, my cellphone, and I can actually punch a few keys and get an incredible answer on a very complicated topic very fast by using a large language model, which is helping me understand these strengths and weaknesses. Gents, I'm conscious that we're an hour in and we could probably spend the next three hours going through the next couple of topics, but I'm going to pull stumps, a reference to the game of cricket in the UK, when the umpire says stop. We'll pause there just because I think this is a conversation that we should take further, and we don't have time today to do that. So let me pause the conversation by first of all saying, Mike, thank you for giving up your time and sharing your experience and expertise in this conversation. Very, very grateful for all of that. But before I let you go, one further thing, which we always do at the end of the podcast, and that is the one takeaway. If you wanted the audience to know one thing out of this whole conversation thus far, what would it be? I know that Sean's probably been scribbling down a couple of ideas on the way through, and Sean, I'll let you go before me today, which is unusual because I normally let you go last. Mike, if you had one thing you wanted the audience to know from this conversation, what would it be?
Mike: If there was one thing that I would really wish for everybody, it is that you take the time to understand the new tools that are available to you, because honestly, if you don't do that, you will fall behind in your job, in your analysis, and increasingly in your life, as these tools become integrated in all of our human and machine environments. So operating in a commercial environment, or operating outside of your job environment even, it's going to be necessary for you to be literate in these things. Do not say, "Well, I'm not sure about this AI. I heard it's dangerous." No. Go find out yourself. Use it. See where it works, see where it doesn't work. Imagine uses, artifacts or opportunities, in your house and in the way that you do business. This is a transformational technology, and when I say transformational technology, I mean that transform means the form changes. We are in a transformational age. If you aren't changing your form, then you are not going to be ready for tomorrow. You will not be employable, you will not be able to run your household, you will not be able to do these things, right? So it is critical that you understand this transformational technology, because it in turn transforms our society, it transforms the way we deal with data, it transforms the way we fight. If you don't understand these things, you are out of the conversation. Not a place you want to be.
Harry Kemsley: Transformation: evolution by another name, and all the more important these days because this transformation is happening so fast. Evolution used to take millennia. These days, the evolution of human culture might be measured in hours or days in some regards. Sean, your takeaway please.
Sean: Yeah, mine nests into this perfectly actually, and it's that we have to come to a time where we are comfortable with LLMs and algorithms and they're just a normal part of how we do business. But I don't know if enough thought, serious thought, has been put into, okay, how does that change how we think about how we do intelligence? So, if you want, our tradecraft: what does tradecraft 2030 look like? Because it's going to have to change, and it should do, and if we get it right, it will actually improve everything so much that we'd be able to come up with really good foresight, really good analysis, all the rest of it. But I don't know how much work is going into that. It can't be just about, okay, well, the AI's over there, right, we've got all this now, so we'll just do the normal business like we've always done, we'll cross-refer, we'll use the spreadsheets, et cetera, et cetera. We've got to think about things differently. Where does the analyst come into the loop, and how are they used to best effect to come up with the so what and the what if that is needed by the commander? So I think that's a piece of work that really needs doing.
Harry Kemsley: Thank you, Sean. For me, it would be something that was said quite early on by yourself, Mike, in terms of, by putting the mind on the pedestal, you're freeing the mind from all the clutter and the noise in the data. You're letting the mind rise to a point where we can actually begin to understand and see things more clearly. That, for me, is a vision that caught my attention as you said it, because if that's what that means, given that we do not lack data, it's everywhere, it's surrounding us, it's swarming around us, the ability to rise above it, find that degree of calm, clear vision of what's going on, to understand and then make decisions, that to me makes it all the more important that we pursue this mind on a pedestal. And on that, I'll finish by saying thank you one more time, Mike, for your time today. Thank you. If there is nothing else from either of you, I'm just pausing. Let me say thank you to the audience for coming through this journey with us over the last couple of episodes. It has been a great conversation, Mike. I'm going to say thank you for the third time. Really grateful for your time. If the audience has any questions or anything they'd like us to explore still further, there is now an email address in the show notes that you can use to send your requests in. A lot of people are doing so, and we'll happily attend to those in future podcasts as and where we can. With that, nothing else to say other than thank you for your time. Bye-bye.
Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, or you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you'll never miss an episode.
DESCRIPTION
In the second part of this podcast, Lt Gen (retd) Mike Groen, Harry Kemsley, and Sean Corbett continue to explore how decision making is evolving in the age of data overload. They discuss the concept of data tribalism within military and intelligence communities and the hurdles and potential of integrating diverse data types for enhanced decision making. Discover how artificial intelligence (AI) and human collaboration could revolutionise intelligence analysis and operational efficiency, challenging traditional practices and requiring a new understanding of technology’s role.
Today's Host

Harry Kemsley
Today's Guests

Lt Gen (retd) Mike Groen
Sean Corbett