Knowledge to understanding and how to get there - part one
Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.
Harry Kemsley: Hello, before we start this podcast episode, just a quick explanation that we're going to split it into two parts. So the first part we'll play now, and then we'll invite you back to join us for the second part very shortly. Hello and welcome to this edition of World of Intelligence by Janes, your host, Harry Kemsley, and as usual, my co-host, Sean Corbett. Hello, Sean.
Sean Corbett: Hi Harry.
Harry Kemsley: Good to see you as always, Sean. We've discussed so many topics. Now it's actually difficult to remember all of the topics, but one thing that we've discussed on numerous occasions is that we don't lack data anymore. We don't lack information. In fact, we're almost drowning in it. But one of the things I wanted to discuss today is how do you move from this modern world full of data to clear thinking and decisive action, and ideally decisive action that would defeat a potential adversary? And for that, I've invited a really, really good guest for this, Mike Groen. Hello, Mike.
Mike Groen: Hey, Harry, how are you? Hey, Sean. Good to see you guys.
Harry Kemsley: inaudible to join us. Mike, for those of you from the United States intelligence or operational communities, Lieutenant General (retired) Mike Groen will need little introduction. For listeners who do not know him, however, he commanded at every level within the United States Marine Corps. He completed his very distinguished military career as the Director of the Joint AI Center in the Pentagon, following a tour as the Director for Intelligence for the Chairman of the Joint Chiefs and Deputy Director of Computer Network Operations at NSA. He is also a former Director of Intelligence for the US Marine Corps. Since his departure from the service, Lieutenant General Groen has had roles in AI consultancy and geopolitical risk, and has recently formed a new company, Global Frontier Advisors, focusing on global infrastructure, climate, energy, mining and carbon capture. As I mentioned a moment ago, in an age where data comes at us faster than we can think, the challenge isn't just collecting it, it's making sense of it. How do we organize and aggregate information so that it frees the mind rather than overloads it? And how can technology, automation and AI act as force multipliers for human decision-making instead of replacing it? So, Mike, with that bio, what I really want to do is dig in a little bit on this concept of putting the brain, the human brain, on a pedestal. I want to look at things like: how does the machine help with this world of lots of information? How does it help us get to a place where we can make decisions better than adversaries? What can we do about the culture we're facing? Sean and I have spoken in the past about the cultural barriers and some of the technologies that are available, for example, and I want to dig into that a little bit. And then ultimately, once we've gone through some of those, I'd like to get to the end about this balance between the need for near-term or immediate good decisions and the need for judgment and understanding. So that's the sort of flow I'd like to take us through. Mike, given your background and the bio I've just read out, I'm very, very comfortable you're the right man to be sat here discussing it with us. Again, thank you for joining us.
Mike Groen: Thank you very much, Harry and Sean both. I'm really excited to be part of the podcast. World of Intel is one of my favorites, and I know you have a big listener base for all the right reasons. So really glad to be part of it today. Thanks for having me.
Harry Kemsley: Thank you, Mike. And I must confess, Sean and I do sometimes look at each other with a degree of dismay, almost, that the numbers are as big as they are. We consider ourselves to be a little bit like the couple of characters sat up on the balcony in The Muppet Show, but let me not go too far down that road. All right, let's get started. First of all, this concept of why the human brain should still be on the pedestal, let's just be clear about what we mean by that. So in your words, Mike, what do we mean by on the pedestal and that concept?
Mike Groen: Yeah, sure. I mean, it's a great question to start with, and we can go a lot of different ways with this conversation. But at the core, I think everybody really understands the challenges of data overload, the challenges that the human brain has with lots of layered data simultaneously. You can't do that. And so just purely from a technical perspective, think about how you help the human brain actually operate in this data-rich environment. You can't do it with the brain alone. And so to me that's a really important thing, not just from the technical aspect, but from the human and application aspect. So if you can elevate the human mind, and in most cases I always think about the commander, let the commander of an operation or the commander of a unit put that individual's mind in a place above all of the dirty, ugly, repetitive, redundant, layered information that an intelligence machine produces. And when I say machine, obviously I'm thinking about humans and machines here who, for all the right reasons, are pushing data toward a commander. We have to think differently about this. We really have to produce understanding for a commander, not just data-informed commanders. And so I'll give you a classic story. I was General Mattis' G2 in the 1st Marine Division a couple of years ago, and in our intel sessions or staff sessions, we would debate the intelligence. We would have a set of facts that was produced by whatever type of intelligence, and then we would argue about it. I was the intelligence officer. General Mattis obviously was the commander, and we would argue about it: "Sir, I think this means that." And he would say, "No, no, no, I think I saw this other thing and that's not really what I see here." And that debate, that dialogue, elevates you above the facts into understanding. So now instead of just, "Oh, okay, three enemy tanks were spotted here," it's, "Wow, okay, that's part of the first tank battalion. And I know those guys got hit yesterday, so they're probably low on ammo." That intuitive understanding in a combat environment is very simple because it just flows. And then beyond just the commander's human mind on a pedestal, you're dragging all the other decision makers up onto that pedestal as well, because now they not only have the data, but they have the situational understanding: "In those conditions, I should probably do something like this." Now take that to the machine age. With machines, the machines can keep you in that place all the time, where you not only know the intelligence data, but you know the impacts of that data and the artifacts of that data. You know what, I'm not really thrilled with that source, so I'm going to discount that a little bit in my analysis. An AI can't really do that, right? They're going to give you the data and they're going to turn on that data, and you're going to get some really fantastic things that you never would've found in your own mind. However, now you have that machine and you have human intelligence to actually get to the place where layered commanders are all up on this pedestal. They're not worried about where all of the enemy units are because they can see it. It is part of the picture that they're following, right? And another difference, sorry, I'll just add one more here, is the continuous nature of this. It is not a one-time, "Oh, it's nine o'clock in the morning. I get my intel brief. And so here's a fact and here's another fact.
I'll see you at three when I give you the next update." No, no, no, no. It's continuous, right? Is the temperature hot or is it cold? I don't know. Let's turn the dials. That way of thinking about intelligence goes beyond the commander; that intuition feeds all the way down across the force. So now you have commanders at a platoon level who have a good, sound understanding of, "Wow, I think the boss... The way the boss put that, I'm going to try to flow like this," and they'll have access to the same data. "I wonder why he said that. Oh, because he's taking this threat really seriously." You know what I mean? Now the human mind is not doing filing and recall. The human mind is applying judgment and understanding continuously all the way through the chain of command.
Harry Kemsley: Got it. So just before I come to you, Sean, because I can see you leaning into the conversation very quickly, what I took away from that is that putting the mind on the pedestal is about getting to a level of understanding, rising up out of the data to the point where I understand the situation sufficiently well that I can start to identify the so what, the so what for me right now, and perhaps my so what for tomorrow, and so on. And that level of understanding, to use your point, permeates down through the layers below me if I am the senior commander, or the peer group around me. It allows, to use the UK doctrine term, a bit of mission command. It allows them to understand things sufficiently well that they can operate with a degree of autonomy. That's the sort of so what that comes from people really understanding and having judgment. So if that's what we mean, we'll come on to the tools in a second. Sean, your thoughts on that so far?
Sean Corbett: Yeah, already we've probably got a three-hour podcast in the process here.
Harry Kemsley: inaudible. We're going to be here a while.
Sean Corbett: Just two very short points for now, because I would like to delve into the command and control piece at some stage, but this all starts with asking the right question. You've seen me play with various different AI things and get really frustrated because it's not giving me the answers I know, but there's a way to frame the question, and it's the same with a commander. A commander won't necessarily always know what's available to answer the question. So they might ask something like, "How many tanks are there?" That's not really what you should be asking. You'd be like, "Okay, what does it mean in terms of the formation?" Exactly as you said, Mike: these were there yesterday and they've already been in combat, therefore... So there's an education element and then a cultural thing. And of course, the more we get into this data-overloaded information age, the more commanders tend to become their own decision maker. Let's just park that one though for a moment. The second point, actually, is that the relationship between the intelligence professional and their commander is still absolutely critical. I've written and talked about this many times before: you've got to get the trust of your commander, and that's got to be earned. But you've also got to have a relationship where you can have dialogue, where there can be to and fro, and that's not always the case. So there's an educational and a cultural piece to this.
Harry Kemsley: Yeah, I think that educational and cultural piece we will come back to later, but just to put a bullet in as a bookmark: I think one of the big issues we've identified before, Sean, haven't we, about the machine-dominated approach is that it's a black box. You can't really understand how the algorithms have done what they've done. You can't really probe, and it's very difficult therefore to gain the level of trust that you need. But I'm getting ahead of myself. Let me pivot this then into the tool sets. You alluded a couple of times, Mike, to how that level of situational understanding can be to some degree enhanced, enabled by the machines. Of course, we're referring to a variety of advanced technologies that are becoming increasingly prevalent in all walks of life, all day, every day. Let's spend a moment then to talk about where AI and automation and other advanced technologies can help, but equally, let's be clear about where we don't think they can help, where we need to be more careful. Let's go there next. Mike, let's go to you first.
Mike Groen: Yeah, okay. No, that sounds great, Harry. Here's the thing. I think everything we talked about earlier, that's table stakes, right? The relationship with the commander and all this, we've been doing that since Napoleon, right? So that is great. It's necessary. You can't forget those muscle movements. You know how to walk. Now let's talk about how to run, right? Because now we're bringing in the machines to help us, and where I see that pedestal, that mind on a pedestal: now that mind on the pedestal is not worried about little bits of data. The mind on the pedestal is now thinking, "We moved left, so they're probably going to move right. I can see it in their formation. I see the rows." I'm sorry, I use lots of ground analogies here. That's where I came from. But it applies everywhere. Here's the thing: now you can start to anticipate and pre-adapt to what you think is going to happen on the battlefield. So now I'm going to set an ambush over there, and you know what? When they hit that ambush, that's probably going to drive them, because of this information here, to this course of action, maybe, after that one. So you know what? I'm going to set another ambush there. Because your mind is free of all of the data distractions, because the machines have taken care of that, now you can start to anticipate, and now you can start to be really dangerous. I call this the Jurassic Park School of Intelligence, and that probably deserves some explanation. So here's what I think. I love the movie Jurassic Park. Remember the big game hunter? He's going to go and kill the mama inaudible or whatever it was.
Harry Kemsley: Velociraptor.
Mike Groen: Velociraptor. And so this guy is skilled, he's going through the woods, he's got his big gun, and then you see the leaves part, and you see the leaves part, and all of a sudden you see the Velociraptor's teeth right there, and he turns and he says, "Oh, clever girl," right? Wow. So now what does that mean in a combat environment? That means one dilemma after another. Just when you think you've solved it, "Oh, okay, we got through this ambush. Oh, no, there's another one." Or, "Oh, no, all of our ships are sunk." Or we diverted them through electronic means; we made the navigation of the ship that was supporting the enemy suddenly go to the wrong place, or whatever, a missed connection. But this idea of now using the machines, because now the machines can help us: "Okay, if they attack this way, how is that really going to go?" Artificial intelligence can help you with that. So now you have data-driven... Not just that you have the data, but the data is under the sidewalk, right? It's going through pipes under the sidewalk. Now you have living, thinking aggression, right? Now you can figure out, "I'm going to create 1,000 dilemmas for these guys, because if they do this, I know they're going to need to do something else. I'm going to take that away from them, and I'm going to take something else away from them." The machines help you with that, because you're not asking, "Okay, what's the enemy situation in the next 15 minutes?" You are actually now liberating your human mind from managing data to "I want to have this effect on the enemy. I want to do it like this." And that is so powerful, because that's what wins. And so that's what we're after here: commanders that have a fingertip feel for the environment, not because they know all the data, but because all of their fingertips are sensing heat, pressure, fiber, whatever it is. That is all in the commander's space now. It makes it really personalized for a commander, because now the commander can start to think, "Oh, okay, I can feel how this is going. And now let me start implementing or preparing challenges for the enemy." To me, that's a mind on a pedestal, free from all of the distractions, because you know what, the staff and the machines will take care of all the distractions; you don't have to worry about them. Are you going to run out of bullets? The machine and the logistics enterprise and all the machines that are part of that are doing all that for you, right? Am I going to have enough ammo to do this? Yes. Okay, good. Move on, right? I don't need to know how or why, but now you can move. And again, I apologize for being ground-centric, because think about the South China Sea now, in a largely naval and air engagement. Wow, okay, I can see where their air power is being generated. I can see what their ranges are. I know that they've shown a vulnerability to this tactic in the past. All of that just pops into your head, right? Because the machines are helping you with patterns, and the machines are helping you with future thinking, because you understand the present, because you have the machines inside the house. So I apologize for rambling here, but I get really excited about this. This is how you win. And I think if you're a military person, you want to win, right? Don't step backwards from that. Win. And so if you're going to win, use the machines to do that where it's helpful, and if it's not helpful to you, then don't. It's okay. But your enemy is going to be moving at that tempo.
You need to move at that tempo too if you want to win.
Harry Kemsley: Or faster. And I guess that's the point, right? You get the machine, you get to a point where you're creating so many challenges for them that before they've even had a chance to think about the last one, the new one's already hit them, and they're just basically pushed onto the back foot and kept there. So, Sean, I'm going to come to you with the... Okay, so what's the problem with this approach, this philosophy, this machine-enabled philosophy? Where does it stop? Where does the machine stop being helpful, in your opinion? We've discussed this before, but how do we counterbalance the aspiration for this near-perfect understanding of the environment, so I can get two or three steps ahead of our adversary?
Sean Corbett: I just want to go on the record before we start that I'm very much in line with Mike's thesis here, in terms of that's where we've got to get to. But there are challenges on the way. In fact, I remember briefing the same amazing general Mike just mentioned, in DIA, saying, "Yeah, great, because we're trying to bring stuff to life. So a visual screen, an interactive screen." And he said, "Okay, that's great, but what if I change the parameters to see what happens there?" And I have to say, at that stage, this was 2016, we were totally lost. That was the briefing. What else? However, one of the great analysts said, "Oh, yeah, okay, if that happens, then this happens." So the challenge is capturing what's in an analyst's brain, something that might be slightly ethereal. And I feel this quite a lot now, actually. Sometimes I just know stuff. I know that when I see something on CNN or BBC, generally not BBC, but anyway, I will go, "Right, that's what this means." But it's based on a lot of experience and a lot of background and thousands of things I've read, and I would never be able to go, "I think I read that there." I just know that I read it somewhere. Hence the level of confidence; you've seen me do this several times: "No, that's not that. It's this." And if someone says, "Right, do your homework," I find that very difficult indeed, and it takes me a long time to research it. So the complexity of the human brain that can amalgamate everything together and come up to the next level is a challenge. And can that be done through AI? I would argue right now that's a challenge. It may well come in the future. But what AI can do, of course, is all the manual stuff that Mike spoke about, the spreadsheets. I was doing some analysis this morning, which might scare you actually, and I ended up having 18 screens open just to make sure that I was cross-referring everything on the various different things, and bringing that together was incredibly difficult, and still is. You still have to cross-reference it. But then I put the same question into AI, and it came up with more or less the same stuff in about 10 seconds. So in terms of coagulating, if that's a word, all those good things together, I think it is a really positive thing. And that's really where the goodness is right now. The biggest challenge for me, though, and there are two, but the biggest is the veracity, the assuredness and the trust in the data. Just because that data is out there does not mean to say that it's right. And if you are relying on the AI to say, "There's your data point, there's another data point," and you're using that to drive you, then who's to say that's right? This is where I'm not sure we're there yet.
Harry Kemsley: So let's not dive into the disinformation, misinformation pond yet, because I think that's a whole conversation in itself. Let's accept there will be good information and bad information out there that the AI and other means have got to sort their way through. What I'm slightly more conscious of, though, at the risk of sounding like a middle-aged veteran versus a current operator, and I use that caveat purely because things may well have changed in the 12 years since I was in service, is the fact that, culturally, I don't remember a time when people were comfortable handing over too much to machines. It always had to be audited; it always had to be checked back to source. Even to this day, in my work supporting governments around the world from an open source, unclassified basis, the most frequent question I get asked is, "What are your sources?" And it's in that question that you find a cultural issue, maybe a question that says, "I can't rely on anything where I don't understand its source, its tradecraft and its analytical path to get to that recommendation." Maybe that's what it is. But my worry about the analytics, Mike, if it's done more by machine, even in the earlier stages, is this resistance. I mentioned in my introduction earlier this data tribalism as being part of the thinking we're going to discuss today, and these sorts of cultural norms that I'm trying to bring together. I think these are barriers to the pedestal you're wanting to put the brain on, obstacles that we have to overcome. So first of all, do you agree we have these kinds of barriers, and secondly, what do we do about them?
Mike Groen: Yeah, I agree wholeheartedly. And that makes it so important that we adopt and we experiment and we try these things, right? And that, to me, has been the hardest part: implementation is the hardest part. But we're starting to see now, certainly in US industry, that leaders are eager to start to move into an AI space. Workers are actually eager to move into an AI space, because they know, in their closed environment with the data that they operate with on a daily basis, that something we're doing today with a pencil and a piece of paper they can do now with machines, with some assurance, and that it will take time to build trust into those environments. And you can never generalize that, I guess is what I would say, in an industrial application, because every pipeline is different. People use different data, and that skews things, and market data comes in and you're changing things. All of that is readily possible, but you need a machine and a team, a human team, that can actually understand that most of the problems of artificial intelligence come from either human misunderstanding or machine misunderstanding of language. And here's a bit of a diversion, but what does the large language model do? It uses words as data. And you and I know that words have lots of different meanings and lots of different nuances, and it depends what context you use that word in, whether you say it in a loud voice or a soft voice. See, machines don't do that. And I think artificial intelligence application writ large took a step back, because we thought, "Oh, well, that machine understands everything that we're saying, and so it's responding to us in language," not really realizing that, you know what, the machine doesn't understand anything at all. It's trying to use words as data, and that's where you get things like hallucinations, right, where a word has a different meaning and now you spiral off and really change an outcome from an artificial intelligence engine. So understanding that is so important. Humans introduce those challenges too. So it really is important that humans, machines and context are all in play and are all practiced. And you wouldn't take a brand new model out of the box and use it in a combat environment. You certainly should not, because it takes time to tune these machines to the application environment that you're using. And warfare is crazy from day one to the last day; you really have to have your fingertips on the machine all the time. So the philosophical point is a good one, but I think in many application environments, people have been able to get to a place where they're comfortable with the outcomes in that environment. And that's what I'm talking about with the pedestal: those proven application environments are contributing to the commander's understanding without the commander having to read every single report. That gets us, I think, a lot closer. You mentioned tribalism too, Harry. I don't know if you want to go through that now, but-
Harry Kemsley: If you don't mind, Mike, what I'll do is just hold that tribalism to one side. I want to go back to what you said, because I think there's quite an interesting point you've made there about this growing trust. You said that you know of commanders, you know of people throughout the tiers, increasingly engaged in wanting to work with AI. That's in itself interesting. I don't think that was true a number of years or even months ago. There was a degree of resistance some time ago to the idea of even using this stuff. Now I've certainly seen it in my day job, and you've mentioned it in the last few minutes: there is a growing acceptance, and in fact, in some regards, a growing appetite to work with these tools. So the question is, what's changed? Is it just a matter of it having been around so long that people have got used to it? It's become part of everyday life, they see it everywhere, so it must be okay, normalized. Or is it that they've actually started to detect what Sean alluded to a moment ago? Wow, these really big, complicated tasks, 18 tabs open on my computer; it takes me an hour just to get my head around what the 18 are telling me, and the computer does it in 10 seconds. And by the way, when I checked the answer the computer gave in 10 seconds, it was good enough to move on. So I don't quite know what the thing is that's changed, whether it's just normalized by exposure, or whether people are starting to realize it actually can do this stuff pretty well.
Mike Groen: Implementation... Sorry.
Harry Kemsley: Go ahead.
Mike Groen: Implementation, that is where we are in this, and we're, I won't say stuck, but it's moving slowly, right? Because especially if you're running a large company, you have stakeholders and stockholders that are watching everything you do. And so it is a very conservative application environment in a material production facility or material information facility, whatever it is, whatever your product is. So if your company depends on it, you'd better test it, and you'd better make sure that you understand the provenance of the data and that it's applicable to the environments that you're trying to operate in. And this is once again where humans are the problem, because in almost everything you read where an AI did something unexpected or what have you, in almost every case it was a human who didn't really understand the environment that he or she was trying to build to, so they didn't understand the implementation environment, so they built it wrong. The rarer place where problems come in is if you're asking machines to do something that they're not good at. And let me give you a good example. The easiest example is that large language models are language models. No shock there. But if you ask a large language model to solve a spatial reasoning problem, what color are the red blocks underneath the blue blocks, and is one forward or one back? If you ask a large language model to do spatial reasoning, guess what? They suck at that and they get it wrong. They couldn't find their way from point A to Z in a maze, because they have no idea. And so I think people are skeptical of AI in general, and there's an element of goodness in that: you need to understand how those machines actually work, what they're good at, what they're not good at. Then you understand your application environment, and then you start cooking. And once the machine starts to go, I would submit that you wouldn't send a commander into the South China Sea at the head of a convoy or a set of ships or what have you if you didn't know exactly what that machine was doing underneath the hood. And so it is really important. That's incumbent on the humans. It's not incumbent on the artificial intelligence; it does what it does. You have to understand how it works and how you can make it achieve the outcomes that you want it to achieve.
Harry Kemsley: Right. Sure. I'm going to come back to the data tribalism, just a moment, Mike. Sean, your thoughts on that because I can see inaudible.
Sean Corbett: Yeah, so I think this leads to what I see as the absolutely critical question here: at what stage does the human need to be in the loop? And there are two elements of that as well. There's the who, at what level, has the right data, the right trusted data: "Right, I've got what I need, I can therefore make my decisions," versus, "Right, I don't want to be bothered with all the detail, but I've got analysts that do all that, including using the AI, so I still want to be briefed by my trusted analyst," which is fine. So at what stage does the commander decide? We've all been in scenarios, I think, where the commander has decided they know better than we do, and they're not really going to listen to what we've got to say because it doesn't necessarily fit their scheme of maneuver, if you like. We've all been there and been told to wind our necks in, and off we go. And I've even seen two formations, with exactly the same data, produce two different assessments and analyses, and then the commander picks what they want. But the second part of that is: at what stage do you need to get that person in, and at what stage is the decision maker the decision maker? We always talk about mission command as though it's this great thing we've done forever. "Oh, yeah, mission command, this is what I want you to achieve. How you do it is your business." And that can apply all the way from the super strategic, the Pentagon level, all the way down to a sub-tactical unit. But at what level is the decision really made? Because now the communications are so good, and everybody at all levels has pretty much the same sort of information, whether that is classified or open source information, you'd like to think that mission command becomes more positive and real, because the guy at the front end, who has got the problem and might be getting shot at, has the levers to actually go, "Right, I understand that, therefore I'm going to do this." Versus people at the strategic military level who go, "Oh, I've got all the data as well, I'm just going to tell them what to do." Now that's something I think we've been wrestling with for decades, and I think there is a danger, as AI gets better and better, which it will, and as the decision making becomes "easier", in inverted commas, because the data is better and it's being better managed and all the rest of it, that you've got that natural tension, because the people with the screwdriver at the top think, "Well, I know just as much as they do." That for me is a big conundrum.
Harry Kemsley: And that is a problem we've seen for a long time. I'm reminded of a time flying over the former Yugoslavia when there was a very, very senior air officer on the radio telling the F-16 driver to bomb the tank. And after several times telling the pilot to bomb the tank, the pilot said, "Dad, I just can't see the tank." The point being, the general could see it because he had eyes on by various means; the pilot couldn't see it because his ROE required him to see it with the naked eye, not through cloud. So the idea of command or control is opening up before me. I'm going to just push that to one side, and I think you're talking there, Sean, a bit about the data democracy that we're starting to see, where data becomes available to everybody. Okay, we'll take just a short pause there. That's the end of part one. Please do join us for part two very soon. And thank you for listening.
Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you'll never miss an episode.
DESCRIPTION
In this two-part episode, Lieutenant General Mike Groen (retd) joins Harry Kemsley and Sean Corbett to share his experience in the US intelligence and operational communities and how decision making is evolving in the age of data overload. They explore the transition from drowning in data to achieving clear thinking and decisive action in military and security operations. Discover how AI and technology can amplify human judgment without replacing it, and why putting the human brain on a pedestal is crucial in today's information-rich environment. Listen to how unlocking the potential of AI as a force multiplier can support strategic decision making.
Today's Host
Harry Kemsley
Today's Guests
Lieutenant General (retd) Mike Groen