Using Virtual Worlds to Prepare and Plan for Future Scenarios
In this episode of the Janes podcast, Terry Pattar talks to Joe Robinson, CEO of Defence at Improbable, about how advances in the games industry are being utilised by the defence and intelligence community to create virtual worlds for preparing and planning for future scenarios.
Joe Robinson, CEO of Defence, Improbable
Terry Pattar: Hello and welcome to this episode of the Janes Podcast. I'm Terry Pattar, I lead the Jane's Intelligence Unit. I'm joined by a member of my team, Kyle McGroaty.
Kyle McGroaty: Hello.
Terry Pattar: And by Joe Robinson, who is CEO of Defense at Improbable. Welcome, Joe.
Joe Robinson: Hi, nice to speak to you.
Terry Pattar: It's great to have you on. It'd be great, Joe, for anyone who's listening who hasn't heard of Improbable before or maybe actually has heard of Improbable, but heard of you in a different context and maybe isn't conscious of what you do in defense, et cetera, for you to maybe give an introduction to Improbable, a little bit of your background and what your role is there currently.
Joe Robinson: I'm Joe Robinson. I joined Improbable around five or six years ago now, and I joined from the Ministry of Defense. I did 10 years in the MOD and the British Army, and I came across to Improbable just as we had started to look at applications for our technology beyond gaming. Improbable is probably better known as a gaming business, or a games technology business, but the way that we would describe ourselves overall is a virtual worlds company. In the defense and intelligence space, we build this unique and sovereign capability for defense called a synthetic environment. This is essentially a very realistic virtual representation of any real operating environment, which enables users to train, to plan, to test strategies in a virtual world before you then go and implement them in the real world. We utilize some of the same technology that we use to power unique multiplayer gaming experiences, and bring that capability to defense. That's really what we do on my side of the business, and we're active across the UK, across the US, and across NATO at the moment, and growing very quickly.
Terry Pattar: Just for those who aren't aware, when you talk about building those virtual worlds or synthetic environments, in practice what does that look like? And how do people experience it when they come to use it?
Joe Robinson: Yeah, it's a good question. I mean, synthetic environments, these virtual worlds as we describe them, are capable of synthesizing the vast complexities of the modern operating environment. Much has been written about the fact that the world is increasingly contested, increasingly congested, and the threats that we're all facing today, that our governments are facing and protecting against, are more diverse and, I suppose, interconnected than they have been for a very long time. Synthesizing all of this complexity, which includes things like bringing together cyber effects, activities happening in the space domain, information effects and the stuff that you see in hybrid warfare scenarios, enables defense firstly to understand the complexities of that world, and then to try experiments, to train, to plan and even to orchestrate operations in a virtual environment, in a synthetic environment, before they go and do it in the real world. Our technology is essentially a platform, a software platform, which enables the creation of different applications and solutions, so that users can consume these worlds, these synthetic environments, in different ways. At one end of the spectrum, it's essentially a massive Call of Duty for the military, with lots of soldiers experiencing this deep, immersive, multi-domain operating environment to improve training outcomes and to really experience the realities of hybrid warfare in a way that hasn't been possible before. At the other end of the spectrum, we have much more analytical applications and solutions running on the platform, supporting policy design and development, operational decision making and planning, much more at the analytical and scientific simulation end of the spectrum.
Which is then able to run thousands of courses of action over and over, fast, in real time, pull out the nuances of those choices, and help defense and the intelligence community make better decisions. It's this full spectrum of, I suppose, preparedness activity, from training to war gaming to policy design. Even things like test and evaluation: testing new capabilities, new equipment, in a virtual world before you then have to deploy them in the real world. We should hopefully help defense and government make better choices overall across these different areas. It's a single platform that stitches together all these models and data and produces these synthetic environments, which different users can then interact with in different ways. I find myself clutching for a whiteboard pen when I try to explain this.
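[As an aside for readers: the "running thousands of courses of action and pulling out the nuances" that Joe describes can be illustrated, in a deliberately toy form, with a Monte Carlo sketch. Everything here, the parameter names, the numbers, the scoring model, is a hypothetical illustration by the editor, not Improbable's platform or code.]

```python
import random
import statistics

def simulate_outcome(coa: dict, rng: random.Random) -> float:
    """Toy outcome model: a success score in [0, 1] with random noise.

    `base_success` and `risk` are invented illustrative parameters.
    """
    noise = rng.gauss(0.0, coa["risk"])
    return min(1.0, max(0.0, coa["base_success"] + noise))

def evaluate_coas(coas: dict, runs: int = 10_000, seed: int = 42) -> dict:
    """Run each course of action many times and summarise the spread,
    not just a single point answer."""
    rng = random.Random(seed)
    results = {}
    for name, coa in coas.items():
        scores = [simulate_outcome(coa, rng) for _ in range(runs)]
        results[name] = {
            "mean": statistics.mean(scores),
            "stdev": statistics.stdev(scores),
        }
    return results

coas = {
    "COA-A (fast, risky)": {"base_success": 0.70, "risk": 0.25},
    "COA-B (slow, safe)": {"base_success": 0.60, "risk": 0.05},
}
for name, stats in evaluate_coas(coas).items():
    print(f"{name}: mean={stats['mean']:.2f} stdev={stats['stdev']:.2f}")
```

The point of the sketch is the shape of the output: the riskier course of action may score higher on average, but its much wider spread is exactly the "nuance" a planner needs to see alongside the mean.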
Terry Pattar: I was going to say, it's one of those things I suppose on a podcast is difficult to describe, whereas if you were doing it visually, it'd probably be a lot easier.
Joe Robinson: Yeah. You can't use slides [inaudible].
Terry Pattar: No fancy little video to demonstrate it.
Joe Robinson: It's a new concept, it's a new way of thinking about the issue of really getting after what in the UK they call multi-domain integration. Being able to understand the multi-domain operating environment, being able to integrate lots of users together, all drawing on the same synthetic environment, and then being able to build a collective response, to collaborate and deliver better outcomes. By looking at it from the perspective of a software company, and thinking about it as trying to solve some of the foundational issues of bringing these very realistic worlds together in a way that is efficient, reliable, cost-effective and scalable, that's where we really start to push the boundaries of what's been possible before. And that's what makes it really exciting.
Kyle McGroaty: Joe, I don't know if you've read How to Measure Anything in Cybersecurity Risk; it came out a couple of years ago now. Really interesting book, breaks it down quite neatly, talks about running multiple probabilistic models using just Excel spreadsheets and a couple of macros. And actually, the guy who wrote the book, I forget the name now, I think he put up on his website some of those probabilistic models and Excel spreadsheets. I imagine yours are much more complex than that, and so they should be, because cybersecurity is something that you can model in a far more predictable way than, I don't know, asymmetric warfare in a foreign country. But it's amazing to me, because the idea of being able to run a scenario, change a reporting line for a subordinate unit, change a capability, assign one unit to another or to a different part of the hierarchy, and then run that simulation again and see how effectively that change is implemented, or what it means when that change is implemented, is an incredible thing to be able to do.
Joe Robinson: Oh, absolutely. And we have the capability to deliver probabilistic models using Bayesian inference. Thomas Bayes' techniques of surfacing the probability of outcomes have been around for hundreds of years, and actually the defense and national security user is quite used to understanding and comprehending probabilities. They're quite used to thinking of things in terms of error and the likelihood of things going well. So we've found that capability is quite important, but I think the old adage of modeling, of any model, is that all models are wrong, but some are useful. An often-said phrase, but it's incredibly apt and incredibly appropriate. Part of the reality of bringing together vast numbers of models and ensuring that they are sufficiently validated, verified and calibrated to support specific questions means that we need to surface the errors in those models. And we need to ensure that the synthetic environment, and this is one of the great advantages of looking at the problem of effective multi-domain integration through a synthetic environment, isn't a black box. It isn't an Excel spreadsheet model which is just pumping out an answer. It's a virtual world which has an audit chain. You can understand where the outcomes have come from, and we're able to surface the error and the unknowns in that decision-making. It's all about ultimately empowering the decision-maker. It's about giving the human decision-maker, whether it's the soldier training, the policy maker designing policy, or the stressed J5 planner in the operational headquarters trying to run these courses of action to understand outcomes, the tools to understand that world more effectively, and empowering them to make better decisions without giving them an answer that has no explainability, no background to it, no audit chain.
You need to be able to combine a highly reliable, bounded statistical model, something that probabilistic modeling and Bayesian inference can provide you with, while recognizing that that outcome can be quite intoxicating for a decision maker, because they feel very confident in the answer. When in reality the answer is very often, as you said, a little bit more nuanced than that, and therefore you've got to be able to bring in heuristic models too: the judgment of individuals who are subject matter experts in their area, who say, "Look, I know the mathematical model is telling me this, but I know this system and I've had an understanding of it for a long period of time." It's combining those elements together, surfacing the error, and then providing synthetic environments that help a user understand the problem space and make better decisions. That's really what it's about. It's about enhancing that decision-maker and improving the human in the loop, who still has agency; they can still decide whether they want to follow what the environment is suggesting or whether they want to follow their gut. And that's something that's really important, frankly, to technology like this. And it enables us, I suppose, not to face this crisis of legitimacy that often plagues AI as a sort of purist capability.
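[As an aside for readers: the Bayesian idea Joe mentions, giving the decision-maker a distribution with its uncertainty surfaced rather than a bare point answer, can be sketched with the simplest conjugate update. The prior and the rehearsal counts below are invented for illustration; this is the editor's sketch, not Improbable's code.]

```python
from math import sqrt

def beta_posterior(prior_a: float, prior_b: float,
                   successes: int, failures: int) -> tuple:
    """Beta-Binomial conjugate update: returns posterior (a, b)."""
    return prior_a + successes, prior_b + failures

def summarise(a: float, b: float) -> tuple:
    """Posterior mean and standard deviation of a Beta(a, b) belief."""
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

# Hypothetical example: a weak prior that a plan succeeds about half
# the time, updated with 12 successful and 3 failed virtual rehearsals.
a, b = beta_posterior(2, 2, successes=12, failures=3)
mean, sd = summarise(a, b)
# The error is surfaced alongside the estimate, not hidden behind it.
print(f"P(success) ~= {mean:.2f} +/- {sd:.2f}")
```

The "+/-" is the whole point: the same evidence reported as "74%" versus "74%, give or take 10 points" supports very different decisions, which is the intoxication risk Joe describes.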
Kyle McGroaty: I have so many questions for you. I have so many questions that even my questions have questions. At Janes we have a similar issue, with a tremendous amount of information collection: trying to rationalize it, trying to turn it into data sets, synthesize it, create some sort of ontology that allows us to understand the information that we have collected. It strikes me from what you're saying that the estimative language, the probabilistic aspect of your models, isn't as pressing a challenge for you as the confidence levels. I remember writing reports where I put "I have a high degree of confidence in this assessment" based on the fact that whatever. I'm struck by things like less than 3% of Twitter being geolocatable in any meaningful way. How do you actually have any real confidence that you are able to make date, time, place assumptions about the data that you're picking up, knowing that you're only really relying on less than 3% of the entire data set available? The confidence levels thing is the first question I had for you; there are many, many others.
Joe Robinson: Yeah. You're spot on. There are different levels of building confidence. There's building confidence in the software: making sure that the software is reliable, that it's easy to use, that it's somewhere people actually go to support decision making, that the user enjoys using it and it's giving them something they haven't had before. It comes back to making things faster, making things easier, making collaboration more effective. Confidence is lifted at a basic level by enhancing that level of collaboration, by getting many minds and many thoughts contributing to it. Then it's about surfacing the assumptions behind the models and the assumptions behind the decisions. We've been developing this framework, we call it the explain and review framework, with defense scientists for a couple of years now. A pretty unique collaboration with DSTL in the UK to ensure that we can really surface a lot of this uncertainty, make it very clear to decision makers what they're seeing, and give models a history. Any model that's incorporated inside the synthetic environment has what we call a passport. And within that passport it has specific characteristics: who owns it, where does the data come from? Is this a model that's suitable for training and supporting training outcomes? Or is it something you want to start to rely upon, frankly, for life and death decision making, in which case that's a higher level of certainty and a higher level of importance when it comes to surfacing the error within it. It's important to know that, as a company, we do a little bit of the modeling ourselves. We have some fantastic and hugely capable academics and fantastic what we call model engineers inside our business that do build some of these models. But actually the power of approaching this problem of how to understand and respond to this horrendously complex world is to bring in some of the best models from others.
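[As an aside for readers: the "model passport" Joe describes is essentially structured metadata attached to every model. The field names and fitness levels below are the editor's guesses at what such a record might contain, not Improbable's actual schema.]

```python
from dataclasses import dataclass, field
from enum import Enum

class FitnessLevel(Enum):
    TRAINING = "suitable for supporting training outcomes"
    OPERATIONAL = "validated for operational decision support"

@dataclass
class ModelPassport:
    name: str
    owner: str                                 # who owns/maintains the model
    data_sources: list = field(default_factory=list)  # where the data comes from
    fitness: FitnessLevel = FitnessLevel.TRAINING
    known_error_bounds: str = "unquantified"   # surfaced to the decision-maker

    def fit_for(self, use: FitnessLevel) -> bool:
        """Life-and-death use demands the higher validation level;
        a training-grade model must not silently support it."""
        return not (use is FitnessLevel.OPERATIONAL
                    and self.fitness is FitnessLevel.TRAINING)

# Hypothetical third-party model pulled into the ecosystem.
grid = ModelPassport("power-grid", owner="AcademicPartnerX",
                     data_sources=["national grid topology (open data)"])
print(grid.fit_for(FitnessLevel.OPERATIONAL))  # False: training-grade only
```

The design point is that provenance and fitness travel with the model, so the audit chain Joe mentions can be reconstructed for any outcome the environment produces.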
Part of this confidence level, and building up confidence in the system, is recognizing that as one company we can't be experts in modeling everything. We can't be experts in modeling power grids or, coming back to Janes, enemy movements and [inaudible] and that kind of thing. We've got to work with the best in the business. And so we have an ecosystem of partners that all contribute to these synthetic environments. We pull models from industrial partners, from academic partners, from within defense, and they come with their own credibility. And then we stitch those together, and do some clever things with the cloud through our platform, to ensure that the worlds scale, are reliable and are consumable by different users. Part of that confidence is also recognizing that you're trusting us as a software company to be the broker, really, and the integrator of those capabilities, and to deliver the software confidence, I suppose, but also recognizing that you're relying on the best information that you have available. And of course there are challenges in finding that and accessing that from different government departments and the like and bringing it together. It's often a policy, security and culture problem, actually, as opposed to a technology problem, that defense has issues with here. That's where I think you can enhance that confidence.
Kyle McGroaty: When you mentioned the passports I said, "Wow, that's interesting." And I thought it was recorded, but I was on mute. It didn't interrupt your flow, which is great, because I think that's really interesting. And it strikes me that, rather like any intelligence, it's rather like what Janes goes through: the relationships between the different entities, how one affects the other, whether it's the terrain and the vehicles that are on it, whether it's international air travel and air defense networks. These are really complex models in themselves, but understanding how they interact with each other, that's a particularly difficult thing. And I suppose you're coming from a gaming simulation background; I'm just thinking back to my wasted teenage years with Far Cry and other things.
Joe Robinson: Not wasted. We wouldn't say that.
Kyle McGroaty: No, I don't think they were; my mother, however, might disagree. But it's the understanding that there is a damage engine sitting there trying to work out, okay, this action has been taken by this individual in the game, how does that relate to something else, that is really quite a fascinating challenge to try and get your head around. And if you're able to take models from other organizations that understand that and can explain the real-world interaction between one unit and another, then, well, for a start it saves you reading a whole world of doctrine and joint service publications and other things. Which I imagine you do anyway, because you've kind of got to model the structures of how an organization works if you're going to give it a playground to work in.
Joe Robinson: Yeah. I think that's a very strong observation. One of the interesting insights, I suppose, that we've gained from our work over the past few years with defense across the US and the UK is that a lot of the time you can gain insights and deliver improvements, and it's all about ultimately becoming better prepared and making better decisions, with actually pretty simple models. Models, and combinations of models, that aren't hugely complex when it comes to deep scientific accuracy. I can give you an example. Say you're able to bring your unit together to train; we're talking about military training here. You can train in a virtual environment where, when a high explosive tank shell hits a broadband unbundling hub, the internet goes down, or when a building collapses and lands on a power line, the power grid goes down. And maybe in certain countries you're looking at dual-use capabilities, where the military and intelligence community are using the same broadband networks and the same power networks as the general population. Now, those systems are relatively well understood: how a broadband network works, how a power grid works. But just being able to deliver that functionality inside a synthetic environment is one of the pillars of our platform's USP, this idea of increased realism. What we really mean by realism is realistic effects and a realistic, representative environment, essentially, of the real world. If you can just start to bring some of those models into play, you're immediately making a step change, a leap ahead, pretty significantly frankly, in the capabilities that they have today. And then you start talking about hybrid warfare scenarios.
When our troops are experiencing working in this gray space, where damage to critical national infrastructure can then start to genuinely affect population dynamics, it can begin to affect whether people have access to the internet, and in turn there are models that exist which look at how populations can be nudged and moved in different directions based on these types of activities. Then you're immediately giving them a much more realistic and representative experience. And those models don't need to be super accurate. They just need to recognize that when my high explosive round hits that unbundling hub, the internet goes off in this area, so what happens next? There is so much incremental value that you can begin to add. And then, of course, it's just about refinement. It's about layering more of that complex operating environment in, and very quickly, because again you're approaching this problem through the lens of software: a software platform which rapidly and efficiently brings these models and these data sets to bear, and can upgrade scenarios in weeks, whereas before it would take years to update a scenario for military training. That efficiency aspect of the delivery gets you into an interesting space, because you can start to bring in real data, live data from the real world, as well as existing models that you understand. And then you think ahead of that, and suddenly the J7 training scenario that I've just described begins to really bleed into a J5 and a J3 scenario. The environment you're training in is essentially a copy, a faithful representation, of the actual operating environment you're going into. Then you can let your mind run away with where things go next, but it unlocks a different paradigm in the way we think about the world and the way that our governments can effectively prepare to respond to this type of environment.
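[As an aside for readers: the cascading effect Joe describes, a round hits a broadband hub, so the internet in that district goes down, so population connectivity degrades, is at heart a dependency-graph propagation. The assets and dependencies below are invented by the editor purely to show the shape of such a model.]

```python
# Each asset lists what it depends on; empty means it only fails directly.
DEPENDS_ON = {
    "broadband_hub": [],
    "internet_district_4": ["broadband_hub"],
    "power_line": [],
    "power_grid_district_4": ["power_line"],
    "population_connectivity": ["internet_district_4",
                                "power_grid_district_4"],
}

def affected_by(destroyed: set) -> set:
    """Return every asset that is down, directly or transitively.
    An asset goes down if any of its dependencies is down."""
    down = set(destroyed)
    changed = True
    while changed:
        changed = False
        for asset, deps in DEPENDS_ON.items():
            if asset not in down and deps and any(d in down for d in deps):
                down.add(asset)
                changed = True
    return down

# A round hits the broadband hub: the district's internet goes down,
# and population connectivity degrades with it.
print(sorted(affected_by({"broadband_hub"})))
```

As Joe notes, a model this crude is still enough to change the training experience; refinement (probabilities, partial degradation, recovery times) can be layered on later.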
Kyle McGroaty: Going just to that point, you mentioned sort of the way in which we view this and see it and experience it, but from a user perspective, how does a user experience this? Is this somebody who's got to train for an operational mission? Obviously, for training purposes it will be one that is imaginary, but you might still use a real setting, et cetera. How do they experience it? How do they sort of see it and feel it?
Joe Robinson: How do they consume it? I think this is part of the reality of what modern software companies call user-led design, or user-led development. It's putting the user front and center of the consumption experience of the capability. The answer is, ultimately, it's tailored depending on the situation. For an analyst, for a decision maker, it's sitting down at a computer screen, at a laptop, and this can be in a deployed headquarters running off a local network or a cloud in a box, running through analysis of a particular problem, and what they're experiencing is a view of the topography of that problem. And they can also view it through the lens of networks. They can view it through different sorts of mechanisms of viewing the problem, unpicking the problem, and then running scenarios against that. As part of our work within the operational decision support realm, we've essentially digitized the entire military planning process. From right at the start, running your combat estimate and your military decision making process, MDMP as the Americans call it, that entire process has been digitized. So you can really start to collaborate right up front, understanding the situation and how it affects you, through to running a war game at the end of it. That's one end of the spectrum: a quite traditional analytical consumption methodology. I think my UI and UX team would be looking at me at this point as I massacre a description of the phenomenal work they do, which is just alchemy. It's incredible, it's the most wonderful thing when you see it.
Kyle McGroaty: It is so important these days though, isn't it? The UI and the UX. If that goes wrong, then the rest of it, no matter how good it is, it doesn't matter.
Joe Robinson: It is. It's critical. It's the most incredible experience when you build a piece of software for a user and they turn around and say, "I can now do something in half an hour which would ordinarily take me five or six hours." When you suddenly deliver something that makes a user smile, that's what it's all about. It's about building capability and delivering that kind of thing. At one end of the spectrum, it's an analytical user interface, map-based or running statistical scenarios. At the other end of the spectrum, it's blending live, virtual and constructive simulation. That can be consumed through VR, through AR, through your traditional collective and synthetic training environments, where you're consuming the synthetic environment via a big simulator or via desktop battle simulation software. And we work with a number of different partners to enable the reuse of their capabilities. So they get all the enhancement and all this realism and efficiency, but they don't need to change a lot of the way that they consume, unless they want to. The bottom line is that the models being consumed by a soldier in an AR headset, a soldier in a HoloLens 2 headset in the field, experiencing this live-virtual-constructive blend out on patrol on Salisbury Plain in a training scenario, those same models, those same datasets, can be reused for other applications across defense. They could be training against the same models that PJHQ are using to plan live operations. And we're seeing this reuse happening now. The fact that you have this ecosystem of models and data being pulled upon by different applications is a complete step change in the way defense consumes software and understands how it can reuse capabilities from different areas. And of course, there's continuity there which can be really powerful.
Kyle McGroaty: Joe, is this concurrent, real time? Because it strikes me, having had the experience of wandering around aimlessly on Salisbury Plain as a young squaddie, or a young team leader or platoon leader or section commander, whatever, as I'm going through the process of my section or company attack, whatever it is, and I am reacting to a scenario, the ability to pull that model back through one step, two steps, to a command post, so that they are able to see one of two things. A, the model works: their planning processes are in place, their combat estimate or their whole planning process has worked as required. And B, that the troops are trained as you would assume they are trained, because there is a difference between this particular unit can do this on paper and this particular unit has been trained sufficiently to do it in real life. Which kind of leads to a broader observation, at least on my part, which is that, if that's the case, you're able to show not only the value of the model, and I appreciate all models are wrong, but some are useful. But also some models are useful because they are wrong. They demonstrate intelligence gaps, they demonstrate capability gaps. They demonstrate inefficiencies in an organization that you would have assumed had been weeded out through the evolution of constant combat for the last, what are we in, 22, 23 years? A friend of mine was talking to me about how one of his mates in his unit had been out to Afghanistan at the same time as his father had been out to Afghanistan. I think, wow, two generations fighting the same battle. We should have weeded out a lot of those quirks of our militaries, but this strikes me as a real way to find in real time where those exist, where you might not have thought they did, and conversely, where things work surprisingly well, where you'd always assumed that they didn't.
Joe Robinson: Yes. Absolutely. I suppose this is the aha moment, really, which is when you see a real plan, a real operational plan, being trained against in the room next door, essentially, with a unit running through that plan through a games engine, through a kind of virtual simulation of that scenario, actually going through their TTPs and running through that mission essentially as a mission rehearsal. They're running through it as a training scenario, and then all the data that is pulled off that can be brought back into the planning process to refine the plan and say, "Hey, look, when we went and tried this for real, it turns out that you got the timing wrong and the vehicles didn't get here in that space and time, or you didn't account for the fact that we had a gunner in the turret who wasn't trained to operate the HMG because he hadn't been through his competency assessments or was out of date, and therefore we weren't able to bring that weapon system to bear in that scenario as effectively as we could have." Seeing that happen is a transformative element of a synthetic environment, of the capabilities that companies like ourselves can develop. But we should also acknowledge, and come back to your example, Kyle, of walking around, hopefully not aimlessly, on Salisbury Plain.
Kyle McGroaty: Tactical bundling.
Joe Robinson: Tactical bundling. Patrolling around. We must come back to the realities here of the challenges involved in that type of scenario. It's all well and good doing it within a training environment where you have great bandwidth, plenty of processing power and plenty of compute power available. And of course, cloud capabilities are now getting to the stage where you can deploy these edge computing devices; you can end up with a cloud in a box right at the front line, right in a forward deployed location, to do this type of activity. But the realities of bandwidth, of data, of ruggedized equipment, doing stuff in the field is always going to be more testing and more challenging. Capabilities are improving and advancing all the time. But if you look at some of the sentiment and some of the recent literature around the digital backbone, the idea of the MOD producing this digital backbone on which to hang a lot of these capabilities: focusing on networks, focusing on connectivity, focusing on how to leverage 5G, how to leverage cloud compute. You need all of that messy backend, for want of a better term, for the soldier in the AR headset to really get the benefit of the software. Getting it working in the field is really that final frontier of capability. But in recent months we've seen real advances in that area, and it's something that is pretty exciting to see come to life.
Kyle McGroaty: Yeah. And you're lucky, because you're in a space where a lot of that... You don't have some of the challenges that you would have had with IT systems five, ten years ago: classification issues, limited access, large amounts of backbone. What is it, the Android Team Awareness Kit, ATAK? You download that, it works really well. Is it particularly sophisticated? No. Is it particularly sensitive? No. But you can download that among a team, you can upload coordinates, basic orders, and you can coordinate a team on the ground in quite austere conditions, so long as you have a mobile phone that is able to connect to some sort of data. I think you're really lucky there. I do have a question for you about bias, though, because it always struck me, what's the phrase, that armies always fight the last war that they engaged in, or they always train for the last war that they engaged in. And it also strikes me that you see a lot of training, and I think this is true across militaries, it's not a disparaging remark about any of them. You see a lot of training, the training is completed, and there's a tendency to assume that the capability exists because the training for that capability has been undertaken. But you don't really know until someone's shooting the other way and you've got to deploy it in anger at a moment's notice, when everything is going wrong. You've got the human bias involved, you've got machine learning and artificial intelligence algorithms running in the background. You've got the blend of human and artificial biases, and being aware of those and how they may impact your capability is, I'm assuming, something you've already thought about. And if you have, how far have you got down that road? Because it's always a difficult thing to try and read out or engineer out of a solution.
Joe Robinson: It's incredibly difficult. And I don't think it's ever possible to, well, it may be possible in the future, but I don't think right now it's actually possible to fully eliminate all of the bias that you've talked through. There's always going to be a degree of it. I think the important thing is firstly recognizing that it's going to be there, then trying to surface that with the decision makers. Your point, Kyle, plays to a wider question here, which is around the challenge of adopting a capability like this. That example of bias is a fantastic example of the challenges around adoption and the reality of technology colliding with culture, often in training. A decision maker, or rather someone in a command appointment, take a battlegroup commander: if they're not deployed on operations, they may get two big training events, big collective training events, where they get to roll their battlegroup out and test themselves in that environment. Now, you can recognize that there is bias in the algorithms, in the system, but you often have to go to the other end of the spectrum, which is to dial down some of the complexity that would even cause that bias in the first place, because the decision maker doesn't want to be seen to be failing. The commander doesn't want to be put in a position where they roll off the start line and within half an hour of the battle they're in dire trouble. The reality of something like collective training is that there's always a need to gradually dial up the complexity in the system, to ensure that the decision maker, the unit that's running through, has the opportunity to build up their TTPs, to exercise themselves in the way that the training objectives suggest, and to respond to increasing complexity as they do better and better. You add a bit more in as you go. Often in a synthetic environment, you've got to be careful what you wish for, a little bit.
You can present a lot of the complexities of the world very quickly, and that can run counter to the culture of producing very structured training objectives, where you have to tick off specific capabilities. You can begin to surface and recognize that there's bias in the system, in a lot of the algorithms that are supporting the delivery of this capability, but actually you need to start back from the principles of what that training audience is trying to achieve, and then tailor and calibrate the complexity of the environment. So you say, "Okay, I'm going to bring in more complexity, which is going to bring a bit more bias, and here it is," and sort of stack it up within the training environment. And then there's the adoption challenge, the culture of adopting these new technologies, and the policies and processes and systems. Policies are a critical thing to consider when you look at these types of capabilities: what policies do we actually have in place? What policies do defense departments and the intelligence community have in place to understand these types of biases, and to ensure that adoption of a new capability like this can happen in an effective, cost-effective and efficient way? It's part of a broader question, which is successful technology adoption. People talk about software as the new differentiator in national security. Hardware will always be important, but software is the new differentiator in achieving strategic advantage, and those who have the advantage in software will win; they will be in a position to have strategic advantage over an adversary. But actually there is a kind of comma after that statement, which is that it's the organizations, the governments, the countries that can successfully adopt new technologies and bring the levers of security and policy and decision-making together alongside the technology, understanding the biases and understanding what they're seeing.
Those organizations will be the ones that are successful, as opposed to the ones that look at this purely as a technology problem rather than one that's more holistic.
Kyle McGroaty: I had, years ago, an analyst who was amazingly bright. He was a great influence on my development as an analyst. And he used to refer to nuclear weapons as the Gordon Ramsay problem, where he would say, "You can go to the market and buy the best produce, and you can spend 20 grand on a fancy kitchen, and you can buy Gordon Ramsay's cookbook, but it's not going to make you the chef at the Savoy. There's a certain amount of inherent cultural knowledge and experience that you need to be able to put this complex set of pieces together and turn it into a functioning nuclear device." And that always stuck with me, because you've just described pretty much the same thing. It's not the hardware that's going to be the advantage in itself. It's probably not the software in itself that is going to be an advantage either. It's the culture of an organization that's willing to review its policies and procedures, that's willing to change its own structure, that's willing to devolve decision-making responsibility, that's able to open up information and how it's controlled. That all comes into a cultural shift, and being able to create a simulation where you can do this over and over and over again, and get it wrong with very little cost relative to getting it wrong in the real world, certainly relative to getting it wrong in conflict, seems to be a catalyst for cultural change. And I don't know whether you've seen that amongst different clients, or whether you've seen a difference based on their pre-existing cultures as to who adopts that quicker, without naming names and shaming your clients into working harder to adopt the change.
Joe Robinson: Yeah. It's a great question. I'm optimistic actually, and I'm pretty encouraged. I was previously a scholar of history, and historically defense has actually been very good at this. If you treat defense as a very complex enterprise customer, compared to other enterprises that have evolved over the past decades, defense has a rich history of successfully adopting new technologies. Often the real drive to this adoption has come from international events, because the strategic scenario has shifted: coming into the nuclear age, or being the country that can put satellites into space, or being the organization that can effectively master submersible warfare. There are these big capability leaps where defense has pivoted really successfully to adopt these things, because if they're not successful, the stakes are so high, they're almost existential to the survivability of a nation if its government can't effectively adopt these technologies. So historically I feel confident, and I actually feel increasingly confident working with our incredible customers, these well-meaning, hugely impressive and inspiring people in the defense and intelligence community who are, in fact, quite unlike whatever stereotype you can come up with. They are very open-minded. They recognize the challenges of their own system, and they approach the adoption of new technology with all the curiosity and openness and willingness to change that one could wish for. They approach it with real zeal to drive improvements, because they can see the errors in their own system; they can see where improvements can be made. This is the great thing about software, I suppose: unlike a hardware program, where you get paid to do a thing, you go and build it, and it takes many, many years to deliver capability.
It's very expensive. Software is cheap and quick in relative terms. You can demonstrate value very, very quickly, and the user and the customer can see it and go, "Wow, that's what I want," or, "No, I absolutely don't want that." It's the idea of failing fast: you can make lots of mistakes and you can improve, but there is bias. The only fly in the ointment of my optimism at the minute, and this is really consistent across a number of different innovation activities within the defense and intelligence community, is that we've got comfortable as organizations with failing fast. We've got comfortable with the idea that we're going to try and do software, we're going to see stuff breaking, we're going to expose the inner workings, we're going to improve over time, gradual and rapid improvement to deliver capability. What I haven't seen consistently, at least, and there are some organizations that do this very well, but not consistently across NATO and across the US and the UK, is what happens when things go catastrophically well. When an innovation project suddenly delivers capability out of the blue, or delivers an outcome, an answer, that is scary or that really drives a capability forward, suddenly you need the mechanism, the policies, the processes, the systems to take that innovation project and turn it into real capability, and too often they aren't there. That's your valley of death. That is your "how do I exploit catastrophic success?" This should be in the lexicon, in the language, in the policy and the thoughts of senior decision makers across defense and government at the moment: how do we point at something and say, "Yeah, I want that"? How do I get my system to buy it and turn it into a long-term thing, into a program that we can scale and deliver across different areas? Because software has that uniqueness to it, which is that it's easily repeatable.
To quote one of our investors, Marc Andreessen: software is eating the world. It's eating the world because it can replicate very quickly; you can scale it very quickly, very cheaply. It has an economy of scale to the way it's delivered. Government must look hard at this, and industry has a massive part to play here in helping them to do it, but governments must look harder at their own systems and processes to understand, when they are successful, how to scale that success and how to turn a science experiment or an innovation project into core capability. We see all these innovation hubs popping up all over NATO, which is a wonderful thing. But how do you take innovation into the core? When you do really well, how do you turn that into real capability? That's the challenge of adoption in my mind; that's the central problem that we need to solve at the moment.
Kyle McGroaty: Yeah. We've got a lot of innovation centers; we don't have a lot of adoption centers. I think you could probably be reading my notes on open source intelligence and how that's used, because I think it's a very similar challenge, and that's not a criticism so much as a critique. You're right, there's so much information out there. I mean, Terry and I were talking a couple of weeks ago about HawkEye 360 and the radio frequency collection that they are able to do. 10 years ago, that is not a conversation we could have had anywhere outside of a very small group of buildings, and it would have been a very close-hold conversation. Now you can go and purchase that capability; that is definitely some catastrophic success there. The question is, how do I get it right down to the guy who's probably a sergeant, maybe a young captain, who really needs it?
Terry Pattar: That's exactly right. I think the two things are closely related: the growth of open source intelligence we've seen, and the growth of these kinds of capabilities that we've discussed and that you've talked us through, Joe. Kyle would love to pepper you all week with questions, but I wanted to wrap up with one last question, which is: how do you see the future developing? You talked there about the current state of play and some of the challenges. What's the future for using things like synthetic environments? And is it less about the technology, and more about the policies and the cultures?
Joe Robinson: Ultimately they all have to go hand in glove. What we're really talking about is technology transformation, and transformation, I think, is a word that's too often used, but it is transformation of an organization in order to successfully adopt these types of capabilities, as long as the environment for innovation exists. And I think those tech startups and SMEs in the UK that are looking to build capability for the defense and national security community should take real notice of the new defense industrial strategy, the Defence and Security Industrial Strategy, which was recently launched.
Terry Pattar: Is that coming out of the integrated review?
Joe Robinson: The integrated review, and the prime minister's commitment to science and technology in the UK. My point is that the conditions in the UK, if the policy can be effectively enacted, and the sentiment is certainly there, and the leadership is there, the conditions in the UK are now fantastic for companies to start to look at delivering capabilities for defense. As long as the conditions are there economically, from a talent perspective, and from an ecosystem perspective, connecting with academics and universities and bringing together an industrial base and a supply chain which are increasingly diversified and not centered around two or three big primes, which can stifle the innovation that smaller businesses can bring to bear and slow down the agility of these types of capabilities. Get those conditions right and the technology will come.
Terry Pattar: Yeah. This has been great, Joe, thanks for taking the time to speak to us. We'd love to see more of this in action, and the future sounds even more interesting in terms of what can be achieved, the potential that's there for this type of technology and how it helps with intelligence planning, et cetera.
Kyle McGroaty: Yeah. Hugely impressive. Where were you guys 10 years ago when my tactical bungling needed a whole lot of help?
Joe Robinson: Thank you, Terry and Kyle, for the opportunity to talk, and let's look forward to the time when we can meet each other in the flesh.
Kyle McGroaty: I'll wait.
Terry Pattar: Sounds great. Thanks, Joe.
Joe Robinson: Bye-bye.
Terry Pattar: Cheers Joe.
Joe Robinson: Thank you.