Machine Learning and the Future of National Security
Terry Pattar: Hello, and welcome to this episode of the Janes podcast. I'm Terry Pattar; I lead the Janes Intelligence Unit, and I'm joined on this episode by Brian Raymond, who runs the public sector business for Primer.ai, a tech company specializing in natural language processing and building applications around it for national security purposes. Brian, hello and welcome to the show. Thanks for joining me.
Brian Raymond: Terry. Thanks for having me. It's great to be here today.
Terry Pattar: Excellent. I didn't do you justice at all in that introduction, so maybe I'll get you to talk a little bit about your background and your current role at Primer.
Brian Raymond: Thanks, Terry. That sounds great. So my background really started at the CIA. I left a political science PhD program for Langley and spent a bunch of years, both in Northern Virginia and in the Middle East, covering a variety of CT and political related issues. From the agency I moved down to the White House, where I served as a country director in the Obama administration on the National Security Council, covering Iraq and ISIS. After that I left government, spent a short stint in investment banking, and then, as I like to say, I was rescued from investment banking by a former CIA colleague of mine who landed at Primer about four years ago. It was a small San Francisco-based startup at the time, about 30 people, and had recently received an investment from In-Q-Tel, which is the venture capital arm of the US Intelligence Community. In-Q-Tel was really interested in Primer for what it was doing to structure and summarize vast amounts of, think of it like narrative text, so a news story, a diplomatic cable, maybe an intelligence report, and the possibilities that that opened up. Over the last four years Primer has deepened its partnership with the US Intelligence Community, the defense community, as well as a number of allies. Our sweet spot is building tooling and platforms that ingest and process enormous volumes of unstructured text in English, Russian, Chinese, multiple languages, and make it more useful for everyone from a strategic level analyst all the way down to tactical units out in the field. The nice thing about it is that once you get great at the models, you can start stringing them together into bespoke pipelines, put them on the networks that the customers need, and tailor them to specific workflows. But at the end of the day you're still dealing with large volumes of text and making it useful for decision makers. And so that's our big focus at Primer.
We're about 150 people now, spanning five offices across the US, UK, and Middle East, and we'll be opening an office in Singapore soon. About three quarters of our business is in the intelligence and defense space.
Terry Pattar: And we can come on to talk a little bit about Primer and the capabilities you have and the work you're doing, but I wanted to touch first on something you wrote recently and published in a blog post online about some of the current and emerging, and I guess touching on some of the future, national security threats that the US and allies face, and in particular thinking about disinformation and the role that plays in how the threat picture is changing. But I wanted to get your thoughts first, especially coming from the background you come from and where you're sitting now: how do you view the current state of threats that national security organizations are facing?
Brian Raymond: Well, this is the issue of our time. I grew up in the 9/11 generation, and counter-terrorism was the dominant focus for the last 20 years. But really since Crimea in 2014, and especially since the 2016 presidential election, our adversaries, adversaries of the West, have had an enormous amount of success in this asymmetric, gray zone type of competition, in terms of eroding the pillars of Western democracy, which is a shared understanding of our collective experience and what is truth. It's cheap, you can do it efficiently, you can do it at scale now with a lot of the technologies that are coming online. And it's almost a perfect asymmetric tool against the West, because of the values that our governments hold and the values that a lot of our adversaries do not, which leaves those adversaries with a proclivity and a willingness to actively undermine a shared understanding and truth, and to do that with the arms of the state. And so you hear this all the time when you look at the testimony from, for example, our commanders in the Pacific [inaudible] and the threat they're facing from the PRC, or what the Russians are still doing today, as well as other second tier actors. This is a big and growing problem that seems every year to snowball, and so my expectation is that over the next two, three, five, 10 years, we will need to reorganize, at least in the US as well as within NATO, in order to counter this more effectively than we're doing today, which is treading water at best.
Terry Pattar: Yeah. And I think it feels like we're not well geared; we're not operating the same way that our adversaries are. When you say we'd have to reorganize, do you see efforts towards that already happening, or do you think that's still lagging and it's something that actually we need to do more on?
Brian Raymond: Without directly criticizing a lot of the folks that are [crosstalk]-
Terry Pattar: No. Yeah. Indeed.
Brian Raymond: ... working every day on this, the point that I made in my article was that our playbook for fighting back largely mirrors the Active Measures Working Group that was stood up in the US in the early 1980s to counter Soviet propaganda. It's predicated on an inter-agency process to identify disinformation and shine light on it. And look, that was incredibly effective. It brought Mikhail Gorbachev to the table; he actually told the KGB to stand down on a lot of their disinformation efforts in the latter half of the 1980s because of the efficacy of that Active Measures Working Group in shining light on it. After the fall of the Soviet Union, that playbook was basically frozen, and after Crimea happened, it was dusted off. You had the Global Engagement Center stand up at State. You had a lot of the responsibilities for this pushed down to SOCOM, but really it was: you get an inter-agency group together, you identify it, you try and shine light on it in public. And information moves a lot more quickly today; it's much more diffuse. You don't have three main television networks anymore, you have a multiplicity of different avenues, and now you have memes. Could you even imagine memes in the 1980s, or things like that, and how do you fight that? So it requires an entirely different toolkit. This is the opportunity for us to step back and take a fresh look at how to counter this, not just from a US perspective, but from the US and its allies, as well as from a government perspective and the private sector coming together to counteract it, because this is probably not a problem that's solvable just by, with and through US government or Western government organizations.
Terry Pattar: Do you think that the threat is such, especially with the gray zone activity that you've described, that it's just difficult for government entities, military, defense, et cetera, to anticipate how it's going to shift and change and what might pop up next? Especially when, as you say... You touched on memes. A meme can pop up and have a big impact in a certain location or place.
Brian Raymond: Well, what you're seeing today is kind of massive efforts to build public-private partnerships around these fact-checking clearing houses. Facebook has done a lot of work. They've been the subject of a hell of a lot of criticism, but they've also done a lot of work looking at the efficacy of different approaches for countering mis- and disinformation. And I've seen a lot of mixed results, in particular on fact-checking. What seems to be really important is counter messaging whatever that mis- or disinformation is right at the start, getting at it right from the get go, to almost tamp it down before it becomes a brush fire, before the brush fire turns into a wildfire. There was a nice article in WIRED Magazine last fall about what we're doing with SOCOM on mis- and disinformation. And our thesis is that you're going to need to be able to do three things. One, you're going to need to detect bot-amplified content online; lots of great folks are doing that today. Two, you're going to need to be able to look at artificially generated text, so synthetic text. And if it's not bot generated and it's not bot amplified, and it's just human troll farms, you're going to need to be able to find that too, and the way to do that is through natural language processing, so AI at scale. And three, map the information landscape in almost real time to understand what are the new claims that are catching hold, who's saying them, what parties are involved, so that you can give the folks that are responsible in the government and on these platforms a fighting chance at at least giving the recipients of those messages some sort of truthful counterpoint to what's being said. How that's implemented and how that's done, there are a lot of policy questions that are going to need to be grappled with.
Right now the challenge is how do you build the tooling so that you could even understand what is being said across the information landscape at the speed of relevance, because right now it's days or weeks or months later that you end up seeing it.
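[Editor's note: Brian's third requirement, mapping the information landscape in near real time, can be sketched in miniature. In the toy below, all account names, posts, and claim strings are invented, and matching is done by exact substring; a production system of the kind he describes would use NLP models to cluster paraphrased versions of the same claim.]

```python
from collections import Counter, defaultdict

def map_claims(posts, claims):
    """Tally which accounts are amplifying which watched claims.

    posts:  list of (account, text) pairs
    claims: list of claim strings to watch for (exact substring match
            here; a real system would match paraphrases with NLP models)
    """
    spread = defaultdict(Counter)  # claim -> Counter of accounts repeating it
    for account, text in posts:
        for claim in claims:
            if claim.lower() in text.lower():
                spread[claim][account] += 1
    return spread

# Invented example data
posts = [
    ("@acct1", "Breaking: the vaccine contains microchips!"),
    ("@acct2", "Sources say the vaccine contains microchips."),
    ("@acct1", "Apparently the election was moved to Wednesday."),
]
claims = ["vaccine contains microchips", "election was moved"]
spread = map_claims(posts, claims)
print({c: dict(accounts) for c, accounts in spread.items()})
```

Scaled up and windowed over time, tallies like these are what would let an analyst see a claim "catching hold" and which parties are pushing it.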
Terry Pattar: Yeah, no, that's it. And I think we've seen, I'm sure you've seen as well, examples where that need for speed has held up efforts to counter that kind of messaging, disinformation, et cetera. By the time it takes to put anything out, it's gone; you're almost onto the next issue or thing that's arisen. But is there also a problem, though, in that, let's say we get to that stage where we're much better at spotting it, identifying it, understanding and assessing disinformation as it comes out, that actually countering it is still a massive problem? Doing something to stop it, doing something to push back against it, is challenging. And, I almost want to say in terms of fighting fire with fire, we can't necessarily go to the same place in terms of using information in the same way as some of our adversaries, because, well, it'd be useful to get your thoughts on this: do you think that we're more sensitive to those kinds of things being exposed, and to being criticized for doing them, than our adversaries might be?
Brian Raymond: Yeah. The cost of being wrong, of pumping out factually untrue information, is incredibly high, as it probably should be. But look, this is where we have an opportunity to take a fresh look. Right now, for example, if an ambassador or a military commander wants to counter message, each one of them has their own public affairs office that it has to go through. Then it has to be coordinated back in Washington, and then it has to go back out to the Philippines, or it has to go back out to Singapore or Japan. We're not doing ourselves a lot of favors in terms of streamlining the bureaucracy. These sorts of problems have been solved in other areas. But look, this is probably as much a technology problem as it is an organization problem, a human organization problem. And so the bad news is that it's a huge problem. The good news is that it's a problem of our own creation, so it's a problem that we can solve. That's something that Sue Gordon likes to say: if we created it, we can solve it. And in this case, I think we can be inspired by what that Active Measures Working Group did back in the 80s, even if we move away from its template. We were totally helpless in the late 1970s at countering Soviet propaganda, but it was a small group that came together and said, we need to move fast, we need to move forcefully, but we also need to be correct, and it ended up working. Now, that doesn't mean that that template is what should be used today, but I think if we solved it once, we can solve it again.
Terry Pattar: Yeah. That's a really important point. And I think it's interesting what you say there about it being a technology problem, more than a process issue or an organizational issue. Because I think in many ways that's almost easier to solve, whereas trying to realign processes or reorganize to counter a different threat is often something that takes a lot more time.
Brian Raymond: One of the biggest asymmetries here, Terry, is that it's orders of magnitude cheaper to pollute the information environment with falsehoods than it is to find whatever has been put into the information environment that's polluting it and to counter it. It's far cheaper for the PRC and the Kremlin to pollute than it is for us to clean up the oil spill, as it were. I think that's the crux of the issue from a technology standpoint.
Terry Pattar: Interesting. Yeah, no, indeed. Well, I think you've given some hints of hope there for us at least in terms of that technology being able to help us in the future, which sort of leads me on to asking about Primer and maybe getting you to describe a bit more about the capabilities and the technology you're developing at Primer. Because I find it quite intriguing some of the work you're doing, especially the things you're doing to create tools that will support analysts, intelligence analysts in helping them get on top of large amounts of information and make sense of it and then actually use it in reports, et cetera. So maybe you can talk us through some of the capabilities you're building and developing there.
Brian Raymond: Absolutely. Thanks, Terry. Look, there's an interesting study done a couple of years ago, which looked at a typical intelligence analyst covering a second tier country, second tier in terms of the volume of reporting on that particular country. And it said that, look, in 1995, for that analyst to stay up to date on that country, they'd probably have to read about 20,000 words a day. By 2015 that had increased tenfold, and by 2025 they expected it to be in the millions of words per day. The intelligence community realizes this; they're not dummies. And they also realize they can't hire their way out of this problem. The only way out is to pair analysts with algorithms in creative ways to accelerate rote work, and then to also uncover connections and insights that were buried in the data. And so for us and our mission, how we think about it is really threefold. One, the consequence of missing information is incredibly high for these analysts. There are lots of adjacent professions that we serve where people have to read a lot, but the cost is probably highest for intelligence analysts if they miss one key report. So one, serving them in that domain; second, as I mentioned, identifying connections or insights that are buried within the landscape; and then three, helping to clean the data, or at least understand where there may be mis- or disinformation as it's coming in. What we're doing at a practical level though, to lift the veil, is three things. One, we're structuring all of the unstructured text that's coming in. So what does that mean? You have a report come in and it mentions 10 people, and it has a bunch of facts about those people: locations where they were, people they work with, titles that they have. We're able to find all that information and create that structure.
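[Editor's note: as a deliberately tiny illustration of that structuring step, the sketch below pulls people and locations out of a report using hand-made lookup lists. The report text and names are invented; an actual pipeline of the kind described here would use trained named-entity-recognition and relation-extraction models, which can find entities they have never seen before, rather than fixed lists.]

```python
def structure_report(text, known_people, known_places):
    """Turn narrative text into a structured record (toy version).

    Matches against fixed lists for illustration only; real systems
    use trained NER models instead of lookups.
    """
    return {
        "people":    [p for p in known_people if p in text],
        "locations": [loc for loc in known_places if loc in text],
    }

# Invented example report
report = ("General Aziz met Minister Haddad in Baghdad before "
          "his delegation continued on to Erbil.")
record = structure_report(report,
                          known_people=["Aziz", "Haddad"],
                          known_places=["Baghdad", "Erbil"])
print(record)
```

The output record is the kind of row that could feed the knowledge-graph side of the workflow described below.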
Second, and this is where the mind usually goes when you think about AI: summarization. We can summarize individual documents, automatically cluster documents, write you an entire briefing memo about it. And then third, passive monitoring, so think indicators and warnings. We work with analysts to encode their knowledge into the base algorithms, and then have those algorithms running against millions of documents a day to surface those that may contain indicators and warnings for issues that they care about. Going back to one of your past podcasts and some of the great work that's been done recently by Zachary Tyson Brown and Carmen Medina: they made a point in their Cipher Brief post that they'd received some feedback that they should have talked a bit more about AI and open source, but that it would only manufacture the same sort of uninspiring analytical products that are being made today. I think that's totally true, but it also misses the real revolution that's going on on the structure side, and this coincides with what Janes is doing as well. When I was an analyst, my job wasn't to summarize. That was what I did as a briefer. I hated it. I had to wake up in the middle of the night and summarize. But really, if you were thinking about 60% or 70% of the job, it's: where are the tanks? Where are the people? Where are the organizations? Where are they going? Who are they connected to? And drawing those connections. And that is still the exact same process today in 2021 as it was in 1945. We call it the left screen, right screen workflow internally. Or if it's 1945, the left stack, right stack workflow. I'm reading documents that are coming in over here, I'm gleaning insights, and then I'm recording those insights in some sort of knowledge graph over there. What we're doing on the structure side is we're automating that jump.
And so when you come into the workflow now, we've curated all that information and it's ready for analysis. You're not spending 70 or 80% of your day pruning it, cleaning it, curating it, getting it ready for analysis; it's all ready for analysis. An example here: there's been a lot written, a lot of ink spilled, on the Obama administration decision making around Syria. At a certain point, there was a request made for the inter-agency to look at every time the US has supported a foreign insurgency, to look at what happened before, what happened during, and whether or not it resulted in an outcome that was favorable for US foreign policy objectives. And I remember that that took multitudes of people offline for weeks to curate that information. What we and others in the space are focused on is: can we cluster all those relevant documents? Can we identify the types of events within those documents? Can we identify all the key entities and then string together all of that information in order to get it ready for analysis? But then two, once you have those strings, and once you have those timelines and have all that information curated, you can start building models on top of it and start moving into the predictive realm. That's just cost prohibitive today, when everything is still artisanal and hand curated. But when you have machines doing a lot of this rote work, which machines are great at, it can unlock time for humans to do what they're best at, which is being creative and thinking about second order implications.
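[Editor's note: the passive monitoring capability mentioned above, analysts encoding their indicators and then running them against the daily document flow, reduces in its simplest keyword form to something like the sketch below. The indicator phrases and documents are invented, and real systems encode analyst knowledge in trained models rather than keyword lists.]

```python
def flag_documents(docs, indicators, threshold=2):
    """Surface documents containing enough analyst-defined indicator
    phrases to warrant human review (toy keyword version)."""
    flagged = []
    for doc_id, text in docs.items():
        hits = [ind for ind in indicators if ind.lower() in text.lower()]
        if len(hits) >= threshold:
            flagged.append((doc_id, hits))
    return flagged

# Invented indicators and documents
indicators = ["troop movement", "border crossing", "mobilization"]
docs = {
    "rpt-001": "Routine trade statistics for the quarter.",
    "rpt-002": "Observed troop movement near the border crossing at dawn.",
}
print(flag_documents(docs, indicators))
```

The threshold is the crude stand-in here for an analyst's judgment about how many indicators should trip a warning.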
Terry Pattar: I think that's really interesting, because when we hear a lot about AI, about machine learning, and about the tools that people are building to do or support intelligence work, I think, and I'm sure this has been countered by many people, there is still a latent fear amongst analysts that they will somehow be put out of a job by some of these tools. This has cropped up on a number of our previous podcasts and in discussions I've had with other people. But I think everyone's in agreement that none of the tools right now are at that level. Of the three kinds of capabilities you mentioned that Primer is developing, the second one, about producing reports to be able to brief people, I think that's the one that would have those kinds of alarm bells ringing and people raising an eyebrow and saying, "Well, hang on. That's what I do, that's my job. And you say you're building a tool that's going to do my job for me, or rather for my boss, and make me redundant." But what you've said there is certainly what I've heard from people working in this area or building these kinds of capabilities: it's not to replace analysts, it's to free up their time so they can do more interesting things. Produce more insightful reports and briefings off of the information that's out there, rather than spending a lot of time sifting, curating, collating, organizing their information or structuring it, which is hopefully what the tool can do for them. To what extent is that true from your perspective?
Brian Raymond: It's funny, this comes up a lot. I would say that about a fifth to a quarter of Primer's headcount are former IC and DOD folks. We have our fingerprints on everything, and we're building the tools by, with and through our customers. One thing that gets us most excited is that we're building tools that automate away the work that we hated doing when we were in the seat. I remember sitting in embassies as a briefer at 3:00 AM summarizing different cables, and I was like, "This is horrible. I should be able to automate at least a chunk of this, because this is not high value add." What's high value add is, once I have that information summarized and curated, thinking about, "Okay, what is the ambassador or the commanding general going to want to know about X, Y, or Z?", then going to hunt all of that down and curate it, so that I actually have a deep, rich brief that I can deliver to a policy maker rather than just the what. I can go far beyond the what into the why, the so what, and the what next type questions, which we just don't have enough time in the day to do. And we can integrate and dovetail these technologies really elegantly into the workflows. But stepping back here for a second: where is all this going, why does it matter, and how does it fit into the bigger picture? It's fascinating, in that we have such a close partnership with various organizations that are doing a lot of great thinking on this, across the intel space, across the defense space, and the private sector. It's really going towards having a single ground truth of what's going on in the world, and being able to automate as much as you can of the rote organization of that, so that you actually have a terrain map of the information environment, the facts of what we know and what we don't know.
Doing all of that curation at scale, almost instantaneously, is what we're doing for analysts at the strategic level. But then down at the operational or tactical level, we're doing a lot of work with the Air Force today and others for JADC2, Joint All-Domain Command and Control, which is really this notion of connecting every sensor with every service member, so that anyone can have whatever data they need on demand to understand the environment. And where that's going right now, with the push to the cloud and the push to the infrastructure that the JAIC, the Joint Artificial Intelligence Center, is driving, is really to create the operating space, or the common operating picture, for all the services, all the intel analysts, everything. What you're going to need on top of that are applications like what Primer is doing, or other fabulous, not competitors, but providers in the computer vision space, or folks that are turning imagery into something you can crunch on, and then indexing it in almost real time so that you can actually keep track of what's going on in the world at the speed with which everything is moving. For us, we occupy this little neighborhood of language, being able to process internal communications, intelligence reports, open source, to help contribute to that. But it's really exciting to see that start to come together.
Terry Pattar: It sounds fantastic in many ways, in the sense that I think we've both had the same experiences in the past, working in analyst roles where you think, "Oh, wow, this is actually quite tedious, because what I'm producing isn't very interesting." You want to get to those questions like the so what and the what next, which are the exciting questions that you want to work on and be able to explain to customers. But often it's the basic stuff that gets in the way, or that takes up a lot of the time, and you don't get to do the more interesting stuff. So I can definitely see the appeal of anything that helps shortcut that process and get to those end points, which is really what my team at Janes specializes in, I think. I can already imagine the faces of some of my colleagues and teammates kind of lighting up, saying, "Oh, wow, you mean you're going to help us do this quicker?" That is a massive need, I think, for analysts across the board, in whatever roles, whether it's in intelligence or in other fields. But in terms of the information and the tools you're working with, and you mentioned it's dealing with a lot of language, a lot of unstructured information: are we moving to a point where a lot of the, certainly open source, information that we're dealing with is also produced in a more structured way, by AI-led systems or tools, et cetera? I'm thinking of the way that news reporting is going, for example, where in some ways more basic news stories are being pushed out and published online that are automated in the way they're produced. So in a way, have you got machines dealing with information produced by machines? Is that where-
Brian Raymond: Some circularity there.
Terry Pattar: ...we're getting to? Yeah. Yeah. Is that where we're getting to? And does that make it easier in some ways, or does it make it more challenging?
Brian Raymond: This is a tough issue. So the world was captivated last year by GPT-3, at least the nerdy part of the world that we occupy. It's a large parameter language model developed by OpenAI, Elon Musk's shop, and it was incredible. You could put in just a little bit of text and it would write you an entire article about it. It would dream all of it up, or hallucinate it, depending on which journalist you're talking to. And it scared a lot of people, and OpenAI made the decision that they weren't going to open source it. GPT-2, its predecessor, they'd open sourced and thrown out to the world and said, "Go make great things with this." And OpenAI made the decision, "We're not going to do that." They just gave, I believe, Microsoft access to it, and then just a few journalists, and that's about it. During the last 12 months, different developer groups have said, "Well, that looks like fun, let's go build our own." And actually, a couple of weeks ago, GPT-Neo, a smaller version of GPT-3, was released into the wild. Now it's available to anyone that wants to download it and use it. What's different is that with GPT-2 and earlier language models, we were pretty good at training models to go find that synthetic text. It had certain tells: you could train a model, feed in training data, and then go hunt for it and identify it. We're entering a realm now where it's almost impossible to discern what's synthetic and what's not, or we will be sooner rather than later. And so that's why getting into the message of what's being conveyed, and understanding it from a natural language understanding perspective, is going to be absolutely critical, because there won't be tells that you can pick up on from an information advantage type perspective. But then also understanding not just a single message, but how our adversaries are potentially doing A/B testing on the population.
So when it's cheap and you can generate an infinite amount of text for next to nothing, you can take a marketing approach. From some of the reports that have come out about the manipulation that was done over the last several years, we know that the Russians are already doing A/B testing on a small scale on social media, but with these large parameter language models it's going to kick into overdrive. The curve is increasing at an increasing rate.
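[Editor's note: the statistical "tells" Brian mentions for GPT-2-era synthetic text can be illustrated with one crude feature: machine-generated or copy-paste troll-farm text often shows a flatter, more repetitive token distribution than human prose. The metric and example strings below are invented for illustration; real detectors combine many features, such as token-rank and perplexity statistics, in a trained classifier, and as he notes, these tells are vanishing for newer models.]

```python
def repetition_score(text):
    """Fraction of repeated tokens: one crude statistical 'tell'.

    Higher means more repetitive. Real detectors combine many such
    features (perplexity, top-k token rank, etc.) in a classifier.
    """
    tokens = text.lower().split()
    return 1 - len(set(tokens)) / len(tokens)

# Invented example texts
human = ("The delegation arrived late, visibly tired after the storm "
         "delayed every flight out of Frankfurt.")
looped = "the report said the report said the report said the report said"
print(repetition_score(human), repetition_score(looped))
```

A single feature like this is trivially easy to evade, which is exactly the point being made: defenders are being pushed from surface statistics toward understanding the content of the message itself.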
Terry Pattar: Wow. So you've touched on what's going to come next, but are there any other developments that you see coming in your field where you think they could have a national security effect, or any implications for us when we're looking at national security related issues?
Brian Raymond: I've said a lot about the bad news. On the good side, I think some of the really exciting developments will stem from the infrastructure that's being built today. Over the past decades there's been a tight information-sharing relationship among the Five Eyes and NATO. We're moving now to a world where we're going to have shared clouds between the Five Eyes, between NATO, between other allies. With those clouds and large volumes of data on them, you'll be able to train models, you'll be able to share models, and you'll be able to encode in those models the learnings and expertise not only of your particular unit, but your broader organization, your broader service, and also your sister services. Couple that with technology we and others are developing: no-code AI model labeling, training, and deployment platforms. Today you have to pick up the phone and call someone to go get some labeled data, you have to call a data scientist to run the training, you need to call a solutions architect to deploy those models. There's an enormous number of friction points in the process of actually using AI in an operational context. We, along with others in the field, are attacking all of those friction points very aggressively: one, to make it much easier, once you have those clouds and the data, to do the model training and model sharing. But more important is the dynamic context. Once a confrontation erupts and you learn that your model's not performing very well, you can take it offline, retrain it in half an hour or an hour, and redeploy it, whereas today weeks is typical. That's the tightening of the OODA loop, the observe, orient, decide, act loop, nesting the models within it so that they are truly operationally ready and can be readily retrained with a high degree of performance.
That's becoming unlocked with the infrastructure that's being built today, not only in the US but also abroad.
Terry Pattar: It sounds incredible and I think we're going to reach a point I guess, well, will it creep up on us gradually, or will we reach a tipping point where all of a sudden we'll see a big shift? I know that's a sort of hard question to answer, but what's your feeling on that?
Brian Raymond: I think it'll be a slow drip, and we'll wake up and say we're in an entirely new world and didn't even realize we crossed the boundary, because it's going to be incremental. But there is so much incredible innovation going on in so many different parts of the military and in the private sector. We have partners at Microsoft doing incredible work on creating that infrastructure, and they're focused like a laser on enabling next generation AI applications like Primer's. Other partners at Palantir, for example, are doing the same, enabling the work that we do by curating the data, creating the infrastructure that's required, and creating UIs that are intuitive and easy to use, so you don't need a PhD to go and actually do AI. That's where this is going. It's also going, and this is something I touched on a minute ago, but I really want to dig in here, towards this democratizing of AI: taking it out of the hands of scientists and putting it into the hands of the folks that are confronting these challenges every day, so that they can automate the things that are automatable and augment the things they need augmentation on. I think the big tipping point will come when we make it easy enough and fast enough for the folks on the front lines to encode their tacit knowledge into these models and use them, and lower those costs. That's when we're going to see a paradigm shift in how we're pairing algorithms with analysts and operators.
Terry Pattar: Yeah, that's really interesting. I was almost envisioning the human aspect of it, where we now have that generational shift from the previous generation to what have been termed digital natives, people who've grown up with some of the tools that we probably take for granted nowadays. Do you think at some point we'll have a generation that are AI natives, essentially, who will have grown up with these tools from when they're young and be accustomed to using them in a way that we almost can't conceive of yet?
Brian Raymond: Terry, I'd almost flip it around and say a lot of those folks are already here. That generation is already here, and they're blocked right now. They're blocked by the infrastructure, they're blocked by the data, they're blocked by thorny issues like ATOs, Authorities to Operate, on classified systems and moving across domains, things that are nightmarishly bureaucratic but incredibly practical hindrances. Look at what General Groen has done at the JAIC to reorient the JAIC: right now they have a $250 million solicitation out for data curation, help getting data ready to analyze. With the Army you have Project Convergence, you have Navy Project Overmatch, and you have ABMS with the Air Force, supported by Kessel Run and MIT and other national labs. What you're starting to have is a critical mass. You have the 700-page report by the National Security Commission on AI, which is going to shape a lot of policymaking over the coming years. And you have messaging from the Biden administration that they're going to continue and amplify what began under the Trump administration in really investing in AI, so that we can compete with our near-peer adversaries. This is happening. It's just that the world's not ready for it yet. There's such a hunger, such an appetite across the services to do this, but a lot of the practical plumbing that's required to enable it is being worked on right now. Once that's fixed and put into place, I think a lot of pent-up demand is going to be met. And then that's going to have a spillover into other parts of the services, which might not be as technically native but are going to want to come along as well.
Terry Pattar: Interesting. So do you think there'll be a first few projects or capabilities that will show exactly what's possible, and then others will follow along and say, "Actually, we see that now. We now need that"? And then the organizational plumbing and the culture shifts and everything else we need to reorient the way organizations work, that will follow along?
Brian Raymond: Yeah. And I think, with what Preston Dunlap and the team at the Air Force started with ABMS, what's continuing with what NORTHCOM is doing around JADC2, and what the Army is doing with its various demonstrations, people are convinced that this is how we have to go. Now it's just a matter of how we get there as quickly as possible. Folks like David Spirk, the chief data officer at the Pentagon, talk a lot about the work the department needs to do, as well as the work it needs to do with its allies, to get data ready and have a data-first mindset, so that, one, you can get the data you need to train; two, you can have the GPUs and the cloud infrastructure you need to train and run the models; and three, you empower the ground units who know the problems they need to solve to pick up that tooling and run with it. So from a leadership perspective and in terms of where they're going, there's so much great work going on right now, but it's in little pockets. It's getting a lot of visibility, though, and folks are generally convinced. What we should be looking at is the finalization of the JADC2 strategy work that's going on right now. Once all the leadership's installed at the Pentagon, that strategy is codified, and you start having doctrine fall out of it, then I think that will mark the turning point into this AI-enabled and AI-driven workforce that already knows it wants it.
Terry Pattar: Yeah. That's pretty interesting. For me personally, I'm thinking the turning point is going to come when I've got an AI tool that can summarize that 700-page report you mentioned, so I don't have to read it myself.
Brian Raymond: Absolutely.
Terry Pattar: It's been great talking to you, Brian. I have so many pent-up questions I could pepper you with, but most of those would reveal my own ignorance and probably bore you with how basic they are. So I look forward to hearing more about the capability as it develops, to keeping in touch with you, and to finding out more about the work of Primer and what you're involved in. But also to seeing how a lot of these big initiatives develop, and how we collectively, across the national security space in the public and private sectors, come together, as you talked about, to try to counter some of these big threats we're facing and how they might develop in future. Exciting times ahead, I think.
Brian Raymond: Terry, thanks for having me. I've enjoyed the conversation and look forward to picking it back up again soon.
Terry Pattar: That's great. Thanks Brian.
DESCRIPTION
In this episode of the Janes podcast, Terry Pattar, head of the Janes Intelligence Unit, and Brian Raymond, Vice President of Government at Primer, discuss the global impact of machine learning and the future of national security.