AI for automated OSINT reconnaissance - part two

This is a podcast episode titled AI for automated OSINT reconnaissance - part two.

Speaker 1: Welcome to The World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open-source defense intelligence community. Now onto the episode with your host, Harry Kemsley.

Harry Kemsley: Hello, and welcome back. For those of you who listened to part one of this podcast, you'll know that we're about to pick up the second part now. Thank you for listening. One of the things that I'd like to perhaps start to wrap around this conversation then is the application of AI, where we are comfortable with it, where we are less comfortable, to the intelligence cycle. I know I keep referring back to that, but I want to try and lock this conversation to the practical utility of AI. I think what I've heard in the last 20 or so minutes is the summarization, the relatively straightforward tasks we're comfortable with... It can be used in analytics. It can be used in other aspects of it as well, but at all times we need a human ensuring the prompt is effective in terms of driving it in the right way because it's not, to use your word, ready for prime time. It's not ready to just be released yet. If that summary is good enough, good enough to proceed, one of the things that stands out for me, Jim, is the worry bead that I have that somebody who doesn't know how to prompt, who doesn't have the curiosity or the experience to say, "Well, wait a second," starts to take these things at face value, starts to put in a question and get an answer, and they say, "Well, it comes from an AI, therefore I will use it at face value"... There's danger in that, isn't there? There is a worry that we would have mis- or disinformation being fed through from the model into something that might be quite important. How do we ensure, in the intelligence community, we don't become victims of our excitement about using these technologies? That's the bottom-line question for me. How do we ensure we retain the tradecraft, the governance, and therefore the assurance in the product we produce?

Jim: There's a technical answer, which I won't go into because I'll out-geek myself, but I think, at the end of the day, this is, again, human in the loop. Until we reach a point whereby we are able to fuse these large language models that are constantly evolving and constantly getting better, I'm not going to say smarter, but better at how they extrapolate data, present data, think things through by multiple turns like humans do very rapidly, I think you are just underlining that the community using AI will still need to look to the human to say, "Is that the right thing?" Because certainly in the intelligence world, it's really, really important compared to my news application. My news application is fetching open source statements about a subject. But for intelligence gathering, for anything that might be being promoted to serious decision makers, I think right now... But again, let's not subdue the power of AI for that intelligence analyst or researcher. For example, I can set off searches, just conventional searches, to say, "Go around the internet for 24 hours on a subject." That results in thousands and thousands of web pages. Some will be garbage. Some won't. Now, with a strong prompt analyzing each one of those returns... If you and Sean sat there and wrote the prompt out: when you are analyzing this article that is returned, I am looking for the following and the following only. If you do not see that in this article, reject it. If you see anything that's unethical, reject it. This is what I class as bias. You put the rules in. It all comes down to the prompt when we're talking about... And that's either a prompt in ChatGPT, or it's a prompt in code, and I think it is just something that's often skipped over. We often rush to the wrapped solution, and I think exposing that to folks, the importance of the prompt in those different scenarios... And the other thing I didn't mention, actually, Harry, is the other thing that obviously, for intelligence and for information gathering, is the power of AI and enrichment. Again, as long as the rules are strongly applied, these large language models can enrich information provided, can give us insights, can go and fetch current stuff and put context around it, which would otherwise be complex data. It is all about wrangling those prompts. It's about really gripping the inbound and outbound. Sean's point: garbage in, garbage out. Well, part of the garbage could be the human prompt.
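To make the rule-based screening Jim describes concrete, here is a minimal sketch in Python. It assumes access to an OpenAI-compatible chat endpoint; the model name, the rule text, and the article list are illustrative placeholders rather than details from the episode.

```python
# Illustrative sketch: apply written "accept or reject" rules to each article
# returned by a broad search. Assumes an OpenAI-compatible chat endpoint;
# the model name, rules, and corpus below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FILTER_RULES = """You are screening open-source articles for an analyst.
Accept the article ONLY if it directly discusses the tasked subject: {subject}.
Reject it if the tasked subject is absent, or if the content would be unethical to use.
Reply with exactly one word: ACCEPT or REJECT."""


def screen_article(subject: str, article_text: str) -> bool:
    """Return True if the model judges the article acceptable under the written rules."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": FILTER_RULES.format(subject=subject)},
            {"role": "user", "content": article_text[:8000]},  # truncate very long pages
        ],
        temperature=0,  # deterministic screening, not creative writing
    )
    return response.choices[0].message.content.strip().upper().startswith("ACCEPT")


# Usage: keep only the pages the written rules allow through.
articles = ["...fetched page one...", "...fetched page two..."]  # placeholder corpus
kept = [a for a in articles if screen_article("port activity in region X", a)]
print(f"{len(kept)} of {len(articles)} articles passed the written rules")
```

The point of the sketch is that the acceptance criteria live entirely in the written prompt, which is why a weak prompt puts garbage straight back into the pipeline.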

Harry Kemsley: Yes. Steering it into the big garbage pile in the corner from inaudible work. Sean, I'm going to come to you in just a second because one of the things we talk about in every podcast is tradecraft. I'll just give you a moment to think about it. When, if we haven't already, do we start driving AI prompt engineering into our tradecraft? Or is that really just another version of the Google Boolean search that we've been creating over many, many generations? When do we start really getting hold of this? Or has it already been done?

Sean: It is actually quite a deep question because if the intelligence community is not already engaged with it, then there's something seriously wrong. But I think there are several issues there... There's an education issue in terms of this is how you use it and all the good things that Jim's been saying, particularly about the prompts. There's also understanding the limitations and the opportunities, and that then leads to: what do you need now and in the future from intelligence analysts? And we've covered it a little bit in previous podcasts but not in any great depth. And there are many views, and certainly parts of the intelligence community are doing that. Do we want somebody who's technically very literate that understands the ones and noughts, the coding, all the stuff that Jim pretends he doesn't understand but really does? Or do you want someone who's an expert on geopolitical parts of the world or even understands cultures? In an ideal world, the answer is all of the above, but you're never going to get that. People just aren't that good. I think this is something that probably hasn't been touched on enough because everyone's in the tyranny of the now. I would like to think that in certainly some of the R&D areas and maybe some of the areas where Jim worked before, they are doing that. But if I was to say... If I was sitting on the floor plate of the PGHQ right now talking to my strategic corporal, who's a brilliant analyst, to say, "How are you applying elements of AI? And do you understand that?" I suspect the answer would be probably not as much as they could.

Harry Kemsley: My worry, Jim... We'll start wrapping things up in a minute, I promise, but I can't... I've got to get these next couple of questions out. I was at a conference recently, a NATO conference, where data science and these kinds of capabilities were being discussed, and it was largely dominated by technical people talking in technical terms. And the thrust from that was we need to educate the users. And I tested that statement a couple of times by asking a few questions, and the net answer was our users, to your point, Sean, need to become AI experts. They need to become very, very good at coding, et cetera. And to your point about 20, 25 minutes ago, Jim, about democratization of this capability, that doesn't necessarily mean I've got to become a coder. That means I've got to be able to know how to manage, govern the tool. I've got to learn how to use this tool, not become inaudible able to create the tool but to actually just use it effectively. My worry is that we are approaching this conversation in terms of the application of capability as though it's an engineering task rather than a tradecraft task. Do you agree with that, first of all, that we could be in danger of that? How do we stop that happening in the tradecraft training side of things?

Jim: This is where it gets a bit exciting for me because I've seen... Last year, I saw two examples of this. Again, to steal one of Sean's things about being on the floor plate and asking a member of staff a question, I had that actually in a commercial setting. The CEO was very interested in knowing something that he'd have to get a database architect out to do and probably a data scientist. And when I was sat with the team later on, I said, "Why don't you just give him a chat window on his desk so he stops bothering you," and they were like, "What do you mean?" I said, "Well, we'll just give him an LLM. We'll give him"... They had enough horsepower locally on-prem to keep it private, so we're not sending the tokens up to ChatGPT to then have to connect to our data. Why don't we just provide him the ability to ask questions of the data locally? And these are commercial questions. For example, how many customers do we have in this part of the UK? Again, this is really the power of having the ability to chat in plain English to various data sources. It goes back to the MCP point that we're seeing as a really key emergent technology. The AI, and this is what's happening now, will say, "They've asked me a question about regions in the UK, so I'm going to go and speak to the database and select regions in the UK about this point, and then I'll put that into plain English and send that back." I mean, that's really, really cool stuff and a great use of AI. Again, we're not asking AI to answer a question from its own knowledge. We're actually using it as a plain-English discussion front end in that sense. But sorry, what's the second part of your question?
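A minimal sketch of the "chat window over local data" pattern Jim describes: a local model turns a plain-English question into SQL, the database answers it, and the model phrases the result back in plain English. The endpoint, model name, table schema, and question below are assumptions made for the sketch, not details from the episode.

```python
# Sketch of plain-English question -> local SQL query -> plain-English answer.
# Assumes a local, OpenAI-compatible model server so no data leaves the building;
# endpoint URL, model name, and schema are placeholders.
import sqlite3

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed-locally")
MODEL = "llama3"  # placeholder local model name
SCHEMA = "customers(id INTEGER, name TEXT, region TEXT)"  # illustrative table


def ask(question: str, db_path: str = "company.db") -> str:
    # 1. Ask the model for a single read-only SQL query over the known schema.
    raw = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": f"Schema: {SCHEMA}. Answer with one SQLite SELECT statement only, no markdown."},
            {"role": "user", "content": question},
        ],
        temperature=0,
    ).choices[0].message.content
    sql = raw.strip().strip("`").removeprefix("sql").strip()  # clean any stray fencing

    if not sql.lower().startswith("select"):
        raise ValueError("Refusing to run anything except a SELECT")  # simple guardrail

    # 2. Run the query against the local database.
    rows = sqlite3.connect(db_path).execute(sql).fetchall()

    # 3. Have the model phrase the raw rows as a plain-English answer.
    return client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Question: {question}\nSQL result: {rows}\nAnswer briefly in plain English."}],
        temperature=0,
    ).choices[0].message.content


# e.g. ask("How many customers do we have in the North West?")
```

The design choice worth noting is that the model only ever proposes a query; the data itself stays on-prem and the code decides what is actually allowed to run.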

Harry Kemsley: The actual ability to get people, quote, educated doesn't need them to become technical engineers.

Jim: I don't agree with that. I think, again, where I've seen some really, really cool tech... One of the firms I actually talk to quite often, and I'm an advisor to them on the technical side of life, their folks were creating agents from agents. Rather than have a situation where the AI basically in the background is saying, "I don't know because I don't have the ability to... I don't have the tools to do that," it would actually almost give birth to another agent that was permitted to write more code in a sandbox to achieve the task using AI. And again, as long as it's sandboxed, as long as it's not breaking out, as long as it's not being given permissions to go crazy across all the customers' data and run up thousands of token calls and token costs and drive up the OpenAI bill, these things are doable right now. I still think the whole AI game's a young sport. I really do. I think we've got some amazing stuff from the frontier models, which have democratized knowledge, democratized coding, but what I will say in partial support of those people that you heard, Harry, is what's really exciting is that anybody, and I mean anybody, that has got one quality, which is interest, which a lot of people have, can sit down and say, "I would like to learn to code"... Back in the old days, and I'm old enough to remember the eight-bit times, as I'm sure we all are, how do I do Hello, World in Python? My son or daughter's doing Python at school. What the hell is Python? And we can have those conversations now and get going. Well, how would I save a file? And how would I create a database? I don't understand what you are printing on screen. Explain it to me like I was a child, and it will do that. And again, these are the things which can allow non-coders to start the journey toward a computer science degree and, in some cases, just become very proficient home-taught developers. And I see both sides of that. I see very high-grade software developers using AI to maximize their productivity and expedite work. And I also see people that are just really interested, who just want to expedite their learning but can't access a computer course or can't afford it, maybe.
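As a rough illustration of the guardrails Jim mentions for agents that write their own code, here is a minimal sketch: a hard cap on model calls (to limit token spend) and execution confined to a scratch directory with a timeout. A real deployment would use a proper sandbox such as a container; the model endpoint, model name, and helper names are assumptions for the sketch.

```python
# Sketch of an agent loop that writes and retries a small script under hard limits.
# NOT a real sandbox: production would isolate execution in a container or similar.
import pathlib
import subprocess
import sys
import tempfile

from openai import OpenAI

client = OpenAI()
MAX_MODEL_CALLS = 5       # spending guardrail: stop before the token bill runs away
RUN_TIMEOUT_SECONDS = 10  # execution guardrail: no runaway scripts


def solve_with_generated_code(task: str) -> str:
    """Let the model write and retry a script inside the call and time limits."""
    workdir = pathlib.Path(tempfile.mkdtemp(prefix="agent_scratch_"))
    feedback = "none yet"
    for _ in range(MAX_MODEL_CALLS):
        code = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user",
                       "content": f"Write a standalone Python script to: {task}\n"
                                  f"Feedback from the previous attempt: {feedback}\n"
                                  f"Reply with code only, no markdown fences."}],
            temperature=0,
        ).choices[0].message.content
        script = workdir / "attempt.py"
        script.write_text(code)
        try:
            result = subprocess.run([sys.executable, str(script)], cwd=workdir,
                                    capture_output=True, text=True,
                                    timeout=RUN_TIMEOUT_SECONDS)
        except subprocess.TimeoutExpired:
            feedback = "the script ran past the time limit"
            continue
        if result.returncode == 0:
            return result.stdout          # task achieved within the guardrails
        feedback = result.stderr[-1000:]  # feed the error back for the next attempt
    raise RuntimeError("Model-call budget exhausted without a working script")
```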

Harry Kemsley: Time and money. Sure.

Jim: Again, a really exciting aspect of all of this.

Harry Kemsley: Let me wrap things up, because time is now evaporating on us, by asking you a question to which you're only allowed to use a one-word answer. Yes or no. And if you absolutely have to, you get a sentence because time is short. Let's go forward three, five years from now to getting toward the back end of this decade, in other words. Many of the limitations which we've highlighted in this conversation which prevent us from believing that AI can be let loose prime time... Do you think many of those will be gone, that we'll be less concerned, that we'll be able to just let AI go and do our intelligence cycle for us?

Jim: Yes.

Harry Kemsley: Interesting. I can see Sean's horror at the thought of that. Let's hold that question for another conversation because I'd like to really unpack that. I'd really like to spend more time looking at that because, for me, what I've taken away... And by the way, what I'm going to do now is ask you, Jim, Sean, give me the one takeaway you want the audience to take away from this conversation. If you had the chance to leave them with one thing, what would it be? While you're thinking about that, let me just give you a couple of thoughts that I've taken away from this. We have exposed previously the technical analytical capabilities of AI, theoretically. Today, we've started to look at that more practically in terms of what you described as Street AI and the democratization of capability, which I think we've established in this conversation is very, very good at doing some things that humans don't need to do anymore. Collect, collate, summarize, et cetera. But then all the way through, if your prompt engineering isn't efficient and effective, the chances are you're going to get a lot of noise and not anywhere near the quality of output you could get if your prompt engineering was tighter, was more effective, and I'm sure there's an entire science on that. And then we've also agreed, I think in principle, that educating people to become good at managing this tool is what we need, not necessarily that everybody needs to become a data scientist or a Python script writer, although there are skills to be gathered there. If that's a fair summary, and please, by the way, correct anything that you don't like about that, what would you like the audience to take away? The one takeaway, Jim, from this conversation for you?

Jim: Have fun. At the end of the day, this is amazing, amazing tech. It is for everyone that wants to engage with it. Where you can do it for free, do so. Look at things like offline models, offline local language models. And for those that can use the free tools, do. I mean, this is a really exciting time, and it's unfortunately become an overused statement. But as a 51-year-old guy in tech, with my clients, I always say that this is actually a great time to be in business and be alive in technology because there's so much yet to do on this. And be cautious. Bottom line is there's still a computer responding, not a human, even though it'll come across as one, and the human in the loop remains absolutely vital. But several tasks that have historically been part of the human job specification will probably go, hence my quick yes.

Harry Kemsley: I got that. I got that. Thank you, Jim. Sean, your one takeaway.

Sean: I think my views are fairly consistent now, actually, very much looking at it from an intelligence perspective. It is here to stay, and it's incredibly useful, and it saves us time and brings things together, but it's intelligence, and it helps the intelligence process. It's not factual information. At some stage, you've got to have somebody, or maybe one day a computer, that goes, "Based on experience, knowledge, understanding, previous things that have happened, and just logic, this is how I'm going to weight that to come up with my so-what and the what-if." We're not there with that. It is understanding the limitations as well as the opportunities. And I guess that actually applies to the full use of AI, not just in the O inaudible.

Harry Kemsley: Sure. Sure. Well, Jim, I'm going to press the pause button there because time has now, as always, evaporated on us. Can I say thank you very, very sincerely for your contribution? I really, really do appreciate the fact that you brought some practical and experiential aspects to this beyond just the technical. I particularly liked the linguistics as well. The idea of Street AI and the democratization piece for me is a very powerful perspective. However, what I'd also like to do is invite you back to talk about the coming period of time because I think where we are today is exciting. And if you have curiosity... I'm 61, not 51, and I'm curious. There's still hope for everybody else. Where would we be when I get to 71? That will be a really interesting conversation, and I know there are lots of people who have views on that. But if we can keep it rooted and tied to the practical application of these tools and where they might go as a result, then I think that will be a really, really good second part of this conversation.

Jim: Sure.

Harry Kemsley: Sean, any final words from you, Sean?

Sean: No. Good for me. Thanks. Thanks very much, Jim.

Harry Kemsley: Jim, thank you. Really, really appreciate it. And for the audience, thank you for listening. As ever, if you have any questions or comments, feel free to send them in, and we'll make sure the podcast link that you'll have used to get to us can also be used to reach out to Sean and me or, indeed, through Sean and me to Jim. Thank you for listening.

Speaker 1: Thanks for joining us this week on The World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you'll never miss an episode.

DESCRIPTION

In part two of this podcast, Jim Clover OBE, Varadius Ltd, continues to uncover the evolving landscape of artificial intelligence (AI) in the intelligence community with Harry Kemsley and Sean Corbett. They discuss the fine line between the innovative applications of AI and the critical importance of human oversight in intelligence analysis. Explore how AI is reshaping intelligence gathering, the risks of over reliance on technology, and the vital role of ‘prompt engineering’ for accurate and ethical outcomes.


Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


Jim Clover OBE

Varadius Ltd - Tech Board Advisor, Problem Solver, Mentor and Creator; Founder of EthosCheck.com for AI Model Safety Testing