Use of OSINT to support Special Operational Forces

This is a podcast episode titled "Use of OSINT to support Special Operational Forces." In this episode we speak to Gwyn Armfield, Brigadier General, USAF (retired), to discuss how OSINT supports Special Operational Forces in their operations.

Speaker 1: Welcome to The World of Intelligence, a podcast for you to discover the latest analysis of military and security trends within the open-source defense intelligence community. Now, onto the episode with your host, Harry Kemsley.

Harry Kemsley: Hello and welcome to this episode of The World of Intelligence by Janes. As usual, I'm your host, Harry Kemsley, and I'm joined by Sean Corbett. Hi, Sean.

Sean Corbett: Good to see you, Harry.

Harry Kemsley: Good to see you, Sean. Sean, we've been talking quite a lot recently about a variety of things around open-source intelligence. In recent episodes, we talked about a few applications and considerations of open-source intelligence. Today I thought we'd move on, or move back, probably better said, to some things we've done in the past, which is looking at the uses, the utility and the considerations of open-source intelligence in support of operations. Today I thought we might spend some time talking about how open-source intelligence might be used to support special operations. To help us with that, we have a guest with us, Gwyn Armfield. Hello, Gwyn.

Gwyn Armfield: Harry and Sean, thanks for having me.

Harry Kemsley: It's a real pleasure, as always, to have you here, Gwyn. For those that don't know Gwyn, Gwyn Armfield served for nearly three decades on active duty in the United States Air Force. Retiring as a brigadier general, he now leads the RGA Consulting Group, where he advises several clients on leadership, strategy, and technology developments. During his time in the military, Gwyn led Air Force combat controllers and pararescuemen, conducting high-risk, high-return special operations around the globe. He finished his career as the deputy commanding general for Special Operations Forces in Afghanistan, and as the vice director of Plans and Strategy for United States Central Command. For his actions, Gwyn was awarded the United States Air Force's top leadership recognition, the US Air Force Lance Sijan Leadership Award. He's the co-author of the book Lead to Serve and Serve to Lead: Leading Well in Turbulent Times, with co-author Lieutenant General (Retired) Bruce Fister. He's a frequent speaker at corporate and government leadership training events. Gwyn's passion is equipping others to lead well, and enabling organizations to build high-performing, successful teams. Gwyn, it's a delight to have you here. Thank you for joining us.

Gwyn Armfield: Yeah. Harry and Sean, this is a fun conversation. I look forward to getting into it with you.

Harry Kemsley: Very good. Sean, as we do regularly, let's start by making sure we're all on the same page about what we mean by open-source intelligence. Give us your definition, Sean, as you so frequently do: what do we mean by open-source intelligence?

Sean Corbett: Yeah. Thanks, Harry. It's been a bit of a journey, but I think we've got to a pretty sound place on what we think and understand by open-source intelligence. It has to have four main elements. Firstly, it has to be derived from information that's freely or commercially available to all. Secondly, it has to be derived from legal and ethical sources and techniques. Then, in common with the other forms of intelligence, it's going to be applied to a specific problem set or requirement, and it has to add value, the so what, as I call it.

Harry Kemsley: Yeah. That so what and value piece is, of course, what we're going to be looking at today with you, Gwyn, around special operations. Let's start then with a pretty broad, high-level question: where do you see the role of open-source intelligence in supporting special operations, based on your experience, Gwyn?

Gwyn Armfield: I think that's a great way to start the conversation. In the special operations community, we're very human-centric, looking at the intent and actions of other humans that are out there, more so than system-focused, where you see the broader, conventional forces. It's the ability to leverage available open-source intelligence to inform fast-paced and dynamic operations, where you're looking at contextual changes of what's going on in certain specific places. Or you're looking at the population, either in a micro sense or a broader sense, the trends in that population, what's being said amongst the people. To inform unique, I like to call it specialized operations instead of special, but your very unique, highly-skilled individuals performing specific tasks. Those usually have high consequences and usually high risk associated with them, using everything available to us. Now, we'll get into it here in a little bit, but I looked this morning, and the internet at least is telling me there's about five billion people connected. Five billion collectors out there, all contributing to the global conversation. That's one thing, the ability to leverage all the available information for time-sensitive missions. The other piece that Sean and I talk about frequently is the ability to share information. A lot of times when you're doing specialized operations, very rarely in 30 years of doing it, very, very rarely did I see that done unilaterally by the United States. Almost always, you're working with close allies, partners, or the partner nation. The ability to quickly find and share information with them without the bureaucratic foreign disclosure process. Again, when you're doing time-sensitive stuff, you've got to move quick. We'll talk a little bit about information veracity, I think, as we get into today's conversation. But the ability to have readily available information that I can share with my partners at the tactical or team level is super helpful. I'll stop there on that.

Harry Kemsley: Good. That's a great way to start. There are about six or seven things you said there, which we could probably spend the next hour on each, so that's perfect. Let's just tuck in there, Sean. Specialized operations, essentially a human focus where we're trying to understand intent and interactions. I particularly like the point about speed and context, because when you put speed and context alongside veracity, there are risks and opportunities in that space. Let's go and have a look at that in a second. The shareability piece, I'll just take a second to talk about that. Because one of the things we found at Janes in recent times, around the Ukraine conflict, is that agencies are struggling to share sensitive intelligence with partners they don't normally work with. Using open-source intelligence is making that much easier. Maybe we'll come back to that piece in a second. For me, Sean, let's focus in around that human focus, that intent piece, and particularly the dynamic between the shareability, time sensitivity, and the veracity piece. Let's just tuck in there. Sean, your thoughts to get us started.

Sean Corbett: Yeah. It's interesting, we must have done too many of these together, because the two things I just wrote down were speed and context. Now, supporting specialized forces or special forces is always challenging for intelligence specialists anyway, because the agility means that you're not necessarily going to have a fully set-up organization where you've got all the comms you need, you've got the high side and all the rest of it. There's a particular challenge I used to have when supporting special forces in Afghanistan, that you've just got to go with what you've got. But it's got to be good enough to be, and I'm going to use the words actionable intelligence here, I use that advisedly and I don't mean it in the targeting sense, but it's got to be geographically bounded and it's got to be time sensitive. The guys that are mounting operations, whatever they are, they just need what they need to know, but they need it now and they need it as reliable as possible. Now, sometimes there's a balance to be struck between all of those, because you're not going to have the 100% solution right now, but is the 80% good enough? That requires both foundational intelligence that you've got access to, but also the granularity of what you have on the ground. Of course, open-source intelligence has a part to play there, because of the electronic spectrum out there that you can just hoover up anyway. It's worth saying at this point, I think, we might get onto this later, that unclassified does not mean not sensitive. You've still got to protect that. The question then is how long you protect it for, and is it time expired, so it doesn't really matter anymore? Of course, I'll talk about intelligence sharing if we get onto it, which is a really big one for me as well, but there's a huge amount in there.

Harry Kemsley: Gwyn, let's just step a little bit deeper into the kinds of activities that specialized operations touch upon, where open-source intelligence might be useful. If you think about it, before the operation there is planning; during the operation there will be information that might be very contextual, very relevant, very time sensitive; then after it, there is the reaction, the effect, the outcome, if you like, the post-operation assessment. I'm being very general and simplistic, but is there a difference in the utility of open-source intelligence, do you think, across those three aspects of specialized operations?

Gwyn Armfield: Yeah. I think there is, Harry. One of the theories that I buy into is John Boyd's OODA loop, the ability to observe, orient, decide and act.

Harry Kemsley: Right.

Gwyn Armfield: We look at the way we make decisions, and the military decision-making process, a lot of that is a very structured, long-term ability to look at options with facts and assumptions and come back with the best military option to proceed to solve a problem. Usually in parallel to other government actions that are being worked as well, we have the military option. Let me give you an example. Without being specific, about a decade or so ago, there was an event that happened overseas. It was unexpected, no one had seen it coming. They recalled the unit that I was in so that we could respond to it. As I'm driving into work early, like 2:00, 3:00 in the morning, I'm listening to the BBC.

Harry Kemsley: Right.

Gwyn Armfield: BBC has a person on the ground, live voice, telling us what's going on from the area of concern. I get into work, we go into the higher-end intelligence capabilities, and they don't know anything. They've got nothing, because those capabilities take a long time, in the context of a crisis, they take a long time to be applied to a problem. Once they're applied, they're very helpful, but we're in a fast-moving, dynamic situation where we're going to go out the door in a very brief period of time. I'm sitting in the meeting and I'm like, "Hey, did any of the rest of you all listen to the BBC driving in this morning, because it's the only thing on the radio?" Here's what this guy said from on the ground, and I went through about a paragraph of information. It was the only intel that we had, and this was a decade ago. This is a kind of World War II-esque reporter on the ground passing information. Well, fast forward to 2022, and we've got the five billion people out there all reporting, and the ability to sort that out. What I always recommended was triangulating to determine veracity. But the ability to feed that into your OODA loop, whether you're in contact with the enemy and taking fire immediately, or whether I'm going to conduct an in extremis operation in 48 or 96 hours. Or hey, I've got to develop something over the next six to eight months and have influence somewhere. All of those revolve around the OODA loop and the ability to observe, orient, decide and act, and how open source feeds into that, not only to initiate the planning process, but to validate assumptions as you go forward. A lot of times, we've got to go with what we've got, in terms of facts and assumptions. To me, my intelligence collection plan was always geared toward turning my assumptions into facts.

Harry Kemsley: Into facts, right.

Gwyn Armfield: Either refute them or turn them into a fact, because where you go wrong in planning is your assumptions, and we don't spend enough time on those. Then going back, an iterative process to refine those. I think the open source, the ability to use it, there's a plethora of information out there, then determine fact from fiction. We talk about influence and how it can be used to influence nefariously. I think that's critical as you look especially at in extremis response missions that are usually geared toward a specialized force, whether it's counterterrorism, hostage rescue or some other type of event where there's a very no-fail piece to this, a defined mission bounded by time.

Harry Kemsley: Yeah. Your anecdote reminds me of a couple of things. One, which is a war story that I'm not entirely sure Sean can talk about on this podcast, where he got some intelligence from a source on the ground. But it reminds me of Janes; we have contributors all around the world who are on the ground. They might be local academics, they might be tribal leaders, they might be local government. They provide us with insights into what they're seeing and hearing, in terms of sentiment or events on the ground. Sometimes that is the only place we can find any record of an emerging issue. Clearly, once we've started to detect something is happening, we can look for other sources, and then that brings us to the point, I suppose, Sean, in terms of tradecraft that Gwyn just mentioned. That with five billion collectors pouring information into the environment, we don't know that they're all true, but with triangulation and good tradecraft, we can begin to zero in on where the truth is most likely to be. That brings us, does it not, Sean, to the open-source intelligence community's support to ops, particularly in the time-sensitive arena, having to be extremely good at the triangulation process? Finding where things are clustering, in terms of the most likely source of truth, Sean?

Sean Corbett: Yeah, absolutely. Back to that great word, tradecraft. I know we're going to have a future podcast on tradecraft. It's back to what Gwyn was saying initially, it's a human-centric business. Every human being is a sensor. The question is how to triangulate, as you said previously, different sources and then discern what is true and what is just misinformation or disinformation. There are elements of sentiment analysis that you can look at as well. It's hoovering up what's available immediately and being able to filter that and say, "Right. This is relevant, that isn't relevant." Now, that's a real challenge, and I think it's a real challenge for the intelligence community per se anyway. Although, to be fair to them, they've been doing it for many, many years now. The one thing we've learned over the last 20 years is how to do counterterrorism support from an intelligence perspective. But I think using open-source intelligence to support that is quite new, although some of us have been doing it for maybe the last five to 10 years. I think in terms of a formal process, it's fairly nascent. Just looking at the Ukraine crisis, and that's slightly getting away from special forces, we learned pretty quickly about the value and limitations of open-source intelligence in supporting that, whether that was from a messaging perspective, whether that was sharing intelligence with the Ukrainians, or just understanding ourselves. I think we have learned a lot there, but in terms of the agility that we're talking about and the timeliness, I think we're fairly nascent still when it comes to supporting special forces.

Harry Kemsley: Yeah. Gwyn, let me just pull us in a slightly different direction. We've discussed that OSINT is, and can be, very useful to the preparation for, the conduct of, and even the understanding after the fact of specialized operations, but what about the limitations? You mentioned veracity a couple of times, and that's very frequently raised as a concern with open source because of the thing that Sean just mentioned, mis- and disinformation that is very widely spread among those five billion collectors. What about the limitations? Where do you start to get that uneasy feeling about only using, for example, open sources? Where would you start to feel the pinch of unease about relying upon that?

Gwyn Armfield: Yeah. I don't think I'd ever act on open-source information. It tips and cues, it validates, it provides leading indications of where things might be trending in a different direction. The biggest thing that I just wrote down, Harry, was having reliance on others. Hey, if they just go off the net, if we're monitoring social media or watching the news, or something else, I don't have the ability to know that I'm going to have that source of information as the operation unfolds or afterwards.

Harry Kemsley: Right.

Gwyn Armfield: We want to be able to layer more reliable and trusted capabilities on top of this, but I think it's a great way to pathfind toward where you want to be, and then to use that information to validate as you go along.

Harry Kemsley: Yeah. I sense, Sean, from our previous conversations, in terms of this limitation... one of the limitations that we discussed in a podcast we recorded recently, Gwyn, which we'll broadcast quite soon, was about the power of technology to create very, very compelling disinformation. We talked specifically about deepfakes, this synthetic media that can be created. A guest we had on, Guy Cook, was talking to us about the fact that the human can no longer detect these fakes. That does bring, for me, a real sense of unease about some of the things we might be seeing in the open spaces, particularly if we're focused, Sean, through the straw at one or two sources that seem to be saying the same thing. Does that not really underscore once again, in black ink, the need for multiple sources that are clustering around a particular point? Does that not again, Sean, underscore the need for tradecraft?

Sean Corbett: It does, indeed. Absolutely. Again, I go back to: you're not going to act on purely open source. I would just slightly question that, in terms of, would it not depend on what the imperative to act is? If there are people posing a significant threat, for example, and it was worth taking the risk. This is where special forces are probably equipped and prepared to take more risk than, say, conventional operations. Would you see a situation where you have no option but to take that open source, with all the risk that comes with it? Now, that is a really difficult, and potentially an ethical, discussion as well. But I agree with what you said, Harry, in terms of it being incumbent on the intelligence community, whether that's commercial or indeed government, to provide every single piece of intelligence it can. And to do it in an integrated way, with the background knowledge, to say, "This is our best calculation, our best assessment, our best analysis based on what we've got." Because at the end of the day, it's up to the intelligence specialists to provide the threat assessment, if you like, that analysis. But it's the operational commander that has to accept or not accept the risk.

Harry Kemsley: The balance of risk against available intelligence, I guess, is the question there. As you alluded to earlier, sometimes you have to move because it's a no-fail environment and we need to start acting upon what we're seeing. But if all we're seeing is from social media or other open sources, then that's a higher-risk environment.

Gwyn Armfield: Yeah. I think it comes down to trust, and trust is very subjective. When we were doing this years ago, when I was still on active duty and we were starting to use open source for things, you'd look at the intel analyst and say, "Hey, can you put a trust metric on this? Green, yellow, red, one to five. Where are we on this?" That was very nascent in our ability to fuse this all together. It would be really neat if you had a trust meter on things like this and you could put it on there. But just quickly, back to risk. I had a conversation with someone earlier this week about how you define risk. The way I like to do it, and I think it's probably doctrinal, is risk to force, risk to your people and your assets and resources, and risk to mission. If you look at the standard chart of high consequence, high probability, then back down to low consequence, low probability, you've got to look at both of those independently and then, as a decision maker, fuse them together. There are certain missions out there, they're very rare, where you're going to take a high risk to force so that you mitigate the risk to mission. But in most cases, you're going to balance those two out, because if you get it wrong, you have a very finite pool of assets. There's usually not one forward, two back like in the conventional sense of a line infantry unit. You've got a unique capability. It's either so forward positioned you can't get anybody else there in a reasonable amount of time to act, so you've got to think through the application of that force in a lot of different ways. The idea is looking at this as, what's my risk to my force, and how do I use open source to mitigate that risk? Then how do I use the same to mitigate risk to the mission that's been given to me? I think that's where this comes in: I've got my facts and assumptions, and then I've got constant input of open-source intel.

Every JOC out there has got four or five different news feeds going, and you're informed by that. I remember, in the latter part of my career, walking into the four-star briefing, looking up at the news and seeing something that had happened in Syria. Something consequential happened, and I'm like, "I bet we're going to hear about that."

Harry Kemsley: Yeah, right.

Gwyn Armfield: Then I walked into the boss' office, I'm like, "Hey, did you guys see this happened?" Then 72 hours later, we had a presidential action to act on that. But the ability to tip and cue, and get your mind around things to start thinking through options. I'll stop there. Sean?

Harry Kemsley: Sean, I can see you want to come in. Just before you do, when you're giving us your thoughts, Sean, I'd like to go back to this word trust. I'd like to talk about technology in the trust realm, but let me come back to that in a second. Sean, go ahead.

Sean Corbett: It was the trust piece that I was going to come in on, and it's a little bit of an anecdote, because the trust, as much as anything, is between the commander and his intelligence staff. It's something that I've looked at in great detail over the years, where it's understanding the limitations of the intelligence. Actually, understanding the intelligence, but also understanding what the requirement is. There was one situation, again no names, where a steely-eyed killer commander was wanting to mount an operation. My role was to give him a threat brief, and it was a pretty significant threat brief. But what he wanted to hear from me was actually that the threat was okay. I had to gently tell him, "Look, the threat is what it is. I'm not going to dilute the threat. It's your decision whether you want to take that risk, in terms of potentially losing an aircraft, et cetera." It was more a political thing than it was a physical thing. It took us a while, but once we got around that, the mutual understanding and the trust were really good after that. It was a pivotal time, but I've had many cases where, I wouldn't say the commanders wanted to abdicate their responsibility or their decision-making, but they haven't really understood the limits of the intelligence. Now, that's as much due to the intelligence staff not being able to articulate it. Like you said, Gwyn: give me a rough number on how accurate you think it is, not just a probable or possible. That's one thing I tried to inculcate in our military headquarters, where instead of doing the whole probability yardstick or something, we'd stick our necks out. Now, that's high risk, but actually it was very much appreciated. It allows that commander to be informed enough as to whether to take that risk or not. So the trust is as much between the commander and his intelligence staff as it is in the information.

Harry Kemsley: Yeah. Gwyn, let me just tuck into your passion, building high-performing, successful teams, which in the specialized operations arena, I guess, is an intrinsic feature of those teams. I'd like to think so; trust in those environments is built on a number of different things. Let's just pull that word trust into the conversation we're having here about the use of open sources or other sources to help us assess the risk. Is it not fair to say that trust, because it has to be earned, takes a long time to be generated? That we are trying to build an intelligence team around a decision support team, around a mission team. What you really need to do is get to know each other really, really well, because then you look in the guy's eyes and you get that sense of, "I know this guy's got it." Doing that with a machine, doing that where there's a lot of machinery involved... or maybe I'm revealing more about my age and generation than I should. I get this feeling that we're losing that contact, that human understanding. What's your view about performing teams and that building-trust piece, because it's an intrinsic part of the role, surely?

Gwyn Armfield: Harry, you're spot on. It's this subjective human-on-human contact. I think within seconds of meeting somebody, we start drawing our own conclusions about whether or not this person's an asset or a liability to the team's mission that we're all on together. One of the truths that the USSOCOM organization, the enterprise, has been founded on is the idea that you cannot create capability after an emergency occurs. You've got to create the capability before the emergency, before the crisis. Then you rely on, frankly, the trust that's created in this pre-crisis environment to then roll in on an event. I recently had a conversation with a senior special operations leader in the US about the ability to take AI/ML and apply it to intelligence problems, with the goal of reducing the analytic manpower requirement. We're counting things, we're doing simple, human tasks that a machine can do. The officer's reply to me was, "Hey, we're all asking for this, but we haven't been able to write a good requirements document, because what we really want to know is how the machine is coming up with the answer."

Harry Kemsley: Right.

Gwyn Armfield: Because before I take this to the president or to the prime minister and get execute authority, I've got to trust that this is absolutely right. I don't think we've gotten to that point yet, where we can explain how things do what they do. With an iPhone, the consequences are usually pretty low. I think that's pretty cutting-edge technology, because it's informed by so much data, and it still doesn't take me to the right place on my map all the time. That's probably got more crowdsourced info than anything. Until that can hit about 95% to 100% of the time, I'm not going to trust what a nascent system, what we'll call the gonculator, spits out to an intel officer, who then takes it to a decision maker or a policymaker, who then takes it to a senior national leader. In the special operations world, that's what you're dealing with: missions where, hey, this is a very senior national, usually international, collaboration to go do something. We really, really have got to get the intel right.

Harry Kemsley: Yeah.

Gwyn Armfield: By the way, it's happening at double time, triple time speed compared with what we're comfortable dealing with. So the idea of, hey, I need to trust this person. In the past we did it by just, hey, we're all forward deployed. If this person does what they say they can do over time, the ability to deliver on their words, that's what builds trust with me. They're on time, they've got the right information, they're prepared for success. When they say they can do something, they do it. After about three or four times, my trust factor with them is really high. If I don't have the chance to get those reps in previously, or I'm working with a new piece of technology, that trust factor's going to be really low for a period of time.

Harry Kemsley: Yeah. I guess then the takeaway from that, Gwyn, is that it is probably inevitable that we have to use technology to, for example, sort through those five billion collectors out there to find the nugget, the corner of the haystack where the needle might be hidden. If we're going to do that, if that's a necessity, which I suspect it probably is, then I believe we need to spend more time with the AI technology, with the data that's flowing through it and so on. So that we get that trust; the more times we go around that buoy, the closer we get to the point where we start to trust the technology. But Sean, I think it's fair to say, in our previous conversations we've concluded that human-in-the-loop, certainly for the foreseeable future, is a necessity, not a desirable. The machines can augment the human. The stuff that the human doesn't need to do is still, I think, where the machine needs to stay, leaving the human to do the important stuff. Would you agree with that, Sean?

Sean Corbett: Yeah. I think my position's clear on this: you use the algorithms as a tool to help you make those decisions.

Harry Kemsley: Right.

Sean Corbett: At the moment, however clever they are, I don't think they've got the cognitive ability to provide that so what, this is what I think. That might be based on experience, and it's back to the human element as well. As an intel specialist, I knew I had probably three chances in front of a big commander to get it right and get it concise, or I'd be sacked. And that was quite right. I used to have an ex-four-star acquaintance who used to sack people on the spot. He used to use his specialists, the corporals in particular, because they just knew their stuff, knew what was required. That's right. The human element, back to almost where we started, is just so important, but it's got to be someone who has the background, the context, the experience, and the ability to distill everything I know into this is what the boss needs to know.

Harry Kemsley: Gwyn, go ahead.

Gwyn Armfield: Yeah. I just wanted to bring this into the current-day conversation, when we start talking about decision dominance. That's what's really driving this idea in the US, at least, behind joint all-domain command and control: you can automatically link sensors to shooters. The fundamental thing we're getting at here is trusting that system to work. I always joke about the 1980s US movie, War Games, where they have the WOPR. It's making these decisions that affect the nuclear enterprise and that can't be stopped. It's a 1980s vignette that actually applies today when you start looking at this ability to trust machines to make decisions, frankly, human-on-the-loop is what we're calling it now, where you can stop it. What I would come back to, just quickly, is the unique attribute, the unique competitive advantage of the Western alliances. It's our ability to make decisions quickly, because we delegate down to the lowest competent level, as Jim Mattis used to say. We trust our lower levels to perform on intent. That's just a unique attribute that we've got to empower as we go forward and we start looking at the competition that we're in now with great powers, with this ability to push decision-making down and rely on our subordinate leaders to execute on intent. Our competitors cannot do that. Their societies don't create that condition that you can then bring into your military to use to your advantage. As we think through how to bring open-source intelligence in, how do we bring AI/ML into the bigger military decision-making process on shoot, don't shoot? Then, when we get into a shooting fight, how do we match sensors to shooters? The fundamental issue there is trust. I don't think we're there yet. I think we're still at the WOPR stage, where all the lights are flashing and things are working, but we're not really sure how that thing's going to work.

Hopefully over time that develops, but I do hope in a minute here, we'll jump into this idea of ethical use of OSINT, and the use of technical surveillance.

Harry Kemsley: Just where I'm going now, as we step across to the ethical issue before we start to summarize. I think the maneuverist approach, this acting-on-intent piece that you just touched on there, Gwyn, is a significant competitive advantage for our own allied forces. But that, again, isn't happening just because we were taught it from a book once. It is something that we've trained and retrained, practiced and repracticed. For me, the takeaway I'm starting to get here is that if OSINT and technology are going to be plugged into things, it needs to be done regularly and frequently, until people get that sense of familiarity and therefore trust. Let's move on to ethics. Sean, I'm sorry, I did cut you off there. I know you wanted to comment, but we need to move on. The use of open sources then, of technology going out there scraping the internet for information that we might find intelligence value in, came up as an ethical issue in an earlier podcast we did with Dr. Amy Zegart. That was a fascinating conversation. What's your take, Gwyn, on the ethics of using the open-source environment, and the necessary impact that would have on the way we operate?

Gwyn Armfield: Yeah. Thanks for that, Harry. Dystopian and Orwellian are the first two terms that come to mind when we start looking into this. The fact that people willingly carry around a device on their body almost 24 hours a day that feeds unknown collectors, whether for marketing or anything else. We willingly pay to do that, so do the ends justify the means? I don't think they do, because you start disestablishing the fundamentals of your society when you go down that road. Sean and I, before getting on today with you, Harry, were talking about the US Constitution's Fourth Amendment protection against unreasonable search and seizure, written two hundred years before the internet existed. That takes into account the privacy someone would have within their home. But when you start putting information out, it's publicly viewable. Now we're in this gray zone of, does the Fourth Amendment protect that or not? That's for lawyers to argue, but the idea is that we believe in a democratic process. Eventually, does every device in your house need to have an IP address? Does my light need an IP address? Maybe I am a Luddite when it comes to embracing that, but I like to think that if my Wi-Fi goes out, my house can still function. I've been resistant to embracing all that technology can offer me. Maybe it's because of my background a little bit too, but I really do worry about the approach or inaudible that we're rapidly trading convenience for privacy, and what that trade takes from us as a democratic society.

Harry Kemsley: Yeah. The access to that freely available, publicly available information and its use for intelligence, Sean, is something we've talked about before, in terms of where the lines are being drawn. We agreed, I think, with Dr. Amy Zegart that there isn't really substantial policy or protocol out there to which we can point and say, "That is guiding us." I don't recall anything substantially confidence-building, in terms of that approach to accessing publicly available information.

Sean Corbett: No, it's definitely a gray area, but it's something we need to consider more and more, because I agree. "The ends justify the means", which you do hear a lot at the tactical level, is slightly crass, and it's just not acceptable, particularly when you don't know where the information is coming from and you don't know what it's going to be used with, so that piece in the middle. It does require legislation, as Gwyn said. It also requires TTPs, tactics, techniques and procedures, and policy, which need to be practiced, because there are so many gray areas. Obviously, I've got a background in targeting. We used to have some really searching questions within the targeting boards at senior levels about whether we should hit a concrete target or not, based on all the things you'd expect. In some ways, we were far too risk-averse, in terms of what impact we would've had. But equally, without those checks and balances, you get into a very bad place very quickly. I think this is almost a podcast for another day, because it is really quite a deep discussion. What are the limitations of what we're using? Call it surveillance if you want, your mobile phone, but everything too. As you know, London is absolutely stuffed full of surveillance cameras. What are the limitations of using those? That comes back down to the limitations of the technology. Facial recognition, for example, has been proved to be less than accurate, but it's clearly being used more and more, so we're going to have to get better at that.

Harry Kemsley: Gwyn, go ahead.

Gwyn Armfield: I think the real challenge we're going to have, is that we will have our own ethical conversation within a democratic society of how to do this. The challenge is going to be our adversaries will not have that conversation.

Harry Kemsley: Right. Right.

Gwyn Armfield: They'll just be full on either exploiting the technology we've created to make our lives convenient, or they will use their own technology for their own ends. That's where things are going to get messy, I think.

Harry Kemsley: Yeah. I totally agree. Sadly, we are going to run out of time. What I'm going to do now is ask you both to think, for the audience's purposes, about the one thing you want them to take away with regard to the use of open sources in the special operations arena. I'll come to you first, Gwyn, after my few comments in a second, and then to you, Sean. For me then, what have I really taken away from today? There was so much I could pick on, but I think it's probably two things I'd like to zero in on. First, to remember that in the special operations arena, you are talking about a human environment. The domain is principally about human interactions, and the fact that to operate there with the appropriate level of risk to force or mission means you have to work in a trust environment, which means you've got to build high-performing teams. Those high-performing teams need to include technology, because of the scale of the problem we have in terms of the volume, veracity, and velocity of information available to us, certainly in the open-source environment. For me, the takeaway is that the open-source environment and the intelligence analyst are going to have to practice this a lot. We're going to have to get comfortable with the use of technology, comfortable with using open-source intelligence, just like we have with the maneuverist approach: working on intent, delegating authority to the right level. That takes practice, and that takes a lot of attention to detail. For me, it's that fact that we have to work at this. It's not going to be easy. Let me go to you, Gwyn, next. Your chance.

Gwyn Armfield: Harry, if there's one thing I'd say about using open-source intelligence in special operations missions, it's that you've got to understand the cultural context of the information you're using. You can't mirror-image your own society's interpretation of that information onto what's really going on.

Harry Kemsley: Yeah. We talked about that before, haven't we, Sean? That cultural piece?

Sean Corbett: That cultural element, again inaudible.

Harry Kemsley: Let me resist the temptation to open our podcast again and let it run. Sean, go ahead.

Sean Corbett: Yeah. I'd just point to the breadth of what we've discussed. We clearly went off piste, but all of it is relevant. It says to me that OSINT support to special forces just amplifies all the challenges we already recognize in supplying open-source intelligence to operational situations. It is a huge challenge where we need to knit all the different factors together. So complex, but if we crack this one, then we can crack anything.

Harry Kemsley: Yeah, very good. Gwyn, what can I say? It's been an absolute pleasure talking to you. I know we could've spent another two or three hours, and I say that on probably every podcast because it's true. These topics are genuinely interesting and important. Thank you for taking the time to speak with us today. I am sure the audience will have found much of this very, very interesting. If I may, I'm going to put you down as one of those people on a long list of guests whom I'd like to get back, to follow up on some things. Gwyn, thank you.

Gwyn Armfield: Looking forward to talking again, Harry.

Harry Kemsley: All right, thank you. Sean, as ever, thank you very much.

Sean Corbett: Thanks, Harry.

Harry Kemsley: Good day. Bye- bye.

Speaker 1: Thanks for joining us this week on The World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you'll never miss an episode.

DESCRIPTION

In this episode we speak to Gwyn Armfield, Brigadier General, USAF (retired), to discuss how OSINT supports Special Operational Forces in their operations.

Today's Host

Harry Kemsley

|President of Government & National Security, Janes

Today's Guests

Sean Corbett

|AVM (ret’d) Sean Corbett CB MBE MA, RAF

Gwyn Armfield

|Brigadier General, USAF, retired