Tradecraft in Open Source Intelligence

Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.

Harry Kemsley: Hello and welcome to this episode of World of Intelligence by Janes, your host Harry Kemsley, and as usual, my co-conspirator Sean Corbett. Hello Sean.

Sean Corbett: Hi, Harry.

Harry Kemsley: Okay, so Sean, I think of all the podcasts we've done, the most frequent topic we've touched on, in probably every single one of them, or if not every single one then the vast majority, is tradecraft, intelligence tradecraft. And I think we've said probably a hundred times, we really should have a podcast about tradecraft at some point, so today we have. And I am absolutely delighted to say that we have probably one of the most expert people in the world, certainly in the intelligence world, in tradecraft, and that is our guest today, Neil Wiley. Hello Neil, thank you for joining.

Neil Wiley: Hi, good morning. Or good afternoon, I suppose, it is where you are.

Harry Kemsley: Yeah, God save the King. Neil Wiley is a former naval officer and intelligence professional. He retired in 2021 after nearly 40 years of government service, as an intelligence analyst by profession and latterly a reluctant bureaucrat by requirement. He served in numerous senior analytical and intelligence leadership positions at the Department of Defense and national levels, including director for analysis at the DIA, the Defense Intelligence Agency, chairman of the National Intelligence Council, the intelligence community's all-source analytical element, and finally as principal deputy director of national intelligence. He's currently a professor of practice at the University of Maryland's Applied Research Laboratory for Intelligence and Security and the managing principal of Lyseon Consulting. So Neil, given that incredible track record that you've got and all the places that you've been, very senior positions leading tradecraft, I guess we should start the conversation with what do you think the word tradecraft means? What do we mean when we say, in the intelligence context, tradecraft?

Neil Wiley: Yeah, so that's a good question because it's sort of a squishy word. We use it a lot, but it means different things to different people and it certainly means different things in different contexts. Tradecraft at its most basic is the set of tactics, techniques, and procedures that a discipline within the intelligence community uses to achieve its mission. So tradecraft differs, and tradecraft applies across the intelligence community. There is tradecraft in human intelligence collection, there is tradecraft in signals intelligence collection, there is tradecraft in measurement and signatures intelligence collection. And there is tradecraft in intelligence analysis, which is really where my expertise lies. They're not the same tradecraft, but they are tradecraft nonetheless. And tradecraft is used to ensure that we perform our jobs to a standard, and that that standard can be explained. And I think fundamentally that's tradecraft within the intelligence community. And of course again it differs by discipline, and I suspect we're going to spend a lot of time talking about tradecraft in the analytic discipline.

Harry Kemsley: So if tradecraft is about that standard and it's about being able to explain what you've done, I guess what it comes down to then is there is a process that needs to be seen and recognized, there is going to be an element of judgment, and I think that's what the word craft is talking to. So you've got the process, you've got craft, and the net of those two is what we mean by tradecraft. Is that a reasonable summary?

Neil Wiley: I think that's absolutely reasonable. There's a difference, and where we get wrapped around the axle in the intelligence community frequently, particularly in analysis, there is a difference between ethics and tradecraft. So fundamentally, and it may seem ironic for a community like the intelligence community, which does some fairly insalubrious things to get its job done, we operate on a very strong ethical foundation. So the ethics of intelligence analysis are that intelligence analysis is objective, that it is apolitical, that all-source assessment is really done through the fullest possible consideration of all relevant sources, that it shows up on time, and that it demonstrates a degree of transparency and applicability that allows the consumer to interact with it. Those are the ethics of intelligence analysis. Tradecraft is the tactics, techniques, and procedures we use to bake that ethic into a tangible output. So I think a good example is the medical profession. The medical profession operates under a 2,500 year old ethic, and that ethic really fundamentally hasn't changed. Doctors raise their right hand, or whatever doctors do, and swear to uphold the Hippocratic oath. But the tactics, techniques, and procedures that the medical profession uses have moved on a lot in 2,500 years. So I don't think Hippocrates would recognize an awful lot of the stuff that goes into how they do their job, but I think Hippocrates would recognize the mission, role, and ethic of the medical profession. And I think that's the fundamental distinction between ethics and tradecraft. Ethics don't change, or at least you have to have a very compelling reason to change your ethic. Tradecraft evolves as capability evolves. Tradecraft evolves as the adversary evolves. Tradecraft evolves as our discipline evolves. Tradecraft always evolves. But we conflate ethics and tradecraft sometimes when we're talking about it, because we don't make that distinction, and I think the distinction is very important. Because if you confuse tradecraft and ethics, you might take the position, as I know I have done in the past as well, that tradecraft can't change. But if you separate the ethical underpinning of what you're trying to do from how you do it, then it becomes easy to understand not only that tradecraft can change, but that there's actually a mandate, a compelling need, for it to evolve. As long as ultimately all of those things that you do are still consistent with your fundamental ethic, that you should be objective, you should be all-source, you should be apolitical, you should show up on time, and you should be able to explain what you do, then you're good.

Harry Kemsley: So there's at least a hundred things in there that I want to dig into. I'm only going to have time for a few. I want to come back to the evolutionary part, because the fact of the matter is we've got a different informational world today than we had even a few years ago. But before I go any further into that, Sean, you and I have spoken so many times about tradecraft on podcasts in the previous years we've known each other. It would be remiss of me not to give you the opportunity to give your perspective on tradecraft before we go any further. So Sean...

Sean Corbett: It might surprise Neil to find out I actually agree with him 100%, because we do like to spar. But no, the tradecraft piece is the standard repeatable processes. It's how we do our business, if you like. But it is squishy, I love that word as well, the squishy word, because it is quite hard to define, which is why I really like the way that you've taken the ethics away as a distinct piece, because one is a process, basically, the tradecraft, and within that I would include, to an extent, analytical standards. And I'm quoting from ICD 203, which clearly needs updating because it was written in January 2015, which is an issue in itself. But it has got to be the standardized way of doing things. But as Neil said, I think some people hide behind that because, "I've got to do it this way because that's what the documentation said." When of course the advances in technology, the advances in how we do stuff, the advances in the way that we create information or even collect information, actually mean that it has to be adaptable. We could get onto the ethics thing. Obviously we did have a podcast with Amy Zegart who absolutely nailed it. And she said the definition of ethics is that which is right, to which the question is who decides what's right? But that's for another day. But I do like the way that we've delineated that. So the standard repeatable processes. And for me there's the difference between maybe tradecraft, which has to be softer than do this, this, and this, and analytical standards, which is the legalities, how we treat it security-wise, maybe even the terminology and the confidence levels, that sort of thing. They are different, but they are also entwined.

Harry Kemsley: I think in that Venn diagram that we've probably just painted, the ethical aspects of what we do in tradecraft, how we adopt certain techniques, how we perform certain techniques, I should say, is probably where Amy's pointing her finger in terms of there being ethics in what we do. But I also like the separation of the two because it allows the fundamental truth of intelligence to endure, while how we generate it, how we understand it, how we order it, that's the bit that's evolving continuously. So I do like that, but I sense we could probably spend the next hour or so just on that one topic, so I'm going to pivot us towards something else you said, which is about this standardization piece, Neil. So we've established that good process, best practice, is what we're capturing, among other things, in our tradecraft. But I think you also said, or you alluded to the fact, that we could end up getting hung up on that to the point where we cannot evolve, we cannot innovate. And one of the things that I've seen out in the commercial space since I left the intelligence community, and I think we'll probably find time to talk about it in a few minutes, is the rate of change, not just of technology but of techniques. Ukraine has opened some people's eyes to the power of open sources for derived intelligence, and the techniques that are being developed are evolving very, very fast. We'll come back to the potential synergies between commercial and government in a moment. But for now, let me bring you to the point I think we're alluding to, which is the tension between the static nature of tradecraft and the need for it to evolve and innovate.

Neil Wiley: So to follow on from what Sean was saying, I'll pivot off of Sean's point on ICD 203, and you'll be pleased to know, Sean, that normal service has been resumed, because I don't entirely agree with you on the utility of ICD 203. But I think it's a good example of how you can look at the same thing different ways and come to a different conclusion, which we do in analysis all the time. So yes, I think there is a danger that tradecraft becomes rote, and intelligence analysis is a deeply human thing. It's fluid, it's imaginative. It is all of those things that you actually can't write a flow chart to do, despite a number of attempts to find some automated way of doing analysis. But I do think there is a general need for a framework that determines or suggests best practice for what makes a more meritorious assessment. And a lot of our tradecraft is aimed at how you convey judgments to the consumer, so that the consumer understands them better or can get a better sense of what's really underneath them. Because in my opinion, as important as the assessment or judgment we convey to the policymaker or the commander, whoever gets it, is that we convey what's under the hood of it. What's it based on? And a lot of our modern tradecraft in 203 is post-Iraq WMD Commission, and it addresses some of the shortfalls that intelligence publications, particularly for senior leadership, had in the lead-up to the Iraq war, in which we were less than clear about what was under the hood of a number of things we said. So fundamentally, the tradecraft elements in ICD 203 now are intentionally designed so that intelligence is not misleading. That we do not state something as a fact when we know it not to be, when it is a judgment held to some degree of confidence. So I think the basic tenets in 203, and this is where I'll get back to 203, Sean, the basic tenets in 203 are useful and applicable as technology changes. As managers of intelligence or leaders of analysis, what we can't do is read them verbatim and apply them as rigorous mathematical formulas. What we have to do is look at the sense of what that tradecraft standard in ICD 203 is trying to convey. Should we have to be able to explain how we arrived at our judgment? Yeah, we should. Should we explain our sourcing, how much we liked it, how much we didn't, where we wish we had more? Yeah, of course we should. Should we have to explain the assumptions or whatever other frog DNA we stuck into that assessment in order to make it make sense? Yes, we should. And we should make transparent dissent or alternative views. So none of those things are outmoded or outdated, but they become problematic when analytic managers look at them and say, "There must be one particular way of doing that." And that's when a document like 203 risks becoming inflexible. So it really is the flexibility of the people reading it that matters.

Sean Corbett: As always you put it far more eloquently than I did, but that was exactly the point I was trying to make. Per se it is a very good framework, which is what you just said, but it's people that apply it too rigidly, and the poor analyst who's trying to do their best in terms of coming up with the so what and the what if feels constrained by managers who say, "No, no, this is how you do it." So it's that balance between constraining the intellect and the thought on the one hand, and the do-it-this-way on the other. So it is exactly what you said.

Harry Kemsley: But why would there be that inflexibility? People don't get out of bed in the morning and say, "I will be inflexible." They get out of bed and they follow process. But is it because they believe that's the only way it can be done? Is there a sense of risk concern, an aversion to risk, that if they don't do it the right way, it's wrong? Is that the problem?

Neil Wiley: Well, I think all of the above. I mean, I think we have to realize that certainly the US intelligence community is a big place, and intelligence assessments are birthed, cared for, and sent out into the world through a process that is an agency process. So one of the defining distinctions between intelligence and academic writing, say, is that the drafting analyst does not own that assessment. The assessment is the agency's assessment. The assessment is the community's assessment. The drafting analyst shepherds the process through, but it goes through a certain number of review steps as part of the quality process. And actually I think that review and change process is fundamentally good. I mean, depending on where you are in your career as an intelligence analyst, you either love the review process or you hate the review process. The more senior I got, the more I liked it, and I think that's probably how that works. But if done properly, every one of those steps in the review process is there to inject a tradecraft element that the drafting analyst himself or herself just doesn't have the perspective to do. So I absolutely believe that, I don't care who you are, no person is single-handedly capable of writing a tradecraft-compliant intelligence assessment. Neil Wiley can write an op-ed piece, that's fine. But a tradecraft-compliant intelligence assessment requires perspective that an individual just can't have. So you have hundreds and hundreds of these review processes and drafting activities going on in the IC on any given day, done by thousands of different people, all of whom have varying levels of training or appreciation for the ICDs or their particular agency practices. So some people are going to default to "if this, then that, you must," because, you know, people are different. Others will take it in the spirit it's intended and they'll apply some degree of fluidity and flexibility to it, and it's really up to the more senior analytic managers at the all-source elements to ensure that that flexibility actually is allowed. But it's not always the case that the incentive system or the review system or however the analysis is managed does that. So there's a fair degree of variability.

Harry Kemsley: Just before I come to you, Sean, I think the essence of that, though, is that it's in the flexibility that innovation and evolution are allowed to prosper. It's a bit like the biological sense of evolution: if you don't allow DNA to change, it's never going to evolve, and it becomes unfit for the environment. And I think it's in that flexibility of the senior that we allow our analysts and others to actually start to innovate and to evolve the practice of doing the intelligence that is there in front of them. Sean, you had a point to raise.

Sean Corbett: Yeah, and without opening a kettle of worms here, this is flashbacks to my time in DIA, where we were trying to get to the heart of why the brilliant junior analysts were not writing at Five Eyes level as opposed to NOFORN. And there were so many reasons, because a lot of them were frustrated. They wanted to do it, but they felt constrained by the process, if you like, in some ways. And they also felt constrained, by the way, by the time it took to QC these things. But really for me it depends on the requirement. When you're trying to get the best possible intelligence to the decision maker, whoever that is, it depends on what that requirement is. If you want something that's going to be rapid, so you don't have to declassify to Five Eyes level and can get it to the person who needs it immediately, then of course there is a reason why that shouldn't happen. So I think what I'm saying is that there does need to be that flexibility. And I go back to my time in PJHQ in terms of the requirements. We used to produce stuff at speed. You'd come in at five o'clock in the morning and you'd have to brief the boss at 7:00, and you didn't really know what was happening overnight, but you had two minutes to brief, generally the great Air Marshal Peach, who probably knew more than you about anything anyway. And if you got it wrong or took more than five minutes, that was it. It was all over for you. So there was the rapidity of the "Right, what does the boss need to know now," which of course suited the way that we are, our culture, if you like, the military culture in there. But that was entirely different from the DI staff who were briefing senior ministers, et cetera, who had to come up with a far more deliberate, far more, if you like, controlled process. Now generally we ended up in about the same place, but of course all the military people loved our assessments because it was, "Right, tell me what you think. Tell me why you think it." It's similar to the quote from Colin Powell when he was US Secretary of State: "Tell me what you think. Tell me why you think it, and then I will make the decision on that." I'm misquoting him here. So it's horses for courses depending on what the requirement is, both in terms of the detail and the time.

Neil Wiley: I agree with you entirely. So this is a good example of where flexibility comes in. And we have that same discussion certainly within the IC. I spent an awful long time at a combatant command before I went to big DIA and then the national world. And one of the fundamental tenets of intelligence is that it has to be timely and relevant to the consumer, whoever that consumer is. If the consumer is Stu Peach at 7:00 in the morning, that's a consumer. If the consumer is the President, the Secretary of Defense or a NATO or a Five Eyes colleague, that's great. It has to be relevant, and relevant is, is the topic something they care about? Then it has to be consumable by them. Is it in a format that they'll actually digest? And is it timely? Do they get it when they need it? And all of those are important for whatever customer you have. So where we get... "We," my colleagues in the intelligence community, have gotten tied up in knots before over the notion that you have to do something in a very quick turn. And at the NIC, it's surprising, the National Intelligence Council is known for writing national intelligence estimates that take nine months or a year or a year and a half to do, but actually about 70% of what the NIC does are quick-turn policy memos for the National Security Council. So it's got a real bimodal distribution in the kind of work it does. It's either very long-term estimative stuff or it's a lot of "I need it in three hours." But the misconception is that if you need something fast, that is only achievable by short-circuiting your tradecraft standards. And I believe that it is achievable without short-circuiting your tradecraft standards, in my view. And also I would suggest that Air Marshal Peach required the same degree of diligence in the information and the intelligence you were giving him as did the Prime Minister. But you just have to roll through those basic elements. You've got to go back and think of what your tradecraft's there to do. Am I clearly explaining my judgment? Yes. Am I clearly explaining what that judgment's based on? Yes. Am I clearly explaining any assumptions I made that I think Stu Peach is going to need to know? Yes. You've done your tradecraft. And that's where that inflexibility of "it must be done in a certain rote, reviewed way" gets in the way. Because you can't do that fast enough to do your short-term product, so people think all they can do is just avoid some of those steps. No, you've got to do them, you've just got to do them faster. Everybody deserves the same level of diligence.

Harry Kemsley: It sounds to me, Neil, when we talked earlier about tradecraft being in part process and part judgment, that it's in the judgment that you are going to be able to differentiate the time sensitivity, the tradecraft being achieved and so on. And that is something you can teach, but it is also based on experience, which is probably why people with more experience end up going up the chain in terms of quality assurance. Now I am going to move us on, because I know that we don't want to stay on each of these points for too long. I've heard mentioned a couple of times the different agencies in the intelligence community, and the intent of course is to build a general understanding of something, particularly at strategic levels of importance; you want multiple agencies to understand more or less the same thing. Does that drive a need for tradecraft coherence across the IC? Is there a need for a fundamental, core tradecraft that everybody understands and adheres to, to ensure normalization and standardization of output? Or is there actually no need for such strict adherence, and it can be done at the agency level? What's your experience with that, Neil?

Neil Wiley: Yeah, so that's a really good question, and I liked the word you used, coherence, because I do believe there is a need for a coherent set of tradecraft across the all-source agencies. And again, I'm talking about analytic tradecraft now, not a tradecraft that's specific to SIGINT or specific to GEOINT or one of the disciplines. So we're talking about all-source tradecraft. I think there's a need for coherence because we are now at the point within the IC, until somebody decides to clamp down on it again, that assessments written by one agency are generally made available across the government consuming world. So if you're the Secretary of Defense or you're the Secretary of the Treasury or Secretary of State, you're going to read assessments from CIA, you're going to read assessments from DIA, you're going to read assessments from NGA, from State INR. And there needs to be some baseline confidence that an assessment written by one agency is fundamentally structured the same as an assessment written by another one. Otherwise you just spend an awful lot of time trying to figure out what's different, and why they used this word and not that word when they talked about confidence. So it's confusing for the client if you don't have some coherence. I do not believe, however, that there needs to be some stricter standardization. Again, I go back to ICD 203. ICD 203 sets out your ethic and it sets out the general characteristics that a good all-source assessment should have. It doesn't tell you how to do it. It's down to the agencies to determine how they're going to meet the ICD 203 standards. So DIA has its own set of tradecraft guidance that amplifies the standards in ICD 203. CIA has the same thing, although it wounds CIA to admit that anything ODNI does is directive, but they do. And in fact, the ICD 203 standards were derived largely from the analytic tradecraft standard that CIA had in the first place. So most of the all-source analytic standards in the IC are ultimately derived from CIA DNA. And that's as it should be, because they've been the leader in those tradecraft standards for a long time. But every agency gets to decide how to put those basic standards into practice in their own particular product lines, and I think that's appropriate. So coherence good, strict standardization unnecessary, and in my view, it gets in the way of flexibility.

Harry Kemsley: Probably almost impossible anyway, given the scale of inaudible-

Neil Wiley: And impossible.

Harry Kemsley: In practical terms. Sean, I'm going to come to you in a second. I know you've got some background noise, we'll work through it. The Five Eyes community, which you of course had a very, very big part in, both in terms of where you've worked, but also your role in DIA. What is the prospect in your mind of a coherent tradecraft across the Five Eyes for the sharing of intelligence, which is actually at the center of the Five Eyes concept?

Neil Wiley: I think the prospect is quite good, and I think actually if you... I've been away from the... I've been a lotus eater for two years now. But I would suspect that if you actually go through the Five Eyes all-source intelligence output, it is largely coherent now. There's a great degree of similarity between how intelligence assessments are written in the various Five Eyes intelligence establishments. The discussion is vibrant and always has been about all-source assessment. The leadership knows one another, and that's very intentionally nurtured. And in fact, I don't know, four or five years ago, Sean, I think you were there in fact when we did this, there was finally some Five Eyes coherence on agreed language for conveyance of confidence. And that really was the main thing that lets you cohere the rest of your tradecraft. Because ultimately the hardest thing about writing an assessment is determining the level of confidence you assign to it. And if you don't have similar language, if you don't have a benchmark for confidence, you really can't do much else. So that to me made everything else possible.

Harry Kemsley: I'm going to come to a commercial tradecraft perspective on that in just a second, Neil, but Sean, on the Five Eyes question, you, as I say, were directly involved in some Five Eyes coherence from DIA and from other roles you've filled. What's your view about the coherence issue across multiple nations rather than just multiple agencies?

Sean Corbett: Yeah, hopefully you can hear me because I've got some background noise at the moment, but I think, Neil, you were the US chair for the particular committee that was set up to do exactly that, to cohere, if nothing else, the terminology. And the real point I wanted to make is that it's such a personal thing that even that was problematic. Everyone was trying to do the same thing, but at the national level, language does matter. It's like, "Well, what does probable mean? What does possible mean?" Even that sort of thing. So we ended up coming up with... It was a handbook, basically, that says, "In the US this term means that, and in Australia it means..." So to make sure that we were coherent in terms of what we meant, even though we used some of the same language. I mean, we did make some significant progress. I have to say it probably took a year to do it. But we got there, and as Neil said, it's all about the network and the camaraderie, actually, and the will to make that happen. So while there were some technical pieces in terms of assurance and accreditation that were always going to get in the way a little bit, in terms of the analytical standards there was absolutely a huge amount of effort and success, actually, and back to coherence, in cohering what everyone meant. What that meant was that if you're sharing a DIA document into Defence Intelligence in the UK or vice versa, people would understand what was being said. So you could actually say... You can actually use that inaudible.

Harry Kemsley: Let me move us on just slightly, then, Neil, to the commercial sector. Janes has been around doing tradecraft in the open source environment for 125 years. We think we know a bit about tradecraft, actually, in the open source environment, and therefore we have our own standards, we have our own tradecraft definitions and so on. Cohering commercial organizations with the IC, as commercial organizations such as Janes are increasingly integrated into the intelligence process for the agencies, is a constant point of reference for commercial organizations like Janes. What's your view, though, about the need for coherence in that same sense? We talked about coherence just a moment ago with Sean in terms of language. We've talked about the need for a core understanding of the ethic, with the how changing as we go along. How does that work across the divide between the commercial and the government agencies, in your experience?

Neil Wiley: So I think it ultimately comes down, as most things in the intelligence community do, to trust. It's not possible to use a thing whose validity you cannot gauge.

Harry Kemsley: It's provenance.

Neil Wiley: It's provenance. So trust matters a lot, and some coherence in standards, and particularly a coherence in ethics, I think is fundamentally necessary to underpin that trust. And you're not paying me, so I'll just say this for free. Janes is a very trusted name. I grew up in naval intelligence, and I don't know of anybody who grew up in defense intelligence who didn't have Janes publications on their desk at some point as a fundamental reference for anything they were doing. I mean, we really did consider Janes to be authoritative. And it's not because we knew Janes was authoritative, it's because we trusted that Janes was authoritative, and decades of experience tended to bear that out. So I think the importance for coherence with the commercial world is really on ethics and trust. Now there is some need, and if we're going to talk about machine analytics, I'll hold this off, but there is a need for explicability as well, because that's the other part of trust.

Harry Kemsley: Let's definitely step into that in just a second, Neil, because it's where I'd like to finish this conversation, actually: the opaque environment that we're moving into with technology. Do you want to pick up on this word trust, though? I mean, in the induction sessions that I do with all our new arrivals in Janes, the first thing I talk about is that the reason Janes has existed for as long as it has is because we're trusted. For the commercial world to get inside the intelligence community, not necessarily behind the vault door, but just to be working closely with the intelligence community, they have to be assured and they have to trust, as you say, which is about engagement, and we are human beings. That engagement has to be two-way. Getting that engagement, by the way, for commercial organizations can be quite challenging, maybe because there isn't trust. But building trust, I say, requires engagement. If you can't get engagement, you can't build trust; it becomes a chicken and egg problem. What could the commercial sector be doing differently that would allow the engagement to at least start? How do we break that chain? What's the vicious circle breaker, so that we can actually get inside the machine and start to help?

Neil Wiley: So that's a really good question. First off, now having been out of government for two years and having seen government from the outside, I have developed great sympathy for those in the commercial world who are trying to deal with it. It is not an easy critter to deal with, and government does not make itself easy to deal with. I don't know if the devil ever wrote a book, but if he did, it was the FAR. And it just makes engagement very, very hard. I think the way I would start with engagement, and I've found this is what the people who are most successful at it do, is to try to use a common set of language. So if you're a government analyst, if you're a government manager of analysis, the language that you use is derived from 203, it's derived from your intelligence ethical standards. It's derived from those elements of tradecraft you need to convey in order to have a useful and meaningful product for your client. I find that if you try to use the same language the person or the organization you're trying to engage uses, it's a lot easier to, one, hold their attention or catch their attention. And two, it immediately develops some kind of appreciation on the part of the government that actually you understand what I do. And it's that empathy with what the government does that I think makes you easier to listen to than someone who steams in from the outside and says, "I have this wonderful thing, you need it because you're messed up now." And inaudible that happens, and that's, in my experience, less successful.

Harry Kemsley: There's a degree of commercial arrogance about that, which you've got quite right, that probably rubs people the wrong way. But the language point, I like that because it's a key, isn't it? It's one that unlocks the beginnings of a conversation that allows engagement. Sean, I suspect you'd probably want to come back on that, but I'm going to push us on, and I apologize for that because I know you will want to speak on it. Let's go then into the technology world that we've stepped into and how that's affecting tradecraft. The fact that we need technology to enable tradecraft is not a debate anymore. There's simply no possibility for the human to deal with the vast volume and high velocity of information that's pouring into the open source environment, as well as other environments with the exquisite capabilities behind them. Therefore we need technology. My question, though, is, as that technology becomes arguably more opaque, more black box, less auditable, less explainable, does that not create for the analyst a problem in terms of explaining how they got to the conclusions that they've reached, and for the recipient, the customer, a degree of trust breakdown? Because I just don't know how that was... just what assumptions were built into the coefficients of the algorithm. How do we deal with that in the tradecraft realm? Because it's come up in lots of our podcasts before, that tradecraft is a way to solve problems, and yet tradecraft is now being increasingly enabled by technology.

Neil Wiley: Yeah, no, I think that's the fundamental question of the future for all of us. As you say, it's not really a subject of debate anymore that some degree of human machine teaming is necessary in the analytic world. I would reinforce that, though, by saying... I mean, to me, and again, I actually spend a lot of time considering the ethics of what it is that we do. And part of the underpinning ethics of all-source analysis is that we're supposed to be deriving our judgments based on the fullest consideration of as much potentially relevant information as we can. Now that sounds fine if I say it fast, but in reality, given the vast amount of information that's out there, even in the classified world, we collect far more information than we can deal with. Then you add the far vaster amount of information that's out there in the non-classified world, and the fact that every intelligence requirement has some time constraint placed on its delivery. The poor analysts do what they can with whatever they know is diagnostic, or have experienced to be diagnostic in the past, to generate an assessment in the amount of time they have available. But the fact of the matter is we are generating our assessments based on an ever diminishing proportion of the potentially relevant information. And that wounds my heart as an all-source analyst. So not only is there a practical imperative, there's an ethical imperative to do this. So what happens now when you start bringing in automation to at least bring more potentially relevant information under consideration of some kind? Inevitably and necessarily these various machine applications work in some way that is variously explainable. So the guidance I've given in the past when we fielded these things is that I have to be able to apply the relevant ICD 203 standards to this machine application. It has to be able to demonstrate them. It does not, however, have to be able to demonstrate them in the way that we demonstrate them in a manual assessment. You have to go back to what that tradecraft element is really about. It's about I need to know how you thought, it's about I need to know what your source was, it's about I need to know what your assumptions were. So I've never given the automated systems a get-out-of-jail-free card from ICD 203, because I can't use a black box. If I cannot sufficiently or satisfactorily explain how it works, I can't validate it and I can't stand behind it. When I incorporate that into an assessment that goes to a human on whose decision may indeed rest political, economic, or indeed human consequence, I can't deliver that fundamental value proposition that I do in a manual assessment. But we have to be careful not to try to hold the machine to a higher standard than we hold ourselves. Because while we like to say we can explain completely how we came to a conclusion, we actually can't. There's an awful lot of inexplicability that goes on in the wetware up here that we just smooth right over. So the way conceptually I think about this is: I need to know what information is available to that automated machine analytic system and what assumptions it has been given to make, and then broadly how it processes. And then if we have a sense of its performance over time, statistically, that's probably good enough, because it doesn't have to explain to me how it got every answer, but what it does have to do is be pretty clear about what it uses and how it thinks. And I think that's conceptually how you get to where we need to go.
But again, we delude ourselves that we know how we think, and in fact we don't.

Harry Kemsley: Yeah, I think that's a great point. Sean, I'm going to come to you in just a second, but I think what that really comes down to, from what you've said, Neil, is that the total intelligence cycle and all the things that spin off it in terms of decision support doesn't rest in the hands of a black box. There's always going to be a human in or on the loop, but we need to be clear about just how good, or indeed not good, that human on the loop or in the loop is. And that sense of performance, which you're alluding to there in terms of tracking the performance, is something we do in intelligence all the time, but I'm in danger of disappearing down a rabbit hole there. Sean, what's your view on this? Is there still room for the human in the loop of intelligence?

Sean Corbett: Well, you know, one of my mantras is that there has to be. When you're making decisions that matter, there has to be a human in the loop. The key is where that human sits and what their role is. But I was just going to reinforce what Neil said, really. It's that conundrum, isn't it, between using something as a tool, which could be really, really powerful, but not being governed by it. An analogy would be, again, your analyst, back to the great Stu Peach. Over time, and it wasn't much time, if the mental algorithm of the individual analyst was such that he didn't believe you, you had about two or three attempts and that was it. And then you got another one in. If he liked you and he trusted what you were saying because of your inaudible, he would say, "You're my man," or woman, and off you go. And for me, I think it could be similar for the algorithms. Once you start running scripts and the rest of it, if it subsequently proves that was a really good assessment, based on a particular... You're not going to make the assessments on the algorithm; you are going to get the right data. So that was a really good, "How did you do it?" Then you're going to trust that particular piece of ML or AI and use it, and the rubbish you're going to discard. The key comes with how much do you use it and when? When do you know to discard it? And what decisions do you make on it?

Harry Kemsley: Yeah, thank you. Just to get to one point, Neil. I sense from that, though, there is a tension there around the ability of the human to understand another human being. We believe we can do that, whereas I don't understand the black box and therefore I'm not going to trust it as much. When in fact, if you give them the time, the machine can prove that it can do certain things that humans can't do. So I think there is a cultural, educational, maybe a psychological thing there that we need to accept in this transition to a world where technology in the loop is an inevitability, and an understanding of it and its value is something that we have not quite got used to yet. And I think that's one of those things that in time will become normal, but at the moment still feels abnormal. Neil, sorry I cut you off.

Neil Wiley: No, I think that's a wonderful summary and I think that's completely correct. All I was going to say is that, again, the approach we've taken is to certify that an application is usable, rather than try to certify that each answer it gave was right. Now the important thing, though, is that there, as with anything else, and as with Sean's example of briefers to the great Stu Peach, there's an amount of post-deployment surveillance that has to go on with all of these automated ML systems, because they do in fact alter their performance over time. It's almost like making aspirin. If you're an aspirin company, you've got a product that comes out of the end of your aspirin line that is supposed to be, one, efficacious. It's supposed to do the thing aspirin does. And two, it's supposed to not have any unfortunate side effects you don't know about. So the same is true when you employ automated analytics. The thing needs to be efficacious. It needs to do roughly what you think it ought to do, and if it has any side effects or unfortunate hiccups, you need to know what they are. Now with aspirin, you can fail very safe and grind up every pill that comes out the end of your aspirin line and assay it to make sure it's aspirin, but if you do that, you're not going to sell a lot of aspirin. So that's probably not good. Or you could just decide that your initial production line engineering decision was fine, and that you trust your suppliers, and that everything coming out of the end of the aspirin line is really aspirin and nothing else, and I'm good. That's fine too, but there's a pretty heavy risk in there, to be fair. So there's a surveillance process that goes on in any industrial process to ensure that what you thought was happening to begin with is still happening six months later, and I think that's an important factor to consider when you're bringing automated analytics in. There's a post-release surveillance process that absolutely has to happen.

Harry Kemsley: At Janes we talk about the triple lock tradecraft that we have, which is essentially a different way of describing our quality assurance process, which is about peer review, senior review, et cetera, and multiple sources. Those three things lock together to ensure that Janes is more right than wrong; we don't care to be first, we care more about being right. And that quality assurance process that you're alluding to there, in terms of that production line with the intelligence or the aspirin, is essentially what I think generates trust in what you produce, particularly if the language is correct, to use the point that we described earlier. So I'm going to have to draw stumps, because there is, I'm afraid, a finite amount of time we can spend on this, although if you don't mind, Neil, I'm going to put a request in now for a part two of this conversation to pick up on the 400 or 500 things we didn't talk about that we could have done, but we'll come to that later. As always with these podcasts, I'd like to finish on a so what, one thing to take away for the audience. So while you're both thinking about what your one takeaway is, and I'll start with you, Neil, in just a second, and Sean, you'll go second, my takeaway from this conversation is as simple as this. In your definition of tradecraft, Neil, you talked about the separation of the ethic of intelligence from the tradecraft, and that the evolution of tradecraft is an inevitability because circumstances require it. I think that separation is not only instructive, I think it's really helpful, because it takes away the dogma of intelligence tradecraft. It allows the practitioner to feel like they've got the room, the flexibility, to actually drive innovation and evolution, particularly if their supervising quality assurance officers allow that. But I think for me that's the big takeaway: allowing the freedom of movement by separating ethic from tradecraft and allowing the two things to move together. Neil, your one take.

Neil Wiley: Well first off, I wish I'd have used yours, actually, because I thought it was better. No, I think the one key takeaway from this is the fundamental value proposition of the intelligence community to its client, whether it's the President or the commander of a fighter squadron, is trust, and everything we do, our ethic and those tradecraft standards that bake the ethic into our products are ultimately about developing, strengthening, and maintaining the trust that the consumer has in what we do. Because if they don't have that trust, then we're just one of a thousand other information providers and we have no unique value. So if trust isn't underpinning everything you think about as an analyst or an analytic manager or indeed a producer at Janes, then you're probably thinking the wrong way.

Harry Kemsley: Right. Thank you. Thank you, Neil. Sean.

Sean Corbett: So that's all my sandwiches and chips eaten, actually, but inaudible as usual. What I'm going to go with, though, I think, is kind of a subset of what you're saying. It's the necessary evolutionary requirement for tradecraft, the dogma piece. You've got to find a balance between the TTPs, the evolving TTPs, and having the agility and the flexibility to meet the need of the user effectively.

Harry Kemsley: Yeah, I love it. Well, I am first of all immensely grateful, Neil, for you taking the time to talk about a topic that has been long overdue to be addressed, tradecraft. I'm particularly pleased that we've got to do it, but also, not surprisingly, I'm very, very pleased with the outcome. I hope the listener enjoyed that conversation as much as I enjoyed being a part of it. And as I said earlier, I guarantee at some point your inbox is going to get an invitation to come back and do a part two, three, and possibly a four, because there's just so much we could discuss here. Let me finish by saying thank you very, very much for your contribution. Thank you for your service. Nearly 40 years of government service is no mean feat, and in incredibly important roles at that. One of the things that's come out of this conversation for me, though, is that from all that service, your insight into a really important topic, tradecraft, needs to be captured, and I hope this podcast has gone some way to do that. So Neil, thank you.

Neil Wiley: Now thank you very much Harry and Sean. I really appreciate it. I enjoyed it. You could probably tell I love talking about this and you will be impressed, I hope, that I got your pull stumps cricket reference.

Harry Kemsley: Yeah, I tend to use that quite a lot in the podcast and I frequently forget that not everyone listening has any idea what pulling stumps actually means. So well done you, Neil. Sean, thank you as always for your contribution. Always a pleasure to be here with you and I look forward to the next episode, whenever that will come. Thank you again. Bye-bye.

Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts, so you'll never miss an episode.

DESCRIPTION

In this episode we look at tradecraft in Open Source Intelligence with Neil Wiley, former Chair of the National Intelligence Council and former Director for Analysis at the Defense Intelligence Agency.

Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


Neil Wiley

Professor of Practice at the University of Maryland’s Applied Research Laboratory for Intelligence and Security, and Managing Principal at Lyseon Consulting, LLC

Sean Corbett

AVM (ret’d) Sean Corbett CB MBE MA, RAF