Applying analytic tradecraft to OSINT


Terry Pattar: Hello and welcome to this episode of the Janes podcast. I'm Terry Pattar. I lead the Janes Intelligence Unit. Joining me on the podcast is somebody who I've been really looking forward to talking to for a while, Kathryn Haahr-Escolano, a consultant on Analytic Tradecraft Standards and Intelligence Analysis, particularly as they relate to open source intelligence, which we're going to come on to talk about. Kathryn previously worked for the CIA in multiple disciplines across collection and analysis, including as an analyst in the foreign affairs directorate. Kathryn currently advises public and private sector clients on how to apply analytic tradecraft to open source intelligence. That's really what I wanted to get into on this episode, Kathryn. Welcome and thanks for joining me.

Kathryn Haahr-Escolano: Thank you so much for having me on your podcast today. As you said, it's probably been a couple of years that we've been talking and sharing ideas around OSINT and analytic tradecraft. It really is an honor to be with you and your Janes podcast audience.

Terry Pattar: Nice. I'm really pleased to have you. It's an honor for us to have you on the podcast. As I mentioned, I want to talk about analytic tradecraft and open source intelligence. One of the things that my audience would have heard me say a lot over the previous episodes is that when we talk about open source intelligence, it becomes a very collection-focused discussion. There's a real obsession, I think, within the field among open source intelligence practitioners to focus on how to collect information, especially because there's so much information out there. The sources are so varied these days. It feels like we can do so much more than we used to be able to do. What I wanted to talk about was, how do we actually focus on the analysis part of the equation when we want to produce good open source intelligence? Because I think that's really important, and I think it's something that's overlooked. I'd love to get your thoughts on that in terms of, maybe, starting us off by describing what we mean by analytic tradecraft standards, but also, what your experience has been when you've gone out and done consulting or training for organizations, whether it's public sector or private sector, in open source intelligence.

Kathryn Haahr-Escolano: The first part of the comments I'd like to address is, you're absolutely right that the focus has been almost exclusively on collection. Not just in the intelligence community, from which I hail, but also among many of the US intelligence community's partners. There is a common set of standards, if you will, for the multiple INTs, HUMINT, SIGINT, et cetera. OSINT has always been separate from those INTs. It's been treated as its own discipline, and there are reasons for that, but that doesn't mean that what happens in the OSINT domain is separate from the structure and the analytic rigor that happens in the other INT domains. Having worked both on the HUMINT side as well as on OSINT, as an analyst when I was in government and later as a consultant establishing analytic tradecraft cells as well as OSINT cells, I've found that these two disciplines are not diametrically opposed. They really should be one operational, if you will, analytic architecture for how OSINT is done. It moves beyond just collection. What I love about OSINT is that it is just as significant a discipline. Let's compare it to HUMINT. When we think about, as you say, so much varied information coming from so many different types of sources, including subsources, well, think about the challenges. Having read so many of Janes' analytic products, the OSINT products, with subsources coming from social media, talk about varied. How do you assess who those subsources are? This is where it becomes really important to understand what it means to assess who that source is, what they're reporting, and the timeliness with which you do it. It goes beyond just collecting this myriad universe of voices. It's making sense of all of these dots. That is what I really enjoy doing: bringing, and we'll talk in a moment about what analytic tradecraft is, but bringing the analytic rigor that was incubated in the IC, at least in the United States, born out of the 9/11 terrorist attacks, when the US government established the Office of the Director of National Intelligence. Within the ODNI is a very specific office called Analytic Integrity and Standards, which develops and guides analysts in how to apply these structured analytic quality standards and techniques to their analysis. We've moved to a place where, now in the open source world, we seriously need, for our government, and especially when we think about how we collaborate on intelligence products on the classified side, but probably more so for OSINT products, final analytic products that all have a common set of standards. By that, we mean that they're credible. They have analytic rigor. That's very, very important, because when your customer reads one of your OSINT products, they know that the whole lifecycle, from planning through collection, analysis, and production, has followed some structured analytic rigor. That's the first comment I wanted to make.

Terry Pattar: Yeah. I think that's a really important point you made there in terms of why this is important for open source intelligence. You mentioned working with different partners, and the US and UK obviously work very closely. We follow a similar set of standards, and there's a similar role within the government here to the one you described in the ODNI, in terms of the professional head of intelligence analysis that we have here. For open source intelligence, one of the key benefits is the collaborative element, in terms of being able to share OSINT products widely. It makes so much sense for there to be that common set of analytic standards, to ensure that products are as easily usable by different organizations across those international alliances. I think that's still something that perhaps people haven't really thought about much when they are doing open source intelligence. Actually, that really is a huge benefit, but only if that rigor is there.

Kathryn Haahr-Escolano: You're absolutely right. You've had on your podcast Carmen Medina, who is the queen of analytic tradecraft and thinking about the future and the evolution of intelligence. Something that she discusses, and it's very important, is that the IC, at least in this country, but more than likely this involves many of our partners, needs to move beyond the very secretive world in which we've always done analysis. There is still a need for the classified side. You're not going to move away from having sources that can only be reported through classified channels, but the beauty of open source is that it has the ability to provide additional credibility and reliability to classified products. Because if you can confirm what it is that sources and subsources are reporting, and we have that available through vetted open sources, that becomes a much more credible and authoritative classified product. Point one, right?

Terry Pattar: Mm-hmm (affirmative).

Kathryn Haahr-Escolano: You're absolutely right in terms of just an OSINT product and what it enables your clients to do with a product that does have these analytic standards applied to it. Again, we will come to what that means, but I have found, in working with international private sector clients, helping them understand and then training them in how to use the standards for their OSINT lifecycle, that it's like the Concorde: it takes off and you're at Mach 2 with your OSINT products. All of a sudden, you're dealing with alternatives that, perhaps, the customer would never have considered. It's no longer just a descriptive piece of analysis. It becomes evaluative. It can become estimative. You can even forecast, but again, there's a structure for all this. Analytic tradecraft, what we mean by that, is the art of thinking critically about an issue and how you're communicating your findings; in this case, what your key judgments are, to use the lingo from our world, in any good OSINT piece. That's what I like about reading the Janes products: you definitely follow the rigor of having a main message, the key judgment, the main takeaway that your client needs to know. Again, if you have a four to five page OSINT report, that's fine, but if your client has to read something quickly and understand what the main message for action or decision-making is, then that, very typically, will be presented up front in your first section under a header, key takeaways, however you want to describe it. That's what analytic tradecraft does. It relies on these standards, of which there are nine, born, again, out of the Analytic Integrity and Standards office, to ensure that the intelligence analysis lifecycle is objective and credible. We'll come back to objective, because one of the things that we don't think about in the OSINT world is politicization, which we hear a lot about, of course, on the classified side, but politicization occurs in the open source world too. We can talk about that, but in short, the tradecraft ensures credibility. These nine standards are techniques and methods to ensure that analysts across the board are following a structure and taking these critical thinking steps, to ensure that there isn't politicization and to help protect against the cognitive and perceptual biases that are there from the beginning. Think about collection, right, Terry?

Terry Pattar: Yeah.

Kathryn Haahr-Escolano: How many biases go into just the collection by itself.

Terry Pattar: Yeah. The collection, I mean, it's phenomenal. The first thing I always say to analysts, especially when we're delivering training and helping build new analyst units, is that we're ultimately all creatures of habit. We're all human. We have our own foibles, I guess. We may have our own preferences, our own ideas, ways of seeing things, perceiving things. It could be something as simple as regularly going to the same news sites to get your news every morning, which could shape your perception of the world. If you're not mixing it up or trying to seek out other sources, then something as simple as that can affect how you collect information.

Kathryn Haahr-Escolano: Exactly. I just want to pull on that thread before we get back to the standards. One of the things that I've encountered working with multilingual analysts in these OSINT analytic cells is that we all have cultural biases. First of all, if we are multilingual ourselves, we may have a bias in the language that we are collecting on, for whatever reason. We've lived in that country. We may have, again, a predilection for using certain sources because we've deemed them to be credible for X, Y, Z reasons, but we could also have a negative bias against using certain sources without even understanding why we have that bias. What I love about the standards is that they help us consider alternatives in collection. They help us with uncertainty when we're dealing with information gaps, and also with the uncertainty of, now that I've collected all this information, what does it really mean? If you do a link analysis of all the data points, especially if you have your Twitter world and your social media and any dark web actors, you can really evaluate the significance of what all these voices are saying: what you need to discard, what you need to judge as, perhaps, deception, and again, what is valid. The standards can help an analyst, if you will, structure the critical thinking process. That's really, in my view, the most critical value of applying these standards to the OSINT world, because-

Terry Pattar: It's so interesting you say that because I think, for me, there's always a couple of elements to that. One is, it's really important for analysts to be able to structure their own thinking. Like you say, apply critical thinking in a way that is directly related to the work they're doing. But also, once you're applying it and once you're using those standards, it makes it easier for someone else to come and review your work, which I think is another really important aspect of producing rigorous, really well formed analytic judgments: you ask someone else to look at those and give you some thoughts and feedback. Like you said, that can help you get around some of those biases you might have yourself, and the fact that we as individuals can't always see everything. We can't necessarily think of everything. We do need help, especially when we're trying to understand and figure out what's going on in complex situations. I'm really glad that the way you described it chimes with how I've certainly described it and taught other people to use those types of standards and employ them in their work. Maybe you can give us some idea, a bit more tangibly, of what we're talking about when we talk about these analytic tradecraft standards, and which are the ones that you think apply most to open source intelligence.

Kathryn Haahr-Escolano: Yes. You said something that I think is so crucial though and I feel compelled to address-

Terry Pattar: Please do. Please do.

Kathryn Haahr-Escolano: ...some of the standards. You're talking about the review of these products through the lifecycle of which we've spoken. It's just as important for the standards to help guide and direct the analyst, again, talking about the OSINT analyst in this case, from start to finish, but just as critical and indispensable a role is the editor's. Your first line editor, your second line, you as the director of the unit, whatever. If you're doing the final review, the QC of that product before you push it out to your clients, you need to have the assurance and the confidence that what you're reading is of the highest quality and integrity. These standards will help ensure that, because one of the things that I've found, going back to the bias discussion, and this happens with editors as well, is that Richards Heuer, who has really laid the groundwork for all of this-

Terry Pattar: For anyone who is not familiar with him, he wrote the Psychology of Intelligence Analysis.

Kathryn Haahr-Escolano: Thank you for mentioning that. Absolutely. I just want to quote from him, when he said, "When the hard evidence runs out, the tendency to project the analyst's own mindset takes over." That can happen from start to finish in the lifecycle, and the same in the editing role. The tighter the final analytic product that you give to your editor, the more successful, if you will, the result is going to be in terms of that highest quality and integrity. I wanted to mention that.

Terry Pattar: It's such a great quote as well because it really, I think, hints at what is a key challenge within intelligence, which I think not many people outside the field really appreciate, which is that you're often not looking at evidence. You're often actually looking at the gaps between pieces of evidence and trying to work out what goes in that gap. He-

Kathryn Haahr-Escolano: It needs [inaudible] vacuum, right?

Terry Pattar: Yeah.

Kathryn Haahr-Escolano: It gets filled with whatever, without our realizing what we're filling it with, which is usually assumptions and cherry picking of information, which takes us to my favorite standard, if you will.

Terry Pattar: Excellent.

Kathryn Haahr-Escolano: Talking about the ODNI's ICD 203, for our audience, the analytic tradecraft standards. There are nine. I'm not going to cover all nine, but we're talking about sourcing; expressing uncertainties; distinguishing between evidence, analysis, and assumptions; the fourth standard, which is called Analysis of Alternatives; then we move on to customer relevance. Our sixth standard is logical argumentation, which of course I will be discussing briefly; then consistency of analysis, accuracy of judgments and, finally, visuals, which is very doable in the open source world.
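
The nine standards Kathryn runs through here work naturally as a shared review checklist. Below is a minimal sketch in Python; the wording of each standard is paraphrased from the episode and the public ICD 203 text, and the review_product helper is hypothetical, for illustration only, not any organization's actual tooling.

```python
# A minimal sketch: the nine ICD 203 tradecraft standards as a shared
# review checklist. Wording is paraphrased; the helper is hypothetical.

ICD_203_STANDARDS = [
    "1. Describes quality and credibility of underlying sources (sourcing)",
    "2. Properly expresses and explains uncertainties",
    "3. Distinguishes between evidence, analysis, and assumptions",
    "4. Incorporates analysis of alternatives",
    "5. Demonstrates customer relevance and addresses implications",
    "6. Uses clear and logical argumentation",
    "7. Explains change to, or consistency of, analytic judgments",
    "8. Makes accurate judgments and assessments",
    "9. Incorporates effective visual information where appropriate",
]

def review_product(checks):
    """Return the standards a draft product has not yet satisfied."""
    return [s for s in ICD_203_STANDARDS if not checks.get(s, False)]

# Example: a draft that satisfies everything except the uncertainty standard.
draft = {s: True for s in ICD_203_STANDARDS}
draft[ICD_203_STANDARDS[1]] = False
for gap in review_product(draft):
    print("Needs work:", gap)
```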

Terry Pattar: Yeah. For anyone who hasn't seen these standards, you can search for them and find them online. ICD 203, if people look that up, they'll be able to find the details, but it really, I think, encapsulates the process. It describes and lays out the key and fundamental things that people need to think about when they're analyzing information and producing intelligence, which, again, as I mentioned at the start, is something that I think people don't think about enough. There's an obsession with gathering lots of information and then just trying to shovel all of that into a report. Whereas, I think, the key points that you mentioned there, encapsulated in ICD 203, really help lay out what helps an analyst produce something that is useful to an end customer.

Kathryn Haahr-Escolano: Correct. Absolutely. My favorite is sourcing. The importance of the sourcing standard, standard one, is that it forces the analyst, and I say force in a good way here, to evaluate and describe the quality and the credibility of the underlying data that he or she is using to undergird their key judgments. In OSINT, we do use key judgments. We have a main message. We have the what, the main takeaway, and then we have the so what, the implications. Now, sorry to get a little techie here, but sourcing also has its own intelligence community directive, which is ICD 206, for those of you who like to follow these.

Terry Pattar: This is great, but yeah. Again, this is all incredibly useful stuff that's out there that people can use. I really want to grow the awareness. It's great to be talking through these and getting your thoughts on them. Yeah, so ICD 206, sorry. I'm interrupting you and dragging you away from what you were saying.

Kathryn Haahr-Escolano: No, no. Yeah. That's what it is. It has its own ICD, if you will, but in brief, I'm going to focus on two of the three characteristics of sourcing. The first is the in-text, what we call source descriptors, which you do use. Of course, it's the "according to," where you are describing the source. Then we have, and I don't think I've seen these in many OSINT products writ large in the US or other countries, although I certainly encourage it and it's something that I am training my customers on now, the source summary statement, which is one of my favorite analytic challenges because it's its own little mini analytic universe. The importance of source descriptors, as you well know, is that you're letting the client, your customer, know: this judgment is based on this source, and I'm using this source because, in our view, it was a credible or reliable source and reliable reporting. That's very simple to do. Any OSINT product should have these in-text characterizations. If not, then you're not being transparent with your customer. Your customer has every right to know that your analysis is defensible, and telling them what the sourcing is enables you to be credible and to defend your analytic judgments.

Terry Pattar: What's important there, I think, as well, when it comes to open source intelligence in particular, is that the verification itself is a challenge. Obviously, that's where a lot of people spend their time in open source intelligence, verifying information and sourcing, because, as we mentioned, it comes from such a wide variety of places. Within Janes, we try not to use information we can't verify, but for anyone who does need to, or whenever we do in a report, we would make sure that that's clear as well. This has maybe come from a blog or a social media account, let's say, as an example, which we've not been able to verify or we haven't seen corroborated anywhere else, but maybe it's so important that you decide to include it. Yeah, I think that's a really, really key thing to highlight in terms of the relationship between those analytic standards and open source intelligence.

Kathryn Haahr-Escolano: That's an excellent, excellent point. There's a whole subset to sourcing, right?

Terry Pattar: Yeah.

Kathryn Haahr-Escolano: What is the veracity of your sources? Each organization and government agency has its own, if you will, metrics for assessing and evaluating the veracity of sources. I've helped some government agencies with crafting methodologies for how to do that. I think Janes is up in the stratosphere of having nailed that variable, but you raise a really good point, which is, at times, we want to rely, not that we need to, but we want to rely on reporting where the credibility may be unclear and we may not be fully able to judge it, but it helps to anchor the analysis, the judgment. This is a great segue, Terry, into the source summary statement. The source summary statement is where you talk about the weight, the value, of the reporting that you have used in your analysis, because this is where the analyst will very succinctly describe the body of sourcing. It could be four different Twitter sources, speaking from my world, coming from, let's call it, narco Twitter, for example, where it's hard to determine reliability, but there are metrics to get you close to that. You're letting your reader know: I'm using these because, writ large, they help me tell my argument and they seem to suggest a credible argument. That's point one. Then we move into another critical element of the source summary statement, which is, what is the confidence level that the OSINT analyst has in that judgment based on the quality, meaning the strengths or the weaknesses, of your reporting? I'm curious to hear from you whether you and your analysts have worked with confidence levels in your OSINT judgments.
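
To make the source summary statement concrete, here is a minimal sketch in Python of how per-source descriptors might roll up into one. The rating vocabulary, field names, and helper are assumptions for illustration; as Kathryn notes, each organization defines its own veracity metrics.

```python
# A minimal sketch: in-text source descriptors plus a rolled-up source
# summary statement. Rating vocabulary and helper are illustrative only.
from dataclasses import dataclass

@dataclass
class Source:
    name: str         # e.g. "regional daily" or "narco Twitter account"
    descriptor: str   # the in-text "according to..." characterization
    reliability: str  # e.g. "high", "moderate", "unverified"

def source_summary_statement(sources, confidence):
    """Draft the body-of-sourcing summary that supports a key judgment."""
    counts = {}
    for s in sources:
        counts[s.reliability] = counts.get(s.reliability, 0) + 1
    mix = ", ".join(f"{n} {r}-reliability source(s)"
                    for r, n in sorted(counts.items()))
    return (f"This judgment rests on {mix}. "
            f"We have {confidence} confidence in this judgment.")

sources = [
    Source("narco Twitter account",
           "according to a social media account we could not verify",
           "unverified"),
    Source("regional daily",
           "according to an established regional newspaper",
           "moderate"),
]
print(source_summary_statement(sources, "moderate"))
```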

Terry Pattar: Yeah. This is really interesting because I think it's really important as a way of communicating to the reader how much they should take away from your analysis. Too often, especially for people who are more experienced in the field, they'll think, well, sourcing and describing sourcing, "This is bread and butter. This is quite basic." One thing I would say is that great intelligence really comes down to doing the basics really well. Often, that's where people fall down. Sometimes it's especially people who are quite experienced and don't necessarily pay as much attention to these things as they once did, but maybe we're all guilty of that at various points. Yeah, I think the confidence statement is really important, because it's one thing to say, "Okay, well, we've got this information which tells us X, Y, or Zed," but it is from sources that we think are of unknown reliability, or we can't necessarily verify what they're saying at that point. I think, then, the analyst's confidence statement really plays an important part in enabling the reader to understand what they should take away from it and then what they need to do, perhaps, to act upon that information or make a decision. Yeah, I think it's a really important aspect of any OSINT report. I think it's sometimes challenging to explain it to analysts who are new, and also to help them implement it routinely. I don't know if that's something you've come across, maybe, sometimes as a challenge, particularly when you're taking some of these standards out from their intelligence community origins to apply to open source intelligence in other contexts. Is it hard to get people to really understand what you mean when you're talking about that difference between the assessment of the source versus their confidence, and how to actually express it in a report?

Kathryn Haahr-Escolano: Yes. Yes, and very real world with a recent client, creating an all-source analytic cell, of which OSINT is a part. First of all, to understand how we bring, again, the standards to that collection process: not only what my collection methodologies are, but how I am going to assess these sources, which is a whole other conversation, and what the relationship is between that assessment and the confidence level. That is a real struggle for the new, younger analysts that we, at least in the US, are bringing in, if you will, straight out of grad school, who have worked in the private sector, who aren't hailing from the IC, for example, and are finding a new career and bringing that analytic experience with them to the OSINT world, where there needs to be this fundamental understanding, in this case, of ICD 203, but also of how to apply confidence levels. What's really key there is an analytic curriculum. That's key for me when I develop these cells and work with the different clients. Everyone needs an analytic curriculum to understand: how am I going to do step one, which is a research design, and how am I going to move out from asking myself, what is that key intelligence question, which any OSINT analyst should be asking themselves as well? Who's my client, what are they asking of me, and how am I going to respond to that? What do they need to know, et cetera, et cetera? It's not only having this analytic curriculum, which involves every member of the team. It's having a mentor, as I call it, an analytic mentor. It's brainstorming with your colleagues at every point of the way. It's-

Terry Pattar: Sorry, but I got to interrupt.

Kathryn Haahr-Escolano: Yeah.

Terry Pattar: I guess learning from the experience of others around you. Is that what you-

Kathryn Haahr-Escolano: Yes.

Terry Pattar: Yeah.

Kathryn Haahr-Escolano: That is-

Terry Pattar: Yeah. Yeah, that's really important.

Kathryn Haahr-Escolano: For me, it's having teams work together. Again, this goes back to one of the points we were talking about earlier, about an OSINT team being divorced, if you will, from the rest of the INTs. Some people would argue it works well. Other people would argue that it doesn't. It depends on the mission, but looking at an OSINT mission, it's usually going to be multilingual analysts across the whole lifecycle. I always encourage teamwork and brainstorming together, again, to make sure that if somebody has a question, "I'm not sure I really understand how to think about a confidence level as I apply it to my judgment," you don't always have to go to the experts. Expertise is overrated. You can have specialists at the analyst level, where people bring to bear their own critical thinking experiences, but they're all bound together, again, by the structure. The rigor is established, directed, and guided, if you will, by the team manager, but you need training and you need active mentorship. I think that's a really important point that you raise, that it is a challenge to explain, but also to train.

Terry Pattar: Yeah.

Kathryn Haahr-Escolano: It's no different from any of the other disciplines within OSINT, such as how to go out and use the dark web, for example, and-

Terry Pattar: Do all of the collection aspects.

Kathryn Haahr-Escolano: You got to protect yourself.

Terry Pattar: Yeah. Yeah, and how to protect yourself with all of those things. Yeah, indeed. Indeed.

Kathryn Haahr-Escolano: I think a final point on confidence is that it doesn't apply to all finished OSINT products, because, as I've spoken about before in other conversations, you have different types of products, right?

Terry Pattar: Mm-hmm (affirmative).

Kathryn Haahr-Escolano: You have that spectrum of analytic products. A descriptive product is not necessarily going to need or require a confidence level. You can have confidence levels in the main body of the text. If it's that important an estimate or piece of evaluative analysis that you want your client to know, you say, I'm telling you this, and my confidence in this as a Janes OSINT expert is high, because of... [crosstalk]

Terry Pattar: Yes. Yeah.

Kathryn Haahr-Escolano: Knowing when and how to apply these characteristics of the standards, that becomes important for how you begin to use them.

Terry Pattar: Is there a challenge there, also, in what you found in terms of either training or from your own experience, where analysts often can be nervous about putting assessments out in general? Not just talking about confidence, but just generally, especially when it comes to more estimative or future-oriented analysis, when the question is, what's going to happen next?

Kathryn Haahr-Escolano: Yes.

Terry Pattar: There's that nervousness about saying what might happen next. The same might apply when it comes to making those confidence judgements. There's a tendency to hedge bets a little bit, for people to sit on the fence a little. Well, I guess that does represent their uncertainty, but it doesn't always help the audience in terms of determining what to do next with the intelligence that they're looking at. How best to address that, I suppose, is my question following from that?

Kathryn Haahr-Escolano: Terry, you very nicely led the conversation into standard two: expressing uncertainties associated with major analytic judgements.

Terry Pattar: Well, I love talking about uncertainty because I think uncertainty is at the heart of intelligence. It's what we're dealing in. I sometimes say to people that I think intelligence is not necessarily the art of bringing more certainty to where we have lots of uncertainty. It's really about making our customers, I think, comfortable with a certain level of uncertainty.

Kathryn Haahr-Escolano: Nicely said. That is absolutely correct. The analyst shouldn't be nervous that they have to get it right.

Terry Pattar: Right.

Kathryn Haahr-Escolano: Because usually, in the classified world or in the OSINT world, it's rare that we're going to get it right. If we get it right, based on my experience, there's a likelihood that it's based on somebody else's agenda.

Terry Pattar: Okay.

Kathryn Haahr-Escolano: Which is an interesting way of thinking about it. Perhaps we haven't been curious or critical enough in asking, when we get things right, well, why is that? How did we get it right? That's another conversation, but I really like what you said. It's making analysts and the customer comfortable with the fact that there is uncertainty in the analysis. Again, we're probably talking not so much about descriptive products, but the ones that are full-fledged explanatory, evaluative and, as you say, forecasting, when you start talking about indicators analysis and even opportunity analysis, where certain things have to happen for you to take advantage of taking certain steps. This is where we get into, again, for your audience, the second standard. Expressing uncertainties is really about getting their comfort level around understanding that there's a likelihood of something happening, and a probability. There's a rating scale that we use in the US, and I'm familiar with that one; your country, and even the Canadians, and in large part NATO, use this standardization, if you will, of likelihood and probability. It's a rating scale running from the lowest likelihood, which has a percentage value assigned to it, almost no chance of something happening, which is remote, roughly 1 to 5%, all the way through to the other end of the scale, which is almost certainly, where you're nearly certain it is going to happen. You give yourself that wiggle room of a percentage at the higher end. What that does is really help the analyst think about how they are going to qualify their analytic judgment. This is likely to happen. Again, we come back to the importance of your sources. What is the confidence in what you're saying in your key message? Well, if I have a large body of reporting that gives me high confidence that X, if you're forecasting or anticipating that something is going to happen, then the analyst should feel comfortable saying that. Because again, your organization is standing behind the credibility, if you will, of that judgment. A lot goes into making sure that that likelihood-
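
The rating scale Kathryn describes can be made concrete. A minimal sketch in Python, assuming the seven-band expression of likelihood commonly associated with ICD 203; treat the exact terms and percentage bands as illustrative rather than an authoritative reproduction.

```python
# A minimal sketch of a standardized likelihood scale: each band maps a
# term of art to an approximate probability range, in percent.

LIKELIHOOD_SCALE = [
    ("almost no chance / remote",                 1,  5),
    ("very unlikely / highly improbable",         5, 20),
    ("unlikely / improbable",                    20, 45),
    ("roughly even chance / roughly even odds",  45, 55),
    ("likely / probable",                        55, 80),
    ("very likely / highly probable",            80, 95),
    ("almost certain / nearly certain",          95, 99),
]

def likelihood_term(probability_pct):
    """Map a rough probability estimate to the standard term of art."""
    for term, low, high in LIKELIHOOD_SCALE:
        if low <= probability_pct <= high:
            return term
    return "outside the scale: state the judgment differently"

print(likelihood_term(70))  # likely / probable
```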

Terry Pattar: Yeah, of course.

Kathryn Haahr-Escolano: Yeah. Does that make sense-

Terry Pattar: Yeah. You're right. Definitely.

Kathryn Haahr-Escolano: ...in terms of how you're going to-

Terry Pattar: Yeah. Definitely.

Kathryn Haahr-Escolano: Okay.

Terry Pattar: Yeah.

Kathryn Haahr-Escolano: Okay.

Terry Pattar: No, no. I think, yeah, for anyone coming into intelligence, this is one of the key things they have to learn: how to express that and make it understandable to an audience. I think it's been codified in the standards in a way that helps structure it, so that everyone has that common understanding of, "Okay, when I see it written like this, this is what it really means in terms of likelihood," et cetera. Yeah, it definitely is a useful aspect, I think, of the standards.

Kathryn Haahr-Escolano: Yes. And what if you are uncertain about how even to look at the rating scale, and the analyst is not comfortable with the body of reporting, for various reasons, because it's so unbalanced in terms of reliability and credibility that it's hard to make sense of it? We probably don't have the time to get into structured analytic techniques, but there are some simple structured analytic techniques that you can apply to each of the standards to help analysts tease out their thinking about each of them. For example, with these uncertainties, what's going to force, and again, I'm using the word force, the analyst to think about what they know and what they don't know? One thing to do is to go back and do a qualitative information check. Let's be really rigorous and think about what it is that the sourcing is telling us and what it's not telling us. If I'm walking myself through that, it's best to do it with one other individual or as a team; you can't do it alone. The whole point, again, going back to teaming, is that if you're going in with your own biases, do you really think that you're going to be able to do a structured analytic technique against yourself?
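
A minimal sketch of the kind of qualitative information check Kathryn mentions: lay out what each line of reporting does and does not tell you before settling on a judgment. The scenario, field names, and corroboration flag are invented for illustration.

```python
# A minimal sketch: a qualitative information check that separates
# corroborated reporting from single-thread reporting before judging.
# Scenario and field names are invented, for illustration only.

key_question = "Is the group expanding into the neighboring region?"

reporting = [
    {"source": "vetted regional outlet",
     "tells_us": "new safe houses rented", "corroborated": True},
    {"source": "single social media account",
     "tells_us": "recruiters active locally", "corroborated": False},
]

corroborated = [r["tells_us"] for r in reporting if r["corroborated"]]
single_thread = [r["tells_us"] for r in reporting if not r["corroborated"]]

print("Key question:", key_question)
print("Corroborated reporting:", corroborated)
print("Single-thread reporting (caveat or lower confidence):", single_thread)
# What the sourcing is NOT telling us, the gaps, should be listed too.
```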

Terry Pattar: Well, yeah. This is a really interesting area. We've talked about structured analytic techniques on a podcast before, and it's something that has come up a lot, I think, in the last decade since we started delivering training from Janes, whether in open source intelligence or intelligence analysis more generally. Everyone has been keen to understand: okay, structured analytic techniques, how do we use them, how do we apply them? I think it's challenging in the sense that people almost get a bit too focused on individual techniques and trying to employ those individual techniques. What we've always been keen to emphasize is that, actually, when you look at the principles that underlie them, they're the same across all the techniques, and the things that you just identified there, in terms of, "Okay, what information do I have? What is it telling me? What are my gaps? What assumptions are we having to make?" All of those kinds of things come up across all of those techniques.

Kathryn Haahr-Escolano: Yes.

Terry Pattar: I think, as long as people think about it in those terms, as much as individual techniques do help in specific instances, actually, just having that thought process, I think, and especially that culture within a team, to review and go back over what they're looking at before they send out a product, and to build in the time to do it, that's really important. I think that's one of the key takeaways I tend to find whenever I'm teaching people about structured analytic techniques: it's not necessarily about an individual technique. There's no silver bullet technique, I think. There's a lot to be learned from all of them, but are there ones in particular that you've found particularly beneficial for specific tasks or activities that people have been undertaking with open source intelligence?

Kathryn Haahr-Escolano: Yes. Absolutely. Again, excellent point that there is no silver bullet, and there is a tendency to want to pull from the SAT playbook, if you will, taking a cool-sounding SAT and thinking, "Oh, I'm just going to apply it to this part of the life cycle."

Terry Pattar: Yeah, and then hoping it will give you an answer.

Kathryn Haahr-Escolano: Exactly. That, if anything, is going to frustrate the analyst, and it can misdirect as well.

Terry Pattar: Interesting.

Kathryn Haahr-Escolano: The simplest task for analysts is to start with an analytic checklist, which is based, literally, on this analytic architecture that we're talking about. Each analyst can have his or her own checklist, but I encourage any OSINT cell, if you will, to have a common analytic checklist to make sure that all analysts are operating from the same fundamentals, the same analytic principles and critical thinking steps. Because otherwise, you don't have consistency, and the final job of the editor or director becomes harder when you have two to three products that are inconsistent. Something important that we haven't talked about is analytic lines. I'm curious, how often do you follow up with a client on a topic that they have asked you to report on, on a monthly basis, on a quarterly basis? What we mean by the analytic line is: what have you reported on in the past, has anything changed from what you reported in the past, and if so, what does that mean for that analytic line? Imagine if you have an analyst who is struggling with, "Oh, I need to apply a SAT to, let's say, standard two," and it misdirects them and shoots them down a rabbit hole where they lose sight of the fact that it's as simple as, "My analytic line changed from a month ago. I have no reporting that is either updating it or refuting it." It really is as simple as that.
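
A minimal sketch of the analytic line check Kathryn describes: compare the current draft judgment against what was published previously, and say explicitly whether the line has changed. The data shape and helper are hypothetical, for illustration only.

```python
# A minimal sketch: flag whether the analytic line has changed since the
# last product on the topic. Data shape and helper are hypothetical.
from dataclasses import dataclass

@dataclass
class Judgment:
    topic: str
    line: str        # the standing analytic line
    likelihood: str  # e.g. "likely"
    as_of: str       # publication date

def line_change(previous, current):
    if (current.line == previous.line
            and current.likelihood == previous.likelihood):
        return f"No change since {previous.as_of}; say so explicitly."
    return (f"Analytic line changed since {previous.as_of}: "
            f"explain what new reporting updates or refutes.")

prev = Judgment("Group X financing", "relies mainly on extortion",
                "likely", "2021-05-01")
curr = Judgment("Group X financing", "shifting toward online fundraising",
                "likely", "2021-06-01")
print(line_change(prev, curr))
```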

Terry Pattar: I think that's so useful as well, especially when analysts are engaged in that task of, like you said, maybe every month writing descriptive reports on a particular location, country, situation, or theme. They're not always conscious of keeping in mind that line and the previous context. I think it's almost always too much of a snapshot of the present moment. Like you say, they then get lost a little bit in trying to apply different techniques to understand what it might mean and address the so what for the customer, when sometimes it can be as simple as looking backwards and figuring out, "Okay, well, what has changed?"

Kathryn Haahr-Escolano: Absolutely. One that I think is useful, and it's actually one of the standards, we'll jump over standard three very quickly for a moment and go to standard four, which is analysis of alternatives. That, in and of itself, inherently introduces the analyst, across the analytic life cycle, to using different types of structured analytic techniques, from red teaming to devil's advocacy. One I really like is argument mapping which, if you're doing a more complex analytic product that is beyond descriptive, helps the analyst conceptualize what their argument is, what their assumptions are. Is my judgment solid? Has it changed from what I reported before, and why? What I like about analysis of alternatives, standard four, is that, again, it compels the analyst to ask themselves, "Hey, at least I'm going to ask myself, how could I get this wrong, and what if I'm wildly wrong? What are the implications of that?" For the audience, the standards don't mean you go through one, two, three, four in order. That's just the numbering.

Terry Pattar: Yeah. They just happen to be laid out in a certain order. They're all important.

Kathryn Haahr-Escolano: You have to be-

Terry Pattar: They're all important.

Kathryn Haahr-Escolano: Exactly. Exactly.

Terry Pattar: Yeah. It's not necessarily a prioritization.

Kathryn Haahr-Escolano: There's a certain logic to sourcing being the first step, right?

Terry Pattar: Mm-hmm (affirmative). Definitely.

Kathryn Haahr-Escolano: I would ask that you and your audience think about... I've mentioned it before, I really like indicators and signposts analysis, because it really forces you to think about, "What can I identify that could happen next based on what I've seen, and how do I position myself, and potentially pivot, if I have new reporting? What am I looking for?" Already, without realizing it, the analyst begins to think somewhat alternatively, if you will. They start to challenge their thinking a little bit: there's more out there, and I'm not necessarily beholden to this one source or to this analytic line. There are many, many other ones that, of course, we could discuss. Everybody loves ACH, analysis of competing hypotheses, but that's for multi-INT analysis and for very highly analytic OSINT reports, where you really want to make sure that your hypotheses are, in fact, undergirded by the evidence that you are positing. Do you have any assumptions that have gone into those hypotheses?

Terry Pattar: Yeah. It's about testing those hypotheses too, right?

Kathryn Haahr-Escolano: Yeah.

Terry Pattar: In terms of really trying to dig into our evidence and figure out, " Okay, what does it really tell me about each of these hypotheses?"

Kathryn Haahr-Escolano: Exactly. I would encourage analysts, as you said, not to be so concerned with using two to three SATs for each part of the life cycle. It's what makes sense. Again, go back to your analytic checklist. Brainstorming is really, for me, working with analysts and their managers, one of the most important parts of the process, along with having a conceptualization process. I've done my research design; let me do a pre-mortem, for example. Why wait for the surprise, for the bad thing to happen, so that you have to do a post-mortem? Again, you need time to do this. We'll have to come back to talking about timeliness. How can you do all of this when you have a one-day deadline?

Terry Pattar: Yeah.

Kathryn Haahr-Escolano: We're talking in the perfect world where the analyst has enough lead time to be able to work off of this checklist. How am I going to challenge myself with respect to sourcing and assumptions and my judgment? How would I put it all together and give my editor-

Terry Pattar: I think that's really useful though. I've always emphasized this in the training that we do. Often, in a training course, we're able to give people the ideal world scenario: okay, this is what you would do at this stage, and this is the technique you might want to apply here. The question often comes back as, "Okay, what if I don't have the time to do that?" Well, I think if people are trained and drilled in how to use those techniques, then it makes it easier to improvise when you are short of time, but to still have in mind those principles. Especially, I think, what's great about the SATs is that they help analysts when, maybe, you're looking at a mass of information and you're just not sure how to make sense of it. You're not sure where to start. You're not sure how to check that you haven't missed anything, and you want to check and test your assumptions, your hypotheses, et cetera. Like you said, ACH is a popular technique and one that we've talked about a lot in our training over the years. It did feature in our previous podcast episode when we talked to Dr. Martha Whitesmith, and some prior research, I think, showed that it's maybe not been as useful as some people would expect as a technique when it's been employed. I do wonder whether some of that is, perhaps, down to, like you said, it being useful in certain circumstances and people maybe trying to apply it to circumstances where it's not the right technique to use. To your point, originally, about how, perhaps, people do latch onto the techniques a little bit too much, in terms of thinking that they're going to work through them rather than selecting the one that might be most appropriate to that situation.

Kathryn Haahr-Escolano: Terry, this all goes back to planning, planning, planning. When the task comes in, what is my response time? What is it the customer wants? The analyst needs to be clear about what we call, in the intel world, the key intelligence question. It's not always evident. It may not always be sufficiently articulated. A quick note on that: unfortunately, at times, the task is not sufficiently articulated because there might be an attempt by the client to receive an answer, if you will, that is more palatable to what that customer wants, and then the end result is not the objective analysis that you should be producing, if that makes any sense.

Terry Pattar: Yeah. You mean, it's more about the process without trying to predict the outcome.

Kathryn Haahr-Escolano: Yes, but also, don't let the customer predict your analytic outcome, what your analytic judgment is.

Terry Pattar: Yeah, very important.

Kathryn Haahr-Escolano: Clarity and planning from the outset, from when that task comes in, is, in my experience, indispensable. If you don't have that, then the boundaries of credibility and integrity melt away and the analyst ends up playing in a dangerous land, because they don't have anything to anchor themselves to. Again, this is where the standards come in, and I believe they are very critical to the OSINT life cycle. Again, just thinking about it from that task and planning: what does that mean? Now I know I need to have an answer and what it's around. It's either an update or a new analytic product that they want me to write, and what does a new analytic product mean? And structure, structure, structure.

Terry Pattar: I'm so glad to hear you emphasize two things in particular. One, planning and two, structure. I'm really glad to hear you say those words because I think everyone I've trained over the years will have probably grown tired of me, repeatedly emphasizing those points. It's nice to hear someone else validating that as well, but I wanted to maybe bring you back to the tradecraft standards and ask, you mentioned the first few. Are there any others, in particular, that you would pick out that you think are most relevant to OSINT?

Kathryn Haahr-Escolano: This takes us to the craft of writing. As you know, in the intel world, we use the inverted pyramid, which is, you start with the most important claim, the main message, the what, if you will, and then you follow on with the implication of that, the so what. Otherwise, you've moved into the academic world, in terms of what it is that your client needs to know. There are two other standards that we didn't discuss, that I very briefly mentioned, and they come together here. Standard three is distinguishing between the reporting, in other words your evidence, your assumptions, and your analysis. The analyst needs to be very clear: if he or she has any assumptions, they need to identify those in the analytic piece. Again, this is not required or even needed for descriptive analysis. We're talking about analytic products that are evaluative in nature, where you're discussing the why of something and then beginning to forecast. You might have assumptions about the reporting and about the sourcing that you need to note. You may also have assumptions about countries or an actor, a country or an actual individual, about their behavior based on past reporting. Again, it is about reporting, but you can make an assumption about a behavior because you've documented it in the past. It's very important that the analyst not confuse what an assumption is with what a piece of evidence is or what a judgment is. They have to have that clarity of thought as to what they're writing. Otherwise, the main point is completely lost on the client, and the potential for misleading a client into taking action on something can be significant. How do we deal with that? Well, standard six is logical argumentation. This is really just about how the analyst tells his or her story line. The importance of this is, you're reducing everything that you know into bite-size, rational, logical comments, and there should be this flow of logic from your main argument. Here's the what. This is the most important thing you need to know. If you walk away from this two-pager, this is what you need to know, but you also need to know what the implications of this what are, the so what, because this is what's going to enable the company to pivot, to do something one way or the other. Then what should follow is the context of the situation they're describing, and then the evidence. In the IC, we use bullets that follow from this introductory paragraph, if you will, which is always written as a topic sentence. Not to go nerdy on your audience, but topic sentences are structured in such a way that there's no ambiguity about what your main claim is. It's active voice. Again, these are the fundamentals, which I know you train in. You can see how ICD 203 and ICD 206 are closely, intimately, integrated with the fundamentals of writing and critical thinking. I would encourage the audience to look at those two directives as well and understand how they all begin to fit together. Logical argumentation enables you to get to standard five, stepping back, which is customer relevance. I'll end on that, which is: how credible and useful is that product to your client?
If it's not clear what your message is, if you don't have any evidence supporting your main judgment, and the client walks away saying, this doesn't make any sense to me, what am I supposed to do with this? In my world, you've failed. It can be embarrassing.
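
A minimal sketch of the inverted-pyramid structure Kathryn lays out: main judgment first, then the implications, context, and supporting bullets. The section names and helper are illustrative, not a Janes or IC template.

```python
# A minimal sketch: assemble a product in inverted-pyramid order, with the
# key judgment and so-what up front. Section names are illustrative only.

def assemble_product(key_judgment, so_what, context, evidence):
    bullets = "\n".join(f"  - {e}" for e in evidence)
    return (f"KEY JUDGMENT: {key_judgment}\n"
            f"IMPLICATIONS: {so_what}\n"
            f"CONTEXT: {context}\n"
            f"SUPPORTING REPORTING:\n{bullets}")

print(assemble_product(
    key_judgment="Group X will very likely expand into region Y this year.",
    so_what="Clients operating in region Y face elevated extortion risk.",
    context="Group X has absorbed two rival networks since January.",
    evidence=["Local press reporting on recruitment drives",
              "Corroborated social media activity by known members"],
))
```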

Terry Pattar: Yeah. That's so important. Well, I think that comes back to what you were saying before about planning. It's really worth analysts conceptualizing at the outset: what is going to be the relevance of this to my customer as I dig through it? This is the delicate balance, I suppose, without it biasing how they judge the information, because they shouldn't just judge it based on what they think the customer wants to hear. That's a really important thing.

Kathryn Haahr-Escolano: Right.

Terry Pattar: I think it touches on what you mentioned right at the beginning of the podcast, which was around politicization and making sure that we're not just trying to give the customer what they want to hear, but actually giving them a very objective piece of analysis, as objective as we can make it. The point you made about logical argumentation, I think, is so vital, not just for analysts to be able to express their analysis, but also because, once they understand how to do that and apply those standards to their own work, I think it helps them understand the information as well. It helps them make sense of it. It's really useful in that way, and also in building that experience, whether it's through training or on the job, of being quite diligent in explaining the logical argumentation.

Kathryn Haahr-Escolano: You are alluding to something that is really critical. It's not just about the life cycle. It's about establishing and building an analytic culture and an analytic cell. I'm a people person. I love working with analysts, newly minted analysts, mid-level. I taught the introduction to intelligence analysis at my grad school, and how bright this generation is, especially in the OSINT world, where, if you think about what the relevancy of information means and how they think about it, relevancy is basically the timeframe, right? What used to be information that might be valid for six months or a year, nowadays its relevancy might be a few hours. All of these analysts inspire me to think about how we could take this constellation of smart minds in an organization and help them grow, because we want them to stay with us, but also, we're building an analytic culture. It's going to be unique to the organization, but it's there for a reason, and it helps everybody enjoy what they're doing and produce the highest quality analysis that they can. We take pride in that, ultimately.

Terry Pattar: That's such an important point. You've hit on so many things right at the end there, which are some of my favorite subjects to talk about and really focus on: building successful teams, building that high performance culture, building that culture around doing intelligence work well, and making the best of the people that you have in the team. Like you said, there are so many bright, intelligent people, like those we're speaking about now, not just within my organization but within other organizations we work with. I think there's so much potential for them to grow. It's all about that culture aspect that you hit upon, which is, as I said, one of my favorite subjects and something that could easily drag us into another conversation lasting hours. Before it does, we should probably draw it to a close here, but I want to say, thank you, Kathryn. This has been a really, really interesting discussion. You mentioned a few times that we were getting a little bit geeky in some aspects of it, but I think that's hopefully what our audience wants as well, because when I have discussions with others, it's definitely where we end up going, into that lovely detail. Thank you so much for all of your thoughts and insight into the work that you do and what you've been doing over the course of your career.

Kathryn Haahr-Escolano: I thoroughly enjoyed this, and it has been, as I said, such a tremendous experience to talk with you and your audience. Thank you so much, Terry, and Janes, for having me.

Terry Pattar: Yeah. It's been a real pleasure. Thank you.

DESCRIPTION

In this episode of the Janes podcast Terry Pattar and Kathryn Haahr-Escolano discuss analytic standards, OSINT and the Intelligence Community Directive 203 (ICD 203).

Kathryn Haahr-Escolano has worked in the US Intelligence Community, and is a practitioner of the ODNI/AIS Analytic Tradecraft standards and their application to the craft of intelligence analysis and the OSINT lifecycle for government, academic, and international clients.

Today's Host


Harry Kemsley

President of Government & National Security, Janes

Today's Guests


Kathryn Haahr-Escolano

Practitioner of the ODNI/AIS Analytic Tradecraft standards