III.1. Comparative methods and process tracing


Tasha Fairfield
LSE
Posts: 17
Joined: Mon Sep 05, 2016 4:05 pm

Formalizing our thinking? Bayesianism and alternatives

Posted: Tue Sep 06, 2016 3:38 pm

How useful are frameworks for formalizing our thinking in process tracing research (e.g. Bayesian analysis; Van Evera’s process tracing tests), and how desirable is their explicit application in empirical work?

For reference, examples of these approaches and how and when they might be applied in practice include:
(1) Andrew Bennett: “Disciplining Our Conjectures: Systematizing Process Tracing with Bayesian Analysis,” in Andrew Bennett and Jeffrey Checkel, eds., Process Tracing: From Metaphor to Analytic Tool, Cambridge University Press, 2015, 276–98.
(2) Tasha Fairfield & Andrew Charman: “Explicit Bayesian Analysis for Process Tracing,” available at: http://tashafairfield.wixsite.com/home/research


Guest

Re: Formalizing our thinking? Bayesianism and alternatives

Posted: Sat Oct 15, 2016 2:04 pm

Andy Bennett, Georgetown University. I am a member of the QTD steering committee but I am representing only my own views here.

I am surprised that no one else has weighed in on Tasha's question, as I think this is the most important question for actual process tracing practices that the new literature on Bayesian process tracing poses. To give some background for those who are new to these recent methodological developments, Tim McKeown discussed informal "folk Bayesianism" -- let's call it Bayesian PT 1.0 -- as an underlying logic of process tracing in his January 1999 International Organization critique of King, Keohane, and Verba's book. I formalized this a bit in my 2008 piece in the Oxford Handbook of Political Methodology, drawing on Steve Van Evera's discussion of four iconic types of evidentiary tests: smoking gun, hoop, straw in the wind, and doubly decisive tests. Let's call this Bayesianism 2.0. I outlined the formal Bayesian logic that underlies these tests and applied it to well-known research examples. David Collier found the four kinds of tests useful and developed some teaching exercises using them (https://papers.ssrn.com/sol3/papers.cfm ... id=1944646).
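
For readers who want to see the formal logic behind these tests, here is a minimal sketch in Python; the probabilities are purely illustrative and not drawn from any of the works cited. Each test can be characterized by the pair of likelihoods P(E|H) and P(E|~H), and Bayes' rule turns those, together with a prior, into a posterior:

# Minimal sketch of Bayesian updating with likelihood ratios.
# All probabilities are illustrative, not taken from any cited study.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: posterior for H from the prior and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5  # agnostic prior on hypothesis H

# Van Evera's tests, characterized by the two likelihoods:
#   hoop:              P(E|H) high, P(E|~H) middling -> failing is damning,
#                      passing helps only a little
#   smoking gun:       P(E|H) middling, P(E|~H) very low -> passing is
#                      strong confirmation, failing costs little
#   straw in the wind: likelihoods close together -> weakly probative
#   doubly decisive:   P(E|H) high, P(E|~H) very low -> decisive either way
tests = {
    "hoop":              (0.95, 0.40),
    "smoking gun":       (0.40, 0.02),
    "straw in the wind": (0.60, 0.40),
    "doubly decisive":   (0.95, 0.02),
}

for name, (p_eh, p_enh) in tests.items():
    print(f"{name:18s} pass: P(H|E) = {posterior(prior, p_eh, p_enh):.2f}   "
          f"fail: P(H|~E) = {posterior(prior, 1 - p_eh, 1 - p_enh):.2f}")

Running this shows, for instance, that passing a hoop test only nudges the posterior upward while failing one is devastating -- exactly the asymmetry Van Evera's labels are meant to capture.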

These four kinds of tests appeared to have intuitive appeal for many scholars and proved easy to teach without getting bogged down in formal Bayesian mathematics, and scholars began using these tests explicitly to discuss the probative value of evidence in their work. The most systematic example of Bayesianism 2.0 that I know of is Tasha's "Going Where the Money Is: Strategies for Taxing Economic Elites in Unequal Democracies," World Development, July 2013. In this article, Tasha has a short appendix identifying which pieces of evidence qualify as smoking gun tests, hoop tests, etc.

Yet even as process tracing practitioners were starting to catch up to the latest methodological thinking and to use different kinds of evidentiary tests more self-consciously and explicitly, methodologists were working to formalize even more fully how Bayesian logic might be applied in process tracing. This latest literature -- Bayesianism 3.0 -- includes the two pieces Tasha cites in her post above, and I would also include here Alan Jacobs and Macartan Humphreys's piece "Mixing Methods: A Bayesian Approach," in the American Political Science Review 2015, though this addresses wider issues than process tracing.

Of these works, Tasha's paper with Andrew Charman carries out the most complete and formal Bayesian analysis of process tracing evidence in an actual piece of empirical research, reanalyzing Tasha's 2013 World Development article using more formal Bayesian mathematics. This is in my view an extremely important paper, as it provides a proof of principle that formal Bayesian process tracing can in fact be done. At the same time, in Tasha's and Andrew's own view (and Tasha can elaborate or correct me here) the paper shows that doing this is a ton of work, and in this particular instance the more formal Bayesian analysis did not substantially change the conclusions of the informal Bayesian analysis that Tasha had already done in the appendix to her 2013 article.

My own view is somewhat more optimistic about the value of being more fully, formally, and explicitly Bayesian in our process tracing. Borrowing an argument from Alan Jacobs and Macartan Humphreys, I think being more explicitly Bayesian can help make more transparent the judgments we had to have already been making informally to draw inferences from process tracing evidence, and it can help us find and fix cognitive biases and errors in our work. One reason that Tasha's and Andrew's formal analysis did not change the results of Tasha's less formal 2013 article is that Tasha had already been careful with her informal Bayesian tests. I suspect that a lot of scholars would find that if they use Bayesian mathematics to assess their research, they will end up changing (and usually reducing) the strength or confidence with which they make claims based on process tracing evidence. Using Bayesian mathematics more explicitly can also help scholars and their critics clarify where exactly they disagree: over priors, over likelihood ratios for evidence, over what evidence to include, or over the reading of the evidence itself.
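
To make that last point concrete, here is a toy sketch (with invented numbers, not taken from any actual study) of how explicit quantities localize a disagreement:

# Toy illustration: explicit numbers show *where* two readers diverge.

def update(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio; return P(H|E)."""
    post_odds = (prior / (1 - prior)) * likelihood_ratio
    return post_odds / (1 + post_odds)

# Disagreement over priors: same reading of the evidence (LR = 4),
# different starting beliefs.
print(update(0.20, 4.0))  # skeptic:   P(H|E) = 0.50
print(update(0.60, 4.0))  # proponent: P(H|E) ~ 0.86

# Disagreement over the evidence itself: same prior, different
# likelihood ratios.
print(update(0.50, 4.0))  # reads E as strongly favoring H:  0.80
print(update(0.50, 1.2))  # reads E as nearly uninformative: ~0.55

In the first pair the two scholars agree entirely about the evidence and differ only in their priors; in the second they share a prior but read the evidence differently. Without explicit numbers, both disputes look like the same undifferentiated disagreement.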

That said, I agree with Tasha and Andrew that using Bayesian mathematics explicitly on all or even most pieces of evidence in the final write-up of our research would make publications unacceptably tedious for both authors and readers. So the real question is whether there is useful middle ground between full formal Bayesianism 3.0 and the less formal Bayesianism 2.0 of hoop, smoking gun, etc. tests. And here we must keep in mind that most process tracing practitioners have not yet fully implemented Bayesianism 2.0. So might there be a Bayesianism 2.1, 2.5, or 2.7? What would each look like, and what would be its pros and cons?

One approach would be for authors to use Bayesian mathematics extensively in their own thinking and analyses in the pre-publication phases of their research, and then to present their research in the final publication using the less formal terminology of smoking gun, hoop, etc. tests, and only for the most important pieces of evidence. More formal Bayesian mathematics (i.e., explicit priors and likelihood ratios) for these key pieces of evidence might be relegated to footnotes or appendices. This would allow researchers to use formal Bayesianism as a check on their own potential cognitive biases, and it would provide considerable transparency for readers without clogging explanatory narratives with excessive detail.

Alternatively, researchers could use formal Bayesianism sparingly in their own pre-publication analyses, applying it only to key pieces of evidence, and they could use hoop etc. tests infrequently or not at all in the published versions of their work.

I expect that in fact a range of practices will emerge, with some researchers using formal Bayesian analysis extensively on many pieces of evidence, both in their own thinking and in their publications, and others using it infrequently or not at all. I am interested in what Tasha and others have to say on this -- how are you using informal or formal Bayesianism both in thinking about your research and in writing it up? Do you think it would have been useful if you had been more formally Bayesian in earlier projects that you completed before learning about more formal approaches?


Tasha Fairfield
LSE
Posts: 17
Joined: Mon Sep 05, 2016 4:05 pm

Re: Formalizing our thinking? Bayesianism and alternatives

Posted: Thu Oct 20, 2016 5:02 pm

The comments below are all relevant to this thread, so I've moved them here. Do also have a look at Andy Bennett's previous post on Bayesianism and transparency.

Hillel Soifer:
I'm taking the liberty of pulling Ingo's quote from another thread here to ask this question more generally: should all qualitative work adopt a Bayesian logic of inference? Does all qualitative work already implicitly draw on that framework? I ask this question because it would be useful to explicitly draw out the implications of this and other logics of inference as the response to the DA-RT initiative develops.

Ingo Rohlfing:
Thanks for pulling me over here. I would answer "no" to both questions. First, it should not be used by all qualitative work because it would ignore exploratory research. Tasha Fairfield might disagree because of the claim that there is no inductive-deductive distinction in Bayesianism (I am inclined to disagree), but you cannot start a genuinely exploratory study in a Bayesian way. If you have a Y-centered/causes-of-effects design, you only know what you want to explain, but not how. Because of this, you cannot have priors, and without priors there is no Bayesianism. Second, with some good will, one might say of some qualitative studies that they are implicitly Bayesian, but many are not. This does not necessarily make them less valuable; they just answer different research questions than a Bayesian study would.

It is my impression that the debate about Bayesianism is still confined to a relatively small group of people. This makes it particularly interesting to read the opinions of people who are new to Bayesianism, are currently trying to apply it, or even believe it is not useful in qualitative research.

Marcus Kreuzer:
This is a very interesting question. I was wondering whether you could clarify its link to DA-RT. Are you suggesting that the Bayesian underpinnings of much qualitative research point to specific transparency practices? For example,
- Do Bayesian priors imply greater care and explicitness in literature reviews?
- Does Bayesian updating imply more explicit statements of where on the research cycle a particular finding is?
- Do conditional probabilities require more explicit defense of how alternative hypotheses were selected?
I am just curious in what specific ways Bayesian analysis could expand the current DA-RT guidelines and thereby make them more receptive to qualitative research.


Tasha Fairfield
LSE
Posts: 17
Joined: Mon Sep 05, 2016 4:05 pm

Re: Formalizing our thinking? Bayesianism and alternatives

Posted: Sun Nov 06, 2016 8:32 am

Andy Bennett wrote: I expect that a range of practices will emerge, with some researchers using formal Bayesian analysis extensively on many pieces of evidence, both in their own thinking and in their publications, and others using it infrequently or not at all. I am interested in what Tasha and others have to say on this -- how are you using informal or formal Bayesianism both in thinking about your research and in writing it up? Do you think it would have been useful if you had been more formally Bayesian in earlier projects that you completed before learning about more formal approaches?

Andy, I think you’ve put your finger on the critical things we need to think about as we move forward with trying to implement Bayesian process tracing more widely.

Let me start by saying that I think the Bayesian turn in process tracing, which Andy chronicled nicely in his post, is a real watershed in qualitative methods. In my view, it’s hard to overstate the importance of Bayesianism as a methodological foundation for inference in qualitative research. Andy correctly noted that A.E. Charman and I have reservations about how formally or explicitly Bayesian analysis should be used in qualitative research, but I have to say that over the past year I’ve become more positive about the value of incorporating Bayesian thinking more consciously into substantive process tracing. As Andy pointed out, Bayesian probability provides a very clear framework for figuring out why people might be drawing very different conclusions from the same evidence: different priors on the hypotheses, different assessments of likelihood ratios for the evidence, and different background information (which affects both our priors and our likelihoods). I’ve been impressed in my methods teaching this past year by the scope of variation in people’s background information and their interpretations of evidence, and in reading the qualitative literature, I’ve come to think that there is a lot of room for improving and systematizing inference.

Back to Andy’s key question of how to implement Bayesian process tracing in a practical way—while my paper with A.E. Charman focuses on applying Bayesianism as explicitly as possible, we’ve sketched out some options for more pragmatic, middle-ground approaches in the conclusion (V3.3 p. 30 http://tashafairfield.wixsite.com/home/research). We’re thinking in terms of a continuum, with the “explicit Bayesian process tracing” we’ve carried out on my Chilean tax reform case falling at one end of the spectrum. At the other end of the spectrum, we have what we call “implicit Bayesian process tracing,” where we would try to apply Bayesian insights heuristically in the context of case narratives. For example, we would aim to “mentally inhabit the world” of each hypothesis and ask how surprised we would be to encounter the evidence, without explicitly using the terminology of likelihood ratios or applying Bayesian mathematics to derive an inference. Several of Bennett and Checkel’s (2015:21) best practices also serve as an excellent starting point for implicit Bayesian reasoning (“Cast the net widely for alternative explanations,” “Be equally tough on the alternative explanations,” “Remember that conclusive process tracing is good, but not all good process tracing is conclusive”), and most of our guidelines for explicit Bayesian process tracing (Table 6, p.31) have analogs for implicit Bayesian process tracing.
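
To give a flavor of what moving along that continuum involves, here is my own toy rendering (with invented numbers, not an excerpt from our paper) of how the "how surprised would we be" judgment can be made explicit: the two surprise assessments form a likelihood ratio, which can be expressed on a logarithmic weight-of-evidence scale so that weights from conditionally independent pieces of evidence simply add:

import math

# Sketch with invented numbers: translating "how surprised would I be to
# see this evidence under each hypothesis?" into an explicit likelihood
# ratio, expressed as a logarithmic weight of evidence.

def weight_of_evidence_db(p_e_given_h1, p_e_given_h2):
    """Weight of evidence favoring H1 over H2, in decibels: 10*log10(LR)."""
    return 10 * math.log10(p_e_given_h1 / p_e_given_h2)

# "Mentally inhabiting the world" of each hypothesis: under H1 this
# interview statement would be unremarkable; under H2, quite surprising.
p_e_h1, p_e_h2 = 0.50, 0.05

print(weight_of_evidence_db(p_e_h1, p_e_h2))  # 10 dB in favor of H1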

To Andy’s last question, I think there’s probably wide variation among existing scholarship in how closely untrained intuitive reasoning approximates Bayesian logic. I'm optimistic that just learning the basic elements of Bayesian probability might help leverage intuition and improve inference, without applying Bayesianism explicitly. That said, learning Bayesianism isn’t trivial. Assessing likelihood ratios changes the way intuition is used, and my teaching experience to date suggests that a lot of practice is needed to master this key aspect of Bayesian thinking.


Timothy McKeown
U of North Carolina, Chapel Hill
Posts: 1
Joined: Sun Nov 27, 2016 2:31 pm

Re: Formalizing our thinking? Bayesianism and alternatives

Posted: Sun Nov 27, 2016 9:22 pm

After a long absence from writing about this topic, I find myself edging back into the water to comment on the Fairfield and Charman paper. It is heartening to see mutual engagement developing between case study researchers and Bayesian analysts in the development of process tracing. It's a hopeful sign that a more unified conception of social science research methods and results is developing, and that the deep divide between qualitative and quantitative researchers is beginning to fade away.

Their paper is notable for opening the door to a discussion of the dual notion of “process” in process-tracing studies, and for applying Bayesian ideas to both notions. An attempt to reconstruct an historical process typically involves a research process in which evidence is gathered and evaluated sequentially. As the paper notes (especially in Appendix I and the discussion of that analysis), the researcher’s assessment of evidence is conditioned partly on the hypothesis under consideration and partly on externally performed “calibration,” in which previous tests have established some idea of the general accuracy of the instrument being used. Thus, the evaluation of “testimony” in the case depends both on these general assessments (which are imported into the case via initial probability assessments) and on the specific hypothesis being considered (whose probable truth is subject to revision as additional information is gathered). Under competing hypotheses, the assessment of the truth of “testimony” could differ radically.

This insight is very helpful in making sense of what often appears to be endless bickering over the nature and quality of evidence presented by different process tracing researchers. In a domain such as the diplomatic history of the origins of the Cold War, substantive theoretical differences have long manifested themselves as extended arguments about what is and is not valid evidence about the process under consideration. The Fairfield-Charman analysis helps us to see how such arguments are in a sense inevitable, given that the disputants begin by entertaining quite different hypotheses about what is driving the process that they are considering.

One area in which the objective Bayesian approach favored by the authors will not meet with universal agreement is their advocacy of agnostic prior probabilities for the hypotheses under consideration, treating them all as equiprobable. They rightly note that some attempts to depart from this standard seem to involve arbitrary judgments that are at odds with the Bayesian framework they advocate, and they suggest that it is better to rely instead on the accumulation of a sufficient number of cases so that the choice of initial values ultimately becomes unimportant. While this is an appealing solution in the abstract, it will run into practical difficulties when applied to a domain where cases are relatively few, expensive, and time-consuming to study, and where prior views are well developed. Thus, for example, it is not at all surprising that someone with a strong interest in studying modern wars of major powers -- a relatively scarce phenomenon about which few are agnostic -- would devote seemingly inordinate amounts of attention to the question of how cases should be initially regarded (Rapport 2015). When cases are smaller and more plentiful, such concerns are unnecessary.
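
A toy calculation (with invented numbers) illustrates both halves of this point: when cases are plentiful the prior washes out, but with only a handful of cases it can still dominate the posterior:

# Toy illustration: three analysts with very different priors converge
# once enough cases accumulate. Numbers are invented.

def update(prior, likelihood_ratio):
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

for prior in (0.10, 0.50, 0.90):
    p = prior
    for _ in range(12):        # twelve cases, each modestly favoring H
        p = update(p, 2.0)     # likelihood ratio of 2 per case
    print(f"prior {prior:.2f} -> posterior after 12 cases: {p:.4f}")

# All three posteriors end up above 0.99. After only one or two costly
# cases, by contrast, the same three analysts would still disagree sharply.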

The argument in the paper could be made even stronger by drawing a clearer distinction among three different sequences: the sequence by which evidence is “considered” (i.e. included in the Bayesian equations), the sequence of evidence in the narrative of the case that the author presents to the reader, and the sequence by which the evidence originally is generated. Once evidence has been generated and is at hand, the commutativity of probability calculations guarantees that the order in which terms are written has no significance for the result. (“Sequence the evidence as convenient” (p. 12)). Written case narratives are typically fashioned with an eye to readability and there is a general tendency to favor chronologically organized accounts as the default solution for presenting the material. However, because interviews and archival research typically proceed in a one-at-a-time fashion, and because small or large surprises can induce revisions in one’s search strategy, a chronological narrative might have very little relation to the order in which the evidence was accumulated, and it will not shed light on how the search for evidence is dependent on previous findings. Those of us who write case studies need to think much more systematically about how we present information about our search processes. (The “presenting our evidence” topic thread on this site has some good ideas about this).
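
The first of these distinctions is easy to verify directly. In the toy sketch below (with invented likelihood ratios), shuffling the order in which the evidence enters the calculation leaves the posterior unchanged:

import random

# Tiny demonstration that the order in which evidence enters the Bayesian
# calculation does not affect the posterior. Likelihood ratios are invented.

likelihood_ratios = [3.0, 0.5, 8.0, 1.2, 0.7]  # five pieces of evidence

def posterior_from(prior, lrs):
    odds = prior / (1 - prior)
    for lr in lrs:     # multiplication is commutative, so any ordering
        odds *= lr     # of the evidence yields the same posterior odds
    return odds / (1 + odds)

shuffled = list(likelihood_ratios)
random.shuffle(shuffled)
print(posterior_from(0.5, likelihood_ratios))
print(posterior_from(0.5, shuffled))  # same result, up to floating-point rounding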

Overall, this is an excellent paper that has challenged my thinking and my practice. I look forward to more such contributions from the authors!

Reference
Aaron Rapport (2015) “Hard Thinking about Hard and Easy Cases in Security Studies,” Security Studies 24:3, 431-465, DOI: 10.1080/09636412.2015.1070615
