III.1. Comparative methods and process tracing

Tim Buthe
Duke University
Posts: 55
Joined: Fri Feb 26, 2016 11:39 pm

Comments on Draft Report of Working Group III.1

Posted: Tue Aug 29, 2017 10:19 am

Please use this thread to share feedback on the draft report.

Ryan Saylor
University of Tulsa
Posts: 2
Joined: Wed Sep 07, 2016 1:15 pm

Re: Comments on Draft Report of Working Group III.1

Posted: Tue Sep 26, 2017 6:30 pm

I think this report does an excellent job conveying how the pursuit of transparency intersects with key tenets of case study research (section II) as practiced by many of us in the QMMR section. Section III also helps to present some horizons of case study research in relation to the transparency movement. My only suggestion is to consider elaborating the caveats in the introductory section, particularly for section members involved in interpretive or ethnographic work, perhaps by noting that some of the larger epistemological debates are broached in the “Epistemological and Ontological Priors” draft report of the QTD Working Group I.1. And, last, thank you to the authors of this and all the other working group reports.

Tim Buthe
Duke University
Posts: 55
Joined: Fri Feb 26, 2016 11:39 pm

Re: Comments on Draft Report of Working Group III.1

Posted: Thu Sep 28, 2017 1:46 pm

Thank you, Ryan, for your helpful input. Cross-referencing across the different working groups' reports can surely help convey a sense of these deliberations as a whole, beyond their importantly differentiated parts.

Thank you also for signaling your agreement and what you like about the report. Even just short indications of support (from those who agree with a given report) are very helpful - and surely appreciated by the working group authors.

Frank Schimmelfennig
ETH Zurich
Posts: 1
Joined: Wed Oct 19, 2016 2:33 pm

Re: Comments on Draft Report of Working Group III.1

Posted: Mon Nov 06, 2017 6:37 am

Tim Buthe wrote: Please use this thread to share feedback on the draft report.

The criteria that the draft report sets up are both important and reasonable. They refer to what we ought to demand, and can realistically demand, of pieces of PT and comparative/case-study research. At the same time, I found that many of the recommendations apply generally to any kind of research and not specifically to the two types of research your report covers. Recommendations 1, 3, 4, 5 should apply to quantitative as well as qualitative research and are unlikely to be controversial for any kind of (causal) inference-oriented research. I therefore wondered why these recommendations need to be repeated here. I expected the report to focus on the specific challenges of PT and (comparative) case studies.

Transparent case selection is such a specific criterion. Recommendation 2 could make clearer that this applies to comparative analysis as well as to single, within-case analysis (be it congruence or process-tracing analysis). (It might be a misunderstanding on my part, but I read the recommendation as too focused on comparison.) I further thought that the first point you make in Section III should be moved to the main recommendations. Making explicit the prior information on the case that informed the case selection is extremely useful. It ties in with Recommendation 6 on background knowledge.

For some of the other recommendations, I missed a discussion of specific qualitative challenges. To give a few examples: for Recommendation 3, qualitative comparison requires a specification of causal relationships in terms of (configurations of) necessary and/or sufficient conditions, whereas PT requires a particularly careful elaboration of the causal mechanism. For Recommendation 4, alternative explanations in PT are alternative mechanisms producing the causal link between the condition and the outcome, whereas congruence and comparative analysis focus on alternative conditions of the outcome. Recommendation 5 would benefit from taking up some of the discussion of Bayesian inference in PT that follows later, whereas comparative analysis rests on convincing matching of conditions.
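To make the contrast concrete - a stylized illustration of my own, not drawn from the report - consider the Boolean notation familiar from Ragin-style comparative analysis, where a configurational claim might be written as

A*B + C -> Y

meaning that the conjunction of conditions A and B, or condition C on its own, is sufficient for outcome Y. Transparent comparative analysis would state which conditions are claimed to be necessary or sufficient, and in which configurations; transparent PT would instead spell out the mechanism that links a given condition to Y.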

Recommendation 7 is a reasonable compromise, and the discussion of pros and cons of explicit Bayesian analysis in PT (and the subsequent recommendations) is excellent.

Hillel Soifer
Temple University
Posts: 9
Joined: Wed Apr 13, 2016 9:12 am

Re: Comments on Draft Report of Working Group III.1

Posted: Tue Nov 07, 2017 11:13 am

Thanks, Frank, for giving the draft report a careful read and for your comments, which we will take under advisement as we revise. Our group has been struggling a bit with the risk of making recommendations that are overly anodyne, especially with regard to cross-case inferences, and your comments highlight this quite clearly.

Joachim Blatter

Re: Comments on Draft Report of Working Group III.1

Posted: Wed Nov 15, 2017 7:56 am

Dear Colleagues,
I would like to thank the working group for their efforts. Tim and Alan have asked me to comment on the text, a wish I fulfill at the very last moment and without the time for a more detailed and more constructive contribution.
I find many elements in the text which I can wholeheartedly endorse, and I am always happy to get hints about best practices which I can use in my writing and teaching. Nevertheless, I have some major concerns with the current text:
First, I agree with Frank Schimmelfennig that an explicit and intensive reflection on case selection is certainly one of the most central aspects when it comes to transparency in case study research. It should therefore be assigned a more central place. But I am not sure whether the current text should be the last word on this.
Second, my impression after a first reading of the text has been that the "aim to largely sidestep methodological debates..." (introduction) was forgotten in the writing of section III. In the second part of this section, a specific methodological approach to process tracing - an important one, albeit not the only one - is promoted under the heading of transparency. Given that many of the practices mentioned in section II are not very specific to case study research, the last part of the paper carries a lot of weight within the document.
Overall, I would not feel very comfortable if the current text were established as an authoritative point of reference for editors seeking to secure transparency in case study research.

Kind regards,
Joachim

Joachim Blatter, University of Lucerne (CH)

Derek Beach

Re: Comments on Draft Report of Working Group III.1

Posted: Tue Nov 21, 2017 1:46 pm

There are many points in this report that are well-established recommendations that could be taken as a minimum standard for 'good' process-tracing (PT). For example, few would disagree with the wisdom of points 1, 2, 3, 6 and 7. Some topics could be explicated further, for example concept formation/measurement, given the centrality this has in case-based research (see classics like Sartori, along with newer work by Goertz and others). And in general, in the discussion of the state of the art, the report should try to be more reflective of broader debates within the field, noting areas of agreement and disagreement much more clearly.

One way this could be done would be to put forward a line of demarcation between what PT actually is and what does not qualify as PT. While this should be relatively broad – given the nature of the QTD – a broad and minimal definition could be that PT involves tracing mechanisms (however they are defined). This would mean that PT is not just assessing temporal co-variation of variables, as one would do following King, Keohane and Verba (1994). That would be a longitudinal case study, not PT.

I have a broader concern with this report because of the emphasis on Bayesian analysis (and training), whereas issues like what we are actually tracing in PT are left ambiguous in the report, at best. PT is more than just the use of smoking guns or hoop tests. Basically, unless we are told what evidence is 'evidence of', it is not evidence. A positive result on a smoking gun 'test' is only evidence if we are told what the hypothesized theory of the crime is. In effect, I have no idea what 'good' PT looks like after reading this report, because I only learn that I should be selecting appropriate cases, spending time thinking about concepts, and evaluating evidence in Bayesian terms.

Yet how can we claim a study is tracing a process when we are never told anything about what the process is? In this report, we are left in the dark about what we are tracing. The report talks about evidence in relation to ‘explanations’ (e.g. p. 8), but what are we trying to explain? Bayesian evaluation of ‘e’ is very difficult to do when we are not even told what ‘h’ actually is. If the ‘h’ (hypothesis) we are evaluating is merely the occurrence of the outcome in a case by itself, more appropriate evidence would be evidence of difference-making produced by an experiment. The only way that within-case evidence can enable causal inferences is when we are able to provide evidence of a causal process (mechanism) linking a cause with an outcome (Illari, 2011; Russo and Williamson, 2007, 2011).
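To spell out the Bayesian bookkeeping behind this point (my own gloss, not notation taken from the report): updating our confidence in a hypothesis h on a piece of evidence e requires

P(h|e) = P(e|h)P(h) / [P(e|h)P(h) + P(e|~h)P(~h)]

and neither likelihood, P(e|h) nor P(e|~h), can be assigned unless h specifies the process that would (or would not) leave e behind. The machinery is empty until we are told what h actually claims.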

If we are working with a mechanistic explanation, some suggestions are required for how we can work in a more transparent fashion with linking evidence to underlying causal mechanisms. Irrespective of whether the process is going to be unpacked in considerable detail (parts of a mechanism) or black-boxed to some extent, the theoretical process and the causal logics underlying it need to be made explicit before we can claim evidence is evidence. Simply put, how can we claim that the evidence is evidence of a process when we are not told about the process it is evidence of?

In the large literature on causal mechanisms in the philosophy of science within medicine and biology, there is extensive guidance on how mechanisms can be formulated in abstract terms in a transparent fashion (i.e. terms that are not field-specific but can also travel to the social sciences). See for example Craver and Darden (2013), Machamer et al. (2000) and Machamer (2004). Within the social sciences, there is also a developing literature on the nature of causal mechanisms that is inspired by developments in the natural sciences on how to model mechanisms. These developments should be discussed in the report, making it less lopsided; at present it focuses only on the empirical side without discussing at all what we are actually tracing.

Therefore, although the report claims to be about PT, at no point is any effort made to delve into questions of what we are really tracing (other than a mention of the term ‘causal mechanisms’ on p. 3).

Relating to this over-emphasis on Bayesian analysis, I have serious issues with the terms PT 2.0 and 3.0, which clearly imply that only explicitly Bayesian analysis is ‘modern’ PT, and that more technical applications are better than more intuitive/qualitative applications (3.0 is better than 2.0 ...). I would strongly suggest that this part of the report be either removed or completely revised so that it reflects the state of the debates.

Additionally, the report should note that there is an established ‘alternative’ to using an explicit Bayesian approach: the historical method of being transparent about issues like source criticism and placing empirical material into its proper context before we try to interpret what it means. Of course, best practices in historical methods do not conflict with Bayesian logic. But the report should not overemphasize the need for case study scholars to learn Bayesian logic, given that training in good historical methods (in particular the continental tradition) would enable them to produce just as robust evaluations of evidence as someone using an explicit Bayesian approach – and they would probably be better on issues relating to source criticism (a key comparative advantage of many historians, who really get to know their sources and the contexts of their production).

Taken as a whole, the report focuses too much on the issue of Bayesian analysis, and too little on other key elements of PT, like what we are actually tracing (i.e. how can we work with theories of mechanisms that make the causal logics underlying our mechanistic explanations transparent?).


More particular comments:

1) While the studies cited are all notable scholarship, I am hesitant about the selection of exemplars. For example, the Owen study cited does not actually 'trace' the two-pronged theoretical process that he hypothesizes exists empirically. Ideally, he would have described what observable manifestations/evidence would be left by the operation of each part of the two-pronged theorized process, and then assessed those empirically. Instead, he develops observables that are quite independent of the actual processes he claimed he was testing. In my own teaching and writing, I have stopped using the Owen article as an example of PT because there is no direct link between his empirics and the theory he claims to be testing.

I would only include much more recent work (e.g. last five years) as examples because of the recent date of many important methodological developments in the field of case studies, and PT in particular. For example, there are two relatively recent books focused exclusively on PT (2013 and 2014), along with several central books on case studies (2012 and on).

I would also suggest toning down the language in the descriptions of the exemplars.

2) Thought might be given to the use of the term 'variable' in the report. Many case-based scholars would suggest that these are better termed 'conditions', given the asymmetric nature of causation assumed by many scholars (e.g. Ragin and others).

3) I have problems with the language of alternative explanations throughout the report. As Rohlfing (2014) has clearly illustrated in a nice article in SMR, we have to be more specific about what we are talking about. Alternative explanations can be about alternative explanations of the outcome, but also about a process (or parts thereof). They are also only relevant when we are dealing with explanations that are mutually exclusive (empirically and theoretically) and individually sufficient. In Bayesian terms, alternative explanations in case studies would typically be more analogous to alternative explanations for finding the evidence - not for the outcome itself. For example, in Tannenwald's 'taboo talk' test, we are talking about alternative explanations for finding taboo talk - not necessarily alternative explanations for the outcome (which are also present in the case). Uniqueness should be understood merely as the rate of false positives, where ~h covers any plausible alternative explanation for finding a positive result of the test (i.e. we found e), except in the situation described by Rohlfing of mutual exclusiveness at both the theoretical and empirical levels.
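In the same notation (again my own gloss, not the report's): a test is 'unique' to the extent that the false-positive rate P(e|~h) is low, and the probative value of finding e is captured by the likelihood ratio

P(e|h) / P(e|~h)

where ~h pools all plausible alternative explanations for finding e. A ratio far above 1 makes finding e strong evidence for h; a ratio near 1 makes the test uninformative, however confidently e itself has been observed.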

References:
Craver, Carl F., and Lindley Darden. 2013. In Search of Mechanisms. Chicago: University of Chicago Press.

Illari, Phyllis McKay. 2011. Mechanistic Evidence: Disambiguating the Russo-Williamson Thesis. International Studies in the Philosophy of Science 25 (2): 139–57.

Machamer, Peter, Lindley Darden, and Carl F. Craver. 2000. Thinking about Mechanisms. Philosophy of Science 67 (1): 1–25.

Machamer, Peter. 2004. Activities and Causation: The Metaphysics and Epistemology of Mechanisms. International Studies in the Philosophy of Science 18 (1): 27–39.

Rohlfing, Ingo. 2014. Comparative hypothesis testing via process tracing. Sociological Methods & Research 43 (4): 606–642.

Russo, Federica, and Jon Williamson. 2007. Interpreting Causality in the Health Sciences. International Studies in the Philosophy of Science 21 (2): 157–70.

Russo, Federica, and Jon Williamson. 2011. Generic versus single-case causality: the case of autopsy. European Journal for Philosophy of Science 1 (1): 47–69.

Ingo Rohlfing
Cologne Center for Comparative Politics, Universität zu Köln
Posts: 23
Joined: Tue May 24, 2016 5:45 am

Re: Comments on Draft Report of Working Group III.1

Posted: Wed Nov 29, 2017 3:37 pm

Ryan Saylor wrote: [...] My only suggestion is to consider elaborating the caveats in the introductory section, particularly for section members involved in interpretive or ethnographic work, perhaps by noting that some of the larger epistemological debates are broached in the “Epistemological and Ontological Priors” draft report of the QTD Working Group I.1.


There are lots of overlaps and interdependencies between the groups that a reader might not always be aware of, so this is very useful advice. Thanks.

Ingo Rohlfing
Cologne Center for Comparative Politics, Universität zu Köln
Posts: 23
Joined: Tue May 24, 2016 5:45 am

Re: Comments on Draft Report of Working Group III.1

Posted: Wed Nov 29, 2017 3:41 pm

We are not sure whether this can turn into an "authoritative" source for editors, but it is also not impossible. Because of the diversity of views and methods in the fields of comparative case studies and process tracing, the group found it difficult to cover all the ground. I take it from your comments, and also Frank Schimmelfennig's, that you encourage us to find a better balance between "traditional" issues such as case selection and more recent developments such as Bayesian process tracing.

Ingo Rohlfing
Cologne Center for Comparative Politics, Universität zu Köln
Posts: 23
Joined: Tue May 24, 2016 5:45 am

Re: Comments on Draft Report of Working Group III.1

Posted: Wed Nov 29, 2017 3:47 pm

Derek Beach wrote: There are many points in this report that are well-established recommendations that could be taken as a minimum standard for 'good' process-tracing (PT). [...] And in general, in the discussion of the state of the art, the report should try to be more reflective of broader debates within the field, noting areas of agreement and disagreement much more clearly.


Thanks for the close reading and detailed comments. The report is not supposed to be a "how-to" guide for process tracing in general, but aims to focus on issues of transparency in empirical research. The two issues cannot always be clearly separated, which might have created the impression that the draft report focuses too much on some dimensions of case studies and too little on other issues. When writing the report, we found it difficult to strike a good balance, and I read all the comments in this forum as saying that this is not yet the ideal balance. It is of course a central and important question what one is tracing in process tracing, but this would take us deep into philosophical territory. The report might need to highlight this more clearly, but I think we can largely separate the question of what a mechanism, congruence etc. are from the question of how to achieve transparency.

Derek Beach
University of Aarhus
Posts: 8
Joined: Mon Apr 18, 2016 2:23 am

Re: Comments on Draft Report of Working Group III.1

Posted: Mon Dec 04, 2017 9:10 am

ingorohlfing wrote: The report might need to highlight this more clearly, but I think we can largely separate the question of what a mechanism, congruence etc. are from the question of how to achieve transparency.


While it is obvious the report should not go into the philosophical debates about the nature of mechanisms, I do believe that we need to be transparent about what process we are claiming our evidence is evidence of.

In my opinion, we need to be transparent about both: 1) the mechanism/process we are theorizing, and 2) the evidence of each of the working parts of the process. Not being transparent about the first results in the widespread practice of putting forward empirical material and postulating that it is evidence, while leaving us in the dark about what it is evidence of. How can I, as a critical reader, evaluate the probative value of evidence unless I am told what it is postulated to be evidence of?

The analytical result of this lack of transparency is widely seen in many case studies, which put forward narratives (sequences of events), or pieces of empirical material that are claimed to be CPOs (causal-process observations), but where we are never told about the causal process they are 'observations of'. We can do better.

In our 2016 book 'Causal Case Studies' (University of Michigan Press), we put forward a step-wise, systematic procedure for evaluating in a transparent fashion what evidence can tell us about our underlying causal hypotheses.

The framework draws on applications of Bayesian analysis in law and intelligence analysis, and we suggest putting forward an argument roadmap that clearly links our causal hypotheses with the underlying evidence.

We distinguish between what evidence in theory/principle can tell us about our hypotheses, in the form of propositions about observable implications, and what particular sources tell us about the existence of those observable implications. We include a reconstruction of the oft-discussed Tannenwald (1999) article that shows the utility of our procedure: it makes clear that in the Korean War case she has very strong evidence for a 'norms matter' hypothesis, but that it is NOT the 'taboo talk' observable that she claims is her primary 'test' of the hypothesis (on p. 440).
