Andy Bennett, Georgetown University. I am a member of the QTD steering committee but I am representing only my own views here.
I am surprised that no one else has weighed in on Tasha's question, as I think this is the most important question that the new literature on Bayesian process tracing poses for actual process tracing practice. To give some background for those who are new to these recent methodological developments, Tim McKeown discussed informal "folk Bayesianism" -- let's call it Bayesian PT 1.0 -- as an underlying logic of process tracing in his January 1999 International Organization critique of King, Keohane, and Verba's book. I formalized this a bit in my 2008 piece in the Oxford Handbook of Political Methodology, drawing on Steve Van Evera's discussion of four iconic types of evidentiary tests: smoking gun, hoop, straw in the wind, and doubly decisive tests. Let's call this Bayesianism 2.0. I outlined the formal Bayesian logic that underlies these tests and applied it to well-known research examples. David Collier found the four kinds of tests useful and developed some teaching exercises using them (
https://papers.ssrn.com/sol3/papers.cfm ... id=1944646).
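For readers unfamiliar with the underlying logic, the four test types can be understood in terms of the two likelihoods P(E|H) and P(E|~H). The sketch below illustrates this with Bayes' rule; the specific numbers are my own invented illustrations and come from none of the cited works:

```python
# Illustrative sketch of the Bayesian logic behind Van Evera's four tests.
# All probabilities here are hypothetical, chosen only for illustration.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5  # agnostic prior for hypothesis H

# Hoop test: the evidence is nearly certain if H is true, and fairly likely
# even if H is false. Passing confirms only weakly; failing disconfirms strongly.
print(posterior(prior, 0.95, 0.60))  # passing the hoop: modest boost (~0.61)
print(posterior(prior, 0.05, 0.40))  # failing the hoop: sharp drop (~0.11)

# Smoking-gun test: the evidence is very unlikely unless H is true.
# Finding it strongly confirms H; not finding it barely hurts H.
print(posterior(prior, 0.30, 0.02))  # smoking gun found: strong boost (~0.94)
```

A doubly decisive test combines both properties (high P(E|H), very low P(E|~H)), while a straw-in-the-wind test has likelihoods close enough together that the evidence shifts beliefs only slightly in either direction.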
These four kinds of tests had intuitive appeal for many scholars and proved easy to teach without getting bogged down in formal Bayesian mathematics, and scholars began using them explicitly to discuss the probative value of evidence in their work. The most systematic example of Bayesianism 2.0 that I know of is Tasha's "Going Where the Money Is: Strategies for Taxing Economic Elites in Unequal Democracies," World Development, July 2013. In this article, Tasha includes a short appendix identifying which pieces of evidence qualify as smoking gun tests, hoop tests, etc.
Yet even as process tracing practitioners were starting to catch up to the latest methodological thinking and to use different kinds of evidentiary tests more self-consciously and explicitly, methodologists were working to formalize even more fully how Bayesian logic might be applied in process tracing. This latest literature -- Bayesianism 3.0 -- includes the two pieces Tasha cites in her post above, and I would also include here Alan Jacobs and Macartan Humphreys's piece "Mixing Methods: A Bayesian Approach," in the American Political Science Review 2015, though it addresses wider issues than process tracing.
Of these works, Tasha's paper with Andrew Charman carries out the most complete and formal Bayesian analysis of process tracing evidence in an actual piece of empirical research, reanalyzing Tasha's 2013 World Development article using more formal Bayesian mathematics. This is in my view an extremely important paper, as it provides a proof of principle that formal Bayesian process tracing can in fact be done. At the same time, in Tasha's and Andrew's own view (and Tasha can elaborate or correct me here), the paper shows that doing this is a ton of work, and in this particular instance the more formal Bayesian analysis did not substantially change the conclusions of the informal Bayesian analysis that Tasha had already done in the appendix to her 2013 article.
My own view is somewhat more optimistic about the value of being more fully, formally, and explicitly Bayesian in our process tracing. Borrowing an argument from Alan Jacobs and Macartan Humphreys, I think being more explicitly Bayesian can help make transparent the judgments we must already have been making informally to draw inferences from process tracing evidence, and it can help us find and fix cognitive biases and errors in our work. One reason that Tasha's and Andrew's formal analysis did not change the results of Tasha's less formal 2013 article is that Tasha had already been careful with her informal Bayesian tests. I suspect that many scholars would find that if they used Bayesian mathematics to assess their research, they would end up changing (and usually reducing) the strength or confidence with which they make claims based on process tracing evidence. Using Bayesian mathematics more explicitly can also help scholars and their critics clarify where exactly they disagree: over priors, over likelihood ratios for evidence, over what evidence to include, or over the reading of the evidence itself.
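To make concrete how explicitness can localize a disagreement, the odds form of Bayes' rule (posterior odds = prior odds times the likelihood ratio of each piece of evidence) lets two scholars see exactly which assessed likelihood ratio drives their differing conclusions. The numbers below are entirely hypothetical, invented only for illustration:

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratios.
# Hypothetical numbers, chosen only to show how a disagreement over one
# likelihood ratio propagates to the final degree of belief.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio P(E|H)/P(E|~H) of each piece of evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior_odds = 1.0                 # both sides agree on an agnostic prior, P(H) = 0.5
author_lrs = [4.0, 3.0, 1.5]     # author's assessed likelihood ratios for three pieces of evidence
critic_lrs = [4.0, 1.2, 1.5]     # critic disputes only the second piece of evidence

print(odds_to_prob(update_odds(prior_odds, author_lrs)))  # ~0.95
print(odds_to_prob(update_odds(prior_odds, critic_lrs)))  # ~0.88
```

Laying the numbers out this way shows the author and the critic that they disagree only over the probative value of the second piece of evidence, not over priors or the other evidence, which is exactly the kind of clarification I have in mind.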
That said, I agree with Tasha and Andrew that using Bayesian mathematics explicitly on all or even most pieces of evidence in the final write-up of our research would make publications unacceptably tedious for both authors and readers. So the real question is whether there is useful middle ground between full formal Bayesianism 3.0 and the less formal Bayesianism 2.0 of hoop, smoking gun, etc. tests. And here we must keep in mind that most process tracing practitioners have not yet fully implemented Bayesianism 2.0. So might there be a Bayesianism 2.1, 2.5, 2.7, etc.? What would each one look like, and what would be its pros and cons?
One approach would be for authors to use Bayesian mathematics extensively in their own thinking and analyses in the pre-publication phases of their research, and then to present the final publication version using the less formal terminology of smoking gun, hoop, etc. tests, applied only to the most important pieces of evidence. More formal Bayesian mathematics (i.e., explicit priors and likelihood ratios) for these key pieces of evidence might be relegated to footnotes or appendices. This would allow researchers to use formal Bayesianism as a check on their own potential cognitive biases, and it would provide considerable transparency for readers without clogging explanatory narratives with excessive detail.
Alternatively, researchers could use formal Bayesianism sparingly in their own pre-publication analyses, applying it only to key pieces of evidence, and they could use hoop etc. tests infrequently or not at all in the published versions of their work.
I expect that in fact a range of practices will emerge, with some researchers using formal Bayesian analysis extensively on many pieces of evidence, both in their own thinking and in their publications, and others using it infrequently or not at all. I am interested in what Tasha and others have to say on this -- how are you using informal or formal Bayesianism both in thinking about your research and in writing it up? Do you think it would have been useful if you had been more formally Bayesian in earlier projects that you completed before learning about more formal approaches?