III.1. Comparative methods and process tracing

Forum rules

We encourage contributors to the Discussion Board to identify themselves publicly by registering and logging in prior to posting. However, if you prefer, you may post anonymously (i.e. without having your post attributed to you) by posting without logging in. Anonymous posts will display only after a delay to allow for administrator review. Contributors agree to the QTD Terms of Use.


Tasha Fairfield
LSE
Posts: 17
Joined: Mon Sep 05, 2016 4:05 pm

Presenting our evidence

Posted Tue Sep 06, 2016 12:23 pm

How should scholars present the evidentiary basis of their analysis? Is it best to discuss the evidence holistically as part of the published analysis? Should we use active citations? When might compiling a list of specific pieces of evidence be helpful?


Marcus Kreuzer
Villanova University
Posts: 26
Joined: Sat Apr 16, 2016 9:48 am

Re: Presenting our evidence

Posted Wed Sep 21, 2016 11:40 am

I would like to touch on one specific and little-discussed type of evidence: unreported findings.

The APSA Guidelines list, under data access, the importance of making available data that go unreported in the final article. Having easy access to such unreported data is important to counter problems like p-hacking or publication bias. But what precisely is the qualitative equivalent of unreported quantitative data (e.g. null findings, various model specifications)? And what are the benefits of providing access to unreported qualitative evidence?

Sean Yom's 2015 CPS article provides a possible answer to these questions. He shows how actual scholarship follows a process of inductive iteration that is misleadingly presented in its published form as one of deductive hypothesis testing. This misleading presentation consequently backgrounds much of the information that went into theorizing and constructing the actual tests. He advocates sharing more of this pre-testing stage of social inquiry and thus, in effect, calls for more data access to unreported findings (in this case not test-related findings but theorizing-related ones).

Ultimately, Yom argues that our publishing conventions make it difficult to provide access to unreported findings. Do many of you experience this same problem and wish you could provide more access to unreported findings?

Tasha Fairfield wrote:How should scholars present the evidentiary basis of their analysis? Is it best to discuss the evidence holistically as part of the published analysis? Should we use active citations? When might compiling a list of specific pieces of evidence be helpful?


Hillel Soifer
Temple University
Posts: 8
Joined: Wed Apr 13, 2016 9:12 am

Re: Presenting our evidence

Posted Sat Sep 24, 2016 8:12 am

Thanks, Marcus, for your post. I suppose that in order to answer your question, one would have to pin down exactly what qualifies as an unreported finding in qualitative research. In so doing, I think one can begin by noting that since a finding is not the same as data, the full body of archival documents or interview notes is not in play here: instead, a finding is the output of some analysis conducted by the researcher.

We might often encounter null findings in qualitative research: this would be the case where I look in a specific slice of my data for evidence to evaluate one implication of a given hypothesis and don't find anything convincing one way or another. It seems easy enough to report this, and it seems desirable to do so, since it helps show that the extent of confirmation of an overall hypothesis may be less than perfect even if the overall preponderance of evidence supports that hypothesis, and it helps to show the limits of what the data we have amassed can say about the hypotheses we are evaluating.

As for model specifications, I suppose that the analogy here would be to the various coding rules used to evaluate whether evidence supports a claim or not. If we take a narrow view of concept X, we find little evidence for the effect of X on Y, but with a broader view of X, the link between X and Y becomes clear. Take, for example, your own work on the origins of electoral systems and the strategic incentives of politicians in early 20th-century Europe. If, in looking for evidence of those incentives, we consider as evidence only direct quotes from politicians stating that certain strategic elements were at the forefront of their thinking, perhaps we would conclude that those incentives were not at work. But a less stringent set of standards for evaluating whether strategic thinking shaped decisions might lead us to a different 'coding' of this case. I think it might be very valuable to discuss these issues in our research.

Space constraints, of course, come into play when we consider whether and how to present evidence of null findings and the effects of model specification, as they do when we consider how to present our evidence more generally. But I would suggest that explicit discussion of both of these issues, in a transparency appendix and/or in footnotes if not in the text itself, would be valuable.

Perhaps there are other ways to think about what unreported findings are in qualitative research; I would welcome other thoughts on these issues as this discussion proceeds.


Marcus Kreuzer
Villanova University
Posts: 26
Joined: Sat Apr 16, 2016 9:48 am

Re: Presenting our evidence

Posted Mon Sep 26, 2016 12:59 pm

Thanks, Hillel. I would like to underscore two important implications of your post.
First, your two suggestions for unreported findings are spot on. Many of the works that I enjoy reading are enjoyable in part because they report more null findings and model specifications. Being taken along for the various analytical struggles makes a work more engaging and also inspires more confidence than one that is all shiny, perfect, and invariably non-transparent. Books are a bit like people: knowing a bit about their struggles and darker sides makes them far more interesting and easier to trust.

Second, your opening point that unreported findings, and data access more generally, have to be defined narrowly is incredibly important. The DA-RT recommendations strike me as resting on a false analogy between the information contained in a quantitative scholar's data file and the information contained in the primary material or field notes of a qualitative scholar. The data file usually contains only theory-laden evidence, that is, information predicted by a particular theory and hence purpose-collected for testing. By contrast, primary material contains such theory-laden information plus a lot of additional information. This additional information encompasses predictions from other theories, as-yet-untheorized predictions, and lots and lots of noise. I think qualitative scholars are aware of this, and this might explain why they are a bit dumbfounded by DA-RT's data access demands for qualitative scholars.


Guest

Re: Presenting our evidence

Posted Sat Oct 01, 2016 1:00 pm

This intervention might be tangential, but I am reading the question posed here about citing and presenting data narrowly, as being about presenting interview data, and my overall reaction is that it is difficult (and probably unrealistic) to expect all interview data to be presented. Often material from an interview leads the researcher to collect numerical data (often data that was not part of the original research design) that can then be presented in the text. In that case the interview was an essential part of the fieldwork, but it is not the source of essential information per se. Similarly, material from one interview is often what leads the researcher to contact someone else, and it is that subsequent interview with key nuggets of information that is cited; in that case the first interview is background -- invaluable background, but background that would be awkward to cite. Most generally, much of the information we obtain in interviews, particularly open-ended discussions, is essentially grist, information that helps us make sense of what we observe. It's important to give the reader a clear sense of what sorts of actors were interviewed, but reporting each interview (not to mention the content of each interview, another issue altogether) seems unnecessary and perhaps counter-productive.


Tasha Fairfield
LSE
Posts: 17
Joined: Mon Sep 05, 2016 4:05 pm

Re: Presenting our evidence

Posted Mon Oct 03, 2016 5:53 am

Thank you for these thoughts. I think your comments are right on, and they go to the core issue of what expectations are both reasonable for authors and also helpful for readers. I would agree that we gather so much "background information" (in Bayesian terms) that informs our inferences that it is simply not feasible to list it all out, nor do I think doing so would provide readers with much insight beyond what we convey in the text of our analysis. I handled the issue of interviews that I did not explicitly cite in my book by listing the number and type of informants I spoke with for each of my three countries in my reference list (e.g. x business people, y opposition politicians, z government officials). I thought that seemed like a reasonable compromise--it didn't impose a lot of undue work, and it gives readers a sense of the depth of my research and the larger body of evidence I am drawing on.


Guest

Re: Presenting our evidence

Posted Mon Oct 03, 2016 11:43 am

Thank you Tasha for the feedback. This exchange brings up another pragmatic issue, of cost and benefit in trying to achieve "transparency."

I'll speak from my personal experience as a scholar in comparative politics currently finishing a book manuscript based on extensive fieldwork in a set of countries over an extended period of time. Deciding on how to use my interviews and deciding how to present them, no doubt worthwhile things to think about, has been exceptionally taxing. To illuminate, let me explain what I'm doing, as I think it's worth being transparent about transparency, i.e. what we're doing, why we're doing it, and what's to be gained.

The first step is to go through years of notes to make a full list of all the interviews conducted. Then, for each interview I note if it is cited or referred to explicitly in the text. If I refer to an interview with such and such a figure in the text (always confidential, so by what they do and not who they are), I include the interview in the list that will go at the back of the book. If I do not cite or refer to it explicitly, i.e. it's what I described above as "grist," then it's not going in the list of interviews. I'm thinking of these in the same way as we think of the tens or hundreds of articles and books we’ve read but don't cite (i.e. in the same way the reference list includes works cited and is not a full list of everything I've read on the topic, the interview list features interviews cited but is not a full list of everyone I talked to). But I don't ignore those interviews either. Rather, I am producing another table that lists all of the types of actors (again, not names but by location/profession/function) interviewed in each country. Any interview on the list at the back of the book is reflected in the table, though not all the interviews in the table are listed in the back of the book.

Is all this sufficient for transparency? I don't know, but I know that it's been a lot of work and time-consuming, and to be frank I'm not sure anyone will find my book any more or less credible on account of these efforts. I think the people who buy the argument and research would do so anyway and those who do not still will not, so I'm not convinced I'm accomplishing much for the efforts. (I should also note that one reason why compiling this list and making these decisions has been so time consuming is because the norms for "transparency" have changed substantially since I started the project, and as a result I feel like I'm trying to retrospectively apply today's norms to yesterday's research.)


Guest

Re: Presenting our evidence

Posted Mon Oct 03, 2016 3:48 pm

I am generally hesitant to lay down firm rules for all works at all times but it seems to me that presenting at least this kind of summary, which Tasha lays out here, is a reasonable guideline. It could even be done in an article as it doesn't take much space and does give a useful overview of the empirical universe the author is working with. Something similar could be done in a couple of sentences to describe the broad sets of documents consulted either in archives or online, e.g., "I consulted all the party programs available online on national party websites as well as historical programs in the national archives." Speaking as someone who does a lot of reviewing, I think this is also often helpful for the author because it's a pretty clear answer to the question of what original empirical material the author is bringing to the debate.

On the original question of when it might be helpful to compile a list of specific pieces of evidence, this is something I often do for myself, especially when I am trying to put together a chronology for process tracing or some other reason. I create big timelines out of all of my kinds of data, organized by date and with sources and direct quotes when relevant. It might have a newspaper article describing some development in December of 2014, then a link to a congressional debate that took place in January 2015, then an interview describing some social movement strategizing in late January in response, etc. The question for me is when, if ever, should that be shared. I wouldn't put it into an article or even a book because it's too long and detailed. Many of the things on it wouldn't make it into a narrative even in a book-length piece. I suppose it could be provided as an appendix, but my gut feeling is that it's more the sort of thing I could provide if there were someone who wanted more information about the unfolding.

Sometimes these discussions of how much information to provide seem to assume that providing more and more information is the most useful/transparent, but I can easily imagine situations where providing more information (all of the interviews; all of the documents consulted, to show you didn't cherry-pick) actually hides more than it illuminates.


Veronica Herrera
University of Connecticut
Posts: 13
Joined: Wed Aug 31, 2016 8:07 am

Re: Presenting our evidence (and the absence of evidence or data)

Posted Thu Oct 06, 2016 10:16 am

Regarding this issue of how to present information about the interview process in a more transparent way (and building on Tasha's comments on this), I have found the book Interview Research in Political Science, edited by Layna Mosley, to be very helpful, e.g. chapter 4, "How to Report Interview Data," by Bleich and Pekkanen. I read it recently, before I began interviewing for my second book project, and was able to apply its approach of carefully tracking information about the stratified sample of respondents, such as how many respondents came from each subtype (e.g. NGOs, industry, local government, etc.), the average length of interviews, whether a saturation point was reached, and several other categories. An important point brought up in this book is that we should report not only whom we got access to but also whom we could not get access to. They recommend compiling such a table for all the respondents we invited to interview, and then reporting which groups were underrepresented and the extent to which that poses a problem (or doesn't) for the claims we are making. As a reviewer, I think these tables are very helpful and should become standard wherever possible, to be adapted for the needs of a particular project of course. (As another poster suggests, something similar could be done with archives or documents consulted, that is, the empirical universe of data consulted)....

Also, this last point about reporting whom we could not access relates, I think, to Marcus's earlier point about unreported findings, although this perhaps is an example of unreported data, which is different. In any event, a more general idea coming out of many of these threads is how to report what is not there: unreported data, missing data in archives, key interviewees who refused to be interviewed; essentially, how do we handle reporting data on the presence of zero in qualitative research?...


Tasha Fairfield
LSE
Posts: 17
Joined: Mon Sep 05, 2016 4:05 pm

Re: Presenting our evidence (and the absence of evidence or data)

Posted Sun Oct 09, 2016 6:33 am

Thanks for your post, Veronica! These are all interesting points. Regarding the Bleich and Pekkanen chapter, I think it's important to point out that their perspective comes from survey research, where the norm is random sampling from a population, and things like response rates, non-response bias, and representativeness of the sample matter quite a lot for orthodox large-N statistical inference. However, these concerns by and large are not relevant to process tracing—we're usually not aiming to infer the 'average view' or the mean value of some trait in a population (e.g. local NGOs, governing party MPs); instead, we are looking for key informants who can provide us with diagnostic evidence to assess alternative hypotheses, following Bayesian logic. Of course we want to dig hard for evidence and try to reach everyone who is in a position to share relevant information, but the strength of our inference is based not on how many people we interviewed, of which type, or how long the interviews lasted, but on whether the information we uncovered weighs decisively in favor of a working hypothesis against a rival.
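To put that schematically (a rough sketch of the Bayesian logic only, with H_W and H_R simply standing in for a working hypothesis and a rival, E for the evidence we actually obtained, and I for our background information):

\[
\frac{P(H_W \mid E, I)}{P(H_R \mid E, I)} \;=\; \frac{P(H_W \mid I)}{P(H_R \mid I)} \times \frac{P(E \mid H_W, I)}{P(E \mid H_R, I)}
\]

The likelihood ratio on the right, not the number or length of the interviews behind E, is what determines how strongly the evidence shifts the odds between the hypotheses.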

I would also question the need to systematically list informants whom we could not access. I can certainly imagine that in some circumstances it would be salient to point out in the text that "Minister Y could not be reached to comment"—that in itself might serve as salient evidence, depending on the research question and the context. But generally speaking, from an analytical perspective, what do we learn by seeing a list of informants who were not actually interviewed (setting aside whether human subjects concerns permit drawing up such a list)? We have no way of knowing what information might have come out of those hypothetical interviews or how that information might have affected our conclusions, and from a Bayesian perspective, it's irrelevant—we make the best inferences we can based on the data we actually have. This is another contrast with orthodox frequentist statistics, where the data that we might potentially have collected but did not plays an important role in inference, whereas in Bayesianism, inference is based entirely on the actual data we did collect.

So this brings me back to what I think is a central question, which our second Guest raised as well: What kinds of information can we reasonably give our readers about our research process that will help them better understand and evaluate our analysis, as opposed to providing detail for the sake of providing detail?


Marcus Kreuzer
Villanova University
Posts: 26
Joined: Sat Apr 16, 2016 9:48 am

Re: Presenting our evidence (and the absence of evidence or data)

Posted Mon Oct 10, 2016 11:52 am

Interesting takes here on the value of whether or not to report interviews, or textual sources for that matter, that were not accessible.
I think Tasha is correct to point out that process tracing and Bayesian analysis substitute the probative value of evidence (i.e. is it smoking gun, hearsay, circumstantial, etc.) for its frequency. And according to this logic, there is no value in reporting inaccessible, potential evidence.
But what about the value of knowing what information was not accessible for the updating process? Isn't there value, for example, for Cold War scholars in pointing out whether their research did or did not have access to classified information or old Soviet archives? Doesn't such knowledge help us to decide when to stop updating? So, yes, the value of reporting inaccessible evidence is small for individual updates, but it seems to matter for deciding when to stop updating.
Tasha Fairfield wrote:Thanks for your post, Veronica! These are all interesting points. [....]
I would also question the need to systematically list informants whom we could not access. I can certainly imagine that in some circumstances it would be salient to point out in the text that "Minister Y could not be reached to comment"—that in itself might serve as salient evidence, depending on the research question and the context. But generally speaking, from an analytical perspective, what do we learn by seeing a list of informants who were not actually interviewed (setting aside whether human subjects concerns permit drawing up such a list)? We have no way of knowing what information might have come out of those hypothetical interviews or how that information might have affected our conclusions, and from a Bayesian perspective, it's irrelevant—we make the best inferences we can based on the data we actually have. This is another contrast with orthodox frequentist statistics, where the data that we might potentially have collected but did not plays an important role in inference, whereas in Bayesianism, inference is based entirely on the actual data we did collect.



Richard Pasquier
Temple University
Posts: 2
Joined: Sun Nov 27, 2016 2:13 pm

Re: Presenting our evidence (and the absence of evidence or data)

Posted Sun Nov 27, 2016 10:59 pm

Reading this discussion of the practical implications of making the evidence we rely on more transparent, I am struck that it is important to remember that there is a lot at stake. I offer this reaction as a PhD student who has not yet completed a dissertation but is contemplating a small-n project that could involve gathering a significant amount of evidence from participant interviews. I also bring the perspective of someone who has been a practicing lawyer in a corporate setting for many years and has come to appreciate the challenge of distinguishing reliable from unreliable factual claims.
I recently compared two articles analyzing a single case in the field of international financial regulation: the Basel Accords of 1987. Both articles, not surprisingly, used process tracing and relied on evidence gathered from official documents and participant interviews. Both addressed the theoretical question of which variables cause regulators to seek coordination across borders and develop international standards. At issue was the extent to which regulators acted primarily from "consensual knowledge" about the origins and causes of banking crises or whether their behavior was better explained by structural features such as institutional arrangements or incentives.
What hit me strongly was the extent to which both authors relied on recollections of "what we really were getting at…" offered by participants in interviews. In particular, in the case of the article that concluded that "consensual knowledge" played a decisive role, everything hinged on the credit given to the accounts offered by officials within the Federal Reserve and the Bank of England as to why they pushed for international standards. Is it a mere coincidence that the story that emerged was a hero's tale of farsighted regulators battling parochial interests in pursuit of fair-minded and tough rules to protect the world against catastrophe? The second article was a bit more convincing, but only because the author had constructed a theoretical model of regulator behavior that was more nuanced and realistic. Nevertheless, both analyses shared an essential weakness: the theoretically crucial answer to the question of "why" the regulators opted for coordinated standards depended on the results of participant interviews, which are not available to subsequent researchers for examination.
I am less concerned about being able to review notes of every interview conducted by a researcher than I am about being able to gain insight into the bases for a researcher drawing a crucial inference from an interview. Without that, process tracing in “expert areas” such as international banking or environmental regulation risks becoming a recitation of the favored narratives of participants, who of course prefer magnifying their role and minimizing the extent to which their deliberations were guided by mere politics, national rivalry or institutional dynamics.
I would like eventually to write a dissertation that sheds light on what drives the creation of regulatory regimes in the area of chemical safety, in particular the REACH regulation in the EU. On the surface, it represents a good test case for weighing power-based and knowledge-based theories about what drives regime creation. I would learn much from participant interviews, whether with regulators, industry stakeholders, NGO officials or scientists. I now understand that this type of evidence needs to be weighed very carefully in order to avoid the type of participant bias that I noted in the accounts of the Basel Accords. It will be important to select interview subjects carefully and also to subject their accounts to keen analytical scrutiny. I would expect those building on my work to want to understand my choices of interview subjects and to have the independent ability to verify the bases of my factual inferences from these interviews. This might present some challenges in setting the ground rules of interviews with subjects and obtaining the permissions necessary to make the contents available for active citation. Nevertheless, I now believe this step would be an important boost to the credibility of my conclusions, particularly if I conclude that the result hinged on ideas and not broader power-based factors, whether they be political or economic or a combination of both.
I wonder if any participants in this discussion have insight to offer on how to combat some of the inherent limitations of participant accounts in constructing causal stories. To me this issue transcends the narrower concern about whether we require active citation of interviews. The alternative may be to be transparent in the narrative itself about the evidentiary issues that confront process tracing and within-case methods generally.
An example from another field might be instructive. In his biography of the French statesman Talleyrand, the French historian Jean Orieux weaves arguments with his sources right into the narrative flow, often expressing skepticism or outrage at the claims of the various memoirists and other primary sources and taking the time to explain why. These issues are not buried in footnotes but are right up front for every reader to consider as they read the quotations and contend with the contemporary interpretations. There is no smooth surface, only contingency and contention displayed boldly right in the center of the narrative. At first, this practice was jarring, but over time its cumulative effect was to give greater confidence that evidence was being weighed carefully and that no observer's particular point of view was accepted as truth without considerable effort at corroboration. Is this an alternative approach to consider? If it is, then word limits in journal submission requirements would need to be revisited.


Tasha Fairfield
LSE
Posts: 17
Joined: Mon Sep 05, 2016 4:05 pm

Re: Presenting our evidence (and the absence of evidence or data)

Posted Tue Nov 29, 2016 10:36 am

Thank you for these thoughts, Rick. Making inferences from interview information certainly requires careful thought. I would encourage you to take a look at my paper with A.E. Charman--we devote a whole section to talking about how to define and analyze interview evidence from a Bayesian perspective. http://tashafairfield.wixsite.com/home/research In short, the evidence is always "some source S said X in some context C", and we have to evaluate which of our rival hypotheses makes that evidence more likely given our background knowledge--and this is where all of our careful thought about what incentives the source might have to hide or distort (or simply misremember) the truth comes into play. You might also check out the post on our thread about Bayesian inference.
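As a rough illustration of where that careful thought enters (a schematic sketch, not the specific notation of the paper), the weight such a piece of testimony carries can be written as a log-likelihood ratio comparing two rival hypotheses H_1 and H_2:

\[
\text{weight of } E \;=\; \log \frac{P(E \mid H_1, I)}{P(E \mid H_2, I)},
\qquad E = \text{"source } S \text{ said } X \text{ in context } C\text{"}
\]

Everything we know about the source's incentives to exaggerate, conceal, or misremember enters through the background information I, and hence through how probable we judge the statement to be under each hypothesis; testimony from a source with strong incentives to say X regardless of which hypothesis is true earns a likelihood ratio close to one and carries little weight.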


Richard Pasquier
Temple University
Posts: 2
Joined: Sun Nov 27, 2016 2:13 pm

Re: Presenting our evidence (and the absence of evidence or data)

Posted Tue Dec 06, 2016 12:17 pm

Tasha Fairfield wrote:Thank you for these thoughts, Rick. Making inferences from interview information certainly requires careful thought. I would encourage you to take a look at my paper with A.E. Charman--we devote a whole section to talking about how to define and analyze interview evidence from a Bayesian perspective. http://tashafairfield.wixsite.com/home/research In short, the evidence is always "some source S said X in some context C", and we have to evaluate which of our rival hypotheses makes that evidence more likely given our background knowledge--and this is where all of our careful thought about what incentives the source might have to hide or distort (or simply misremember) the truth comes into play. You might also check out the post on our thread about Bayesian inference.

Thanks, Tasha. I have already profited from study of the Appendix to your fascinating article on taxing economic elites in unequal democracies (2013). I plan to build Bayesian techniques into my future research designs and see where that takes me. I look forward to reading your paper with A.E. Charman for further insights into interviews.


Aaron Schneider
University of Denver
Posts: 1
Joined: Mon Dec 19, 2016 6:17 pm

Re: Presenting our evidence

Posted Mon Dec 19, 2016 6:54 pm

Tasha Fairfield wrote:How should scholars present the evidentiary basis of their analysis? Is it best to discuss the evidence holistically as part of the published analysis? Should we use active citations? When might compiling a list of specific pieces of evidence be helpful?

Aaron Schneider. University of Denver. I am not sure if the following observations fit in this thread or another, but I wanted to log the comment so that the collators of responses might have access to one more piece of information. In fact, the way they are collecting these posts and aggregating them into a statement on qualitative and process tracing analysis might be instructive to the way qualitative researchers do their work.

In considering the evidentiary basis of analysis coming from qualitative research, there are a few scenarios that perhaps could be useful to discuss. As a number of contributors have noted, many of our interviews provide the grist and background to help update our beliefs. As we become more confident in an argument, we return to a single interview subject, document, or quantitative data observation that then expresses the narrative we have built up over the interviews. This last step in the research serves as the smoking gun, and it is usually the one cited. It seems reasonable to provide some holistic or macro form of citation of all the building blocks that led to that observation, and I can imagine scenarios in which a more complete description of the Bayesian updating process might be useful. One scenario would be a case in which the sequence or character of the updating interviews or qualitative observations is itself part of the story. A subsequent researcher would need to know those prior steps if they wished to replicate the process of updating or reproduce the study in another context. The roadmap provided by the prior steps might belong in an appendix, if it is not included in the main text. Another scenario would be one in which the updating interviews provided clues that ruled out rival arguments, making the interviews more than non-findings or simply building blocks in an updating process, and worth including in case readers are curious whether those rival arguments were considered. In cases in which the only issue is to provide a reliable and replicable observation that supports or refutes a portion of an argument, it seems sufficient to include only the last step. In such cases, the point is to provide something along the lines of a test of an implication of a larger argument. The piece of evidence presented is not the only thing supporting an argument, nor would it be read that way; it simply helps the reader understand how one implication is supported or refuted by the evidence.


Derek Beach
University of Aarhus
Posts: 7
Joined: Mon Apr 18, 2016 2:23 am

Re: Presenting our evidence

Posted Mon Dec 19, 2016 8:10 pm

Tasha Fairfield wrote:Thank you for these thoughts. I think your comments are right on, and they go to the core issue of what expectations are both reasonable for authors and also helpful for readers. I would agree that we gather so much "background information" (in Bayesian terms) that informs our inferences that it is simply not feasible to list it all out, nor do I think doing so would provide readers with much insight beyond what we convey in the text of our analysis. I handled the issue of interviews that I did not explicitly cite in my book by listing the number and type of informants I spoke with for each of my three countries in my reference list (e.g. x business people, y opposition politicians, z government officials). I thought that seemed like a reasonable compromise--it didn't impose a lot of undue work, and it gives readers a sense of the depth of my research and the larger body of evidence I am drawing on.


I agree on background information and 'non-reporting', but I believe that when we are making explicit judgements about the uniqueness of a particular piece of evidence that build on our understanding of context from this information, we have to be able to lay bare our claims for why we believe the evidence found actually is unique. Otherwise we are just postulating, without any justification, that finding actor A saying X is relatively unique in the given context.


Derek Beach
University of Aarhus
Posts: 7
Joined: Mon Apr 18, 2016 2:23 am

Re: Presenting our evidence

Posted Mon Dec 19, 2016 8:13 pm

Guest wrote:Thank you Tasha for the feedback. This exchange brings up another pragmatic issue, of cost and benefit in trying to achieve "transparency."

I'll speak from my personal experience as a scholar in comparative politics currently finishing a book manuscript based on extensive fieldwork in a set of countries over an extended period of time. Deciding on how to use my interviews and deciding how to present them, no doubt worthwhile things to think about, has been exceptionally taxing. To illuminate, let me explain what I'm doing, as I think it's worth being transparent about transparency, i.e. what we're doing, why we're doing it, and what's to be gained.

The first step is to go through years of notes to make a full list of all the interviews conducted. Then, for each interview I note if it is cited or referred to explicitly in the text. If I refer to an interview with such and such a figure in the text (always confidential, so by what they do and not who they are), I include the interview in the list that will go at the back of the book. If I do not cite or refer to it explicitly, i.e. it's what I described above as "grist," then it's not going in the list of interviews. I'm thinking of these in the same way as we think of the tens or hundreds of articles and books we’ve read but don't cite (i.e. in the same way the reference list includes works cited and is not a full list of everything I've read on the topic, the interview list features interviews cited but is not a full list of everyone I talked to). But I don't ignore those interviews either. Rather, I am producing another table that lists all of the types of actors (again, not names but by location/profession/function) interviewed in each country. Any interview on the list at the back of the book is reflected in the table, though not all the interviews in the table are listed in the back of the book.

Is all this sufficient for transparency? I don't know, but I know that it's been a lot of work and time-consuming, and to be frank I'm not sure anyone will find my book any more or less credible on account of these efforts. I think the people who buy the argument and research would do so anyway and those who do not still will not, so I'm not convinced I'm accomplishing much for the efforts. (I should also note that one reason why compiling this list and making these decisions has been so time consuming is because the norms for "transparency" have changed substantially since I started the project, and as a result I feel like I'm trying to retrospectively apply today's norms to yesterday's research.)



This is just 'background information' until you begin making inferences about why or what happened based on the information. Once inferences are involved, we have to be able to back them up with evidence. In presenting evidence, what we are actually doing is justifying why a given piece of empirical material means what we believe it means - i.e. that it enables either confirming or disconfirming inferences to be made.


Derek Beach
University of Aarhus
Posts: 7
Joined: Mon Apr 18, 2016 2:23 am

Re: Presenting our evidence

Posted Mon Dec 19, 2016 8:48 pm

Tasha Fairfield wrote:How should scholars present the evidentiary basis of their analysis? Is it best to discuss the evidence holistically as part of the published analysis? Should we use active citations? When might compiling a list of specific pieces of evidence be helpful?


In our new book, Causal Case Studies, we propose in chapter 6 an 'argument roadmap' that includes the presentation of the evidence underlying each inference made. While onerous to do, we cannot claim that we have confirmed or disconfirmed a claim to some degree with empirical evidence unless we can back it up with proof.

Beach and Pedersen (2016) Causal Case Studies: Foundations and Guidelines for Comparing, Matching and Tracing. Ann Arbor: University of Michigan Press.


Guest

Re: Presenting our evidence

Posted Sun Jan 01, 2017 2:01 pm

[quote="Guest"]This intervention might be tangential, but I am reading the question posed here about citing presenting data narrowly, as being about presenting interview data, and my overall reaction is that it is difficult (and probably unrealistic) to expect all interview data to be presented. Often material from an interview leads the researcher to collect numerical data (often data that was not part of the original research design) that can then be presented in the the text. In that case the interview was an essential part of the fieldwork, but it is not the source of essential information per se. Similarly, material from one interview is often what leads the researcher to contact someone else, and it is that subsequent interview with key nuggets of information that is cited; in that case the first interview is background -- invaluable background, but background that would be awkward to cite. Most generally, much of the information we obtain in interviews, particularly open-ended discussions, is essentially grist, information that helps us make sense of what we observe. It's important to give the reader a clear sense of what sorts of actors were interviewed, but reporting each interview (not to mention the content of each interview, another issue altogether) seems unnecessary and perhaps counter-productive.[/quote]

Kent Eaton, UC Santa Cruz. I use interviews in a similar fashion. Even though I tend to target political elites, officialdom, and government decision-makers in my interviews (as opposed to societal actors who are further from the decisions I seek to understand in my research), it is actually quite rare that those interviews generate specific quotes that I would bring into the body of the text as evidence to support a theoretical claim. Like this commenter, the interviews I conduct are mostly "grist" -- confirming or challenging my understanding of events, pointing me to factual information or data sources I was not aware of, and putting me in touch with other interview possibilities. It's not at all clear to me how this kind of data could/should be shared.


Guest

Re: Presenting our evidence

Posted Sun Jan 01, 2017 2:09 pm

[quote="Guest"]I am generally hesitant to lay down firm rules for all works at all times but it seems to me that presenting at least this kind of summary, which Tasha lays out here, is a reasonable guideline. It could even be done in an article as it doesn't take much space and does give a useful overview of the empirical universe the author is working with. Something similar could be done in a couple of sentences to describe the broad sets of documents consulted either in archives or online, e.g., "I consulted all the party programs available online on national party websites as well as historical programs in the national archives." Speaking as someone who does a lot of reviewing, I think this is also often helpful for the author because it's a pretty clear answer to the question of what original empirical material the author is bringing to the debate.

On the original question of when it might be helpful to compile a list of specific pieces of evidence, this is something I often do for myself, especially when I am trying to put together a chronology for process tracing or some other reason. I create big timelines out of all of my kinds of data, organized by date and with sources and direct quotes when relevant. It might have a newspaper article describing some development in December of 2014, then a link to a congressional debate that took place in January 2015, then an interview describing some social movement strategizing in late January in response, etc. The question for me is when, if ever, should that be shared. I wouldn't put it into an article or even a book because it's too long and detailed. Many of the things on it wouldn't make it into a narrative even in a book-length piece. I suppose it could be provided as an appendix, but my gut feeling is that it's more the sort of thing I could provide if there were someone who wanted more information about the unfolding.

Sometimes these discussions of how much information to provide seem to assume that providing more and more information is the most useful/transparent, but I can easily imagine situations where providing more information (all of the interviews; all of the documents consulted, to show you didn't cherry-pick) actually hides more than it illuminates.

Kent Eaton, UC Santa Cruz. I also like this idea that Tasha has proposed of providing a summary, and concur with this commentator that it could even be done in an article to give "an overview of the empirical universe" that the author is working in. The idea of giving the reader in a few sentences a clear sense of the range and type of documents and sources consulted seems very feasible to me.
