IV.2. Settings of political violence

Forum rules

We encourage contributors to the Discussion Board to identify themselves publicly by registering and logging in prior to posting. However, if you prefer, you may post anonymously (i.e., without having your post attributed to you) by posting without logging in. Anonymous posts will display only after a delay to allow for administrator review. Contributors agree to the QTD Terms of Use.


Zachariah Mampilly
Vassar
Posts: 11
Joined: Wed Aug 31, 2016 11:14 am

Question 3: On innovation and examples

Mon Oct 24, 2016 1:59 pm

3) What are excellent examples of transparency practices in research in settings of political violence, and what makes them good models for others? What innovative transparency practices might research on political violence adopt?


Guest

Re: Question 3: On innovation and examples

Thu Dec 01, 2016 7:34 pm


This is the million-dollar question in my mind. At the risk of speaking for others, I think most of us agree that improving the transparency of our research would be a good thing, and that protecting our subjects should be imperative. So, can we do both, and if so, how? One possibility could be to make de-identified versions of our interviews available (deleting anything that could be a possible identifier, like a village or neighborhood name, as per Ana’s reply to question 2). This approach would allow other researchers to see the wider body of “data” from which we are making inferences and increase confidence that we are not “cherry-picking” and, therefore, in the validity of our conclusions.

At the same time, this approach may also have some non-trivial costs. In particular, I worry that it could have a chilling effect on our research. When I have conducted interviews in the past, I discussed with my interviewees (as per my IRB protocol) that I would use their interviews to draw conclusions about my research question and possibly quote parts of those interviews, but that I would not publish the interviews themselves. If I had instead informed interviewees that I would be making anonymized versions of their interviews public, I wonder if some of them would have declined to be interviewed or would have self-censored their responses. While perhaps interviewees should not feel threatened by this, it can be difficult to gain interviewees’ trust in a setting where, if the information shared in the interview were exposed, they would face real and significant risks (e.g., arrest or retaliation). Requiring researchers to make anonymized versions of interviews available demands another layer of trust from respondents and, I worry, could mean that researchers are less able to conduct high-quality research on the most important and difficult research questions.

With this in mind, I wonder if we could make more of the content of our interviews available while stopping short of providing full texts. For instance, I could imagine developing a standard that scholars report the question asked as well as the answer provided by the respondent when quoting from an interview. Or, similarly, that scholars provide a longer, de-identified excerpt from the interview for each quote used. These approaches could give readers more insight into the wider context of the interview and the scholar's interpretation of the interview material without compromising respondents’ security or having a chilling effect on research.

If the concern we are responding to is cherry-picking, I can also imagine developing a standard that scholars report the number of interviews consistent and inconsistent with an initial argument or hypothesis and provide quotes and/or short excerpts from both. Like efforts to encourage quantitative studies with null results (e.g., the recent CPS issue using results-blind review), such a standard could help mitigate the publication bias that exists in qualitative as well as quantitative studies. Ultimately, though, such a standard, like any standard that might be adopted, would depend on the integrity of the researcher to uphold it.

These are some quick, initial ideas and I would welcome hearing others’ thoughts. Thanks to the working group organizers for creating this forum for dialogue within our community!

Yael Zeira
University of Mississippi


Zachariah Mampilly
Vassar
Posts: 11
Joined: Wed Aug 31, 2016 11:14 am

Re: Question 3: On innovation and examples

Mon Dec 05, 2016 8:36 am

Thanks, Yael, for the thoughtful suggestions. I hope others can use this as a starting point to debate what may be both ethical and practical as this process moves forward.




Paul Staniland (University of Chicago)

Re: Question 3: On innovation and examples

Tue Dec 20, 2016 11:42 am

This is a slightly different take on transparency, but one that I think is worth noting. There are two kinds of transparency at play here. One is about the underlying data: notes, transcripts, identities or their absence. The other is about how inferences are made. They're obviously related, and the former has much more at stake for questions of journal submission guidelines, DA-RT, etc. But I think it's worth thinking hard about how qualitative researchers make inferences. Something I'd suggest for improving transparency is more open discussion of failures: theories that died in the face of evidence, cases that simply don't work, and research strategies that turned out to be inadequate, inefficient, or wildly unrealistic.

My sense from talking with reasonable quantitative people is that they are often more concerned about the latter issue than the former. It's not that they think qualitative or multi-method researchers are making up their interviews out of whole cloth, but rather that they are shading away contrary evidence, forcing case outcomes into alignment with theoretical expectations, etc. And when a candidate shows up at a job talk with three cases that all magically work for their argument, skepticism grows. Which is not to say that these scholars are in fact playing games, to be clear.

A different, complementary solution for transparency, rather than being forced to reveal all evidence, is to be clearer about where things went wrong: systematically working through how theories were built, where they worked and where they didn't, and when the goal is generalization versus careful, theoretically informed description. That might be wrong, or might be more appropriate for some forms of research than others, but it's a distinction I've been thinking about recently, and it could help in at least a few cases.

Paul Staniland
University of Chicago


Scott Straus
UW-Madison
Posts: 2
Joined: Tue Jan 03, 2017 3:07 pm

Re: Question 3: On innovation and examples

Tue Jan 03, 2017 3:54 pm

Paul's suggestion is excellent. My sense is that many of us are doing this work already. The fix, per Paul's idea, would be to draw attention to (that is, to increase the transparency of) the work that is already being done in this regard.

