Substantive Dimensions of the Deliberations

Milli Lake
ASU
Posts: 3
Joined: Sun Apr 24, 2016 9:03 am

Do no harm

Posted: Mon Apr 25, 2016 6:18 pm

Dear all,

I shared something similar to this in a comments thread a few days ago, but I wanted to make this overarching point on the main page too. I am troubled by the impact that academic researchers working with politically sensitive material can have on the lives and personal safety of their contacts and interlocutors. This post is about ways to minimize that risk. These concerns of course affect research subjects, but I think that, as a field, we tend to think less carefully about the drivers, interpreters, fixers, RAs and other local interlocutors that support our work. When we do not take sufficient precautions to obscure and anonymize politically sensitive work, these individuals can also face considerable harm by virtue of being associated with us. From this perspective, "do no harm" is not just about removing identifiers and withholding interview dates and transcripts, but about making the overall tone and content of our work "safe," or as safe as it can possibly be, for the various individuals who could be implicated in its content.

I would like to share a couple of examples. The first resonates with many contributors to this site, and is a point that has been made many times already - I think it's worth reiterating. In contexts of violence, in repressive authoritarian regimes, or in situations of conflict-related insecurity, the research environment can change rapidly over time. What is deemed safe one day may not be safe the next day, or many months or years later. Thus, oftentimes, research subjects consent to participating (and/or being identified), and RAs and informants consent to assisting us, based on the level of risk on the day the research is carried out. As much as we often value deferring to local expertise, the individuals we interact with may not always be in a good position to assess risk in two years' time, under a new political regime, in an election season, or in a political purge. They may also face financial incentives - implicit or explicit - to help out or participate in our research, even if they are aware of potential risks. Furthermore, few individuals know, at the time the research is carried out, what the ultimate punchline of the piece is going to be or what it will look like once published.

For those of us working in conflict zones, I'm not sure it's ever possible to assert with 100% certainty that work that is political in content poses no risk to those who accompanied and assisted us in the field (even once we have destroyed data and encrypted transcripts). Even without publishing anything at all, many people know who drove us around and who our "friends" in the field were. For those working in authoritarian contexts, this fact already makes it difficult to publish even the most self-censored work. I am glad that we are having a conversation about anonymity and political risk. However, I believe that we should be brainstorming about ways we can further protect sensitive data, and protect researchers, so that they can feel confident and secure in invoking greater anonymity and taking the precautions they deem necessary to mitigate these risks when making novel and important contributions to scholarship.

It is also worth noting that even the most disorganized of political regimes are becoming increasingly savvy about monitoring the research that's out there and vetting academics writing on potentially political topics. It is already extremely stressful for researchers working in such environments to weigh these risks and still feel comfortable writing on their areas of expertise. I am increasingly coming to believe that "do no harm" means no longer conducting my own research in the places I have recently worked (Eastern DR Congo and Rwanda), because I cannot trust that anything I write won't be used against people I know at a later date. I know many others - particularly those working in Rwanda - who feel the same way.

My second example pertains to the integrity with which some academic researchers conduct themselves in the field. While the vast majority of researchers hold themselves to very high ethical standards, there are also those who are either not fully aware of the risks their work poses to interlocutors and assistants, or are not sufficiently familiar with their field sites to accurately assess risk. I wanted to share this story about a journalist's book on the Sri Lankan civil war, in which informed consent was not sought before identifying an individual: https://blogsmw.wordpress.com/2016/04/2 ... f-trouble/

While such behavior (identifying interviewees by name) may be rare in academia, I do not think it is entirely absent. I am very wary of a world in which we create incentives that encourage those who may already be careless, or less scrupulous in the field than we would hope, to engage in behavior that has the potential to compromise the security of their informants or assistants. It seems highly plausible that there are inexperienced researchers, traveling to sites they know little about, who are not fully aware of the gravity of the data they are dealing with. Providing these individuals with professional incentives to treat anonymity as secondary to transparency seems troubling. Making it easier for them to get work published if they release transcripts or other identifiers sets a worrying precedent for those insufficiently attentive to security concerns. If we are to develop further professional guidelines regulating data access, I believe these should first and foremost emphasize the principle of "do no harm" and ensure that even those who do not know what they are doing are not encouraged or incentivized in any way to place the guidelines provided by journals (and their perceived chances of being published) above the personal security of their subjects and support staff.

Alan Jacobs
University of British Columbia
Posts: 38
Joined: Fri Feb 26, 2016 9:59 pm

Re: Do no harm

Posted: Mon Apr 25, 2016 8:24 pm

Thank you, Milli Lake, for this extremely rich and thoughtful set of comments. The principle of "do no harm" must surely be a non-negotiable ethical bedrock of all of our research practices; and your post very helpfully elaborates some of its implications in contexts of political violence, repression, or instability.

I'd be very interested in hearing from others about what "do no harm" might concretely imply for transparency in different kinds of research contexts. Which specific forms of transparency are compatible with this principle in what kinds of settings? And which are incompatible? How can we make our empirical findings interpretable and open to evaluation by others while still fulfilling our core ethical commitments to research participants?

Jesse Driscoll
UCSD
Posts: 5
Joined: Tue Apr 26, 2016 4:19 pm

Re: Do no harm

Posted: Mon May 16, 2016 11:03 pm

I just wanted to thank Milli Lake. I think I am in total agreement. I have had to stop working in Tajikistan because the concerns that Milli has raised have not been, and are not likely to be, answered to my satisfaction. That didn't stop me from writing something afterwards, though. I'm not sure if what I wrote is going to make the world a better place for my Tajik subjects. I hope what I wrote didn't make things worse. I am not always sure how I would know.

JD

Lee Ann Fujii
University of Toronto
Posts: 6
Joined: Thu Apr 07, 2016 9:56 am

Re: Do no harm

Posted: Wed May 18, 2016 10:23 am

I, too, agree with what Milli wrote, especially in the context of a place like Rwanda, where the regime actively goes after (as in public and ongoing character assassination in print, on the internet, and at conferences) academics who say anything the least bit critical of the regime. Researchers are socialized to think in instrumental timelines. We are pursuing doctorates, so we need to finish our dissertations. We are trying to get a job, so we need to finish an article or book ms. What tends to fade when we arrive back home are the risks and dangers that our friends, colleagues, research assistants, drivers, and fixers back in the field continue to face after we are gone. The risk comes from simply having been associated with us. Authoritarian regimes can have a long reach and long memories. They may not be able to imprison Americans (though Kagame did that with one American lawyer), but they can bar researchers from ever returning, and they can always go after those we worked with during our time in the field. Guilt by association is a constant risk there, as it is in many other places in the world. Not only does DA-RT fail to foresee these kinds of extra-IRB dilemmas; its tenets encourage researchers (especially graduate students and early-career scholars) to focus on the products and procedures of DA-RT-prescribed research rather than reflecting on the real conditions they encounter in their field sites.

Guest

Re: Do no harm

Posted: Sun May 22, 2016 8:05 am

Thank you to Jesse, Lee Ann, and Alan for your thoughtful responses. It's always helpful to hear how others have approached these issues, even if there is no satisfying answer. Alan, in answer to your questions - Sarah Parkinson just wrote an excellent post on how to evaluate methodological rigor and employ transparency in ethnographic or other field-based work on violence. Among other things, she points to what she looks for when reviewing this type of work (length of time in the field, depth of understanding of the research site, whether the stated methods match the findings and content of the published work, care and attention in describing interviews, research subjects, approaches, etc.). While none of these tactics is entirely foolproof, she highlights a number of ways we might think about evaluating the findings and conclusions generated by sensitive fieldwork without releasing potentially compromising information. I don't know if I have much to add here, but I thought it worth mentioning that post.

Thanks to all, and I will continue to ponder this.

Paula M. Pickering

Re: Do no harm

Posted: Mon May 23, 2016 3:11 pm

I appreciate those who have emphasized the importance of doing no harm.

My research seeks to understand how citizens in post-conflict settings react to and impact international efforts to reconstruct post-war states and communities. Because I believe it is impossible to understand the ways in which citizens in post-conflict countries impact statebuilding without spending time in their communities, my research very often involves substantial field work. While in the field, I gather both qualitative and quantitative data. The qualitative methods I use most commonly are semi-structured interviews and participant observation. The citizens I work with, ethnic minorities in their local communities in South Eastern Europe who were often targeted by violent ethnic cleansing campaigns or even genocide during war, feel vulnerable in their ethnically polarized, economically depressed, corrupt, and politically dysfunctional polities today. This was particularly so when I conducted research two to eight years after mass violence, but it is still true, albeit to a lesser extent, 20 years after war. For these reasons, and in accordance with the ethical norms of social science research (Sieber 1992) and the "do no harm" norm of development and post-conflict practitioners (Anderson 1999), I strongly believe that I must be transparent about the methodology I use for collecting data and as transparent as possible in the analysis of these data, but vigilant about protecting the anonymity of my sources, who are vulnerable citizens.
The communities in which I have lived and conducted research are small. Small enough that local friends, acquaintances, and colleagues frequently let me know when they see me at a café, on a tram, walking with so-and-so, etc. One thing I learned through living with citizens who are ethnic minorities in their communities, soaking and poking (Fenno 1978), and observing their attempts to re-integrate into their polarized post-war communities is that their strategies are frequently intentionally quiet and involve careful outreach to members of the "majority" community with whom they can form helpful, weak ties (Granovetter 1973). In order for me to recognize these strategies and the conditions in which they are successful, as well as to encourage citizens to speak and act candidly in my presence, I have invested years in learning local languages and in cultivating the trust of citizens in these diverse post-conflict communities. Some political scientists argue that citizens in post-conflict states are "passive" and thus cannot influence post-conflict reconstruction. But my fieldwork suggests that scholars cannot assume away citizens' agency; they can only understand the impact (or lack thereof) of citizens in these communities if they do intensive, time-consuming qualitative research in local languages, with the promise of anonymity.
I am always able to leave these post-war communities, a luxury that my respondents and informants lack. Instead, these minority citizens must find ways to make the most of their lives in trying post-war settings, where, as one respondent told me, "even breathing is political." Even if citizens in post-war communities do not fear for their physical security, they often live in very precarious financial situations. They are concerned about their ability to keep their jobs and thus provide for their families. They worry that critiquing co-workers, neighbors, politicians, or even international donors who have provided aid to them could put their fragile co-existence at risk. Because citizens are struggling to rebuild "normal lives" in complex, dynamic post-conflict social and political contexts, I believe we learn most about their impact on post-conflict communities by observing them in real-life contexts rather than in a lab or through public opinion polls.
Making field notes gathered from participant observation publicly accessible, even in the unlikely event that doing so did not place my respondents and informants at risk, would not enable outside scholars to holistically understand or replicate the dynamic local community relations in which my informants are embedded and that shape their strategies for negotiating with local community members and state institutions. From my work with local survey experts in developing survey questionnaires and conducting pretests of questionnaires, I recognize that quantitative survey responses are also constructed, with respondents reacting with different levels of candor to the same interviewer and interpreting the same questions and response sets differently. I am confident that if I had to tell potential respondents that I must publicly post their interview responses online, even without identifying them, they would quickly hang up the phone, shut the door, or walk away. Demanding this kind of intrusive transparency would not only jeopardize the well-being of citizens in post-conflict countries, but would also discourage much-needed in-depth social scientific research in fragile post-conflict communities. Such demands for individual-level qualitative data would limit the ability of political scientists to contribute to scholarly work in this field and to practitioners who struggle to do good work in such challenging environments. I cannot put respondents at risk by publicly posting interview testimony or field notes from participant observation that could be used by local people potentially capable of identifying and punishing my respondents and informants.
While I strongly believe that vulnerable respondents and informants deserve protection, I also strongly believe that social scientists working in post-conflict communities should be transparent about their methods for collecting and analyzing data. This includes specifying explicit criteria for case selection and respondent recruitment, details of interview protocols, structures for ethnographies (also Laitin 1998), and demographic characteristics of respondents (Pickering 2007). In terms of analysis, political scientists should also strive to be as transparent as possible. For example, I have used the qualitative software NVivo (Scolari 2010) to probe extensive field notes to help identify the conditions under which minorities were successful in getting help from members of the "majority" group and from post-conflict institutions. Use of such qualitative analytical tools allows scholars to be transparent about search terms and the size of text units for within-context analysis, as well as to use random selection to prevent cherry-picking data and to conduct tests of inter-coder reliability. The methods that political scientists use should be determined by the nature of the questions they ask and the contexts in which they work. While scholars of post-conflict peacebuilding who gather qualitative data from vulnerable populations should be transparent about their methods for collecting and analyzing data, they should not be compelled to put their respondents and informants, or the quality of the testimony and observations they gather, at risk by providing their field notes and interview testimony.
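To make that last point a bit more concrete, here is a minimal, purely illustrative sketch of the kind of analytic transparency I have in mind: randomly selecting coded excerpts for independent re-coding and reporting an inter-coder agreement statistic, so that readers can assess coding reliability without ever seeing the underlying testimony. The coding categories, excerpt identifiers, and numbers below are invented for illustration, and this is not my actual NVivo workflow.

import random
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Two-coder Cohen's kappa over matched category labels.
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

random.seed(7)  # fixed seed so the random draw itself can be reported

# Hypothetical coding: excerpt IDs mapped to the first coder's category.
categories = ["weak_tie_outreach", "avoidance", "institutional_appeal"]
primary_coding = {f"excerpt_{i:03d}": random.choice(categories) for i in range(200)}

# Randomly draw a subset for independent re-coding (guards against cherry-picking).
sample_ids = random.sample(sorted(primary_coding), k=40)

# In practice a second coder codes these excerpts independently;
# their labels are simulated here purely for illustration.
second_coding = {i: random.choice(categories) for i in sample_ids}

kappa = cohens_kappa([primary_coding[i] for i in sample_ids],
                     [second_coding[i] for i in sample_ids])
print(f"Re-coded {len(sample_ids)} randomly selected excerpts; Cohen's kappa = {kappa:.2f}")

The point is that what gets shared is the coding scheme, the sampling procedure, and the agreement statistic, not the excerpts or the field notes themselves.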
Paula M. Pickering, College of William and Mary
