Do no harm
Posted: Mon Apr 25, 2016 6:18 pm
Dear all,
I shared something similar to this in a comments thread a few days ago, but I wanted to make this overarching point on the main page too. I am troubled by the impact that academic researchers working with politically sensitive material can have on the lives and personal safety of their contacts and interlocutors. This post is about ways to minimize that risk. These concerns of course affect research subjects, but I think that, as a field, we tend to think less carefully about the drivers, interpreters, fixers, RAs, and other local interlocutors who support our work. When we do not take sufficient precautions to obscure and anonymize politically sensitive work, these individuals can also face considerable harm by virtue of being associated with us. From this perspective, "do no harm" is not just about removing identifiers and withholding interview dates and transcripts, but about making the overall tone and content of our work "safe," or as safe as it can possibly be, for the various individuals who could be implicated in its content.
I would like to share a couple of examples. The first resonates with many contributors to this site, and is a point that has been made many times already - I think it is worth reiterating. In contexts of violence, in repressive authoritarian regimes, or in situations of conflict-related insecurity, the research environment can change rapidly over time. What is deemed safe one day may not be safe the next day, or many months or years later. Thus, research subjects often consent to participating (and/or being identified), and RAs and informants consent to assisting us, based on the level of risk on the day the research is carried out. As much as we value deferring to local expertise, the individuals we interact with may not be in a good position to assess risk in two years' time, under a new political regime, in an election season, or during a political purge. They may also face financial incentives - implicit or explicit - to help out or participate in our research, even if they are aware of the potential risks. Furthermore, few individuals know, at the time the research is carried out, what the ultimate punchline of the piece will be or what it will look like once published.
For those of us working in conflict zones, I'm not sure it's ever possible to assert with 100% certainty that work that is political in content poses no risk to those who accompanied and assisted us in the field (even once we have destroyed data and encrypted transcripts). Even if we publish nothing at all, many people know who drove us around and who our "friends" in the field were. For those working in authoritarian contexts, this fact already makes it difficult to publish even the most self-censored work. I am glad that we are having a conversation about anonymity and political risk. However, I believe we should be brainstorming about ways to further protect sensitive data, and ways to protect researchers, so that they can feel confident and secure in invoking greater anonymity and taking whatever precautions they deem necessary to mitigate these risks when making novel and important contributions to scholarship.
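(As one concrete illustration of the kind of precaution I have in mind - offered purely as a sketch, not as an endorsement of any particular tool, and certainly not as a substitute for the harder judgment calls discussed above - transcripts can at least be kept encrypted at rest, so that a copied or confiscated laptop does not expose raw text. The short Python example below uses the widely available "cryptography" package; the filename is hypothetical, and the key would of course have to be stored separately from the data, for example on a hardware token or with a trusted colleague at home.)

from cryptography.fernet import Fernet

# Generate a key once and keep it away from the encrypted files themselves.
key = Fernet.generate_key()
fernet = Fernet(key)

# Read a transcript (hypothetical filename) and write out only the ciphertext.
with open("interview_transcript.txt", "rb") as f:
    plaintext = f.read()
with open("interview_transcript.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Decryption later requires the key:
# plaintext = Fernet(key).decrypt(open("interview_transcript.enc", "rb").read())

(Securely deleting the original plaintext afterwards is a separate problem, and none of this addresses the deeper issue that people on the ground still know who accompanied and assisted us.)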
It is also worth noting that even the most disorganized political regimes are becoming increasingly adept at monitoring the research that is out there and vetting academics who write on potentially political topics. It is already extremely stressful for researchers working in such environments to weigh these risks and still feel comfortable writing on their areas of expertise. I am increasingly coming to believe that "do no harm" means no longer conducting my own research in the places I have recently worked (Eastern DR Congo and Rwanda), because I cannot trust that anything I write won't be used against people I know at a later date. I know many others - particularly those working in Rwanda - who feel the same way.
My second example pertains to the integrity with which some academic researchers conduct themselves in the field. While the vast majority of researchers hold themselves to very high ethical standards, there are also those who are either not fully aware of the risks their work poses to interlocutors and assistants, or not sufficiently familiar with their field sites to accurately assess risk. I wanted to share this story about a journalist's book on the Sri Lankan civil war, in which an individual was identified without informed consent having been sought: https://blogsmw.wordpress.com/2016/04/2 ... f-trouble/
While such behavior (identifying interviewees by name) may be rare in academia, I do not think it is entirely absent. I am very wary of a world where we create incentives that encourage those who may already be careless, or less scrupulous in the field than we would hope, to engage in behavior that could compromise the security of their informants or assistants. It seems highly plausible that there are inexperienced researchers, traveling to sites they know little about, who are not fully aware of the gravity of the data they are dealing with. Giving these individuals professional incentives to treat anonymity as secondary to transparency seems troubling, and making it easier for them to get work published if they release transcripts or other identifiers sets a worrying precedent for those insufficiently attentive to security concerns. If we are to develop further professional guidelines regulating data access, I believe these should first and foremost emphasize the principle of "do no harm," and ensure that even those who do not know what they are doing are never encouraged or incentivized to place the guidelines of journals (and their perceived chances of being published) above the personal security of their subjects and support staff.