University of Utah
Colleagues, I am writing to comment on the 8-14-2017 draft report of QTD Working Group IV.1 Research in/on Authoritarian/Repressive Regimes. Professors Bellin, Greitens, Herrera, and Singerman have done a service to the discipline by writing a concise document based on their experiences in this significant research context. As they observe, “Authoritarian and repressive conditions prevail in well over half the world today” (p. 1).
I find the report to be very compelling, but I offer a few thoughts related to language choices and unarticulated assumptions about IRBs. But, first, and perhaps most important: Although the report uses the “transparency” word, that word is not necessary! And, in fact, the document belies the simplistic approach embedded in the DA-RT and JETS “transparency project,” with its language of “production transparency” and “analytic transparency.” As I finished the report, I became convinced that the DA-RT sections now in the 2012 APSA Ethics Guide (section III.A.6, pp. 9-10) should be challenged and changed. That the scholars who do this important, often risky work must ask for “exemptions” from a system designed for the safe work of quantitative data analysis is bizarre. As the report observes on p. 9, “the ethical default should be caution and confidentiality rather than ‘exemption’ from mandatory disclosure.”
All those who promote DA-RT as somehow neutral with regard to methods/approaches/topics need to read this statement in the report (p. 6, original emphasis):
“the only way to build the trust necessary to get local contacts to agree to interviews, share information and be candid in their views, is to promise anonymity. As one researcher put it, ‘lack of transparency is the only protection I can offer a local official.’”
Given the importance of context, the “best practices” phrase in Section III should be replaced with “suggested practices.” “Best practices” has taken on a very political usage: it suggests that the “best” should be used everywhere, and it puts those who, given their particular context, do not use them on the defensive.
I was so glad to see the discussion of repositories and the Boston College case on p. 5. On the latter, I would add a reference: Palys and Lowman (2012). On the former, in a period of astounding data insecurity, I have yet to see proponents of DA-RT/JETS seriously address the hacking problem for sensitive data stored in “trusted digital repositories.”
As I observed above, the report does not use the phrases, “production transparency” and “analytic transparency,” instead moving from two to five meanings of “transparency.” I think that the “transparency” word could be replaced altogether – as a move to shift the implied embrace of the DA-RT/JETS project, which has attempted to enforce a universality of understanding that undermines methodological pluralism. Because of that effort, the perfectly fine “transparency” word has morphed into a bludgeon and, for that reason, should be replaced.
The report’s five “meanings” encapsulate practices that exist and have existed outside the DA-RT framework, so why not take up that previous language?
a. “Transparency of method” is simply what we used to call, for articles, the methods section, and for books, a methodological appendix.
Two cautions. First, the idea of “full” transparency as expressed in DA-RT (“a full account of how they draw their analytic conclusions from the data, i.e., clearly explicated the links connecting data to their conclusions”) flies in the face of understandings of the practice of experts, which involves tacit knowledge. [See Flyvbjerg’s (2001, 10-24) discussion of Dreyfus’ model of the development of expert knowledge.] As Margaret Keck observes in note 23 of the QTD Working Group II.1, Research with Text-Based Sources: “To analyze documents and interviews, I rely not just on language skills but on knowledge accumulated from 35 years of work in a region….” Standard peer review relies on experts of a similar caliber to assess her research claims—which is why we read and trust peer-reviewed articles. But DA-RT sets up the expectation that any reader can assess her claims based, say, on an active citation link to a bit of an interview or document. That is an expectation that disappears the tacit knowledge of expert researchers.
Second, the phrase, “biases and distortions,” on page 2 seems to imply that some “ideal” exists in other research contexts (say, democracy). If democracy is that ideal, please spell it out. One of my pet peeves is the use of the word “bias” without clearly specifying the neutral point. As an interpretive scholar myself, I’m skeptical that many such neutral points exist.
b. “Transparency of position” seems to refer to “positionality” – a term that is at odds with the simplistic conceptualizations of DA-RT.
Again, I don’t think the word “bias” is useful. Where is the neutral position from which to generate data implied by that phrase?
The authors might simply use the word “reflexivity,” as they do at the bottom of this section. Transparency is an unfortunate metaphor that encourages a “laying bare” of the scholarly self, implying that our human role in knowledge production is meant to disappear. Contrast transparency (and, also, “openness”) with reflexivity, a concept that acknowledges human embodiment in all of its complexity; see, e.g., Timothy Pachirat’s October 1st post on the QTD Working Group Report on Ethnography and Participant Observation.
c. “Transparency as clarity of context” could be called contextuality.
Since you use Geertz, why not forgo ideas like “true meaning” and “raw data”? Both are inconsistent with his perspective. Fieldnotes are never “raw” but are already and inevitably based on positionality within the field. Meaning is never “true” as in settled, but always contested (even though we can and do arrive, in epistemic communities and in societies, at intersubjectively constructed truths).
d. “Transparency of commitment” seems to be about the ethics of communication with research participants or, simply, promises made to research participants.
DA-RT (section 6.4) provides no guidance here other than to put the onus on researchers that their request for exemptions must be “well founded” and that they must “exercise appropriate restraint in making claims as to the confidential nature of their sources, and resolve all reasonable doubts in favor of full disclosure.” Again, the starting point for all of this is squabbles among quantitative researchers but it gets extended to all research communities, many of whom already have shared practices appropriate to their research topics, epistemologies, and research contexts.
e. “Transparency of purpose and funding” seems to be, logically, an extension of section d. in terms of what should be communicated to potential research participants.
This is an important and complex debate. IRBs expect consent processes to reveal both of these things, which makes some sense for U.S. medical patients (due to the biomedical origins of IRBs in the U.S.). Yet whether “purpose” can be understandably conveyed is a substantive question, and, for some social science topics, fully conveying the research purpose may “implant” expectations in research participants; e.g., a study of whether gender matters should not begin by telling potential participants that you are studying whether gender matters. Funding is always revealed in published reports, but I can understand why the question of whether it should be revealed to potential participants has divided the committee.
Rejecting IRB review as a “minimum”
The paragraph under II.a., The Challenge of Ensuring the Safety of Interlocutors, is thoughtful but, in important ways, problematic. One unfortunate side-effect of the debates on DA-RT and JETS has been the implicit and explicit endorsement of IRBs as appropriate to social scientists using interviews, participant-observation/ethnography, and surveys. (I am bracketing, here, the experimental method and, particularly, field experiments, as these fit much better with the prior-review model and the biomedical origins of IRBs.) Indeed, I would argue that arguments against the mandates of DA-RT and, especially, JETS should, similarly, be turned against IRBs. The IRB system and JETS both involve prior review and assessment by those (whether editors or IRB board members) who almost inevitably lack the expertise of bona fide peer reviewers.
Why shouldn’t IRBs be considered the starting point for research ethics?
IRBs are guided by the 1979 Belmont Report. However, in the third endnote to the Belmont Report its authors state: “Because the problems related to social experimentation may differ substantially from those of biomedical and behavioral research, the Commission specifically declines to make any policy determination regarding such research at this time. Rather, the Commission believes that the problem ought to be addressed by one of its successor bodies.” To date this has not occurred.
Examination of the Belmont Report shows that it reflects the biomedical and behavioral concerns of its time, i.e., the Tuskegee affair as well as other medical abuses. The animating vision is that of the vulnerable patient attended by powerful doctors who have sublimated their medical role to a scientific one. What is missing from this vision is vast—but, perhaps most relevant for political scientists, is the absence of powerful institutions—corporations and governments—as topics of research. The Hippocratic oath, in particular, focuses on the health of the individual patient, and an analogy to the body politic breaks down quickly because benefits and harms rarely redound uniformly across collectivities. As an ethical framework, the Belmont Report provides, at best, few resources for addressing the sorts of complex ethical problems faced by political scientists and, at worst, instantiates a perspective ill-suited to the study of power in either authoritarian or democratic contexts.
It is for these reasons (as well as many other reasons not gone into here for the sake of brevity) that the report statement (p. 5, original emphasis) that “IRB requirements are a *minimalist* interpretation” is unfortunate. Statements like this give too much credence to the idea that both the Belmont Report and IRB review are appropriate and helpful for social scientists. The QTD Working Group II.2, Evidence from Researcher Interactions with Human Participants, gives examples that pertain to this report, describing on page 7 the ways in which “governments in Egypt, Rwanda, and Tajikistan and elsewhere have targeted political scientists.” This is precisely the kind of issue that the Belmont Report (as the guide to IRB decision making) cannot coherently address. This report (p. 7) quotes a researcher who states that DA-RT “would push knowledge of authoritarianism further to the margins… Perversely it gives strength to authoritarian regimes’ agnotological tendencies, granting them a veto over research agendas…” The same thing is already occurring under many U.S. IRBs.
This criticism of IRBs should not be construed to mean that research ethics do not matter. Obviously, they do matter! But we should not miss the similarities between DA-RT / JETS and IRBs. Both are systems that paint scholars as, at base, untrustworthy—while simultaneously implying that trust should be placed in the hands of editors and IRB board reviewers—dubious presumptions at best. Let's not use the flawed IRB system as the primary justification for resistance to DA-RT/JETS.
Again, thank you for this report. I learned a lot and many in the discipline could learn a lot from reading it as well.
Flyvbjerg, Bent. 2001. Making Social Science Matter. Cambridge, UK: Cambridge University Press.
Palys, Ted, and John Lowman. 2012. “Defending Research Confidentiality ‘To the Extent the Law Allows’: Lessons from the Boston College Subpoenas.” Journal of Academic Ethics 10: 271-97.