IV.3. Research with vulnerable and marginalized populations

Tim Buthe
Duke University
Posts: 55
Joined: Fri Feb 26, 2016 11:39 pm

Comments on Draft Report of Working Group IV.3

Post Tue Aug 29, 2017 1:18 am

Please use this thread to share feedback on the draft report.

Peri Schwartz-Shea
University of Utah
Posts: 8
Joined: Thu Oct 12, 2017 6:03 pm

Re: Comments on Draft Report of Working Group IV.3

Post Wed Nov 15, 2017 7:23 pm

Colleagues, I am writing to comment on the 8-13-2017 draft report of QTD Working Group IV.3, Research on Vulnerable and Marginalized Populations. I want to recognize all of the thoughtful and careful work of Professors Lake, Majic, and Maxwell that has gone into this document. The authors have done a service to the discipline by spelling out the tensions between scholars’ ethical obligations and the mandated imperatives of JETS/DA-RT. In particular, I endorse their view that it should “ultimately be up to the researcher to determine the extent to which [data sharing] is advisable or desirable” (p. 6).

The report provides extensive examples for this judgment and, as important, explains in vivid detail the ways in which the JETS policy is already affecting graduate students’ choices about research topics (p. 8), i.e., it is discouraging work with vulnerable and marginalized populations. I also agree with the report’s first recommendation that there is already a dominant norm in the discipline that methods statements (and/or appendices) should be available to peer reviewers and, where feasible, to readers. In fact, DA-RT and JETS were adopted without really addressing their relationship to peer review. Why isn’t peer review sufficient? If it is not, what specific elements need improvement, and why are DA-RT and, especially, JETS the proper responses? Peer review is a process grounded in the expertise of reviewers, and it respects the expertise of researchers.

What I offer here as critique pertains to the report’s implicit endorsement of the IRB process, along with some comments on the other recommendations. One unfortunate side-effect of the debates on DA-RT and JETS has been the implicit and explicit endorsement of IRBs as appropriate for social scientists using interviews, participant-observation/ethnography, and surveys. (I am bracketing, here, the experimental method and, particularly, field experiments, as these fit much better with the prior-review model and the biomedical origins of IRBs.) Indeed, I would argue that many of the report’s arguments against the mandates of DA-RT and, especially, JETS should similarly be turned against IRBs. The IRB system and JETS both involve prior review and assessment by those (whether editors or IRB board members) who almost inevitably lack the expertise of bona fide peer reviewers.

Like JETS, the IRB system has already undermined researcher autonomy and judgment, and it is already affecting researchers’ choices. Empirical studies of the effects of prior review (van den Hoonaard 2011; Noorani, Charlesworth, Kite, and McDermont 2017) show researchers and, especially, graduate students choosing methods and topics that are more in line with the biomedical setup of the system, i.e., experiments over ethnography.

Those endorsing the IRB typically reference the 1979 Belmont Report, which serves as the ethical touchstone for the IRB system and board review. What many miss about the Report is its third endnote, in which its authors state: “Because the problems related to social experimentation may differ substantially from those of biomedical and behavioral research, the Commission specifically declines to make any policy determination regarding such research at this time. Rather, the Commission believes that the problem ought to be addressed by one of its successor bodies.” To date this has not occurred.

Examination of the Belmont Report shows that it reflects the biomedical and behavioral concerns of its time, i.e., the Tuskegee affair as well as other medical abuses. The animating vision is that of the vulnerable patient attended by powerful doctors who have sublimated their medical role to a scientific one. What is missing from this vision is vast, but perhaps most relevant for political scientists is the absence of powerful institutions (corporations and governments) as topics of research. The Hippocratic oath, in particular, focuses on the health of the individual patient, and an analogy to the body politic breaks down quickly because benefits and harms rarely redound uniformly across collectivities. As an ethical framework, the Belmont Report provides, at best, few resources for addressing the sorts of complex ethical problems faced by political scientists and, at worst, instantiates a perspective ill-suited to the study of power and the effects of power on vulnerable populations.

It is for these reasons (as well as many others not gone into here for the sake of brevity) that the report’s statements normalizing IRB review are unfortunate. For example, on p. 10 (emphasis added), it is stated that IRB review “help[s] research communities to ensure that research studies are grounded in rigorous and well-thought through research designs that are appropriate for the questions asked.” On p. 11, the report seems to endorse the idea of journal editors “asking whether authors received IRB approval.” It may be the case that some IRBs improve the actual ethical conduct of research, but the competence of board reviewers depends on the local activism of social scientists and is hardly assured. In turn, having editors require IRB approval undeservedly legitimizes a system designed by medical professionals for medical research and extended without cause or consultation to the social sciences.

The other difficulty with the analysis on p. 11 is that the list of questions editors are recommended to ask of researchers is lengthy. Specifically (numbered here for analysis):

Editors should be engaged in asking questions such as:
1. “did the researcher make clear the goals of the study to her research subjects and participants?”;
2. “did she make clear the goals of her study to the broader community in which she was working (and if not, why not)?”;
3. “was the researcher clear in her explanations of the costs and benefits of the research to her research subjects?”
4. “How did research subjects respond to the perceived costs and/or benefits of the research?”
5. “How did researchers’ subjects evaluate the positionality and objectives of the researcher?”
6. “How did interviewees’ or research participants’ perceptions of the researcher likely affect the conclusions drawn?”

I submit that while most researchers will be able to provide information on questions 1 and 2, questions 3-6 would be impossible to address without a parallel study! These are great questions for those who study research ethics and for discussion in seminars on the topic, but they are too much for editors and reviewers to take on.

The report also argues on p. 12 (emphasis added) that “Research that poses a harm to research subjects or populations in any capacity should not pass peer review unless its benefits can be clearly demonstrated to outweigh the potential harm.” But the idea that all members of a population will feel the same way about research processes and what is published seems unwarranted. Is one unhappy member of a community sufficient to block peer review? I do not mean to minimize the many important risks that the report carefully enumerates, but its recommendations seem to set the bar so high that much important research would be stymied.

Finally, making the goals of research “crystal clear” (p. 12, emphasis added) to subject populations may be difficult for a variety of reasons: research goals may change over the course of a project; the academic language of research questions may not be easily translated into the vernacular; and sometimes the research question itself may “plant” ideas in research participants’ heads rather than allowing them to act naturally. For example, a researcher trying to understand the extent to which gender affects conduct may not want to tell participants that’s what she is studying.

The critique I offer here should not be construed to mean that research ethics do not matter. Obviously, they do matter! But we should not miss the similarities between DA-RT / JETS and IRBs. Both are systems that paint scholars as, at base, untrustworthy—while simultaneously implying that trust should be placed in the hands of editors and IRB board reviewers—dubious presumptions at best.

Let's not use the flawed IRB system as the primary justification for resistance to DA-RT/JETS.

References

Noorani, Tehseen, Charlesworth, Andrew, Kite, Alison, and McDermont, Morag. 2017. Participatory research and the medicalization of research ethics processes. Social and Legal Studies 26/3: 378-400.

van den Hoonaard, Will C. 2011. The seduction of ethics. Toronto: University of Toronto Press.
