I.2. Research Ethics: Human Subjects and Research Openness


Tim Buthe
Duke University
Posts: 55
Joined: Fri Feb 26, 2016 11:39 pm

Comments on Draft Report of Working Group I.2

Posted: Tue Aug 29, 2017 9:11 am

Please use this thread to share feedback on the draft report.



Lee Ann Fujii
University of Toronto
Posts: 7
Joined: Thu Apr 07, 2016 9:56 am

Re: Comments on Draft Report of Working Group I.2

Posted: Mon Oct 02, 2017 5:22 pm

I really appreciate all the hard work that the authors put into this report. Needless to say, research ethics is one of the most important issues that JETS badly misconceives—to the point, I would argue, of actually promoting unethical principles and practices.

The report does a good job of pointing out the ways in which an overly narrow conception of transparency is at odds with both the principles and practices of ethical research. I agree with the authors’ main conclusion: that rather than promoting one-size-fits-all requirements for data sharing, journal editors and peer reviewers should encourage greater reflexivity in *all* research. I applaud their call for journals to raise their word limit to enable authors to include such discussion in the main text of their articles, rather than in footnotes or appendices.

What might be useful to add is a line or two about what reflexivity entails. The literature makes a useful distinction between reflection and reflexivity. The former may entail simply assessing how the research is going from the researcher’s standpoint, while the latter (reflexivity) requires the researcher to think about how his/her practices, strategies, and ways of interacting with people in the research site affect participants, interlocutors, assistants, and anyone else tied to the researcher. Greater reflexivity would invite researchers to be more open and honest about their own practices and experiences in the field, regardless of methods or methodological assumptions, and promote more ethical ways of being and engaging with others in the field site.

Another excellent point that the authors make is that narrowly conceived transparency “norms” might have deleterious effects on the discipline as a whole by "diverting" scholars away from research questions that require field research (p. 10). I would go further. Such norms could also discourage scholars, especially graduate students seeking tenure-track jobs, from doing research in any context or with any group (e.g., vulnerable, hard-to-reach, secretive, underground, stigmatized, or marginalized) that would make “data sharing” complicated. This could have a chilling effect on studies of inequality, poverty, policing, immigration, refugees, protests, etc., if scholars actively avoid research questions, research sites, and communities and groups that may pose difficulties later when they try to publish in key journals.

I would also like the statement to challenge more forcefully the assumption that data sharing is a good thing in and of itself. So much of the emphasis seems to be on how to share data in ways that are consistent with ethical norms. The authors write, for example, that “The default expectation should not be that all data would be shared… but rather the reverse: that all data would be protected unless determined by the author to pose no risk to any human subject now or in the foreseeable future” (p. 11). For me, this position cedes too much ground. First, even if qualitative data could be scrubbed clean, whom and what does it serve to post them online? Much of field research is not meant to be replicated, so why discuss this issue at all? Second, posting de-identified data may introduce new sources of risk. No journal editor or researcher can guarantee that future “users” of the data will use them ethically. If a study becomes controversial, all kinds of future users might try to identify participants, as occurred with Alice Goffman's book. Third, research contexts change. Regimes crumble; rights are violated; crises arise; protests grow. What constitutes adequate protection today may not be adequate in the future. In fact, part of the reasoning behind lifetime embargoes of personal papers, archives, etc. is the notion that we cannot foresee what harms might arise in the future. Given these risks, we should question the assumption that posting data is a shared goal and a shared good. It is not, I would argue.

Finally, the authors point out (pp. 14-15) that researchers, not journal editors, are best situated to make ethical decisions at the publication stage. I would emphasize that granting journal editors the power to decide whether data should be posted amounts to a usurpation of the researcher’s expertise and knowledge about actual conditions in her field site(s) and the actual process that produced the data. If we believe in erring on the side of protecting participants, then that usurpation is itself an unethical act. Vesting journal editors with the power to decide is itself a source of risk and harm to participants. For ethical reasons, then, no journal editor should have that power.



Macartan Humphreys

Re: Comments on Draft Report of Working Group I.2

Posted: Sat Oct 14, 2017 9:28 am

Thanks to the committee for the time and thought put into this draft report.

I read with a lot of interest and appreciation but came away with some concerns I’d like to put on the table.

First, let me note that I agree with a lot in the diagnostic parts of the report, especially the need to "emphasize a fundamental implication of ethical practices, namely, that research subjects give their informed consent to participating." Reflexivity and keeping a focus squarely on subject protection are critical and rightly front and center here. And the risk, noted in the report and also emphasized in comments by Lee Ann Fujii, that a drive to transparency could result in researchers shying away from important topics that do not lend themselves so easily to data sharing, seems to me a real one and underscores the importance of engaging with this head-on.

With that said, I am confused about a couple of aspects of what I see as the core principle here. I think the statement below puts it in the clearest form (though I am not fully sure that this version is the one ultimately advocated):

QTD1.2: “The default expectation should not be that all data would be shared unless exempted, but rather the reverse: that all data would be protected unless determined by the author to pose no risk to any human subject now or in the foreseeable future (Le Bas 2016).”

Note that this principle might usefully be contrasted with two others that differ on one or the other of two dimensions: (a) the default expectation and (b) the ultimate arbiter.

Alt 1: “The default expectation should be that all data would be shared unless determined by the author to pose a risk to some human subject now or in the foreseeable future.”

Alt 2: “The default expectation should be that all data would be shared unless determined by the journal (or maybe: "unless the journal accepts the arguments of the authors (or others?)") to pose a risk to some human subject now or in the foreseeable future.”

Given these two dimensions there is logically an Alt 3, though I am not sure if anyone supports it:

Alt 3: “The default expectation should be that all data would not be shared unless determined by a journal (or maybe: "unless the journal accepts the arguments of the authors (or others?)") to pose no risk to any human subject now or in the foreseeable future.”

Note that Alt 1 empowers authors to be arbiters but stipulates sharing as a default. Alt 2 is close to what I understand as the intent of the JETS statement, where "determined by the journal" is interpreted as "the journal accepts the arguments of the authors.” Alt 3 might naturally locate other groups as the arbiter, such as a professional or ethical commission, rather than the journal or the authors. Alt 3 places the protection principle as primary but does not make the researcher the final arbiter.

Also note that perhaps the report authors would be happy with Alt 1; I am not sure, since most of the text focuses on the arbiter dimension rather than the default dimension.

Finally, note that (as I understand them) none of these principles advocates that a journal should have the power to make an author do anything to harm subjects or to break agreements with them; the issue is rather whether the journal can determine that such protections, and the inability to share data that they can entail, can be grounds for non-publication. It seems hard to conceive of any ethical scheme that encouraged ignoring promises of confidentiality except in very unusual circumstances. I think this interpretation is consistent with the report’s; the concern is more about the exercise of power through outlet and prestige, not about actually forcing an outcome.

Here are my worries about principle QTD1.2 as stated above:

1
First, this statement reads to me almost like the opposite of an ethical statement. I know that sounds strong, but here is what I mean. As I see it, the real challenge of professional ethics is to seek a set of standards of behavior that we can expect of each other and that the public can expect of us. There will very often be reasons to make exceptions to these standards, but in such cases argumentation is expected. If those standards are simply to let the individual researcher decide, because they are best placed to judge, then in a way we do not have professional ethics (on this issue).

Now I don’t think this is an incoherent position; if you see no ethical value in data sharing, then nothing is really lost by leaving this to researcher discretion (more on that below). But if you do see ethical benefits from sharing, then the leave-it-to-the-researcher approach does seem to sidestep the professional ethics question. Obviously QTD is not suggesting dealing with professional ethics like this in general; rather, I think the committee suggests it here in order to defer to the primacy of protection (also more on that below). Still, I think it is useful to think through how we would feel about this approach to ethics applied in other areas. Suppose we suggested that the default should be "no expectation of subject consent," citing as the argument that researchers are best placed to determine when consent is appropriate, or that there should be no restrictions on deceit. For all of these, a researcher *might* indeed be best placed to make the determination, but I expect that the move would be interpreted as an effort to shield researchers from the ethical concern, and it might leave a lot of us worried about the judgments our colleagues might make. Indeed, as set up in this example and in the statement of the principle above, the researcher does not even have to engage with the issue: inaction by the researcher is sufficient. Note that this first worry is largely dealt with by Alt 1 above, since Alt 1 clarifies the expectations.

[A small aside: I believe the principle assumes that researchers will err on the side of protection. In general this may be right, but it is quite possible that some researchers will not, and when one sees researchers sharing data that looks likely to put people at risk, surely one would want a stronger principle to point to than that implied by QTD1.2 above.]

2
Second, I am confused about the enforceability of this, i.e., what it is really meant to mean when the rubber hits the road. Note that here I am not disagreeing with anything; I am just voicing confusion about what the principle is really saying. Here is an example of a hard case where this principle would have bite and where the alternatives wouldn't; I think it is worth thinking through. Say an author produces a piece of work and everyone believes that, if the results are what the author says they are, the findings are important and the piece should be published. But the journal and reviewers have doubts about the data and the analysis; they ask to see the data, and the author responds that they will not share it because they have not determined that it would pose no risks in the foreseeable future. Perhaps the author does give reasons for possible risks, but the reasons just don't sound plausible to the editors. Or perhaps they simply have no interest in making a decision on the matter. I think in such situations an editor might feel unhappy proceeding with publication given their doubts about the data: they are unwilling to publish something they do not have faith in. The question then is: does the QTD principle (or Alt 1 over Alt 2) mean that the editors do not have the right to refuse to publish on the grounds of non-availability of the data, i.e., that they would be contravening ethical guidelines by refusing to publish a piece for this reason? That seems to push against other ethical principles for editors. It seems to me that this is the kind of case where the QTD1.2 formulation tries to produce a different result from Alt 2 above: Alt 2 lets the editor make their choice, possibly taking account of the author's decision, whereas QTD1.2 seems to mean that the editor does not have a choice given the author's decision. Or is the intent of QTD1.2 something else in these cases?

So it seems to me that the cases in which this makes a difference are ones in which the principle forces a journal to publish something it does not want to publish, specifically when the editors do not find the author's arguments on risks compelling. As a practical matter, you might worry whether that would have any bite: if an editor doesn't believe the data, one might expect that they will simply not publish, citing the credibility of the research rather than the fact that the data were not shared.

If I am missing the point here, it might be useful to think through alternative instances in which the QTD principle produces a result different from what would be produced under Alt 1 and Alt 2.

3
Third, the text seems to assume that in the case of competing principles there cannot be two incompatible defaults. It is not obvious to me that that is right, and I wonder what ethicists say about this. What I mean is that I take the motivation for *not* having a default principle of openness to be that the default principle of subject protection should take primacy, and the two can conflict. But it seems to me that there is nothing incoherent about having two default expectations that sometimes conflict with each other (in which case, of course, the arbiter determination is critical). For example, the following pair of principles do not seem to me to be mutually incoherent:

• P1: The foremost ethical obligation and therefore the first duty of scholars is to the ethical treatment of those who participate in our research, particularly its subjects. The default expectation should be that researchers take no action that poses a risk to human subjects now or in the foreseeable future.
• P2: Scholars have an obligation to transparency and in particular to make as accessible as possible the empirical foundation and logic of inquiry of evidence-based research. The default expectation should be that all data would be shared.

[Aside: as stated, I think P1 here is too narrow, as the set of people to whom we have duties is broader.]

In practice P1 and P2 will come into conflict, and so both defaults cannot be implemented. This is what forces ethical reflection. Sometimes a researcher, e.g. one studying up, might put evidence in the public domain in order to support claims that cause trouble, say, for a human rights abuser. There P2, along with other considerations, trumps P1. In many cases P1 could trump P2.

4
Fourth, I worried that the memo didn't really discuss the advantages and disadvantages of the proposal. Reflexive openness does, I think, call for thinking through the different sides. Let me lay out the other side a little (on the default dimension, not the arbiter dimension) and say why, positively, I find a default of transparency important for ethical reasons (again, where a default certainly does not mean that human subjects protection cannot count as an argument for exceptions). I hope this responds a little to the points Lee Ann Fujii raised.

In my area of work researchers are commonly providing evidence based on field research that is picked up to influence policy debates. Whole areas of development programming are determined based on beliefs about how different sorts of interventions work. The research, and the credibility of the research, can have consequences for many people well beyond the set of subjects. Sometimes it is in part for these reasons that subjects take part in research.

Until recently there was no expectation of data sharing. When I first taught a replication class and we contacted authors for their data -- all for influential published work -- between a third and a half did not comply. Some just refused, some had lost the data, and so on. Some didn't feel they needed to make arguments at all; they saw a default of no sharing and so no great need to justify not sharing. Of those that did share, the early classes found many mistakes in the analysis, some consequential, which the students shared back with the authors. There was no “gotcha” component to this. Other replication efforts have also found important errors, though sometimes long after studies have had real-world impacts. These days when we do a replication, authors almost always share; in some cases data has been shared under very controlled conditions, and in a couple of cases not at all. But in all cases when authors do not share, they provide reasons, generally reasons that are immediately understood as reasonable. Another thing that has changed in the last couple of years is that the replications almost always work out: authors are preparing for sharing data and analysis at the outset, they don’t lose their code and their data any more, and they organize their work in a way that makes sharing simple.

So I feel I have seen three transformations:
• people share more and feel they need to articulate reasons when they don't --- this articulation can encourage the kind of engagement with ethics that the QTD report encourages, but which I fear the principle obviates
• people change their work practices and have become more organized in their practices and better able to articulate and demonstrate their basis of evidence
• there seem to be fewer mistakes being made (of course this could be due to other things also)

I see these three as arising from a change in default expectations. They don’t have much if anything to do with the journals being the arbiter.

5
Fifth, and perhaps more positively, I see a few other directions that might be useful for this discussion and perhaps could have more space in the report.
• Doubtless people in this group have given a lot of thought to how to set up research in the first place so that there is clearer consent from subjects about what can and cannot be shared. As some posts mentioned, current consent procedures are not at all clear on this point. Perhaps the memo could gather proposals on this. (The “ethics briefs” idea seems a good one for this as well.)
• In a similar vein, perhaps there could be some more thinking about finer specification of with whom data can be shared. Some discussions seem to be based on the idea that the options are either the whole world or just the researcher, but in some cases one could imagine other options. For example, a lock box in a secure data archive might be safer than a researcher’s laptop. As with some administrative data, sharing in restricted spaces, or indirect sharing (e.g., where requests can be made for limited data consultations implemented by a third party), might be possible in some cases. In some cases, one might be able to share with journals for verification but not release publicly. Basically, there could be a set of options serving different purposes depending on the risk, which might help make this all less polarized.
• One could imagine developing procedures at the time of data gathering that make, prospectively, the anonymization of qualitative data an easier task. I know there is a lot of innovation in this already, but I imagine the memo could usefully describe some best practices in this area.

That's it. I know that a lot of thought went into the QTD1.2 memo, and there are likely good responses to all of these, but I hope these thoughts are useful for the deliberations.



Guest

Re: Comments on Draft Report of Working Group I.2

Posted: Wed Oct 18, 2017 4:20 am

I appreciate all the work that went into drafting this report, and I think it raises important and overlooked points. In particular, I agree with Lee Ann and Macartan that the point about the potential deleterious effect on research choices as a whole is an excellent one, and should be taken seriously.

My main substantive suggestion also echoes Lee Ann and Macartan's comments: it would be helpful to have more clarity about what reflexivity is, and how it would work in practice. This could also be linked more clearly to how reflexive openness mitigates the problems that the report identifies.

As a more organizational matter, I would include this definition or recommendation clearly in the introduction of the report. This could follow rather than precede the points that subjects take precedence, and that scholars should be the decision makers on issues of transparency (though I take Macartan's points that the report would be stronger if it took the potential benefits of transparency seriously in the course of arguing for this point; I also thought the discussion of how interpretive qualitative work does not lend itself to "transparency" as seemingly understood by DA-RT would be an important element of this discussion).

I also think it's worth telling the reader in the introduction that you'll point out the negative consequences of DA-RT and JETS not just potentially for subjects, or for scholars, but also for political science research as a whole (as you detail on page 9).

Depending on the audience, I think you could put the second section in an appendix, or substantially cut the first three paragraphs of that section. Maybe it's naivete, but I hope all scholars, at least in the US, are well familiar by now with the Belmont principles, and I worry that you'll lose some readers by reviewing them here.

Finally, on page 10, returning to your concept of reflexive openness, the section on the concept jumps right into "a broader conceptualization," but at this point I wasn't yet sure what the narrow one was. Perhaps restating the concept here would be helpful.

Thank you again for all your work.

Abbey Steele
University of Amsterdam



Peri Schwartz-Shea
University of Utah
Posts: 8
Joined: Thu Oct 12, 2017 6:03 pm

Re: Comments on Draft Report of Working Group I.2

Posted: Tue Nov 14, 2017 3:55 pm

Let's not use the flawed IRB system as the primary justification for resistance to DA-RT/JETS

Colleagues, I am writing to comment on the 8-24-2017 draft report of QTD Working Group I.2, Research Ethics: Human Subjects and Research Openness. I want to recognize all of the thoughtful and careful work of Professors MacLean, Posner, Thomas, and Wood that has gone into this document. The authors have done a service to the discipline by spelling out the tensions between scholars’ ethical obligations and the mandated imperatives of JETS. In particular, I endorse their view that “scholars—not editors—should make key decisions, such as when their obligations to research participants mean that data should not be shared” (p. 1).

What I offer here as critique pertains to the report’s explicit endorsement of the IRB process. One unfortunate side effect of the debates on DA-RT and JETS has been the implicit and explicit endorsement of IRBs as appropriate for social scientists using interviews, participant-observation/ethnography, and surveys. (I am bracketing, here, the experimental method and, particularly, field experiments, as these fit much better with prior review and the biomedical origins of IRBs.) Indeed, I would argue that many of the report’s arguments against the mandates of DA-RT and, especially, JETS should similarly be turned against IRBs. The IRB system and JETS both involve prior review and assessment by those (whether editors or IRB board members) who almost inevitably lack the expertise of bona fide peer reviewers.

For example, as the report observes on page 3, emphasis added: “(The IRB may rule some research exempt from full review, but that designation is made by the IRB not the researcher, and usually requires the submission of documentation for the study to qualify for exemption from ongoing IRB monitoring.)” At a minimum, there is a logical tension in the fact that the report endorses “scholars—not editors—[who should be making] key decisions” when it comes to DA-RT and JETS but not when it comes to IRBs.

The report begins with a brief history of the 1979 Belmont Report. However, that reading misses a key part of the Belmont Report itself. In the third endnote to the Belmont Report, its authors state: “Because the problems related to social experimentation may differ substantially from those of biomedical and behavioral research, the Commission specifically declines to make any policy determination regarding such research at this time. Rather, the Commission believes that the problem ought to be addressed by one of its successor bodies.” To date this has not occurred.

Examination of the Belmont Report shows that it reflects the biomedical and behavioral concerns of its time, i.e., the Tuskegee affair as well as other medical abuses. The animating vision is that of the vulnerable patient attended by powerful doctors who have subordinated their medical role to a scientific one. What is missing from this vision is vast—but, perhaps most relevant for political scientists, is the absence of powerful institutions—corporations and governments—as topics of research. The Hippocratic oath, in particular, focuses on the health of the individual patient, and an analogy to the body politic breaks down quickly because benefits and harms rarely redound uniformly across collectivities. As an ethical framework, the Belmont Report provides, at best, few resources for addressing the sorts of complex ethical problems faced by political scientists and, at worst, instantiates a perspective ill-suited to the study of power.

It is for these reasons (as well as others not gone into here for the sake of brevity) that the report’s statements (p. 4) that “The IRB review process is an imperfect realization of the Belmont principles” and that “meeting the requirements of the IRB should be understood as the minimal ethical standard” are unfortunate. These statements give too much credence to the idea that both the Belmont Report and IRB review are appropriate and helpful for social scientists. The report’s discussion on page 7 of the ways in which “governments in Egypt, Rwanda, and Tajikistan and elsewhere have targeted political scientists” is precisely the kind of issue that the Belmont Report (as the guide to IRB decision making) cannot coherently address.

The report also observes that the “new transparency norms may divert scholars from pursuing challenging questions, the answers to which require field work, especially those using qualitative research methods. Discouraged by the high costs of creating their own data sets from human participants, scholars may opt to make use of existing aggregate data. The result might be that fewer scholars are willing to pursue sensitive questions and challenging contexts, skewing political science research and foreclosing the investigation of a whole range of important political topics” (p. 10). I wholeheartedly agree; the IRB system is already having the same effects and, for that very reason, should not be endorsed as a defense against DA-RT and JETS. For empirical studies of these effects of prior review, see van den Hoonaard (2011) and Noorani, Charlesworth, Kite, and McDermont (2017).

The critique I offer here should not be construed to mean that research ethics do not matter. Obviously, they do matter! But we should not miss the similarities between DA-RT / JETS and IRBs. Both are systems that paint scholars as, at base, untrustworthy—while simultaneously implying that trust should be placed in the hands of editors and IRB board reviewers—dubious presumptions at best.

Let's not use the flawed IRB system as the primary justification for resistance to DA-RT/JETS.

References
Noorani, Tehseen, Andrew Charlesworth, Alison Kite, and Morag McDermont. 2017. “Participatory Research and the Medicalization of Research Ethics Processes.” Social and Legal Studies 26(3): 378-400.

van den Hoonaard, Will C. 2011. The Seduction of Ethics. Toronto: University of Toronto Press.



Guest

Re: Comments on Draft Report of Working Group I.2

Posted: Fri Dec 01, 2017 7:47 pm

I very much appreciate this careful report. I think it raises important issues in terms of tensions in our various potential ethical obligations as scholars. As a conflict researcher in particular, I found that these tensions resonated for me.

I agree with Abbey and others above that saying more about how we should think about reflexive openness, and perhaps offering some additional principles for applying it, would be tremendously helpful.

Again, thank you for this work, which certainly helped me think through these difficult questions.

Aila Matanock
University of California, Berkeley



