II.2. Evidence from researcher interactions with human participants


Tim Buthe
Duke University
Posts: 55
Joined: Fri Feb 26, 2016 11:39 pm

Comments on Draft Report of Working Group II.2

Posted Tue Aug 29, 2017 9:59 am

Please use this thread to share feedback on the draft report.


Erik Bleich
Middlebury College
Posts: 5
Joined: Thu Apr 07, 2016 3:03 pm

Re: Comments on Draft Report of Working Group II.2

Posted Sat Sep 30, 2017 8:20 am

Thank you, Leo, Mark, and Anastasia, for a thorough and thoughtful report. I especially appreciate your identifying both the pros and cons of greater transparency, and being specific about each. I support the conclusion of this report: that generic requirements for data access and replicability should be avoided, especially when imposed by journals in a submission process. While I recognize that the purpose of this report was to engage the scholarly community in this important discussion (which you've done admirably), I wonder if you might clarify and amplify a few bullet-point take-aways for journal editors. I have heard some feedback that people would appreciate a clear signal to editors who may have already adopted JETS. One potential risk from this report is that editors skim it and conclude: OK, for qualitative work, I presumptively need to see all the bullet points on p. 15, and especially each of the 5 (relatively time- and space-consuming) practices outlined on pp. 16-19. So if I might make one suggestion, it would be to have the "suggestions for journal editors" paragraph on p. 14 be a bit clearer in emphasizing that no one researcher can reasonably do everything; that any one or more of those things may suffice; and that if editors or reviewers have questions about qualitative transparency, they should communicate directly with authors to discuss transparency choices prior to making a final decision about a manuscript. I'm not sure I've got that language quite right, but something in that direction (maybe highlighted, in bold, with neon lights, etc.) could open up an opportunity for a valuable discussion. What do you think?



Re: Comments on Draft Report of Working Group II.2

Posted Mon Oct 02, 2017 7:45 pm

Thank you for taking on this project. I am generally quite pleased with the draft report. I agree with the earlier comment that it is important to emphasize for editors that no one researcher is likely to have the time, resources, and space to use all of the best practices you outline. All of the options are useful in their own right and would move in the direction of greater transparency, but some are more appropriate than others given the particular study. Would it also be possible to add an "executive summary" of some sort, if we are concerned about editors skimming for the takeaway points rather than reading each working group report in depth?


Alice Kang
University of Nebraska-Lincoln
Posts: 2
Joined: Sat May 14, 2016 11:17 am

Re: Comments on Draft Report of Working Group II.2

Posted Wed Nov 01, 2017 3:51 pm

I second Erik and the guest's suggestions on clarifying the "suggestions for journal editors" and writing an "executive summary" for all the reports.

Please find here my comments:
1. The report does an excellent job of representing the concerns of scholars working across a wide range of epistemologies. The need to seriously consider and respect epistemological diversity in the discipline of political science is paramount. I think that the differences between positivist and interpretivist research are an underlying source of miscommunication in debates over transparency and human participant research. The report shows an awareness and respect for this diversity throughout.

2. Assessment of benefits: "transparency could help guard against dishonesty". I would like to see more discussion of the debate that is referenced in footnotes 25 and 26. Ultimately, does it seem like there is agreement among scholars that dishonesty is an actual problem? Should this be listed as the first benefit if participants didn't agree?

3. Assessment of benefits: I think the first and second benefits (or purposes) of transparency should be that it helps others evaluate your claims and understand your research. Right now, these benefits are listed fourth and fifth. Ultimately, I wasn't clear on how the 1st-5th benefits differed; they seem to overlap a lot.

4. Assessment of costs/Human subjects protections and Access to human subjects: These are important sections. Definitely keep!

5. p. 15 The bullet points of transparency suggestions didn't seem very compelling to me. I would keep the paragraph on p. 15 that warns against "best practices" and more quickly get to the section, "In-article Transparency Discussion".

6. Advancing Research Reliability: When I read this section, which I took to be the conclusion, I was confused about how this fits with the themes of data access and research transparency. Maybe the importance of advancing reliability was discussed in the introduction and I missed it. In any case, reliability and transparency to me are different things. So it might be better to end by returning to the definition, purpose, benefits, and costs of transparency when doing research with human participants.


Peri Schwartz-Shea
University of Utah
Posts: 8
Joined: Thu Oct 12, 2017 6:03 pm

Re: Comments on Draft Report of Working Group II.2

Posted Tue Nov 14, 2017 6:53 pm

Reject DA-RT and JETS as inimical to the scholarly value of methodological pluralism

Colleagues, I am writing to comment on the 8-25-2017 draft report of QTD Working Group II.2, Evidence from Researcher Interactions with Human Participants. I want to recognize all of the thoughtful and careful work of Professors Arriola, Pollack, and Shesterinina that has gone into this document. I sincerely believe that all of you are acting in good faith in taking up the call by QMMR (as articulated by Alan Jacobs and Tim Büthe) to adapt DA-RT to the kinds of scholarship conducted by those doing qualitative and interpretive work in political science. Moreover, there is much that I agree with—even as I take issue with a number of assumptions that seem to permeate the document.

First, the very last paragraph of the report states (p. 20, emphases added):

“The variety of practices discussed, ranging from the design to the write-up phases, can be readily implemented by most scholars to make their findings easier to evaluate in peer review. Greater recognition by journals of these practices as being consistent with transparency guidelines would facilitate the case-by-case determinations that editors and reviewers inevitably need to make when assessing the reliability of scholarship.”

The problem with this statement is that the relationship between DA-RT and JETS has never been fully explicated by the DA-RT proponents. Specifically: How is DA-RT meant to relate to peer review? Why isn’t peer review sufficient? If it is not, what specific elements need improvement and why are DA-RT and, especially, JETS the proper responses? Most important, this statement fails to recognize that JETS puts an additional hurdle in place prior to peer review: “The editor shall have full discretion to follow their journal’s policy on restricted data, including declining to review the manuscript or granting an exemption with or without conditions. The editor shall inform the author of that decision prior to review.”

This JETS approach stigmatizes all those qualitative and interpretive scholars who will now be on the defensive as supplicants to editors as they ask for “exemptions” to standards that have emerged out of the natural sciences passing through psychology’s “replication crisis” to then be smuggled into the APSA Ethics Guide.

Second, as that last phrase intimates, and as my colleague Dvora Yanow and I document (Schwartz-Shea and Yanow 2016), the adoption of DA-RT within APSA is of questionable legitimacy. And the role of APSA in sponsoring the non-inclusive meeting that produced JETS is especially galling. I will not repeat those arguments here. Instead, my point is that the report presumes the legitimacy of both DA-RT and JETS when that legitimacy should itself be questioned.

Third, while I take at face value the report statement (p. 1) that “Our consultations with scholars in the discipline, combined with insights drawn from contributions to the QTD online forum as well as published materials, reveal broad support for the principle of transparency in social science research,” I would observe that the DA-RT language takes some long-standing practices and repackages them in ways that are pernicious. Here are some of the most worrisome elements of the DA-RT-articulated “transparency” project:

(a) The fetishization of “transparency” encourages a “laying bare” of the scholarly self, implying that our human role in knowledge production is meant to disappear; contrast transparency (and, also, “openness”) with “reflexivity”—a concept that acknowledges human embodiment in all of its complexity, see, e.g., Timothy Pachirat’s October 1st post on the QTD Working Group Report on Ethnography and Participant Observation;

(b) The instrumentalization of our academic work transforms our vocation, our calling, into the excessive documentation of our research "steps" in a linear fashion, such that the joy and "a ha" moments of the research process are now suspect rather than celebrated. Instead there is "production transparency," as if we were workers on an assembly line who need to be "incentivized" (see page 14). That is the language not of the academy but of "new public management." That "innovation" in governance practices imagines that workers are not intrinsically motivated, not agentic, but, instead, homo economicus, responding to "incentives" dangled before them by oh-so-wise managers.

(c) The DA-RT articulation of "analytic transparency" asks the impossible: that researchers can (emphasis added) provide "a full account of how they draw their analytic conclusions from the data, i.e., clearly explicate the links connecting data to conclusions." The notion of a full account flies in the face of understandings of the practice of experts, which involves tacit knowledge (see Flyvbjerg's [2001, 10-24] discussion of Dreyfus' model of the development of expert knowledge). As Margaret Keck observes in note 23 of the report of QTD Working Group II.1, Research with Text-Based Sources: "To analyze documents and interviews, I rely not just on language skills but on knowledge accumulated from 35 years of work in a region…." Standard peer review relies on experts of a similar caliber to assess her research claims, which is why we read and trust peer-reviewed articles. But DA-RT sets up the expectation that any reader can assess her claims based, say, on an active citation link to a bit of an interview or document. That is an expectation that disappears the tacit knowledge of expert researchers.

(d) And, finally, in the technical realm, in a period of astounding data insecurity, I have yet to see proponents of DA-RT seriously address the hacking problem for sensitive data stored in "trusted digital repositories." Nor have they responded adequately to the "Boston College" case, in which the U.S. government subpoenaed oral history evidence from interviewees who had trusted researchers to keep their identities confidential, evidence that was meant to be sequestered for a period of time. (See Palys and Lowman 2012.)

I want to emphasize that my objections are grounded in the coercive, top-down aspects of DA-RT and JETS; I have no quarrel with the idea that scholars may voluntarily choose to archive data or to take up suggested research practices that make sense for their particular projects. I agree with the judgment of Dara Strolovitch (p. 11) that DA-RT rules seem "very unlikely to produce better knowledge or insights about the political world." And I agree with Rudra Sil (p. 13) that "adding new layers of procedures and regulations requires uniform understandings of what constitutes 'knowledge' or 'truth'…." These are deeply philosophical issues that cannot be settled through rules, and enforcing one standard will harm the very quest for truth and knowledge.

In my considered view, the DA-RT project and particularly its instantiation as JETS should be rejected as a threat to the scholarly value of methodological pluralism in the pursuit of truth and knowledge.

Flyvbjerg, Bent. 2001. Making Social Science Matter. Cambridge, UK: Cambridge University Press.

Palys, Ted, and Lowman, John. 2012. "Defending Research Confidentiality 'To the Extent the Law Allows': Lessons from the Boston College Subpoenas." Journal of Academic Ethics 10: 271-97.

Schwartz-Shea, Peregrine, and Yanow, Dvora. 2016. "Legitimizing Political Science or Splitting the Discipline? Reflections on DA-RT and the Policy-making Role of a Professional Association." Politics & Gender 12 (3), e11: 1-19. doi.org/10.1017/S1743923X16000428

