IV.2. Settings of political violence


Tim Buthe
Duke University
Posts: 55
Joined: Fri Feb 26, 2016 11:39 pm

Comments on Draft Report of Working Group IV.2

Posted: Tue Aug 29, 2017 1:14 am

Please use this thread to share feedback on the draft report.



Peri Schwartz-Shea
University of Utah
Posts: 8
Joined: Thu Oct 12, 2017 6:03 pm

Re: Comments on Draft Report of Working Group IV.2

Posted: Wed Nov 15, 2017 8:10 pm

Colleagues, I am writing to comment on the 8-20-2017 draft report of QTD Working Group IV.2, Research in Violent or Post-Conflict Political Settings. I want to recognize the thoughtful and careful work of Professors Arjona, Mampilly, and Pearlman that has gone into this document. The authors do a service for the discipline by discussing this significant area of research, an area that is threatened by the coercive, top-down aspects of DA-RT and, especially, JETS. I have no quarrel with the idea that scholars may voluntarily choose to archive data or to take up suggested research practices that make sense for their particular projects.

I have two categories of comments: (1) the extent to which the DA-RT/JETS “transparency” mandate, as well as the report’s language of “openness,” is problematic; and (2) the extent to which resistance to DA-RT/JETS has, unfortunately, further legitimized and normalized the IRB system.

The DA-RT transparency project
Many of the practices discussed in the report already exist and are honored in the peer review system. Bringing them together under the “transparency” label is problematic in a number of ways.

(1) Transparency is an unfortunate metaphor that encourages a “laying bare” of the scholarly self, implying that our human role in knowledge production is meant to disappear. Contrast transparency (and, also, “openness”) with “reflexivity,” a concept that acknowledges human embodiment in all of its complexity; see, e.g., Timothy Pachirat’s October 1st post on the QTD Working Group Report on Ethnography and Participant Observation.

(2) As your report recognizes, there is an unwarranted systematization and uniformity in DA-RT/JETS that stigmatizes scholars, who will now be on the defensive as supplicants to editors when they ask for “exemptions” from standards that emerged out of the natural sciences, passed through psychology’s “replication crisis,” and were then smuggled into the APSA Ethics Guide.

(3) A major set of questions has yet to be answered to my satisfaction: How is DA-RT meant to relate to peer review? Why isn’t peer review sufficient? If it is not, what specific elements need improvement, and why are DA-RT and, especially, JETS the proper responses?

For all its flaws, the major advantage of peer review is that it involves the application of quality-assessment standards by expert readers in a project-centered manner. DA-RT and JETS encourage researchers, and especially graduate students, to aspire to general, abstract standards if they want to escape the stigma of exemptions and the possibility that their work will never even reach the peer review stage. Self-censorship of potentially career-risky projects is already happening under the contemporary, prior-review IRB regime, and DA-RT/JETS now closes the circle. We should not underestimate the ways in which these systems will discourage new researchers from taking up difficult questions and unorthodox methods. What these systems communicate is that researchers are not to be trusted unless they become ciphers, transparent to the world and to all imagined readers. DA-RT/JETS fails to recognize the developmental arc of a scholarly career and the ways in which our research practices develop and deepen over the years as we become bona fide experts. [See Flyvbjerg’s (2001, 10-24) discussion of Dreyfus’ model of the development of expert knowledge.] The project-centered peer review system honors and respects scholarly expertise; it communicates that researchers should develop substantive knowledge with the promise that their projects will be assessed by others with similar substantive backgrounds—who understand the ins and outs of the questions and methods in the area—and who will then apply scholarly standards that fit those projects. Well-functioning peer review critically assesses a project in ways that support and elevate the scholar’s efforts.

IRBs
One unfortunate side effect of the debates on DA-RT and JETS has been the implicit and explicit endorsement of IRBs as appropriate for social scientists using interviews, participant observation/ethnography, and surveys. (I am bracketing, here, the experimental method and, particularly, field experiments, as these fit much better with the prior-review model and the biomedical origins of IRBs.) Indeed, I would argue that the arguments against the mandates of DA-RT and, especially, JETS should similarly be turned against IRBs. The IRB system and JETS both involve prior review and assessment by those (whether editors or IRB board members) who almost inevitably lack the expertise of bona fide peer reviewers.

Why shouldn’t IRBs be considered the starting point for research ethics?

IRBs are guided by the 1979 Belmont Report. However, in the third endnote to the Belmont Report its authors state: “Because the problems related to social experimentation may differ substantially from those of biomedical and behavioral research, the Commission specifically declines to make any policy determination regarding such research at this time. Rather, the Commission believes that the problem ought to be addressed by one of its successor bodies.” To date this has not occurred.

Examination of the Belmont Report shows that it reflects the biomedical and behavioral concerns of its time, e.g., the Tuskegee affair as well as other medical abuses. The animating vision is that of the vulnerable patient attended by powerful doctors who have sublimated their medical role to a scientific one. What is missing from this vision is vast, but perhaps most relevant for political scientists is the absence of powerful institutions—corporations and governments—as topics of research. The Hippocratic oath, in particular, focuses on the health of the individual patient, and an analogy to the body politic breaks down quickly because benefits and harms rarely redound uniformly across collectivities. As an ethical framework, the Belmont Report provides, at best, few resources for addressing the sorts of complex ethical problems faced by political scientists and, at worst, instantiates a perspective ill-suited to the study of power in either authoritarian or democratic contexts.

It is for these reasons (as well as others not gone into here for the sake of brevity) that the report should avoid normalizing IRB requirements. To do so gives too much credence to the idea that both the Belmont Report and IRB review are appropriate and helpful for social scientists. The report of QTD Working Group II.2, Evidence from Researcher Interactions with Human Participants, gives examples that pertain to this report, describing on page 7 the ways in which “governments in Egypt, Rwanda, and Tajikistan and elsewhere have targeted political scientists.” This is precisely the kind of issue that the Belmont Report (as the guide to IRB decision making) cannot coherently address. That report (p. 7) also quotes a researcher who states that DA-RT “would push knowledge of authoritarianism further to the margins… Perversely it gives strength to authoritarian regimes’ agnotological tendencies, granting them a veto over research agendas…” The same thing is already occurring under many U.S. IRBs.

This criticism of IRBs should not be construed to mean that research ethics do not matter. Obviously, they do! But we should not miss the similarities between DA-RT/JETS and IRBs. Both are systems that paint scholars as, at base, untrustworthy, while simultaneously implying that trust should be placed in the hands of editors and IRB reviewers—dubious presumptions at best. Let's not use the flawed IRB system as the primary justification for resistance to DA-RT/JETS.

References

Flyvbjerg, Bent. 2001. Making Social Science Matter. Cambridge, UK: Cambridge University Press.



Ana Arjona
Northwestern University
Posts: 2
Joined: Tue Aug 02, 2016 7:35 pm

Re: Comments on Draft Report of Working Group IV.2

Posted: Wed Dec 13, 2017 12:26 am

Thank you for your comments, Peri. We will take them into account as we finalize the report.

Ana, Wendy and Zachariah




