I.1. Ontological/Epistemological Priors


Guest

Re: Dishonesty in research raises concern

Posted: Sun Jan 01, 2017 10:00 pm

Erik Martinez Kuhonta, McGill University: I have read through many posts on the QTD website, but will write my post here, since my comments are of a general nature.

First, I think it is crucial to ask whether there is a crisis of transparency in qualitative research. It is not clear to me that the advocates of DA-RT have ever made the case that such a problem exists within qualitative research; rather, they have simply transposed onto qualitative research a quantitative template of what constitutes transparent data. The failure of the DA-RT initiative and the JET statement to reflect carefully on different research ontologies, and on how those ontologies affect the very nature of observations, evidence, and data, is deeply problematic.

Second, given that we have no clear examples of bad qualitative research resulting from a lack of transparency, perhaps one should ask what the exact value-added would be for qualitative work that we already know well. For example, why not ask whether James Scott’s widely acclaimed Weapons of the Weak would have been more effectively assessed, and therefore given greater value, if Scott had given us access to his fieldwork notes? Specifically, would access to Scott’s fieldwork notes have strengthened our assessment of the theoretical claims in his book or of the validity of his observations?

Anyone who reads Scott will see how precise and careful he is in his extensive footnotes, noting, for example, the exact Malay word employed in the conversations he uses in the book. It is not clear to me that Weapons of the Weak would have been more robust had Scott been aware of DA-RT guidelines on transparency. It already was transparent. Would it not have been more effective for the DA-RT proponents to point to Scott, or other exemplary studies, as models for qualitative research, rather than seeking rigid rules to impose on a diverse community of scholars? Did the proponents of DA-RT actually look carefully at qualitative research, especially ethnographic research, to see what the best practices in the field already are? My general point is this: is it not possible that qualitative research already has high levels of transparency in work that is highly regarded in the discipline? Is this not the place to begin the debate within qualitative research, rather than assuming there is a crisis of transparency?

Third, the costs to qualitative researchers have been noted by many already, but ultimately this is where the dangers of DA-RT lie. Numerous posts have pointed to the excessive time that appendices such as TRAX demand and to the costs that junior scholars will bear if this disciplining initiative is granted such a broad mandate. Crucially, the disincentives to carrying out qualitative research, which will inevitably be forced to comply with a standard both higher than and different from that applied to quantitative research (as Mark Beissinger notes in this thread), will become institutionalized in the discipline. If PhD students and junior scholars conclude that the costs of publishing qualitative research in top-tier journals are prohibitive, in terms of both time and capacity, the outcome for a pluralistic discipline will be obvious to all. This is the worst-case scenario of the DA-RT initiative.

Fourth, as the QTD Steering Committee moves to make recommendations about increasing transparency, I urge that those recommendations center on the term “reasonable.” Tasha Fairfield and Kent Eaton have made a number of valuable comments about what “reasonable” means, including providing summaries of the types of interviews conducted or a sense of the universe of interviews on which one drew. Summaries of one’s types of sources, or a somewhat extended but still concise discussion of how one chose one’s cases (as Kenneth Roberts noted in his post), are helpful to the reader and provide broader context, yet they are not so burdensome or impractical as to deter actual research and publication.

Ultimately, I very much hope that the QTD Steering Committee will reflect carefully not only on the effects its recommendations will have on the future pluralism of the discipline, but also on what this debate says about the direction of political science and the kinds of priorities being set by some. It is worth asking whether a forceful emphasis on methodology, and specifically on the detailed formalization of methodology, is where scarce resources (time, space, money) should be spent. I say this as a professor who teaches qualitative methodology and is fully committed to causal, explanatory social science.


Guest

Re: Dishonesty in research raises concern

Posted: Tue Jan 03, 2017 1:45 pm

[quote="jane mansbridge"]Regarding the LaCour study, no LaCour "field notes" had anything to do with the discovery of the fraud. According to the relevant article on the Wikipedia, "Broockman et al. found that the survey company LaCour claimed to have used denied performing any work for LaCour and did not have an employee by the name LaCour listed as his contact with the company.[10] In addition, LaCour had claimed that participants were paid using outside funding, but no organization could be found that had provided the large amount of money required to pay thousands of people." (https://en.wikipedia.org/wiki/When_cont ... nges_minds) In other words, LaCour completely made up the data and said that a survey company had done the work, but in fact no survey company had done so. Most importantly for our purposes, these were quantitative data, not qualitative data. They had nothing to do with in-depth interviews or field notes.

If there is a problem in qualitative research, it would be useful to focus on cases in which there has been dishonesty in that field. I do not happen to know of any, but am happy to be educated. If there are very few, it might be worth considering the costs and benefits of any form of proposed policing. As in current claims of "voter fraud" in the US, one could possibly do more harm by creating problematic restraints on the overwhelming majority in order to catch the few or none actually engaging in fraud.[/quote]

Peri Schwartz-Shea and I have been studying IRBs for over a decade, and the point you make here, Jenny, has been ours as well: unlike in the world of medical and psychological experimentation (the sources of IRB policy), we have found no evidence of scientific fraud in the qualitative or interpretive social sciences. The methods textbooks that discuss research ethics, and other literature on the need for IRBs, commonly cite the work of one or more of three researchers -- Milgram, Zimbardo, and Humphreys -- as evidence and rationale for policing field research. Note that the first two of these did experiments; only Humphreys did field research, and all evidence suggests that he did not harm his 'subjects' in any way, even if the potential for harm was there. [But he was very careful to keep from harming them. We discuss this at length in a 2015 APSA paper.] None of the methods or research ethics literature finds any other examples, nor has our search produced any.

As to LaCour, the whole business started unraveling as Broockman and his colleague-friend tried to replicate LaCour's research. Their discovery of flaws in that research eventually led to the revelations that he had fabricated the things you mention, among others. The other major case in political science in recent years was the interventions in the Montana, NH, and California elections. Here, too, the research was not qualitative but a field experiment.

Dvora Yanow [who hasn't figured out how to sign on to this thing, hence the 'guest' post]


Guest

Re: Dishonesty in research raises concern

Posted: Tue Jan 24, 2017 10:07 pm

Let me begin by thanking my colleagues who have devoted considerable time and thoughtful reflection to these issues. I am skeptical about DART and the motivations behind it, for reasons I discuss below. Nevertheless, I appreciate the effort on the part of those who are working very hard to address the implications DART has for qualitative research in Political Science.

Defining the Problem

As we learn from the great policy scholar Deborah Stone, how we define a problem frequently privileges a particular solution. Consequently, problem definition is political. Particular solutions often have vested interests behind them with access to various resources that can be leveraged to define a problem in a particular way. My approach to understanding DART comes from a similar perspective of asking how the problem is defined and who the stakeholders are behind the proposed solutions.

I believe this perspective on problem definition is a helpful one because, at first glance, it is far from clear what problem DART purports to solve. As I understand the claims made on behalf of DART, there is concern about the integrity of academic work. Yet I have not seen convincing evidence about the scope of this problem or about why current practices of scholarly conduct and editorial discretion are somehow not up to the task. As someone who works with archival materials, I expect scholars to follow norms of academic citation, including precise notation regarding sources. I have not seen any evidence to suggest that such practices have led to widespread abuse or are somehow particularly vulnerable to fabrications of one sort or another. The History profession provides further evidence that disciplinary standards of citation and peer review perform well as safeguards against academic fraud in the use of archival materials.

I am willing to accept that some academic fraud exists. However, I do not see evidence that this problem is of such a scale or severity that would justify the fundamental changes in academic work proposed by advocates of DART. At the risk of sounding polemical, to my ear, the concern about academic fraud echoes debates about voting fraud or food stamp fraud. Although the possibility of fraud exists, the incidence is very low and current mechanisms of policing are up to the task of preventing or punishing instances when they occur. Pushing the comparison further, concerns about voting fraud and food stamp fraud are actually part of a larger political struggle over representation, voice, and access to resources.

Without ascribing motives, I see similarities with DART. This debate is not about fraud. Concerns over transparency and integrity are a proxy fight over a much larger issue: epistemology and what counts as social scientific knowledge.

Rival epistemologies

This is not the place to rehearse debates over quantitative versus qualitative methods. The point I wish to make is that qualitative scholars are engaged in a different kind of knowledge production from quantitative work, and these differences are particularly evident around issues of research materials and replicability. Scholars on both sides of the qual/quant divide need to recognize this in order to understand the strengths and limitations of different traditions in social scientific research. I think an exemplary way to do this is found in the work of Robert Mickey. I quote at length from his award-winning book Paths Out of Dixie, which examines how the democratization of authoritarian enclaves unfolded in three states of the Deep South of the United States. As a work of comparative historical analysis, Paths Out of Dixie is meticulously researched and copiously footnoted. Its aim is to “identify configurations of causal forces” in order to explain observed outcomes as well as to “develop new concepts and generate theories that can be tested on additional cases” (p. 22).

As Mickey explains, his approach has limitations.

“The comparison of richly detailed narratives has some important disadvantages. One is that it cannot suffice as a testing ground for theory. The theoretical approach…is in part informed by the narratives; the narratives cannot then be said to test this account. Another is that narratives resist replication, in part because they are not assembled from systematically collected datasets. Rather, this study relies on process tracing to describe and explain democratization challenges and their consequences” (ibid.).

As Mickey further elaborates, process tracing through comparative case analysis “puts a premium on internal validity, but at the expense of external validity. Despite these (and other) problems, this brand of research is appropriate given our state of knowledge on subnational authoritarian politics…Theory generation remains a priority, and this research design advances this goal” (p. 23).

As this passage shows, Mickey is exceptionally clear (i.e., transparent) about his methods, their strengths, and their weaknesses. It is model social scientific reasoning.

But there is a deeper point here, and that is about knowledge production. Whereas the DART debate acknowledges differences between qualitative and quantitative research, the discussion largely elides the epistemological priors (and corresponding choice of materials and methods) scholars bring to the study of politics. More precisely, scholars have different views on what can be known and how. In part, this is because they are asking different kinds of questions.

To illustrate, and to bring the discussion back to the matter of DART, Mickey discusses in a footnote the distinction made by David Collier, Henry Brady, and Jason Seawright between “causal process observations” and “data-set observations.” The former is “an insight or piece of data that provides information about context, process, or mechanism” (Collier, Brady, and Seawright quoted in Mickey, p. 368n59). The latter “appear in quantitative format and constitute comparable observations gathered…as part of systematic comparison” (Mickey, p. 368n59). Both are data in the sense of discrete pieces of information, but they offer researchers leverage on different kinds of questions. Whereas causal process observations are building blocks for producing knowledge about configurations of causes, data sets provide the raw materials for comparative statics. In the right hands, both kinds of data can produce social scientific insights that contribute to the accumulation of knowledge. In the wrong hands, both kinds of data can produce garbage.

My point is that proposals currently on offer in DART, such as active citation or a “transparency appendix,” are attempts to transform one kind of data, that used by qualitative researchers, into something that looks more like another kind of data, that used by quantitative researchers. The reason for this has nothing to do with transparency. It has everything to do with ongoing struggles within Political Science over what is considered social scientific knowledge. Debates over epistemology or methods belong in journal articles and books on the subject. It is disingenuous and dangerous to let these academic differences be adjudicated through a set of editorial rules that will systematically diminish the voice of qualitative researchers in the discipline.

Adam Sheingate
Professor of Political Science
Johns Hopkins University

References:
Collier, David, Henry E. Brady, and Jason Seawright, “Sources of Leverage in Causal Inference: Toward an Alternative View of Methodology,” in Henry E. Brady and David Collier, eds., Rethinking Social Inquiry: Diverse Tools, Shared Standards, 2nd ed. (Lanham, MD: Rowman and Littlefield, 2010).
Mickey, Robert, Paths Out of Dixie: The Democratization of Authoritarian Enclaves in America’s Deep South, 1944-1972 (Princeton: Princeton University Press, 2015).
Stone, Deborah, Policy Paradox: The Art of Political Decision Making (New York: W.W. Norton, 2012).


Guest

Re: Dishonesty in research raises concern

Posted: Tue Feb 07, 2017 2:57 pm

Adam,

Why speculate about motives? You can't really claim to know what they were. It is very unlikely that everyone associated with DART even shared motives. Do you think that many people share motives with Moravcsik? Think about it.

Of course, this is an empirical question. Even a little bit of qualitative inquiry could reveal quite a lot -- like the fact that DART's website does not mention fraud as a motivation. So why go straight to speculation?

Many if not most of the DART leaders appear to be qualitative scholars. Lots of people lost track of this fact in the post-Isaac hysteria. Why do you think that their goal was to "systematically diminish the voice of qualitative researchers in the discipline"? Or do you think that the quants tricked them? Seems unlikely.

Which leads to your claim that debates about how different types of scholars know things shouldn't be adjudicated "through a set of editorial rules." But that's exactly where these types of claims have been adjudicated for a long time -- and where they will continue to be adjudicated as long as scholars value peer-reviewed publication. This fact does not preclude other fora for working through these questions.

Your reaction disappointed me. In the weeks and months after DART, too many scholars sought refuge in scapegoats and too few took seriously the question of how to increase our intersubjectivity. Looking back, this was embarrassing. It is good that more forward-looking approaches like this website have emerged.

I'd sign my name, but I'm in a vulnerable professional position.

[quote="Guest"]Let me begin by thanking my colleagues who have devoted considerable time and thoughtful reflection on these issues. I am skeptical about DART and the motivations behind it, for reasons I discuss below. Nevertheless, I appreciate the effort on the part of those who are working very hard to address the implications DART has for qualitative research in Political Science.

Defining the Problem

As we learn from the great policy scholar Deborah Stone, how we define a problem frequently privileges a particular solution. Consequently, problem definition is political. Particular solutions often have vested interests behind them with access to various resources that can be leveraged to define a problem in a particular way. My approach to understanding DART comes from a similar perspective of asking how the problem is defined and who the stakeholders are behind the proposed solutions.

I believe this perspective on problem definition is a helpful one because, on first glance, it is far from clear what the problem is that DART purports to solve. As I understand the claims made on behalf of DART, there is concern about the integrity of academic work. Yet, I have not seen convincing evidence about the scope of this problem or why current practices of scholarly conduct and editorial discretion are somehow not up to the task. As someone who works with archival materials, I expect scholars to follow norms of academic citation, including precise notation regarding sources. I have not seen any evidence to suggest that such practices have led to widespread abuse or are somehow particularly vulnerable to fabrications of one sort or another. The History profession provides further evidence that disciplinary standards of citation and peer review perform well as safeguards against academic fraud in the use of archival materials.

I am willing to accept that some academic fraud exists. However, I do not see evidence that this problem is of such a scale or severity that would justify the fundamental changes in academic work proposed by advocates of DART. At the risk of sounding polemical, to my ear, the concern about academic fraud echoes debates about voting fraud or food stamp fraud. Although the possibility of fraud exists, the incidence is very low and current mechanisms of policing are up to the task of preventing or punishing instances when they occur. Pushing the comparison further, concerns about voting fraud and food stamp fraud are actually part of a larger political struggle over representation, voice, and access to resources.

Without ascribing motives, I see similarities with DART. This debate is not about fraud. Concerns over transparency and integrity are a proxy fight over a much larger issue: epistemology and what counts as social scientific knowledge.

Rival epistemologies

This is not the place to rehearse debates over quantitative versus qualitative methods. The point I wish to make is that qualitative scholars are engaged in a different kind of knowledge production from quantitative work and these differences are particularly evident around issues of research materials and replicability. Scholars on both sides of the qual/quant divide need to recognize this in order to understand the strengths and limitations of different traditions in social scientific research. I think an exemplary way to do this is found in the work of Robert Mickey. I quote at length from his award-winning book Paths out of Dixie, which examines how the democratization of authoritarian enclaves unfolded in three states of the Deep South of the United States. As a work of comparative historical analysis, Paths Out of Dixie is meticulously researched and copiously footnoted. Its aim is to “identify configurations of causal forces” in order to explain observed outcomes as well as “develop new concepts and generate theories that can be tested on additional cases (p. 22).”

As Mickey explains, his approach has limitations.

“The comparison of richly detailed narratives has some important disadvantages. One is that it cannot suffice as a testing ground for theory. The theoretical approach…is in part informed by the narratives; the narratives cannot then be said to test this account. Another is that narratives resist replication, in part because they are not assembled from systematically collected datasets. Rather, this study relies on process tracing to describe and explain democratization challenges and their consequences (ibid).”

As Mickey further elaborates, process tracing through comparative case analysis
“puts a premium on internal validity, but at the expense of external validity. Despite these (and other) problems, this brand of research is appropriate given our state of knowledge on subnational authoritarian politics…Theory generation remains a priority, and this research design advances this goal (p. 23).”

As this passage shows, Mickey is exceptionally clear (e.g. transparent) about his methods, its strengths, and its weaknesses. It is model social scientific reasoning.

But there is a deeper point here, and that is about knowledge production. Whereas the DART debate acknowledges differences between qualitative and quantitative research, the discussion largely elides the epistemological priors (and corresponding choice of materials and methods) scholars bring to the study of politics. More precisely, scholars have different views on what can be known and how. In part, this is because they are asking different kinds of questions.

To illustrate, and to bring the discussion back to the matter of DART, Mickey discusses in a footnote the distinction made by David Collier, Henry Brady, and Jason Seawright between “casual process observations” and “data-set observations.” The former is “an insight or piece of data that provides information about context, process, or mechanism” (Collier, Brady, and Seawright quoted in Mickey, p. 368n59). The latter “appear in quantitative format and constitute comparable observations gathered…as part of systematic comparison” (Mickey, p. 368n59). Both are data in the sense of discrete pieces of information, but they offer researchers leverage on different kinds of questions. Whereas casual process observations are building blocks for producing knowledge about configurations of causes, data-sets provide the raw materials for comparative statics. In the right hands, both kinds of data can produce social scientific insights that contribute to the accumulation of knowledge. In the wrong hands, both kinds of data can produce garbage.

My point is that proposals currently on offer in DART, such as active citations or a “transparency appendix”, are attempts to transform one kind of data, that used by qualitative researchers, into something that looks more like another kind of data, that used by quantitative researchers. The reason for this has nothing to do with transparency. It has everything to do with ongoing struggles within Political Science over what is considered social scientific knowledge. Debates over epistemology or methods belong in journal articles and books on the subject. It is disingenuous and dangerous to let these academic differences be executed through a set of editorial rules that will systematically diminish the voice of qualitative researchers in the discipline.

Adam Sheingate
Professor of Political Science
Johns Hopkins University

References:
Collier, David, Henry E. Brady, and Jason Seawright, “Sources of Leverage in Causal Inference: Toward an Alternative View of Metholdology,” in Henry E. Brady and David Collier, eds., Rethinking Sodcial Inquiry: Diverse Tools, Shared Standards, 2nd ed. (Lanham, MD: Rowman and Littlefield, 2010).
Mickey, Robert, Paths Out of Dixie: The Democratization of Authoritarian Enclaves in America’s Deep South, 1944-1972 (Princeton: Princeton University Press, 2015).
Stone, Deborah, Policy Paradox: The Art of Political Decision Making (New York: W.W. Norton, 2012).[/quote]

Post Reply