William J. Kelleher, Ph.D.
- Posts: 17
- Joined: Thu Apr 07, 2016 4:38 pm
- transparency about how scholars have made observations or generated evidence through research with human participants; and
- questions of when, why, and how this evidence can or should be made available or easily findable to others.
The first point is methodological. The second point is ethical. I would like to comment on the first point.
I think it is important to emphasize that the word “transparency” has different implications for the methods of quantitativists and qualitativists/interpretivists.
As a quantitativist ideal, transparency assumes that a collection of facts or statistics can be stored so that another researcher can use them to test the inferences drawn therefrom, or to replicate the method by which the data were aggregated. Replication is presumed to be the key to scientific validation of reported research results. But the association, almost equation, of transparency with replicability is generally inappropriate for qualitative/interpretive studies.
If having a methodological appendix is to be considered “transparent,” and therefore “good science,” then that requirement could reinforce the presumed importance of the quantitative model of science to which it belongs. But that model of science and its implicit configuration of methodological values is a threat to the scientific status of qualitative/interpretivist approaches.
While “transparency” as a matter of professional ethics is important to discuss and to clarify, the discussion of methodological transparency should include the warning that no endorsement of replication is intended.
The deeper problem here cannot be solved by journals that require a methodological appendix simply granting a condescending exemption for submissions that appeal to privacy concerns. Such an exemption would only preserve the implication that qualitative/interpretivist approaches are less scientific than the quantitative approaches – like a tolerated stepchild of the real political science.
An appendix discussing “how scholars have made observations or generated evidence through research with human participants” should not be understood to honor the positivistic ideal of replication as the key to validity. Such studies cannot be replicated. Each is a unique product of the political scientist who produced it, in one-of-a-kind circumstances.*
Replication as a general value for political science is misplaced; for qualitative/interpretivist science it is self-defeating. While it may be appropriate for some kinds of quantitative studies, it is inappropriate for unique interpretive studies. Of course, methods for validating claims to knowledge are crucial for any scientific study. The methods of qualitative/interpretivist studies can be criticized on procedural or methodological grounds, and the conclusions offered by such studies can be challenged by logical analysis and/or the results of other participant-observer studies. But validity depends on expert consensus, not on replicability.
* See my post in this thread, "An Example of Non-Replicable Good Science: Alice Goffman's On the Run."