The transition to Stage 2 of the deliberations is currently underway but will take some time to complete. In the meantime, we very much welcome additional contributions to the existing threads in this forum.
- Posts: 3
- Joined: Wed Apr 20, 2016 10:43 pm
DA-RT is presented with the intention of enhancing transparency in the conduct of empirical political science research. The research idealized by this vision (which some may criticize as too quantitative, or hypothetico-deductive, or what not) is what I have called the "deductive template" in past work -- the four-step blueprint in which (1) we derive axiomatic and falsifiable hypotheses, (2) predict the implications of those propositions, (3) collect the data and relevant cases, and (4) analyze those data/cases by comparing predictions with observations. The revised APSA Ethics Guide lays out data access and production transparency as important points, but both deal only with the third step -- how we generate, code, and source our information. Analytic transparency concerns the fourth step, asking us to be more explicit about how we generate defensible conclusions from the data.
In some circles, though, especially among those of us working with experiments, there is an additional initiative that covers the first two steps, namely preregistration. Preregistration would, in theory, require all of us to deposit our proposed causal hypotheses (if we can even call them that) into some repository at the start of any study; then, when the study concludes, we are prevented from going back and revising those hypotheses so that we can market our research as having predicted or tested them all along. In other words, procedure matters more than results. There are obvious reasons why others of us, such as comparative-historical researchers sensitive to complex causation, laugh at this: our finished theoretical contributions (think of Moore's classic multi-pathway model of industrialization, democratization, and dictatorship) are the product of exhaustive research that fleshes out mechanisms, links, and temporalities we could not possibly have imagined at the outset. Virtually no study of historical path-dependence involving complex causal relationships is a study of elegant hypothesis confirmation, with the scholar having hypothesized all those nuanced linkages and complexities from the start.
I doubt any of us has ever actually carried out a multi-year study that resembled, in any way, the step-by-step proceduralism of this research model. But the real question is this: if DA-RT proponents really want to ensure proper conduct, why no push for preregistration? Did it slip off the agenda at the initial DA-RT meetings starting in 2010? It is as if DA-RT proponents are pushing a halfway model of "best practices" that still allows for great malfeasance on the part of the researcher pursuing the very template of practices idealized by DA-RT. If we really want transparency, shouldn't we demand preregistration?
- Posts: 11
- Joined: Wed Apr 06, 2016 8:24 pm
Operationally, I do not think that journals should create a dataverse for pre-registration of experiments; I think it should be something the universities do, perhaps in a standardized fashion, so that there is a global dataverse that can be searched. Ideally, it would contain not just the research design but also a report on the results, so that we can learn from "failed experiments". I suspect that many people around the world are duplicating experiments with null results and nobody knows about them. A journal of null-results research notes would be the ideal locus for reducing publication bias, but lacking that, a repository of pre-registered designs and quick standardized reports on their outcomes would be a very good second best.
What all journals could do is encourage pre-registration of experiments (there is a place for non-pre-registered exploratory experiments, so imposing it as a requirement would be limiting).
By the way, I have never been asked for my IRB approval number; I always add it in a footnote, but do journals ask for it if you do not? Also, is there a global repository of IRB numbers? If there is not, and it is extremely time-consuming to check whether the IRB number I have posted is linked to an actual, proper IRB, then there is quite an issue to solve. IRBs now, for example, require re-evaluation once a year during the study and, at some universities, destruction of the data after x years, so having an initial number does not mean that a researcher has complied with the entire set of IRB requirements. A stronger integration with respect to IRB seems a very easy step to take, and I am wondering why there is not much of that in DA-RT. Is it again because it was designed mostly with secondary data in mind, which does not need IRB approval?
Univ of New Mexico
- Posts: 16
- Joined: Wed Apr 06, 2016 9:20 am
HfP/Technical Univ of Munich & Duke University
- Posts: 32
- Joined: Fri Feb 26, 2016 11:39 pm
I agree that the "RT" in DA-RT suggests pre-registration should be part of it. A full report of what you did to arrive at the results you report does not necessarily show everything you did in the course of your research. In qualitative research, you might have started with one case but then decided to choose another because of insufficient access to sources, or for some other reason. Or you wanted to use one archive but then moved to another. These and other changes can be legitimate and necessary adjustments to your design over the course of the research process, but they need to be made transparent to the reader, because these decisions might be systematically related to what you do (or do not) find and conclude. The best way to make this transparent is to pre-register your design.