Transparency and Common Sense in the Face of Diverse Models of Social Science
Posted: Fri Dec 02, 2016 3:01 pm
Since the debates over DA-RT took off a few years ago, I have been having trouble putting a finger on what the problem is that we are trying to solve. Yes, we want to ensure that researchers are honest and transparent, and we want to do this by making sure others can evaluate different aspects of their various knowledge claims. But, isn't that what we already do? For most researchers, is it not simply a matter of common sense that convincing a skeptical audience of one's claims requires laying bare one's assumptions, explaining one's models, and demonstrating how empirical data constitute evidence of claims? For this purpose, if there is not enough space in the version of the article to be published, certainly journals can make space for online storage of appendices or data-sets, giving scholars the opportunity to provide as much documentation as they believe is necessary to convince skeptical readers that their research is innovative and compelling. But, beyond this, current professional practices, from double-blind reviews to post-publication debates between authors and their critics, already do much to incentivize scholars to lay bare the basis for their knowledge claims and to lay bare the principles informing different aspects of an argument (from the formulation of questions and definition of concepts to the operationalization of variables and the claims that certain models or narratives are superior to alternative explanations).
Indeed, this is arguably a more holistic process for encouraging transparency, one that also adds expectations of originality and logical coherence. COMMON SENSE already obliges researchers to do what they can to be convincing to audiences that they must assume to be skeptical, given the looming threat of rejection from reviewers and the possibility of critical appraisals of their published work. Some type of "policing" is thus already built into common-sense understandings of different forms of social scientific research, whereas adding new layers of procedures and regulations would require uniform understandings of what constitutes "knowledge" or "truth." As for those inclined to knowingly circumvent ethical standards and obligations, adding new standardized procedures simply shifts the loci and techniques of evasion, as is amply evident from cases in other disciplines (psychology and economics, for example) where transparency procedures have long been in existence but where some scholars nonetheless sought to evade detection of dishonest or problematic practices (and succeeded in doing so for a period). For more intrinsically diverse disciplines like political science, the main point of vulnerability in a knowledge claim varies for different styles of research aimed at different intellectual objectives and even different kinds of audiences -- and for each of the resulting research products, scholars have long been aware of what they need to do to lay out convincing knowledge claims, as this is the entire point of their professional identities and commitments.
In purely statistical work relying on data-sets involving intrinsically quantitative data (e.g. GDP growth rates, migration statistics, voter turnout), it is in the interest of most researchers to tout their data-sets and code to gain recognition for their approach, and repeated refusals by scholars to share their data with colleagues interested in replicating their work carry mounting reputational costs. More problematic are data-sets that require researchers to quantify observations that do not originally appear as numeric quantities (e.g. assigning a value on a continuous variable to capture the extent of democratization or the strength of identification with an ethnic group), as here the most critical locus of the problem has to do with the initial CODING of historical events, institutional attributes, and social processes so that these can be treated as quantitative data. For qualitative researchers relying on archival research, the discussion in the text and footnotes allows authors to anticipate some of the challenges and lay out their assumptions about contested historiography or their motivations for selecting from the range of available primary sources (as self-aware historians have sought to do to make their arguments more convincing). For interpretive work relying on close-range observations of human subjects, there are not only issues with ensuring the privacy and safety of the subjects themselves (a point repeatedly stressed by ethnographers) but also a more fundamental issue of capturing the moment of observation within a spatio-temporally bound context. For the latter, the issue becomes one of whether similar acts of observation and interpretation might yield alternative narratives within other contexts that appear to be similar - and it is out of the resulting debates that we seek to establish the value of a particular narrative.
Unless we assume that we live in a world of final truths that all of us define and approach in the same way, there is no reason to insist on a uniform definition of transparency or a uniform view of the purpose of replication.
Short of that kind of unified consensus, which has eluded us for centuries for good reason, the very act of convincing skeptics among one's primary audiences carries a built-in incentive to make the different steps and components of one's research clear to readers, and this obligation is carried forward in different ways in different disciplines and in different styles of research in our own discipline. Against this backdrop, an effort to create and impose uniform procedures across journals that have historically showcased diverse approaches and arguments for diverse audiences will create unevenness in submissions and acceptance rates, along with costs in terms of time and resources. At least as importantly, these procedures will simply shift the locus of "dishonesty" for the small minority of researchers who consciously seek to conceal some form of "malpractice." In some ways, the DA-RT effort is reminiscent of the creation of long, elaborate, complicated tax codes that lead tax evaders simply to find different loopholes, while leaving everyone else to pay the costs by wading through more complicated forms and expending more energy in filing their taxes.