Post by jdriscoll » Wed Nov 30, 2016 6:50 pm
steeleaa wrote: cherianmampilly wrote: 2) In what ways does research in settings of political violence present particular risks, costs, or limitations for transparency practices? Are there specific issues related to transparency that arise from the study of political violence that render it distinct from other areas of inquiry?
I think the risk of identifying informants is an obvious and serious one in settings of political violence, but I don't think that there is no parallel risk in other settings, such as under authoritarian regimes. In either setting, the priority has to be on protecting informants' safety over transparency to the broader public.
Abby and I both hang out with a lot of well-funded people in the discipline (who I consider colleagues and friends, and I'd be willing to bet she does, too) who are extremely comfortable talking about “bad guys.” They know what they mean, and they know what they are doing, when they use this language. It is prescriptive. The bad guys need to go away, and hopefully that happens with as few collateral civilian casualties as possible. But make no mistake: it is prescriptive. That's why they're funded. "Policy impact." "Evidence of impact." They actually took this advice seriously in graduate school and, as a result, they are now quite secure in their lives.
But if we are not those people, why does it matter? It matters because it means that a lot of “us” are, whether we choose to admit it to ourselves or not, at least potentially, in the “preventative kinetic action” business. So are our graduate students. That ship has sailed. Thick description of life in places like Mogadishu or the North Caucasus, if complete with proper nouns, can be an intelligence asset. More subtly, the social intelligence can be deployed to provide warrants for the “they *are* bad guys” claim, allowing other people to connect the dots themselves. I don’t think any of this is new; I think “critical languages” are critical for a reason. The criticisms of things like the Human Terrain Program are familiar to most readers of this board, I am sure. But the bottom line, for me, is that many young scholars hedge their bets. They are aspirational political science guild members, but they are keeping the idea of working (or consulting) for three-letter agencies in their back pocket. It is hardly original to notice that a lot of area studies expert knowledge can be re-purposed in the interest of the national security state. This is a point that is, perhaps, a little bit more salient now that we live in an era of predator-drone targeting, but it is not a new point. Local knowledge can, absolutely, be seek-and-destroy. The British Empire invented area studies; comparative politics has grown out of that root-stock. There is a literature in counterinsurgency and counterterrorism that is explicitly prescriptive; there is also an edgy way of doing descriptive work that is implicitly prescriptive, and that definitely sells books.
That means that when you run around a war zone, or recently-settled conflict zone, and hand out university business cards, and promise people you’re a grad student, they don’t stop looking at you funny. In one of my interviews in Tajikistan, a respondent nailed it: “Of course you’re a graduate student today. That just means when you go in, you’ll go in as a G-13.” The interview didn't stop there, however. Like many other interviewees, who were themselves anonymous ("thanks, IRB!"), he had no problem at all naming *other people’s names* and even drawing sprawling maps of networks. I would bet almost anything that some of my best key informants spoke to me with the candor that they did because of, not in spite of, the fact that they had a working theory that I was there in the first place to identify “bad guys” and that I might pass a list of names of “bad guys” on to…someone. This biased my data in some ways, and I try to talk about those biases openly in my book because I am positive that I am not the only researcher in my generation for whom this is true. But in the end, I had to destroy a lot of my notes. Transparency took a back-seat to liability.
Plenty of future graduate students and scholars are going to try to do this kind of work anyway. Roger Petersen (2001) makes it crystal-clear that understanding what the +2’s are up to is the key to understanding certain kinds of violent social activism; I honestly think that going and interviewing these people, if it can be done safely, is a kind of work that is really worth doing. To the extent that there is a risk that the DA-RT regime would be implemented in a way that makes this infeasible or very costly, it’s to the detriment of the discipline, I think. Most people can agree that transparency in the method of collection is a reasonable expectation. Most people can also agree that researchers working in war zones, or “hard authoritarian” environments, ought to be pre-delegated, by the guild and by our home institutions, to make the judgement calls that keep people safe, or to keep faith with promises made to people who do not enjoy our social protections. (Not everyone has a blue passport.) In my experience, the representatives of the IRB, as easy as it is to caricature them, are actually pretty sympathetic to the idea that total transparency is, when it comes to war zones, neither feasible nor desirable. This gives me some hope that as we move forward as a community, we will not focus on straw-men or edge-cases. I have written this post to focus on something that is more structural: I think the assumption that our entire community is (potentially tagged as) implicated in U.S. military activities may taint our data collection efforts. The strategies to combat this source of data bias are going to vary a lot from person to person, but it is easy to imagine ways in which a heavy-handed interpretation of the DA-RT regime deters the next generation of scholars from trying to improvise in this space. (Though the rewards of trying to operate in this space are small, and the War on Terror has no end in sight, so all of these psychological claims may be premature.)
Mark Beissinger’s quickly written post on the epistemology/fraud thread is absolutely spot-on. If we "quallies" are being asked to provide documentation that goes orders of magnitude beyond what our quant colleagues are regularly expected to provide, we shouldn't even bother crying foul -- we should just take our toys and move to a different part of the playground. It might be that all of this comes from an earnest attempt at replicable science -- and, to the extent that this is true, of course we ought to share replication files (I do). But it might also be, more than some admit, about raising the barriers to entry for certain kinds of work, like the kind that Abby and I (and probably anyone who has found their way to this venue) value. To the extent that it’s the latter, it’s a real problem.
Jesse Driscoll
UCSD
[quote="steeleaa"][quote="cherianmampilly"]2) In what ways does research in settings of political violence present particular risks, costs, or limitations for transparency practices? Are there specific issues related to transparency that arise from the study of political violence that render it distinct from other areas of inquiry?[/quote]
I think the risk of identifying informants is an obvious and serious one in settings of political violence, but I don't think that there is no parallel risk in other settings, such as under authoritarian regimes. In either setting, the priority has to be on protecting informants' safety over transparency to the broader public.[/quote]
Abby and I both hang out with a lot of well-funded people in the discipline (who I consider colleagues and friends, and I'd be willing to bet she does, too) that are extremely comfortable talking about “bad guys.” They know what they mean, and they know what they are doing, when they use this language. It is prescriptive. The bad guys need to go away, and hopefully that happens with as few collateral civilian casualties as possible. But do not mistake the fact that it is prescriptive. That's why they're funded. "Policy impact." "Evidence of impact." They actually took this advice seriously in graduate school and, as a result, they are now quite secure in their lives.
But if we are not those people, why does it mater? It matters because it means that a lot of “us” are, whether we choose to admit it to ourselves or not, at least potentially, in the “preventative kinetic action” business. So are our graduate students. That ship has sailed. Thick description of life in places like Mogadishu or the North Caucasus, if complete with proper nouns, can be an intelligence asset. More subtly, the social intelligence can be deployed to provide warrants for the “they *are* bad guys” claim, allowing other people to connect the dots themselves. I don’t think any of this is new; I think “critical languages” are critical for a reason. The criticism of things like the Human Terrain Program are familiar to most readers of this board, I am sure. But the bottom line, for me, is that many young scholars hedge their bets. They are aspirational political science guild members, but they are keeping the idea of working (or consulting) for three-letter agencies in their back pocket. It is hardly original to notice that lot of area studies expert knowledge can be re-purposed in the interest of the national security state. This is point that is, perhaps, a little bit more salient now that we live in an era of predator-drone targeting, but it is not a new point. Local knowledge can, absolutely, be seek-and-destroy. The British Empire invented area studies; comparative politics has grown out of that root-stock. There is a literature in counterinsurgency and counterterrorism that is explicitly prescriptive; there is also an edgy way of doing descriptive work that is implicitly prescriptive, and that definitely sells books.
That means that when you run around a war zone, or recently-settled conflict zone, and hand out university business cards, and promise people you’re grad student, they don’t stop looking at you funny. In one of my interviews in Tajikistan, a respondent nailed it: “Of course you’re a graduate student today. That just means when you go in, you’ll go in as a G-13.” The interview didn't stop then, however. Like many other interviewees, who were themselves anonymous ("thanks, IRB!"), he had no problem at all naming *other people’s names* and even drawing sprawling maps of networks. I would bet almost anything that some of my best key informants spoke to me with the candor that they did because of, not in spite of, the fact that they had a working theory that I was there in the first place to identify “bad guys” and that I might pass a list of names of “bad guys” on to…someone. This biased my data in some ways, and I try to talk about them openly in my book because I am positive that I am not the only researcher in my generation for whom this is true. But in the end, I had to destroy a lot of my notes. Transparency took a back-seat to liability.
Plenty of future graduate students and scholars are going to try to do this kind of work anyway. Roger Petersen (2001) makes it really crystal-clear that understanding what the +2’s are up to that is the key to understanding certain kinds of violent social activism; I honestly think that going and interviewing these people, if it can be done safely, is a kind of work that is really worth doing. To the extent that there is a risk that the DA-RT regime would be implemented in a way that makes this infeasible or very costly, it’s to the detriment of the discipline, I think. Most people can agree that transparency in the method of collection is a reasonable expectation. Most people can also agree that researchers working in war zones, or “hard authoritarian” environments ought to be pre-delegated, by the guild and by our home institutions, to make the judgement calls that keep people safe, or to keep faith with promises made to people who do not enjoy our social protections. (Not everyone has a blue passport). In my experience, the representatives of the IRB, as easy as it is to caricature them, are actually pretty sympathetic to the idea that total transparency is, when it comes to war zones, neither feasible nor desirable. This gives me some hope that as we move forward as a community, we will not focus on straw-men or edge-cases. I have written this email to focus on something that is more structural: I think the assumption that our entire community is (potentially tagged as) implicated in U.S. military activities may taint our data collection efforts. The strategies to combat this source of data bias are going to vary a lot person to person, but it is easy to imagine ways in which a heavy-handed interpretation of the DA-RT regime deters the next generation of scholars from trying to improvise in this space. (Though the rewards of trying to operate in this space are small, and the War on Terror has no end in sight, so all of these psychological claims may be premature).
Mark Beissinger’s quickly written post on the epistemology/fraud thread is absolutely spot-on. If we "quallies" are being asked to provide documentation that is orders of magnitude more than what our quant colleagues regularly do not provide, we shouldn't even bother crying foul -- we should just take our toys and move to a different part of the playground. It might be that all of this comes from an earnest attempt at replicable science -- and, to the extent that this is true, of course we ought to share replication files (I do). But it might also be, more than some admit, about raising the barriers to entry for certain kinds of work, like the kind that Abby and I (and probably anyone who has found their way to this venue) value. To the extent that it’s the latter, it’s a real problem.
Jesse Driscoll
UCSD