The transition to Stage 2 of the deliberations is currently underway but will take some time to complete. In the meantime, we very much welcome additional contributions to the existing threads in this forum.
- Posts: 11
- Joined: Wed Apr 06, 2016 8:24 pm
I propose a thread in which we can introduce ourselves, perhaps say why we got involved, and list our top three problems with DA-RT. The latter should be a quick, simple preview; we can discuss things in detail in the specific threads. This is more of a survey of the general sentiment of the community, so do not worry about repeating arguments posted by others or about not explaining things in detail.
I'll start: my name is Paolo Spada, I am a postdoc at Southampton University, and I work with mixed methods on democratic innovations.
I got involved in this discussion at APSA, where I attended a panel on research ethics in which Elizabeth Wood analyzed some of the problems of imposing standards devised for quantitative research on qualitative research.
I am not primarily a qualitative researcher, so I will probably read most of what the other members of the community write, ask for clarification when I do not understand something, and perhaps jump in if somebody starts a discussion on QCA or on the integration of quantitative and qualitative data.
3 quick points about DA-RT:
1) I have done limited qualitative research, in low-risk environments, during my projects, but the very little I have done has generated boxes full of notes and field diaries, half of which I can barely understand myself after a few years. My primary methods are statistical analysis and experiments; I conduct participant observation and informal interviews before, during, and after my quantitative work for obvious reasons: I can't design any decent impact evaluation without speaking to the people living what I need to evaluate and without participating in it myself. I use these data mostly for framing, storytelling, and theory building. The idea that I have to scan notes that are basically personal reminders of life experiences is surreal. Without a massive amount of work they would be meaningless to anybody; I write badly in three languages and usually take notes in a weird mix of them, depending on the setting.

On the other hand, I also do structured and semi-structured interviews, and those would be easier to upload, precisely because I have digital records and I obtain consent before each one. Studying democratic innovations, many of my interviews are with elites and public figures and carry very low risk if consent is given. My other interviews, with participants and NGO organizers who heavily criticize local governments, are another story. They might get their funding cut or incur some other reputational cost, so even in a field like mine, with zero physical risk, I often discuss at length with each interviewee what to do. Many times they want their voice published; other times they think nobody will ever read my academic work; and sometimes they tell me not even to mention that an interview took place, and to use the material as if I had come up with it via participant observation.
2) The idea that there is a technological solution to malpractice in academia, which to me appears to be a driver of this initiative, is problematic. Replication from original data only allows one to check internal consistency; there is no way to establish that the data are fake without new research. "Replication" is actually a confusing word even if we limit the focus to simple quantitative studies, because it alludes to two separate things: 1) checking that the code runs on the dataset and reproduces the exact results in the published paper (transparency?); 2) doing a new study with the same methodology to explore external validity (replication?). The Dutch psychologist who faked more than 50 experiments was caught not because somebody failed to replicate one of his experiments using his code and data, but because a grad student denounced him. The same occurred with the LaCour fraud: it was uncovered only when new data were generated.
3) DA-RT emerged because we have finally realized that we can't replicate almost anything in published quantitative research, since there is little focus on data and code transparency in our teaching of quantitative methods. While there is pedagogical value in replicating quantitative research, because students and researchers learn the code, it is unclear to me that the same value can be generated from the field notes behind ethnographic articles and books such as Weapons of the Weak. Research based on structured interviews transcribed in low-risk settings, process tracing, QCA, archival research in which everything is public, and high-risk fieldwork are all very different stories. There are many possible standards, and without this type of conversation to define what works for each sub-field and method, forcing a single standard on everyone will generate all sorts of biases. The quantitative standard of DA-RT works fairly well once some sort of embargo or limitation is added to prevent scooping; we got one standard semi-right, now we need a few more.