II.2. Evidence from researcher interactions with human participants

Alison Post
U.C. Berkeley
Posts: 3
Joined: Fri Nov 18, 2016 12:39 pm

Cybersecurity and work with human subjects

Posted: Fri Nov 18, 2016 3:21 pm

There are scholarly reasons, as others have noted, for sharing the data we collect from human subjects. Others may be able to build on our work, and knowledge can thus accumulate more quickly. Sharing may also discourage the misrepresentation of evidence. In addition, funding bodies increasingly require that the data whose collection they finance be shared with the broader community.

One concern that has not yet been raised in this forum, however, is how cybersecurity should factor into our decision-making regarding the costs and benefits of transparency, and particularly graduated or intermediate forms of data sharing. Cyber “insecurity” may affect the circumstances under which data from human subjects may be shared without endangering participants.

It is important to consider the increasing insecurity of digital storage systems. When sharing transcripts or recordings with journals or a repository (even with access restrictions in place, such as limiting access to members of the scholarly community), one relinquishes control over the materials and thus cannot personally ensure that they are not broadly disseminated by accident. This is abundantly clear as instances of hacking facilitated by sophisticated phishing schemes proliferate. (I would certainly never fault a journal or QDR if they were hacked by the Chinese or WikiLeaks, given that the U.S. government has fallen prey to such attacks.)

Perhaps the way forward is to give researchers the leeway to decide what level of sharing is truly unlikely to put subjects at greater risk than they already face, based on their knowledge of the context in which they work, the nature of their IRB approval, the content of the interview, etc. This will likely mean sharing transcripts or notes from some interviews (perhaps those that can be realistically anonymized, or where the subject matter could in no way be conceived of as objectionable) and keeping others on one's own computer (with names coded, etc., and nothing stored on the cloud). Since references to anonymous interviews are usually complemented by material from other sources in articles, it does not seem unreasonable to ask journals to be accommodating on this front.

My concerns about security do not pertain only to interview data. It is also becoming increasingly difficult to anonymize survey data in an age of big data. My colleagues in more technical disciplines maintain that there are enough data available on most individuals in countries like the US that much of the survey data collected today can be traced back to specific individuals. This is particularly true of geo-referenced data, such as that collected through cell phones. As we move toward tablet-based surveys that collect GPS coordinates and the like, it is incumbent upon us to ensure that the measures we take to anonymize data are in fact effective, and, where they are not, that the survey responses themselves would not put respondents at risk.
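The re-identification risk described above can be made concrete with a simple k-anonymity check: a record is shielded only if at least k respondents share its combination of "quasi-identifiers" (fields that are harmless alone but identifying together, such as ZIP code, birth year, and gender). The sketch below, using invented toy data, shows how quickly a supposedly anonymized file can fail this check:

```python
from collections import Counter

# Toy survey records (invented): the quasi-identifier tuple is
# (zip_code, birth_year, gender). No field alone names a respondent,
# but the combination may be unique.
records = [
    ("94720", 1985, "F"),
    ("94720", 1985, "F"),
    ("94720", 1990, "M"),
    ("10001", 1985, "F"),
]

def k_anonymity(rows):
    """Return k: the size of the smallest group of records sharing the
    same quasi-identifier combination. k == 1 means at least one
    respondent is uniquely identifiable from these fields alone."""
    return min(Counter(rows).values())

print(k_anonymity(records))  # -> 1: two of these records are unique
```

Here two of the four records have one-of-a-kind quasi-identifier combinations, so k = 1 and the file is not meaningfully anonymized. Note that precise GPS coordinates are effectively always unique, which is why geo-referenced survey data are so hard to release safely.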

Anastasia Shesterinina
Yale University
Posts: 17
Joined: Thu Apr 07, 2016 11:51 am

Re: Cybersecurity and work with human subjects

Posted: Sat Dec 17, 2016 2:46 pm

Thank you, Prof. Post, for highlighting cyber "insecurity" as a critical issue in the discussion of transparency in the discipline. Do you have suggestions or examples of more effective measures for anonymizing data?
