Academia is Surveillance

To come right out with it: Transparency is scary.

Note, this is quite a weird statement for me to make. I have publicly advocated for more transparent research and have published about it. I do highly value transparency and think there should be a lot more of it. I strongly believe that we cannot do science without as much transparency as possible¹. Still, performing transparency scares me.

I recently read two papers which helped me reconcile why I am scared of being transparent while also strongly believing that transparency is necessary. The first one is “Transparency is Surveillance” by C. Thi Nguyen [3], and the second one is “Aspiring to greater intellectual humility in science” by Rink Hoekstra & Simine Vazire [1]. Needless to say, I highly recommend both, and I will try to add to them by sharing my perspective as an early career researcher (ECR).

Paranoia, Paranoia, Everybody’s Coming to Get Me.

Nguyen’s core point when they argue that transparency is surveillance is… well, what the title says. They argue that transparency is opposed to trust and that it can be quite bad when experts are under surveillance by “the public”. This is for two reasons. First, when under surveillance by laypeople, experts are forced to only make decisions they can justify to their surveyors: the epistemic intrusion. Second, decisions made by experts are by their nature often only accessible to the experts and their peers: the intimate reasons. Due to the epistemic intrusion and intimate reasons, when forced to be transparent, experts will stop using the expertise they can’t communicate and will instead favour decisions they can justify with simplified reasoning. One example Nguyen gives is the performance of a department being distilled into performance numbers such as student throughput and publication numbers, as these numbers are easily interpretable: higher == better. What is missing in this practice are quality markers such as “What skills were the students taught?” or “Do we benefit our community?”, as these markers can’t be distilled to numbers, and any assessment - of skills, for example - may be meaningless for anyone who does not understand what these skills entail.

I can get that argument. While we as experts cannot expect our guidance to be followed when we cannot explain its validity, it also requires a lot of extra work to communicate the rat’s tail of knowledge required to understand our decisions. I cannot expect my partner to buy the good coffee beans when I cannot explain to her why they taste better. Moreover, at least since the pandemic we know how hostile laypeople can be, refusing to engage with even the best-explained argument, and even engaged laypeople can get lost, as they have a different experiential basis than an expert. My partner just does not taste the nuances in coffee that I do ever since I became an insufferable coffee snob. Nonetheless, I think it is very important for us scientists to be able to accurately convey why we made a decision and how we came to a conclusion, both for the benefit of the wider public and for our ability to reproduce each other’s results.

Hoekstra and Vazire discuss one form of transparency where my lofty belief in our duty to comprehensive transparency gets into real trouble: intellectual humility - the acknowledgement of limitations, the not-overselling of results and the graceful reception of critique. Obviously these are good points, and most researchers probably do agree that being humble and not overselling your results is the right thing to do. After all, are we not supposed to be disinterested parties, solely reporting on what the data conveys? In practice, however, reviewers and editors are clamouring for novel, transformative, groundbreaking results. I have experienced firsthand how a submission I reviewed was rejected because Reviewer 2 thought it was not novel. They even agreed with me that the paper was well done, while also giving the worst possible score - again - solely on the grounds of missing novelty². Hoekstra and Vazire bring this up as well: They argue that being intellectually humble will put you at a competitive disadvantage in a world where editors and funders want to know that they are spending their (imaginary) pages and/or money “wisely”. This paradox of the scientist - be humble while also having findings as important as possible - has long been discussed [2], seemingly to no real effect.

And this is the point where, for me, these two papers converged. When you are transparent in your research, you have to lay bare your work’s (and probably your own) weaknesses. You open yourself up to surveillance, not by fellow experts who have compassion for the mess that is doing research, who value open communication of weakness as an important part of accurately documenting our progress, but by arbiters who judge which works are worthy of the prestige they can bestow by accepting a work into their journal.

I Want to Publish Zines, and Rage Against Machines

In all my academic work so far, I have strived for transparency. Let me reiterate: I think it is fundamentally important to do research as out in the open as possible. It is a central tenet of my self-image as an upper-case-S Scientist. We cannot practice science that is to benefit all of humanity, that is supposed to generate new, reliable knowledge, that is self-correcting, behind closed doors. Only independent verification can identify robust knowledge, and only by sharing what we know can we collaboratively work on a problem we seek to solve for the benefit of all. If we rely only on trust, we have no way of weeding out bad actors or even just finding honest errors in our ways. Consequently, I make all research artifacts I create public as soon as possible, I try to report all the little decisions I make, and I try to honestly reflect in my limitations sections… and arguably that is tactically not the best thing to do. I may - as Hoekstra and Vazire put it - be putting myself at a competitive disadvantage.

I am a PhD student. I had four years to get four publications out; now I have a little over one year left, with two, preferably three, publications still to go. At this point, every rejection could mean I sit out on the street in 2023 with a set of skills no one outside of academia might want and not even a doctorate to show for it. Consequently, I catch myself seeing peer review as a threat. Nguyen argues that expert transparency is better than public transparency, as peer experts do not force you to simplify your reasoning. There should be nothing to worry about when your work is well done. Ideally, peer review is done by fellow experts who fairly judge the quality of your work and offer their expertise to improve on it. In practice, I have found - and Nguyen argues - that this is rarely the case, as expertise at best only partly overlaps.

Nguyen continues that expert transparency may lessen the epistemic intrusion of public transparency. After all, your peers should be able to follow your argument, and you would not need to simplify your - let’s say - complex statistical analysis to a single, easily interpretable number. Right? However, outside experts will rarely be able to follow the intimate reasons, as every group of researchers has their own culture, their own set of methods, styles and resources. Hence, in practice, I find myself still breaking down my own work to appease potentially hostile, or even just unfamiliar, peer reviewers. This may be a problem compounded by the multidisciplinarity of HCI, but even other research groups I have peeked into find themselves - to some extent - doing their own thing in their own ways. Further, actual research practice is littered with arbitrary decisions. Decisions for which we do not have good, academic reasoning. Either because there just is no good reason to give for a certain decision or because scientific ambitions have met with reality.

Why did you recruit this many people? Because that was the number of people I was able to lure into the university basement.

Why did you use this tool over that one? Because I find they are practically the same, and I saw that one first.

Why didn’t you use this method? Because I don’t know how that works.

I don’t think I could write this in a paper or be this blunt about arbitrary decisions in a rebuttal.

Mind, I do not advocate for lying, but I think that even under Nguyen’s “expert transparency”, I am incentivised to post-hoc rationalise decisions I made during my research, because admitting to imperfections I cannot rationalise away with fancy academic arguing gives peer reviewers ammunition they will use to reject my work. However, being transparent, being humble, means we have to openly discuss our shortcomings, our flaws and just the plain reality of science being messy, because all these things influence our results and the knowledge we gain.

I’m Not Sick, but I’m Not Well

Am I overthinking this? Maybe?

The reason why I write this down is not to criticise transparency, but to deal with these reflexive thoughts of mine - the thoughts that tell me that hiding vulnerabilities and mistakes might be more beneficial, if not necessary, for me to keep putting food on the table. This little voice in the back of my head that tells me I’d be better off just playing the game. I find these thoughts concerning, not because I intend to follow through with them but because I think I am not the only one who has fears like this. The brunt of Open Science practices is still borne by people at risk, people who might already put themselves at a disadvantage by eschewing high-prestige journals in favour of open access. Marginalised people who find themselves facing yet another vector of attack. People who are far more at risk than I am. I enjoy massive privilege from who I am and from having a supervisor who shares my values and indulges my Open Science misadventures. Even so, I am an ECR and I have a lot riding on every draft I finish.

Transparency is the right thing to do to improve science. However, we cannot expect people to willingly show their hand if it only brings them drawbacks. We cannot ask people to voluntarily make themselves surveillable when we cannot guarantee them that the surveillance will also be to their benefit. We have an abhorrent culture of error correction within an insane publishing system that punishes honest reflection on shortcomings, embedded into a system that requires speed and prolific output to stay employable… and we leave it to the people who are already in the most vulnerable positions to show this beast of a system their throats.

I don’t really have a solution to offer. I do not think there is one besides a fundamental shift in our culture - a shift I am actually optimistic about. Aside from the dread I feel every time I submit my work, every time I invite critical eyes on what I have done, I also know there are an increasing number of people out there who share my values, who believe in collaboration, intellectual humility and kindness in the face of errors. I truly believe we are currently in the middle of a paradigm shift. I think it is becoming more and more likely to get reviewers who are accepting of weakness. We will see more and more job postings that highly value open science practices over simplified metrics. We will soon become supervisors who value graceful acceptance of errors more than an oversold paper in Nature or Science. The times are changing, and with the old guard being cycled out, we will be the ones setting the culture. To bring this back to Nguyen: Transparency - as benevolent, righteous and needed as it might be - is surveillance. Those who are to be transparent are always being watched, and it is natural to be scared of that. In the end, when we ask for transparency, we also have to make sure there is no reason to be afraid of it.

[Image: A single monkey clapping in an empty concert hall.]
Harvey Danger - Flagpole Sitta

Something… Something… Panopticon.

References

[1] Hoekstra, R. and Vazire, S. 2021. Aspiring to greater intellectual humility in science. Nature Human Behaviour. (Oct. 2021). DOI:https://doi.org/10.1038/s41562-021-01203-8.

[2] Merton, R.K. 1973. The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.

[3] Nguyen, C.T. 2021. Transparency is Surveillance. Philosophy and Phenomenological Research. (2021). DOI:https://doi.org/10.1111/phpr.12823.


  1. “As possible” here meaning that sometimes other needs - such as the safety of participants - absolutely come first.

  2. Also, it was a one-paragraph review, and at that point, just… you know, insert your favourite four-letter word here.

Jan B. Vornhagen
PhD Fellow, Digital Design