Papers are getting more complicated.

It’s genAI’s fault.
Research

Author: Jan B. Vornhagen

Published: May 15, 2026

It feels like, over the last year, I did two things: literature reviews and peer review. I have been working on several systematic literature reviews – that is, reading papers to combine into meta-analyses – and I reviewed journal papers, conference papers, WIPs, and did some ACing. I have been reading and evaluating a lot of both older and brand-new research. I’ve been reading a lot.

Recently, I noticed something weird: when doing peer review, as compared to other reading, I have started to struggle to understand these new submissions. I would read through paper discussions full of what felt like simple sentences overloaded with complex words. I felt like I should understand what I was reading, but failed to grasp the meaning the authors wanted to convey. Obviously I can’t share anything – peer review is anonymous, and I will not name and shame people for what is, by definition, unfinished work not ready for the public – but here is a made-up example of the style of language I am talking about. Let’s say authors tested whether throwing rocks at participants would make them feel bad and leave the experiment. Instead of writing this, however, they write something along the lines of “adverse geological haptic events constituting a proximal risk factor for negative affect and reduced re-engagement rate”.

This is not a wrong description of the study. In fact, it might be the precise description of the experiment and its associated measures. However, in the discussion I do not expect the exact, precise – and, let’s be honest, tortured – language of statistical results, but its translation into real-life relevance. In the discussion I don’t want to read a second time that there is a statistically significant, negative correlation between two measures. I want to read what throwing rocks at people in an experiment means for the broader question we want to answer.

My first instinct is – of course – to suspect that I am finally losing it. This new research may just be too much for me, and I am losing my edge. However, during my literature reviews, reading through papers from three, maybe even two years ago, I do not suffer the same feeling of getting lost. Of course, there are always more and less complex papers and more and less complex writing styles; yet it seems to me that this is a recent trend. Discussions, especially, seem to have become considerably more complicated to read.

Not to beat around the bush: I blame genAI for this. Without fail, all papers I recently reviewed had their little, dutiful disclaimer. How the authors used juuuuust a little bit of AI. Totally not to write anything (or, god forbid, do anything with the data), but just a little bit to refine the language. To help with grammar and writing. I get why: English is often not our first language; writing and editing is hard and tedious; and the punishment for not-great writing may be a paper being rejected despite whatever merits it may otherwise have. Many of us may also not really have a love for writing – or lost it somewhere between admin, meetings, and the painful slowness of putting thoughtful, carefully crafted words to page¹.

If this is the case, genAI can feel like a godsend. A technology that can help us write things! To take some burden off of us. To fix our language. And indeed, these bots can undeniably sound like scientific writing. Looking at outrageous slop papers – those that are clearly scientific fraud, completely hallucinated – they have this typical academic denseness. They seem to spill over with complicated words, expressing complicated research in sentences three lines too long for their own good. Academic writing can indeed be rather complicated, but it is not this complexity that makes writing scientific. Good scientific writing is only ever as complicated as it needs to be. We write complicated things because things are complicated and we need to be precise; not because we want them to sound complicated. Discussions, especially, should be the place where we ease up on the complexity. It is where we translate results – the part where we need to be very precise – back into real-world relevance; where they are contextualized; where the reader is told clearly what to take away from the abstraction that is the experiment.

Requiring readers to engage in a whole hermeneutic circle just to find out how to summarize our findings in the two sentences they need for their background section is rude. It is a shitty thing to do. It is wasting the readers’ time in order to sound more intelligent, more scientific. Readers have better things to do with their time.

AI, of course, does not give a shit. It literally cannot give shits. It takes a piece of writing, something with intent and a unique voice, and makes it sound ‘scientific’: dense, complex, and outright hostile to understanding. I once read that this may work well in peer review, the logic being that no professor would embarrass themselves by admitting that they did not get a paper. Instead they would laud its thoughtfulness. How brilliant it is. So far at least, I have found this not to be the case. The peer reviewers I recently witnessed seemed to consistently hate overly complex writing and call it out. To some extent, it even seemed to make papers come across as worse than they are. Where in plain writing those papers were merely unexciting, the overly complex wording made them sound almost deceitful. As if they wanted to trick readers and oversell their results.

AI is shit. For many reasons. In general, any commercial AI product should not be used, for the simple fact that its harms will outweigh any benefits it may provide. For writing in particular, what it sells us – better writing, correct English that gets papers accepted – is a lie. Instead, it makes us lazy and hinders our learning, in order to shuffle these skills from us to those who own the models – billionaires who despise science, democracy, and people – so they can sell our stolen works and stolen skills back to us to build doomsday bunkers in New Zealand.

Writing can absolutely be a pain in the ass. However, it is how we communicate our research, and understanding how to communicate our research is a key skill we need to keep honing. When using AI, we either give up on learning an essential skill or, worst case, stop bothering with communication at all – and if authors don’t bother writing it, why should readers bother reading it?

Footnotes

  1. Like these words very much are not.