Video Games Expertise

Introduction

What makes an expert for video games? Instinctively, I would argue that someone who writes about them for a living has a good chance of being an expert. A brief look into any given comment section under any given game review shows that this definition is far from universally shared, or at least routinely questioned. But why is it important to know (or at least define) who counts as an expert? Why am I, as a researcher, interested in knowing whether the participant currently filling out my questionnaire is an expert?

Let’s have a look and let’s start with the second question:

Controlling Variables

Actually, no. Let’s start with a brief refresher on experimental study design:

When doing research, chances are we are using some sort of (quasi-)experimental setup. In these cases we, as researchers, make a specific manipulation and want to study how this manipulation affects the participant. In HCI, an example would be testing whether people like a game more if they use a fancy new controller with a joystick for each finger, compared to a standard controller which only has a laughably minuscule two sticks.

In this case, we want the only change between the two conditions to be the controller our participants use. If anything else (for example, the game our participants play) changes as well, we suddenly have a confounding explanation for our results: maybe it was not the controller, but the game the participants played. This forces us to write a longer Limitations section and, more importantly, reduces the explanatory power of our study.

These alternative explanations are the reason why we usually control for demographic variables like age, gender, nationality, etc. (and not because they are great for p-hacking the results): when our participants meaningfully differ between conditions, that difference could be the actual reason for the observed effect, instead of our intended manipulation.
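As a toy illustration of this point (all numbers and condition names are made up for the sketch), a quick simulation shows how an uneven distribution of one such variable across conditions can manufacture an effect where none exists:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # participants per condition

# Hypothetical confound: 70% of the "new controller" group are experts,
# but only 30% of the "standard controller" group.
expert_new = rng.random(n) < 0.7
expert_standard = rng.random(n) < 0.3

def enjoyment(is_expert):
    # The controller itself has NO effect here; experts simply rate
    # more conservatively (illustrative numbers only).
    base = np.where(is_expert, 4.0, 6.0)
    return base + rng.normal(0, 1, size=is_expert.shape)

score_new = enjoyment(expert_new)
score_standard = enjoyment(expert_standard)

# The group means differ although the manipulation did nothing:
print(score_new.mean(), score_standard.mean())
```

A naive comparison of the two means would suggest the standard controller is more fun, when in reality only the share of experts differed between conditions.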

Expertise is another such variable. But why?

Expertise

In general, expertise means that someone is very familiar with, knowledgeable about, or very good within a certain domain. So, oftentimes, an expert is someone who has worked in a specific field for quite some time. So far, so uncontroversial1.

Now, when studying the experience of people, them being very familiar with the experience becomes an important factor. Why? Well, because an expert literally experiences their domain differently than a layperson does.

How does Expertise change the experience of the domain? Here I will be mostly talking about art expertise (because that is the one I am somewhat knowledgeable about), but the general concept should be agnostic to the specific domain.

In general, expertise means one has accrued specialized knowledge, which in turn leads to a deeper understanding of underlying structures and connections [1]. In practice, this means that experts have an easier time classifying objects in their domain and tend to use more levels to do so. So while a layperson may classify a set of artworks by dominant color or by content, art historians may use several layers of style, genre, artist, epoch, etc.

Together with their domain-specific knowledge, experts are further said to be able to separate their instinctive, emotional “gut” response to an artwork from their deliberate, cognitive responses [4]. In practice, and in the study above, this means that if you show laypeople and experts disgusting images and say they are art, experts will have less extreme emotional reactions (they will feel and show less disgust) while also rating the images as more positive, potentially because they incorporate judgments of style, artistic quality, etc.

In empirical aesthetics, expertise being studied as an important, more or less fixed concept has resulted in measurement tools like the Vienna Art Interest and Art Knowledge Questionnaire (VAIAK [9]). This tool measures how interested people are in art and also tests participants' actual art knowledge, in order to measure their expertise beyond self-report. In turn, this allows researchers to accurately control for the expertise of their participants, be it just for control or for experimental manipulation.

So what does this mean?

An expert will generally react differently to a stimulus in their domain compared to a layperson. Their experience will incorporate more contextual factors, more aspects of the work, and prior knowledge, and their reactions are often more tempered (as they have a larger pool of references to compare the present stimulus to). Further, as this contextualizing of a stimulus happens in the later stages of the art experience (see [8]), these judgments are more easily verbalized, allowing experts to, for example, write out a review arguing the pros and cons of a given work.

So what does this not mean?

Just in case it is not clear: a layperson's experience is not wrong. It is not the case that a layperson does not “properly” think about their art experience and therefore comes to a wrong conclusion. Rather, the past experiences of layperson and expert differ, leading to an emotional reaction and aesthetic judgment that is true to each of them. There is no objective right or wrong when it comes to the personal art experience.

Videogame Expertise

So during an experiment, chances are that an expert will react differently to our manipulation. While individual differences between participants are offset by collecting data from many participants, an unknown number of experts in the sample may limit the explanatory power of our study. Obviously, the potential amount of damage depends on our manipulation and how it interacts with expertise.

Subsequently, we should control for expertise when collecting our sample, in the same way we control for gender, age, etc. However, to do this, we need to measure expertise, which, for videogames, is not that well defined.
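Given some measure of expertise, controlling for it at sampling time could look like simple stratified assignment. A minimal sketch (the function name, condition labels, and the idea of a prior knowledge quiz are all hypothetical here, not a method from any of the cited works):

```python
import random

def assign_conditions(participants, is_expert, seed=0):
    """Assign participants to two conditions, alternating within each
    expertise stratum so experts and laypeople end up roughly balanced."""
    rng = random.Random(seed)
    assignment = {}
    for stratum in (True, False):
        group = [p for p in participants if is_expert[p] == stratum]
        rng.shuffle(group)  # randomize order within the stratum
        for i, p in enumerate(group):
            assignment[p] = "new_controller" if i % 2 == 0 else "standard"
    return assignment

participants = list(range(10))
# Expertise measured beforehand, e.g. via a knowledge quiz (hypothetical):
is_expert = {p: p < 4 for p in participants}  # 4 experts, 6 laypeople
assignment = assign_conditions(participants, is_expert)
```

With this scheme, each condition receives half of the experts and half of the laypeople (up to rounding), so expertise can no longer masquerade as an effect of the manipulation.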

When it comes to traditional art, expertise is usually defined by the person having formal knowledge about the domain. For example, many studies about experience and art use art history students and psychology students as experts and laypeople, respectively. In fact, the VAIAK was validated with these two populations, and this makes sense, as the former population is explicitly taught how to interpret art, while any individual in the latter would need to have taught themselves.

For games, this differentiation is not as easy. On the one hand, I do not think there is an equivalent to art history students when it comes to games, as university courses about games are usually more about making them than interpreting them. On the other hand, games withhold and/or massively change their experience depending on player input. This skill aspect seems to have been the focus when it comes to expertise in HCI, as most studies I have found treat expertise purely as skill.

For example, Lindstedt and Gray [5] identified metrics in Tetris to predict expertise, in this case defined as the ability to get a high score. Destefano et al. [2] showed that player behavior in Tetris, i.e., the number of rotations or deciding mid-fall whether there is a better spot for the given piece, depends on expertise, read: skill, as well. In another study, Miller et al. [7] found that higher expertise led to higher engagement, arguing that the easier fulfillment of competency fosters more intrinsic motivation. Miller et al., however, also incorporated prior knowledge into their study (albeit for biochemistry – the topic of the citizen science game they used – and not for games), which was linked to repeated sessions with the game.

One way to define expertise in a more multi-faceted manner is the expertise model for MMOGs by Taylor et al. [11]. They point to four facets of expertise: Investment, Skill, Discourse, and Game Knowledge. This means expertise is not just the raw skill people have, but also the time they are willing to invest in the game, their ability to converse about the game and with other players using the game's specific lingo, and their knowledge about the game itself and in context. Personally, I think this is a good distinction; however, as far as I can tell, these categories have not yet been empirically distinguished and were not yet formulated to cover games in general.

Nevertheless, expertise has been measured before for video games, and in quite a variety of ways. An overview of the methods used to measure expertise can be found in this bachelor thesis: Mairue [6] points out that many works rely on self-report questions about the amount of time spent playing games. This is a decent stand-in measurement for interest and is, for example, also part of art expertise questionnaires. However, a person who plays the same video game for 30 hours a week and a person who plays a wide variety of indie or niche titles will be grouped as the same. Depending on what is studied, this might not be sufficient.

The wide variety in measurements, even within one domain, observed by [6] might very much be due to the lack of a generalizable theory of expertise for games, and often a lack of need for one. Many of the studies mentioned above have no need for a universal theory of gaming expertise, simply because they are only interested in the player having spent time in the specific game they are given. The multi-faceted theory by Taylor et al. is an outlier in the sense that they explicitly noticed that players of MMOs end up with different play styles that lead to specialized knowledge about the game. This is hardly a concern when measuring the experience of Tetris.

So, while there are a few studies and works that did look at gaming expertise, I would argue (though I have by no means conducted a complete literature review) that there is no theory, and probably no awareness, of expertise similar to the concept in empirical aesthetics. While I don't think the above-mentioned studies suffer from this – their measurements and concepts seem perfectly fine for their research questions – studies that want to look at more general gaming experiences very much could, and therefore should, mind expertise as a confounding variable. Especially in light of the studies mentioned above that do report differences in gaming behavior between experts and laypeople.

A tale of two terms

For some people, the way I talk about expertise above – as a familiarity with the medium and the ability to contextualize a given work, as opposed to raw skill – might ring familiar. As a matter of fact, and unbeknownst to me until I was made aware of it halfway through writing this, there is the concept of video game literacy. I have trouble pinpointing the origin of the term, but I think it goes back to both/either Squire [10] and/or Zagal [12], with both building on Gee [3]. I will focus on Zagal here, because I find their work a bit more applicable, though there are a lot of similarities between Squire and them… also, Zagal's text is more easily accessible.

Zagal states that literacy requires the ability to decode, understand meaning, and produce meaning in regard to a specific domain. In this framework, games can be understood in the context of human culture, other games, and technology, and by deconstructing them. This sounds very similar to the concept of expertise I talk about above.

However, while gaming expertise has found its way into some empirical work, I have not yet found work that studied the influence of video game literacy on the gaming experience, especially not in the sense of a confounding variable. So for now, video game literacy, just like expertise, seems to be mostly ignored in studies about the video game experience.

Conclusion

In general, my point is that the prior experience people have with games, and how it influences their experience, should probably be studied a bit more. On the one hand because of its potential influence when performing studies, but also to get a better understanding of how we as people experience games as cultural artifacts, as art. Games are widely available and come in countless shapes. Their players, in turn, have widely different contexts when experiencing them. From the FIFA super fan, who owns a console solely for its yearly incarnation, to the indie games enthusiast who scours itch.io for niche art projects. From the semi-professional CS:GO player to the fan of walking simulators. Uncritically grouping all these players together, be it in an experiment or in any theoretical model of the player experience, will not be sufficient to explain why people experience games as they do. I very much do not argue for an equivalent of player types for expertise; I don't think that would be the right direction to go. Rather, I think we need more awareness among researchers of how the individuals that walk into our studies differ before they do so, if we want to understand why they differ afterwards.

References

[1] Augustin, M.D. and Leder, H. 2006. Art expertise : A study of concepts and conceptual spaces. Psychology Science. 48, 2 (2006), 135–156.

[2] Destefano, M. et al. 2011. Use of Complementary Actions Decreases with Expertise. Proceedings of the Annual Meeting of the Cognitive Science Society. 33, (2011).

[3] Gee, J.P. 2004. What Video Games Have to Teach Us About Learning and Literacy. Palgrave Macmillan.

[4] Leder, H. et al. 2014. What makes an art expert? Emotion and evaluation in art appreciation. Cognition and Emotion. 28, 6 (2014), 1137–1147. DOI:https://doi.org/10.1080/02699931.2013.870132.

[5] Lindstedt, J.K. and Gray, W.D. 2013. Extreme Expertise: Exploring Expert Behavior in Tetris. Proceedings of the 35th Annual Conference of the Cognitive Science Society. (2013), 912–917.

[6] Mairue, S. 2020. Game expertise: A multi-faceted concept.

[7] Miller, J.A. et al. 2019. Expertise and engagement: Re-designing citizen science games with players’ minds in mind. ACM International Conference Proceeding Series. (2019). DOI:https://doi.org/10.1145/3337722.3337735.

[8] Pelowski, M. et al. 2017. Move me, astonish me… delight my eyes and brain: The Vienna Integrated Model of top-down and bottom-up processes in Art Perception (VIMAP) and corresponding affective, evaluative, and neurophysiological correlates. Physics of Life Reviews. 21, (Jul. 2017), 80–125. DOI:https://doi.org/10.1016/j.plrev.2017.02.003.

[9] Specker, E. et al. 2018. The Vienna Art Interest and Art Knowledge Questionnaire (VAIAK): A Unified and Validated Measure of Art Interest and Art Knowledge. Psychology of Aesthetics, Creativity, and the Arts. October (Oct. 2018). DOI:https://doi.org/10.1037/aca0000205.

[10] Squire, K.D. 2008. Video-Game Literacy – A Literacy of Expertise. Handbook of research on new literacies.

[11] Taylor, N. et al. 2011. Modeling play: Re-casting expertise in MMOGs. Proceedings of the 2011 ACM SIGGRAPH Symposium on Video Games (New York, NY, USA, Aug. 2011), 49–53.

[12] Zagal, J.P. 2008. A framework for games literacy and understanding games. Proceedings of the 2008 Conference on Future Play Research, Play, Share - Future Play ’08 (Toronto, Ontario, Canada, 2008), 33.


  1. If you are an epidemiologist or a climate scientist, my sincere condolences.

Jan B. Vornhagen
PhD Fellow Digital Design