On April 5th, members of OTOinsights’ t=zero partnership with the Indiana University School of Informatics participated in a workshop on “Evaluating User Experience in Games” held at the CHI 2008 Conference on Human Factors in Computing Systems in Florence, Italy. CHI is the largest and most prestigious conference in human-computer interaction (HCI), drawing over 2,000 participants from dozens of countries.
At the workshop, we presented and discussed our plans for executing a multi-modal evaluation of game play experience. After presentations from the other workshop participants, we joined a substantive discussion about the current state of game play experience research and practice. The remainder of this post is dedicated to some of the insights from that discussion and a reflection on how the t=zero team’s work reflects the latest developments in player experience evaluation methods.
A major theme from the workshop centered on the need to distinguish immersion from engagement and to determine the effect of each on player experience. Some workshop participants used the terms immersion and engagement interchangeably to refer to a single concept; others saw them as distinct forces. To some, immersion is simply the extent to which a person privileges the current media over other information (being so absorbed in a game that you do not hear someone yell for you), while engagement is something beyond immersion that fully brings a player into the game world. Participants concluded that we should establish a shared understanding of where immersion and engagement begin and end, and of how much each of those concepts (or a unified concept) affects the game play experience. Without such an understanding, valid and useful measures for evaluating player experience with games will be difficult to develop.
Numerous methods for evaluating experience with digital media are in use. For example, with the assistance of Quantemo, we simultaneously measure five or more biophysical modalities and combine that information with additional behavioral research methods. One weakness identified in all of those methods is a reliance on an expert’s interpretation of the results. Researchers are crucial to the process of evaluating player experience, but in some situations players themselves are best equipped to interpret the results our work generates. Who, then, is the “expert” who interprets the data? A great example from the workshop came from the use of eye tracking equipment in the usability test of a video game. Eye tracking equipment readily shows a researcher “what” a player is looking at, but cannot as easily address “why” the player is looking at it. Showing eye tracking footage to players and asking why their visual patterns change at any given moment may reveal meaningful information that even expert researchers cannot see.
The final issue raised during the workshop concerns the difficulty of interpreting biophysical signals as a measure of player experience. Researchers at the workshop commented on the difficulty of relying on any individual biophysical measure to capture changes in experience. Individual measures, especially galvanic skin response, are susceptible to even subtle changes in a research environment and may not accurately or entirely reflect a player’s experience with a game, because other stimuli (air temperature, background noise, etc.) add noise to the data. The multi-modal method that our Quantemo lab uses for player experience evaluation was judged by participants to be a promising example of how to compensate for the shortcomings of individual biophysical measures. Monitoring multiple biophysical signals and supplementing those measurements with traditional, vetted behavioral research methods yields results that both stakeholders and researchers can view with increased confidence.
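To make the multi-modal idea concrete, here is a minimal sketch of how multiple noisy channels can be fused into one composite index. This is an illustration only, assuming simulated GSR, heart rate, and facial EMG traces; it is not the actual Quantemo pipeline, whose internals are not described in this post, and every name and data value below is hypothetical.

```python
# Minimal multi-modal fusion sketch (hypothetical; not Quantemo's method):
# z-score each channel so different units are comparable, smooth the
# noise-prone GSR trace, then average the channels into one index.
import numpy as np

def zscore(signal):
    """Standardize a channel so channels with different units are comparable."""
    return (signal - signal.mean()) / signal.std()

def moving_average(signal, window=50):
    """Simple smoothing to damp transient environmental noise in a channel."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def fuse(channels):
    """Average z-scored channels; agreement across modalities raises confidence."""
    return np.mean([zscore(c) for c in channels], axis=0)

# Simulated 60-second session sampled at 100 Hz (placeholder data, not real players).
t = np.linspace(0, 60, 6000)
event = np.exp(-((t - 30) ** 2) / 8)                  # an arousing in-game event near t = 30 s
gsr = event + 0.4 * np.random.randn(t.size)           # galvanic skin response, noisy
hr = 70 + 10 * event + 2 * np.random.randn(t.size)    # heart rate (bpm)
emg = 0.5 * event + 0.2 * np.random.randn(t.size)     # facial EMG

arousal_index = fuse([moving_average(gsr), hr, emg])
print("Peak composite arousal near t =", t[arousal_index.argmax()], "s")
```

The point is not this particular fusion rule but the underlying principle: noise that corrupts one channel (a draft shifting skin temperature, say) is unlikely to corrupt all channels at once, so agreement across modalities is a more trustworthy signal of a player’s state than any single measure.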
The t=zero team was extremely happy to have the opportunity to participate in the workshop and share our experiences with peers from around the world. Methods for evaluating experience with digital media are growing and changing at a rapid pace, and events like this workshop help all of us understand the mutual challenges we face as a field. While we are currently focused on creating a shared understanding of our terminology, leveraging research participants as knowledge co-creators, and developing robust and reliable methods for evaluating experience, you never know what the next challenge will be. Participating in this workshop gives us confidence that t=zero is well positioned to tackle the current and upcoming challenges created by the ever-changing digital landscape.
Tyler Pace, Shaowen Bardzell, Ph.D., Jeffrey Bardzell, Ph.D.