Prof. Gudrun Klinker
Plecher, David (@ne23mux)
Augmented Reality (AR) and Serious Games (SG) have become increasingly prevalent over the years, both among the public and in industry. With the continuous progress in technology, keeping up with the constant demand for knowledge is challenging, and the need for new, more efficient educational methods is ubiquitous, ranging from everyday classroom teaching to the advanced medical sector. According to learning theories such as action learning or situated learning, embedding a learning process in an application context that actively involves and challenges the learner supports and enhances the learning experience. On paper, combining AR and SG creates the possibility for a modern and highly interactive digital game-based learning environment that can serve as a foundation for a playful introduction to any topic while also maintaining the players' motivation through game elements. The game introduces a number of different kana and kanji, and in order to advance in the story, the player must complete various tasks, some of which are embedded in an AR environment.
This Guided Research consists of two main parts:
- The further development of Dragon Tale, a project created at TUM in 2017 and continuously updated since. The game uses Unity3D with Vuforia Engine for AR features and is optimized for mobile devices.
- A user study investigating the effects and limitations of Augmented Reality in a Serious Game.
The player takes on the role of Yuni, the female protagonist of the game. One day she finds a newly hatched dragon that seems to have been separated from its family. The two of them set off on a journey through places imbued with magic, where they uncover ancient mysteries, solve puzzles and defeat enemies, following the traces of the past. On their way, they come across different kanji, which seem to be the key to controlling the dragon's magic. These kanji are a core component of progressing through the story. Currently, the game has eight levels.
Changes and Additions during this Project
In response to the feedback received from previous user studies on the game, we focused on improving existing features instead of adding new ones:
Quest System reworked
An improved version of the existing Quest System was implemented to fully support serialization of game states and to make creating and designing quests easier. The new system allows multiple quests and objectives to be active simultaneously and gives the developer more flexibility in quest design overall.
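The core ideas of the reworked system (several simultaneously active quests, each with independent objectives, and a serializable game state) could be sketched as follows. The game itself uses Unity C#, so this is only a language-agnostic illustration in Python; all names (`QuestLog`, `Objective`, etc.) are hypothetical and not the project's actual classes.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Objective:
    description: str
    completed: bool = False

@dataclass
class Quest:
    quest_id: str
    objectives: list = field(default_factory=list)

    @property
    def completed(self):
        # a quest is done once every objective is done
        return all(o.completed for o in self.objectives)

class QuestLog:
    """Tracks multiple active quests and serializes them as part of the game state."""
    def __init__(self):
        self.active = {}

    def start(self, quest):
        self.active[quest.quest_id] = quest

    def complete_objective(self, quest_id, index):
        self.active[quest_id].objectives[index].completed = True

    def to_json(self):
        # flatten all quests into plain dicts so the whole log round-trips as JSON
        return json.dumps({qid: asdict(q) for qid, q in self.active.items()})

    @classmethod
    def from_json(cls, data):
        log = cls()
        for qid, qd in json.loads(data).items():
            log.start(Quest(qid, [Objective(**o) for o in qd["objectives"]]))
        return log
```

Keeping quests as plain data like this is what makes save/load straightforward: the entire log can be dumped and restored without touching scene objects.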
Trigger System added
Previously, interaction was only possible through one-time events fired when entering or exiting a trigger at a specific point in the story progression. The Trigger System provides a new, more flexible approach that lets the player interact with virtually any object in the scene.
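The difference between the old one-shot story triggers and the new reusable ones could be sketched like this, again as an illustrative Python stand-in for the actual Unity implementation (the `Trigger` class and its fields are assumptions, not project code):

```python
class Trigger:
    """Minimal trigger: fires callbacks on enter/exit and stays reusable,
    unlike the old one-shot, story-bound triggers."""
    def __init__(self, on_enter=None, on_exit=None, one_shot=False):
        self.on_enter, self.on_exit = on_enter, on_exit
        self.one_shot = one_shot   # old behaviour: fire once, then go dead
        self.inside = False
        self.spent = False

    def update(self, player_inside):
        if self.spent:
            return
        if player_inside and not self.inside and self.on_enter:
            self.on_enter()
        elif not player_inside and self.inside and self.on_exit:
            self.on_exit()
            if self.one_shot:
                self.spent = True
        self.inside = player_inside
```

With `one_shot=False` the same trigger can fire arbitrarily often, which is what makes repeatable interaction with arbitrary scene objects possible.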
Dialogue System reworked
The old dialogue system was a list of strings that was triggered on demand, which only allowed for linear, scripted dialogues. The rework replaced it with a node-based approach, where each node carries its own logic, can influence and be influenced by variables and states, and enables conditional and dynamic dialogue sequences.
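A node-based dialogue of this kind could be sketched as a graph of nodes that mutate shared state and pick their successor via conditions. This is a hedged Python illustration of the concept, not the game's actual Unity code; `DialogueNode` and `run_dialogue` are invented names.

```python
class DialogueNode:
    """A node carries its own line, may write variables, and chooses the
    next node from conditional branches evaluated against shared state."""
    def __init__(self, text, effects=None, branches=None, next_id=None):
        self.text = text
        self.effects = effects or {}    # variables set when this node plays
        self.branches = branches or []  # list of (condition, node_id) pairs
        self.next_id = next_id          # unconditional fallback successor

    def play(self, state):
        state.update(self.effects)
        for condition, node_id in self.branches:
            if condition(state):
                return node_id
        return self.next_id

def run_dialogue(nodes, start, state):
    spoken, current = [], start
    while current is not None:
        node = nodes[current]
        spoken.append(node.text)
        current = node.play(state)
    return spoken
```

Because each node both reads and writes the shared state, the same conversation can take different paths on different playthroughs, which is exactly what a flat list of strings cannot do.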
Waypoint System added
Most NPCs previously had no AI controller or logic. The Waypoint System allows the developer to easily create predefined paths for NPCs to follow, e.g., when not interacting with the player. However, the system is not in use at the moment because of missing animations for the characters; see Future Work.
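Following a predefined, looping path at a fixed speed could be sketched in 2D as below. As with the other examples, this is an illustrative Python sketch under assumed names (`WaypointFollower`), not the Unity component itself.

```python
import math

class WaypointFollower:
    """Moves an NPC toward each waypoint in turn at a fixed speed,
    looping back to the first waypoint after the last one."""
    def __init__(self, waypoints, speed):
        self.waypoints = waypoints
        self.speed = speed
        self.position = waypoints[0]
        self.target = 1 % len(waypoints)

    def update(self, dt):
        tx, ty = self.waypoints[self.target]
        x, y = self.position
        dist = math.hypot(tx - x, ty - y)
        step = self.speed * dt
        if step >= dist:
            # snap to the waypoint and advance to the next one (wrapping around)
            self.position = (tx, ty)
            self.target = (self.target + 1) % len(self.waypoints)
        else:
            # move a fraction of the way along the direction vector
            self.position = (x + (tx - x) / dist * step, y + (ty - y) / dist * step)
```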
User Interface reworked
The existing user interface was partially outdated, still used placeholders, and did not scale properly with different resolutions. Most active elements were replaced with a more modern, cleaned-up version.
The basic layout now adapts and scales with the resolution of the target device.
New AR Level added
We added an AR level that tests the player's knowledge of Hiragana, one of the two syllabaries of the Japanese writing system.
The player must be able to do two things: correctly identify the spatial relation between two hiragana depicted on a six-sided cube, and remember the readings of the hiragana learned in another level. A 3D cube is placed in midair at the position of a virtual anchor and floats in place. One hiragana, drawn from a set of unique, randomized hiragana, is shown on each side of the cube, rotated into one of four orientations. Thus, two different hiragana can have the same spatial relation to each other ("A is above B"), and "A is to the right of B" need not imply "B is to the left of A". The player can rotate the cube around its center by using touch input on the screen, or can physically walk around the floating cube to find the proper alignment (see video below). Once the player has answered ten questions, the final score is shown on screen. A score greater than four allows the player to progress in the game, while a score of four or lower sends the player back to the previous level to repeat the learning content.
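The quiz-and-threshold flow could be sketched as follows. This is a minimal Python illustration of the progression rule only (ten questions, pass at five or more correct); the function name, the tiny hiragana subset, and the answer callback are all assumptions, not the game's actual implementation.

```python
import random

# small illustrative subset of the hiragana syllabary and its readings
HIRAGANA = {"あ": "a", "い": "i", "う": "u", "え": "e", "お": "o"}

def run_quiz(answer_fn, num_questions=10, pass_threshold=5, rng=random):
    """Ask `num_questions` reading questions and return (score, passed).

    A score greater than four (i.e. >= 5) lets the player advance; four
    or fewer sends them back to the previous level to repeat the content.
    """
    score = 0
    for _ in range(num_questions):
        kana, reading = rng.choice(list(HIRAGANA.items()))
        if answer_fn(kana) == reading:
            score += 1
    return score, score >= pass_threshold
```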
Localization Support added
All strings shown in the game are now saved alongside their localized variants in a central database and can be referenced by using unique ids. This has two main benefits:
- the game language can be easily changed given that all strings have been completely localized
- the developer has one central place where all string variants are shown and can be translated
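The id-based string database could be sketched like this; it is an illustrative Python stand-in (the `StringTable` name and fallback behavior are assumptions), not the game's actual localization backend.

```python
class StringTable:
    """Central string database: each unique id maps to per-language variants."""
    def __init__(self, language="en"):
        self.language = language
        self.entries = {}

    def add(self, string_id, **variants):
        self.entries[string_id] = variants

    def get(self, string_id):
        variants = self.entries[string_id]
        # fall back to English, then to the raw id, if a translation is missing
        return variants.get(self.language, variants.get("en", string_id))

table = StringTable()
table.add("greeting", en="Hello!", de="Hallo!", ja="こんにちは！")
```

Switching the game language then amounts to changing a single field, while every string and its variants live in one place for the translator.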
Audio Support added
A general Audio Manager class now controls all audio and integrates with the Dialogue System. This allows the developer, e.g., to add audio to any dialogue node simply by providing an audio file in the corresponding field in the Inspector.
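The coupling between the two systems could be sketched as a single hook the dialogue system calls per node. This is again only an assumed-name Python illustration (`AudioManager`, `on_dialogue_node`, the channel names); in the game itself this lives in Unity C# and the clip is assigned in the Inspector.

```python
class AudioManager:
    """Central audio control with per-channel volumes, plus a hook the
    dialogue system calls whenever a node carries an audio clip."""
    def __init__(self):
        self.volumes = {"music": 1.0, "effects": 1.0, "voice": 1.0}
        self.log = []  # stands in for the actual playback backend

    def play(self, clip, channel="effects"):
        if self.volumes[channel] > 0:
            self.log.append((channel, clip))

    def on_dialogue_node(self, node):
        # called by the dialogue system; plays the node's clip if one is set
        clip = getattr(node, "audio_clip", None)
        if clip:
            self.play(clip, channel="voice")
```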
Future Work
As for the game content, we need to work on the depth of the learning material. For example, drawing more from Japanese culture, traditions, or beliefs could provide a richer experience. In addition, the learning content needs to be better connected in many places, as there are currently many missed opportunities to do so. Existing level concepts must continue to be reassessed to ensure the future development of the game; an example is the level where two kanji can be combined in AR to form a Japanese compound word. Especially if the goal is a game that can teach more than just a few Japanese words, the amount of learning content (hiragana, katakana, kanji, and grammar) also needs to grow. Audio content would be a valuable addition as well: the game currently lacks background music, ambient and effect sounds, and audio used to support learning (e.g., the pronunciation of Japanese words). Another point, especially relevant for the vividness of the game world, is animations for NPCs. In many parts of the game, these currently behave more like statues than like the inhabitants of an interactive, living game world. For a larger user study in particular, most of these topics would be of great importance.
Conclusion
During this project, the game received many quality changes and improvements. Although no new learning content was added, the overall structure of many core game components has changed and improved. The game still has a lot of untapped potential and can only benefit from further development and attention. For the second part of this Guided Research, a user study (n=18) was conducted to better understand the effects of augmented reality in a serious game and its associated limitations. Due to Covid-19, the study had to be conducted online, resulting in limited control over the testing environment, which may have affected the results. In general, AR in the game was well received by the participants, including the majority who had little to no prior experience with the concept. At the same time, hardly any negative effects, such as frustration or tension, were observed. It should be noted, however, that most players were likely also under the influence of the novelty effect.