Author: Neske, Marvin
Supervisor: Prof. Gudrun Klinker
Advisor: Dyrda, Daniel (@ga67gub)
Submission Date: [created]

Abstract

Without balance in a gamespace, players will not enjoy the gamespace and, even worse, they will not enjoy the game. To prevent this, level designers create many iterations of each gamespace, and each iteration requires playtest data. Rather than relying on human playtesting to generate this data, we propose using agent-based modeling (ABM). In this context, ABM replaces the human playtesters with artificial agents. To bring the artificial agents as close as possible to human-like behavior, we use machine learning agents. By training the machine learning agents, human-like behavior was achieved in a small set of scenarios. The data generated by the trained agents in a gamespace can be used by designers to create new iterations of their gamespaces more efficiently.
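To illustrate the general idea, the following is a minimal sketch in Python, not the implementation used in this work: simulated agents play a simplified grid gamespace and their visited positions are aggregated into a heatmap. The grid, the random policy, and the episode length are hypothetical stand-ins for a real level and a trained machine learning agent.

    import random
    from collections import Counter

    # Hypothetical 2D grid gamespace; every cell is walkable in this toy example.
    WIDTH, HEIGHT = 10, 10

    def random_policy(pos):
        # Stand-in for a trained agent policy: move to a random neighboring cell.
        x, y = pos
        moves = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < WIDTH and 0 <= y + dy < HEIGHT]
        return random.choice(moves) if moves else pos

    def run_episode(policy, steps=200, start=(0, 0)):
        # One simulated playtest: let the agent act and record every visited cell.
        visits, pos = Counter(), start
        for _ in range(steps):
            visits[pos] += 1
            pos = policy(pos)
        return visits

    # Aggregate many simulated playtests into a visit-count heatmap that a
    # designer could inspect instead of waiting for human playtest sessions.
    heatmap = Counter()
    for _ in range(100):
        heatmap.update(run_episode(random_policy))
    print(heatmap.most_common(5))

In the approach described above, the random policy would be replaced by trained machine learning agents, so that the collected data approximates human-like play.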

Conclusion

Creating a gamespace is a process of constant iteration. To improve on the previous iteration, game designers require large amounts of playtest data, which conventional methods generate with human players. However, using human players has its drawbacks. To reduce the number of human playtests required, our proposed method uses agent-based modeling (ABM): machine learning agents play in place of humans and generate the data required to balance gamespaces.
While the machine learning agents do not resemble human players very closely yet, they showed promising results in simple scenarios: within these scenarios, the data generated by our agents matched the results expected from human players. Future work includes resolving the discrepancies between the implemented model and a real arena FPS. Afterwards, more complex and human-like agent behavior should be the main goal, since the quality of the generated data improves with the quality of the agent. In particular, an agent that can operate in arbitrary gamespaces would allow any arena FPS to be modeled.
In parallel to further improvements in agent behavior, additional verification, validation, and replication are necessary to confirm the correctness of the model. A recommended first step is to use real playtest data from different scenarios for further macrovalidation and empirical validation.
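As a sketch of what such an empirical comparison could look like (an assumption on our part, not the validation procedure used in this work), agent-generated and human playtest data could both be summarized as visit-count heatmaps and compared with a simple distance measure; the heatmap values below are made up for illustration.

    from collections import Counter

    def normalize(counts):
        # Turn raw visit counts into a probability distribution over cells.
        total = sum(counts.values())
        return {cell: c / total for cell, c in counts.items()}

    def total_variation(p, q):
        # 0.0 means identical distributions, 1.0 means completely disjoint ones.
        cells = set(p) | set(q)
        return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in cells)

    # Hypothetical heatmaps: one from simulated agents, one from a human playtest.
    agent_heatmap = Counter({(0, 0): 40, (1, 0): 35, (1, 1): 25})
    human_heatmap = Counter({(0, 0): 30, (1, 0): 30, (1, 1): 40})

    print(total_variation(normalize(agent_heatmap), normalize(human_heatmap)))

A small distance would indicate that the agents reproduce the spatial behavior observed in the human playtest for that scenario.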
