Author:

Julius Krüger
Supervisor: Prof. Gudrun Klinker
Advisor: Daniel Dyrda (@ga67gub)
Submission Date: [created]


Abstract

For our practical course, we were tasked with developing a game on the topic of AI. We developed The Chiaroscuro Conspiracy, an isometric game focused heavily on narrative and exploration, set in a fictional city-state during the Renaissance era. Throughout the project, we explored how generative AI models could be used in different stages of the game development process and in the content creation pipeline. In this talk, we will present how we used different models during the different stages of development and whether they improved our process or hindered it.

Trailer

Game Design

The goal of the practical was to develop a game around the theme of AI, and we were encouraged to explore the applications of Generative Artificial Intelligence for video games. We decided to integrate these AI models into our asset pipelines and use them to assist in content production.

Based on this decision, we tried to find a game design that would play to the strengths of Generative AI, which at the time of development were the generation of text and 2D images. Consequently, we decided to make a top-down isometric point-and-click adventure. At the core of the game we wanted exploration and a branching narrative: players would explore the environment by steering our main character through the game and would learn more about the world through exploration and by talking to the characters inhabiting it. Engaging NPCs in conversation would also be our main device for delivering the story.

Inspiration

Disco Elysium (2019), Pillars of Eternity (2015)

Implementation

Story

The game is set in a fictional city-state during the Renaissance period; players take control of the art merchant, forger, and spy Cornelius. The story begins with Cornelius receiving a message from an old friend who asks to meet in order to convey important information. When Cornelius goes to meet him, it turns out that his friend has been detained by the city guard, but Cornelius learns of a conspiracy aiming to take over the city-state. To find out more, players need to recruit informants, find the old friend, and gain access to the illustrious masquerade ball taking place in the city.

We used ChatGPT throughout most of the writing process. After we had settled on an initial premise, we generated high-level plot points with the AI, selected the points that struck us as most interesting, and worked them into our high-level plot. We then repeated the process, fleshing out the plot line with every generation, and proceeded similarly to develop the characters the plot required.

Since our story is mostly conveyed through dialogue, a large amount of it had to be written. We implemented the dialogue using the narrative scripting language Ink and presented it to the player through a UI we built on top of it in the Unity engine.

We generated each conversation in the game individually. Since we wanted our narrative to be consistent, we always needed to prime ChatGPT with the desired writing style, the context required for a given dialogue, and the character of the speaker. Additionally, we wanted ChatGPT to generate dialogue directly in the Ink scripting language. This turned out to be somewhat tricky, since the AI was never primed exactly the same way even when using the same initial prompt, which made maintaining consistency between dialogues challenging.
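For illustration, here is a minimal hand-written sketch of the kind of Ink structure we asked ChatGPT to target — knot names, lines, and choices in this snippet are invented for this example and are not taken from the game:

```ink
// Illustrative example only – not an actual dialogue from the game.
=== meet_contact ===
The contact glances over his shoulder before speaking.
"You are Cornelius, the art merchant? I hear you also trade in secrets."

* ["I trade in whatever sells."]
    "Then listen closely. Something is stirring behind the masquerade."
    -> hint
* [Stay silent.]
    He shrugs. "Suit yourself." -> END

=== hint ===
"Find your friend before the city guard does."
-> END
```

Knots (`=== name ===`), choices (`*`), and diverts (`->`) are the core Ink constructs our Unity-side dialogue UI consumed; getting ChatGPT to emit this structure reliably was part of the priming effort described above.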

Our process for generating dialogue was to produce a first draft and then refine it from there toward the desired outcome. One issue with this approach was that it was hard to go back to previous generations when a dialogue had developed in the wrong direction: since every prompt and generation becomes part of the AI's conversational "memory", a misstep can be difficult to fully reverse.

While ChatGPT enabled us to generate large amounts of content in a very short time, the result was ultimately underwhelming. A lot of additional human intervention would be required to bring the text to what we would consider adequate quality. It is also possible that better results could be achieved by someone more proficient at prompt engineering.

Story UI

Level Design

As mentioned before, the level was set in a fictional Renaissance town. We wanted the city to be diverse while still being quite condensed, so we decided to build it out of a few prominent districts that would display the distinctive parts of our city, visually as well as story-wise.

We came up with four major districts: the Harbor, the Market, the Rich District, and the Poor District. Each district allowed us to vary the presentation a bit, such as colors and sounds, to heighten the immersion of each area.

At first, each district was set to be its own scene in Unity, but we decided quite quickly to condense everything into a single map so the world would feel more unified. With further iterations came ever-increasing verticality in the level: the Rich District was raised far above the city, and the Poor District was lowered even further than the Harbor. This verticality represents the social status of the inhabitants, with our protagonist's art store sitting between both worlds.

Initial Level Sketches, First Level Blockout, Level Iteration, Advanced Level Prototype

Final Level

Art

The art of our game is a mix of AI-generated, AI-assisted, and handmade assets, as well as some assets from asset stores.

For concept art and moodboard references we used Midjourney a lot, as it was a very fast way to create images and concepts in a certain style.

Our main focus was on creating the 3D houses of our Renaissance city. To create so many unique houses, we relied on Midjourney's image generation, producing images that depict houses from an isometric view. Here we had to keep an eye on consistency, as it was important that the art style and lighting stayed consistent across all generations. To make the images more consistent, we fed generated images we liked back into Midjourney as reference inputs. We still generated about 1,200 images to arrive at the 46 we used in the end, but this was still a lot faster than making everything by hand.

Midjourney-generated Houses

When we got an image we liked, we reconstructed the camera view with fSpy and imported the image into Blender. There we built a simple model resembling the house in 3D, which allowed us to UV-project the generated image back onto the model to texture the house asset. To create additional texture maps, such as a normal map or an ambient occlusion (AO) map, we used Substance Sampler, which can create a complete texture set from just one input image. This way we were able to create a finished house asset in mere hours from start to end.

Art Generation Pipeline WhatsApp Video 2023-10-15 at 11.32.14.mp4 Final Result for Houses

Unfortunately, this approach had its limitations when it came to more precise assets, like our stairs. Midjourney was not able to create consistent, fitting, and above all structurally logical images of more complex structures where the details mattered; for example, it did not produce consistent stone steps for the stairs. The same problem occurred with trees. Midjourney was not suitable for objects whose details were smaller and more important, so we decided to create those assets ourselves with Blender and Substance Painter.

Hand-modeled Stairs, Rock, Tree

For our characters, we at first tried an approach similar to what we did with our houses: create image textures in Midjourney of the front and back of a character standing in a T-pose, and then UV-project the generated images back onto a character model. Unfortunately, this led to quite prominent texture seams and smears at the sides of the character. Since the characters were close enough to the camera that these artifacts were clearly visible, this approach was not good enough for us.

Initial Character Creation Attempt Seams and smears on initial approach Polygon Characters

As a solution, we used the character models from Synty's Fantasy Characters asset pack. As the characters were quite low-poly and flat-textured, we adjusted and UV-unwrapped them in Blender and completely retextured them in Substance Painter to better fit our city's art style.

Original Polygon Knight Adjusted Polygon Knight Adjusted Polygon Characters

For the UI elements, we generated button and panel textures via Midjourney, as well as the character portraits for our dialogue scenes.

Animation

The game features two sorts of animations: those for the main character and those for the non-playable characters (NPCs). Given that players interact with the main character most frequently, our goal was to enhance immersion and deepen the character's personality through custom animations. For the NPCs, who required a multitude of animations, we opted for Mixamo animations.

The main character's animations were crafted using Cascadeur, an animation tool that uses AI to refine character animations for a more lifelike appearance. To add depth to the main character and pique the player's curiosity about a potential background story, we gave him an injured leg. To effectively convey this injury to the player, we tailored the animations to reflect his condition: we equipped him with a walking cane and emphasized his reliance on it during walking, turning, and idle animations.

Walking.mp4

Playtests

During our playtests, we received valuable feedback regarding various aspects of our game.

Firstly, although the animations were praised, there was a notable complaint about the slow walking tempo, especially when traversing large areas, which risked player boredom and frustration.

The next remark we received was the desire for more engaging activities beyond mere movement and story progression. Players were keen to see additional gameplay mechanics integrated to bring more excitement and variety to their in-game experience.

Amidst these challenges, it is worth noting that our game's artwork garnered numerous positive remarks.

Final Game

Conclusion

In summary, integrating AI into our game development process did accelerate various aspects of the workflow, such as art, animation, and story design. However, AI struggled when confronted with precise requirements, which is why it should be used thoughtfully.

One important takeaway from our experience is that all aspects of the game, whether AI-assisted/generated or not, risk being undervalued or overlooked because of AI's involvement in specific areas. This can unintentionally diminish the sense of accomplishment derived from our collective efforts.

In conclusion, achieving a harmonious balance that recognizes both the efficiency AI can bring and the human creative contributions is essential for a more rewarding game development experience.  

 

