Authors: Markus Gumbart, Jonas Goos, Nina Cordes
Supervisor: Prof. Gudrun Klinker
Advisor: Daniel Dyrda (@ga67gub)
Submission Date: [created]


Abstract

Aetheria is a game about an A.I. robot traveling through time to save the world. The project's goal was to develop a game in the context of modern A.I. by either including it as a theme or making use of it in the development process. Our team aimed to achieve both: on one hand with a story revolving around a robot, and on the other by using several different A.I.-based tools in the development of the game. We used Midjourney for concept art and inspiration, GitHub Copilot for coding support, several lesser-known A.I. tools for texture and skybox generation, and more.

Trailer

AetheriaTrailer.mp4

Game Concept

Initial Concept

Initially, we wanted to make a game that revolves around building a base, farming, and gathering resources. This rather basic concept is spiced up by traveling through different timelines: the past, the present, and three different futures. In one future, a nuclear war killed all humans, and nature reclaimed the earth, covering everything in thick vegetation. In another, humanity is also extinct, but because it used up all of the planet's resources and exhausted nature enough to kill all living beings, leaving a dystopian, Mars-like landscape. The best of the three futures is called Utopia, where humanity learned to live in harmony with nature and technical advances led to a comfortable life with futuristic buildings and flying cars.

We found that the timelines were not distinct enough, so we removed the present and made Utopia the new present, leaving only two futures, in both of which humanity goes extinct. This also led to the main idea of the story: the little robot is sent out to gather resources to build a rocket. The rocket will carry humanity to a different planet, making more space on earth and avoiding the two unpleasant futures. The robot was built for this task because nuclear waste and toxic fumes make the futures uninhabitable for humans.

Originally, there was supposed to be one working portal at the start, with more portals scattered throughout the map, broken and unusable. When the player repaired a portal, they would gain its recipe and could from then on build their own portals wherever they wanted. This was completely scrapped, however: in the final game, traveling through time costs resources, there is only one portal, and no further ones can be built.

Mood boards of different timelines

Core Mechanics

Most of the playtime is spent gathering resources. This can be done in two ways: collecting and mining. Collectibles are stones, sticks and peat. They can be picked up by standing close to them and pressing E, which is indicated on screen by a floating "E" sprite above the collectible item. Trees and stones can be mined by standing close to them, looking at them, and pressing the primary mouse button, which makes the character swing the equipped tool. If it is the correct tool for that destructible, resources are gained with every hit while energy from the battery decreases. Depending on the timeline and the type of stone or tree, different resources are gained with different probabilities.
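The per-timeline, probability-based drops described above can be modeled as a weighted drop table. The following is a minimal Python sketch for illustration (the game itself is built in Unity/C#); all table entries and resource names here are hypothetical, not the actual in-game values:

```python
import random

# Hypothetical drop tables: (timeline, stone type) -> {resource: weight}.
# The real tables and probabilities in the game differ.
DROP_TABLES = {
    ("present", "bronze"): {"stone": 0.7, "copper": 0.3},
    ("present", "silver"): {"iron": 0.6, "silver": 0.4},
    ("future", "gold"): {"gold": 0.5, "scrap": 0.5},
}

def roll_drop(timeline: str, stone: str, rng: random.Random = random) -> str:
    """Pick one resource for a single hit, weighted by the drop table."""
    table = DROP_TABLES[(timeline, stone)]
    resources = list(table.keys())
    weights = list(table.values())
    return rng.choices(resources, weights=weights, k=1)[0]
```

On each successful hit, the mining code would call `roll_drop` with the current timeline and the stone's category to decide which resource to award.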

The stones are split into three categories: bronze, silver and gold. An overview of which stone yields which resources can be found here:

The character controller for running around conforms to current industry standards and uses the common key bindings. There is an option to sprint, but it uses energy. When energy is low, sprinting is no longer possible; when it is empty, the character moves very slowly, the screen darkens, and mining is no longer possible.

Game Development

Key Examples

Character

The 3D character model of the robot and its animations are self-made in Blender. The original concept art, on the other hand, is A.I. generated from prompts describing how we imagined our character.

While we were able to import most animations directly from Blender into Unity, the tracks had to be animated differently. For them to work in Blender, they needed a specific order of modifiers that could not be exported. So instead of exporting a mesh with animations, we exported an already baked 3D animation with several frames and iterate through those frames via code in Unity. That way it was easy to adjust the track speed depending on the robot's current speed.
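Stepping through the baked frames at a rate tied to movement speed can be sketched as follows. This is an illustrative Python version of the logic (the game implements it in Unity/C#); the frame count and speed factor are made-up values:

```python
# Assumed constants, not the project's actual values.
FRAME_COUNT = 24          # frames in the baked track animation
FRAMES_PER_METER = 4.0    # how fast the tracks visually roll per meter moved

def next_track_frame(current_frame: float, speed: float, delta_time: float) -> float:
    """Advance the (fractional) frame index proportionally to the robot's
    speed, wrapping around at the end of the baked animation."""
    advanced = current_frame + speed * FRAMES_PER_METER * delta_time
    return advanced % FRAME_COUNT
```

Calling this once per rendered frame with the robot's current speed makes the tracks roll faster or slower in step with the movement, and a speed of zero leaves them still.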

The lower half of the character consists of a middle piece, the tracks, and some beams connecting them. In human models, inverse kinematics can be used to adapt the legs to the ground by simply aligning each foot to the normal of the ground it stands on. But since our robot has such a large base without joints, we had to approach this problem differently:
In every frame, a raycast is sent downwards from each of the robot's four corners, and wherever a ray hits a surface, the corresponding normal is taken. The expected normal at the midpoint between these hits is then calculated and used to tilt the lower body. The length of the raycasts is kept short so that the result still looks reasonable when the character stands on a cliff. The adjustment also only occurs while the character is grounded and is slowly reset when in the air.
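The midpoint normal can be computed by averaging the four corner normals and renormalizing. A small Python sketch of that step (the game does this with Unity vector types; the fallback behavior for missed rays is our assumption):

```python
import math

def average_normal(normals):
    """Average the four corner surface normals and renormalize.

    Each normal is an (x, y, z) tuple from a downward raycast hit.
    A corner whose ray found no surface within range could fall back
    to the world up vector (0, 1, 0) -- an assumed convention here.
    """
    sx = sum(n[0] for n in normals)
    sy = sum(n[1] for n in normals)
    sz = sum(n[2] for n in normals)
    length = math.sqrt(sx * sx + sy * sy + sz * sz)
    return (sx / length, sy / length, sz / length)
```

The resulting vector is then used as the up direction for the robot's lower body, so the base tilts smoothly to match the average slope under its four corners.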

Map

(automatic collectible placement, grass optimization and water system)

Timelines

Because the whole game concept revolves around the idea of timelines, we needed a fast and easy way to build a world that can change into different timelines. At the same time, we wanted to avoid building four different maps and instead decided to make four variations of the same map. To implement this, we made use of reactive programming. A single observable stores the current timeline; whenever the timeline is changed through a portal, the observable emits the new value into its stream. We then use this stream to change the world in different ways. One common approach was to create an object with four child objects. The parent object has a controller that subscribes to the aforementioned stream and enables the correct child based on the current timeline. This method was used to create different trees, rocks and workbenches, but also post-processing volumes for the respective timelines.
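The observable-plus-controller pattern described above can be sketched like this. The sketch is in Python purely for illustration (in Unity this would typically be a C# `ReactiveProperty` with `GameObject.SetActive`); the timeline names are placeholders, not the project's actual identifiers:

```python
class Observable:
    """Minimal observable holding one value and notifying subscribers."""
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)
        callback(self._value)  # emit the current value immediately

    def set(self, value):
        self._value = value
        for cb in self._subscribers:
            cb(value)

# Placeholder timeline names (assumed for this sketch).
TIMELINES = ("past", "present", "future_a", "future_b")

class TimelineVariants:
    """Parent controller: enables exactly one child per timeline."""
    def __init__(self, timeline_stream):
        self.enabled = {t: False for t in TIMELINES}
        timeline_stream.subscribe(self._on_timeline_changed)

    def _on_timeline_changed(self, timeline):
        for t in TIMELINES:
            self.enabled[t] = (t == timeline)
```

Using a portal then reduces to a single `set` call on the observable; every subscribed controller (trees, rocks, workbenches, post-processing volumes) swaps its active child in response.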

Usage of A.I.

A.I. has been used throughout the whole development process. When we started to develop our game idea, we used Midjourney a lot to generate images for our mood boards, but also concepts for our robot and time machine. It was a really good way to visualize how our game could look, and we found it especially helpful for the mood boards.

Another big use of Midjourney and Stable Diffusion was the UI. Almost all UI elements were A.I. generated. This definitely sped up the design process and is a good alternative to using free UI assets, because we could get exactly the icons we needed. It was, however, quite difficult to keep all the icons in a similar style; many iterations and rephrasings of the prompts were needed to achieve a good result.

Additionally, many of the textures used are A.I. generated as well. Generating environment textures worked quite well and rather easily using e.g. Poly, which generates full PBR textures with normal and ambient occlusion maps. We also generated textures for the 3D models, like the pickaxe, to get different upgrade levels using Dream Textures. This was a rather difficult and time-consuming process. Dream Textures uses a projection of the 3D model and generates a texture from the depth data of this projection. This has the obvious disadvantage that the texture only works from that one perspective. On symmetrical models, this can be mitigated by applying the same texture to the other side or by generating multiple textures from different angles. But this method needs intensive tweaking and post-processing, like adjusting the UV maps and generating normal maps (using yet another A.I.). We were also not very happy with the quality of the generated textures most of the time.

The skyboxes (like the one above) used in the game are fully generated by an A.I. that is specifically trained on such images. The skybox generator by Blockade Labs works like any other text-to-image A.I. and worked great for our purposes, although we needed many iterations to get to our final results.

Of course we also used ChatGPT, which helped us write the introduction text, come up with a name for the game, and handle some repetitive coding tasks. Everyone in the team also used GitHub Copilot, which sped up our coding.

Files

Game executable: https://nextcloud.in.tum.de/index.php/s/ekebZHAGFEAG8NR

Attributions

Textures and Models

Code