The Birth of a New Game Studio
Part One: Humble Beginnings
Design workflow
For this first process, we did some research on how design was done at other companies, and came up with our own philosophy based on the paradigm of "enriching the vision." Under this model, one person (or a small group) is in charge of the initial creative spark, which is then freely enriched by the rest of the team. The person doing the initial design must keep it short and simple: in our case, this design amounted to choosing a setting for the game and sketching out the gameplay and the overall graphic direction. From that moment on, creative control is shared with everyone, with the initial designer acting only as an "idea collector" and moderator. In our project, much of the technology and art has been designed directly by the programmers and artists.
We chose this method because, while keeping the process under the control of a central design team, it exploits all the benefits of group brainstorming sessions. Having an established vision helps focus the team's talent, and makes their contributions really significant. We were lucky enough to find an initial vision that excited everyone on the team, so the core game pleased everyone. From there, adding elements on top of it was just a matter of letting the team's talent run free.
As proof of this method in action, I'll give an example. Early on, we decided to build a technology demo showcasing our ideas. The demo was to contain (among other items) a palace, visible both from the outside and from the inside. Below you can see three images that clearly illustrate the "enrich the vision" process: the designer's view on the left, the working concept in the middle, and the beautifully modeled artwork on the right. As you can see, my role as designer was only to sketch out the shapes. I detailed only those parts that would affect the gameplay later on: the number of stories, the size, the overall style. Going beyond that would have been a mistake, as I would have invaded the concept artist's creative territory. As can be seen, the final version preserves all the structural elements, while going way beyond the sketch in terms of visual quality and appeal.
[Figure: Three images illustrating the "enrich the vision" process: the designer's sketch on the left, the working concept in the middle, and the final modeled artwork on the right.]
Data workflow
All games (with rare exceptions such as Tetris) involve huge amounts of data. In fact, the size ratio of code to data can be as extreme as 5% to 95%. In our demo, the executable file is about 500 KB (we do not embed any resources into the EXE file), while the total data is about 60 MB. The data production-and-integration pipeline must therefore be clearly laid out to avoid redundant work.
Basically, by planning the data workflow a company defines the relationship between the technology and art teams, which boils down to deciding who will take care of what. It is a good idea to analyze the strengths and weaknesses of your specific team to help make these decisions. In our case, for example, everyone was capable of using mainstream 3D and 2D art tools, but only the technology members of the team were programmers. It therefore made more sense for the technology people to enter the art arena than the other way around. For this and other reasons, we decided that, when designing the game engine, the technology team would define the internal art formats to be used. As both programmers were art-sensitive, the decisions they made also made sense to the art team.
Similarly, it was decided that art would be inserted into the demo by the technology people. The reason for this is rather complex. Like many teams, we started coding the main engine and underestimated the time required to build the art into it. Later on we had our art team working in perpetual crunch mode while the engine code was basically stable, so the technology people had to do some work on the art front to ensure everything was ready on time. We were lucky to be able to solve this issue because both teams knew modeling. Still, this is a lesson to all teams out there: crafting art is a slow process. Plan ahead or prepare for disaster.
The last decision to make was the graphic look of the game. You can see a clear example of the evolution of the art below, which depicts how a riverboat concept evolved from a first cartoonish version to the final, dense and elaborate drawing. Some companies use concept art only as a medium for modelers to get an idea of what to model. Others consider it a legitimate art form, and also use it as a promotional tool for the game. We are clearly in the latter camp, as we think that good concepts can showcase not only the art quality, but also the mood and feel of the game. Moreover, good concepts can help focus the vision of the team: having them posted around the office helps not only modelers but also programmers and designers immerse themselves in the game world early in the development process, and thus create a superior product.
[Figure: A riverboat concept evolving from a first cartoonish version to the final, dense and elaborate drawing.]
Lastly, we decided to follow the typical KISS (Keep It Simple, Stupid) principle: we stuck to popular, off-the-shelf applications and common data formats. Making games is complex enough; there is no need to make things worse by using arcane tools. The only exception to this rule was our texture file format. Our engine relies heavily on multipass rendering techniques via OpenGL, and we often ended up doing unusual things to the textures' alpha channels. Thus, we decided to encode textures in a proprietary format, which involved creating a small command-line tool to generate the files. Other than that, we stuck to widely available mainstream tools. We plan to expand our tool library to include a full-blown editor for creating levels from the artist's perspective, but that is obviously a task way beyond the throughput of a two-man programming team.
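To make the idea concrete, here is a minimal sketch of what a command-line texture packer along these lines might look like. The "PAK1" magic number, the header layout, and the alpha premultiplication step are all illustrative assumptions for this article, not our real format:

```cpp
// tex2pak.cpp -- hypothetical sketch of a command-line texture packer.
// Reads a raw RGBA8 image, premultiplies the alpha channel, and writes
// the pixels behind a small header. The "PAK1" magic, header layout,
// and premultiply step are illustrative, not the studio's actual format.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct PakHeader {
    char     magic[4];   // "PAK1" (hypothetical)
    uint32_t width;
    uint32_t height;
};

int main(int argc, char** argv) {
    if (argc != 5) {
        std::fprintf(stderr, "usage: tex2pak in.rgba width height out.pak\n");
        return 1;
    }
    const uint32_t w = std::strtoul(argv[2], nullptr, 10);
    const uint32_t h = std::strtoul(argv[3], nullptr, 10);

    std::vector<uint8_t> pixels(size_t(w) * h * 4);
    FILE* in = std::fopen(argv[1], "rb");
    if (!in || std::fread(pixels.data(), 1, pixels.size(), in) != pixels.size()) {
        std::fprintf(stderr, "could not read %s\n", argv[1]);
        return 1;
    }
    std::fclose(in);

    // Premultiply RGB by alpha offline so the engine's multipass
    // blending does not have to do it at load time (illustrative step).
    for (size_t i = 0; i < pixels.size(); i += 4) {
        const uint32_t a = pixels[i + 3];
        pixels[i + 0] = uint8_t(pixels[i + 0] * a / 255);
        pixels[i + 1] = uint8_t(pixels[i + 1] * a / 255);
        pixels[i + 2] = uint8_t(pixels[i + 2] * a / 255);
    }

    PakHeader hdr = {{'P', 'A', 'K', '1'}, w, h};
    FILE* out = std::fopen(argv[4], "wb");
    if (!out) { std::fprintf(stderr, "could not open %s\n", argv[4]); return 1; }
    std::fwrite(&hdr, sizeof hdr, 1, out);
    std::fwrite(pixels.data(), 1, pixels.size(), out);
    std::fclose(out);
    return 0;
}
```

The point of such a tool is that the expensive or unusual work happens once, offline, while the engine's loader stays trivial.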
Code workflow
The final and most complex workflow was the one taking place in the core engine. Our technology today consists of about 200 C++ files, totaling well over 50,000 lines of code. It was written, from scratch, in five months by two people working remotely, so keeping in sync was a major issue. Frequently, one of the programmers would work on some new feature, which would have unknown side-effects on the other's work. Two strategies were established in order to fight this.
First, we followed a hard-core object-oriented approach. In our whole code base, only one file was not a class or object: the file containing the main program routine. Apart from that one, all functionality was encapsulated into classes and, when required, higher-abstraction design patterns. This required the development team to spend some extra time discussing interfaces and methods. Still, the benefit was clear: once the interfaces were stable, changes could be made to the innards of a subsystem with little overall impact.
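As a minimal illustration of the idea (the class and method names here are hypothetical, not taken from our engine), callers depend only on an agreed-upon abstract interface, so the concrete implementation can be rewritten freely:

```cpp
// Sketch of interface-based encapsulation; names are illustrative.
#include <string>

class TextureManager {            // the stable, agreed-upon interface
public:
    virtual ~TextureManager() = default;
    virtual unsigned load(const std::string& file) = 0;  // returns a handle
    virtual void     bind(unsigned handle) = 0;
};

class GLTextureManager : public TextureManager {  // innards free to change
public:
    unsigned load(const std::string& file) override {
        // ...decode the file, upload it via OpenGL, cache the handle...
        return 0;
    }
    void bind(unsigned /*handle*/) override {
        // ...a glBindTexture call would go here...
    }
};

// Client code written against the interface keeps compiling unchanged
// no matter how GLTextureManager is reworked internally.
void drawScene(TextureManager& tm) {
    unsigned marble = tm.load("marble.tex");
    tm.bind(marble);
}
```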
The second decision was an architecture based on micro-kernels and external subsystems. The idea of micro-kernels comes from operating system design: a lightweight piece of code acts as a kernel dealing only with base engine functionality (window management, events, and base types), and all remaining functionality is implemented as external subsystems. These subsystems can be added, deleted, or modified with little or no trouble. In our engine, the core package includes vertex, color, camera, and texture handling, and the rest is implemented in more or less autonomous packages. A good example of this is our character animation module. This module is quite big, consisting of about ten classes with deep semantics built into them. Our system is designed to animate seamless skinned skeletal models with props. Still, we can activate or deactivate it easily. In fact, we can separate that part of the code from the rest of the program and treat it almost like an independent entity. Similar isolation could be achieved by using a version-control system, but in our first months we did not have such a tool available, so some discipline was required from both coders to achieve the same result. As our coding team expands, the need for a version-control tool such as Visual SourceSafe will increase, as more programmers also means a higher risk of a major disaster in the code.
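Here is a minimal sketch of what such a micro-kernel might look like; the names and the registration scheme are assumptions for illustration, not our actual engine code:

```cpp
// Hedged sketch of a micro-kernel with pluggable subsystems (all names
// illustrative). The kernel knows only a tiny Subsystem interface, so
// modules such as character animation can be attached or detached
// without touching the core.
#include <map>
#include <memory>
#include <string>

class Subsystem {
public:
    virtual ~Subsystem() = default;
    virtual void update(float dt) = 0;   // called once per frame
};

class Kernel {
public:
    void attach(const std::string& name, std::unique_ptr<Subsystem> s) {
        subsystems_[name] = std::move(s);
    }
    void detach(const std::string& name) { subsystems_.erase(name); }

    void tick(float dt) {  // base loop: window, events, then subsystems
        for (auto& [name, sub] : subsystems_) sub->update(dt);
    }
private:
    std::map<std::string, std::unique_ptr<Subsystem>> subsystems_;
};

// The animation module lives entirely outside the kernel...
class CharacterAnimation : public Subsystem {
public:
    void update(float /*dt*/) override { /* advance skeletal poses */ }
};

int main() {
    Kernel kernel;
    kernel.attach("animation", std::make_unique<CharacterAnimation>());
    kernel.tick(0.016f);        // runs with the module active
    kernel.detach("animation"); // and just as happily without it
}
```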
Epilogue
After much discussion, coffee, and sleepless nights, we finished our first demo or, at least, we have something we can show and talk about. This week we will go to San Jose, California, to take part in our first Game Developers Conference as a company. We won't really be showing much, as we are still in our infancy and everything is quite primitive and unpolished. Showing unfinished work is a good way of making a bad impression, so we are going to play it safe. Still, we know that these events provide a concentration of ideas and people that makes them the right place to make contacts, publicize your work, and learn new techniques. I have attended four SIGGRAPHs, one GDC, and countless e-commerce trade shows, and have always returned with fresh, new ideas.
In this first article, we have seen how a little planning and serious teamwork can produce results, even with a sub-minimum team and infrastructure. In the next issue of this series (fittingly titled "Goin' Places") we will be covering our stay at GDC 2001: the conference sessions, the people, and the expo. We will try to show how game events like this can help a newborn developer. We hope to have some new experiences to share, and lessons learned along the way. See you then.