Scaling Special Effects
Adding distinctive special effects is a great way for developers to distance their title from the competition. The kinds of special effects we're talking about here include particle systems for generating smoke and fire, texture tricks, fog volumes, lens flares, and similar onscreen pyrotechnics.
Many kinds of effects can be scaled effectively. The simplest way to handle scaling is to switch off the effect entirely when running on lower-end machines. Resourceful developers are also investigating more sophisticated ways of scaling these techniques.
Many of the multi-texture tricks developers use to increase realism can be reduced or switched off to add scalability. Some of these include gloss mapping (to make objects look partially shiny), dynamic environment mapping (to reflect moving objects and not just stationary ones), detail texturing (to add close-up detail), and bump mapping (to add surface relief).
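As a rough illustration, the sketch below gates optional texture passes on a detail setting. The pass functions, detail levels, and thresholds are hypothetical placeholders, not something taken from a particular engine:

```cpp
#include <cstdio>

enum class DetailLevel { Low, Medium, High };

// Hypothetical pass functions; a real renderer would issue draw calls here.
void DrawBasePass()   { std::puts("base pass"); }
void DrawDetailPass() { std::puts("detail texture pass"); }
void DrawGlossPass()  { std::puts("gloss map pass"); }
void DrawBumpPass()   { std::puts("bump map pass"); }

struct Material { bool hasGlossMap, hasDetailMap, hasBumpMap; };

void DrawObject(const Material& mat, DetailLevel detail)
{
    DrawBasePass();                                   // always drawn

    if (detail >= DetailLevel::Medium && mat.hasDetailMap)
        DrawDetailPass();                             // close-up surface detail

    if (detail == DetailLevel::High) {
        if (mat.hasGlossMap) DrawGlossPass();         // partial shininess
        if (mat.hasBumpMap)  DrawBumpPass();          // surface relief
    }
}
```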
A particle system offers a good example of the many ways in which an effect can be scaled. Basically, a particle system is a collection of objects (particles) that behave according to a set of rules for the given system. They typically all move differently, but follow some global pattern, which makes them useful for depicting smoke, steam, blowing leaves, and so on. You can scale down a particle system by altering the granularity of the effect: reduce the number of particles but increase the particle size, so that the effect keeps the same density but takes on a grainier appearance. Another way is to simplify the manner in which the individual particles are drawn. Is each one a full polygonal model, or just a sprite? You can also scale the manner in which particles interact with their environment. Does the application detect collisions between particles and their environment, or not? Are collisions between the particles themselves detected? Simplify the behavior and you can effectively scale down the effect for low-performance systems.
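The following sketch shows one way these trade-offs could be expressed in code. The quality scale, the count-versus-size rule, and the collision thresholds are assumptions made for illustration, not the article's method:

```cpp
#include <cmath>

struct ParticleSystemConfig {
    int   particleCount;
    float particleSize;
    bool  collideWithWorld;      // particle-vs-environment collisions
    bool  collideWithEachOther;  // particle-vs-particle collisions
};

// 'quality' runs from 0.0 (lowest-end target) to 1.0 (full effect).
ParticleSystemConfig ScaleSmokeEffect(float quality)
{
    ParticleSystemConfig cfg;
    cfg.particleCount = static_cast<int>(1000 * quality) + 50;   // fewer particles...
    cfg.particleSize  = 0.1f / std::sqrt(quality + 0.05f);       // ...but each one larger
    cfg.collideWithWorld     = quality > 0.5f;   // drop expensive collision tests first
    cfg.collideWithEachOther = quality > 0.8f;   // inter-particle collisions cost the most
    return cfg;
}
```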
Setting the Scaling Method
Every application is different, and the best techniques for supporting scalability will vary from title to title. Regardless of the method a developer chooses to build scalability into a game, another question remains: how to set the appropriate level of scaling.
The different approaches for deciding what level of content to use fall into four areas: letting the user set the level, choosing defaults from hard limits on the hardware configuration, pre-runtime profiling, and runtime profiling.
Typical applications apply one or more of these approaches in combination.
Letting the user set the level is probably the most common means of addressing this issue. Typically, a control panel lets the end user enable or disable features, or select the appropriate level of content. This approach is certainly the easiest to implement, and it also gives the end user some control over their experience. On the negative side, it requires that the end user understand some fairly complex performance tradeoffs and system configuration concerns. Novice users can easily become frustrated.
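For illustration only, a settings screen of this kind might do little more than fill in a structure like the hypothetical one below and hand it to the renderer:

```cpp
// Hypothetical options a control panel might expose directly to the player.
struct UserGraphicsSettings {
    int  geometryDetail    = 2;     // 0 = low .. 2 = high
    int  textureDetail     = 2;
    bool enableParticles   = true;
    bool enableLensFlares  = true;
    bool enableBumpMapping = true;
};
```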
Some applications determine default settings according to designated hard limits. The application surveys the hardware configuration during installation (including the CPU, graphics card, memory, and so on) and selects a predetermined level of content that corresponds to that level of system performance. This approach matches content to system performance reasonably well, but it has some drawbacks, too. One is that the application may run into a configuration the developer never anticipated. Also, factors other than those collected while determining the system configuration could adversely affect performance. And for applications that set the level of detail at install time, a hardware configuration change (such as the user upgrading the CPU or graphics card) would invalidate the settings.
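A hypothetical install-time survey might look something like the sketch below. In a real installer the survey would use OS- and driver-specific calls; here the query simply returns placeholder values, and the thresholds are invented for illustration:

```cpp
struct SystemInfo {
    int  cpuMHz;
    int  systemMemoryMB;
    bool hasHardwareTnL;     // hardware transform and lighting
};

SystemInfo QuerySystemInfo()
{
    return SystemInfo{ 450, 96, false };   // placeholder: stands in for real detection
}

int ChooseInstallTimePreset()              // 0 = low, 1 = medium, 2 = high
{
    SystemInfo info = QuerySystemInfo();

    if (info.cpuMHz >= 600 && info.systemMemoryMB >= 128 && info.hasHardwareTnL)
        return 2;
    if (info.cpuMHz >= 350 && info.systemMemoryMB >= 64)
        return 1;
    return 0;                              // unknown or low-end configuration
}
```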
Some applications employ pre-runtime profiling. Either during installation or while the application is loading, the performance of the system is measured through a series of tests, and the information returned from the testing is used to set the appropriate level of content. This approach attempts to tune the level of content to the system's actual performance, but it has a possible drawback: the test results are assumed to be representative of performance during gameplay, when in reality the load could change significantly.
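A load-time profiling pass could be as simple as the following sketch, which times a stand-in benchmark scene and maps the measured frame rate to a content level. The benchmark function, frame count, and cutoff values are all assumptions for illustration:

```cpp
#include <chrono>
#include <cmath>

// Hypothetical stand-in for drawing a representative worst-case scene.
void RenderBenchmarkFrame()
{
    volatile double sink = 0.0;
    for (int i = 0; i < 100000; ++i) sink = sink + std::sin(i);   // simulated work
}

int ProfileAndChooseLevel()        // 0 = low, 1 = medium, 2 = high
{
    using clock = std::chrono::steady_clock;
    const int kFrames = 100;

    auto start = clock::now();
    for (int i = 0; i < kFrames; ++i)
        RenderBenchmarkFrame();
    double seconds = std::chrono::duration<double>(clock::now() - start).count();

    double fps = kFrames / seconds;
    if (fps > 60.0) return 2;      // fast enough for the highest content level
    if (fps > 30.0) return 1;
    return 0;
}
```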
Runtime profiling is considered the Holy Grail of scalability, but it is also the approach most fraught with problems. The underlying idea is that while the application is running, the current performance is measured and used in a feedback loop to tune the content level up or down. This tunes the content to the game play dynamically, but implementing this kind of system can be very difficult.
Typically, runtime profiling is accomplished by tracking a number of variables, settings, and weights, and then feeding this information back to the application. Some of the elements often used are the frame rate, distance and size of objects, object importance, pixel error, and so on. Care needs to be taken in the application design so as not to 'flip flop' back and forth between too high and too low a level of detail. This usually involves some sort of damping or maximum step size.
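The sketch below shows one possible shape for such a feedback loop. The smoothing factor, frame-rate thresholds, and maximum step size are illustrative assumptions; the point is the damping and the dead band that keep the detail level from flip-flopping:

```cpp
class DetailController {
public:
    float Update(float lastFrameTimeSec)
    {
        // Exponentially smooth the frame rate so one slow frame
        // does not trigger an immediate detail change (damping).
        float fps = 1.0f / lastFrameTimeSec;
        smoothedFps_ = 0.9f * smoothedFps_ + 0.1f * fps;

        // Dead band: only react when clearly below or above target.
        if (smoothedFps_ < 25.0f)
            detail_ -= kMaxStep;          // reduce content, but never by more
        else if (smoothedFps_ > 40.0f)    // than kMaxStep per update
            detail_ += kMaxStep;

        if (detail_ < 0.0f) detail_ = 0.0f;
        if (detail_ > 1.0f) detail_ = 1.0f;
        return detail_;                   // 0 = lowest content, 1 = full content
    }

private:
    static constexpr float kMaxStep = 0.05f;
    float smoothedFps_ = 30.0f;
    float detail_      = 1.0f;
};
```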
We hope this article has pointed out some of the challenges that game developers face when developing titles to run on the broadest range of PCs. As the gap between high-end and low-end machines widens in the coming years, developers will have to become even more resourceful to ensure that games take full advantage of cutting-edge equipment while still performing well on earlier machines. Now that you know a bit more about what is going on behind the scenes, you'll begin to notice how some of the effects and features described in this article influence the game performance of your own computer.
Dean Macri is a Senior Technical Marketing Engineer with Intel's Developer Relations Division. He is currently researching real-time physics with an emphasis on cloth simulation. He welcomes e-mail regarding NURBS and other parametric surfaces, or anything mathematical and related to real-time 3D graphics. He can be reached at dean.p.macri@intel.com.
Kim Pallister is a Technical Marketing Engineer and Processor Evangelist with Intel's Developer Relations Group. He is currently focused on real-time 3D graphics technologies and game development. He can be reached at kim.pallister@intel.com.