U.N. Bot - Navigation Experimentation Framework: Ambitious First Release
I've just released the first demo of the U.N. Bot, an Utterly Naive agent that spends all its time discovering and patrolling terrains. The Ubiquitous Navigation scheme allows you to reconfigure most behaviours from scratch, with more parameters on the way! http://bots.ai-depot.com/UN-Bot.html Comments always welcome!
amazing work
This looks fantastic Alex, looking forward to taking it home and having a play. If you get the U.N. Bot to learn a map, will it have that knowledge next time it starts the game? Andy
Saving Maps
No. I disabled the ability to auto-save and reload existing maps since I really wanted to stress-test the automatic representation generation code. When this agent becomes a real deathmatch bot, he will do that, naturally ;)
Other Bots
Just wanted to point you to other bots for Q2 which also don't rely much on user input for navigation (and you can feed them any map). All in all, I have my doubts that the UN Bot performs as well in more difficult terrain, where you have to climb ladders, crouch, swim and dive, and use tricky button combinations to open closed doors. But nevertheless, good job!
Others...
Hi Markus, welcome to the boards. CGF has a pre-processing phase, which William describes in his "Terrain Analysis for 3D Games" article. That's not first-person. The Neural Bot does not claim to learn the map, since it actually can't: it has no input or state to allow it to do that. And the Parabot requires human interaction, even if not directly. The aim was to make this one fully autonomous. As you said, the bot doesn't yet do too well in complex environments, but this is being polished right now... it will soon. All the features you mention are the next level up in the navigation, since they can be classed as high-level behaviours. Thanks for the comments.
CGF
CGF intentionally does not autonomously learn the map. The bots together do learn the tactical value of waypoints (gaining a tactical understanding of the terrain is done both in-game and off-line, and either already has an effect). Early on in the development of CGF, the bots were able to do some roaming and they created a waypoint graph on the fly. I tossed this functionality out for several reasons:
<ul>
<li>building a good map using solely roaming is quite tough in the Q2 environment</li>
<li>having the AI operate using incomplete information about the terrain, while also gathering more information, is a large and interesting task in itself (as Alex demonstrates), but not one that coincided with my aims</li>
<li>most gamers don't want to fight bots that still need to learn the terrain; instead, they want opponents at about the strength of their friends</li>
<li>exploring the terrain in-game takes away a significant amount of CPU (most of CGF was developed on a Pentium Pro 200MHz)</li>
<li>the roaming/mapping code and the changing terrain information dimensions would greatly complicate my (already) large design</li>
<li>investments in "auto-mapping AI" typically aren't appreciated by game developers, because they don't need it in their games (note that most Q3Arena engine based games do not use the off-line automated mapping (AAS) facilities included in the engine license)</li>
</ul>
William
Map Learning
Regarding your 3rd point: it takes a bot significantly less time to learn a map than it does a human. Bots are very well suited to remembering stuff, and as long as the exploration scheme is reasonable -- it's acceptable in the demo, but the curiosity scheme I'm working on will make it better -- you can build up a map decent enough to embarrass a human player ;) For the 6th point, that's true for deathmatch FPS. But assuming a more interesting storyline, with an AI character that progresses through the level as you move forward, it would be an amazing feature. For MMORPGs too, pre-processing is rarely possible. The applications are far and wide ;)
some comments & questions
"Regarding your 3rd point; it takes a bot significantly less time to learn a map than it does a human." I fully agree (although the bot might not autonomously gain the understanding that the level editor could give him using a few 'hint waypoint flags'). However, the problem is that if the AI needs to learn a map (and hence use incomplete terrain information), it is hard to write very efficient terrain reasoning algorithms on top of it. Do you have an example of an MMORPG where I cannot do pre-processing (I don't play them, so I don't know much about them)?
Embedded Agents
I don't play MMORPGs either. Isn't it difficult to find time when you're a developer? Anyway, there are two cases where learning a terrain is essential -- aside from just providing realistic human-like terrain exploration: 1) Huge static terrains, where pre-computing everything would take too much memory. You'd need forgetting for the bot to be able to cope with new areas, so learning goes hand in hand with it.

Anyway, the navigation design fits into my research approach to game agent AI, namely situatedness. The agent is embedded in its body (the animat), and cannot cheat by pre-processing or accessing unknown information. This is a bit of a hassle, and a lot of work to optimise, but possible and most definitely rewarding!

Though I would have agreed with your other points a couple of months ago, the quality of the maps that are currently being generated is amazing for terrain reasoning: they more than match brute-force waypoint placing thanks to the simplicity of the representation, and since they are iteratively corrected, they tend towards the perfect representation for the bot -- the best compromise between simple and useful. Since your mesh is dense, you have to deal with information in a statistical fashion. With a simple representation, that can be done with vector arithmetic and graph operations. I'm polishing up the lower level before I move on, but this is one of the things I'm intensely looking forward to (notably using a knowledge-based GA for topography-based squad tactics).

There are a few problems with the discovery still, like finding all the places in Q2 'base1', as you said on flipCode. This can be done with really random discovery behaviours, or brute-force wall following, but I don't like the way it's done at all. But once these problems have been overcome, the system's potential may be unleashed ;)
on large terrain and on your work
>Huge static terrains

Operation Flashpoint comes to mind... Impressive game. Should have received a number of awards... This is exactly an area where pre-computing pays off. Keeping everything in memory is often not feasible (PS2 - not enough memory) or incurs significant overhead due to page misses and access to memory that had to be swapped to disk. I'd be curious how an 'on-the-fly' learning system would deal with an Operation Flashpoint situation. (Frankly, I believe Operation Flashpoint does not use either method, but basically navigates on the heightmap and a 2.5D mesh; the terrain is pretty much free of obstacles, and only in a handful of buildings and in guard towers can you arrive above other accessible locations.)

>[writing a paper]

William
Different Approaches
On landscapes, it is true that very little terrain representation is required, as very simple reactive rules can get you around obstacles. This applies to towns as well, but the more complex they get, the higher the level of intelligence required. I'm sure the streaming approach would work, and having spent a bit of time on landscapes with static pre-computed LOD levels, I can foresee how that would be done. Consoles are well suited to background loading in such a way. I guess I just like the idea of honesty ;) Until it comes crashing down on me, that is! The paper will be on the path planning only, as that is a topic big enough for a paper. I was thinking of asking you to read it anyway, but a review would be even better ;) The execution of the motion, and the learning of the terrain, will be inside my dissertation. Cheers!
botepidemic.com
Could somebody tell me if all the interesting stuff that was located at botepidemic.com is available on another site? Is it mirrored elsewhere? It seems like botepidemic.com doesn't contain anything interesting any more. /Björn
mirror of botepidemic.com
Found an archive containing a "mirror" of botepidemic.com. It's quite big though, 58 MB. ftp://ftp.splatterworld.de/games/mirrors/www_botepidemic_com.zip
UN Bot - A.I. Essay
Hi Alex. Thanks, Giles Roadnight
Level Of Detail Path Planning
Welcome to the boards, Roaders. The algorithm is something I came up with, blending approaches from networking, graph theory, and AI. Parts of it have been done before, but I doubt the combination has, and it definitely hasn't been applied to game AI. I'm not sure the world is ready for all the details yet, but I can tell you it works with incremental updates to the state which converge to the right result. Since movement is usually very small, you can take advantage of strong frame-to-frame consistency, so this is ideal for in-game agents. All the path costs and node rewards are taken into account dynamically too, since all the updates are performed continuously. That's quite a radical jump from A*, admittedly. So either it'll blow up in my face, or become a great success! That's the catch with such algorithms ;) Let us know when you have written your essay, it will be interesting to read!
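The incremental, frame-by-frame convergence described above is reminiscent of relaxation-based search: each frame does a small amount of relaxation work, and the cost-to-go estimates drift toward the true shortest distances. A minimal sketch of that idea, with a made-up graph and function names (not the actual U.N. Bot code):

```python
# Sketch of incremental path planning: node cost-to-go estimates are relaxed
# a little each game frame, converging toward true shortest distances over
# time. The graph, node names, and relax_step() are illustrative only.
import math

graph = {  # adjacency list: node -> [(neighbour, travel_cost), ...]
    'A': [('B', 1.0), ('C', 4.0)],
    'B': [('A', 1.0), ('C', 1.5), ('D', 5.0)],
    'C': [('A', 4.0), ('B', 1.5), ('D', 1.0)],
    'D': [('B', 5.0), ('C', 1.0)],
}

GOAL = 'D'
estimate = {n: math.inf for n in graph}
estimate[GOAL] = 0.0  # the goal node has zero cost-to-go

def relax_step():
    """One frame's worth of work: improve every node's cost-to-go estimate
    from its neighbours. Repeated calls converge to true distances."""
    changed = False
    for node, edges in graph.items():
        best = 0.0 if node == GOAL else min(
            cost + estimate[nbr] for nbr, cost in edges)
        if best < estimate[node]:
            estimate[node] = best
            changed = True
    return changed

frames = 0
while relax_step():
    frames += 1

# After convergence, following the cheapest neighbour from any node yields
# the shortest path: A -> B (1.0) -> C (1.5) -> D (1.0) = 3.5
```

Because each call is cheap and the estimates only improve, the loop can be interrupted at any frame and still return a usable (if not yet optimal) answer, which matches the "return after any amount of time" property claimed later in the thread.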
pathfinding is used more often for other purposes than movement
>it works with incremental updates to the state which >converges to the right result. Since movement is usually >very small, you can take advantage of a strong frame to >frame consistency anyway

This is a nice design. Be aware however that in a full-blown game AI application, the largest customer of the pathfinding system is not the 'bot movement AI' but the goal evaluator. This goal evaluator frequently evaluates the best course of action, which typically is about:
<ul>
<li>am I healthy, and is a health powerup nearby</li>
<li>does it make sense to pursue a threat that went out of sight</li>
<li>which is the best weapon to pick up, given their value and distance to the bot</li>
<li>what should I do: defend the base, attack the enemy base, escort the flag carrier (typically based on the distance to the base and carrier)</li>
</ul>
Most of the goals the AI might adopt have a value that depends on the (travel time) distance to some object or location. Without at least a very good cache and caching strategy, any dynamic path finder will become the AI's bottleneck. For that reason, and for performance/frame rate, and for simplicity, most FPS AI simply uses path lookup tables rather than dynamic pathfinding (like A*). Table-based path lookup tends to be 200 times faster than a heavily tuned A* implementation. In other words, for the cost of computing a single A* path, the AI could have retrieved and compared 200 static paths. That will give you some time to check and deal with the occasional exception (a pre-computed path is blocked, so I need to fall back on an A* dynamic pathfinder for this particular path). William
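The table-lookup scheme described above can be sketched in a few lines: all-pairs travel costs are precomputed off-line (Floyd-Warshall here, as one possible choice), so the in-game goal evaluator compares alternatives with O(1) lookups instead of running A* per query. The waypoint names, costs, and helper function are invented for illustration:

```python
# Sketch of table-based path lookup: precompute all-pairs travel costs once,
# then goal evaluation in-game is just array/dict lookups. The waypoint graph
# and best_pickup() helper are hypothetical.
import math

nodes = ['spawn', 'health', 'armor', 'flag']
cost = {  # directed edge costs between adjacent waypoints
    ('spawn', 'health'): 2.0, ('health', 'spawn'): 2.0,
    ('health', 'armor'): 3.0, ('armor', 'health'): 3.0,
    ('armor', 'flag'): 1.0, ('flag', 'armor'): 1.0,
    ('spawn', 'flag'): 9.0, ('flag', 'spawn'): 9.0,
}

# Pre-processing phase (done once, off-line): Floyd-Warshall all-pairs costs.
dist = {(a, b): (0.0 if a == b else cost.get((a, b), math.inf))
        for a in nodes for b in nodes}
for k in nodes:
    for i in nodes:
        for j in nodes:
            if dist[i, k] + dist[k, j] < dist[i, j]:
                dist[i, j] = dist[i, k] + dist[k, j]

# In-game, the goal evaluator compares many alternatives cheaply:
def best_pickup(bot_at, items):
    """Pick the reachable item with the lowest travel cost."""
    return min(items, key=lambda item: dist[bot_at, item])

choice = best_pickup('spawn', ['armor', 'flag'])  # armor: 5.0 vs flag: 6.0
```

The trade-off is exactly the one named in the post: the table answers distance queries essentially for free, but a blocked corridor invalidates entries, so a dynamic fallback is still needed for the exceptions.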
Caching
The whole thing is essentially a cache. Goals may be reused often over the course of the game, but they are not usually changed at small intervals. The algorithm on default settings can produce satisfactory results within 1 logic frame, but it's acceptable to take up to 0.5s to make such a decision. When the higher-level AI sets a goal, it increases the reward for a node. The chosen path then iteratively converges to the optimal one -- if no noise is added. Short paths do not need to be cached, since rewards can be combined: you set the reward for a closer item, and the path to the furthest reward node will take the detour. I can guarantee this dynamic path-finder will not be the bottleneck. Firstly, it can return after any amount of time. Secondly, it uses consistency and plasticity, which means estimates are updated as time goes by. You can take full advantage of it by voluntarily adding dynamic conditions; otherwise, it'll just converge to perfection. Alex
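The reward-combination idea -- a small reward on an intermediate node pulling the route to a far goal into a detour -- can be shown with a toy example. All node names, rewards, and costs here are invented, not taken from the actual system:

```python
# Sketch of reward combination: score a route by total reward collected minus
# total travel cost. A small reward on an intermediate node makes the detour
# beat the direct route to the main goal. Everything here is illustrative.

travel = {  # symmetric travel costs between adjacent rooms
    ('start', 'hall'): 2.0, ('hall', 'health'): 1.0,
    ('hall', 'exit'): 4.0, ('health', 'exit'): 4.0,
}
travel.update({(b, a): c for (a, b), c in list(travel.items())})

reward = {'exit': 10.0, 'health': 2.0}  # the main goal plus a small pickup

def route_score(route):
    """Total reward collected minus total travel cost along the route."""
    gain = sum(reward.get(n, 0.0) for n in set(route))
    cost = sum(travel[a, b] for a, b in zip(route, route[1:]))
    return gain - cost

direct = ['start', 'hall', 'exit']            # skip the pickup: 10 - 6 = 4
detour = ['start', 'hall', 'health', 'exit']  # grab it en route: 12 - 7 = 5
assert route_score(detour) > route_score(direct)
```

This is why short paths need no separate cache entries: the planner only has to converge toward the route with the best combined score, and the intermediate pickups come along for free.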
path lengths, not the paths
Alex, thanks for the reply and clarification. I'm not sure we're discussing the same problem. My statement is that when you complete your AI into a full-blown game AI, the main demand for paths is about path lengths, not the paths themselves. Full-blown bots will spend much of their time (and code) considering alternatives (should I take cover, attack, get a power-up, get the flag, camp here, join up with teammate x, y, z, etc.). Nearly every evaluation of such alternatives requires 'distance' information: is there any cover available within 2s distance, can I get to the threat quickly, is there any power-up nearby, who is the nearest teammate, etc. Your design sounds very interesting, and it's the first time I've seen concepts like plasticity being used. You also address the problem of optimizing a path towards multiple goals (TSP-like). For reference, on a PIII/500MHz, I could do over 5,000,000 distance lookups/s, whereas I could do only ~1,700 A* path searches. (The earlier posted difference between A* and path lookup was incorrect, and should be a factor of 2,000, not 200.) William
Speed Tests
The cache also has distance values (or travel time, or cumulated cost), so the penalty for a query is simply an array lookup. I'm in the process of reducing the memory overhead for each node -- there are few nodes anyway, but every little helps. I'm going to write a paper about it for the ICCG. I've no idea if this is the sort of thing they are looking for, but it's definitely a good opportunity to do lots more testing. Cheers for the interest ;)
Quake
Hi Alex. Thanks, Giles
Quake 3 Conversion
Not yet, I'm afraid. There are medium-term plans to do so, but don't expect too much, as this isn't the top priority ;)
Emulation
I could be wrong as I don't run XP, but I was under the impression that XP had the capability to emulate the other Windows platforms. A friend of mine said he couldn't run Max Payne, and then he simply right-clicked on the executable, selected "Windows 98" and tried again, and everything worked fine. I'd have thought you could do the same for Q2. cheers, Andy
Xp emulation
Yes, you can do emulation things, but it doesn't seem to work for Quake 2. I don't know why he would need to do this for Max Payne, as it is a new game and works fine in XP. Thanks for the tip anyway. Giles
give Q2 a try...
Quake2 runs fine under Win2000. I can't imagine why it wouldn't work on WinXP (which basically is Win2000 under the hood).
UN. Bot (Torque)
I like your U.N. Bot very much. Do you release your source code for free? I need this code to use it with the Torque engine (www.garagegames.com). Please reply. Thanks, Martin
SDK
Hi Martin, welcome to the AI Depot. I will not be releasing the source code for free; it will be part of a navigation SDK which you will have free access to. Its fully configurable nature will ensure that you can get the most out of it. As for Torque, the whole code is modular and based on an interface with the environment (input/output). All you need to do is port a minimal subset to this engine, and you're sorted. The more functionality you add to the sensory layer, the better the system will perform. I'll let you know when I have something more concrete.
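As a rough illustration of what such an input/output interface might look like -- every name here is hypothetical and not taken from the actual SDK -- porting to a new engine like Torque would mean implementing a handful of sense/act methods against the navigation code's contract:

```python
# Hypothetical sketch of a minimal environment interface between navigation
# code and a game engine. None of these names come from the real SDK; they
# only illustrate the "port a minimal subset" idea from the post above.
from abc import ABC, abstractmethod

class Environment(ABC):
    """Minimal sensory/motor contract between the bot and an engine."""

    @abstractmethod
    def sense_position(self):
        """Return the agent's current (x, y, z) in world space."""

    @abstractmethod
    def sense_obstacle(self, direction):
        """Return the distance to the nearest obstacle along `direction`."""

    @abstractmethod
    def act_move(self, direction, speed):
        """Request a movement step; the engine resolves collisions."""

class FakeEngine(Environment):
    """Stand-in engine, handy for testing the navigation code off-line."""
    def __init__(self):
        self.pos = (0.0, 0.0, 0.0)
    def sense_position(self):
        return self.pos
    def sense_obstacle(self, direction):
        return 100.0  # open terrain, nothing in the way
    def act_move(self, direction, speed):
        dx, dy, dz = direction
        x, y, z = self.pos
        self.pos = (x + dx * speed, y + dy * speed, z + dz * speed)

env = FakeEngine()
env.act_move((1.0, 0.0, 0.0), 2.0)  # bot steps 2 units along +x
```

Richer sensory implementations (ladders, doors, water volumes) would slot in behind the same interface, which is presumably what "the more functionality you add to the sensory layer, the better the system will perform" means in practice.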