CS248 Final Project Report

			12/07/99


  Team: mazewar
  Platform: PC ( OpenGL + DirectSound + TCP/IP )
  Members: Xiaofeng Ren, Bin Li, Quanqiu Wang


1. How to run the game?

  Requirement:  win9x/NT, DirectX ( DirectSound ), TCP/IP networking

  To start a game in single-player mode ( or server mode ), simply use the
command 'mazewar'; use 'mazewar server_name' to start a client that
automatically connects to the server, where server_name is the server's
host name ( or IP address ).

  * Please run our program on a Windows machine with a sound card and DirectX
support. If there is any problem getting the program to run, please let us 
know and we can provide more details of the development environment.



2. How to play the game?


  The game environment is a maze, in which one family of mechs fights
the other. The controls are standard: use the four arrow keys to move 
yourself, and the space key to fire. There are two additional keys, 'd' 
and 'f', which allow you to move in the lateral direction.

  Health, score, and remaining bullets are displayed on the panel to the
right of the screen. Health ( equivalently, energy level ) is 
gradually restored over time. Each time you eliminate an opponent, the score 
increases by 1. In a networked game, players are meant to cooperate, 
but you can choose to fight each other too.



3. Features




-- Lighting and smooth shading

Inside our maze, we can specify several lights ( at most eight ). To do
lighting, we set up light sources, set up materials, and specify
geometry with normals. The lamps on the walls can also be toggled on as
light sources; however, this tends to slow down the program and break
the overall feel of the scene. We recommend running the game with 
the lamp lights off ( the default ).

-- Texture mapping

We implement texture mapping for the floor, the ceiling, and all the
walls inside our maze. The steps are: use the provided code to read RGB
files; establish a texture context; load the texture; and apply the texture
to the geometric primitives.

-- Sound

DirectSound is used in our project to play sound and music. The original
C++ code is from a sample program in the DirectX SDK. Secondary DirectSound
buffers are used because we want the automatic mixing feature. We have made
significant changes to the code so that our game now supports playing multiple
sound effects at the same time. We have background music, gunfire effects,
and explosion effects in the game.


-- Collision Detection

  The movement in the maze is essentially 2D. This makes the collision
detection relatively simple. It's based on the 2D distance between two
objects, or between an object and a wall. The sizes of objects are
specified in model.cpp.

  Detecting whether a bullet hits an object is more complicated, since
the bullet can move at high speed. We implemented a general Clear
function, which is also used to determine whether an object is occluded by
others when a mech attempts to shoot. Using the Clear function, we determine
whether the bullet hits anything along the way between two updates of the
scene. An analytic approach is possible but hard to implement and not easy to
adapt to changes in the game design. Hence we used a sampling technique,
essentially checking positions along the path to see whether any collision
has happened.
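
  As a rough sketch of this sampling approach ( the names Object2D and
clearOf are illustrative, not our actual identifiers, and the sample count
is arbitrary ):

```cpp
#include <cmath>

// Hypothetical 2D object: center (x, y) and collision radius.
struct Object2D { float x, y, r; };

// Sampling-based clearance check: returns true if the path from
// (x0, y0) to (x1, y1) never comes closer than obj.r to the object,
// testing evenly spaced sample points along the segment.
bool clearOf(const Object2D& obj,
             float x0, float y0, float x1, float y1, int samples = 32)
{
    for (int i = 0; i <= samples; ++i) {
        float t  = (float)i / samples;
        float px = x0 + t * (x1 - x0);
        float py = y0 + t * (y1 - y0);
        float dx = px - obj.x, dy = py - obj.y;
        if (std::sqrt(dx * dx + dy * dy) < obj.r)
            return false;           // path blocked at this sample
    }
    return true;                    // no sample fell inside the object
}
```

The same routine serves both purposes mentioned above: checked against a
mech, it answers "did the bullet hit?"; checked against walls, it answers
"is the line of fire clear?".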


-- View Frustum Culling

  Culling is essential to the performance of our game since we have a large
world and possibly many objects. On the other hand, it's easy since we are 
in a world consisting of cells. On every update of the scene, each cell is
tested for whether it's in the view frustum. Only culling in the
horizontal direction is considered; there is no culling in the vertical
direction, due to the structure of the scene. For each cell an angle is 
computed, and if it's larger than the field of view, the cell is marked
invisible.
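
  A minimal version of this horizontal test might look like the following
( a sketch under our own assumptions: the names are made up, and we compare
against half the field of view, which the report leaves unspecified ):

```cpp
#include <cmath>

const float PI = 3.14159265f;

// Horizontal-only frustum test. The eye is at (ex, ey) looking along
// 'heading' (radians); the cell center is at (cx, cy); fovDeg is the
// horizontal field of view in degrees.
bool cellInFrustum(float ex, float ey, float heading,
                   float cx, float cy, float fovDeg)
{
    float angToCell = std::atan2(cy - ey, cx - ex);
    float diff = angToCell - heading;
    while (diff >  PI) diff -= 2 * PI;   // wrap into [-PI, PI]
    while (diff < -PI) diff += 2 * PI;
    float halfFov = fovDeg * PI / 360.0f;
    return std::fabs(diff) <= halfFov;
}
```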


-- Occlusion Culling


  This is also cell-based and not hard to implement, but it reduces the 
visible cells to usually 5 - 15, which makes it feasible to play the game 
in real time. The four corners of a cell are each checked for visibility; 
if all four are occluded by other cells and hence invisible, the cell is 
marked invisible. No objects in an invisible cell are drawn.
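
  The corner rule reduces to a small decision function ( illustrative only;
the per-corner occlusion test itself is whatever the renderer computes ):

```cpp
// A cell is culled only when every one of its four corners is occluded;
// a single visible corner is enough to keep the cell and its objects.
bool cellOccluded(const bool cornerOccluded[4])
{
    for (int i = 0; i < 4; ++i)
        if (!cornerOccluded[i])
            return false;   // at least one visible corner: draw the cell
    return true;            // all four corners hidden: skip the cell
}
```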



-- Levels of Detail Control

  The original models are fairly complex and involve a large number of 
triangles. Using the free tool LODESTAR ( see reference ), we reduced the models 
to different levels of detail and used them in the game. The control is quite
simple: there is a base level which can be adjusted from the menu; starting 
from the base level, a distance-based criterion is applied. The farther away an 
object is, the simpler the model we use.
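
  The selection rule can be sketched like this ( the distance thresholds and
level count here are made-up illustrations, not our actual tuning ):

```cpp
// Distance-based LOD choice: start from the menu-adjustable base level
// and step to simpler models (higher level numbers) as distance grows.
int chooseLOD(int baseLevel, float distance)
{
    int level = baseLevel;
    if (distance > 10.0f) ++level;   // one step simpler
    if (distance > 25.0f) ++level;   // two steps simpler
    const int MAX_LEVEL = 3;         // simplest model available
    return level < MAX_LEVEL ? level : MAX_LEVEL;
}
```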


-- Motion Animation

  We didn't have access to any motion capture data, so we did the 
decomposition and refinement of poses manually. The sequences we finally 
used include an 8-step walk sequence and two 5-step rotation sequences.

  To integrate motion control with collision detection and AI, the 
state of a computer-controlled mech is divided into several levels. The
lowest level is the Action Level, specifying which action ( walk, turn, 
stand, peek, shoot, etc. ) the mech is performing and which step of the 
action it is in. The intermediate level is the Tactic Level, specifying 
the mech's intention, such as turning to a specific direction or 
going to a certain cell. The highest level is the State: the mech 
could be patrolling, attacking, retreating ( not implemented ), etc. 
Lower-level variables are set according to the higher-level variables.
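
  In outline, the three levels fit together like this ( a sketch with
invented identifiers; the real code's names and update logic differ ):

```cpp
// Three-level control state for a computer-controlled mech.
enum State  { PATROL, ATTACK, RETREAT };           // highest level
enum Tactic { TURN_TO, GOTO_CELL, HOLD };          // intermediate level
enum Action { WALK, TURN, STAND, PEEK, SHOOT };    // lowest level

struct MechControl {
    State  state;
    Tactic tactic;
    Action action;
    int    step;      // which frame of the current action sequence
};

// Higher-level decisions drive the lower levels: e.g. a patrolling mech
// heading for a cell walks, advancing its 8-step walk cycle each update.
void updateControl(MechControl& m)
{
    if (m.state == PATROL && m.tactic == GOTO_CELL) {
        m.action = WALK;
        m.step = (m.step + 1) % 8;   // the 8-step walk sequence
    }
}
```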


-- AI


  We have used a few simple techniques to enhance the AI of the game. Every
computer-controlled mech is assumed to have a remote sensor, and it can 
determine the position of a player-controlled mech once it's within a certain
range. A simple path planning algorithm ( using value iteration ) determines 
the closest path to reach the player-controlled mech. It's invoked when the 
target cannot be seen or is moving out of attack range.
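
  A compact version of value iteration on a cell grid might look like the
following ( an illustrative sketch, not our game code: dist[c] converges to
the number of steps from cell c to the target, and the mech then moves to
the neighbouring cell with the smallest value ):

```cpp
#include <vector>
#include <algorithm>

// Value-iteration distances on a w x h grid; walls[c] marks blocked cells.
std::vector<int> planDistances(int w, int h, int targetX, int targetY,
                               const std::vector<bool>& walls)
{
    const int INF = 1 << 20;
    std::vector<int> dist(w * h, INF);
    dist[targetY * w + targetX] = 0;
    // Sweep until the values reach a fixed point (w*h passes suffice).
    for (int iter = 0; iter < w * h; ++iter)
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int c = y * w + x;
                if (walls[c]) continue;
                int best = dist[c];
                if (x > 0)     best = std::min(best, dist[c - 1] + 1);
                if (x < w - 1) best = std::min(best, dist[c + 1] + 1);
                if (y > 0)     best = std::min(best, dist[c - w] + 1);
                if (y < h - 1) best = std::min(best, dist[c + w] + 1);
                dist[c] = best;
            }
    return dist;
}
```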

  The mech enters a 'maneuver' mode when it gets close to its target 
and fires. No sophisticated AI techniques are used here; it just moves 
randomly to dodge the bullets. ( Bullets are assumed to move so fast that 
there is no time to react to them. )


-- Network


We chose UDP sockets for their speed advantage over TCP. Reliability
( that is, the loss of packets ) does not seem to be a big problem here:
first, the network we are using is fairly reliable; second, a missing
packet may manifest only as an unregistered keystroke or some small jitter
in the game, which should not cause much discomfort for the user.

When the program is run without arguments, it automatically runs as a server
and handles connections from clients. When the program is run with an argument,
i.e., the remote machine name, it runs as a client and tries to connect to
the specified remote machine.

The server and the clients exchange packets with each other. When a client
makes a move, it sends a packet to the server. The server updates
the game status according to its own events and the events from clients.
The game configuration is then encapsulated in a packet and sent to all
clients. The clients update their game configuration according to the newest
server packet.

We use a packetNo field to maintain the order of packets. We also record
the time when each packet is received. The timing serves several purposes:
to make sure a received packet is recent, and to log when we last heard
from a client or the server. This information is used to check the server
and client status.
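
  A sketch of this bookkeeping ( field and function names here are
illustrative; only the packetNo idea comes from our design ):

```cpp
// Packet header as described above: a sequence number set by the sender
// and a receive time filled in locally, so stale peers can be detected.
struct GamePacket {
    unsigned packetNo;     // sequence number from the sender
    long     recvTime;     // local arrival time (e.g. milliseconds)
    // ... game state payload would follow ...
};

// Apply a packet only if it is newer than the last one applied;
// out-of-order or duplicate UDP packets are simply dropped.
bool acceptPacket(unsigned& lastApplied, const GamePacket& p)
{
    if (p.packetNo <= lastApplied)
        return false;          // old or duplicate: ignore
    lastApplied = p.packetNo;
    return true;
}
```

Dropping stale packets is cheaper than retransmission and fits the
observation above that a lost packet causes at most a small jitter.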
                     


-- Game Level Editor



The game level editor enables the user to specify the layout of the walls
and the positions of lights, robots, and players. The map must contain at
least the player's own mech; otherwise the editor will not save and exit.

The editor shows an orthographic view of the map; the position within the
map is determined by the position of the mouse.

The user can pick an object to draw from a panel, then draw in the
drawable map region. Objects on the map can be "brushed away" with the
brush tool, which simply removes any objects in the cell where the
brush is applied. The editor also enables the user to choose the map size,
the number of players, etc. from the menu.

The map information is then saved in a 'mwar.map' file, which can be read 
by the game program by moving the file to the game directory and renaming 
it to 'mazewar.map'.
                                                                          

-- On-Screen Panel


The control panel is meant to be what the player would see inside the mech,
that is, a round window onto the scene, plus some displays and controls
inside the machine.

A control panel is added to display the game information and to enable some
user input. It is drawn on top of the main display with an orthographic
projection. The sides of the window are drawn using triangles and quads.

The user info, including bullets, score, health, and messages, is passed on
to the drawing function as a CInfo object. The information is then drawn at
appropriate places. Messages are drawn at the bottom, i.e., the message
board. Health is displayed as a clock face, which shows the percentage of
the player's health.

A map is also displayed. The positions of the players and robots are drawn
on the map to provide an overhead view of the maze.

We originally wanted to let the user choose weapons from the control
panel. This is handled by the mouse function, which changes the user's
weapon when the appropriate region is selected. This feature is not drawn
on the panel yet because we didn't have time to implement weapon switching
in the game.


-- Procedural modeling


We use particle systems to model fire. The method we used is briefly
described in Chapter 20 of the textbook. We specify the initial
particles with positions, moving directions, velocities, and colors.
The position of each particle at subsequent times is computed by
adding its velocity vector to its current position. We add
some control over the shape of the fire and the way its colors change.
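
  The per-frame update is essentially the following ( a minimal sketch; the
upward-drift constant is invented, and the real system also varies shape
and color over time ):

```cpp
// One fire particle: position, per-frame velocity, remaining lifetime.
struct Particle {
    float x, y, z;
    float vx, vy, vz;
    float life;        // respawn the particle when this reaches zero
};

// Advance one particle by one frame: position += velocity, as in the
// textbook method, with a small upward drift to shape the flame.
void stepParticle(Particle& p)
{
    p.x += p.vx;
    p.y += p.vy;
    p.z += p.vz;
    p.vy += 0.01f;     // flames drift upward (illustrative constant)
    p.life -= 1.0f;
}
```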






4. External Resources


  The two 3D models, the sound files, and the texture images are all free
resources from http://www.3dcafe.com/. The original 3D models are under 
./external/. They were converted into C code by 3D Exploration from XD
Software.

  The code for reading RGB images was provided to us. The code for reading
WAVE files and playing streaming DirectSound buffers is adapted from a
Microsoft DirectX SDK sample.

  The model decimation is done using the free tool LODESTAR from 
http://www.cg.tuwien.ac.at/research/vr/lodestar/.