Consider a shooting game, common in video game arcades. These games are first person perspective, with targets and other sprites rendered over a background, which is usually also a rendering of 3D objects.
To adapt this experience to the broadcast arena, we need to send all the information via the broadcast, then allow the receiving client to filter and interpret the information, render the scene and the sprites, and resolve the effects of user interaction, such as breaking a window or shooting an enemy.
Furthermore, to accommodate the existing "channel surfing" paradigm, enough information must remain accessible to allow someone to enter the game late.
The main focus of this project will be exploring the limits to which client-side processing and I/O can be used to interact with the broadcast medium.
Since video panorama is not our area of expertise or focus, ideally we would borrow this technology from another group. In the meantime, we may simply implement fixed one-color background tiles, or a single frame of background (a la QuickTime VR), and add video panorama later.
A big challenge will be adapting old formats or inventing new ones to deliver concise representations of objects at the right time (i.e., as close as possible to when they are needed). The format used will have to carry 3-D data, animation data, and interactivity data in as little bandwidth as possible, since the video background will occupy quite a bit.
What if a user joins our channel in the middle of a game? Then the 3D scene would slowly build up, and sprites would start appearing. However, any sprites already in mid-transmission will be discarded; only objects whose transmission begins after tune-in will appear.
This is also a complication for our attempts at interactivity and playing with the time dimension, since channel surfers will miss out on later events if they require earlier actions. For example, if the player needs to shoot Wario fifteen minutes into the program in order to have a chance at saving the princess, a player who tunes in twenty minutes into the program may receive an alternate ending (the infamous "thank you Mario, but our princess is in another castle!").
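The late-join discard behavior described above can be sketched as a simple receiver. This is only an illustration, assuming each object is broadcast as a header packet, data chunks, and an end marker (the packet layout and names are our own, not a committed format):

```python
# Sketch of late-join behavior: a client that tunes in mid-stream
# discards chunks for any object whose header it never saw, and only
# assembles objects whose transmission starts after tune-in.

class LateJoinReceiver:
    def __init__(self):
        self.partial = {}   # object_id -> chunks collected so far
        self.scene = {}     # object_id -> fully assembled object data

    def on_packet(self, packet):
        kind, obj_id, payload = packet
        if kind == "header":
            # A new object is starting; begin collecting its chunks.
            self.partial[obj_id] = []
        elif kind == "chunk":
            if obj_id in self.partial:
                self.partial[obj_id].append(payload)
            # else: object was mid-transmission at tune-in -- discard
        elif kind == "end":
            if obj_id in self.partial:
                self.scene[obj_id] = b"".join(self.partial.pop(obj_id))
```

An object whose header was broadcast before tune-in never enters `partial`, so its remaining chunks are silently dropped, matching the behavior described above.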
Also, it is possible that the data may be received in different formats (e.g., a panorama for the backdrop, 3D models of objects, and MPEG sprites). In that case, the information must be composited in a visually acceptable way.
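One simple way to composite mixed-format sources is painter's-algorithm layering: every decoded source is assigned a depth and drawn back to front. The sketch below assumes the decoders (panorama, 3D models, MPEG sprites) already exist and uses placeholder layer names:

```python
# Painter's-algorithm compositing sketch: each decoded source becomes a
# (depth, name) layer; drawing proceeds from the farthest layer to the
# nearest so closer sprites correctly occlude the backdrop.

def composite_order(layers):
    """layers: list of (depth, name). Returns draw order, far to near."""
    return [name for depth, name in sorted(layers, key=lambda l: l[0],
                                           reverse=True)]
```

For example, a panorama backdrop at depth 10, a 3D model at depth 5, and an MPEG sprite at depth 2 would be drawn in that order, so the sprite ends up on top. Real compositing would also need per-format decoding and alpha handling, which this sketch omits.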
The following list describes behavior that the client must resolve in order to interact with the user. How do we resolve these effects on the client side, given that there is no feedback to the broadcaster, only local processing?
Note that many of these interactivity issues also raise transmission problems: how do we send the information "just in time"? Data that describes an event must be sent before the event can occur, but sending it too early may result in it being missed by someone who tunes in late. Furthermore, the data must be scheduled so as not to exceed the bandwidth available. See also Transmission.
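The "just in time" constraint can be phrased as a small scheduling problem: each object has a size and a deadline slot, and we fill slots backwards from the deadline so data goes out as late as the per-slot bandwidth allows. The greedy strategy, slot granularity, and object list below are illustrative assumptions, not a proposed protocol:

```python
# Hedged sketch of just-in-time broadcast scheduling: place each object
# in the latest slot at or before its deadline that still has capacity,
# so late tuners miss as little as possible and bandwidth is never
# exceeded.

def schedule(objects, num_slots, capacity):
    """objects: list of (name, size, deadline_slot).
    Returns {slot: [names]}, or None if the data does not fit."""
    used = [0] * num_slots
    plan = {s: [] for s in range(num_slots)}
    # Handle the latest deadlines first so they can claim the late slots.
    for name, size, deadline in sorted(objects, key=lambda o: o[2],
                                       reverse=True):
        for slot in range(deadline, -1, -1):
            if used[slot] + size <= capacity:
                used[slot] += size
                plan[slot].append(name)
                break
        else:
            return None  # exceeds the available bandwidth
    return plan
```

When two objects share a deadline and one no longer fits in the last slot, it spills into an earlier slot, which is exactly the "sent too early, missed by late tuners" risk the text describes.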
Finally, we may wish to allow interaction beyond game play itself. For example, user-supplied textures, competing for scores, or even configuration options might be specified ahead of time.
The issue here is how to design an effective data structure for each sprite that captures all the possible conditions and stores this metadata. Some objects may be broadcast early, to take advantage of open bandwidth. Some objects may never be used or displayed if their conditions are not met.
With an elaborate system, we can simulate a multiple-paths scenario, where the user appears to have a choice in where to go. The tradeoff is great flexibility in the time dimension (possibly requiring large amounts of stored data and buffering) versus allowing channel surfers a full experience.
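One possible per-sprite metadata record combining these ideas (early broadcast, expiry, and activation conditions) is sketched below. All field names are illustrative assumptions, not a committed format:

```python
# Hypothetical sprite metadata record: a sprite may arrive early, is
# only shown inside its time window, and only if the local player has
# triggered the events it depends on (e.g. "wario_shot").

from dataclasses import dataclass, field

@dataclass
class SpriteRecord:
    sprite_id: str
    model_ref: str        # reference to the 3-D / animation data
    earliest_time: float  # may be broadcast early to use spare bandwidth
    expires_time: float   # after this, the sprite is never displayed
    requires: set = field(default_factory=set)  # event flags that must hold

    def active(self, now, events):
        """Display this sprite given the elapsed program time and the
        set of events the local client has resolved?"""
        return (self.earliest_time <= now < self.expires_time
                and self.requires <= events)
```

A sprite whose `requires` set is never satisfied simply expires unseen, which reproduces the alternate-ending behavior for late tuners purely through local processing.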
Ideally, the content creator would have a tool that makes decisions, such as when to send sprite or interaction data, transparently on the producer's behalf; building this may become part of the project if content creation proves sufficiently hard.
Last Updated: Feb. 25, 1999, slusallek@graphics.stanford.edu