Zha's thoughts on 'the core business of simulators'
Raw thoughts warning
This is an early draft of some raw thoughts. Please feel free to comment on them, ask questions and such. This is not a collective work product at the moment, but rather an attempt to share my thinking. I do think that a more formal, collectively mulled over version of this page would be a good thing. This isn't it, at least not yet. - Zha
This is a companion to my notes on state melding, AWG: state melding exploration. It attempts to explore what the core activities of a region simulator might be, in the rough language of the evolving second generation grid activity being pursued by the AWG. It focuses, as much as possible, not on precise implementation details, but on broad technical approaches rooted in the AWG's assumptions. It is, again, very much an attempt to get some ideas out and talked about. Feedback and discussion on this are deeply appreciated and desired.
So, once again, we start with the base terms. We have a region, being hosted in a Region Simulator. This is a second generation deployment, so the region simulator is only running the simulation. The agent specific portions of activity are assumed to be split off to agent servers running in a separate agent domain, per the proposed design. We will also, for the moment, model scripts slightly oddly, attempting not to make any locality assumptions about where the scripts are run.
The simulator holds (see the sketch after this list):
- A static landscape
- A set of prims:
  - Some physical
  - Some in linksets
  - Some attached to avatars
  - Some running scripts
- A set of avatars, each coupled to:
  - An agent in the agent domain
  - A client
- A physical simulation program
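A minimal sketch of that state as a data structure, in Python. All the names here (Prim, Avatar, RegionState, and so on) are illustrative assumptions, not drawn from any actual simulator codebase:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Prim:
    prim_id: int
    position: tuple[float, float, float]
    is_physical: bool = False
    linkset_id: Optional[int] = None   # set when the prim is part of a linkset
    attached_to: Optional[int] = None  # avatar id, when worn as an attachment
    script_ids: list[int] = field(default_factory=list)  # scripts running on the prim

@dataclass
class Avatar:
    avatar_id: int
    position: tuple[float, float, float]
    agent_url: str   # the agent server in the agent domain
    client_url: str  # the connected client

@dataclass
class RegionState:
    landscape: bytes  # static terrain data, treated as opaque here
    prims: dict[int, Prim] = field(default_factory=dict)
    avatars: dict[int, Avatar] = field(default_factory=dict)
    # The physical simulation program sits alongside this state and
    # mutates it once per frame (see the frame step sketch below).
```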
We start at a given moment, with a frame. The frame is a momentary snapshot of the region, corresponding to the last state all the observers have seen. Our goal is to step forward from the current frame to the next frame, noting what needs to occur as we create the new frame.
We accept a set of sets of inputs, some of which may be empty on any given frame (see the sketch after this list):
- Motion requests from the clients associated with our avatars
- Chat utterances from the clients associated with our avatars
- Avatar appearance updates
- Changes in the avatar's linksets
- Changes in objects due to scripting inputs
- New objects created in the region due to requests made on the region
- Requests to delete objects due to requests made on the region
- The addition of avatars
- The removal of avatars
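Those input sets might be carried in a structure like the following sketch, companion to the RegionState sketch above. The field names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class FrameInputs:
    # Each list may be empty on any given frame.
    motion_requests: list = field(default_factory=list)    # from clients, via agents
    chat_utterances: list = field(default_factory=list)
    appearance_updates: list = field(default_factory=list)
    linkset_changes: list = field(default_factory=list)
    script_state_changes: list = field(default_factory=list)
    object_creates: list = field(default_factory=list)     # new Prim instances
    object_deletes: list = field(default_factory=list)     # prim ids to remove
    avatar_arrivals: list = field(default_factory=list)
    avatar_departures: list = field(default_factory=list)
```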
We run a frame of physical simulation (a frame step is sketched in code after this list). This:
- Moves a set of objects
- Causes some collisions to occur
- May move some objects or avatars out of the region
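A rough sketch of one frame step, building on the RegionState and FrameInputs sketches above. The frame rate and the way inputs are folded in are assumptions; in practice the physics tick would be a call into a dedicated engine:

```python
DT = 1.0 / 45.0  # illustrative frame time, not a fixed requirement

def step_frame(state: RegionState, inputs: FrameInputs) -> set[int]:
    """Advance the region one frame; return the ids of changed objects."""
    changed: set[int] = set()

    # Fold the input sets into the state (only a few are handled here).
    for avatar_id, velocity in inputs.motion_requests:
        av = state.avatars[avatar_id]
        av.position = tuple(p + v * DT for p, v in zip(av.position, velocity))
        changed.add(avatar_id)
    for prim in inputs.object_creates:
        state.prims[prim.prim_id] = prim
        changed.add(prim.prim_id)
    for prim_id in inputs.object_deletes:
        state.prims.pop(prim_id, None)
        changed.add(prim_id)

    # One physics tick: move physical prims, resolve collisions, and
    # note anything that crossed the region boundary. Elided here,
    # since this is really the physics engine's job.
    for prim in state.prims.values():
        if prim.is_physical:
            pass  # integrate velocity, detect collisions, flag departures

    return changed
```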
At this point we have a new frame, and we face the challenge of letting all the relevant observers know about it. Each client needs to know, as does every scripted object with a sensor. If we have any viewpoints looking into the region from outside, they too need to be updated.
What needs to be reported?
We could just send the entire new frame. But in practice, we deeply don't want to do that. What we want, most likely, is to send all the changes, from each viewpoint's perspective. Roughly: from each observer's location, out to the distance they observe (the view distance for clients, the sensing distance for any scripted object), we want to send an update for each changed object.
There are several ways to imagine doing this. All revolve around knowing the locations of all the changed objects and of all the observation points, and then getting the two together.
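As a sketch of getting the two together, here is a brute force pass that, given the changed-object ids from a frame step, works out what each observer should be told about. It assumes the RegionState sketch above; a real simulator would use a spatial index rather than scanning every observer/change pair:

```python
import math

def position_of(state: RegionState, obj_id: int) -> tuple:
    """Look up a changed id in either prims or avatars (illustrative helper)."""
    if obj_id in state.prims:
        return state.prims[obj_id].position
    return state.avatars[obj_id].position

def deltas_for_observers(state, changed_ids, observers):
    """observers: iterable of (observer_id, position, radius) tuples,
    covering clients, scripted sensors, and outside viewpoints alike.
    Returns {observer_id: [changed ids within its observation window]}."""
    updates = {}
    for obs_id, obs_pos, radius in observers:
        visible = [
            oid for oid in changed_ids
            if math.dist(position_of(state, oid), obs_pos) <= radius
        ]
        if visible:
            updates[obs_id] = visible
    return updates
```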
Note that this is independent of how we manage the region. We don't yet speak about processes, boundaries and such. A simple approach, though, is to have every viewpoint register a listener on every object inside its observation window, and have the changed objects notify each viewpoint as they update. As the viewpoint moves (or the size of its observation window changes) we add and delete registrations on the objects accordingly.
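A sketch of that listener approach. Viewpoint and ListenerIndex are illustrative names; the refresh method is the part the paragraph describes, re-registering as a viewpoint or its window moves:

```python
import math

class Viewpoint:
    def __init__(self, position, radius):
        self.position = position
        self.radius = radius
        self.registered: set[int] = set()  # object ids we currently listen to

    def on_object_changed(self, obj_id: int) -> None:
        print(f"viewpoint saw an update to object {obj_id}")

class ListenerIndex:
    """Maps each object id to the set of viewpoints listening to it."""

    def __init__(self):
        self.listeners: dict[int, set] = {}

    def refresh(self, vp: Viewpoint, object_positions: dict) -> None:
        """Re-register vp after it moved or its window size changed."""
        in_window = {
            oid for oid, pos in object_positions.items()
            if math.dist(pos, vp.position) <= vp.radius
        }
        for oid in vp.registered - in_window:  # objects that left the window
            self.listeners.get(oid, set()).discard(vp)
        for oid in in_window - vp.registered:  # objects that entered it
            self.listeners.setdefault(oid, set()).add(vp)
        vp.registered = in_window

    def notify(self, obj_id: int) -> None:
        """Called when an object changes; fan out to its listeners."""
        for vp in self.listeners.get(obj_id, set()):
            vp.on_object_changed(obj_id)
```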
Running in parallel
So, we all say, let's go peer to peer, or run the simulation in parallel...
Does this really help? I'm not sure. Here's why, roughly... We assume each resident is keeping a complete snapshot of the region they are in. (We ignore, for the moment, the cost of getting that to their local machine; or we allocate a grid resource to run the copy for them, and assume we can clone the base region state to the grid resource efficiently.) Now, for each frame, we distribute the set of inputs to each copy of the region and run the regions in parallel (within reason; the looser the constraints, the less shared state we get). Since they are running the same algorithms on the same data, we each get similar, ideally identical results. Of course, lost packets, and lag in sharing inputs, will rapidly make each copy of the region subtly different. We also have the minor problem of now sharing every avatar's inputs, and every scripted object's state changes, across all the other copies of the region.
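A toy illustration of why this is fragile. We run several replicas of a (trivially simplified) region in lockstep on the same input set and compare state digests afterwards; the digests match only while every replica sees every input. All names here are illustrative:

```python
import hashlib
import pickle

def digest(region: dict) -> str:
    """A cheap state checksum for comparing replicas."""
    return hashlib.sha256(pickle.dumps(sorted(region.items()))).hexdigest()[:12]

def step(region: dict, moves: dict) -> None:
    """Deterministic frame step: move each object by the requested delta."""
    for obj, delta in moves.items():
        region[obj] = region.get(obj, 0.0) + delta

def run_lockstep_frame(replicas: list, inputs: dict) -> bool:
    """Apply this frame's shared inputs to every replica; report agreement."""
    for region in replicas:
        step(region, inputs)
    return len({digest(r) for r in replicas}) == 1

replicas = [{"obj1": 0.0} for _ in range(3)]
print(run_lockstep_frame(replicas, {"obj1": 1.0}))  # True: replicas agree
replicas[2]["obj1"] -= 0.5  # simulate one replica missing or lagging an input
print(run_lockstep_frame(replicas, {"obj1": 2.0}))  # False: silently diverged
```

Note that once the replicas diverge they never re-converge on their own, which is why lost packets and input lag bite so hard here.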