User talk:Morgaine Dinova

From Second Life Wiki

Talk to me, somebody! :P

Visitors, add stuff here

Hello Morgaine, I'd like your help with understanding the challenge of distributing physics (and any kind of interaction, ultimately) in a supergrid model of SL:

  • In theory, the task of processing Second Life can be divided into as many separate processes as there are individual objects and avatars.

Interaction between those individual processes (the most basic form of concurrency) is limited by the capacity to transport, between those processes, the information necessary for their interaction. In simpler words: when SL is exploded into millions of parallel processes running separately from each other, the main problem becomes the interconnection needed between them. Is that right?

So the problem of ensuring that each process connects to the relevant other processes is key. This necessary interconnection increases with the concurrency of those processes. Be it physical interaction or chat, the number of potential pairwise connections grows quadratically with the number of concurrent items (n(n-1)/2 possible pairs). Right? We tend to think that regrouping the concurrent processes on the same machine eliminates much of the delay problem, and it does, but of course it also reduces the benefit of separating the processes in the first place (making them run faster individually): optimising this regrouping, so that it adapts to the concurrency, is essential. In current SL this is done by migrating tasks to the sim that runs the region, and we've seen that this introduces a new problem: handling the sometimes massive migration load. However, this solution can be reused even when physics is distributed, either by migrating the tasks themselves just as is done today, or by migrating the sims running those tasks onto the same machine.
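To put a number on that growth: among n concurrent tasks there are n(n-1)/2 possible pairs, so potential interconnection grows quadratically. This tiny sketch is pure arithmetic, with no SL-specific assumptions:

```python
def pair_count(n):
    """Number of possible pairwise connections among n concurrent tasks."""
    return n * (n - 1) // 2

# Quadratic, not factorial, growth of potential interconnection:
for n in (10, 100, 1000):
    print(n, pair_count(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500
```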

Pushing the thought further, it seems SL's function is really about concurrency. It all comes down to a quest for the best compromise (between "adding more capacity for running an individual process", i.e. more CPUs, and "keeping up with their interaction", i.e. fighting lag) in order to produce the result of the interaction between a growing number of individual tasks. What I understand of parallelisation is that we don't really need to produce one big One and Only result of the sum of all interaction, to be shared between all the participating tasks. It isn't necessary at all, because all that each individual task needs, in the end, is its own little part of the result, not the whole. So the process of calculating the interaction should be distributed too.

In order to do that, I think the concurrency could be encoded in the states of the connections between the tasks themselves: an open connection means there is concurrency (because some interaction needs to happen), no connection means no concurrency (because no interaction is possible or meant to happen).* The problem then becomes one of finding the proper rules for each task to know when to open or close such connections. In my proposal this is done through "presence lists". I would love to know your opinion on this method, or whether you know other methods that could be used as well.
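One possible interpretation of this "open connection = concurrency" rule is interest management by proximity: each task scans a presence list and keeps connections open only to tasks inside some interaction range. This is a hedged sketch, not the proposal itself; all names (Task, INTEREST_RADIUS) and the distance rule are illustrative assumptions.

```python
import math

INTEREST_RADIUS = 20.0  # metres; illustrative interaction range (assumption)

class Task:
    """An independent simulation task (avatar, object, ...)."""
    def __init__(self, name, x, y):
        self.name = name
        self.pos = (x, y)
        self.connections = set()  # open connections = current concurrency

    def distance_to(self, other):
        return math.dist(self.pos, other.pos)

    def update_presence(self, presence_list):
        """Open connections to tasks inside the interest radius,
        and drop connections to tasks that have left it."""
        self.connections = {t for t in presence_list
                            if t is not self
                            and self.distance_to(t) <= INTEREST_RADIUS}

# Usage: only the A-B and C-D pairs end up connected, matching the idea
# that "no connection" encodes "no concurrency".
a, b = Task("A", 0, 0), Task("B", 5, 0)
c, d = Task("C", 100, 0), Task("D", 105, 0)
world = [a, b, c, d]
for t in world:
    t.update_presence(world)
assert b in a.connections and d in c.connections
assert c not in a.connections
```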

Let me illustrate with an example:

It's Snowball Fighting day in the supergrid, avatars are throwing snowballs at each other everywhere in the whole metaverse.
When avatar A throws a snowball at avatar B, A does not need to know or care that avatar C, outside A's field of vision, is throwing a snowball at avatar D, who is also outside that field of vision, even though C and D are in the same region. Here the "field of vision" becomes the field of interaction: it determines what the task "avatar A" is going to connect to (to interact with), and tasks outside this field won't be connected to (no interaction with them).
Each snowball runs independently: it knows where it is and how fast it is going, it knows the local presence list, and it manages its own movement and collisions. It has to know the collision conditions, so it has to ask the relevant processes present there; this is some necessary interaction, but it is limited to a restricted set of processes and further reduced by the snowball's own logic. In case of collision the snowball only has to handle its own half of it; at most it notifies the collided task that they bumped together (and how hard).

Another illustration:

Let's imagine a popular game in the supergrid SL, where the players walk over a big oscillating platform to tilt it under their weight, and in this way collectively control the movement of a ball in a maze (a sort of multiplayer tilt maze). With centralised physics, one sim has to calculate all the physics (movement handling, the resulting tilt and the ball's movement) and send all of it, including dynamics updates for every other task, to each task.
With distributed physics, it only needs to gather the relevant weight-position data, already calculated by each task handling its own movement, integrate it to get the resulting tilt, then send this tilt back to each task. Each task is responsible for updating its own position and telling the other tasks, if needed. When players bump into each other, each calculates the result for itself; the tilting platform only receives updates that already include the collision handling, so it does not have to handle it itself.
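The distributed version of the platform's job can be sketched as a single integration step: player tasks report positions they have already computed themselves, and the platform only sums them into a tilt to broadcast back. The torque model (weight times offset, clamped) and all constants are illustrative assumptions.

```python
from dataclasses import dataclass

TILT_GAIN = 0.01  # radians per kg*metre of net torque (assumption)
MAX_TILT = 0.3    # physical clamp on tilt, in radians (assumption)

@dataclass
class PlayerReport:
    """What each player task sends: a position it computed itself."""
    x: float
    y: float
    weight: float

def platform_step(reports):
    """Integrate reported weights into a (tilt_x, tilt_y) to broadcast.
    The platform never simulates the players' own movement."""
    torque_x = sum(r.weight * r.x for r in reports)
    torque_y = sum(r.weight * r.y for r in reports)
    clamp = lambda v: max(-MAX_TILT, min(MAX_TILT, v))
    return clamp(TILT_GAIN * torque_x), clamp(TILT_GAIN * torque_y)

# Usage: two equal players on opposite sides cancel out on x,
# while a third player unbalances the platform along y.
tilt = platform_step([PlayerReport(-2.0, 0.0, 70.0),
                      PlayerReport(2.0, 0.0, 70.0),
                      PlayerReport(0.0, 3.0, 80.0)])
assert tilt[0] == 0.0
assert abs(tilt[1] - 0.3) < 1e-9  # 0.01 * 240 = 2.4, clamped to 0.3
```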

Does all this sound about right to you? --Jesrad Seraph 06:31, 14 December 2007 (PST)

* Maybe I should replace "connection" with "circuit"?

Rudeness

Additions or changes made without discussion in the corresponding Talk page will simply be reverted without further ado. If you have a serious point to make (as opposed to simply being a BOFH), then write the point down and give appropriate reasoning. I always examine reasoned arguments, and accept those that make logical sense. No, a catchall template will not do, sorry.

Drafts

Multi-Process_Client_VAG_--_draft

Comments should best go in Talk:Multi-Process_Client_VAG_--_draft.