User talk:Morgaine Dinova
Latest revision as of 02:42, 9 December 2008
Talk to me, somebody! :P
Visitors, add stuff here
Distributing physics
Hello Morgaine, I'd like your help with understanding the challenge of distributing physics (and any kind of interaction, ultimately) in a supergrid model of SL:
- In theory, the task of processing Second Life can be divided into as many separate processes as there are individual objects and avatars.
Interaction between those individual processes (the most basic form of concurrency) runs up against the capacity to transport, between those processes, the information their interaction requires. In simpler words: when SL is exploded into millions of parallel processes running separately from each other, the main problem becomes the interconnection needed between them. Is that right?
So the problem of ensuring that each process connects to the relevant other processes is key. This necessary interconnection increases with the concurrency of those processes: be it physical interaction or chat, it appears to grow roughly with the square of the number of concurrent items, since every pair may need a link. Right? We tend to think that regrouping the concurrent processes on the same machine eliminates much of the delay problem, and it is true, but of course it also reduces the benefit of separating the processes in the first place (making them run faster individually): optimising this regrouping, so that it adapts to the concurrency, is essential. In current SL this is done by migrating tasks to the sim that runs the region, and we've seen that it introduces a new problem: handling the sometimes massive migration load. However, this solution can be reused even when physics is distributed, either by migrating the tasks themselves just as is done today, or by migrating the sims running those tasks onto the same machine.
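To make that growth concrete, here is a quick numeric check (a hypothetical illustration of the point above, not part of the original exchange): if every pair of n concurrent tasks may need a link, the number of potential connections is "n choose 2".

  # Potential task-to-task links among n mutually visible tasks,
  # assuming every pair may need one connection.
  def potential_links(n: int) -> int:
      return n * (n - 1) // 2

  for n in (10, 100, 1000):
      print(n, potential_links(n))
  # 10 -> 45, 100 -> 4950, 1000 -> 499500: quadratic growth, already far
  # too fast to simply connect everything to everything.

Hence the appeal of opening only the links that actual interaction requires, as described below.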
Pushing the thought further, it seems SL's function is really about concurrency. It all comes down to a quest for the best compromise (between "adding more capacity for running an individual process" - more CPUs - and "keeping up with their interaction" - fighting lag) in order to produce the result of the interaction between a growing number of individual tasks. What I understand of parallelization is that we don't really need to produce one big, shared result of the sum of all interaction for all the participating tasks. It's not necessary at all, because all that each individual task needs, in the end, is its own little part of the result, not the whole. So the process of calculating the interaction should be distributed too.
In order to do that, I think the concurrency could be encoded into the states of the connections between the tasks themselves: an open connection means there is concurrency (because some interaction needs to happen), and no connection means no concurrency (because no interaction is possible or meant to happen).* The problem then becomes one of finding the proper rules by which each task knows when to open or close such connections. In my proposal this is done through "presence lists". I would love to know your opinion on this method, or whether you know of other methods that could be used as well.
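Read literally, that rule might look something like the sketch below (all the names, the radius test and the data structures are my own assumptions for illustration, not part of the proposal): each task opens a connection to every task inside its field of interaction and closes the rest.

  import math
  from dataclasses import dataclass, field

  @dataclass
  class Task:
      name: str
      pos: tuple[float, float]
      reach: float = 10.0                       # field-of-interaction radius
      links: set = field(default_factory=set)   # currently open connections

      def update_presence(self, others: list["Task"]) -> None:
          """Open links to tasks within reach, close links to the rest."""
          for other in others:
              if other is self:
                  continue
              if math.dist(self.pos, other.pos) <= self.reach:
                  self.links.add(other.name)      # concurrency: interaction possible
              else:
                  self.links.discard(other.name)  # no interaction meant to happen

  a = Task("avatar_A", (0.0, 0.0))
  b = Task("avatar_B", (5.0, 0.0))
  c = Task("avatar_C", (80.0, 80.0))
  a.update_presence([a, b, c])
  print(a.links)   # {'avatar_B'}: C is out of reach, so no link is opened

The presence list would then simply be the current set of open links, maintained incrementally as the tasks move.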
Let me illustrate with an example:
- It's Snowball Fighting day in the supergrid: avatars are throwing snowballs at each other everywhere in the whole metaverse.
- When avatar A throws a snowball at avatar B, he does not need to know or care that avatar C, outside his field of vision, is throwing a snowball at avatar D, who is not in his field of vision either, even though C and D are in the same region. Here the "field of vision" becomes the field of interaction: it determines what the task "avatar A" is going to connect to (to interact with), and tasks outside this field won't be connected to (no interaction with them).
- Each snowball runs independently: it knows where it is and how fast it is going, it knows the local presence list, and it manages its own movement and collisions. It has to know the collision conditions, so it asks the relevant processes present there for them; this is some necessary interaction, but it is limited to a restrained set of processes and further reduced by the snowball's own logic. In case of a collision the snowball only has to handle its own half of it; at most it notifies the collided task that they bumped together (and how hard). A sketch follows this illustration.
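Here is a minimal sketch of the snowball's half of that protocol (every name is invented for illustration, and a flat 2D world with point positions is assumed): the snowball advances itself, checks collisions only against the tasks on its presence list, and on impact resolves its own side while merely notifying the other task that they bumped, and how hard.

  import math
  from dataclasses import dataclass

  @dataclass
  class Avatar:
      name: str
      pos: tuple[float, float]
      radius: float = 0.5

      def notify_hit(self, speed: float) -> None:        # the other task's half
          print(f"{self.name} was hit at {speed:.1f} m/s")

  @dataclass
  class Snowball:
      pos: tuple[float, float]
      vel: tuple[float, float]
      alive: bool = True

      def step(self, dt: float, presence_list: list[Avatar]) -> None:
          # The snowball manages its own movement...
          self.pos = (self.pos[0] + self.vel[0] * dt,
                      self.pos[1] + self.vel[1] * dt)
          # ...and asks only the restrained set of tasks on its presence
          # list for collision conditions.
          for task in presence_list:
              if math.dist(self.pos, task.pos) <= task.radius:
                  task.notify_hit(math.hypot(*self.vel))  # notify the other half
                  self.alive = False                      # resolve our own half
                  return

  target = Avatar("avatar_B", pos=(1.0, 0.0))
  ball = Snowball(pos=(0.0, 0.0), vel=(10.0, 0.0))
  while ball.alive:
      ball.step(0.01, [target])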
Another illustration:
- Imagine a popular game in the supergrid SL, where the players walk over a big oscillating platform to tilt it under their weight, and in this way control - collectively - the movement of a ball in a maze (a sort of multiplayer tilt maze). With centralised physics, one sim has to calculate all the physics, movement handling and the resulting tilt and ball movement, and send it all (including dynamics updates for every other task) to each task.
- With distributed physics it only needs to gather the relevant weight position data, already calculated by each task handling its own movement, and integrate it to get the resultant tilt, then send this tilt back to each task (as sketched below). Each task is responsible for updating its own position and telling the other tasks if needed. When players bump into each other, each calculates its own side of the result, so the tilting platform only receives updates that already include the collision handling; it does not have to handle it itself.
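In code, the platform task's whole job might shrink to something like this sketch (the torque model and the stiffness constant k are my own assumptions for illustration, not taken from the discussion):

  # Each (x, y, weight) report was already computed by the player task that
  # owns it, relative to the platform centre, with any player-player
  # collisions resolved before the update arrived.
  def resultant_tilt(reports, k=0.01):
      """Integrate the players' weight positions into one (tilt_x, tilt_y)."""
      torque_x = sum(w * x for x, y, w in reports)
      torque_y = sum(w * y for x, y, w in reports)
      return (k * torque_x, k * torque_y)

  # Two players: the single integrated tilt is all the platform sends back.
  print(resultant_tilt([(2.0, 0.0, 80.0), (-1.0, 1.0, 60.0)]))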
Does all this sound about right to you? --Jesrad Seraph 06:31, 14 December 2007 (PST)
* Maybe I should replace "connection" with "circuit"?
- I've been away since early December dealing with domestic issues, Jesrad, and only saw your entry just now.
- I find this very interesting, although I'm not sure what brought distributed physics into discussion here. While it's clearly the method of the future (since it's the only implementation of physics that scales well), I've not grasped that bull by the horns at all. Instead, I've always assumed per-zone centralized physics computations, mainly for the pragmatic reason that that's what LL are doing now and are unlikely to change any time soon. This was true even in my virtualized regions design: each virtual region would still have a single physics computation to do, and even if that were computed with the aid of multiple nodes (for instance, partly on the client CPU), it would still be a single, centrally-controlled deterministic computation.
- That said, your suggestion is much more interesting, design-wise. This would necessarily end up as another distinct implementation of SL servers, I feel, and not just a distributed add-on for physics. The multi-process approach for objects and physics would impact on so many things that one might as well design the entire system that way.
- If this subject is being discussed in the Physics and Geometry VAG, I'd be interested in joining in. --Morgaine Dinova 08:19, 29 January 2008 (PST)
Rudeness
Additions or changes made without discussion in the corresponding Talk page will simply be reverted without further ado. If you have a serious point to make (as opposed to simply being a BOFH), then write the point down and give appropriate reasoning. I always examine reasoned arguments, and accept those that make logical sense. No, a catchall template will not do, sorry.
Drafts
Multi-Process_Client_VAG_--_draft
Comments should best go in Talk:Multi-Process_Client_VAG_--_draft.
Object composition
I just wanted to add that while object linking may in retrospect look like a mistake, it is important to remember that during SL's infancy it was pushing the limits of both CPU and GPU capabilities. The design considerations were different, nobody knew for sure what GPU and CPU developments were coming. -- Strife (talk|contribs) 06:39, 9 December 2008 (UTC)
- Oh, indeed Strife. But as an engineer, having once identified a problem (as Philip and Cory did), one should attempt to fix it rapidly, especially when it's such a critical and fundamental disaster as totally denying the population the ability to build from components and so take product engineering to a higher level. The fact that all SL items are still being made from raw atoms rather than from wheels and engines is nothing short of an unmitigated disaster. We have a world full of giants, and no way of getting up on their shoulders ... /me weeps. Morgaine Dinova 07:49, 9 December 2008 (UTC)