User:Gareth Ellison/Views of the Gareth

From Second Life Wiki

Gareth Nelson's views on the new grid architecture

As always I must let my views be known in a mildly eccentric manner. I shall do so upon this wiki.

So, watch this space and the rest of the wiki articles for my views.......

General considerations

  • The network protocol needs to head towards something RFC-like - a formal, open specification - to really take off
  • There need to be multiple implementations of every component of the whole system

Comments on central services

The idea of central services under the control of LL makes sense on purely technical grounds (centralise search, domain lookup, L$ and identity management), but the social and legal issues need further analysis: would LL still enforce the TOS as strictly for 3rd party sims that merely make use of these central services?

Viewer architecture

This is where end users see everything: it all starts here.

A few points as to how the viewer should function:

  • Cross-platform
    • The viewer needs to be accessible on as many platforms as possible. I am personally of the view that a minimal library of core protocol functions should be maintained as a separate project to enable fast porting to new platforms
  • Lightweight
    • The viewer as it stands is, to be blunt, a bloated mess - I believe the structure of the code could do with some improvement
    • On top of the code structure, the system requirements are too high
  • Graceful degradation of content
    • Tying in with the previous point - the viewer is currently all or nothing: you either get all the fancy new features or you get nothing. This needs to be replaced with something more graceful

Scaling up individual regions

I distributed a notecard in-world with this: Let's say we have a standard 256x256 region. We want to fit 500 users into it. OpenSim's average benchmarks, scaled up, give a rough estimate of 100 users per sim. This means that for 500 users we need 5 sims, regardless of the geographical/spatial configuration.

So, we have a 256x256 area of land to fit our 500 people into, using 5 sims.

256x256 = 65536m2
65536 / 5 ≈ 13107m2 per sim

Roughly 13107m2 of land can be handled by a single sim instance holding up to 100 users. Laying this out is more complex, but I would suggest partitioning it with simple equal divisions, cutting any non-regular region borders into regular geometric shapes to make this easier. A standard BSP algorithm will do for the more complex region border shapes. For a basic 65536m2 square-shaped border this works:

        N
  +-----------+
  |   sim 1   |
  +-----------+
  |   sim 2   |
  +-----------+
W |   sim 3   | E
  +-----------+
  |   sim 4   |
  +-----------+
  |   sim 5   |
  +-----------+
        S

(Excuse the crap ASCII art)
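The arithmetic and the equal-division layout above can be sketched in a few lines of Python. This is a minimal illustration only - the strip layout, sim numbering (1 = northmost), and `sim_for_position` helper are my assumptions for the sketch, not part of any actual grid implementation:

```python
import math

# Figures from the text above
REGION_SIZE = 256      # metres per side of a standard region
USERS_PER_SIM = 100    # rough OpenSim-derived estimate
TARGET_USERS = 500

num_sims = math.ceil(TARGET_USERS / USERS_PER_SIM)    # 5 sims
area_per_sim = REGION_SIZE * REGION_SIZE / num_sims   # ~13107 m^2 each
strip_height = REGION_SIZE / num_sims                 # 51.2 m per strip

def sim_for_position(x, y):
    """Map a position (metres) to a sim index, 1 = northmost strip.
    Assumes y = 0 is the north edge of the region."""
    return min(int(y // strip_height) + 1, num_sims)
```

With equal north-south strips, deciding which sim owns an avatar is a single division - no BSP tree is needed until the region border stops being a simple square.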

Now, if an avatar starts in our region at the north in sim 1 and walks south, they will have to cross 4 borders. Presuming that the actual crossing is just another UDP packet, like the standard AgentUpdate, the overhead of these crossings is updating each sim's internal data structures.

In real terms this involves the old sim downgrading the agent's thread to a child agent and the new one upgrading the child agent to a full agent and setting co-ordinates. Other overhead related to moving the avatar around is ignored as irrelevant (it exists regardless of whether you walk around inside one sim or across into another).

So, the client has to wait 4 times for the following operations:

  1. Boolean flag flip on the old sim to downgrade
  2. Extra "this avatar has moved sim" packets to other clients
  3. Boolean flag flip on the new sim to upgrade
  4. Extra "this avatar just arrived in local sim" packets to other clients on the new sim

Without profiling figures to hand, the only real bottleneck here is informing other clients. However, note that this is essentially the same overhead as ordinary AgentUpdates anyway.

An avatar is either moving locally or moving sim-to-sim, never both at the same time, and each state requires 1 packet per move.

What this means ultimately is that the overhead of sim crossings turns out to be nothing but a simple bit flip operation on a properly designed sim - a single instruction on modern CPUs!
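To make the claim concrete, here is a minimal sketch of steps 1 and 3 of a crossing - the downgrade/upgrade flag flips. The `Agent` and `Sim` classes and their field names are hypothetical illustrations, not the Second Life or OpenSim API; the client-notification packets (steps 2 and 4) are omitted because, as noted above, they cost the same as ordinary AgentUpdates:

```python
class Agent:
    """Hypothetical per-sim agent record. A child agent is a stub
    held by a neighbouring sim; a full agent is the live one."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.is_full_agent = False   # child agent until upgraded
        self.x = self.y = 0.0

class Sim:
    def __init__(self, name):
        self.name = name
        self.agents = {}             # agent_id -> Agent

    def downgrade(self, agent_id):
        # Old sim: demote the full agent back to a child agent
        self.agents[agent_id].is_full_agent = False

    def upgrade(self, agent_id, x, y):
        # New sim: promote its child agent and set coordinates
        agent = self.agents[agent_id]
        agent.is_full_agent = True
        agent.x, agent.y = x, y

def cross_border(old_sim, new_sim, agent_id, x, y):
    old_sim.downgrade(agent_id)      # one flag flip on the old sim
    new_sim.upgrade(agent_id, x, y)  # one flag flip (+ coords) on the new sim
```

The per-sim work of a crossing really is just the two boolean assignments; everything else (position updates, notifying nearby clients) happens at the same rate whether or not a border is involved.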

Contrast this with how much network overhead would result from distributed physics and the complex interactions between different objects simulated on different network hosts.