User:Jesrad Seraph

Jesrad Seraph tries to help with the Architecture Working Group but is not a native English speaker, so technical vocabulary may be off or unusual on this page and in contributions. Work is limited to workflow design and does not enter debates on the underlying technology used for transmitting, storing or formatting data.

General architecture considerations

  • Client computers constitute a significant reserve of processing power, which has the advantage of scaling with the number of concurrent users, so any smart architecture design has to tap into it.
  • Clients' network connections constitute a significant reserve of bandwidth, which also scales - on global average - with the number of concurrent users, so any smart architecture design has to make use of it.
  • Any realistic current or future Grid design involves a large number of computers (large here meaning more than 64), which puts Grid processing into the supercomputing category of computational problems, so supercomputing best practices are advised as guidelines.
  • Basically, Grid processes should be as parallel as possible, and the "global grid state" - the sum of the states of all the variables that make up the Grid - should be manipulated through a NUMA-like method, with each process working mainly on its own local slice of the state (see the sketch after this list).
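
To make that partitioned, NUMA-like manipulation of the "global grid state" concrete, here is a minimal Python sketch. This is not AWG or Linden Lab code; names such as Entity and step_partition are invented for illustration. Each worker owns one partition of the entities and only writes its own slice of the state, so per-entity updates run in parallel with no shared writes.

 from concurrent.futures import ProcessPoolExecutor
 from dataclasses import dataclass
 
 @dataclass
 class Entity:
     entity_id: int
     position: tuple   # (x, y, z) in metres
     velocity: tuple   # (vx, vy, vz) in metres per second
 
 def step_entity(entity, dt=0.1):
     # Advance one entity independently of all the others.
     x, y, z = entity.position
     vx, vy, vz = entity.velocity
     return Entity(entity.entity_id,
                   (x + vx * dt, y + vy * dt, z + vz * dt),
                   entity.velocity)
 
 def step_partition(entities):
     # Each worker advances only the entities it owns: no shared writes.
     return [step_entity(e) for e in entities]
 
 if __name__ == "__main__":
     # Split the Plethora of automatons into one partition per worker.
     entities = [Entity(i, (float(i), 0.0, 0.0), (1.0, 0.0, 0.0)) for i in range(1000)]
     partitions = [entities[i::4] for i in range(4)]
     with ProcessPoolExecutor(max_workers=4) as pool:
         new_state = [e for part in pool.map(step_partition, partitions) for e in part]
     print(len(new_state), "entities stepped in parallel")

On a real Grid the partitions would live on different machines (and hopefully, at some point, on clients), but the property that matters is the same: each piece of state has exactly one writer.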

Specific considerations

  • Grid processes can, theoretically, be parallelized down to the individual rezzed object, individual avatar, individual script and individual parcel. Each of these entities can be seen as a distinct finite-state automaton running in parallel with the others. Making SL scalable means pooling all the resources of the computers involved in Grid processing (hopefully including, at some point, the clients' processing power) in order to run this Plethora of automatons; in other words, making SL as scalable as possible amounts to designing on-demand, dynamic virtualization for these entities.
  • Physics, and specifically collisions, are the one big obstacle on the road to massive parallelization. The current SL implementation mitigates the computational explosion of collisions across the Plethora through geographical subdivision of the Grid processes (Regions), and through further geographical subdivision by octree separation pre-processing (see the sketch below).
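
As an illustration of that geographical culling, here is a small Python sketch. The current implementation uses octree separation; a simpler uniform spatial hash stands in for it here, but the principle it shows is the same: only objects sharing a cell become candidate collision pairs, so the pairwise explosion is pruned by locality. The cell size and function names are arbitrary choices for the sketch.

 from collections import defaultdict
 from itertools import combinations
 
 CELL = 8.0  # cell size in metres, an arbitrary choice for this sketch
 
 def cell_of(position):
     # Map an (x, y, z) position to the integer coordinates of its grid cell.
     return tuple(int(c // CELL) for c in position)
 
 def candidate_pairs(objects):
     # objects: dict of object_id -> (x, y, z).
     # Bucket objects by cell, then only pair up objects sharing a cell.
     # (A real broad phase would also check neighbouring cells; omitted here.)
     buckets = defaultdict(list)
     for oid, position in objects.items():
         buckets[cell_of(position)].append(oid)
     pairs = set()
     for ids in buckets.values():
         pairs.update(combinations(sorted(ids), 2))
     return pairs
 
 if __name__ == "__main__":
     objects = {1: (1.0, 1.0, 20.0), 2: (2.0, 3.0, 20.0), 3: (200.0, 5.0, 20.0)}
     print(candidate_pairs(objects))   # only 1 and 2 share a cell: {(1, 2)}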

My proposal for SL architecture

I've written down most of what I advocate for SL architecture design on a dedicated page: AWG Scalability through per-resident subdivision of the Grid.

I'll try to make a clearer, shorter presentation here, maybe using pictures.

  • The idea: instead of running SL content on the machine that is bound to its virtual location, run it on the machine that is bound to the owner of that content (a small routing sketch follows this list).
  • Instead of one sim per region, there is one sim per land owner, which runs that person's content gridwide. This means stacking more sims on the same number of machines, but each of them carries a much lighter load.
  • Pros:
      • SL scales much better.
      • Your inworld content is accessible gridwide, just like your inventory.
      • No more region-crossing lag, because the whole Grid is your region.
      • You can make someone and all their belongings disappear entirely from your Second Life, and not waste another kilobyte of bandwidth or disk space on them.
      • You can at long last enjoy perfect privacy in SL, even on the main continents.
      • You can travel freely anywhere, even off-world, or over (or beneath) the oceans.
      • No more red ban lines: the places you are not allowed in simply appear not to be there at all, save for the bare ground.
      • No more autoreturn: your stuff can stay anywhere forever as long as you have the corresponding land somewhere in the Grid; and no more need for autoreturn, because other people can "disappear" your stuff from their own view with a single click, and it does not enter anyone else's prim count.
      • No more need for "no-script" zones, since your scripts run on your land and not the host's.
      • Your prim count is global; it stays valid anywhere in the Grid.
      • You only lag as much as you can afford: being in a crowded place makes you lag more, but you can cap that effect.
      • With some more work, it becomes possible to decentralize the Grid entirely and host your own or other people's content.
      • You may customize your ground textures per parcel.
      • You no longer collide with non-rezzed stuff.
      • Login is faster.
  • Cons:
      • You have a limit on the number of running scripts.
      • In crowded places you may not be able to see all of your present friends if you max out your client's capacity.
      • Stuff may take longer before it starts rezzing.
      • Shooting (projectiles) may not work as well, or at all, unless you are shooting your own stuff*.
      • Sensors may be slower, or even broken*.
      • Collision events may be slower or broken for static objects, except when triggered by your own objects*.

* unless the interacting owners have sims that run on the same machine or on closely-networked machines
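
To illustrate the core change, here is a hypothetical Python sketch; none of these names correspond to an actual Linden Lab interface, they only contrast the two ways of picking which sim runs an object. Under per-region subdivision the key changes whenever the object crosses a region border; under per-owner subdivision the key depends only on the owner, so the same sim keeps running that resident's content wherever it goes in the Grid.

 from dataclasses import dataclass
 
 @dataclass
 class GridObject:
     object_id: int
     owner_id: str
     region: str          # e.g. "Ahern"
 
 # Current scheme: one sim per region, keyed on where the object sits.
 def region_sim_for(obj):
     return "sim-for-region:" + obj.region
 
 # Proposed scheme: one sim per owner, keyed on who owns the object,
 # so the same sim runs that resident's content gridwide.
 def owner_sim_for(obj):
     return "sim-for-owner:" + obj.owner_id
 
 if __name__ == "__main__":
     chair = GridObject(42, "jesrad.seraph", "Ahern")
     print(region_sim_for(chair))   # sim-for-region:Ahern
     print(owner_sim_for(chair))    # sim-for-owner:jesrad.seraph

This keying is also why region-crossing lag disappears in the proposed scheme: moving the chair to another region changes nothing about which sim simulates it.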

Ethics and the designs that make use of client-side processing and bandwidth

Any architecture design that makes use of client-side processing power (distributed processing, virtualization) and/or bandwidth (load-balancing, swarm caching, proxies) has to take into account the fact that clients are not necessarily willing to dedicate that power and bandwidth. In fact, Austrian economics teaches us that implementing proper (as in property-based) incentives leads to a bigger and better-managed pool of resources.

Distributing physics: let's not miss the point

The main argument against distributing physics processing in SL goes like this: collisions mean combinatorial explosion (with n dynamic objects in range of each other there are n(n-1)/2 potential collision pairs, each needing its state kept in sync across machines), and bandwidth is much scarcer than processing power for handling these collisions, so distributing physics trades processing power for bandwidth and depletes the scarcer resource first, making distributed physics slower and less scalable in the end.

I disagree with this view because I think it thoroughly misses the point. The point of Grid processing is simulating an environment for the residents to live their Second Life in, not solving complicated physical-simulation problems. In simpler words: if making the physics look good is an obstacle to making SL work at all, then off with the physics' head. I view collision/interaction handling as a secondary, best-effort service compared to the rest of SL, and I tend to think the bulk of residents hold a comparable view, as indicated by the fact that LL went as far as implementing mechanisms for private estate managers to disable physics and collisions entirely on their land.

Secondly, outside of collisions/interactions, physics can be distributed down to the individual-object level just as well as the rest: only dynamic objects that are not already separated are concerned by the bandwidth limitation mentioned above. Static content is, well, static, and does not contribute to the combinatorial explosion.

Lastly, a cascading model for physical interaction could be used, in which physical effects are one-way only (no longer interaction, but action without reaction). This also prevents combinatorial explosion, while still allowing interesting uses (in the Grid-resident sense).
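
A minimal sketch of what such a one-way cascade could look like, assuming hypothetical names (OwnerSim, Hit): the receiving sim applies an incoming impulse to its own object and never computes a reaction back onto the remote sender, so no pair of sims has to negotiate a shared collision state.

 from dataclasses import dataclass
 
 @dataclass
 class Hit:
     target_id: int        # object owned by the receiving sim
     impulse: tuple        # (ix, iy, iz) pushed onto the target
 
 class OwnerSim:
     def __init__(self):
         self.velocities = {}   # object_id -> (vx, vy, vz)
 
     def apply_incoming(self, hit):
         # One-way: update our own object, send nothing back to the sender.
         vx, vy, vz = self.velocities.get(hit.target_id, (0.0, 0.0, 0.0))
         ix, iy, iz = hit.impulse
         self.velocities[hit.target_id] = (vx + ix, vy + iy, vz + iz)
 
 if __name__ == "__main__":
     sim = OwnerSim()
     sim.velocities[7] = (0.0, 0.0, 0.0)
     sim.apply_incoming(Hit(7, (0.5, 0.0, 0.0)))   # a remote object bumped object 7
     print(sim.velocities[7])                      # the sender's state is untouched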