Client-side Scripting for HUDs and Widgets

This page summarizes the discussion held at the UEIG meeting of 2008-11-13 about client-side viewer extensions to deliver a far more powerful form of HUD mechanism than we have today.

This initial version is little more than a slightly polished extract of the chat log, but hopefully it will improve and be fleshed out with details if there is interest. Everyone's contributions have been merged into a general description of the concept.

General description

HUDs should really run client-side, and only make API calls to attachments occasionally when they need data or need to perform an in-world action. Having a HUD object reside in SL and perform its visual manipulations in-world is all wrong, because that is a very indirect way of achieving the desired goal of rendering a client-side visual.

Although the desired rendering of a HUD is client-side, the job of a HUD almost always involves interacting with the SL world as well. It would be the purpose of the API mentioned above to allow client-side programming to interact with SL-side objects which are programmed on the server. The in-world HUD attachments would become HUD gateways in this new architecture, acting as gateways between client-side programming and SL-side events.
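
As a rough illustration of the gateway idea, the sketch below shows what the in-world half of such a gateway might look like in LSL. The trigger is a stand-in: today a request can only arrive through an ordinary in-world event (a touch is used here), whereas under this proposal it would arrive from the client-side HUD script through the new API. The data-gathering and reporting calls (llSensor, llOwnerSay) are existing LSL functions; everything about how the client side reaches this script is an assumption of the sketch.

 // Sketch of the in-world half of a "HUD gateway" attachment.
 // The touch_start trigger is only a stand-in: under the proposal the
 // request would come from the client-side HUD script via the new API.
 default
 {
     touch_start(integer num_detected)
     {
         // Gather some in-world data on behalf of the client-side HUD:
         // look for avatars within 20 m of the wearer.
         llSensor("", NULL_KEY, AGENT, 20.0, PI);
     }
 
     sensor(integer num_detected)
     {
         // Report back toward the viewer.  llOwnerSay is the closest thing
         // available today; the proposal would replace it with a direct
         // callback into the client-side script.
         integer i;
         string report = "Nearby avatars:";
         for (i = 0; i < num_detected; ++i)
         {
             report += "\n" + llDetectedName(i);
         }
         llOwnerSay(report);
     }
 
     no_sensor()
     {
         llOwnerSay("No avatars within range.");
     }
 }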

Most commonly, clients interact with in-world objects either by triggering one of a small set of in-world events (touch, sit, etc.) or by issuing text messages on chat channels. The set of in-world events available for direct client control is small, fixed, and not particularly flexible, and worse still, these events can normally be triggered from the client side only by manual actions of the human operating the UI. Control through chat channels is more flexible, but it requires stream parsing by in-world scripts and therefore carries significant overhead, and it too is normally available on the client side only to the human operator.
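
The overhead of the chat-channel path is easy to see in LSL. The fragment below is a minimal sketch of how a present-day HUD script accepts typed commands: the owner says, for example, "/5 show" in chat, and the script must listen for, parse, and dispatch every message itself. The channel number and command names are arbitrary choices for the illustration.

 // Minimal sketch of today's chat-channel control path.
 // The owner types e.g. "/5 show" or "/5 hide" in the chat bar.
 integer CMD_CHANNEL = 5;   // arbitrary positive channel for this example
 
 default
 {
     state_entry()
     {
         // Listen only to the owner, to avoid acting on other people's chat.
         llListen(CMD_CHANNEL, "", llGetOwner(), "");
     }
 
     listen(integer channel, string name, key id, string message)
     {
         // Every command arrives as free text and must be parsed by the
         // in-world script: this is the per-message overhead noted above.
         list parts = llParseString2List(message, [" "], []);
         string cmd = llList2String(parts, 0);
 
         if (cmd == "show")
         {
             llSetAlpha(1.0, ALL_SIDES);
         }
         else if (cmd == "hide")
         {
             llSetAlpha(0.0, ALL_SIDES);
         }
     }
 }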

Client-side HUD programming would allow for smoother, better interaction, as well as the use of standard client UI widgets. The ergonomics (and hence the user experience) of this approach would undoubtedly be orders of magnitude better than with in-world HUDs, not only because widgets would be consistent with platform UI standards, but also because of the far greater CPU and memory resources available on the client machine.

The update rate of client-powered HUDs wouldn't be at the mercy of sim CPU, nor would HUD graphics programming add load to the sim, since the actual HUD view is purely local. The SL-side HUD attachments would still place some load on the sim when gathering data or interacting with other objects, but in general this remaining load would be greatly reduced compared to that of current HUDs.

It is worth noting that client CPU scales with client population, whereas sim CPU doesn't, so it's not good for scalability to be running the HUD visualization remotely on the sim.

There is also the matter of programming language. HUDs currently have to be programmed in LSL, since that is the only scripting language available on the SL side, whereas a plethora of far more suitable languages are available on client machines; languages such as JavaScript, for example, offer far better event handling.