Client-side Scripting for HUDs and Widgets

This page summarizes the discussion held at the UEIG meeting of 2008-11-13 about client-side viewer extensions that would deliver a far more powerful HUD mechanism than the one we have today.

This initial version is little more than a slightly polished extract of the chat log, but hopefully it will improve and be fleshed out with details if there is interest. Everyone's contributions have been merged into a general description of the concept.

Introduction

HUDs should really run client-side, and only make API calls to attachments occasionally when they need data or need to perform an in-world action. Having a HUD object reside in SL and perform its visual manipulations in-world is all wrong, because that is a very indirect way of achieving the desired goal of rendering a client-side visual.

Although the desired rendering of a HUD is client-side, the job of a HUD almost always involves interacting with the SL world as well. The purpose of the API mentioned above would be to let client-side programs interact with SL-side objects, which are scripted on the server. In this new architecture the in-world HUD attachments would become HUD gateways, bridging client-side programming and SL-side events.
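
As a rough illustration of that split, the sketch below assumes a hypothetical Python binding exposed by the viewer (called viewer_api here) and a gateway attachment that answers data requests and performs in-world actions; none of these names or calls exist today.

    # Hypothetical sketch of the HUD-gateway split: the client script owns the
    # display and only calls out to the in-world gateway attachment when it needs
    # data or an action performed inside SL.  The viewer_api module and all of
    # its calls are assumptions for illustration, not an existing interface.
    import viewer_api  # hypothetical client-side binding exposed by the viewer

    def on_health_update(event):
        # Purely local rendering work: no sim resources are consumed here.
        viewer_api.hud.draw_text("health", f"Health: {event['value']}%")

    def heal_button_clicked(_event):
        # An in-world action: forwarded through the gateway attachment, which is
        # the only part of the HUD that still runs on (and loads) the sim.
        viewer_api.gateway.send("heal", avatar=viewer_api.self_avatar_id())

    # Register the callbacks and let the viewer drive them as events arrive.
    viewer_api.gateway.on("health_update", on_health_update)
    viewer_api.hud.button("Heal").on_click(heal_button_clicked)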

Events and user-object interaction

Most commonly, clients interact with in-world objects by triggering one of a small set of in-world events (touch, sit, etc.), or else by issuing text messages on chat channels. The set of in-world events available for direct control by clients is small, fixed, and not particularly flexible; worse, these events can normally be triggered from the client side only by manual actions of the human UI operator. Control through chat channels is more flexible, but it requires stream parsing by in-world scripts, which carries a significant overhead, and it too is normally available client-side only to the human UI operator.

User Experience benefits

Client-side HUD programming would allow smoother, better interaction, as well as the use of standard client UI widgets. The ergonomics (and hence the user experience) of this approach would undoubtedly be orders of magnitude better than with in-world HUDs, not just because the widgets would be consistent with platform UI standards but because of the far greater CPU and memory resources available on the client machine.

Reducing sim loading and increasing scalability

The update rate of client-powered HUDs wouldn't be at the mercy of sim CPU, nor would HUD graphics programming add load to the sim, since the actual HUD view is purely local. The SL-side HUD attachments would still create some sim loading when gathering data or interacting with other objects, but in general this remaining load would be greatly reduced compared to the loading from current HUDs.

It is worth noting that client CPU scales with client population, whereas sim CPU doesn't, so it's not good for scalability to be running the HUD visualization remotely on the sim.

Programming languages and riding on the shoulders of giants

The availability of powerful programming languages and programming facilities client-side would have a massive impact. HUDs currently have to be programmed in LSL since that is the only scripting language available SL-side, whereas a plethora of far more suitable languages are available on client machines. Better event handling, as found in browser JavaScript and other languages, would make handling events in HUDs less contorted, and the availability of powerful libraries would greatly simplify HUD coding compared to LSL's very primitive facilities for code reuse.

As an example, think of a radar HUD display, permanently keeping you aware of interesting properties of everything around you. From the sim side, all it would need would be input sensing, whereas locally it could use all the powerful graphics facilities of the client machine to generate a magnificent radar display with 3D navigation, zooming, dynamic colour-coding, radar objects hyperlinked to their properties in a data panel, and anything else that the imagination can conjure up. This would be effectively impossible to do in a current HUD attachment programmed in LSL.
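
A minimal sketch of how such a radar might be wired up, under the same assumption of a hypothetical viewer_api client binding (the in-world gateway attachment does nothing but the sensing):

    # Hypothetical client-side radar: the gateway attachment only reports what it
    # has sensed, while all classification, colour-coding and drawing happens
    # locally.  Every viewer_api call here is an assumption for illustration.
    import viewer_api

    ZOOM = 1.0  # metres-per-pixel scale, adjusted locally without touching the sim

    def colour_for(obj):
        # Local, arbitrarily rich classification logic of a kind LSL cannot offer.
        return {"avatar": "green", "vehicle": "yellow"}.get(obj["type"], "grey")

    def on_scan(event):
        canvas = viewer_api.hud.canvas("radar")
        canvas.clear()
        for obj in event["objects"]:  # assumed: [{'name', 'type', 'pos': (x, y, z)}, ...]
            x, y, _ = obj["pos"]
            dot = canvas.draw_dot(x / ZOOM, y / ZOOM, colour=colour_for(obj))
            # Hyperlink each radar dot to a local data panel showing its properties.
            dot.on_click(lambda _e, o=obj: viewer_api.hud.show_panel("details", o))

    viewer_api.gateway.on("scan_result", on_scan)  # the sim side only does the sensing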

Control widgets and dialogs

Client-side widgets would provide a great number of powerful features for user control: sliders and dials operated with the pointer, buttons and other controls with mouseover highlights and better tooltips, and so on. But this is only where the improvement begins; the real power would come from all client-side actions being scriptable, because the UI actions would work through an API that is also available to scripts, or indeed to any language bound to that API.
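
For instance, a slider created from script would go through the same (hypothetical) API as a slider dragged by the user, so the two are interchangeable:

    # Hypothetical sketch: one API serves both interactive use and scripting.
    import viewer_api

    slider = viewer_api.hud.slider("volume", minimum=0, maximum=100, value=50)

    # A human dragging the slider and a script setting it take the same code path.
    slider.on_change(lambda e: viewer_api.gateway.send("set_volume", level=e["value"]))

    slider.set_value(75)  # scripted UI action: fires the same on_change handler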

Currently, in-world objects (including HUDs) often use the blue dialog script interface for user interaction, but this is a very basic and primitive client-side widget and does little for the user experience. If the event that is sent to the client to display the blue dialog were instead sent to a bindable client endpoint, then binding a scripted callback to that endpoint would allow a far more pleasant and powerful interaction, with a custom layout of buttons, sliders, arbitrary graphics, and so on. The incoming event would still default to the blue dialog if the endpoint had not been rebound to a user-defined handler, so the approach is backwards compatible.
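
Under the same assumptions, rebinding that endpoint might look something like the sketch below; the endpoint name and every viewer_api call are invented for illustration.

    # Hypothetical sketch of rebinding the script-dialog endpoint.  If no script
    # ever calls rebind(), the viewer's default handler still shows the familiar
    # blue dialog, so existing content keeps working unchanged.
    import viewer_api

    def fancy_dialog(event):
        # Custom layout instead of the stock blue dialog: arbitrary widgets, but
        # replies still go back on the chat channel the in-world script listens to.
        panel = viewer_api.hud.panel("dialog", title=event["message"])
        for label in event["buttons"]:
            panel.button(label).on_click(
                lambda _e, text=label: viewer_api.chat.say(event["channel"], text)
            )
        panel.slider("amount", 0, 100).on_change(
            lambda e: viewer_api.chat.say(event["channel"], str(e["value"]))
        )
        panel.show()

    viewer_api.endpoints.rebind("script_dialog", fancy_dialog)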

General client-side scripting and SL-API interactions

In the preceding sections we have been talking mainly about client-side programming for HUDs, but it is important to note that this is just a subset of the more general issue of client-side scripting talking to a viewer API that communicates with the SL world. Those client-side scripts that produce widget-type graphics related to SL activities are (loosely) thought of as HUDs, but such scripts could perform important and useful functions without any graphic display at all. A text-to-speech script taking its input from the API bindings for chat is a good example.
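
As a sketch of that idea, again assuming the hypothetical viewer_api binding and using an external speech program (espeak, purely as an example of something already installed on the client):

    # Hypothetical sketch: no HUD graphics at all, just a chat-event binding piped
    # into a locally installed text-to-speech command.  viewer_api is an assumed
    # binding; espeak is only one example of a client-side speech synthesizer.
    import subprocess
    import viewer_api

    def speak_chat(event):
        # The event is assumed to carry the speaker's name and the chat text.
        subprocess.run(["espeak", f"{event['from_name']} says {event['message']}"])

    viewer_api.chat.on("nearby_chat", speak_chat)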

If this kind of scheme were to be adopted, what kinds of events could provide useful bindings for client-side scripting? Lacking the clairvoyance to know what script developers might want to do, the best approach would probably be to send ALL incoming events to API callback endpoints, and simply bind the existing viewer handlers to those endpoints by default. This delivers total backwards compatibility, total flexibility, and at the same time uniformity.
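
One way to picture this, independent of any particular viewer API: every incoming event type gets an endpoint, and each endpoint starts out bound to the viewer's existing handler. The sketch below is self-contained and purely illustrative.

    # Hypothetical sketch of the "everything is an endpoint" dispatch model.  Each
    # incoming event type has an endpoint; the viewer pre-binds its own built-in
    # handler, and scripts may rebind at will.

    class Endpoint:
        def __init__(self, default_handler):
            self.handler = default_handler  # the viewer's built-in behaviour

        def rebind(self, handler):
            self.handler = handler          # a script-supplied replacement

        def dispatch(self, event):
            self.handler(event)

    # The viewer would register one endpoint per event type it already handles,
    # so an unmodified install behaves exactly as it does today.
    endpoints = {
        "script_dialog": Endpoint(lambda e: print("show blue dialog:", e)),
        "nearby_chat":   Endpoint(lambda e: print("append to chat history:", e)),
    }

    # A user script swaps in its own handler for one event and leaves the rest alone.
    endpoints["script_dialog"].rebind(lambda e: print("custom dialog UI:", e))

    endpoints["nearby_chat"].dispatch({"message": "hello"})      # default behaviour
    endpoints["script_dialog"].dispatch({"message": "choose"})   # rebound behaviour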