Client-side Scripting for HUDs and Widgets

From Second Life Wiki

This page summarizes the discussion held at the UXIG meeting of 2008-11-13 about client-side viewer extensions to deliver a far more powerful form of HUD mechanism than we have today.

This initial version is little more than a slightly polished extract of the chat log, but hopefully it will improve and be fleshed out with details if there is interest. Everyone's contributions have been merged into a general description of the concept.

Client-side Scripting for HUDs

Basic model and premise

  • HUD programs should run primarily client-side, and only make API calls to attachments occasionally when they require data or need to perform an in-world action.
  • Having a HUD object reside in SL and perform its visual manipulations in-world is slow, inefficient, and underpowered, because this is a very indirect way of achieving the desired goal of rendering a client-side visual.

Although the desired rendering of a HUD is client-side, the job of a HUD almost always involves interacting with the SL world as well. It would be the purpose of the API mentioned above to allow client-side programming to interact with SL-side objects which are programmed on the server. The in-world HUD attachments would become HUD gateways in this new architecture, acting as gateways between client-side programming and SL-side events received by attachments.
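The gateway pattern described above could look something like the following sketch. Everything here is hypothetical: `slGateway`, its methods, and the event names are illustrative inventions, not a real viewer API. The idea is simply that a client-side script registers a callback, and the in-world attachment later relays the sim's answer to it.

```javascript
// Hypothetical sketch of the HUD-gateway pattern. "slGateway" and its
// methods are illustrative names only, not a real viewer API.
const slGateway = {
  handlers: {},
  // A client-side HUD script registers interest in sim-side data.
  request(kind, onReply) { this.handlers[kind] = onReply; },
  // Called when the in-world attachment relays a reply from the sim.
  deliver(kind, data) {
    const h = this.handlers[kind];
    if (h) h(data);
  }
};

// The HUD script asks its attachment for nearby avatars...
let latest = null;
slGateway.request("nearby_avatars", (list) => { latest = list; });

// ...and the attachment later delivers the sim-side answer.
slGateway.deliver("nearby_avatars", ["Alice Resident", "Bob Linden"]);
```

The rendering of `latest` would then happen entirely client-side, with no further sim involvement.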

Events and user-object interaction

Most commonly, clients interact with in-world objects by triggering one of a small set of in-world events (touch, sit, etc.), or by issuing text messages on chat channels.

The number of in-world events available for direct control by clients is small, fixed, and not particularly flexible, and even worse, these events can (normally) only be triggered from client-side by manual actions of the human UI operator. Control through chat channels is more flexible, but it requires stream parsing by in-world scripts and hence has a significant overhead. It is also not as friendly as a graphical interface, and is not particularly friendly as a machine interface either.

At the moment, all HUD construction and programming has to be done through prims, but prims just complicate the issue of building HUDs, since such 3D in-world objects have very little to do with the 2D display that is being designed. Likewise, HUD programming in the 3D world context is useful for in-world object sensing and control, but is not at all useful for the job of generating HUD graphics displays unless they are quite elementary.

User Experience benefits

Client-side HUD programming would allow smoother, richer interaction, as well as the use of standard client UI widgets. The ergonomics (and hence the user experience) of this approach would be far better than with in-world HUDs, not just because widgets would be consistent with platform UI standards, but because of the far greater CPU and memory resources available on the client machine.

One very important improvement in User Experience will result from greatly reduced HUD interaction latency owing to the avoidance of the round-trip-time from client to server and back. This is examined further down.

Reducing sim loading and increasing scalability

The update rate of client-powered HUDs wouldn't be at the mercy of sim CPU, nor would HUD graphics programming add load to the sim, since the actual HUD view is purely local. The SL-side HUD attachments would still create some sim loading when gathering data or interacting with other objects, but in general this remaining load would be greatly reduced compared to the loading from current HUDs.

It is worth noting that client CPU scales directly with client population, whereas sim CPU doesn't, so it's not good for scalability to be running the HUD visualization remotely on the sim.

Programming languages and riding on the shoulders of giants

The availability of powerful programming languages and programming facilities client-side can be expected to have a massive impact if it can be harnessed.

HUDs currently have to be programmed in LSL, since that is the only scripting language/API available in SL, whereas a plethora of far more suitable languages is available on client machines. Better event handling, as found in browser JavaScript and other languages, would make handling events in HUDs less contorted, and the availability of powerful libraries would greatly simplify HUD coding compared to LSL's very primitive facilities for code reuse.

As an example, think of a radar HUD display, permanently keeping you aware of interesting properties of everything around you. From the sim side, all it would need would be input sensing, whereas locally it could use all the powerful graphics facilities of the client machine to generate a magnificent radar display with 3D navigation, zooming, dynamic colour-coding, radar objects hyperlinked to their properties in a data panel, and anything else that the imagination can conjure up. This would be effectively impossible to do in a current HUD attachment programmed in LSL.
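The division of labour in the radar example can be made concrete with a small sketch. Assume (hypothetically) that the in-world attachment supplies positions in metres relative to the wearer; all the geometry and drawing then happens client-side. The function name and parameters below are illustrative inventions.

```javascript
// Illustrative sketch only: mapping sim-side sensor data (positions in
// metres relative to the wearer) onto a 2D radar widget drawn entirely
// client-side. The function name and parameters are hypothetical.
function toRadarPoint(pos, range, radius) {
  // pos: {x, y} in metres from the wearer; range: sensor range in metres;
  // radius: radar widget radius in pixels. Screen y grows downward.
  return {
    x: (pos.x * radius) / range,
    y: -(pos.y * radius) / range,
  };
}

// An avatar 48 m east and 24 m south, on a 96 m sensor and a 100 px radar:
const blip = toRadarPoint({ x: 48, y: -24 }, 96, 100);
// blip is { x: 50, y: 25 }
```

Zooming, colour-coding, and hyperlinking each blip to a data panel would all be ordinary client-side graphics work, with no round trip to the sim.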

Another good example is the viewer's Statistics Bar. Such a display would be far beyond the capabilities of programming as an in-world HUD, yet fairly simple to do as a scripted graphics application bound to local client events for displaying client data, and communicating with an (invisible) attachment in-world for obtaining simulator data.

JavaScript would be a good starting point for API binding, in that it is already available in the viewer as a module of the browser, and is also very popular. In any case, we need a client-side API to which any client-side language can be bound -- a JavaScript VM would be just one example of that.

Control widgets and dialogs

Client-side widgets would provide a great number of powerful features for user control: sliders and dials operated with the pointer, buttons and other controls with mouseover highlights and better tooltips, and so on. But this is only where improvement begins --- the real power will come from all client-side actions being scriptable, because UI actions would work through an API that is also available to scripts, or indeed to any language bound to that API.

Currently, in-world objects (including HUDs) often use the blue dialog script interface for user interaction, but this is a very basic and primitive client-side widget, and does not contribute to a good user experience. If the event that is sent to the client to display the blue dialog were instead sent to a bindable client endpoint, then binding a scripted callback to that endpoint would allow a far more pleasant and powerful interaction, with a custom layout of buttons, sliders, arbitrary graphics, and so on. The incoming event would still default to the blue dialog if the endpoint had not been rebound to a user-defined handler, so the approach is backwards compatible.
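The default-with-rebinding behaviour can be sketched in a few lines. All names here are hypothetical (the viewer would ship `blueDialog` as its built-in handler); the point is that an unbound endpoint behaves exactly as today, while a user script can replace it.

```javascript
// Sketch of the backwards-compatible rebinding idea. All names are
// hypothetical; "blueDialog" stands in for the viewer's built-in handler.
const endpoints = {};
function bind(name, handler) { endpoints[name] = handler; }
function dispatch(name, event, fallback) {
  // Use the user's handler if one is bound, else the viewer default.
  (endpoints[name] || fallback)(event);
}

let shown;
const blueDialog = (e) => { shown = "blue:" + e.message; };

// Unbound endpoint: the event falls through to the stock blue dialog.
dispatch("script_dialog", { message: "Choose" }, blueDialog);
const before = shown;

// A user script rebinds the endpoint to a custom widget.
bind("script_dialog", (e) => { shown = "custom:" + e.message; });
dispatch("script_dialog", { message: "Choose" }, blueDialog);
```

After rebinding, the same in-world event drives the custom widget instead, with no change needed on the scripted object's side.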

Sim non-scalability and high speed interactions

One of the problematic areas that has hampered progress in SL is the general issue of speed, efficiency, and latency of operations, which is a very broad area covering many topics. Within that large area, the specific topic of user interaction speed and latency is an important one, and is relevant here.

When Philip Linden said some years ago that (paraphrasing) "SL will in time have all the necessary properties to support the needs of 3D game worlds", he probably assumed that client and server development would soon improve efficiency and reduce interaction latency and so make lag in SL a thing of the past. Unfortunately it hasn't happened yet, despite the rapid increase in broadband speeds and the large improvements in server-side script execution efficiency. While there are many reasons for this, it doesn't help matters that all scripting has to be done server-side.

The main non-scalability in SL is the non-scalability of sims: it is not possible for a sim that is holding a popular event to harness the unused power of (say) 200 idle CPUs in the grid --- SL sims weren't designed in a way that makes that possible. (This is a subject we have examined in great detail in the Architecture Working Group). Since sims cannot be scaled that way, the only solution available at this time is to relieve sims of unnecessary load. This is where client-side scripting has a role to play.

All visual processing in HUD objects constitutes unnecessary load for sims, because its purpose is to generate a private visual in the client and not a publicly-visible visual in the 3D world. Such private processing should be done in the client, with the sim participating only to the extent of reacting to actions and delivering requested information.

Not loading the sim will certainly contribute to reducing interaction latency and lag in general, but the main benefit for the user experience is more likely to come from the vastly faster processing achievable in client-side programming versus the treacle-like speed of server-side HUD scripting. Given that every HUD operation that affects the HUD visuals currently has to make a round trip across the Internet before the display changes, client-side scripting can deliver a reduction in latency of several orders of magnitude --- thousands of times faster.

This can help to bring Philip's vision a little bit closer. The current approach to HUDs cannot.

General client-side scripting and SL-API interactions

In the preceding section we have been talking mainly about client-side programming for HUDs, but it is important to note that this is just a subset of the more general issue of client-side scripting talking to a viewer API that communicates with the SL world. Those client-side scripts that produce widget-type graphics related to SL activities are (loosely) thought of as HUDs, but such scripts could perform important and useful functions without any graphic display at all. A text-to-speech script taking input from the API bindings for chat functions is a good example of that.
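The text-to-speech example shows how a client-side script could be useful with no graphics at all. The sketch below assumes a hypothetical chat-event binding and a stand-in for a real TTS engine; the event shape and field names are inventions for illustration.

```javascript
// Hypothetical: a script with no UI at all, bound to a chat-event
// endpoint, forwarding incoming avatar chat to a text-to-speech engine.
// The event shape and field names are illustrative inventions.
const spoken = [];
const speak = (text) => spoken.push(text); // stand-in for a real TTS call

function onChat(event) {
  // Voice only avatar speech; ignore object chatter.
  if (event.sourceType === "avatar") {
    speak(event.name + " says " + event.message);
  }
}

onChat({ sourceType: "avatar", name: "Alice", message: "hello" });
onChat({ sourceType: "object", name: "Vendor", message: "buy now" });
```

Such a script is a "HUD" only in the loosest sense: it consumes SL events through the API bindings but never draws anything.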

If this kind of scheme were to be adopted, what kind of events could provide useful bindings for client-side scripting?

Lacking clairvoyance to know what script developers might want to do, the best approach would probably be to send ALL incoming events to API callback endpoints, and simply bind existing viewer handlers to those endpoints by default. This delivers total backwards compatibility, total flexibility, and at the same time complete uniformity.
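The "send ALL events to endpoints" approach amounts to a routing table whose initial contents are the viewer's stock handlers. The sketch below uses invented event names and handlers; the point is that an unmodified viewer behaves exactly as today, because the defaults are just the initial bindings.

```javascript
// Sketch of "route every event through an endpoint, with viewer defaults
// as the initial bindings". All event names and handlers are illustrative.
const log = [];
const viewerDefaults = {
  chat: (e) => log.push("chat-window:" + e.message),
  teleport_offer: (e) => log.push("offer-dialog:" + e.from),
};
const bindings = { ...viewerDefaults }; // fully backwards compatible

function route(type, event) {
  const h = bindings[type];
  if (h) h(event);
}

route("chat", { message: "hi" });                    // stock behaviour
bindings.chat = (e) => log.push("tts:" + e.message); // user script rebinds
route("chat", { message: "hi again" });              // new behaviour
```

Uniformity falls out for free: every event type, present and future, is handled the same way, whether or not a user script has touched it.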

General client-side scripting instead of specific UI customization

One of the contributors to the UXIG discussion made a particularly interesting observation: "Couldn't HUDs be just a part of a general UI customisation paradigm?". This was interesting because it hinted at a problem: the viewer UI developers are currently working to a paradigm that is quite complex yet at the same time is very inflexible and underpowered. It implements predefined panels reasonably well, but extending them beyond reskinning is problematic, and giving them new behaviours is impossible without deep viewer programming.

A generic event/action API, to which user scripts can be bound and to which all incoming SL events are delivered, would entirely remove these limitations on the appearance and behaviour of UI elements, and hence would transform the current UI customization problem into a user-scripting problem. While that change has little merit if all you want is slightly customized UI panels, it makes a world of difference to anyone with greater ambitions.

Coda

Lastly, and slightly philosophically, the icing on the UI cake resulting from the changes outlined above would be that scripted UI facilities will trigger emergent development, an evolutionary process that is very likely to go beyond what the original designers ever envisaged. This is highly frightening to Luddites ... but we're not, right? :-)