S3 based viewer map
- 1 Viewer World Map
- 1.1 Objectives
- 1.2 New Implementation
- 1.3 Notes : How The Old World Map Worked in the Viewer
Viewer World Map
We want to use the S3 JPEG images directly in the viewer for several reasons:
- Improve map browsing performance: the JPEG file for each region tile is smaller than the j2c one (20K vs 65K). Since the web map system uses its own tiling and subresolution stitching mechanism, we don't really benefit from the subresolution features of JPEG2000, so the overhead isn't worth it, especially for very small images like region tiles (256x256 pixels).
- Simplify, simplify: avoiding duplication of data prevents all the ills that come with it (cost of computing, cost of storage, inconsistency bugs, etc...)
- Reduce LL's server load: using Amazon's S3 to store and retrieve the JPEGs instead of hitting our own asset server will take some load off our own servers. Always a good thing.
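To make the tiling concrete, here is a minimal sketch of how a tile name could be derived from a region's grid coordinates and a mipmap level. The naming scheme, the base URL, and the rule that a level-n tile covers a 2^(n-1) x 2^(n-1) block of regions are illustrative assumptions, not a description of the actual S3 layout:

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Hypothetical sketch: build a tile URL from a region's grid coordinates
// and a mipmap level. Assumes an illustrative "map-<level>-<x>-<y>-objects.jpg"
// naming convention where, at level n, each tile covers a 2^(n-1) x 2^(n-1)
// block of regions, so coordinates are snapped down to the tile origin.
// The host name is a placeholder, not the real S3 bucket.
std::string tileURL(int level, std::uint32_t gridX, std::uint32_t gridY)
{
    const std::uint32_t span = 1u << (level - 1);  // regions per tile side
    const std::uint32_t x = (gridX / span) * span; // snap to tile origin
    const std::uint32_t y = (gridY / span) * span;
    std::ostringstream url;
    url << "http://map.example.com/map-" << level << '-' << x << '-' << y
        << "-objects.jpg";
    return url.str();
}
```

At level 1 the tile is the region tile itself; higher levels snap several regions to one tile, which is what lets the viewer zoom out without stitching tiles locally.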
The new code design makes 2 fundamental changes compared to the old one:
- Separate the image tiles from the region info completely: this was required by the use of the S3 tiles anyway and is the fundamental cause of the performance improvement when zooming out
- Migrate the various vectors and lists of items from the LLWorldMap instance to their respective LLSimInfo: this means far less data is iterated through per frame and is the cause of the performance improvement when roaming the map fully zoomed in
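As an illustration of the first change, tiles can be keyed by grid coordinate while region info stays keyed by region handle. The sketch below assumes the common convention that a 64-bit handle packs the region's global position in meters (x in the high word, y in the low word) and that regions are 256m on a side; treat both as assumptions:

```cpp
#include <cstdint>
#include <utility>

// Sketch of the decoupling: tiles are keyed by grid coordinate, while
// region info stays keyed by region handle. Assumes (for illustration)
// that a 64-bit handle packs the region's global position in meters,
// x in the high 32 bits and y in the low 32 bits, with 256m regions.
std::pair<std::uint32_t, std::uint32_t> handleToGrid(std::uint64_t handle)
{
    const std::uint32_t metersX = static_cast<std::uint32_t>(handle >> 32);
    const std::uint32_t metersY = static_cast<std::uint32_t>(handle & 0xFFFFFFFFu);
    return { metersX / 256, metersY / 256 };  // grid coordinates
}
```

With this mapping, a tile can be requested for any on-screen grid cell without first asking the server for that region's sim info.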
Some elements of the design are unchanged:
- There's one instance of LLWorldMap, a singleton that holds the access to the mipmap, sim map and all items. This is the "Model" of the data.
- The LLWorldMapView displays the LLWorldMap. This is the "View" of the data.
- The LLFloaterWorldMap handles the clicks and buttons of the panel. This is the "Controller" of the data.
The other big difference from the old code is that each class now has a public API that is unit tested (ZOMG!), and all data members are private. The requests issued by this code are:
- Requesting region blocks
- Requesting items
- Requesting people counts from the spaceserver
- Requesting S3 tiles
Note on Item Requests
For a while, we believed that the way the viewer was requesting data for items like events was wasteful: it requests the list of items for the entire grid while we are only interested in a very small subset of those (the ones visible on screen).
We ran a set of experiments, requesting either only what was required or everything, and checking the size of the vectors holding those data (i.e. the total number of items). Here are some of the results.
Requesting items while staying in place, zooming out and roaming: the Sim size (number of regions hit) changes but the other data don't; "land for sales" returns somewhat arbitrary values (long delays sometimes...). PG events requests were also commented out to prove the point that asking with a nil handle was actually getting the data:
Draw frame : sizes : Sim = 124, tele hubs = 4973, info hubs = 30, PG events = 0, mature events = 733, land for sales = 15590
Draw frame : sizes : Sim = 394, tele hubs = 4973, info hubs = 30, PG events = 0, mature events = 733, land for sales = 23413
Draw frame : sizes : Sim = 913, tele hubs = 4972, info hubs = 30, PG events = 0, mature events = 733, land for sales = 23134
Draw frame : sizes : Sim = 169, tele hubs = 4972, info hubs = 30, PG events = 0, mature events = 733, land for sales = 22855
Draw frame : sizes : Sim = 14, tele hubs = 4972, info hubs = 30, PG events = 0, mature events = 733, land for sales = 13758
Same with PG events requests enabled; note that there's no change in the other data:
Draw frame : sizes : Sim = 50, tele hubs = 4970, info hubs = 30, PG events = 1778, mature events = 733, land for sales = 22013
Issuing item requests only for the visible sims:
Draw frame : sizes : Sim = 2, tele hubs = 44050, info hubs = 289, PG events = 12142, mature events = 9884, land for sales = 107433
This last run was actually disastrous perf-wise:
- we accumulate all data several times in the vectors (baaaad)
- we ask for the same thing over and over again (i.e. the region handle is not used for those data)
- perf sucks terribly as a result of iterating through huge vectors...
The results show that, actually, sending a single request for the whole grid is quite efficient, since the amount of data per item is relatively small (except for "land for sales", which apparently doesn't return consistent results between runs).
We decided not to change the way we request and get those data. It's actually OK for all of them except maybe "land for sales", which is indeed very big. However, moving those vectors from the LLWorldMap instance to their respective LLSimInfo instances did reduce the number of iterations through those vectors a great deal and improved performance when those items were displayed (check boxes on).
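A simplified sketch of that migration, with hypothetical names (SimRecord and ItemInfo stand in for LLSimInfo and the real item types): once each sim record owns its item vectors, the draw pass only touches the items of the sims actually on screen:

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// Illustrative stand-ins; not the viewer's real types.
struct ItemInfo { int posX; int posY; };

struct SimRecord                    // stand-in for LLSimInfo
{
    std::vector<ItemInfo> mEvents;      // items now live with their sim
    std::vector<ItemInfo> mLandForSale;
};

// Count the items the draw pass touches when only visible sims are walked,
// instead of iterating one grid-wide vector per item type.
std::size_t visibleItemCount(const std::map<std::uint64_t, SimRecord>& sims,
                             const std::vector<std::uint64_t>& visibleHandles)
{
    std::size_t count = 0;
    for (std::uint64_t handle : visibleHandles)
    {
        auto it = sims.find(handle);
        if (it != sims.end())
        {
            count += it->second.mEvents.size() + it->second.mLandForSale.size();
        }
    }
    return count;
}
```

The per-frame work now scales with the items in the visible sims, not with the grid-wide totals (e.g. the ~23,000-entry "land for sales" vector above).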
User Interface Design
The map user interface needs to be modified to take into account the new mapserver production:
- Since there's no production of "terrain" tiles, we need to suppress the "Terrain" tab
- Since we do not produce the overall "stitched" world anymore (and can't since the tiles are not produced on a single server anymore either), we need to suppress the background "blue-ish" world map when zooming out
- Since we are not requesting image data per region anymore (we get them from S3 at the correct subresolution), we can reduce the number of SIM info requests and improve the performance dramatically by avoiding hitting all regions in sight when zooming out
This introduces the following UI modifications:
- Suppress the "Terrain" and "Objects" tabs and replace them with a single panel (with no title)
- Avoid displaying overlaid information (People, Infohub, Events, etc...) past a threshold resolution, and disable (gray out) the checkboxes so as to give the user an indication of what's happening
- Allow the slider to zoom out to the whole extent of the grid, whatever starting point was used when opening the map
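The graying-out behavior can be sketched as a simple threshold test; the 96 pixels-per-region value below is an illustrative assumption, not the viewer's actual threshold:

```cpp
// Sketch: overlay items (people, infohubs, events, ...) are only drawn,
// and their checkboxes only enabled, when a region covers enough screen
// pixels. The threshold value is hypothetical.
bool overlayItemsVisible(float pixelsPerRegion)
{
    const float kMinPixelsPerRegion = 96.0f; // illustrative threshold
    return pixelsPerRegion >= kMinPixelsPerRegion;
}
```

The same predicate would drive both the rendering of the overlay items and the enabled state of the right-panel checkboxes, so the UI stays consistent with what is actually drawn.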
Map zoomed out:
- Most of the grid is now visible
- Right panel check boxes are disabled
- "green dots" (people) are not rendered though the "people" check box is checked (though disabled)
- There's no "Terrain / Objects" tabs available anymore
- We are not using a "blue" world map anymore (we actually don't need one since we can download the tiles for the world really fast)
Map zoomed in:
- The "green dots" (people) are now visible
- The check boxes in the right panel are now enabled
- The region names are rendered as before (this is unchanged)
- No "Loading" message is displayed when zooming in or out since the tiles are fetched relatively fast and that previously fetched (and cached) tiles of different resolutions are used to fill the temporarily missing tiles (like on the web application which doesn't have a "Loading" message either)
Note: It is likely that, mid to long term, the viewer map will be replaced by the new SLURL map web application running within the viewer in a WebKit panel. This is not possible today: WebKit is not yet available within the viewer, and some region information (most importantly, people "green dots") is currently not displayed in the web application. However, there's a trend in that direction from the search team, which is displaying more and more region information. Eventually, the web app will catch up and even do more than the viewer map. It won't make much sense at that point to maintain 2 competing map user interfaces. For this reason, it's certainly sound not to start an overhaul of the in-viewer map code and to focus instead on the changes required to make the current viewer map compatible with the new mapserver architecture.
Suggested Improvements (feedback from people who used the "dogfood" versions):
- The region boundaries are not easily visible: since we fixed the black-borders bug, there's no feedback on where the borders of the regions are. A proposal (Infinity and Zero) is to display a square boundary on hover so that the map stays clean.
- Following the suggestion above, we can also display more information per region in the lower left corner on hover (rather than on a tooltip): e.g. "damage allowed", "stream media", etc...
- The mini map now seems a little poor and slow compared to the map. What about using the S3 tiles there too for the background? (to be discussed with Coco).
Related Unit Tests
List of unit tests:
- Worldmipmap : indra/newview/tests/llworldmipmap_test.cpp
- Worldmap : indra/newview/tests/llworldmap_test.cpp
Notes : How The Old World Map Worked in the Viewer
This is provided for the benefit of developers comparing with trunk or familiar with the old code architecture. If you're in neither category, you can safely ignore this part.
The old world map doesn't use the stitched hierarchy of tiles to zoom out. Instead it uses the following strategy:
- Display all possible regions: all regions are potentially downloadable; a walk through the entire world could, in theory, fill the local cache with the j2c tiles for every region of the world. At 60K per tile and 30,000 regions, that's a total of 1,800,000K = 1.8 GB of data...
- Reduce the boost level (download priority) of regions outside the visible view to 0: that's how non-visible tiles are taken out of the download list
- Zoom out using OpenGL texture mapping: no fancy use of the mapserver computed sub resolution tiles like we do in the webapp.
- When zoomed out too much, use the unique stitched "world map": that's triggered when the rendered per region tile is dropping below a certain size in pixels
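The worst-case cache figure quoted above is straightforward to check:

```cpp
// Worked check of the worst-case cache estimate: 60K per j2c tile
// times 30,000 regions gives 1,800,000K, i.e. roughly 1.8 GB.
long long worstCaseCacheKB(long long kbPerTile, long long regionCount)
{
    return kbPerTile * regionCount;
}
```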
This is very different from the new architecture. Changing this required a complete rewrite of the map.
There's quite a bit of code to handle what are called "layers". Those are actually 3 levels of world rendering that, initially, were supposed to be combined or switched. Currently only 2 background layers are used (terrain and objects), corresponding to the 2 tabbed panels in the UI, plus one overlay layer corresponding to the "land for sale / land for auction" map. That last one is alpha-composited on top of the terrain or objects background.
The new implementation reduces this to 1 layer (the "land for sale" overlay) and gets rid of the notion of layer altogether. The background image (the S3 hierarchy of JPEG tiles) is handled in a completely new structure called LLWorldMipmap.
All the other information that is truly "layered" on top of the map (users, telehubs, events, etc...) doesn't actually use the layer structure at all but is stored in std::vectors in the LLWorldMap.
There is one LLSimInfo object per region; it's the one holding the handles on the images we read (the so-called layers):
- mMapImageID: holds the UUID for the images we need
- mCurrentImage: the object or terrain image smart pointer
- mOverlayImage: the land for sale image smart pointer
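A minimal sketch of that per-region image record, using stand-in types for LLUUID and the viewer's texture smart pointers (both are assumptions for illustration; the member names come from the list above):

```cpp
#include <memory>

struct ImageID { unsigned char data[16]; };     // stand-in for LLUUID
using ImagePtr = std::shared_ptr<struct Image>; // stand-in for the texture smart pointer

// Per-region image record, as described above.
struct SimImages
{
    ImageID  mMapImageID[2]; // UUIDs for the images we need (terrain, objects)
    ImagePtr mCurrentImage;  // the objects or terrain image smart pointer
    ImagePtr mOverlayImage;  // the "land for sale" image smart pointer
};
```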
The way the sim info is loaded is responsible for part of the map's slow responsiveness. Here's how things currently work:
- landing somewhere generates 9*(8*8) = 576 sim block requests (yikes!)
- wait for the requests to come back and handle the replies: observed responses show between 6 and 14 sims coming back at a time. Note that this transaction model uses UDP, not HTTP.
- once the sim info has been received, we get the UUID for the j2c tile and request it
- then we wait for the tile to come back before being displayed
Even with out-of-view tiles receiving a boost level of 0 (i.e. not requested), we get sim info for sims *outside* the visible area (since we asked per blocks of 8x8 sims) before we get anything for the visible (and crucially, current) sims. This explains why it takes so long for the map to show anything when triggered.
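The request count mentioned above comes from a 3x3 neighborhood of blocks, each block covering 8x8 sims:

```cpp
// Worked check of the request count: landing triggers a 3x3 grid of
// map blocks, each block covering 8x8 sims, hence 9 * 64 = 576 sims.
int simsRequestedOnLanding(int blocksPerSide, int simsPerBlockSide)
{
    return (blocksPerSide * blocksPerSide) * (simsPerBlockSide * simsPerBlockSide);
}
```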
Switching crudely to JPEG, we first injected the S3 request code right where we initially requested the j2c, so our performance is still hampered by this architectural issue.
The solution was to decouple the map tiles from the sim info altogether. This is what we've done with LLWorldMipmap.
This code is concerned with rendering the minimap. We're not planning to change it, and it shouldn't be using the j2c region tiles, but it happens to use some LLWorldMapView methods (e.g. LLWorldMapView::drawAvatar()).
If time allows, we propose to use the same tiles for the minimap as for the world map. Advantages:
- no more "gray blobs": currently, the minimap shows all objects as "gray blobs". In heavily constructed areas, even if those constructions are under an open sky, this gray shortcut creates maps that are almost entirely gray and, therefore, not very engaging exploration tools.
- view neighboring regions: it would be very cheap to request the set of 3x3 tiles centered on the region the resident is in.
- instant view: right now, the minimap waits for things to be downloaded before showing anything, creating a "discovering" cone as the resident moves around. Getting the tiles from S3 would make that unnecessary.
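The neighboring-regions idea amounts to fetching the 3x3 set of tiles centered on the resident's region; a sketch:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Sketch of the "view neighboring regions" idea: enumerate the 3x3 set
// of tile grid coordinates centered on the region the resident is in.
std::vector<std::pair<std::int64_t, std::int64_t>>
neighborTiles(std::int64_t gridX, std::int64_t gridY)
{
    std::vector<std::pair<std::int64_t, std::int64_t>> tiles;
    for (std::int64_t dy = -1; dy <= 1; ++dy)
        for (std::int64_t dx = -1; dx <= 1; ++dx)
            tiles.emplace_back(gridX + dx, gridY + dy);
    return tiles;
}
```

Each coordinate pair would then be turned into a tile request; tiles for nonexistent regions simply come back missing (the "sea of blue" case noted below).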
Issues with this idea:
- not up to date map: new objects won't show on the minimap immediately
- missing regions: if for whatever reason the tile corresponding to the region is missing (new region, moved region), a sea of blue would show in lieu of the region. Note that we could switch to the old code if we detect that we don't get the region's tile.