Talk:Texture Cache
{{Open Source Talk Page}}
The discussion below is a continuation of [https://lists.secondlife.com/pipermail/sldev/2007-March/thread.html#1125 a long conversation on the sldev list in March, 2007], which [[SLDev|per the newly created SLDev guidelines]] was moved to this talk page.
Related discussions:
* [[Talk:VFS]]
== GIF ==
We can safely avoid the GIF format. It's 256 colors only, and it uses the Unisys-patented LZW algorithm (the original patent covering GIF expired, but Unisys re-patented it on the strength of some "improvements" to the algorithm). Unisys suddenly claimed royalties for encoding GIF files, and for that reason alone we shouldn't use it. Source: [http://www.unisys.com/about__unisys/lzw License Information on GIF and Other LZW-based Technologies] -- {{User|kerunix Flan}}
: The last of the patents expired in October 2006, so GIF is no longer patent encumbered. But PNG is still better. Keep in mind that by recompressing the image to cache it you're trading CPU usage for disk space. Compression is usually a lot slower than decompression; just storing uncompressed might be faster overall. In the end, we need to assume nothing and benchmark, benchmark, benchmark. [[User:Seg Baphomet|Seg Baphomet]] 22:45, 23 March 2007 (PDT)
: The GIF format is not suitable. It only supports 256 colors per frame and it does not support semi-transparent pixels. PNG is a superior format. Has anyone considered the time required to re-encode images in the target format? [[User:Strife Onizuka|Strife Onizuka]] 09:42, 24 March 2007 (PDT)
== Databases ==
There appears to be support with APR for a portable interface to common databases, namely Berkeley DB and SQL. [[User:Dzonatas Sol|Dzonatas Sol]] 18:23, 23 March 2007 (PDT)
: APR = [http://apr.apache.org/ Apache Portable Runtime]? If yes, I can't find any interface to common databases in the APR documentation. [[User:Kerunix Flan|Kerunix Flan]] 03:48, 25 March 2007 (PDT)
:: Yes, I noticed some features added in the 1.3 version that is in development. [[User:Dzonatas Sol|Dzonatas Sol]] 13:00, 25 March 2007 (PDT)
== Stats ==
It would be useful to get stats on current and new hardware profiles from LL to get a sense of what people are using and where the focus should lie. [[User:Iron Perth|Iron Perth]] 03:00, 23 March 2007 (PDT)
I plugged a simple CPU clock-tick timer class (based on http://www.codeguru.com/forum/showthread.php?s=&threadid=280008) into the FL-1.13.3.59558 VC Express, XP Pro build of lltexturecache, and recorded some times for index lookup & texture loading in LLTextureCacheWorker::doRead(). Spec: budget Sempron 2800+ on Gigabyte k8vt800, FX5200 128mb graphics, 1gb RAM, 160gb HDD (inexpensive + no RAID, not about to pull the PC apart to get further HDD details), 2x40gb partitions, cache on partition #1, 54% full, in need of a defrag (currently 33% file fragmentation!).
After running the downloaded FL, the cache got purged (any way of re-building the index instead?).
(while re-building a half-decent cache, ReleaseNoOpt build executed from within VC, after several minutes running time & teleporting to around 7-8 regions)
 WARNING: TickTimer::StopWithReporting: Lookup ticker.
 WARNING: TickTimer::StopWithReporting: COUNT: 10200
 WARNING: TickTimer::StopWithReporting: TOTAL (sec): 6.06264
 WARNING: TickTimer::StopWithReporting: AVG (sec): 0.000594377
 WARNING: TickTimer::StopWithReporting: MAX (sec): 0.147985
 WARNING: TickTimer::StopWithReporting: File read ticker.
 WARNING: TickTimer::StopWithReporting: COUNT: 3700
 WARNING: TickTimer::StopWithReporting: TOTAL (sec): 12.8276
 WARNING: TickTimer::StopWithReporting: AVG (sec): 0.00346692
 WARNING: TickTimer::StopWithReporting: MAX (sec): 0.126929
nb. times may be affected by CPU speed variance (calibrated only once, and only over 1 sec).
The other day, I was getting a better 0.001s average on the cached file read.
[[User:Paula Innis|Paula Innis]] 10:53, 25 March 2007 (PDT) (feel free to tidy this up or whatever)
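For reference, a minimal sketch of this kind of tick-timer instrumentation, using std::chrono instead of raw CPU ticks (the class name and report format below are made up for illustration, not the actual TickTimer used for the numbers above):
<pre>
#include <chrono>
#include <cstdio>

// Accumulates wall-clock time over many start/stop cycles and reports
// count, total, average and maximum, much like the log lines above.
class ScopedTickTimer
{
public:
    void start() { mBegin = Clock::now(); }
    void stop()
    {
        double sec = std::chrono::duration<double>(Clock::now() - mBegin).count();
        ++mCount;
        mTotal += sec;
        if (sec > mMax) mMax = sec;
    }
    void report(const char* label) const
    {
        std::printf("%s COUNT: %u TOTAL (sec): %g AVG (sec): %g MAX (sec): %g\n",
                    label, mCount, mTotal, mCount ? mTotal / mCount : 0.0, mMax);
    }
private:
    using Clock = std::chrono::steady_clock;
    Clock::time_point mBegin;
    unsigned mCount = 0;
    double mTotal = 0.0;
    double mMax = 0.0;
};

int main()
{
    ScopedTickTimer lookupTimer;
    for (int i = 0; i < 10000; ++i)
    {
        lookupTimer.start();
        // ... the index lookup being measured would go here ...
        lookupTimer.stop();
    }
    lookupTimer.report("Lookup ticker.");
    return 0;
}
</pre>
Using a monotonic clock sidesteps the CPU-speed-variance caveat mentioned above.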
== [http://en.wikipedia.org/wiki/TGA TGA] ==
I can't find it in the spec, but I think this format guarantees that images of a given size will always have the same file size (without RLE, of course). This may help with some optimizations.
Additionally, the "developer area" of the TGA file format could be used to store some useful information.{{unsigned|kerunix Flan}}
:Converting images into the TGA format for cache storage has the advantage that the client already supports rendering TGA images. It would be one of the easiest solutions. Images with the same dimensions would all have the same file size, but otherwise sizes would differ. [[User:Strife Onizuka|Strife Onizuka]] 02:22, 24 March 2007 (PDT)
== Faster Formats ==
Contribution from Tofu Linden from the email discussion:
RLE is cheap but also not very effective at all for many, many textures; wherever I might have used RLE in ye olden days I'd now use trusty old zlib or blingy liblzf, the latter of which is more or less on the same order of speed as RLE but a LOT more effective on data (images) with redundancy not necessarily manifesting as runs of pixels.
[[User:Iron Perth|Iron Perth]] 02:30, 24 March 2007 (PDT)
:And as said on the mailing list, the destructive effect of JPEG2000 probably killed any chance of getting a good compression rate using RLE. RLE is fast and the only compression supported by the Truevision TGA specification. If we don't use it (probably because it's useless) then let's not use compression at all. One of the advantages of using RLE, as described in the specification, is that we don't have to decompress anything to read the header, developer area, and footer. [[User:Kerunix Flan|Kerunix Flan]] 08:02, 24 March 2007 (PDT)
::I think I missed the bit about "the destructive effect of JPEG2000" on the list. Currently lossy compression is only used when the source is a JPEG; BMP and TGA sources are compressed losslessly. The point of using a format like JPEG2000 is that it achieves compression ratios superior to other formats for the same image quality. To achieve those high ratios, modern codecs sacrifice CPU time. The reason for the push for higher ratios is to reduce the amount of storage and bandwidth required for transmission. If SL moves away from JPEG2000 towards a less CPU-intensive, lower-ratio codec, the asset servers will be serving more data per asset request. That seems like a step backwards, not forwards. As an intermediate format to avert the costly re-decoding of an image, it makes sense to pick a faster format even though it will be costly in disk space; and as long as the disk is not horribly fragmented, it has the potential to be faster than decoding. A format needs to fit these criteria to be considered for this task:
::#Lossless
::#Supports a variable number of channels
::#Fast to encode, fast to decode
::#Preferably GPL compatible
::This excludes GIF & JPEG. It may also exclude PNG, because most implementations treat anything that is fully transparent as rgba(0,0,0,0); white images develop black outlines at low resolutions (not lossless). [[User:Strife Onizuka|Strife Onizuka]] 21:08, 24 March 2007 (PDT)
:::We talked about at least 2 kinds of local cache:
:::#Compressed texture cache: obviously, this should just be storage for the JPEG2000 sent by the sim. Why re-encode (and lose quality)?
:::#Uncompressed texture cache: storing some textures already decoded, so the client doesn't have to lose time decoding them when needed. For me, the best format is TGA. [[User:Kerunix Flan|Kerunix Flan]] 03:31, 25 March 2007 (PDT)
::::zlib on top of TGA works for me. Speed and ease of decoding. [[User:Strife Onizuka|Strife Onizuka]] 18:24, 25 March 2007 (PDT)
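To make the zlib-over-TGA suggestion concrete, here is a minimal sketch of compressing a decoded image buffer before writing it to the cache and expanding it again on a hit (assumes zlib is available; the buffer handling and names are illustrative, not viewer code):
<pre>
#include <zlib.h>
#include <cassert>
#include <vector>

// Compress a raw (already decoded) image buffer before caching it on disk.
// Z_BEST_SPEED keeps the encode cost low, trading some compression ratio.
std::vector<Bytef> compressForCache(const std::vector<Bytef>& raw)
{
    uLongf destLen = compressBound(raw.size());
    std::vector<Bytef> out(destLen);
    int rc = compress2(out.data(), &destLen, raw.data(), raw.size(), Z_BEST_SPEED);
    assert(rc == Z_OK);
    out.resize(destLen);
    return out;
}

// Expand a cached buffer again; rawSize would come from the cache entry header.
std::vector<Bytef> decompressFromCache(const std::vector<Bytef>& packed, size_t rawSize)
{
    std::vector<Bytef> out(rawSize);
    uLongf destLen = rawSize;
    int rc = uncompress(out.data(), &destLen, packed.data(), packed.size());
    assert(rc == Z_OK && destLen == rawSize);
    return out;
}

int main()
{
    std::vector<Bytef> raw(64 * 64 * 4, 0x7F);              // dummy 64x64 RGBA image
    std::vector<Bytef> packed = compressForCache(raw);
    std::vector<Bytef> back = decompressFromCache(packed, raw.size());
    assert(back == raw);
    return 0;
}
</pre>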
http://en.wikipedia.org/wiki/Huffman_coding may be worth a look. It may work well with 'jpeg artifacts'. [[User:Paula Innis|Paula Innis]] 13:03, 25 March 2007 (PDT)
:Take a look at http://en.wikipedia.org/wiki/Huffman_coding#Applications. See DEFLATE there? That's what .zip uses. You can get it with zlib. [[User:Dale Glass|Dale Glass]] 13:19, 25 March 2007 (PDT)
:And as http://www.zlib.net says, 'Note that zlib is an integral part of libpng'. There is also .DXT to consider. [[User:Paula Innis|Paula Innis]] 14:21, 25 March 2007 (PDT)
::Right. I said that because I don't get why you're mentioning Huffman then. It was implicitly brought up by Tofu ("I'd now use trusty old zlib") and by Seg Baphomet and Strife Onizuka (who mentioned PNG) before you added that. DXT, if you mean S3 Texture Compression, is lossy and patented, which almost certainly makes it unacceptable.
:::AFAIK, the S3 patent only requires registration and the payment of royalties for graphics driver development, not for applications and encoders, and (again AFAIK) S3 has issued a general license to use S3TC/DXTC for all OpenGL drivers. The HUGE advantage of S3TC is that almost all 3D cards and onboard chipsets support it natively. You can upload S3TC textures directly to OpenGL without having to decompress them, and OpenGL uses the compressed version, saving texture space (DXT1 has a compression ratio of 6:1 when compressing 24-bit RGB textures). In many situations, using S3TC-compressed textures is even faster than uncompressed ones, because the decompression is extremely simple and storing the compressed version in texture memory saves I/O bandwidth on the graphics board. It is definitely true that DXT is lossy, but so is the initial JPEG2000/wavelet compression, and the advantages of S3TC may be worth the lossiness. Another advantage of S3TC is that the texture data size (body) for textures with power-of-two dimensions (all of SL's, AFAIK) is also a power of two, making it possible to use efficient storage mechanisms for the local cache, e.g. the buddy system (http://en.wikipedia.org/wiki/Buddy_memory_allocation), to store all textures in a single pre-allocated file. --[[User:Dreamer Miquel|Dreamer Miquel]] 03:18, 31 August 2007 (PDT)
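For illustration only, a hedged sketch of what uploading an already-compressed DXT1 mip level to OpenGL looks like (assumes the EXT_texture_compression_s3tc extension is present; on Windows the glCompressedTexImage2D entry point has to be fetched through the extension-loading mechanism rather than taken from the headers):
<pre>
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>   // for GL_COMPRESSED_RGB_S3TC_DXT1_EXT

// Upload one pre-compressed DXT1 mip level directly, skipping any CPU-side
// decompression; the driver keeps the texture compressed in VRAM.
void uploadDXT1(GLuint tex, GLsizei width, GLsizei height,
                const void* data, GLsizei size)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D,
                           0,                                // mip level
                           GL_COMPRESSED_RGB_S3TC_DXT1_EXT,  // 6:1 vs. 24-bit RGB
                           width, height,
                           0,                                // border (must be 0)
                           size,                             // byte size of 'data'
                           data);
}
</pre>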
::Excuse me, Strife, but [http://wiki.secondlife.com/w/index.php?title=Talk%3ATexture_cache&diff=16150&oldid=16148 why are you putting Paula's name] on something [http://wiki.secondlife.com/w/index.php?title=Talk%3ATexture_cache&diff=16130&oldid=16129 I wrote]? I see I forgot to sign again though, grr. [[User:Dale Glass|Dale Glass]] 18:43, 25 March 2007 (PDT)
:::Sorry about that, I really thought I had gotten the right name on it. Wasn't on purpose x_x. [[User:Strife Onizuka|Strife Onizuka]] 20:14, 25 March 2007 (PDT)
Umm, I guess I did miss Tofu's mention of zlib in amongst all that RLE stuff (which triggered my suggestion of Huffman trees). Not awake enough to register it when I read it, I guess. [[User:Paula Innis|Paula Innis]] 07:30, 26 March 2007 (PDT)
== Current Caching ==
Both Tofu and Steve have made the point that they feel cache hit rate issues are a priority.
:# Are they talking about the on-disk texture cache or the subset of textures that resides in video memory? The page topic suggests the former, the discussion seems to be about the latter. --[[User:Dreamer Miquel|Dreamer Miquel]] 03:51, 31 August 2007 (PDT)
Contribution from Steve Linden from the email discussion:
2. The way we calculate texture pixel coverage (and therefore desired discard level) is as follows:
a. When anything using a texture is rendered, set its pixel coverage to MAX(current pixel coverage, rendered pixel coverage)
b. Once a frame, reduce the current pixel coverage of all textures by 10%.
This way, everything rendered recently has an accurate pixel coverage, and everything not rendered degrades over time. gImageList.mForceResetTextureStats merely tells the code "assume that all textures are no longer visible" and resets the pixel area of all textures to 0. As soon as they are rendered their pixel coverage is updated.
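A minimal sketch of the bookkeeping described in (a) and (b) above; the type and function names are illustrative, not the viewer's actual classes:
<pre>
#include <algorithm>
#include <vector>

// Illustrative only: tracks per-texture pixel coverage as described above.
struct TextureStats
{
    float pixelCoverage = 0.f;

    // (a) called whenever something using this texture is rendered
    void onRendered(float renderedPixelArea)
    {
        pixelCoverage = std::max(pixelCoverage, renderedPixelArea);
    }
};

// (b) called once per frame for every known texture
void decayCoverage(std::vector<TextureStats>& textures)
{
    for (TextureStats& t : textures)
        t.pixelCoverage *= 0.9f;   // textures not rendered this frame degrade over time
}
</pre>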
:# Doesn't this reduce the pixel coverage of all textures to zero after - at most - a few minutes, if they aren't rendered? Let's say a computer does 50fps. In 5 minutes, it does 15000 frames (50fps * 300 seconds). In this time, the pixel coverage degrades by the factor 0.9^15000 = 4.34E-687, which is beyond float and double precision. Hence, after this time, the coverage of all these textures would be rounded to zero. If I'm correct, this means that ALL textures that are not used for 5 minutes would be discarded as soon as a sweep is performed. --[[User:Dreamer Miquel|Dreamer Miquel]] 03:33, 31 August 2007 (PDT)
3. Even if a texture's pixel coverage gets set to 0, we do not immediately discard the texture data. We only discard the data if we need room for more textures (in which case we discard the textures with the lowest priority), or if no object in view is using the texture *and* 30 seconds have elapsed.
:Does that mean that, even if we have a lot of VRAM available, the texture will be removed from cache if we don't see it for 30s? And then, moving our avatar in the sim will make the client re-decode the texture from cache to load it into VRAM again? Why not discard the texture only when we're out of memory? (''please delete/replace this comment by the answer'') [[User:Kerunix Flan|Kerunix Flan]] 08:18, 24 March 2007 (PDT)
4. Tateru is absolutely correct in that caching textures to disk mostly just saves on network bandwidth. Unless you have a slow network connection, it can be almost as fast to load textures across the network as from a disk, especially if that disk is not especially fast or is fragmented. Decoding the textures takes quite a bit longer. However, if you have multiple CPUs and enable Client > Rendering > Run Multiple Threads, the decoding will be done on the second CPU and go much faster.
:I can't believe loading a texture from a remote cache isn't much slower than loading it from a local cache. And if you add the usual packet loss and high ping (for non-US users), it's *certainly* much slower. [[User:Kerunix Flan|Kerunix Flan]] 08:18, 24 March 2007 (PDT)
He also makes some interesting comments about teleporting around and filling up the texture cache. Check out his original email here:
https://lists.secondlife.com/pipermail/sldev/2007-March/001133.html
[[User:Iron Perth|Iron Perth]] 02:44, 24 March 2007 (PDT)
== Current Caching sculptie crunch ==
Currently sculptie makers are using 64x64 textures which are exactly like 32x32 ones (just stretched pixels), and that takes 4x the memory. Aside from the extra texture-creation hassle, there is a bandwidth, memory, and CPU cost attached for both Linden Lab and users...
To stop this behavior, Linden Lab could make an exception for sculpties and never crunch them into a smaller texture to save memory, because we're otherwise wasting bandwidth and texture-crunching CPU time. Fixing it would also mean that exactly as much memory is used as the sculptie creator truly needs, not 4x.
To qualify for the non-cache-crunching status, a sculptie should already be 32x32 max and should be used as a sculptie when first loaded by the client, not as a regular texture.
Alternatively, some textures could have a "crunch me first" or "you can crunch me more" flag, a voluntary measure by the creator or owner to make things smoother for more important textures (rather than a "don't crunch me" flag, which everyone would selfishly set all the time).
The recent ability to upload sharp-edge sculpties correctly (thanks for the fix!) has made this issue all the more important: precision sculpties are finally being uploaded, and they look uglier when level of detail truncates their details.
P.S.: the average level of detail never allows me to see my roach sculptie as 32x32 no matter how close I am; it is irritating that people see a potato-type shape with triangles coming out unless I tell them to adjust detail levels in preferences... this is why the wasteful 64x64 format (and higher, which *does* get crunched less) is staying.
P.P.S.: the longer you delay this issue, the more 64x64-or-larger legacy sculpties will permanently slow down the system...
[[User:Contagious Republic]] 14:40, 11 March 2008
== Home Cache ==
If a resident has a home, why not have a separate texture cache for it, a ''Home Cache''. This cache would only be cleared explicitly and would be accessed first by the client when the resident logs on or teleports home.
Once populated with textures, each logon/teleport to home would not involve network traffic, as the textures would be available locally, on the client.
[[User:Benja Kepler|Benja Kepler]] 02:52, 24 March 2007 (PDT)
:A similar idea would be to have a hit count on textures. The hit count can be kept around much longer than the texture data and would be used to indicate which textures to keep in the cache longer. [[User:Dzonatas Sol|Dzonatas Sol]] 12:53, 24 March 2007 (PDT)
::We probably don't want to invent our own caching system, since this type of thing gets studied a lot. Looking around at what other projects are doing (e.g. PostgreSQL, Linux), I found this replacement for the LRU scheme used today: [http://www.vldb.org/dblp/db/conf/vldb/vldb94-439.html 2Q cache system]. This balances hit frequency and hit recency to determine whether to eject something from the cache. -- [[User:Rob Linden|Rob Linden]] 23:34, 24 March 2007 (PDT)
:::I agree. Trying to outsmart all the hundreds of people developing new cache replacement algorithms would be stupid. There are plenty of things out there better than LRU without rolling our own. [[User:Gigs Taggart|Gigs Taggart]] 14:34, 25 March 2007 (PDT)
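For the curious, a simplified in-memory sketch of the 2Q idea referenced above (a FIFO "A1in" for first touches, a ghost list "A1out" of recently evicted keys, and an LRU "Am" for keys seen again). The sizes and types are placeholders, and the full algorithm has more tuning considerations than shown here:
<pre>
#include <list>
#include <string>
#include <unordered_map>

// Illustrative simplified-2Q replacement sketch; not the viewer's cache types.
class TwoQCache
{
public:
    explicit TwoQCache(size_t capacity)
        : mCap(capacity), mKin(capacity / 4), mKout(capacity / 2) {}

    // Returns true on a cache hit; on a miss the caller would (re)load the texture.
    bool access(const std::string& key)
    {
        if (mAm.count(key))                       // hot: refresh LRU position
        {
            mAmList.splice(mAmList.begin(), mAmList, mAm[key]);
            return true;
        }
        if (mA1out.count(key))                    // seen recently: promote to Am
        {
            mA1outList.erase(mA1out[key]);
            mA1out.erase(key);
            insertAm(key);
            return false;                         // the data itself was already evicted
        }
        if (mA1in.count(key))                     // still sitting in the FIFO
            return true;
        reclaim();
        mA1inList.push_front(key);                // brand-new key goes to A1in
        mA1in[key] = mA1inList.begin();
        return false;
    }

private:
    using List = std::list<std::string>;
    using Map  = std::unordered_map<std::string, List::iterator>;

    void insertAm(const std::string& key)
    {
        reclaim();
        mAmList.push_front(key);
        mAm[key] = mAmList.begin();
    }

    void reclaim()
    {
        if (mA1in.size() + mAm.size() < mCap) return;
        if (mA1in.size() > mKin)                  // evict oldest A1in entry, but
        {                                         // remember its key in A1out
            std::string victim = mA1inList.back();
            mA1inList.pop_back();
            mA1in.erase(victim);
            mA1outList.push_front(victim);
            mA1out[victim] = mA1outList.begin();
            if (mA1outList.size() > mKout)
            {
                mA1out.erase(mA1outList.back());
                mA1outList.pop_back();
            }
        }
        else if (!mAmList.empty())                // otherwise evict the LRU of Am
        {
            mAm.erase(mAmList.back());
            mAmList.pop_back();
        }
    }

    size_t mCap, mKin, mKout;
    List mA1inList, mA1outList, mAmList;
    Map  mA1in, mA1out, mAm;
};
</pre>
The point of the ghost list is that a texture only graduates to the long-lived Am queue once it has been requested again after leaving the FIFO, which is what balances recency against frequency.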
== Textures in Inventory ==
A search through the avatar's inventory is a good indication of which textures to keep in the cache longer than others. [[User:Dzonatas Sol|Dzonatas Sol]] 12:50, 24 March 2007 (PDT)
:I can't agree with this. I have many textures that I rarely see in-world. I don't think it merits extra attention. [[User:Strife Onizuka|Strife Onizuka]] 22:59, 24 March 2007 (PDT)
== Plan: Normalization of Texture Usage ==
A method to keep statistics on pixel coverage and hit rates: we use a collection of hit counters such that each represents an individual entry for an associated texture. When any hit counter reaches a threshold, the collection is normalized and the results are stored in the hit-rate fields of each associated entry. The hit-rate fields may store percentages for short-term and long-term storage. The short-term and long-term indicators provide a value for deciding how textures are archived or disposed of. Pixel-coverage rates are optional fields in each entry, used to decide the optimal storage form for the texture data: whether the texture is stored compressed or uncompressed, and whether and to what degree it is downsampled. [[User:Dzonatas Sol|Dzonatas Sol]] 17:05, 24 March 2007 (PDT)
:I think this would result in decreased performance, as a lot of time would be spent incrementing hit counters. [[User:Strife Onizuka|Strife Onizuka]] 22:57, 24 March 2007 (PDT)
::How hit counters are incremented has not been analyzed for implementation. This is just a general plan independent of any specific implementation. [[User:Dzonatas Sol|Dzonatas Sol]] 23:03, 24 March 2007 (PDT)
== Idea: Keep a cache index ==
Keep an on-disk index of cache files. This could be made compact and small enough to fit fully into RAM. The index would make it possible to know whether a file is in the cache, thus avoiding trying to open a file if the current way of storing each texture in its own file is kept.
Additionally, store some metadata with it: file size, md5sum, and a very rough texture approximation (perhaps even limited to a solid color) to have something better than the current grey.
The file size and md5sum would allow checking the on-disk files' integrity, which should avoid any need to clear the cache, as correctness could be verified automatically.
:Isn't this what the texture.cache and texture.entries files do right now? --[[User:Dreamer Miquel|Dreamer Miquel]] 03:47, 31 August 2007 (PDT)
My suggested format for the index is Berkeley DB, which is simple and compact, and is ACID compliant, which should ensure the integrity of the index. [[User:Dale Glass|Dale Glass]] 15:39, 24 March 2007 (PDT)
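As an illustration of the kind of per-texture record such an index might hold (the field names and sizes are hypothetical; a real on-disk layout would also need a format version):
<pre>
#include <array>
#include <cstdint>

// One index record per cached texture: enough to verify the file on disk
// without opening it, plus a rough placeholder colour to show while loading.
struct CacheIndexEntry
{
    std::array<uint8_t, 16> assetUUID;   // key of the cached texture
    uint64_t                fileSize;    // expected size of the cache file
    std::array<uint8_t, 16> md5sum;      // integrity check of the file contents
    std::array<uint8_t, 4>  avgColor;    // very rough approximation (solid RGBA)
    uint32_t                lastAccess;  // unix time, useful for replacement policies
};
</pre>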
:The old VFS used an index; while it didn't keep a hash of the asset, it did track the access date. If you are going to be using a DB on top of the cache, you might as well put the entire cache in a single file and save on file system IO. Why? After all, a VFS is just a database of sorts, and either way you have to search the DB or the FS for the file. There were two problems with the old VFS: it was slow to find data in the index and slow to decode images (the latter isn't the fault of the VFS). A well designed VFS should be able to outperform any FS (because it doesn't have to perform locks). [[User:Strife Onizuka|Strife Onizuka]] 19:12, 24 March 2007 (PDT)
::The filesystem is already an entirely suitable way of keeping data like textures. It does have a few problems, for example searching for a file in a directory may be relatively expensive. This can be easily compensated for with a small DB used only to cover those shortcomings. A full VFS-like thing is a lot more complex; you basically have to write your own mini-filesystem, and as we've seen, that's hard to do well. Berkeley DB is especially nice in that it's ACID - you have a guarantee that the data in it is consistent, unlike with the VFS (unless you go and code that in, but that takes time). Also, by keeping a collection of small items of data of a fixed size you avoid internal fragmentation, which is something you'd have to deal with in a VFS-like approach. If you store textures as files, just use the system defragmenter. [[User:Dale Glass|Dale Glass]] 19:43, 24 March 2007 (PDT)
:::*How would having a Berkeley DB solve the problem of searches being slow? Would it be faster than a std::map.find()? How well does its search algorithm scale?
:::The old VFS did have a bit of a problem with free space fragmentation. Writing a quick and dirty defragmentation utility for it is not a difficult task (at one point I wrote a utility for converting the cache into a tar archive; the output couldn't have fragmented free space). Remember, we already have a VFS; we don't have an integrated Berkeley DB. [[User:Strife Onizuka|Strife Onizuka]] 22:20, 24 March 2007 (PDT)
::::It would be for on-disk storage, of course. Why invent a new format when a decent one already exists? The contents should be small enough to load fully into RAM and map.find() all you want, but at some point it needs to be written to disk. Plus, the VFS was quite awful, which is why it was removed. I don't see any need to reinvent the wheel when you could use an existing library much more easily. Also, assuming you would want to defragment the VFS, when exactly would you do that? On startup, on a 1GB file? [[User:Dale Glass|Dale Glass]] 09:54, 25 March 2007 (PDT)
:::::As I understand it, there is a saved copy of the memory-based cache index (generally of std::map type). However, when corrupted, it is not re-built, and a full purge is performed (cached file integrity is based on file size on disk == file size in index, I believe). Also, when the cache is full, the removal of files is done in a sequential pass through the index, removing files until there is room for a new one (if memory serves me), without care or regard for frequency of use or time of last use (additions which would slow it down). [[User:Paula Innis|Paula Innis]] 11:43, 25 March 2007 (PDT)
::::::I've never seen the old VFS perform a purge. In the past I've seen it accumulate assets for months at a time.
::::::You keep referring to not wanting to invent a new format, but the VFS has already been written; the deed has been done; there is no inventing to be done, just maintenance. Creating a new format is something to always be avoided. The reason the VFS is slow is that its index lookup doesn't scale: it iterates sequentially through the list, and as new items are added to the end of the list the VFS gets slower (because those new items need to be found to be accessed). The methods of searching and indexing the cache are important design features for the cache. Does Berkeley DB do a better job of indexing and searching?
::::::Anyway, Berkeley DB is a no-go; LL can't meet the requirements of the [http://www.oracle.com/technology/documentation/berkeley-db/db/license/license_db.html license] at this time. [[User:Strife Onizuka|Strife Onizuka]] 18:15, 25 March 2007 (PDT)
:::::::The maintenance can amount to rewriting it, though, and I consider it much easier to plug in something that's been around since 1979, and thus incredibly well polished by now, than to maintain a new solution. BDB has very good performance, as it's basically a hash on disk.
:::::::But hrm, I wasn't aware that Oracle got their paws on it. The concept has been around in multiple forms though, so I guess I'll have to check whether any variation with acceptable licensing still exists. [[User:Dale Glass|Dale Glass]] 18:49, 25 March 2007 (PDT)
== Idea: Keep a texture map for every sim ==
Currently, dual core CPUs have a large amount of spare time on the second core while the viewer is not actively fetching textures. I propose a way of using this spare time.
The viewer would create a database per visited sim with a list of textures found at each location. This database would allow the viewer to know what textures can be found in nearby areas, even before the grid has transmitted the data about them. Then, when idle, the viewer could use the spare time to start loading textures from the surrounding areas, so that they are instantly available when the avatar moves.
My suggested way of doing that is using a Berkeley DB database to store the data. A simple way of storing it would be to make the key the position inside the sim, rounded to, for example, multiples of 16 meters. The data stored for that key would be a list of textures used in that area.
For example, the position (120,90,56), when rounded to multiples of 16m, translates to (112,80,48). Under the key "112,80,48" the viewer would store a list of textures used inside that 16x16x16 area (see the sketch after the list below).
This should make several optimizations possible:
* When teleporting to a new area, decoding of textures can be started before the grid has had time to transmit the required information. It should be possible to start loading textures immediately after the user starts the teleport, while the progress bar is still being shown.
* When the avatar moves around, the viewer could start decoding textures likely to be present in locations about which there's no data available yet. This might make flying less painful.
* If a storage method like the VFS, which stores multiple textures in one file, is chosen, then having this data available could make it possible to optimize the placement of the texture data on disk for optimal loading. [[User:Dale Glass|Dale Glass]] 15:39, 24 March 2007 (PDT)
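A small sketch of the key scheme described above, rounding a region-local position down to its 16 m grid cell (the names are illustrative):
<pre>
#include <cstdio>
#include <string>

// Round a region-local position down to the 16 m grid cell that would key
// the list of textures seen in that cell.
std::string cellKey(int x, int y, int z)
{
    const int cell = 16;
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%d,%d,%d",
                  (x / cell) * cell, (y / cell) * cell, (z / cell) * cell);
    return buf;
}

int main()
{
    // (120, 90, 56) -> "112,80,48", matching the example above
    std::printf("%s\n", cellKey(120, 90, 56).c_str());
    return 0;
}
</pre>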
:<br/>Stupid question: Why not just have the sim send the object definitions too? You can glean the UUIDs of the images from the objects, and then you don't have to ask the sim for the objects too. Wait, but isn't that just the same as looking in all directions at the same time (but only rendering what is in front of the camera)? The great part about just asking for the object definitions is that there is already code in place for caching those definitions. No need to hack in an entirely new DB. Silly question: Is this an efficient way to spend sim CPU time, serving assets not in view and in directions other than the one the user is pointing? Downloading textures while the client is in TP is a bad idea; what if the user doesn't have access to the sim (it takes some time for TPs to fail)? It would be a bad thing for, say, SAP to try to TP into Oracle's private sim (knowing full well that the TP would fail) in an attempt to glean insider information via texture pre-caching ([http://www.betanews.com/article/Oracle_Accuses_SAP_of_Massive_Corporate_Theft/1174664992 linky]). [[User:Strife Onizuka|Strife Onizuka]] 19:34, 24 March 2007 (PDT)
::Sorry, I don't think I succeeded at getting the idea across. It wouldn't involve requesting anything extra from the sim. The idea is that the viewer would compile a database of texture usage inside sims you visited, so when you started teleporting to a sim, it could look in its database and start preloading images from its '''cache''' that are likely to be found in the place you're teleporting to. Basically, instead of waiting for the sim to tell you what's there and then loading the images, it would load from the cache things that are likely to be there (as sims don't change all that often), and when the sim does finally give you the data, you'd already have a part of the images decoded and ready to show. If you visited a place in the past, I think there should be a very good chance of mostly correctly guessing the textures needed to render it.
::The SAP problem wouldn't happen because the mechanism would work based on information you previously got, so you'd have to have been to the place before for it to work. [[User:Dale Glass|Dale Glass]] 19:56, 24 March 2007 (PDT)
:::Ahhh, it's a passive solution. What you want could be achieved without a special DB; the client already collects this information in the sim-object cache. For TPing, it could be done quite easily. The viewer just needs to load the data from the object cache and render it. This would cause the appropriate functions to be called to decode and request missing assets. It doesn't need to render it to the screen, just to an extra buffer.<br/>Doing a fake render is probably the easiest, most efficient and most accurate way of doing this. A major issue with building a separate DB is that you have to build the dynamic octrees for it. By using the rendering pipeline with the object cache you don't have to write new code to do this. [[User:Strife Onizuka|Strife Onizuka]] 22:51, 24 March 2007 (PDT)
::::Why go through all the trouble of actually rendering something (you can see the impact of extra rendering by enabling reflections) when textures are the only thing that is wanted? Plus, like I said, I don't want to request anything missing. I want to do a sort of predictive loading of textures from the cache. I took a look at the object cache, but assuming I found the right thing, it doesn't seem to store texture usage.
::::Here's basically the behavior I want: I visit, say, Lusk pretty often. The usage of textures in a sim can be easily mapped. The viewer knows I'm teleporting to Lusk (200,120,52). It can look in the sim's DB and start loading the tree, ground, platform, etc. textures (again, from the on-disk cache) before it even manages to connect to the sim itself. Then by the time the area loads, most textures may already be loaded, so I will avoid seeing it all grey. There's no need to render anything (what for?), nor is there any need to request textures from the grid (as they might not be used anymore).
::::Also, I don't get where you are getting the octree thing from, as my idea doesn't involve any rendering at all, or even storing any geometry data. [[User:Dale Glass|Dale Glass]] 10:06, 25 March 2007 (PDT)
:::::pfft, I've been digging through the object cache code. I could have sworn they were keeping face information in there as well. It's been a long time since I've played with the object cache (the last time I didn't have source to read, and the format has changed a few times since then). *rereads your suggestions* On the one hand using a dedicated DB seems a bit overkill, but on the other I'm having problems finding a simple solution that would do the same thing. I don't see a point in me standing in the way of this anymore.
:::::When would this information be put into the database? [[User:Strife Onizuka|Strife Onizuka]] 17:00, 25 March 2007 (PDT)
::::::The whole thing should be small enough to keep in memory, so it could be kept there, then written to disk when the avatar moves to another sim. [[User:Dale Glass|Dale Glass]] 17:33, 25 March 2007 (PDT)
:::::::That makes sense. How and what information do you gather to put into the database? A simple list of UUIDs doesn't indicate which textures are more important than others. Say there are 100 textures in a 16m cube: which one do you decode first? If you sort the list by usage count, when do you collect this information and how often do you update it? [[User:Strife Onizuka|Strife Onizuka]] 20:27, 25 March 2007 (PDT)
== ARC vs. 2Q ==
Gigs added to the main page that ARC is "probably better than 2Q in every way". However, the PostgreSQL folks replaced ARC with 2Q over patent concerns, and then, as a bonus, found that 2Q performed as well, and maybe better. See [http://lwn.net/Articles/131554/ the comment from Neil Conway] for more on the subject. -- [[User:Rob Linden|Rob Linden]] 12:48, 25 March 2007 (PDT)
:If it's patent encumbered, that's a good enough reason for me not to use it. It looks like IBM will grant a free license to open source projects for the patent, but that wouldn't work for LL because IBM wouldn't let the same license cover proprietary releases (LL would have to buy a separate license). [[User:Gigs Taggart|Gigs Taggart]] 14:42, 25 March 2007 (PDT)
== Idea: Don't ever delete anything ==
I am interested to know if anyone has attempted to just disable all the various cache-cleanup functions in the viewer and direct the cache to a separate dedicated drive partition used solely for caching.
Hard drive prices have recently reached an all-time low in terms of price per gigabyte. Right now it is possible to buy a 500 gigabyte 7200 RPM 3.0 gigabit SATA-II drive for a mere US$115.
Prices are sufficiently low that a heavy Second Life user might be willing to buy an extra hard drive or create a 50+ gig drive partition dedicated solely to caching their personal SL universe.
If the viewer is permitted to just keep on writing data continuously to an open-ended cache volume that does not fragment (since it's not used for anything else), how is performance of the viewer affected as the overall cache size increases?
While the SL database farm was said to contain about 25 terabytes of data in January 2007, the average person will never see the entirety of every single SL simulator. After an initial phase of exploration, the caching of data is likely to reach a point where it levels off and does not continue to grow all that significantly. A single person is not likely to ever exceed the capacity limits of a dedicated 500 gig local hard drive cache.
My question is, at what point does the leveling-off of the caching occur, and is the viewer as written capable of dealing with a generally open-ended caching mechanism?
[[User:Scalar Tardis|Scalar Tardis]] 10:41, 13 July 2007 (PDT)
=== Accommodation of an unlimited-size cache ===
The major problem I can see with an open-ended cache is that searching for assets may start to take an excessively long time when using just a small number of cache directories. The Windows 98 Explorer started to get really slow when a directory contained more than 1024 files, and Windows XP probably has a similar practical limit.
Directory search overhead needs to be limited, while still allowing for open-ended caching of everything.
This can probably be remedied in the file system by using a deeply nested directory structure, each level named after three consecutive characters of the key, so that it is always quick to walk down the tree to find the required file:
Cache structure:
* 4096 directories of key chars 0-2
* 4096 directories of key chars 3-5
* 4096 directories of key chars 6-8
* 4096 directories of key chars 9-11
* 4096 directories of key chars 12-14
* 4096 directories of key chars 15-17
* 4096 directories of key chars 18-20
* 4096 directories of key chars 21-23
* 4096 directories of key chars 24-26
* 4096 directories of key chars 27-29
* Maximum of 256 key files in this final directory (distinguished by the last two key characters)
Implementation:
* Key: 0123456789ABCDEF0123456789ABCDEF
* Location: H:\SLCache\012\345\678\9AB\CDE\F01\234\567\89A\BCD\01234567-89AB-CDEF-0123-456789ABCDEF.key
* Directories are only created as needed when new assets are downloaded
With this caching mechanism, as cache storage grows into the billions of files, it is only ever necessary to go down ten directory levels to find the necessary file. Initially the directory structure imposes a higher cost on a small cache, but search costs will probably start to drop off (relative to a flat layout) as the overall cache size reaches into the millions of files.
While this directory structure might seem ridiculously huge, it is important to note that LL only uses a tiny fraction of the total keyspace, and will likely never use the entire keyspace, so the total number of actual directories will remain small.
If the cache drive is dedicated to write caching only from SL, then fragmentation slowdown should not be an issue.
[[User:Scalar Tardis|Scalar Tardis]] 10:41, 13 July 2007 (PDT)
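A sketch of building the nested cache path described above from a 32-character key (assumes the dashes are stripped for the directory levels; the drive letter and depth follow the example, and the file could equally be named with the dashed form of the UUID):
<pre>
#include <cassert>
#include <string>

// Build H:\SLCache\012\345\...\<key>.key from a 32-character hex key:
// ten directory levels of three characters each, file named after the key.
std::string cachePathForKey(const std::string& hexKey)  // 32 hex chars, no dashes
{
    assert(hexKey.size() == 32);
    std::string path = "H:\\SLCache";
    for (int level = 0; level < 10; ++level)
        path += "\\" + hexKey.substr(level * 3, 3);   // chars 0-2, 3-5, ..., 27-29
    return path + "\\" + hexKey + ".key";
}
</pre>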
=== This is where contemporary file systems break! ===
{{Mention|Scalar Tardis|p=}}, I'm pretty sure that, 17 years later (!), your question was probably answered somewhere 🤣
But since this Wiki, like the "universal cache" you mentioned, is also here 'forever' and being searched by Google for answers such as yours, let me add a few thoughts on this...
The ''beauty'' of the whole asset cache server being ''read-only'' — i.e., you can ''never'' modify anything; the best you can do is delete something and add a ''new'' item — is that we have wonderful algorithms (and tools taking advantage of such algorithms) to deal with exactly the use-case you present: how to store billions of files (well, billions in 2007; it ought to be ''trillions'' in 2024, when I'm writing this) in a way that gets fast retrieval even using home-based solutions and 'regular' hardware (i.e., not specifically designed for large data centres)?
There is not ''one'' answer, but there are several (fortunately!). There are, however, ''incorrect'' answers: a 'normal' filesystem for 'normal' operating systems — essentially, what you've got on your desktop/laptop computer — was designed and fine-tuned for situations where files are ''always'' changing, and optimised for that situation. The whole way the filesystem is organised relies on that assumption, which precludes certain forms of using data efficiently. Because most use cases of 'normal' applications from mainstream users fit this assumption (over the course of the day we — and even the filesystem itself — are constantly updating, appending, changing and deleting files, non-stop, ''even if we don't realise it''), a 'regular' filesystem is '''not''' a solution here.
Indeed, I'm sure that any filesystem available on a regular version of Windows (and macOS) will break with 'too many entries'. It will reach a point where retrieving a single file from disk ''may'' take longer than actually downloading it over the 'net (directly to memory)! Even an ever-deeper splitting of directories, creating a very deep multi-level hierarchy (at the limit, it could be 32 levels deep — that's because a UUID is written with 32 significant characters!), will not work in a trivial, straightforward sense. The ''only'' advantages are that each directory will have at most 16 entries, and each of the ''last'' levels (the 'leaves' of the tree) will only have 16 files each (maximum), so this structure ''would'' work — in theory. Because it's not only each level that is limited in number; there is a global pool of nodes for a filesystem, and once these are exhausted, the disk reports 'no space left on device' (or a similar error message) even if it still has lots of free space. It just hasn't enough free ''nodes''.
There are naturally fine-tuning options. You can reformat the disk with more nodes, for instance. Or make each block larger, so that you use fewer blocks per file (blocks are usually 4 KBytes on most Unix or Unix-inspired filesystems), at the cost of having some empty space. Someone doing 'radical fine-tuning' would list ''all'' file sizes for all the downloaded assets and do some math to determine how big each block should be, and how many nodes will be required to store everything, while still wasting as little disk space as possible (so that ''more'' assets can be crammed in). And I believe there are some newer-generation filesystems where you can do that ''without'' reformatting the disk; I've even heard of filesystems where the number of nodes and the block size are ''dynamically adjusted by the operating system''. The latter, however, is a bold claim which requires proof, and I have none to offer. It is ''conceivable'', sure, but is it ''realistic''? (Think of the time spent eternally recalculating, over and over again, how the file sizes change over time and what the 'best' allocation is at every moment.)
No, instead, what we have to deal with ''this specific scenario'' is something much more basic: we call it a '''key/value pair store''', sometimes also known as a '''NoSQL database'''. KVP stores — which in the 2020s are even implemented, in two different ways, in LSL itself! — are ''the'' way of efficiently managing a huge number of read-only files (''huge'' meaning trillions or quadrillions) and retrieving them 'instantly'. There are lots of algorithms for doing that '''very''' efficiently. It might take some time for the KVP store to process those trillions of files — it does, after all, have to build a data structure (usually some form of advanced tree) to address them effectively — but after that happens, the data is retrieved instantly, with next-to-zero delays, and that's why they're so popular (every browser, for instance, provides a KVP store to each JavaScript instance running on a web page — like with LSL, KVP stores are extremely useful for storing data that doesn't change and retrieving it at a blindingly fast speed).
A good KVP store design will ''not'' use 'trillions of files on disk' to store a 'trillion assets'. At the limit, it might just need ''one'' file on disk. What happens is that the start and end positions of each asset are ''also'' recorded with the 'key'. In the simplest model, the file just grows, and old deleted data remains where it originally was. Every now and then (which can be user-defined, automated, periodic, manually launched...) the KVP store ''might'' 'prune' the tree, so to speak: look at which entries have been deleted and now point to 'nowhere', and recover the lost space. Other implementations, to prevent access to sensitive data which might have been deleted from the tree but not yet from the actual storage, might fill the space with zeros first and let the 'garbage collector' run at a later stage. Details vary from solution to solution (and that's why there are different algorithms!) and this is an area of active research, although there are a few 'reference implementations' which are conceptually sound and have passed the test of time (and are therefore used for benchmarking purposes).
That said, the 'best' implementation of your suggested 'infinite data storage' would be a KVP store that worked as an HTTP proxy server. A popular solution would be to integrate, say, a Redis server (a popular KVP store which is managed entirely in memory and is therefore ultra-fast), which can easily handle a few hundred million entries (4 billion being its theoretical limit), and put a bare-bones, ultra-simple, minimalistic HTTP proxy 'layer' in front of it, configured to cache using the Redis KVP store (as opposed to anything on the filesystem) as the backend. That should work fine, with limited fuss (a nice way to spend an afternoon writing it in Go and cross-compiling it to be installed on my NAS :) ).
— [[User:Gwyneth Llewelyn|Gwyneth Llewelyn]] ([[User talk:Gwyneth Llewelyn|talk]]) 11:31, 27 September 2024 (PDT)
== You don't need that many directories... ==
One trick would be to make megafiles (which contain, say, up to 1024 textures each, all of which, say, have a texture key starting with "a6" and are 64x64). Then when you need "a822ff2b-ff02-461d-b45d-dcd10a2de0c2" you look up the megafile, whose header says every texture in it is 64x64 (saving the trouble of having that info written in 1024 places) and which is indexed at the beginning of the file; you're really looking into an index at the start of the megafile rather than doing redundant OS-type things across 1024 directories. Then you find the relevant record in the megafile and put only that into live memory...
Also, since textures tend to come from the same sim that has been visited before, a cache file with that sim's ID (or creator name) as the first directory would help speed up the search, at the cost of the occasional duplicate or pointer to another megafile.
In any case, some users would like to cache all elements of certain games or sims forever, so that they don't get PWNED by texture load lag or see unloaded textures while acting in a play and mess up the acting because of it...
Surely some similar BSD-licensed code exists somewhere, we just have to find it. One question remains: how many textures can we cram into a max-size file anyhow?
[[User:Contagious Republic]] 14:44, 27 Feb 2008
With a proper hashing algorithm, the directory path crossing can be avoided. Say you need texture "a822ff2b-ff02-461d-b45d-dcd10a2de0c2": chunk it into a hash, say "af 22 b0 07 5f 7a 6d 33" (whatever hash length we need), and use it as a raw hard drive address. If that slot contains "I am texture a822ff2b-..., here is my data" then bingo; if there is no texture there, add it; if it is taken by a texture with the same hash, try the very next hash position, "af 22 b0 07 5f 7a 6d 34".
Also, use low-CPU-usage time to remove textures that are older than an arbitrary number of days...
This kind of code exists somewhere and has been working without too many bugs; it's just a matter of not completely reinventing the wheel.
As another alternative to 20+ directory path crossings, keep the entire directory tree in memory if feasible, or at least in one file, so we can just point to the proper file according to the same hashing system... in OSes that don't auto-defragment all the time this should work...
[[User:Contagious Republic]] 1:12 AM, 8 March 2008
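The probing scheme sketched above is essentially open addressing with linear probing. A minimal in-memory sketch follows (a real on-disk variant would seek to slot * recordSize in a preallocated cache file instead of indexing a vector, and this toy version assumes the table is never allowed to fill completely):
<pre>
#include <functional>
#include <string>
#include <vector>

// One fixed-size slot per texture; the real record would hold offsets/sizes.
struct Slot { std::string uuid; /* texture data offset, size, age, ... */ };

class ProbingIndex
{
public:
    explicit ProbingIndex(size_t slots) : mTable(slots) {}

    // Returns the slot index holding 'uuid', inserting it if absent.
    size_t findOrInsert(const std::string& uuid)
    {
        size_t i = std::hash<std::string>{}(uuid) % mTable.size();
        while (!mTable[i].uuid.empty() && mTable[i].uuid != uuid)
            i = (i + 1) % mTable.size();          // try the very next position
        if (mTable[i].uuid.empty())
            mTable[i].uuid = uuid;                // claim the free slot
        return i;
    }

private:
    std::vector<Slot> mTable;
};
</pre>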
=== Level 2 data cache ===
Doing 11 file system directory traversals to find every single key may potentially be slow, so it might be worth looking at how processor memory caches operate to improve key retrieval speeds (see the sketch at the end of this section):
* Use a small 50 MB secondary cache alongside the unlimited-size cache (it could be a VFS)
* When requesting a key, first check the VFS. If found, get it and proceed.
* If not found in the VFS, do the more time-consuming unlimited cache check.
* If not found in the unlimited cache, download directly, and write first to the VFS and then to the unlimited cache.
Over time, as the unlimited cache builds up, the network object/texture downloads will steadily decrease until they effectively stop occurring, and all retrieval is from the unlimited cache.
Meanwhile, using a VFS intermediary permits very fast lookups of frequently used data without the time cost of always doing a deep search for every key.
[[User:Scalar Tardis|Scalar Tardis]] 12:36, 14 July 2007 (PDT)
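In rough C++, the lookup order described in this section, with trivial stand-ins for the two cache back-ends and the network fetch (all names here are hypothetical, not viewer code):
<pre>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

using Bytes = std::vector<unsigned char>;

// Stand-ins for the small, fast VFS-style cache, the unlimited on-disk cache,
// and the network download.  All of these are placeholders.
static std::unordered_map<std::string, Bytes> gVfs, gUnlimited;

static std::optional<Bytes> lookup(std::unordered_map<std::string, Bytes>& cache,
                                   const std::string& key)
{
    auto it = cache.find(key);
    if (it == cache.end()) return std::nullopt;
    return it->second;
}

static Bytes downloadFromGrid(const std::string& key)
{
    return Bytes(key.begin(), key.end());   // placeholder for a real download
}

// Check the small "level 2" cache first, then the unlimited cache, and only
// then hit the network, writing back to both caches after a download.
Bytes fetchTexture(const std::string& key)
{
    if (auto hit = lookup(gVfs, key)) return *hit;
    if (auto hit = lookup(gUnlimited, key)) { gVfs[key] = *hit; return *hit; }
    Bytes data = downloadFromGrid(key);
    gVfs[key] = data;
    gUnlimited[key] = data;
    return data;
}
</pre>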
== Generalized Problem ==
I am not an expert by any means, but I will chime in anyway. All the discussion has been about caching on each local host. What about caching per local network, or LAN?
If two or more users are on the same LAN, they should share a cache somehow. Consider several users in the same classroom, for example, not to mention the builder or business person with two or three workstations all logged in.
Suppose the client included a web proxy such as Squid to manage the textures (and anything else that will be cached). If two or more clients are on the same LAN, there could be some sort of negotiation strategy by which one of the proxies takes ownership. All the other local users get their data from that cache.
Just thinking out loud.
[[User:Lee Ponzu|Lee Ponzu]] 19:28, 6 July 2009 (UTC)
:The deal with IBM works something like this: LL provided them with an asset server. Any assets it didn't have, it would request from LL's asset servers and cache. Your idea is just one step behind. The problem with having a server built into the client is that if your client were sufficiently underpowered, or the network large enough, it would get hosed (DDoS). You really don't want your client to be the server. There has been talk in the community about using various file-sharing-inspired asset delivery mechanisms (BitTorrent etc.). With HTTP textures it should be possible to use any HTTP proxy (like Squid), though you might need to code proxy support into the client. -- '''[[User:Strife_Onizuka|Strife]]''' <sup><small>([[User talk:Strife_Onizuka|talk]]|[[Special:Contributions/Strife_Onizuka|contribs]])</small></sup> 19:32, 7 July 2009 (UTC)
:: More than a decade later, this is not only operational but definitely recommended, especially because these days almost all content — except for positioning information and the current animation being played by other avatars — is delivered via HTTP. Proxy web caching software such as Squid works '''wonderfully''' and is very highly recommended! (I run my texture caching server on the home NAS 😅). All viewers I'm familiar with already have an option to use a proxy server for HTTP content (for the built-in web browser ''and'' for all HTTP content served from the asset servers). — [[User:Gwyneth Llewelyn|Gwyneth Llewelyn]] ([[User talk:Gwyneth Llewelyn|talk]]) 09:25, 27 September 2024 (PDT)
Latest revision as of 10:31, 27 September 2024
Please sign comments you leave here by putting four tildes (~~~~) at the end of your comment. For more guidelines, see Talk Page Guidelines
The discussion below is a continuation of a long conversation on the sldev list in March, 2007, which per the newly created SLDev guidelines was moved to this talk page.
Related discussions:
GIF
We can safely avoid the GIF format. It's 256 colors only. it use unisys patent LZW (the good old patent used in the good old GIF expired, but unisys repatent'd it because of some "improvments" in the algorithm.). Unisys suddenly claimed they wanted royalties to encode GIF file, and just for that we shouldn't use it. source : License Information on GIF and Other LZW-based Technologies -- kerunix Flan
- The last of the patents expired in October 2006. GIF is no longer patent encumbered. But PNG is still better. Keep in mind by recompressing the image to cache it you're trading CPU usage for disk space. Compression is usually a lot slower than decompression. Just storing uncompressed might be overall faster. In the end, we need to assume nothing and benchmark, benchmark, benchmark. Seg Baphomet 22:45, 23 March 2007 (PDT)
- The GIF format is not suitable. It only supports 256 colors per frame. It does not support semi-transparent pixels. PNG is a superior format. Has anyone considered the time required to re-encode images in the target format? Strife Onizuka 09:42, 24 March 2007 (PDT)
Databases
There appears to be support with APR for a portable interface to common databases, namely Berkely-DB and SQL. Dzonatas Sol 18:23, 23 March 2007 (PDT)
- APR = Apache Portable Runtime ? If yes, i can't find any interface to common databases in the APR documentation. Kerunix Flan 03:48, 25 March 2007 (PDT)
- Yes, I noticed some features added to the 1.3 version that is in development. Dzonatas Sol 13:00, 25 March 2007 (PDT)
Stats
It would be useful to get stats on current and new hardware profiles from LL to get a sense of what people are using and where the focus should lie. Iron Perth 03:00, 23 March 2007 (PDT)
I plugged a simple cpu clock tick timer class (based on http://www.codeguru.com/forum/showthread.php?s=&threadid=280008) into FL-1.13.3.59558 VC Express, XP pro build lltexturecache, and recorded some times for index lookup & texture loading in LLTextureCacheWorker::doRead(). Spec: budget Sempron 2800+ on Gigabyte k8vt800, FX5200 128mb graphics, 1gb RAM, 160gb HDD (inexpensive + no RAID, not about to pull PC apart to get further HDD details) 2x40gb partitions, cache on partition #1, 54% full, in need of a defrag (currently 33% file fragmentation!). After running the download FL, the cache got purged (any way of re-building the index instead?).
(while re-building a half-decent cache, ReleaseNoOpt build executed from within VC after several mins running time & teleporting to around 7-8 regions) WARNING: TickTimer::StopWithReporting: Lookup ticker. WARNING: TickTimer::StopWithReporting: COUNT: 10200 WARNING: TickTimer::StopWithReporting: TOTAL (sec): 6.06264 WARNING: TickTimer::StopWithReporting: AVG (sec): 0.000594377 WARNING: TickTimer::StopWithReporting: MAX (sec): 0.147985 WARNING: TickTimer::StopWithReporting: File read ticker. WARNING: TickTimer::StopWithReporting: COUNT: 3700 WARNING: TickTimer::StopWithReporting: TOTAL (sec): 12.8276 WARNING: TickTimer::StopWithReporting: AVG (sec): 0.00346692 WARNING: TickTimer::StopWithReporting: MAX (sec): 0.126929
nb. times may be affected by cpu speed varience (callobrated only once, and only over 1sec)
The other day, I was getting a better 0.001s average on the cached file read. Paula Innis 10:53, 25 March 2007 (PDT) (feel free to tidy this up or whatever)
TGA
I can't find it in the spec, but i think this format give the warranty that images of a given size will always have the same file size. (without RLE of course). This may help for some optimization.
Additionaly, the "developper area" of the TGA file format could be used to store some usefull information.—The preceding unsigned comment was added by kerunix Flan
- Converting images into the TGA format for cache storage has the advantage that the client already supports rendering TGA images. It would be one of the easiest solutions. Images with the same dimensions would all be the same but otherwise would be different in size. Strife Onizuka 02:22, 24 March 2007 (PDT)
Faster Formats
Contribution from Tofu Linden from the email discussion
RLE is cheap but also not very effective at all for many, many textures; wherever I might have used RLE in ye olden days I'd now use trusty old zlib or blingy liblzf, the latter of which is more or less on the same order of speed as RLE but a LOT more effective on data (images) with redundancy not necessarily manifesting as runs of pixels. Iron Perth 02:30, 24 March 2007 (PDT)
- And as said on the mailling, the destructive effect of JPEG2000 probably killed any chance to have a good compression rate using RLE. RLE is fast and the only compression supported by the Targavision specification. If we don't use it (probably because it's useless) then don't use compression at all. One of the advantage of using RLE like said in the specification is that we don't have to decompress anything to read the header, developer area, and footer. Kerunix Flan 08:02, 24 March 2007 (PDT)
- I think I missed the bit about "the destructive effect of JPEG2000" on the list. Currently lossy compression is only used when the source is a JPEG, and lossless for BMP and TGA. The point of using a format like JPEG2000 is that it achieves compression ratios superior to other formats for the same image quality. To achieve high compression ratios, modern codecs sacrifice CPU time. The reason for the push for higher ratios is to reduce the amount of storage and bandwidth required for transmission. If SL moves away from JPEG2000 towards a less CPU-intensive, lower-compression-ratio codec, the asset servers will be serving more data per asset request. This seems like a step backwards, not forwards. As an intermediate format to save CPU time and avert the costly re-decoding of an image, it makes sense to pick a faster format, though it will be costly in disk space; and as long as the disk is not horribly fragmented it has the potential to be faster than decoding. A format needs to fit these criteria to be considered for this task:
- Lossless
- Support a variable number of channels.
- Fast to encode, fast to decode
- Preferably GPL compatible.
- This excludes: GIF & JPEG. This may also exclude PNG because most implementations treat anything that is fully transparent as rgba(0,0,0,0); white images develop black outlines at low resolutions (not lossless). Strife Onizuka 21:08, 24 March 2007 (PDT)
- We talked about at least 2 kinds of local cache:
- Compressed texture cache: obviously, it should just be storage for the JPEG2000 sent by the sim. Why re-encode (and lose quality)?
- Uncompressed texture cache: storing some textures already decoded, so the client doesn't have to lose time decoding them when needed. For me, the best format is TGA. Kerunix Flan 03:31, 25 March 2007 (PDT)
- zlib on top of TGA works for me. Speed and ease of decoding. Strife Onizuka 18:24, 25 March 2007 (PDT)
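A rough sketch of the "zlib on top of TGA" idea, assuming the raw TGA bytes are deflated when a cache entry is written and inflated when it is read back. Only the zlib calls (compressBound/compress2/uncompress) are the real library API; the surrounding function names are made up:
<pre>
#include <zlib.h>
#include <cstdint>
#include <vector>

std::vector<uint8_t> deflateForCache(const std::vector<uint8_t>& rawTga)
{
    uLongf destLen = compressBound((uLong)rawTga.size());
    std::vector<uint8_t> out(destLen);
    // Z_BEST_SPEED keeps the encode cost low; the data is cache-only anyway.
    if (compress2(out.data(), &destLen, rawTga.data(), (uLong)rawTga.size(),
                  Z_BEST_SPEED) != Z_OK)
        return rawTga;            // fall back to storing uncompressed
    out.resize(destLen);          // a real entry would also store a "compressed?" flag
    return out;
}

std::vector<uint8_t> inflateFromCache(const std::vector<uint8_t>& packed,
                                      size_t expectedRawSize)
{
    // expectedRawSize is known in advance: width * height * bytes-per-pixel + header
    std::vector<uint8_t> out(expectedRawSize);
    uLongf destLen = (uLongf)expectedRawSize;
    if (uncompress(out.data(), &destLen, packed.data(), (uLong)packed.size()) != Z_OK)
        out.clear();
    out.resize(destLen);
    return out;
}
</pre>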
http://en.wikipedia.org/wiki/Huffman_coding may be worth a look. It may work well with 'jpeg artifacts' Paula Innis 13:03, 25 March 2007 (PDT)
- Take a look at http://en.wikipedia.org/wiki/Huffman_coding#Applications. See DEFLATE there? That's what .zip uses. You can get it with zlib. Dale Glass 13:19, 25 March 2007 (PDT)
- And as http://www.zlib.net says; 'Note that zlib is an integral part of libpng'. There is also .DXT to consider. Paula Innis 14:21, 25 March 2007 (PDT)
- Right. I said that because I don't get why you're mentioning Huffman then. It's been implicitly brought up by Tofu ("I'd now use trusty old zlib") and Seg Baphomet and Strife Onizuka (who mentioned PNG) before you added that. DXT, if you refer to S3 Texture Compression, is lossy, and patented, which almost certainly makes it unacceptable.
- AFAIK, the S3 patent only requires registration and the payment of royalties for graphics driver development, not for applications and encoders, and (again AFAIK) S3 has issued a general license to use S3TC/DXTC for all OpenGL drivers. The HUGE advantage of S3TC is that almost all 3D cards and onboard chipsets support it natively. You can directly upload S3TC textures to OpenGL without having to decompress them, and OpenGL uses the compressed version, saving texture space (DXT has a compression ratio of 6:1 when compressing 24-bit RGB textures). In many situations, using S3TC-compressed textures is even faster than uncompressed ones, because the decompression is extremely simple and storing the compressed version in texture memory saves I/O bandwidth on the graphics board. It is definitely true that DXT is lossy, but so is the initial JPEG2000/wavelet compression, and the number of advantages of S3TC may be worth the lossiness. Another advantage of S3TC is that the texture data size (body) for textures with power-of-two dimensions (all in SL AFAIK) is also a power of two, making it possible to use efficient storage mechanisms for the local cache, e.g. the buddy system (http://en.wikipedia.org/wiki/Buddy_memory_allocation) to store all textures in a single pre-allocated file. --Dreamer Miquel 03:18, 31 August 2007 (PDT)
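For illustration, a sketch of what makes DXT attractive as a cache format: blocks read straight from disk can be handed to the GPU with glCompressedTexImage2D (a real GL 1.3 / EXT_texture_compression_s3tc entry point); the helper around it is hypothetical and assumes an extension loader such as GLEW:
<pre>
// Assumes an extension loader (e.g. GLEW) provides glCompressedTexImage2D
// and the GL_EXT_texture_compression_s3tc tokens.
#include <GL/glew.h>
#include <cstddef>

// DXT1 = 8 bytes per 4x4 block (6:1 for 24-bit RGB), DXT3/DXT5 = 16 bytes per block.
inline size_t dxtDataSize(int w, int h, GLenum format)
{
    size_t blockSize =
        (format == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ||
         format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) ? 8 : 16;
    return size_t((w + 3) / 4) * size_t((h + 3) / 4) * blockSize;
}

// No CPU-side decode step: the cached, pre-compressed blocks go straight to GL.
void uploadDxtFromCache(GLuint tex, int w, int h, GLenum format,
                        const unsigned char* blocks /* read directly from the cache */)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, /*level*/0, format, w, h,
                           /*border*/0, (GLsizei)dxtDataSize(w, h, format), blocks);
}
</pre>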
- Excuse me, Strife, but are you putting Paula's name on something I wrote? I see I forgot to sign again though, grr. Dale Glass 18:43, 25 March 2007 (PDT)
- Sorry about that I really thought I had gotten the right name on it. Wasn't on purpose x_x. Strife Onizuka 20:14, 25 March 2007 (PDT)
Umm, I guess I did miss Tofu's mention of zlib in amongst all that RLE stuff (which triggered my suggestion of Huffman trees). Not awake enough to register it when I read it, I guess. Paula Innis 07:30, 26 March 2007 (PDT)
== Current Caching ==
Both Tofu and Steve have made the point that they feel cache hit rate issues are a priority.
- Are they talking about on-disk texture cache or the subset of textures that resides in video memory? The page topic suggests the former, the discussion seems to be about the latter.--Dreamer Miquel 03:51, 31 August 2007 (PDT)
Contribution from Steve Linden from email discussion:
2. The way we calculate texture pixel coverage (and therefore desired discard level) is as follows:
a. When anything using a texture is rendered, set its pixel coverage to MAX(current pixel coverage, rendered pixel coverage)
b. Once a frame, reduce the current pixel coverage of all textures by 10%. This way, everything rendered recently has an accurate pixel coverage, and everything not rendered degrades over time. gImageList.mForceResetTextureStats merely tells the code "assume that all textures are no longer visible" and resets the pixel area of all texture to 0. As soon as they are rendered their pixel coverage is updated.
- Doesn't this reduce the pixel coverage of all textures to zero after - at most - a few minutes, if they aren't rendered? Let's say a computer does 50fps. In 5 minutes, it does 15000 frames (50fps * 300 seconds). In this time, the pixel coverage degrades by a factor of 0.9^15000 ≈ 4.34E-687, which is beyond float and double precision. Hence, after this time, the coverage of all these textures would be rounded to zero. If I'm correct, this means that ALL textures that are not used for 5 minutes would be discarded as soon as a sweep is performed. --Dreamer Miquel 03:33, 31 August 2007 (PDT)
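A quick way to check the arithmetic above; the loop mirrors the 10%-per-frame reduction, and the true value of about 4.3E-687 is far below anything a double can represent, so it does underflow to zero:
<pre>
#include <cmath>
#include <cstdio>

int main()
{
    double coverage = 1.0;
    for (int frame = 0; frame < 15000; ++frame)
        coverage *= 0.9;                  // the 10% per-frame reduction
    printf("%g\n", coverage);             // prints 0 (true value ~4.3e-687)
    printf("%g\n", std::pow(0.9, 15000)); // also 0: well below DBL_MIN (~2.2e-308)
    return 0;
}
</pre>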
3. Even if a texture's pixel coverage gets set to 0, we do not immediately discard the texture data. We only discard the data if we need room for more textures (in which case we discard the textures with the lowest priority), or if no object in view is using the texture *and* 30 seconds have elapsed.
- Does that mean that, even if we have a lot of VRAM available, the texture will be removed from cache if we don't see it for 30s? And then moving our avatar in the sim will make the client re-decode the texture from cache to load it into VRAM again? Why not discard the texture only when we're out of memory? (please delete/replace this comment with the answer) Kerunix Flan 08:18, 24 March 2007 (PDT)
4. Tateru is absolutely correct in that caching textures to disk mostly just saves on network bandwidth. Unless you have a slow network connection, it can be almost as fast to load textures across the network as from a disk, especially if that disk is not especially fast or is fragmented. Decoding the textures takes quite a bit longer. However, if you have multiple CPUs and enable Client > Rendering > Run Multiple Threads, the decoding will be done on the second CPU and go much faster.
- I can't believe loading a texture from a remote cache isn't much slower than loading a texture from a local cache. And if you add the usual packet loss and high ping (for non-US users) it's *certainly* much slower. Kerunix Flan 08:18, 24 March 2007 (PDT)
He also makes some interesting comments about teleporting around and filling up the texture cache. Check out his original email here:
https://lists.secondlife.com/pipermail/sldev/2007-March/001133.html
Iron Perth 02:44, 24 March 2007 (PDT)
== Current Caching sculptie crunch ==
Currently sculptie makers are using 64x64 textures which are exactly like 32x32 ones (just stretched pixels); this takes 4x the memory. Aside from the extra texture-creation hassle, there is a bandwidth, memory, and CPU cost attached for both Linden Lab and users...
To stop this behavior, Linden Lab could make an exception for sculpties and never crunch them into a smaller texture to save memory; we're otherwise wasting bandwidth and texture-crunching CPU time, and fixing it would also mean that exactly the memory the sculptie creator truly needs is used, not 4x.
To qualify for the non-cache-crunching status, a sculptie should be 32x32 max already and should be used as a sculptie when first loaded by the client, not as a regular texture.
Alternatively, some textures could have a "crunch me first" or "you can crunch me more" flag, a voluntary measure by the creator or owner to make things smoother for more important textures (rather than a "don't crunch me" flag, which everyone would selfishly set all the time).
The recent ability to upload sharp-edge sculpties correctly (thanks for the fix!) has made this issue all the more important: precision sculpties are finally being uploaded, and they look uglier when level of detail truncates their details.
P.S.: the average level of detail never allows me to see my roach sculptie as 32x32 no matter how close I am; it is irritating that people see a potato-type shape with triangles coming out unless I tell them to adjust detail levels in preferences... this is why the wasteful 64x64 format (and higher, which *does* get crunched less) is staying.
P.P.S.: the longer you delay this issue, the more 64x64 (and larger) legacy sculpties will permanently slow down the system...
User:Contagious Republic 14:40, 11 March 2008
== Home Cache ==
If a resident has a home, why not have a separate texture cache for it, a Home Cache? This cache would only be cleared explicitly and would be accessed first by the client when the resident logs on or teleports home.
Once populated with textures, each logon/teleport to home would not involve network traffic, as they would be available locally, on the client.
Benja Kepler 02:52, 24 March 2007 (PDT)
- A similar idea would be to have a hit count on textures. The hit count can be kept around much longer than the texture data. The hit count would be used to indicate what textures to keep in the cache longer. Dzonatas Sol 12:53, 24 March 2007 (PDT)
- We probably don't want to invent our own caching system, since this type of thing gets studied a lot. Looking around, at what other projects are doing (e.g. PostgreSQL, Linux), I found this replacement for the LRU scheme used today: 2Q cache system. This balances hit frequency and hit recency to determine whether to eject something from the cache. -- Rob Linden 23:34, 24 March 2007 (PDT)
- I agree. Trying to outsmart all the hundreds of people developing new cache replacement algorithms would be stupid. There's plenty of things out there better than LRU without rolling our own. Gigs Taggart 14:34, 25 March 2007 (PDT)
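For a sense of how small a simplified 2Q policy is, here is a rough sketch keyed by texture UUID strings. The 25%/50% sizing of the two auxiliary queues follows the 2Q paper's suggestion; everything else (names, structure) is illustrative and not taken from the viewer:
<pre>
#include <list>
#include <string>
#include <unordered_map>

class TwoQCache {
public:
    explicit TwoQCache(size_t capacity)
        : mCap(capacity), mKin(capacity / 4), mKout(capacity / 2) {}

    // Returns true on a hit; on a miss the caller loads the texture and calls put().
    bool get(const std::string& key) {
        if (mAm.count(key)) { touchAm(key); return true; }
        if (mA1in.count(key)) return true;        // recently added: leave it in A1in
        return false;
    }

    void put(const std::string& key) {
        if (mAm.count(key) || mA1in.count(key)) return;
        if (mA1out.count(key)) {                  // seen not long ago: treat as hot
            eraseFifo(mA1out, mA1outOrder, key);
            insertAm(key);
        } else {                                  // first touch: probationary queue
            insertFifo(mA1in, mA1inOrder, key);
        }
        evictIfNeeded();
    }

private:
    typedef std::list<std::string> Order;
    typedef std::unordered_map<std::string, Order::iterator> Index;

    void touchAm(const std::string& key) {
        mAmOrder.erase(mAm[key]);
        mAmOrder.push_front(key);
        mAm[key] = mAmOrder.begin();
    }
    void insertAm(const std::string& key) {
        mAmOrder.push_front(key);
        mAm[key] = mAmOrder.begin();
    }
    static void insertFifo(Index& idx, Order& order, const std::string& key) {
        order.push_front(key);
        idx[key] = order.begin();
    }
    static void eraseFifo(Index& idx, Order& order, const std::string& key) {
        order.erase(idx[key]);
        idx.erase(key);
    }
    void evictIfNeeded() {
        while (mA1in.size() + mAm.size() > mCap) {
            if (mA1in.size() > mKin) {            // demote the oldest probationary entry
                std::string victim = mA1inOrder.back();
                eraseFifo(mA1in, mA1inOrder, victim);
                insertFifo(mA1out, mA1outOrder, victim);  // keep only its key (a "ghost")
                if (mA1out.size() > mKout) {
                    std::string oldGhost = mA1outOrder.back();
                    eraseFifo(mA1out, mA1outOrder, oldGhost);
                }
            } else {                              // evict the coldest frequently-used entry
                std::string victim = mAmOrder.back();
                mAmOrder.pop_back();
                mAm.erase(victim);
            }
        }
    }

    size_t mCap, mKin, mKout;
    Order mAmOrder, mA1inOrder, mA1outOrder;      // most recent at the front
    Index mAm, mA1in, mA1out;
};
</pre>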
== Textures in Inventory ==
A search through the avatar's inventory is a good indication of what textures to keep around longer in the cache than others. Dzonatas Sol 12:50, 24 March 2007 (PDT)
- I can't agree with this. I have many textures that I rarely see in-world. I don't think it merits extra attention. Strife Onizuka 22:59, 24 March 2007 (PDT)
== Plan: Normalization of Texture Usage ==
A method to keep statistics on pixel coverage and hit rates: we use a collection of hit counters, one per entry for an associated texture. When any hit counter reaches a threshold, the collection is normalized and the results are stored in the hit-rate fields of each entry. The hit-rate fields may store percentages for short-term and long-term storage. The short-term and long-term indicators provide a value for deciding how textures are archived or disposed of. Pixel-coverage rates are optional fields in each entry, used to determine the optimal storage form for the texture data: uncompressed or compressed, and whether the stored texture is downsampled and to what degree. Dzonatas Sol 17:05, 24 March 2007 (PDT)
- I think this would result in decreased performance as a lot of time would be spent incrementing hit counters. Strife Onizuka 22:57, 24 March 2007 (PDT)
- How hit counters are incremented have not been analyzed for implementation. This is just a general plan independent of any specific implementation. Dzonatas Sol 23:03, 24 March 2007 (PDT)
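A loose sketch of the normalization plan above, with illustrative names, an assumed threshold, and a simple 50/50 blend for the long-term rate (the per-hit cost concern raised above is not addressed here):
<pre>
#include <string>
#include <unordered_map>

struct TextureStats {
    unsigned hits = 0;        // raw counter since the last normalization
    float shortTermRate = 0;  // share of recent hits (0..1)
    float longTermRate = 0;   // slowly-moving average of shortTermRate
};

class UsageTracker {
public:
    void recordHit(const std::string& uuid) {
        TextureStats& s = mStats[uuid];
        if (++s.hits >= kThreshold)   // "when any hit counter hits a threshold"
            normalize();
    }
private:
    void normalize() {
        unsigned long total = 0;
        for (auto& kv : mStats) total += kv.second.hits;
        if (total == 0) return;
        for (auto& kv : mStats) {
            TextureStats& s = kv.second;
            s.shortTermRate = float(s.hits) / float(total);
            s.longTermRate  = 0.5f * s.longTermRate + 0.5f * s.shortTermRate;
            s.hits = 0;               // reset raw counters for the next window
        }
    }
    static const unsigned kThreshold = 1024;   // illustrative value
    std::unordered_map<std::string, TextureStats> mStats;
};
</pre>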
== Idea: Keep a cache index ==
Keep an on-disk index of cache files. This could be made compact and small enough to fit fully into RAM. The index would let the viewer know whether a file is in the cache, avoiding the attempt to open a file that isn't there if the current way of storing each texture in its own file is kept.
Additionally, store some metadata with it: file size, md5sum, and a very rough texture approximation (perhaps even limited to a solid color) to have something better than the current grey.
File size and md5sum would allow checking the on-disk files' integrity, which should avoid any need to clear the cache, as correctness could be automatically verified.
- Isn't this what the texture.cache and texture.entries files do right now? --Dreamer Miquel 03:47, 31 August 2007 (PDT)
My suggested format for the index is Berkeley DB, which is simple and compact, and is ACID compliant, which should ensure the integrity of the index. Dale Glass 15:39, 24 March 2007 (PDT)
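To make the proposal concrete, a rough sketch against the Berkeley DB C API (db_create, DB->open, DB->put and DB->get are the real API; the record layout and function names are illustrative):
<pre>
#include <db.h>
#include <cstdint>
#include <cstring>

struct CacheIndexEntry {
    uint32_t fileSize;
    uint8_t  md5[16];
    uint8_t  approxColor[4];   // "something better than the current grey"
};

DB* indexOpen(const char* path)
{
    DB* db = NULL;
    if (db_create(&db, NULL, 0) != 0) return NULL;
    // DB_HASH: keys are UUIDs, so there is no need for range scans.
    if (db->open(db, NULL, path, NULL, DB_HASH, DB_CREATE, 0) != 0) return NULL;
    return db;
}

bool indexPut(DB* db, const char* uuid /* 36-char textual UUID */,
              const CacheIndexEntry& e)
{
    DBT key, val;
    memset(&key, 0, sizeof key); memset(&val, 0, sizeof val);
    key.data = (void*)uuid; key.size = 36;
    val.data = (void*)&e;   val.size = sizeof e;
    return db->put(db, NULL, &key, &val, 0) == 0;
}

bool indexGet(DB* db, const char* uuid, CacheIndexEntry& out)
{
    DBT key, val;
    memset(&key, 0, sizeof key); memset(&val, 0, sizeof val);
    key.data = (void*)uuid; key.size = 36;
    val.data = &out; val.ulen = sizeof out; val.flags = DB_DBT_USERMEM;
    return db->get(db, NULL, &key, &val, 0) == 0;
}
</pre>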
- The old VFS used an index; while it didn't keep a hash of the asset, it did track access date. If you are going to be using a DB on top of the cache, you might as well put the entire cache in a single file and save on file system IO. Why? After all, a VFS is just a database of sorts, and regardless you have to search the DB or the FS for the file. There were two problems with the old VFS: slow to find data in the index and slow to decode images (this isn't the fault of the VFS). A well-designed VFS should be able to outperform any FS (because it doesn't have to perform locks). Strife Onizuka 19:12, 24 March 2007 (PDT)
- The filesystem is already an entirely suitable way of keeping data like textures. It does have a few problems, for example that searching for a file in a directory may be relatively expensive. This can be easily compensated for with a small DB used only to cover those shortcomings. A full VFS-like thing is a lot more complex; you basically have to write your own mini-filesystem, and as we've seen, that's hard to do well. Berkeley DB is especially nice in that it's ACID - you have a guarantee that data in it is consistent, unlike with the VFS (unless you go and code that in, but that takes time). Also, by keeping a collection of small items of data of a fixed size you avoid internal fragmentation, which is something you'd have to deal with in a VFS-like approach. If you store textures as files, just use the system defragmenter. Dale Glass 19:43, 24 March 2007 (PDT)
- How would having a Berkeley DB solve the problem of searches being slow? Would it be faster than a std::map.find()? How well does its search algorithm scale?
- The old VFS did have a bit of a problem with free space fragmentation. Writing a quick and dirty defragmentation utility for it is not a difficult task (at one point I wrote a utility for converting the cache into a tar archive; the output couldn't have fragmented free space). Remember, we already have a VFS; we don't have an integrated Berkeley DB. Strife Onizuka 22:20, 24 March 2007 (PDT)
- It would be for on-disk storage, of course. Why invent a new format when a decent one already exists? The contents of it should be small enough to load fully into RAM and map.find() all you want, but at some point it needs to be written to disk. Plus, the VFS was quite awful, which is why it was removed. I don't see any need to reinvent the wheel when you could use an existing library much more easily. Also, assuming you would want to defragment the VFS, when exactly would you do that? On startup, on a 1GB file? Dale Glass 09:54, 25 March 2007 (PDT)
- As I understand it, there is a saved copy of the memory-based cache index (generally of std::map type). However, when corrupted, it is not re-built, and a full purge is performed (cached file integrity is based on file size on disk == file size in index, I believe). Also, when the cache is full, the removal of files is done on a sequential pass through the index, removing files until there is room for a new one (if memory serves me), without care or regard for frequency of use or last time used (additions which would slow it down). Paula Innis 11:43, 25 March 2007 (PDT)
- I've never seen the old VFS perform a purge. In the past I've seen it accumulate assets for months at a time.
- You keep referring to not wanting to invent a new format, the VFS has already been written; the deed has been done; no inventing to be done, just maintenance. Creating a new format is something to always be avoided. The reason the VFS is slow is that map.find doesn't scale. It iterates sequentially through the list. As new items are added to the end of the list the VFS gets slower (because those new items need to be found to be accessed). The methods of searching and indexing the cache are important design features for the cache. Does Berkeley DB do a better job of indexing and searching?
- Anyway Berkeley DB is a no-go, LL can't meet the requirements of the license at this time. Strife Onizuka 18:15, 25 March 2007 (PDT)
- The maintenance can amount to rewriting it, though, and I consider it much easier to plug in something that's been around since 1979 (and is thus incredibly well polished by now) than to maintain a new solution. BDB has very good performance, as it's basically a hash on disk.
- But hrm, I wasn't aware that Oracle got their paws on it. The concept has been around in multiple forms though, so I guess I'll have to check whether any variation with acceptable licensing still exists. Dale Glass 18:49, 25 March 2007 (PDT)
== Idea: Keep a texture map for every sim ==
Currently, dual core CPUs have a large amount of spare time on the second core while they're not actively fetching textures. I propose a way of using this spare time.
The viewer would create a database per visited sim with a list of textures found in each location. This database would allow the viewer to know what textures can be found in nearby areas, even before the grid has transmitted the data about them. Then, when idle, the viewer could use the spare time to start loading textures from the surrounding areas, so that they are instantly available when the avatar moves.
My suggested way of doing that is using a Berkeley DB database to store the data. A simple way of storing the data would be making the key the position inside the sim, rounded to for example multiples of 16 meters. The data stored for that key would be a list of textures used in that area.
For example, the position (120,90,56) when rounded to multiples of 16m translates to (112, 80, 48). Under key "112,80,48" the viewer would store a list of textures used inside that 16x16x16 area.
This should make several optimizations possible:
- When teleporting to a new area, decoding of textures can be started before the grid had time to transmit the required information. It should be possible to start loading textures immediately after the user starts the teleport, while the progress bar is still being shown.
- When the avatar moves around, the viewer could start decoding textures likely to be present in locations about which there's no data available yet. This might make flying less painful.
- If a storage method like the VFS, which stores multiple textures in a file, is chosen, then having this data available could make it possible to optimize the placement of the texture data on disk, for optimal loading. Dale Glass 15:39, 24 March 2007 (PDT)
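A minimal sketch of the rounding/key scheme proposed above, with an in-memory map standing in for whatever on-disk store (Berkeley DB or otherwise) ends up being used; all names are illustrative:
<pre>
#include <cstdio>
#include <map>
#include <set>
#include <string>

std::string cellKey(float x, float y, float z)
{
    char buf[32];
    // e.g. (120, 90, 56) -> "112,80,48"
    snprintf(buf, sizeof buf, "%d,%d,%d",
             (int(x) / 16) * 16, (int(y) / 16) * 16, (int(z) / 16) * 16);
    return buf;
}

// per-sim map: cell key -> UUIDs of textures previously seen in that 16x16x16 m cell
typedef std::map<std::string, std::set<std::string> > SimTextureMap;

void recordTexture(SimTextureMap& sim, float x, float y, float z,
                   const std::string& textureUuid)
{
    sim[cellKey(x, y, z)].insert(textureUuid);
}

// Returns the textures worth pre-decoding for a destination, or NULL if the
// cell has never been visited.
const std::set<std::string>* texturesNear(const SimTextureMap& sim,
                                          float x, float y, float z)
{
    SimTextureMap::const_iterator it = sim.find(cellKey(x, y, z));
    return it == sim.end() ? NULL : &it->second;
}
</pre>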
Stupid question: Why not just have the sim send the object definitions too? You can glean UUIDs for the images from the objects. Then you don't have to ask the sim for the objects too. Wait, but isn't that just the same as looking in all directions at the same time (but only rendering what is in front of the camera)? The great part about just asking for the object definitions is that there already is code in place for caching those definitions. No need to hack in an entirely new DB. Silly question: Is this an efficient way to spend sim CPU time: serving assets not in view and in directions other than the one the user is pointing? Downloading textures while the client is in TP is a bad idea; what if the user doesn't have access to the sim (it takes some time for TPs to fail)? It would be a bad thing for, say, SAP to try to TP into Oracle's private sim (knowing full well that the TP would fail) in an attempt to glean insider information via texture pre-caching (linky). Strife Onizuka 19:34, 24 March 2007 (PDT)
- Sorry, I don't think I succeeded at getting the idea across. It wouldn't involve requesting anything extra from the sim. The idea is that the viewer would compile a database of texture usage inside sims you visited, so when you started teleporting to a sim, it could look in its database and start preloading images from its cache that are likely to be found in the place you're teleporting to. Basically, instead of waiting for the sim to tell you what's there and then loading the images, it would load from the cache things that are likely to be there (as sims don't change all that often), and when the sim does finally give you the data, you'd already have a part of the images decoded and ready to show. If you visited a place in the past, I think there should be a very good chance of mostly correctly guessing the textures needed to render it.
- SAP problem wouldn't happen because the mechanism would work based on information you previously got, so you'd have to have been to the place before for it to work. Dale Glass 19:56, 24 March 2007 (PDT)
- Ahhh, it's a passive solution. What you want could be achieved without a special DB; the client already collects this information in the sim-object cache. For TPing, it could be done quite easily. The viewer just needs to load the data from the object cache and render it. This would cause the appropriate functions to be called to decode and request missing assets. It doesn't need to render it to the screen, just to an extra buffer.
Doing a fake render is probably the easiest, most efficient and most accurate way of doing this. A major issue with building a separate DB is you have to build the dynamic octrees for it. By using the rendering pipeline with the object cache you don't have to write new code to do this. Strife Onizuka 22:51, 24 March 2007 (PDT)
- Why go through all the trouble of actually rendering something (you can see the impact of extra rendering by enabling reflections) when textures are the only thing that is wanted? Plus, like I said, I don't want to request anything missing. I want to do a sort of predictive loading of textures from the cache. I took a look at the object cache, but assuming I found the right thing, it doesn't seem to store texture usage.
- Here's basically the behavior I want: I visit say, Lusk pretty often. The usage of textures in a sim can be easily mapped. The viewer knows I'm teleporting to Lusk (200,120,52). It can look in the sim's DB, and start loading the tree, ground, platform, etc textures (again, from the on-disk cache) before it even manages to connect to the sim itself. Then by the time the area loads, most textures may be already loaded, so I will avoid seeing it all grey. There's no need to render anything (what for?), nor there is any need to request textures from the grid (as they might not be used anymore).
- Also, I don't get where you are getting the octree thing from, as my idea doesn't involve any rendering at all, or even storing any geometry data. Dale Glass 10:06, 25 March 2007 (PDT)
- pfft, I've been digging through the object cache code. I could have sworn they were keeping face information in there as well. Been a long time since I've played with the object cache (the last time I didn't have source to read and the format has changed a few times since then). *rereads your suggestions* On the one hand using a dedicated DB seems a bit overkill, but on the other I'm having problems finding a simple solution that would do the same thing. I don't see a point in me standing in the way of this anymore.
- When would this information be put into the database? Strife Onizuka 17:00, 25 March 2007 (PDT)
- The whole thing should be small enough to keep in memory, so it could be kept there. Then write the DB to disk when the avatar moves to another sim. Dale Glass 17:33, 25 March 2007 (PDT)
- That makes sense. How and what information do you gather to put into the database? A simple list of UUIDs doesn't indicate which textures are more important than others. Say there are 100 textures in a 16 m cube: which one do you decode first? If you sort the list by usage count, when do you collect this information and how often do you update it? Strife Onizuka 20:27, 25 March 2007 (PDT)
== ARC vs. 2Q ==
Gigs added to the main page that ARC is "probably better than 2Q in every way". However, the PostgreSQL folks replaced ARC with 2Q over patent concerns, and then, as a bonus, found that 2Q performed as well, and maybe better. See the comment from Neil Conway for more on the subject. -- Rob Linden 12:48, 25 March 2007 (PDT)
- If it's patent encumbered, that's a good enough reason to not use it for me. Looks like IBM will grant a free license for open source for the patent, but it wouldn't work for LL because IBM wouldn't let the same license work for proprietary releases. (LL would have to buy a separate license). Gigs Taggart 14:42, 25 March 2007 (PDT)
== Idea: Don't ever delete anything ==
I am interested to know if anyone has attempted to just disable all the various cache-cleanup functions in the viewer and direct the cache to a separate dedicated drive partition solely for caching purposes.
Hard drive prices have recently reached an all-time low in terms of price per gigabyte. Right now it is possible to buy a 500 gigabyte 7200 RPM 3.0 gigabit SATA-II drive for a mere US$115.
Prices are sufficiently low for drive space that a heavy Second Life user might be willing to buy an extra hard drive or create a 50+ gig drive partition that is dedicated solely to caching their personal SL universe.
If the viewer is permitted to just keep on writing data continuously to an open-ended cache volume that does not fragment (since it's not used for anything else), how is performance of the viewer affected as the overall cache size increases?
While the SL database farm has been said to contain about 25 terabytes of data as of January 2007, the average person will never see every single SL simulator in its entirety. After an initial phase of exploration, the caching of data is likely to reach a point where it levels off and does not continue to grow all that significantly. A single person is not likely to ever exceed the capacity limits of a dedicated 500 gig local hard drive cache.
My question is, at what point does the leveling-off of the caching occur, and is the viewer as written capable of dealing with a generally open-ended caching mechanism?
Scalar Tardis 10:41, 13 July 2007 (PDT)
== Accommodation of an unlimited-size cache ==
The major problem I can see with an open-ended cache is that the searching for assets may start to take an excessively long time when using just a small number of cache directories. The Windows 98 Explorer started to get really slow when a directory contained more than 1024 files, and Windows XP probably has a similar search limit.
Directory search overhead needs to be limited, while still allowing for open-ended caching of everything.
This can probably be remedied primarily in the file system by using a deeply nested directory structure, each level composed of three consecutive characters of the key, so that it is always quick to walk down the tree to find the required file:
Cache structure:
- 4096 directories of keychar 0-2
- 4096 directories of keychar 3-5
- 4096 directories of keychar 6-8
- 4096 directories of keychar 9-11
- 4096 directories of keychar 12-14
- 4096 directories of keychar 15-17
- 4096 directories of keychar 18-20
- 4096 directories of keychar 21-23
- 4096 directories of keychar 24-26
- 4096 directories of keychar 27-29
- Maximum of 256 key files in this directory
Implementation:
- Key: 0123456789ABCDEF0123456789ABCDEF
- Location: H:\SLCache\012\345\678\9AB\CDE\F01\234\567\89A\BCD\01234567-89AB-CDEF-0123-456789ABCDEF.key
- Directories only created as needed when new assets are downloaded
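A sketch of mapping a UUID to this layout (the cache root and ".key" suffix are taken from the example above; the function name is made up):
<pre>
#include <string>

// Ten 3-character directory levels (key characters 0-29), then the full UUID
// as the file name; the remaining two characters give at most 256 files per leaf.
std::string cachePathForKey(const std::string& root, const std::string& uuid)
{
    std::string hex;                       // strip the dashes: 32 hex characters
    for (size_t i = 0; i < uuid.size(); ++i)
        if (uuid[i] != '-') hex += uuid[i];

    std::string path = root;
    for (int level = 0; level < 10; ++level)       // chars 0-2, 3-5, ..., 27-29
        path += "\\" + hex.substr(level * 3, 3);
    return path + "\\" + uuid + ".key";
}

// cachePathForKey("H:\\SLCache", "01234567-89AB-CDEF-0123-456789ABCDEF")
//   -> H:\SLCache\012\345\678\9AB\CDE\F01\234\567\89A\BCD\01234567-89AB-CDEF-0123-456789ABCDEF.key
</pre>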
With this caching mechanism, as caching storage increases into the billions of files, it is always only necessary to go down ten directory levels to find the necessary file. Initially the directory structure imposes a higher cost for a small directory tree, but the search costs will probably start to drop off as the overall cache size reaches into the millions of files.
While this directory structure might seem ridiculously huge, it is important to note that LL only uses a tiny fraction of the total keyspace, and will likely never use the entire keyspace, so the total number of actual directories will remain small.
If the cache drive is dedicated to write caching only from SL, then fragmentation slowdown should not be an issue.
Scalar Tardis 10:41, 13 July 2007 (PDT)
== This is where contemporary file systems break! ==
@Scalar Tardis, I'm pretty sure that, 17 years later (!), your question was probably answered somewhere 🤣
But since this Wiki, like the "universal cache" you mentioned, is also here 'forever' and being searched by Google for answers such as yours, let me add a few thoughts on this...
The beauty of the whole asset cache server being read-only — i.e., you can never modify anything, the best you can do is to delete something and add a new item — is that we have wonderful algorithms (and tools to take advantage of such algorithms) to deal exactly with the use-case you present: how to store billions of files (well, billions in 2007; it ought to be trillions in 2024, when I'm writing this) in an efficient way to get fast retrieval even using home-based solutions and 'regular' hardware (i.e., not specifically designed for large data centres)?
There is not one answer, but there are several (fortunately!). There are, however, incorrect answers: a 'normal' filesystem for 'normal' operating systems — essentially, what you've got on your desktop/laptop computer — was designed and fine-tuned for situations where the files are always changing, and optimised for such a situation. Thus, the whole way the filesystem is organised relies on that assumption — which precludes certain forms of using data efficiently. And because 'most' use cases of 'normal' applications from most mainstream users do fit this assumption (over the course of the day, we — and even the filesystem — are constantly updating, appending, changing, deleting files, all the time, non-stop, even if we don't realise it), while a read-only asset cache does not, a 'regular' filesystem is not a good solution here.
Indeed, I'm sure that any filesystem available on a regular version of Windows (and macOS) will break with 'too many entries'. It will reach a point where retrieving a single file from disk may take longer than actually downloading it over the 'net (directly to memory)! Even an ever-increasing splitting and splitting of directories, creating a very deep multi-level hierarchy (at the limit, it could be 32 levels deep — that's because a UUID is written with 32 significant characters!), will not work in a trivial, straightforward sense — the only advantage is that each level will have at most 16 entries, and each of the last levels (the 'leaves' on the tree) will only have 16 files each (maximum), so this structure would work. In theory. Because it's not only each level that is limited in number; there is a global number of nodes in the pool for a filesystem, and once these are exhausted, the disk reports 'no space left on device' (or some similar error message), even if it still has lots of free space. It just doesn't have enough free nodes.
There are naturally fine-tuning options. You can reformat the disk with more nodes, for instance. Or make each block larger, so that you use fewer blocks per file (blocks are usually 4 KBytes on most Unix or Unix-inspired filesystems), at the cost of having some empty space. Someone doing 'radical fine-tuning' would list all file sizes for all the downloaded assets and do some math to determine how big each block should be, and how many nodes will be required to store everything, while still wasting as little disk space as possible (so that more assets can be crammed in). And I believe there are some newer-generation filesystems where you can do that without reformatting the disk; and I've even heard of filesystems where the number of nodes and the block size are dynamically adjusted by the operating system. The latter, however, is a bold claim which requires proof, and I have none to offer. It is conceivable, sure, but is it realistic? (Think of the time spent eternally calculating, over and over again, how the file sizes change over time, and what the 'best' allocation system is at every moment.)
No, instead, what we have to deal with this specific scenario is something much more basic: we call it a key/value pair store, sometimes also known as a NoSQL database. KVP stores — which in the 2020s are even implemented, in two different ways, on LSL itself! — are the way of efficiently managing a huge number of read-only files (huge meaning trillions or quadrillions) and retrieving them 'instantly'. There are lots of algorithms for doing that very efficiently. There might be some time until the KVP store processes those trillions of files — it does, after all, have to build a data structure (usually some form of advanced tree) to effectively address these files — but after that happens, the data is retrieved instantly, with next-to-zero delays, and that's why they're so popular (every browser, for instance, presents a KVP store for each JavaScript instance running on a web page — like with LSL, KVP stores are extremely useful to store data that doesn't change and retrieve it at a blindingly fast speed).
A good KVP store design will not use 'trillions of files on disk' to store a 'trillion assets'. At the limit, it might just need 'one' file on disk. What happens is that the start and end positions of each asset are also recorded with the 'key'. On the simplest model, the file just grows, and old deleted data will remain where it originally was. Every now and then (which can be user defined, automated, periodical, manually launched...) the KVP store might 'prune' the tree, so to speak: take a look at what entries have been deleted, and which are now pointing to 'nowhere', and recover the lost space. Other implementations, to prevent accessing sensitive data which might have been deleted from the tree but not yet from the actual storage, might fill the space with zeros first, and let the 'garbage collector' run at a later stage. Details vary from solution to solution (and that's why there are different algorithms!) and this is an area of active scientific research; although there are a few 'reference implementations' which are conceptually sound and have passed the test of time (and are therefore used for benchmarking purposes).
That said, the 'best' implementation for your suggested 'infinite data storage' would be a KVP that worked as an HTTP proxy server. A popular solution would be to integrate, say, a Redis server (a popular KVP store which is managed entirely in memory and is therefore ultrafast), which can easily handle a few hundreds of millions of entries (4 billion being its theoretical limit); and put a bare-bones, ultra-simple, minimalistic HTTP proxy 'layer', which can be configured to cache using a Redis KVP store (as opposed to using anything on the filesystem) as the backend. That should work fine, with limited fuss (a nice way to spend an afternoon writing it in Go and cross-compiling it to be installed on my NAS :) ).
— Gwyneth Llewelyn (talk) 11:31, 27 September 2024 (PDT)
== You don't need that many directories... ==
One trick would be to make megafiles (which contain, say, up to 1024 textures each, all of which have a texture key that starts with "a6" and are 64x64). Then when you need "a822ff2b-ff02-461d-b45d-dcd10a2de0c2" you look up the megafile, which says each texture in it is 64x64 (saving the trouble of having that info written in 1024 places) and is indexed at the beginning of the file; you're really looking into an index at the beginning of the megafile rather than doing redundant OS-type things for 1024 directories. Then you find the relevant record in the megafile and put only that into live memory...
Also, since textures tend to come from the same sim, which has been visited before, a cache file with that sim's ID (or creator name) as the first directory would help speed up the search, at the cost of the occasional duplicate or pointer to another megafile.
In any case, some users would like to cache all elements of certain games or sims forever, so that they don't get PWNED by texture load lag or see unloaded textures while acting in a play, messing up the acting...
Surely some similar BSD-licensed code exists somewhere; we just have to find it. One question remains: how many textures can we cram into a max-size file, anyhow?
User:Contagious Republic 14:44, 27 feb 2008
With a proper hashing algorithm, the directory path traversal can be avoided. Say you need texture "a822ff2b-ff02-461d-b45d-dcd10a2de0c2": chunk it into a hash, say "af 22 b0 07 5f 7a 6d 33" (whatever hash length we need), and use it as a raw hard drive address. If that location contains "I am texture a822ff2b-... here is my data" then bingo; if there is no texture there, add it; if it is taken by another texture with the same hash, try the very next hash position, "af 22 b0 07 5f 7a 6d 34".
Also use low-CPU idle time to remove textures that are older than an arbitrary number of days...
This kind of code exists somewhere and has been working without too many bugs; it's just a matter of not reinventing the wheel completely.
As another alternative to 20+ directory path traversals, keep the entire directory structure in memory if feasible. Or at least in one file, so we can just point to the proper file according to the same hashing system... in OSes that don't auto-defragment all the time this should work...
User:Contagious Republic 1:12 AM, 8 March 2008
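A loose sketch of the linear-probing idea described above, treating the cache as an array of fixed-size slots; the slot layout, the table size, and the use of FNV-1a as the hash are all illustrative choices (a real implementation would memory-map or pre-allocate the slot array as the on-disk cache file):
<pre>
#include <cstdint>
#include <cstring>
#include <string>

const uint32_t kNumSlots = 1 << 20;          // size of the pre-allocated cache

struct Slot {                                // one fixed-size record
    char     uuid[37];                       // the "I am texture a822ff2b-..." check
    uint32_t dataSize;
    // ... the texture payload (or an offset to it) would follow
};

uint32_t slotForKey(const std::string& uuid) // FNV-1a; any decent string hash will do
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < uuid.size(); ++i) { h ^= (uint8_t)uuid[i]; h *= 16777619u; }
    return h % kNumSlots;
}

// Start at the hashed slot and walk forward until the key matches (hit) or an
// empty slot is found (the texture is not cached).
int64_t findSlot(const Slot* slots, const std::string& uuid)
{
    for (uint32_t i = 0; i < kNumSlots; ++i) {
        uint32_t s = (slotForKey(uuid) + i) % kNumSlots;
        if (slots[s].uuid[0] == '\0') return -1;                      // empty: miss
        if (std::strcmp(slots[s].uuid, uuid.c_str()) == 0) return s;  // hit
    }
    return -1;                                                        // table full
}
</pre>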
== Level 2 data cache ==
Doing 11 file system directory traversals to find every single key may potentially be slow, so it might be worth looking at how processor memory caches operate to improve key retrieval speed.
- Use a small 50 meg secondary cache with the unlimited-size cache (could be a VFS)
- When requesting a key, first check the VFS. If found, get it and proceed.
- If not found in the VFS, do the more time-consuming unlimited cache check
- If not found in the unlimited cache, download direct, and write first to VFS, and then to the unlimited cache.
Over time as the unlimited cache builds up, the network object/texture downloads will steadily decrease until they effectively stop occurring, and all retrieval is from the unlimited cache.
Meanwhile using a VFS intermediary permits very fast lookups of frequently used data without the time cost of always doing a deep search for every key.
Scalar Tardis 12:36, 14 July 2007 (PDT)
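A sketch of the two-tier lookup flow listed above, with simple in-memory maps standing in for the small fast cache, the unlimited on-disk cache, and the network fetch; all names are placeholders:
<pre>
#include <map>
#include <string>
#include <vector>

typedef std::vector<unsigned char> Blob;

// Stand-ins for the two tiers and the network; in the viewer these would be
// the small VFS, the deep on-disk tree, and the asset request path.
std::map<std::string, Blob> gSmallCache;      // the ~50 MB fast tier
std::map<std::string, Blob> gUnlimitedCache;  // the open-ended on-disk tier
Blob downloadTexture(const std::string&) { return Blob(1, 0); }  // dummy fetch

Blob fetchTexture(const std::string& uuid)
{
    std::map<std::string, Blob>::iterator it = gSmallCache.find(uuid);
    if (it != gSmallCache.end())                 // 1. fast check of the small tier
        return it->second;

    it = gUnlimitedCache.find(uuid);             // 2. slower unlimited-cache check
    if (it != gUnlimitedCache.end()) {
        gSmallCache[uuid] = it->second;          //    promote for next time
        return it->second;
    }

    Blob data = downloadTexture(uuid);           // 3. last resort: the network
    gSmallCache[uuid] = data;                    // 4. write first to the fast tier,
    gUnlimitedCache[uuid] = data;                //    then to the unlimited cache
    return data;
}
</pre>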
== Generalized Problem ==
I am not an expert by any means, but I will chime in anyway. All the discussion has been about caching on each local host. What about caching per local network, or LAN?
If two or more users are in the same LAN, they should share cache somehow. Consider several users in the same classroom, for example, not to mention the builder or business person with two or three work stations all logged in.
Suppose the client included a web proxy such as SQUID to manage the textures (and anything else that will be cached). If two or more clients are in the same LAN, there could be some sort of negotiation strategy by which one of the proxies takes ownership. All the other local users get their data from that cache.
Just thinking out loud.
Lee Ponzu 19:28, 6 July 2009 (UTC)
- The deal with IBM works something like this: LL provided them with an asset server. Any assets it didn't have, it would request from LL's asset servers and cache. Your idea is just one step behind. The problem with having a server built into the client is that if your client were sufficiently underpowered, or the network were large enough, it would get hosed (DDoS). You really don't want your client to be the server. There has been talk in the community about using various file-sharing-inspired asset delivery mechanisms (BitTorrent etc.). With HTTP Textures it should be possible to use any HTTP proxy (like SQUID), though you might need to code proxy support into the client. -- Strife (talk|contribs) 19:32, 7 July 2009 (UTC)
- More than a decade later, this is not only operational, but definitely recommended, especially because these days almost all content — except for positioning information and current animation being played by the other avatars — is being delivered via HTTP. Proxy web caching software such as SQUID works wonderfully and is very highly recommended! (I run my texture caching server on the home NAS 😅). All viewers I'm familiar with already have an option to use a proxy server for HTTP content (for the built-in web browser and for all HTTP content served from their asset servers). — Gwyneth Llewelyn (talk) 09:25, 27 September 2024 (PDT)