Talk:LlGetNotecardLineSync

Latest revision as of 14:19, 4 July 2024

Some explanations are in order, I think...

I understand that there is (now?) some sort of caching mechanism being employed locally at the region's simulator, which ought to speed up reading notecards (and re-reading them!) quite considerably.

As far as I can understand, the purpose of this function is merely to read the whole notecard in a tight loop inside a single dataserver event (sketched after this comment), as opposed to continually polling the database server one line at a time?...

If so, that should be made a bit more explicit, because, from the perspective of someone not intrinsically familiar with how the SL Grid operates, the difference between the two approaches is not immediately obvious (I had to re-read this article several times until it finally clicked).

It is a further step towards the full implementation of the OpenSimulator equivalent, which reads the entire notecard in a single call, bypassing the dataserver event entirely. The option to have one dataserver event (as opposed to none, as in OpenSimulator) is probably just to allow the actual notecard reading to run in parallel (on a different thread?) without blocking, thus allowing the object to keep responding interactively while it is simultaneously loading a huge notecard. That approach might make more sense if it cannot be predicted whether a notecard is already in the simulator's cache (and, if not, it must first be loaded from the central asset servers).

But I'm just speculating. I did not read anything about the origin and/or proposal of this new function (because I'm lazy and didn't bother to search for it); my point here is just that I believe some additional explanation of this function is needed: how it relates to the older llGetNotecardLine, plus some sort of comparative analysis of the difference between the two mechanisms, both in raw speed and in impact on the simulator...

Gwyneth Llewelyn (talk) 02:27, 17 January 2024 (PST)
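
For illustration, a minimal, untested sketch of that "tight loop inside a single dataserver event": the notecard name "config" is a placeholder, and it assumes, as noted further down in this thread, that a single ordinary llGetNotecardLine request is enough to get the whole notecard cached.

<source lang="lsl">
// Sketch only: one asynchronous read triggers caching of the notecard, then
// the remaining lines are read in a tight loop inside a single dataserver
// event. "config" is a placeholder notecard name.
string gNotecard = "config";
key gQuery;

default
{
    state_entry()
    {
        gQuery = llGetNotecardLine(gNotecard, 0);   // classic async request for line 0
    }

    dataserver(key query, string data)
    {
        if (query != gQuery) return;                // not our request
        integer line = 0;
        while (data != EOF && data != NAK)
        {
            llOwnerSay((string)line + ": " + data); // process the line
            line += 1;
            data = llGetNotecardLineSync(gNotecard, line);
        }
    }
}
</source>

A robust script would also handle a NAK inside that loop; the fallback variants discussed in the replies below cover that case.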

This is "dataserver-less" reading of notecards. The dataserver is still part of it, but you don't need to asynchronously loop through it anymore. Like with the old function, you can random-access a single line. Like with the old function, you can loop through the lines until you reach EOF. IF the notecard is known to the simulator, you get the single line right away. If the dataserver does NOT know the notecard, you get "NAK".
So best practive would be to request the LAST line of a notecard via the old dataserver method, thus ensuring the notecard is cached in its entirety, and then enjoy datserver-less reading of any line you want to have.
Best best pratice would be to check for NAK and have a fallback for every query you make: First the fast methid, and if it returns NAK, the dataserver method. The beauty with this new function is that we don't need the async approach anymore, and can random access a (cached) notecard.
Reality these days is that modern scripts parse configuration data only once and store it to LSD - but this would go way faster even.
Peter Stindberg (talk) 08:52, 17 January 2024 (PST)
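
A minimal, untested sketch of that NAK-fallback pattern (the notecard name "config" and the per-line llOwnerSay are placeholders for whatever a real script would do with each line):

<source lang="lsl">
// Sketch only: read synchronously until EOF or NAK; on NAK, fall back to one
// asynchronous llGetNotecardLine call and resume once the dataserver replies.
string gNotecard = "config";
integer gLine;
key gQuery;

readFrom(integer start)
{
    gLine = start;
    string data = llGetNotecardLineSync(gNotecard, gLine);
    while (data != EOF && data != NAK)
    {
        llOwnerSay((string)gLine + ": " + data);        // process the line
        gLine += 1;
        data = llGetNotecardLineSync(gNotecard, gLine);
    }
    if (data == NAK)
        gQuery = llGetNotecardLine(gNotecard, gLine);   // slow path: not cached yet
}

default
{
    state_entry()
    {
        readFrom(0);
    }

    dataserver(key query, string data)
    {
        if (query != gQuery) return;                    // not our request
        if (data == EOF) return;                        // finished on the slow path
        llOwnerSay((string)gLine + ": " + data);        // process the slow-path line
        readFrom(gLine + 1);                            // notecard should be cached now
    }
}
</source>

In practice the dataserver event should fire at most a few times (typically once, for the first uncached line); after that the notecard tends to be cached and the synchronous loop runs straight to EOF.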
Reading any line of a notecard will cache the entire notecard into the buffer. In my testing, a single llGetNotecardLine call to the first line will consistently load the whole notecard, and any remaining lines can be read using llGetNotecardLineSync - although your suggested best practice is how I end up doing it (call llGetNotecardLineSync, and if it returns NAK, call llGetNotecardLine; repeat until EOF).
edit: Also, considering this function can blow through your whole 64k of memory in a fraction of a second, I added a note about garbage collection and how to avoid a stack-heap collision (one memory-conscious approach is sketched after this comment); I don't expect an llGetTheWholeNotecardAtOnce function for obvious reasons, either.
Nelson Jenkins (talk) 17:56, 3 July 2024 (PDT)
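
On the memory point, a hedged sketch of one way to keep the heap from filling up while bulk-reading: store each line outside script memory as soon as it is read (here in linkset data, as mentioned earlier in the thread) instead of accumulating the whole notecard in a list. The notecard name "settings", the "nc_" key prefix, and the 4096-byte guard are arbitrary placeholders, and llGetFreeMemory is only a rough indicator of heap pressure.

<source lang="lsl">
// Sketch only: cache a notecard into linkset data one line at a time, so the
// script never holds more than one line in its own memory.
string gNotecard = "settings";

cacheNotecard()
{
    integer line = 0;
    string data = llGetNotecardLineSync(gNotecard, line);
    while (data != EOF && data != NAK)
    {
        // Linkset data lives on the object, not on the script's heap.
        llLinksetDataWrite("nc_" + (string)line, data);
        if (llGetFreeMemory() < 4096)
        {
            llOwnerSay("Low on memory, stopping at line " + (string)line);
            return;
        }
        line += 1;
        data = llGetNotecardLineSync(gNotecard, line);
    }
    if (data == NAK)
        llOwnerSay("Notecard not cached yet; read a line with llGetNotecardLine first.");
    else
        llLinksetDataWrite("nc_count", (string)line);
}

default
{
    state_entry()
    {
        cacheNotecard();
    }
}
</source>

Linkset data is itself capped (currently 128 KB per linkset), so this shifts the limit rather than removing it.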

Nelson, in my apps that use llGetNotecardLineSync(), I have yet to experience memory leaks anywhere near the level that you have alluded to.

Can you please file a bug report with a bare-bones repro script at https://feedback.secondlife.com/scripting-bugs? Thanks. Lucia Nightfire

I don't use the Talk portion of this wiki, so I don't know how to reply to comments. Bear with me.