Talk:Hex
From Second Life Wiki
= Solution =
I just realized how to solve all our problems: rename the page to Python Hex. You, being the Python expert, could write the article any way you want; I have no love for Python after all. Since it wouldn't be occupying the Hex page, we could put in an article about hexadecimal and provide a link to your page. Your page could be part of a Python to LSL migration tutorial. I'll wait for your agreement before carrying this out. -- [[User:Strife Onizuka|Strife Onizuka]] 18:33, 24 October 2007 (PDT)
::Sorry, not a consensus solution. If you just want the page without regard to what a newbie expects, just take it. Throw out my by-newbie for-newbie stuff and say what you want to say. I'll drop out and go work somewhere else. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 18:58, 24 October 2007 (PDT)
:::You going elsewhere is not a good solution. -- [[User:Strife Onizuka|Strife Onizuka]] 20:40, 24 October 2007 (PDT)
::I imagine my earlier suggestion got lost in the noise, here it is again.
::To settle the hex article, I now guess it would make sense to divide all our content in two. In the article tab, we could present only the basic specification and demo together with brief & clear & conventional exemplars. Here in the discussion tab, we could present the clever & small & fast exemplars and the raw data that backs our small & fast claims. -- Ppaatt Lynagh 16:25, 24 October 2007 (PDT)
:::It's workable but I don't think it should be on the discussion page, a subpage would work better (the idea has been nagging at the back of my head but then the red mist descends and it runs away). -- [[User:Strife Onizuka|Strife Onizuka]] 20:40, 24 October 2007 (PDT)
::I guess I should hold back from trying that structure until Strife comments. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 19:21, 24 October 2007 (PDT)
::I guess I should mention, I appreciate your every technical contribution, including the technical portions of your last burst of replies to the [[Talk:Hex#Third Talk]]. I don't know how I can helpfully answer the non-technical material such as rants. I do appreciate you again trying to connect with me on non-technical issues, but I don't know how to begin to give us more connection there. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 19:31, 24 October 2007 (PDT)
:::I do appreciate your contributions; they may drive me a bit bonkers but that's not your problem. Compromise at times seems to be about making everyone unhappy. If I'm a bit tense you can tell me to chill or take a few hours to think things over. I can be tough to get along with and you do extraordinarily well at it. We may not agree but you do a better job of staying calm. -- [[User:Strife Onizuka|Strife Onizuka]] 20:40, 24 October 2007 (PDT)
::::Hey lookathat. I think you have found a way to connect us closer on the non-technical issues. I appreciate your kind words, I'm encouraged by your hope that neither of us has to go elsewhere, and I look forward to the red mist lifting away to leave you able to verbalise how we should act next to take care of both newbies & experts together.
::::If the red mist in fact doesn't lift by next week, then to get us unstuck I might try dividing the content into a page for newbies and a page for experts, maybe with a horizontal separator shown in between. Once we have the content more sharply divided, maybe how to name the pages will become clear. Article names like [[hex]] and [[efficient hex]] would work for me. Article names like [[simple hex]] and [[hex]] would not work so well for me. I like the simpler name being dedicated to the less expert visitor.
::::-- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 05:10, 25 October 2007 (PDT)
= Efficiency Tester vs. Code Racer =
The [[Efficiency Tester]] & [[Code Racer]] harnesses produced mutually consistent results for me back when they were both based on [[llGetTimestamp]].
Now with the change to [[llGetTime]], somehow I don't immediately see consistent results ... That could just be the natural variability in measuring time on a time-shared system ... or the change may have introduced a significant bug of some kind ... -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 18:12, 20 October 2007 (PDT)
I am now running all five versions of code vs. an empty block in the new N-way Code Racer; I am again choosing the test case of 0x07fffFFFF. The claim there, self-described as inaccurate but conveniently early, is now between 3 and 10 ms per run. Looking out across llGetTime vs. llGetTimestamp and across versions of Efficiency Tester and Code Racer, I'm not sure I should believe this claim. In other places I saw claims more like 30 ms per run. Likely I'll return when I have more results to share. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 05:39, 25 October 2007 (PDT)
= llGetFreeMemory and llGetTime results =
So here's the raw data for a new round of summary results, measured with [[llGetTime]] and [[llGetFreeMemory]] rather than with [[llGetTimestamp]] and [[llGetFreeMemory]] -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 11:49, 20 October 2007 (PDT)
<pre>
// Concise & Conventional
14892 free bytes of code at default.state_entry
29.525997+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
29.309505+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
29.048304+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
// Clever & Small
15042 free bytes of code at default.state_entry
26.522137+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
27.012869+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
26.886431+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
// Clever & Fast
15037 free bytes of code at default.state_entry
26.416100+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
27.720108+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
26.514929+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
// Different & Concise & Small
15100 free bytes of code at default.state_entry
25.888016+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
25.656263+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
26.585047+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
// Different & Clever & Fast
15078 free bytes of code at default.state_entry
28.594223+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
26.809479+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
26.388189+-??% ms may have elapsed on average in each of
10000 trials of running the code in the loop
</pre>
And to help make the derivation of summary milliseconds of elapsed run time clear, here below is all the arithmetic calculating { middling = ((fastest + slowest) / 2) } with a +- imprecision of { max(middling - fastest, slowest - middling) }. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 11:58, 20 October 2007 (PDT)
<pre>
dc -e '29.5 d 29.0 + 2 / p - p'
dc -e '26.5 d 27.0 + 2 / p - p'
dc -e '26.4 d 27.7 + 2 / p - p'
dc -e '25.9 d 26.6 + 2 / p - p'
dc -e '28.6 d 26.4 + 2 / p - p'
$ dc -e '29.5 d 29.0 + 2 / p - p'
29
.5
$ dc -e '26.5 d 27.0 + 2 / p - p'
26
.5
$ dc -e '26.4 d 27.7 + 2 / p - p'
27
-.6
$ dc -e '25.9 d 26.6 + 2 / p - p'
26
-.1
$ dc -e '28.6 d 26.4 + 2 / p - p'
27
1.6
$
29+-1
26+-1
27+-1
26+-1
27+-2
</pre>
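To show the same derivation without dc, here is a minimal LSL sketch (not part of any instrument above) that computes the middling run time and its +- imprecision; the three sample averages are copied from the Concise & Conventional runs above, and any three results would do.
<pre>
// A minimal sketch of the { middling, +- imprecision } arithmetic described above.
// The three sample averages are copied from the Concise & Conventional runs.
// Because middling is the midpoint of (fastest, slowest), the two distances
// that max() would compare are equal here.
default
{
    state_entry()
    {
        list samples = [29.525997, 29.309505, 29.048304]; // ms per run
        float fastest = llListStatistics(LIST_STAT_MIN, samples);
        float slowest = llListStatistics(LIST_STAT_MAX, samples);
        float middling = (fastest + slowest) / 2;
        float imprecision = slowest - middling;
        llOwnerSay((string)middling + " +- " + (string)imprecision + " ms per run");
    }
}
</pre>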
= Different & Clever & Fast =
The article claims:
:Note: Reworking this concise & small code into the clever & fast style does make this different code larger and faster, producing run times as fast as 91+-2 milliseconds.
Perhaps by NPOV we should also quote that byte size, to show how much larger that code is. And then also blog here an actual example result of that exercise, corresponding to that quote of run time.
I think this is the code we mean; its byte size may be 161, and its run time may be 91+-2 milliseconds. I hope to return soon to reconfirm.
-- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] On or before 20 October 2007 (PDT)
<pre>
// http://wiki.secondlife.com/wiki/Talk:hex
string bits2nybbles(integer bits)
{
    integer lsn; // least significant nybble
    string nybbles = "";
    do
    {
        nybbles = llGetSubString("0123456789ABCDEF", lsn = (bits & 0xF), lsn) + nybbles;
    } while (bits = (0xfffFFFF & (bits >> 4)));
    return nybbles;
}
</pre>
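For anyone re-running the timing, here is a minimal usage sketch for the function above; paste it below the bits2nybbles definition. I assume the intended test case is 0x7FFFFFFF, since 0x07fffFFFF as written would not fit in a 32 bit LSL integer.
<pre>
// A minimal usage sketch for the bits2nybbles function above.
// 0x7FFFFFFF is assumed to be the intended test case (the talk writes it as
// 0x07fffFFFF, which would not fit in a 32 bit LSL integer).
default
{
    state_entry()
    {
        llOwnerSay(bits2nybbles(0x7FFFFFFF)); // expect "7FFFFFFF"
    }
}
</pre>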
Running the [[Efficiency Tester]] for just 1000 trials suggests this different & clever & fast code astonishingly runs slower than the different & concise & small code we already included in the article: like 26 29 30 ms rather than 26 26 25 ms. I'll try running 10,000 times next, I guess. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] On or before 20 October 2007 (PDT)
Here below now is a preview: a run of just 1000, using [[llGetTime]] rather than [[llGetTimestamp]], one or two runs for each implementation. Looks like the difference between different & clever & fast and different & concise & small is too small to measure reliably with a run of 1000. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 09:54, 20 October 2007 (PDT)
<pre>
// Concise & Conventional
14900 free bytes of code at default.state_entry
0.030812+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.031006+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.030123+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
// Clever & Small
15050 free bytes of code at default.state_entry
0.028898+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.028747+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.027901+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
// Clever & Fast
15045 free bytes of code at default.state_entry
0.026896+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.026773+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.027210+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
// Different & Concise & Small
15108 free bytes of code at default.state_entry
0.025729+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.026136+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.024839+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
15108 free bytes of code at default.state_entry
0.026976+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.027034+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.026788+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
// Different & Clever & Fast
15086 free bytes of code at default.state_entry
0.026279+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.029061+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.029775+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
15086 free bytes of code at default.state_entry
0.026498+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.025734+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
0.025299+-25% s may have elapsed on average in each of
1000 trials of running the code in the loop
</pre>
= Third Talk =
'''... I hate [the 18 October] revision ...'''
:Good.
:I think you would vote to undo the 18 October revision, and I know I would vote to undo the 15/14 October revision.
:I believe in this way we have established a creative tension: a line along which we can hope to find a neutral point of view (NPOV) and a consensus statement of that NPOV.
:Thank you. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
'''... shouldn't use llGetTimestamp for profiling ...'''<br/>
'''... should be using the script time functions (llGetTime) ...'''
:I see this hex article contains reproducible measurements across a spread from 91+-2 to 147+-30 milliseconds, taken from 1,000 runs of the [[Efficiency Tester]] instrument that harnesses [[llGetTimestamp]].
:I see now that the [[llGetTime]], [[llGetAndResetTime]], [[llResetTime]], [[timer]], [[llGetRegionTimeDilation]] articles were all woefully confused. By your one-word "should" hint here, I think I now understand what we mean by seconds and fractional seconds of dilated time, so yes I have now updated all five of those confused articles accordingly, thank you.
:I understand you to be saying we should change the [[Efficiency Tester]] instrument to call llGetTime rather than [[llGetTimestamp]]. Is that right? Then also we should update the [[Code Racer]] instrument, by the same reasoning. Then I see after all that we must copy the [[Efficiency Tester]] changes into the exact duplicate of that instrument that appears expanded inline deep inside the [[LSL Script Efficiency]] article. Agreed? This is exactly your intended recommendation? -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
:::No objection yet, so for now this guess stands. We've already now finished most of the work of the update, as [[Efficiency Tester]] shows. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
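For the record, here is a minimal sketch of the kind of [[llGetTime]] harness we mean -- not the actual [[Efficiency Tester]], just an illustration of bracketing a loop of trials with [[llResetTime]] and [[llGetTime]]:
<pre>
// A minimal sketch (not the actual Efficiency Tester) of timing repeated trials
// with llResetTime / llGetTime, as discussed above.
default
{
    state_entry()
    {
        integer trials = 1000;
        integer i;
        llResetTime();
        for (i = 0; i < trials; ++i)
        {
            // the code under test goes here, e.g. a call to one of the hex functions
            string scratch = (string)i;
        }
        float elapsed = llGetTime(); // dilated seconds since llResetTime
        llOwnerSay((string)(elapsed * 1000.0 / trials)
            + " ms may have elapsed on average in each of "
            + (string)trials + " trials");
    }
}
</pre>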
'''... cannot replicate ... bytecode cost ...'''
:Thank you for making time to say so. I've added the [[Code Sizer]] article to explain the procedure more fully, and naturally I linked the existing [[Hex#Measuring_Concise_.26_Conventional_.26_Small_.26_Fast]] explanation to that article. I hope you or any of us can make time to try again now to confirm or correct the byte counts. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
:::No objection yet, so for now this guess stands. We've already now finished most of the work of the update, as [[Talk:Code Sizer]] shows. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
'''... cannot replicate ... bytecode cost ... steps ...'''
:I've copied your steps into [[Talk:Code_Sizer]] as [[Talk:Code_Sizer#The_Plus_Four_Instrument]]. In my (extremely limited) experience, those steps produce results that count exactly four more bytes than the [[Code Sizer]] instrument. I imagine you already know why, I do not yet know why.
:I vote to have the hex article quote the results of the Code Sizer instrument, rather than the Plus Four instrument. I vote that way because I believe the size occupied when I add a function to a script that already has other functions matters more than the size occupied when I add the first function. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
::<nowiki>*snaps fingers*</nowiki> I know where that extra four bytes are coming from, the dword used for the length of the user function lookup table. When there are no user functions the compiler doesn't include a user function lookup table at all. You can find [[LSO]] documentation on the LibSL wiki. -- [[User:Strife Onizuka|Strife Onizuka]] 13:37, 24 October 2007 (PDT)
:::I guess "*snap fingers*" is English slang. I guess that slang means ah, yes, we appear to be mutually intelligible here now. Yes I agree that's good news, thank you for saying.
:::I guess "the LibSL wiki" means the wiki that includes the page http://www.libsecondlife.org/wiki/LSO that has been referenced at [[LSO]] since at least the 2007-10-19 change commented as "LSL Bytecode moved to LSO".
:::I guess that LibSecondLife "Function Block" is exactly the same as our "user function lookup table" here. I see that LibSecondLife.org page describes a "Global Function Register" that points to the "Function Block".
:::I guess we now agree we always should avoid counting the 4 byte cost of creating the User Function Lookup Table as part of the byte code space cost of an LSL function. That's progress: that's growth in our consensus NPOV, which I can now next merge into the [[Code Sizer]] instrument. Thank you. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
::::All very good guesses and correct (except for the last one). I'm not sure if we should count it; probably we shouldn't.
'''... The Loop Cost is ... the cost in bytes of each iteration of the loop ...'''
:People hear more if we can find a way to say less. To whom does this byte size within the loop matter, under which conditions? Surely quoting the total byte size of each solution suffices? If we must quote this, then we must define it. Is this byte cost purely within the loop, or does it include the code outside the loop also? With what instrument and procedure do we measure this loop cost? -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
::Why would it include the bytecode outside the loop? That wouldn't be the loop bytecode cost. It is the number of bytes executed per iteration (including the conditional check used to do the iteration). You should take some time to study the compiler & VM source code; it would provide the insight needed to have an informed discussion about this.
::Instruments & procedures? For reading the bytecode I use a hex editor and a bytecode cheat sheet. It's not like political analysis with shades of gray; the answers are quite obvious if you take the time to look. There isn't more I can say without giving you an education in LSO.
::-- [[User:Strife Onizuka|Strife Onizuka]] 13:37, 24 October 2007 (PDT)
:::#I see "the loop bytecode cost ... is the number of bytes executed per iteration (including the conditional check used to do the iteration)". Thank you for clearly defining that new term. For the test case of a loop that runs bigger but faster by beginning with a jump to the conditional check that ends the loop, I assume that the loop bytecode cost would include that jump also.
:::#I see "a bytecode cheat sheet". Is a copy of this cheat sheet available online? Is it merely an extract of the http://www.libsecondlife.org/wiki/LSO page? Have you noticed that [[Talk:Hex#LSO]] doesn't yet explain how other people after you can snoop the byte code that the LL client produces?
:::#I see no reason to believe we should grow the [[hex]] article to include this info -- no concrete specific answer to the question of to whom does this byte size within the loop matter under which conditions. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
::::#This is why I wanted you to look at the compiler; that's not how a for or while loop works (and one of the reasons why LSL sucks). A do-while loop uses JUMPIF at the end of the iteration to check the conditional and jump back to the top. If LL had been sane when they implemented the for/while loops, they would have used a jump down to a conditional at the end of the loop (can you see the train wreck yet?). Instead they did it like a traditional if statement with a jump at the end that loops it back to the condition at the top. Consequently while and for loops are verifiably slower than do-while loops.
::::#Well, my cheat sheet I wrote before the client was open sourced... I reverse engineered the bytecode by examining scripts generated by the compiler. After the client was open sourced I used it to improve the LSO article on the LibSL wiki. I had mothballed my research into the client because of how insecure it was with assets; I didn't want to publish any of my findings lest LL sic the DMCA on me (my attitude has always been "trust your neighbor, tie up your camel"). Anyway it's flawed, and I designed the description language I used to define it. You are better off using the LibSL LSO article.
::::#I wouldn't have even put how to profile the functions on the hex page. I would have buried the link on how to do it at the bottom of the page.
::::--[[User:Strife Onizuka|Strife Onizuka]] 18:27, 24 October 2007 (PDT)
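To make the loop-shape point concrete, here is a small sketch (not from the article) of the two loop forms being compared; the observable behavior is identical, and the claim above is only about the bytecode each form compiles to:
<pre>
// A small sketch contrasting the two loop shapes discussed above.
// Per the reply, the do-while form needs only a JUMPIF at the bottom of each
// iteration, while the while form pays for an extra unconditional jump back up
// to its top-of-loop condition every time around.
default
{
    state_entry()
    {
        integer i = 10;
        do              // condition checked at the bottom, one JUMPIF per iteration
        {
            --i;
        } while (i);

        i = 10;
        while (i)       // condition at the top, plus a jump back up each iteration
        {
            --i;
        }
    }
}
</pre>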
'''... the variable cost ...'''
:What is this variable cost? What are its units? How do we count it? Why do we care? -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
::The answer is in the bytecode structure. With some simple experiments you can get an inkling why it's important. If you read the source you can get a firmer grasp. I'm a Scripting Mentor; a mentor shows you the path and will help you keep to the path; they won't walk the path for you. I've told you where you can find the information you seek and now I'm offended that you are too lazy to read it. -- [[User:Strife Onizuka|Strife Onizuka]] 13:37, 24 October 2007 (PDT)
:::Sorry I understand none of that reply. With my eyes, I see only another pointless http://en.wikipedia.org/wiki/Ad_hominem attack. I don't know how to understand ad hominem attacks, i.e., I don't know how to turn them into a useful contribution. I don't mean to appear to ignore your input, I just don't know how to respond substantively in any technically helpful way. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
::::I did not use an ad hominem attack; either there has been a horrible breakdown in communication or one of us doesn't fully understand what an ad hominem attack is. Your questions are valid and I chose to answer them by pointing you to where you can find the answers. I apologize for calling you lazy, there isn't evidence to support that. My answers are a bit vague but that doesn't equate to an ad hominem attack. Could you please support this claim?
::::<rant>You shoot first and ask questions later. You should have asked about it and waited for an answer before you deleted it from the page. I would have explained about variable allocation & deallocation and the other crappy stuff the LSL compiler does with variables if I had been asked. Put yourself in my shoes for a moment: you have been writing code for years when a self-proclaimed newbie doesn't understand your contribution so they delete it, then they have the gall to ask you to explain it. Maybe you can understand why I might be a bit annoyed. If you understood and we disagreed I wouldn't be annoyed, and I wouldn't have minded if you deleted it; I respect intellectual disagreement, it's the other types I can't respect. Looking back, I know you can feel like I do; remember back to when I first edited the article? Imagine how much more annoyed you would have been if I had asked questions after axing your content.</rant>
::::You are asking the wrong questions, you should ask: "What is the cost of a variable?". Take a look at the [http://www.libsecondlife.org/wiki/LSO#Code_Chunk Code Chunk] & [http://www.libsecondlife.org/wiki/LSO#Building_A_Call_Frame Building A Call Frame] sections. The answer isn't obvious but it's hideous. -- [[User:Strife Onizuka|Strife Onizuka]] 18:27, 24 October 2007 (PDT)
'''... operations ...'''
:What are the units of this "operations" measure? Instructions? Bytes? How do we count them? Why do we care? -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
::See the computer science definition... [http://www.thefreedictionary.com/operation] -- [[User:Strife Onizuka|Strife Onizuka]] 13:37, 24 October 2007 (PDT)
:::I visited the http://www.thefreedictionary.com/operation page, and I fail to see how it is relevant. Perhaps I can help by mentioning I've been writing code since 1973. Conventional computer science is familiar to me; it is Second Life that is new to me. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
:::I see no reason to believe we should grow the [[hex]] article to include this info -- no concrete specific answer to the question of to whom do these "operations" counts within the loop matter under which conditions. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
:::: ...how do you fail to see the relevance? I mean that seriously and not condescendingly or mockingly. You have been writing code for 34 years, and you are coming from Python, another interpreted bytecode language. I can't help you; I can't fathom how to help you.
:::: I think the article should be shortened. -- [[User:Strife Onizuka|Strife Onizuka]] 18:27, 24 October 2007 (PDT)
'''... It would be remiss not to mention ...'''
:Which procedure do you intend for us to publish together with additional benchmark results, to allow the scientific audience to most conveniently -- most quickly & inexpensively -- repeat your experiment?
:Can we agree somehow to present fewer exemplars? The only exemplars that have value in my eyes are the concise & conventional exemplar, and the different & concise & small exemplar. Is there a chance that you'd let the page settle if we presented only those two examples together with the clever & fast exemplar?
:::I hope we still hold a chance at such a consensus. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
:Have you seen [[Talk:Hex#Why_bother_with_fast_or_small_or_different_code_exemplars]] yet? Do those few paragraphs explain correctly why you hate the concise & conventional exemplar? They do explain correctly why I avoid clever exemplars like the plague. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]]
::I'll take a look at it.-- [[User:Strife Onizuka|Strife Onizuka]] 13:37, 24 October 2007 (PDT)
:::Thank you. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
'''... [much snipped] ...'''
:I believe I have now answered all the technically substantive open questions.
:If I've left any question unanswered, I hope you can make time to repeat it more clearly. Thanks again for helping me think clearly and neutrally. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 22:57, 19 October 2007 (PDT)
:::So far as I know I have now answered all the new technically substantive open questions. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 15:31, 24 October 2007 (PDT)
= 197 bytes or more =
Scientifically reproducible summary measures of code space now appear in the article:
347 bytes<br/>
197 bytes<br/>
202 bytes<br/>
139 bytes<br/>
More bytes
These results you can quickly & easily reproduce with the [[Code Sizer]] instrument.
We list all the instruments we use at [[Hex#Measuring_Concise_.26_Conventional_.26_Small_.26_Fast]].
-- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] On or before 20 October 2007 (PDT)
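As a rough illustration of the measuring idea -- not the actual [[Code Sizer]] instrument -- the byte counts come from comparing [[llGetFreeMemory]] reports from two compiles of an otherwise identical script, with and without the function under test:
<pre>
// A rough sketch of the measuring idea, not the actual Code Sizer instrument.
// keeper() stays in both compiles so the user function lookup table exists either
// way (see the "Plus Four" discussion above). Compile and run once with candidate()
// present and once with it deleted; the difference between the two free-byte reports
// approximates candidate()'s byte cost.
string keeper() // hypothetical; always present
{
    return "";
}
string candidate(integer bits) // hypothetical stand-in for the function under test
{
    return (string)bits;
}
default
{
    state_entry()
    {
        llOwnerSay((string)llGetFreeMemory()
            + " free bytes of code at default.state_entry");
    }
}
</pre>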
= 91+-2 milliseconds or more =
Scientifically reproducible summary measures of run time now appear in the article:
147+-30 milliseconds<br/>
105+-7 milliseconds<br/>
102+-6 milliseconds<br/>
113+-7 milliseconds<br/>
91+-2 milliseconds
Those results you can slowly & easily confirm or deny or improve.
We list all the instruments we use at [[Hex#Measuring_Concise_.26_Conventional_.26_Small_.26_Fast]].
Quoting these millisecond results took more calculation than anyone has yet programmed into the [[Efficiency Tester]].
One of us ran that harness 3 x 1,000 times for each version of code, then took the (min, max) pair and reported { middling = ((fastest + slowest) / 2) } with a +- imprecision of { max(middling - fastest, slowest - middling) }.
The raw data was as follows. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] On or before 20 October 2007 (PDT)
<pre>
[5:51]  Object: 15146 free bytes of code at default.state_entry
[5:53]  Object: 98.496002+-FIXME milliseconds passed on average in each of
[5:53]  Object: 1000 trials of running the code in the loop
[5:55]  Object: 96.188004+-FIXME milliseconds passed on average in each of
[5:55]  Object: 1000 trials of running the code in the loop
[5:56]  Object: 106.851997+-FIXME milliseconds passed on average in each of
[5:56]  Object: 1000 trials of running the code in the loop
[6:01]  Object: 15151 free bytes of code at default.state_entry
[6:03]  Object: 111.475998+-FIXME milliseconds passed on average in each of
[6:03]  Object: 1000 trials of running the code in the loop
[6:04]  Object: 112.019997+-FIXME milliseconds passed on average in each of
[6:04]  Object: 1000 trials of running the code in the loop
[6:06]  Object: 98.428001+-FIXME milliseconds passed on average in each of
[6:06]  Object: 1000 trials of running the code in the loop
[6:09]  Object: 15209 free bytes of code at default.state_entry
[6:11]  Object: 120.155998+-FIXME milliseconds passed on average in each of
[6:11]  Object: 1000 trials of running the code in the loop
[6:12]  Object: 15209 free bytes of code at default.state_entry
[6:14]  Object: 115.272003+-FIXME milliseconds passed on average in each of
[6:14]  Object: 1000 trials of running the code in the loop
[6:16]  Object: 106.751999+-FIXME milliseconds passed on average in each of
[6:16]  Object: 1000 trials of running the code in the loop
[6:18]  Object: 111.919998+-FIXME milliseconds passed on average in each of
[6:18]  Object: 1000 trials of running the code in the loop
[6:25]  Object: 15001 free bytes of code at default.state_entry
[6:28]  Object: 176.723999+-FIXME milliseconds passed on average in each of
[6:28]  Object: 1000 trials of running the code in the loop
[6:30]  Object: 131.716003+-FIXME milliseconds passed on average in each of
[6:30]  Object: 1000 trials of running the code in the loop
[6:32]  Object: 117.820000+-FIXME milliseconds passed on average in each of
[6:32]  Object: 1000 trials of running the code in the loop
[6:35]  Object: 15204 free bytes of code at default.state_entry
[6:36]  Object: 89.372002+-FIXME milliseconds passed on average in each of
[6:36]  Object: 1000 trials of running the code in the loop
[6:38]  Object: 92.748001+-FIXME milliseconds passed on average in each of
[6:38]  Object: 1000 trials of running the code in the loop
[6:40]  Object: 93.180000+-FIXME milliseconds passed on average in each of
[6:40]  Object: 1000 trials of running the code in the loop
</pre>
= Neutral Point Of View =
I notice I continue to feel delighted by the way our community, interacting together, makes the point of view we publish ever more neutral.
== You Deleting My Work, Me Deleting Your Work ==
I wonder if I can help by mentioning the two guides toward a neutral point of view that I am myself working to hold in mind as I play here:
1. I think the essence of any wikipedia is the astonishing idea that people can end up working together more when all together we agree "if you don't want your writing to be edited mercilessly and redistributed at will, then don't submit it here", as our edit page warns us.
2. I think the perspectives we represent run across the range of perspectives balanced at http://en.wikipedia.org/wiki/Optimization_(computer_science)#When_to_optimize. Some of us naturally care most about clear and conventional code, some of us care most about small code, some of us care most about fast code. That's great balance.
-- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] ~16 October 2007 (PDT)
== Why bother with fast or small or different code exemplars ==
Kuhn defines an exemplar as a solution paired with a problem to teach the new people how to solve problems (at least, that's my paraphrase of http://en.wikipedia.org/wiki/Exemplar).
Me, I launched this hex page on 8 October 2007, foolishly thinking I was writing out an obvious and adequate exemplar. Me, I thought I was just sharing the trivial work of me the LSL newbie getting the hex out to make some permission masks intelligible, in contrast with how confused the masks look in decimal. I thought I was just saving the next newbie some effort in working out the details.
I didn't think much about whether the masks were signed or unsigned. LSL permission masks only take on values that can be seen as positive or unsigned. LSL permission masks never set the most significant unsigned bit which is the sign bit of a signed two's-complement integer.
The creative tension that we subsequently worked thru then taught me much.
I now know that I initially saw value only in clear & conventional code exemplars. In effect, I saw this wiki only teaching LSL to newbies like me.
I now guess this wiki has a wider purpose. Sure, this wiki teaches LSL to newbies, but this wiki also gives experts a place to share between themselves exemplars of making code clever or fast or small, including exemplars of respecifying code to make the code more clever or faster or smaller even at the extreme cost of making the code less clear & conventional, thus harder to call and reuse and read.
As yet I still don't understand the burst of changes ff. 14:59, 14 October 2007 ...
... but at least we now have found a perspective from which I can hope to understand those changes.
-- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 19:24, 17 October 2007 (PDT) -- created
-- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 06:47, 20 October 2007 (PDT) -- modified towards NPOV to claim value for clever as well as value for brief, clear, small, and fast
Eventually we spoke of "concise" (meaning "brief" and "clear"), "conventional", "clever", "small", and "fast" as qualities some or all of our people find desirable in a script. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 13:29, 22 October 2007 (PDT)
:I was unfamiliar with the term "exemplar" but it is good terminology. It's a window onto the deeper issue but it doesn't highlight it well in my opinion. The underlying issue is how people learn and what they learn. It hinges on how we perceive the expectations of the users: what the users are expecting to learn and how they expect to learn it. Someone new to LSL comes here with the expectation of learning to write LSL without any background knowledge. Experienced LSL users aren't here to learn the fundamentals, they already know them; they are here to learn information with the expectation of forgetting it sometime in the future. It isn't important that they remember it past the time when they aren't using it, as long as they remember how to find it; for them the wiki is a reference. The needs of the newbie vs. the reference user are fundamentally different. I think it's time we revisit the layout design so it can better serve both extremes. When the layout was first designed, the only content on the wiki was very technical documentation (message template stuff). I don't want to take all the credit for layout design but I'll take all the blame for it; it's been a long time since I was a newbie to programming and I was a fast learner.
:The second part is how they expect to learn. Learning styles differ drastically. It's not as simple as just throwing content at them and expecting them to learn it (we both know if it were we wouldn't be having this discussion ^_^). I would write more on this but it would take a dissertation to do it any semblance of justice.
:At this point it would be good to move this (mini) discussion to a part of the wiki where proposals can be made and be actionable. Until the underlying issues are resolved (or reburied) we will likely be at each other's throats always. -- [[User:Strife Onizuka|Strife Onizuka]] 15:41, 24 October 2007 (PDT)
::I like all you say, so far as I understand it.
::To settle the [[hex]] article, I now guess it would make sense to divide all our content in two. In the article tab, we could present only the basic specification and demo together with brief & clear & conventional exemplars. Here in the discussion tab, we could present the clever & small & fast exemplars and the raw data that backs our small & fast claims.
::I'll try that approach for a rewrite if I get to it before anyone stops me. -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 16:25, 24 October 2007 (PDT)
::Sounds like we understand each other's taste in exemplars better now. Thanks for working to create that mutual understanding.
::Yes I am a person new to LSL who comes here with an expectation of learning to write LSL without any background knowledge. Thanks for asking.
::So far as I know, as yet we've only volunteered me to finish the change to llGetTime from llGetTimestamp now underway at [[Talk:Code Racer]]. I'd also now happily follow anyone's lead into discovering where the Mac OS client from LL hides the LSO it produces ([[Talk:Hex#LSO, Defined]] explains how that is not obvious). -- [[User:Ppaatt Lynagh|Ppaatt Lynagh]] 16:12, 24 October 2007 (PDT)
= LSO, Defined =
"LSO" may be slang meaning LSL byte code.
The LL client compiles scripts into files of the LSO format, then copies those LSO files thru the client's cache to the server, according to the http://www.libsecondlife.org/wiki/LSO page. Wiki.SecondLife articles that link to that LSO page include [[LSL_Bytecode]] and [[LSO]].
The LL client might not always be so simple. For example, a sample of watching ( Mac Second Life > Preferences > Network > Disk Cache Location ) shows only 53448 bytes of index.db2.x.1146144658 and 104857600 bytes of data.db2.x.1146144658 when a script is compiled - no small separate LSO file.
= Talk:Hex Archives =


See 2007-10-29 http://wiki.secondlife.com/w/index.php?title=Talk:Hex&oldid=38446 to read fourteen old threads:
* 1 That Red Mist, Dispelled Maybe
* 2 Solution
* 3 Efficiency Tester vs. Code Racer
* 4 llGetFreeMemory and llGetTime results
** 4.1 Round Three
** 4.2 Round Two
** 4.3 Round One
* 5 Different & Clever & Fast
* 6 Third Talk
* 7 197 bytes or more
* 8 91+-2 milliseconds or more
* 9 Neutral Point Of View
** 9.1 You Deleting My Work, Me Deleting Your Work
** 9.2 Why bother with fast or small or different code exemplars
* 10 LSO, Defined
* 11 Talk:Hex Archives

See 2007-10-24 http://wiki.secondlife.com/w/index.php?title=Talk:Hex&oldid=38020 to read five old threads:
* 5 Edit Conflicts
* 8 18 October Rewrite
* 9 Loop Costs Unscientific
* 11 Due Diligence
* 12 Correct At A Glance

See 2007-10-17 http://wiki.secondlife.com/w/index.php?title=Talk:Hex&oldid=35999 to read four old threads:
* 3 Principle Of Least Astonishment
* 5 Second Talk
* 6 First Talk
* 7 Our Growing Consensus

Note: Our talk did link with http://en.wikipedia.org/wiki/Principle_of_least_astonishment

See also [[Talk:Efficient Hex]].
