Revision as of 07:44, 20 October 2007
Different & Clever & Fast
The article claims:
- Note: Reworking this concise & small code into the clever & fast style does make this different code larger and faster, producing run times as fast as 91+-2 milliseconds.
Perhaps by NPOV we should also quote that byte size, to show how much larger that code is. And then also blog here an actual example result of that exercise, corresponding to that quote of run time.
I think this is the code we mean; its byte size may be 161 and its run time 91+-2 milliseconds. I hope to return soon to reconfirm.
<pre>
// http://wiki.secondlife.com/wiki/hex
string bits2nybbles(integer bits)
{
    integer lsn; // least significant nybble
    string nybbles = "";
    do
    {
        nybbles = llGetSubString("0123456789ABCDEF", lsn = (bits & 0xF), lsn) + nybbles;
    } while (bits = (0xfffFFFF & (bits >> 4)));
    return nybbles;
}
</pre>
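For anyone checking the logic without an LSL compiler handy, here is a rough Python model of what that do-while loop computes. The 0xFFFFFFFF mask is an artifact of the translation, added only to make Python's unbounded integers behave like LSL's 32-bit ones:

```python
XDIGITS = "0123456789ABCDEF"

def bits2nybbles(bits):
    # Model the LSL do-while: emit at least one nybble, shift right by 4,
    # and mask to 28 bits (0xfffFFFF) to clear the copies of the sign bit.
    bits &= 0xFFFFFFFF  # make Python act like a 32-bit LSL integer
    nybbles = ""
    while True:
        nybbles = XDIGITS[bits & 0xF] + nybbles
        bits = 0xFFFFFFF & (bits >> 4)
        if not bits:
            return nybbles
```

Note the do-while shape matters: an input of 0 still yields "0" rather than an empty string.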
Third Talk
... I hate [the 18 October] revision ...
- Good.
- I think you would vote to undo the 18 October revision, and I know I would vote to undo the 15/14 October revision.
- I believe in this way we have established a creative tension: a line along which we can hope to find a neutral point of view (NPOV) and a consensus statement of that NPOV.
- Thank you. -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
... shouldn't use llGetTimestamp for profiling ...
... should be using the script time functions (llGetTime) ...
- I see this hex article contains reproducible measurements across a spread from 91+-2 to 147+-30 milliseconds, taken from 1,000 runs of the Efficiency Tester instrument that harnesses llGetTimestamp.
- I see now that the llGetTime, llGetAndResetTime, llResetTime, timer, llGetRegionTimeDilation articles were all woefully confused. By your one-word "should" hint here, I think I now understand what we mean by seconds and fractional seconds of dilated time, so yes, I have now updated all five of those confused articles accordingly, thank you.
- I understand you to be saying we should change the Efficiency Tester instrument to call llGetTime rather than llGetTimestamp. Is that right? Then also we should update the Code Racer instrument, by the same reasoning. Then I see after all that we must copy the Efficiency Tester changes into the exact duplicate of that instrument that appears expanded inline deep inside the LSL Script Efficiency article. Agreed? This is exactly your intended recommendation? -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
... cannot replicate ... bytecode cost ...
- Thank you for making time to say so. I've added the Code Sizer article to explain the procedure more fully, and naturally I linked the existing Hex#Measuring_Concise_.26_Conventional_.26_Small_.26_Fast explanation to that article. I hope you or any of us can make time to try again now to confirm or correct the byte counts. -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
... cannot replicate ... bytecode cost ... steps ...
- I've copied your steps into Talk:Code_Sizer as Talk:Code_Sizer#The_Plus_Four_Instrument. In my (extremely limited) experience, those steps produce results that count exactly four more bytes than the Code Sizer instrument. I imagine you already know why, I do not yet know why.
- I vote to have the hex article quote the results of the Code Sizer instrument, rather than the Plus Four instrument. I vote that way because I believe the size occupied when I add a function to a script that already has other functions matters more than the size occupied when I add the first function. -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
... The Loop Cost is ... the cost in bytes of each iteration of the loop ...
- People hear more if we can find a way to say less. To whom does this byte size within the loop matter, and under which conditions? Surely quoting the total byte size of each solution suffices? If we must quote this, then we must define it. Is this byte cost purely within the loop, or does it also include code outside the loop? With what instrument and procedure do we measure this loop cost? -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
... the variable cost ...
- What is this variable cost? What are its units? How do we count it? Why do we care? -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
... operations ...
- What are the units of this "operations" measure? Instructions? Bytes? How do we count them? Why do we care? -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
... It would be remiss not to mention ...
- Which procedure do you intend for us to publish together with additional benchmark results, to allow the scientific audience to most conveniently -- most quickly & inexpensively -- repeat your experiment?
- Can we agree somehow to present fewer exemplars? The only exemplars that have value in my eye are the concise & conventional exemplar and the different & concise & small exemplar. Is there a chance that you'd let the page settle if we presented only those two examples together with the clever & fast exemplar?
- Have you seen Talk:Hex#Why_bother_with_fast_or_small_or_different_code_exemplars yet? Do those few paragraphs explain correctly why you hate the concise & conventional exemplar? They do explain correctly why I avoid clever exemplars like the plague. -- Ppaatt Lynagh
... [much snipped] ...
- I believe I have now answered all the technically substantive open questions.
- If I've left any question unanswered, I hope you can make time to repeat it more clearly. Thanks again for helping me think clearly and neutrally. -- Ppaatt Lynagh 22:57, 19 October 2007 (PDT)
Edit Conflicts
The wiki server temporarily went nuts. It says we had an edit conflict. It says I sent twice. It says my actual second send, which it would count as a third send, went nowhere. None of that fits with what I saw here.
Go figure. I hope we didn't destroy any contributed text thru that confusion.
197 bytes or more
Scientifically reproducible summary measures of code space now appear in the article:
347 bytes
197 bytes
202 bytes
139 bytes
More bytes
These results you can quickly & easily reproduce with the Code Sizer instrument.
We list all the instruments we use at Hex#Measuring_Concise_.26_Conventional_.26_Small_.26_Fast.
91+-2 milliseconds or more
Scientifically reproducible summary measures of run time now appear in the article:
147+-30 milliseconds
105+-7 milliseconds
102+-6 milliseconds
113+-7 milliseconds
91+-2 milliseconds
Those results you can slowly & easily confirm or deny or improve.
We list all the instruments we use at Hex#Measuring_Concise_.26_Conventional_.26_Small_.26_Fast.
Quoting these millisecond results took more calculation than anyone has yet programmed into the Efficiency Tester.
One of us ran that harness 3 x 1,000 times for each version of code, then took the (min, max) pair and reported { middling = ((fastest + slowest) / 2) } with a +- imprecision of { max(middling - fastest, slowest - middling) }.
The raw data was as follows:
<pre>
[5:51] Object: 15146 free bytes of code at default.state_entry
[5:53] Object: 98.496002+-FIXME milliseconds passed on average in each of
[5:53] Object: 1000 trials of running the code in the loop
[5:55] Object: 96.188004+-FIXME milliseconds passed on average in each of
[5:55] Object: 1000 trials of running the code in the loop
[5:56] Object: 106.851997+-FIXME milliseconds passed on average in each of
[5:56] Object: 1000 trials of running the code in the loop
[6:01] Object: 15151 free bytes of code at default.state_entry
[6:03] Object: 111.475998+-FIXME milliseconds passed on average in each of
[6:03] Object: 1000 trials of running the code in the loop
[6:04] Object: 112.019997+-FIXME milliseconds passed on average in each of
[6:04] Object: 1000 trials of running the code in the loop
[6:06] Object: 98.428001+-FIXME milliseconds passed on average in each of
[6:06] Object: 1000 trials of running the code in the loop
[6:09] Object: 15209 free bytes of code at default.state_entry
[6:11] Object: 120.155998+-FIXME milliseconds passed on average in each of
[6:11] Object: 1000 trials of running the code in the loop
[6:12] Object: 15209 free bytes of code at default.state_entry
[6:14] Object: 115.272003+-FIXME milliseconds passed on average in each of
[6:14] Object: 1000 trials of running the code in the loop
[6:16] Object: 106.751999+-FIXME milliseconds passed on average in each of
[6:16] Object: 1000 trials of running the code in the loop
[6:18] Object: 111.919998+-FIXME milliseconds passed on average in each of
[6:18] Object: 1000 trials of running the code in the loop
[6:25] Object: 15001 free bytes of code at default.state_entry
[6:28] Object: 176.723999+-FIXME milliseconds passed on average in each of
[6:28] Object: 1000 trials of running the code in the loop
[6:30] Object: 131.716003+-FIXME milliseconds passed on average in each of
[6:30] Object: 1000 trials of running the code in the loop
[6:32] Object: 117.820000+-FIXME milliseconds passed on average in each of
[6:32] Object: 1000 trials of running the code in the loop
[6:35] Object: 15204 free bytes of code at default.state_entry
[6:36] Object: 89.372002+-FIXME milliseconds passed on average in each of
[6:36] Object: 1000 trials of running the code in the loop
[6:38] Object: 92.748001+-FIXME milliseconds passed on average in each of
[6:38] Object: 1000 trials of running the code in the loop
[6:40] Object: 93.180000+-FIXME milliseconds passed on average in each of
[6:40] Object: 1000 trials of running the code in the loop
</pre>
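The summary calculation described above (a middling value with a +- imprecision, taken from the min and max of three 1,000-trial runs) can be sketched as follows. Python is used here purely to illustrate the arithmetic; nothing below is part of the Efficiency Tester itself.

```python
def summarize(trial_means):
    # Reduce repeated trial means to the (min, max) pair, then report
    # middling = (fastest + slowest) / 2 with a +- imprecision of
    # max(middling - fastest, slowest - middling), as described above.
    fastest = min(trial_means)
    slowest = max(trial_means)
    middling = (fastest + slowest) / 2
    imprecision = max(middling - fastest, slowest - middling)
    return middling, imprecision

# The three runs logged at [6:36], [6:38], [6:40] above:
middling, imprecision = summarize([89.372002, 92.748001, 93.180000])
# middling ~= 91.3, imprecision ~= 1.9: the quoted "91+-2 milliseconds"
```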
18 October Rewrite
So I'm baaack again, trying to fold all our talk to date into the article. I also made my best guess at how to keep the uncommented changes to the article itself.
I think I've now got the story straight on the concise, conventional, & small code. Concise & conventional I can see at a glance. Small I can easily measure with llGetFreeMemory.
I think the article may by now have a structure that lets us briefly & clearly tell the story of the fast code also. You can see the FIXME spots where we need reproducible quoted milliseconds. I'm not sure how much +- inaccuracy we should allow for quotes of milliseconds. I'll either stand by to see someone else volunteer to tell the story of the fast code, or I'll return to measure that story and tell that story myself. I see we have Code Racer and other articles available to help us call llGetTimestamp to measure run time. Looks like nobody has yet found a technique that returns results much faster than about three real-world minutes per trial, ouch.
I think/hope we're moving towards implementing exactly one fast hex function that implements the specification for the hex function. I don't think we need a fast implementation of the bits2nybbles function.
Have we progressed?
Will we finish soon?
- I hate this revision. It's a step backwards.
- You shouldn't use llGetTimestamp for profiling, you should be using the script time functions (llGetTime).
- I cannot replicate your bytecode cost. Since you didn't post instructions and you used llGetTimestamp, I can only conclude that your measurements are entirely invalid. My steps are below:
- Compile and execute
<pre>
default{state_entry(){llOwnerSay((string)llGetFreeMemory());}}
</pre>
to establish a baseline.
- Paste an implementation at the top of the script from #1, compile and run.
- Subtract the result of #2 from the result of #1; you now have the bytecode cost.
- Repeat steps #2 & #3 for each implementation.
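As a worked example of the subtraction in step #3 (Python is used only for the arithmetic; the free-byte figures are ones reported elsewhere on this page, in the Due Diligence thread):

```python
def bytecode_cost(baseline_free, with_function_free):
    # llGetFreeMemory reports fewer free bytes once a function is added;
    # the difference is the bytecode space the function occupies.
    return baseline_free - with_function_free

first_copy = bytecode_cost(16172, 15821)  # 351 bytes for the first copy
later_copy = bytecode_cost(15677, 15330)  # 347 bytes for a later copy
```

This also restates, in miniature, the puzzle discussed in Third Talk: the first copy of a function appears to cost four more bytes than each later copy.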
- Here you go again about a single implementation. You may not need a fast implementation, but someone might. -- Strife Onizuka 12:42, 19 October 2007 (PDT)
- I very much appreciate the way the creative tension created between us by your latest remarks brought forth the Code Sizer article. Previously I had no idea that writing out what I knew of that technique would produce such a burst of brief clear conventional code, correct at a glance. Great fun to write, yes thank you. -- Ppaatt Lynagh 18:33, 19 October 2007 (PDT)
- I see I cannot understand some of your remarks here. I guess I can help best by quoting only what I do understand, in a new section at the top, titled Third Talk. That work of mine will leave you free to withdraw, revise, explain or abandon your other remarks here. -- Ppaatt Lynagh 19:04, 19 October 2007 (PDT)
- Complete! You can see I posted my guess of the plan of corrective action that you mean to suggest. -- Ppaatt Lynagh 06:48, 20 October 2007 (PDT)
Loop Costs Unscientific
I see we were quoting loop costs of:
"2 variables, 124 bytes" = Clear & Conventional
"1 variable, 105 bytes" = Small
"0 variables, 81 bytes" = Fast
"1 variable, 105 bytes" = Different & Small - Signed Only
"0 variables, 100 bytes" = Different & Small - Signed or Unsigned
I don't know what we mean by "loop cost". I see we didn't link to a definition on how to reproduce these results. I'll strike the loop costs, pending someone making these results repeatable, thus scientific.
-- Ppaatt Lynagh 19:34, 17 October 2007 (PDT)
In the event I see I also struck a couple of other undefined terms.
We had been speaking of "executing" "operators", without a definition for "operators". We may have meant operations? Or LSL bytecodes? I substituted the word "work".
We had been speaking of "LSO". I don't know if/how "LSO byte code" differs from LSL "byte code", so I cut the English to just "bytes" and "code" and "size".
-- Ppaatt Lynagh 22:18, 17 October 2007 (PDT)
- <rant>Unscientific? Maybe you should try to understand the science before you call it unscientific. The Loop Cost is exactly what it sounds like, the cost in bytes of each iteration of the loop. If you had any understanding of LSO this would be obvious, but you don't even know what LSO is. It would be remiss not to mention the variable cost as it is important when it comes to size and performance, but that is beyond you. I could spend a long time answering your questions but there is a more authoritative source... the compiler source code.</rant>
- "operations" would have been a better word there but "operators" is valid (just a bit awkward). "work" should not be used, it is vague.
- There are a few ways of calculating the loop cost.
- A) compile the compiler as a stand alone application.
- B) hack the client cache to extract compiled scripts
- C) have a perfect understanding of how the compiler works and of LSO. If you have a good understanding of the compiler you can manually count the cost and you can design a framework for checking those numbers (you have to account for the number of variables lest the numbers be off). I tend to just add things up but for the information you butchered, I did in fact verify them.
- --Strife Onizuka 12:18, 19 October 2007 (PDT)
- Which procedure do you intend for us to publish together with your results, to allow the scientific audience to most inexpensively repeat your experiment? -- Ppaatt Lynagh 15:34, 19 October 2007 (PDT)
- I see I cannot understand some of your remarks here. I guess I can help best by quoting only what I do understand, in a new section at the top, titled Third Talk. That work of mine will leave you free to withdraw, revise, explain or abandon your other remarks here. -- Ppaatt Lynagh 19:04, 19 October 2007 (PDT)
- Complete! You can see I posted my guess of the plan of corrective action that you mean to suggest. -- Ppaatt Lynagh 06:48, 20 October 2007 (PDT)
Neutral Point Of View
I notice I continue to feel delighted by the way our community interacting together makes the point of view we publish ever more neutral.
You Deleting My Work, Me Deleting Your Work
I wonder if I can help by mentioning the two guides toward a neutral point of view that I am myself working to hold in mind as I play here:
1. I think the essence of any wikipedia is the astonishing idea that people can end up working together more when all together we agree "if you don't want your writing to be edited mercilessly and redistributed at will, then don't submit it here", as our edit page warns us.
2. I think the perspectives we represent run across the range of perspectives balanced at http://en.wikipedia.org/wiki/Optimization_(computer_science)#When_to_optimize. Some of us naturally care most about clear and conventional code, some of us care most about small code, some of us care most about fast code. That's great balance.
-- Ppaatt Lynagh ~16 October 2007 (PDT)
Why bother with fast or small or different code exemplars
Kuhn defines an exemplar as a solution paired with a problem to teach the new people how to solve problems (at least, that's my paraphrase of http://en.wikipedia.org/wiki/Exemplar).
Me, I launched this hex page on 8 October 2007, foolishly thinking I was writing out an obvious and adequate exemplar. Me, I thought I was just sharing the trivial work of me the LSL newbie getting the hex out to make some permission masks intelligible, in contrast with how confused the masks look in decimal. I thought I was just saving the next newbie some effort in working out the details.
I didn't think much about whether the masks were signed or unsigned. LSL permission masks only take on values that can be seen as positive or unsigned: they never set the most significant bit, which is the sign bit of a signed two's-complement integer.
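That sign-bit claim is easy to check mechanically. A minimal sketch in Python (PERM_ALL is LSL's documented all-permissions constant, 0x7FFFFFFF):

```python
PERM_ALL = 0x7FFFFFFF   # LSL's all-permissions mask
SIGN_BIT = 0x80000000   # bit 31, the sign bit of a 32-bit two's-complement integer

# Every legal permission mask is a subset of PERM_ALL, and PERM_ALL
# leaves bit 31 clear, so no permission mask is ever negative when
# read as a signed 32-bit integer.
assert PERM_ALL & SIGN_BIT == 0
```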
The creative tension that we subsequently worked thru then taught me much.
I now know that I initially saw value only in clear & conventional code exemplars. In effect, I saw this wiki only teaching LSL to newbies like me.
I now guess this wiki has a wider purpose. Sure, this wiki teaches LSL to newbies, but this wiki also gives experts a place to share between themselves exemplars of making code clever or fast or small, including exemplars of respecifying code to make the code more clever or faster or smaller even at the extreme cost of making the code less clear & conventional, thus harder to call and reuse and read.
I still don't understand the burst of changes following 14:59, 14 October 2007 ...
... but at least we now have found a perspective from which I can hope to understand those changes.
-- Ppaatt Lynagh 19:24, 17 October 2007 (PDT) -- created
-- Ppaatt Lynagh 06:47, 20 October 2007 (PDT) -- modified towards NPOV to claim value for clever as well as value for brief, clear, small, and fast
Due Diligence
Part of the difference between us is that my history of pain tells me it is irresponsible to claim "substantial" differences in how fast or small we made code, at the expense of the to-me-most-essential "correct at a glance" quality, without quoting reproducible numbers. When we don't publish a reproducible experimental setup, we're trading opinion, not science, hardly a worthwhile trade. When we mean to contribute the half-an-edit that is just the opinion to invite the community to finish our work by adding the science, then I think we should include an explicit link over here to this Talk:Hex tab. I see with surprise that Strife chose to delete my explicit invitations for the community to complete our unsubstantiated fast & small claims. -- Ppaatt Lynagh 09:16, 14 October 2007 (PDT)
- [... snipped ...] I'll qualify that with numbers when I next log in. I'll write an article on profiling functions and we can link to it. I believe the Readability version uses about 150% to 200% the bytecode of the other versions; it's really chunky (I'm not keen on counting the bytes in my head atm). Linking to the talk page for every implementation is a bit overkill, having a link at the end of the header paragraph would be ok. -- Strife Onizuka 09:51, 14 October 2007 (PDT)
- a) I see numbers! Thank you!! I look forward to learning how to reproduce your results. Hints welcome! Without hints, I'll be exploring llGetFreeMemory. I guess I can measure that, add code, measure that again, and call the diff the LSO size. How to measure time I have no clue, but I think I've found hints inside this wiki.secondlife.com, I will google ... ah, likely I was remembering the llGetTimestamp example found at Efficiency Tester and then explored in more depth at LSL_Script_Efficiency. I grumble to see us counting time progressing in the real world as a measure of speed even though we share the servers and clients out among who knows how much code, but I guess, like many benchmarks, the results will still be useful in comparison with one another ...
- b) http://en.wikipedia.org/wiki/Atm decodes ATM to At The Moment, I guess that's what we mean.
- c) I don't immediately understand "loop cost". My own guess of fastest would be to unroll the loop: 32-bit ints never contain more than 8 nybbles.
- d) To reproduce the "LSO Size" results I figure I'll add more than one copy of the functions into a script, negligibly changing the spelling but never the lengths of the names (e.g., by ending the global constant and function names with a digit chosen from "0123456789"). llGetFreeMemory should then report me eating away perfectly consistent byte counts of code space every time I add a copy of the function. Once I have multiple copies, I can begin to show whether calling the function makes a difference in how much code space it eats.
- I don't always exactly reproduce the article's new claim of "LSO Size" of 351 for the Correct At A Glance code. I count 347 bytes disappearing when I edit the 2 or 3 version out of the following 1 2 3 copy. I count 14983 bytes free with the code as shown, rising by 347 to 15330 when I delete copy 3, rising by 347 to 15677 when I delete copy 2. I do see a difference of 351 if I delete the last copy. I see 15821 bytes free with one copy and no calls, rising by 351 to 16172 if I delete the last copy.
- I don't yet understand how the first copy occupies 351 bytes and subsequent copies occupy 347 bytes. Do you?
- Ah another clue. I see deleting even the first copy frees 347 bytes, not 351 bytes, when I add the first copy to some other code, such as moving by 347 between 16045 and 15698 free by adding/deleting copy 1 to a script such as:
<pre>
string callable(integer passed)
{
    return (string) passed;
}

default
{
    state_entry()
    {
        llOwnerSay((string) llGetFreeMemory());
        llOwnerSay(callable(-1));
        llOwnerSay(callable(-1));
    }
}
</pre>
- And yes consistently now, all of the five llGetFreeMemory LSO sizes reported in the article as 351 341 254 201 204 I now reproduce offset by four -- I see 348 337 250 197 200.
- I still vote for shrinking the article to show just most Correct At A Glance, most Fast, and most Small. -- Ppaatt Lynagh 21:41, 15 October 2007 (PDT)
<pre>
// http://wiki.secondlife.com/wiki/hex

string XDIGITS1 = "0123456789abcdef"; // could be "0123456789ABCDEF"

string bits2nybbles1(integer bits)
{
    string nybbles = "";
    while (bits)
    {
        integer lsbs = bits & 0xF;
        string nybble = llGetSubString(XDIGITS1, lsbs, lsbs);
        nybbles = nybble + nybbles;
        bits = bits >> 4; // discard the least significant bits at right
        bits = bits & 0xfffFFFF; // discard the sign bits at left
    }
    return nybbles;
}

string hex1(integer value)
{
    if (value < 0)
    {
        return "-0x" + bits2nybbles1(-value);
    }
    else if (value == 0)
    {
        return "0x0"; // bits2nybbles(value) == "" when (value == 0)
    }
    else // if (value > 0)
    {
        return "0x" + bits2nybbles1(value);
    }
}

string XDIGITS2 = "0123456789abcdef"; // could be "0123456789ABCDEF"

string bits2nybbles2(integer bits)
{
    string nybbles = "";
    while (bits)
    {
        integer lsbs = bits & 0xF;
        string nybble = llGetSubString(XDIGITS2, lsbs, lsbs); // was XDIGITS1, surely a transcription slip
        nybbles = nybble + nybbles;
        bits = bits >> 4; // discard the least significant bits at right
        bits = bits & 0xfffFFFF; // discard the sign bits at left
    }
    return nybbles;
}

string hex2(integer value)
{
    if (value < 0)
    {
        return "-0x" + bits2nybbles2(-value);
    }
    else if (value == 0)
    {
        return "0x0"; // bits2nybbles(value) == "" when (value == 0)
    }
    else // if (value > 0)
    {
        return "0x" + bits2nybbles2(value);
    }
}

string XDIGITS3 = "0123456789abcdef"; // could be "0123456789ABCDEF"

string bits2nybbles3(integer bits)
{
    string nybbles = "";
    while (bits)
    {
        integer lsbs = bits & 0xF;
        string nybble = llGetSubString(XDIGITS3, lsbs, lsbs);
        nybbles = nybble + nybbles;
        bits = bits >> 4; // discard the least significant bits at right
        bits = bits & 0xfffFFFF; // discard the sign bits at left
    }
    return nybbles;
}

string hex3(integer value)
{
    if (value < 0)
    {
        return "-0x" + bits2nybbles3(-value);
    }
    else if (value == 0)
    {
        return "0x0"; // bits2nybbles(value) == "" when (value == 0)
    }
    else // if (value > 0)
    {
        return "0x" + bits2nybbles3(value);
    }
}

default
{
    state_entry()
    {
        llOwnerSay((string) llGetFreeMemory());
        llOwnerSay(hex1(-1));
        llOwnerSay(hex1(-1));
        llOwnerSay(hex1(-1));
    }
}
</pre>
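Point (c) above guessed that unrolling the loop would be fastest, since a 32-bit integer never contains more than 8 nybbles. Here is a minimal sketch of that shape, in Python rather than LSL (an LSL version would use llGetSubString for the lookups); note that unlike bits2nybbles it always emits 8 digits, leading zeros included:

```python
XDIGITS = "0123456789ABCDEF"

def bits2nybbles_unrolled(bits):
    # Eight fixed lookups in place of a loop; always 8 hex digits out.
    bits &= 0xFFFFFFFF  # make Python act like a 32-bit LSL integer
    return (XDIGITS[(bits >> 28) & 0xF] + XDIGITS[(bits >> 24) & 0xF]
          + XDIGITS[(bits >> 20) & 0xF] + XDIGITS[(bits >> 16) & 0xF]
          + XDIGITS[(bits >> 12) & 0xF] + XDIGITS[(bits >> 8) & 0xF]
          + XDIGITS[(bits >> 4) & 0xF] + XDIGITS[bits & 0xF])
```

Whether the fixed-width output is acceptable depends on the hex specification under discussion; the point here is only the unrolled shape.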
Correct At A Glance
Strife,
I see you say "your opinions have helped clarify and shape my own opinions". Your talk has had the same effect on my talk. Thank you, for me that's great fun.
I do find marketing, journalism, popular science, etc. to be useful & worthwhile human activities with relevance to the community effort of making wikipedias maximally encyclopaedic, yes. So far as I know, that bias of mine is a difference between us.
I see you say "the article" "lacked" "a neutral point of view". I hope you know I emphatically agree. I hope you know I could not see this truth until after you pointed this truth out. For example, I did not understand the truth that "we find the first few implementations equally easy to call, because we have studied all those implementations enough to believe each actually does produce exactly the same output for any input" until after you explained. You explained that for you the implementations tuned to the point of not being correct at a glance were, for you, still easy to call. I see that now, but I didn't see that until after you showed me.
I agree that encyclopaedias should substitute a generic phrase in place of an equivalent trademarked phrase. I think "correct at a glance" is not a trademarked phrase. I think "correct at a glance" has a Google history of being used ~14,000 times or more, and says significantly more than "readable". I watched with interest as you pushing against me taught me to fill out the else clauses of the if-else-if of the hex routine, rather than letting hex shortcut them. I watched with interest as you pushing against me taught me to find my way to bits2nybbles as the least work core of this, rather than hexu. These are the experiences that taught me my bias is "correct at a glance", when contrasted with your "fast" and "small" bias. Over in User:Ppaatt_Lynagh I long ago said "Thanks to Strife Onizuka for helping to verbalise ... the "correct at a glance" distinctive of separateWords, etc."
I see you say 'if you do a Google search for '"Correct At A Glance"' my userpage comes up as hit #2'. I'm unable to reproduce this result. I checked the first few of the ~14,000 hits of http://www.google.com/search?q=%22correct+at+a+glance%22 and I checked the first few of the ~12,000,000 hits of http://www.google.com/search?q=correct+at+a+glance. Please can you describe your experiment in more detail? I agree my pro bono work to date has given me an unfair share of Google attention, and therefore gives my favourite catchphrases unusually many hits.
-- Ppaatt Lynagh 09:03, 14 October 2007 (PDT)
- I was wondering if those other hits were you (but decided it would be too much like stalking to ask). I didn't think it was a TM or it would have come up as one... just sounded like it should be one. It's probably google learning that I do a lot of SL related searches and elevating those. screen-shot. I'd like to say the reason I didn't give reasons before was that I didn't have anything nice to say but really I'm just not in the habit of writing explanations. I feel I was a bit harsh with you. -- Strife Onizuka 09:33, 14 October 2007 (PDT)
- Now that we've discarded the ad hominem idea that any of us here hold the "Correct At A Glance" trademark ... we can return from a more neutral point of view to the issue.
- Q: Is that phrase the best choice of words to communicate that meaning?
- A: Maybe not, it's awfully long.
- How about we talk of "clear", "small", and "fast", as the qualities we desire to find in politely concise code examples? -- Ppaatt Lynagh 16:24, 16 October 2007 (PDT)
Talk:Hex Archives
See http://wiki.secondlife.com/w/index.php?title=Talk:Hex&oldid=35999 to read back into the history of four topics:
3 Principle Of Least Astonishment
5 Second Talk
6 First Talk
7 Our Growing Consensus
Note: Our talk did link with http://en.wikipedia.org/wiki/Principle_of_least_astonishment