Talk:LlFrand

From Second Life Wiki

Latest revision as of 04:48, 13 March 2023

Known Issues

1.13.3 specifying mag < 0 always returns 0.

Ummm, that's not good. I was rather fond of the old method. Why not strip the sign and then use the copysign function found in "math.h" to put it back on? (C99 revisions of glibc support it, i.e. it won't be in Windows but will be in any recent version of Linux GCC.) Something like:
#include <math.h>

float llFrand(float range)
{
    /* strip the sign for the new generator, then copy it back onto the result */
    return copysign(new_method(fabsf(range)), range);
}
QED Strife Onizuka 18:45, 31 January 2007 (PST)

The issue has already been corrected in the code, unit tested, passed through QA, and is awaiting the next rolling update. Phoenix Linden 08:47, 2 February 2007 (PST)

More Random

How is this any more random? It just substitutes one predetermined sequence of numbers with another... Iain Maltz

I agree with you, it's why I added the needs references. -- Strife (talk|contribs) 09:54, 18 February 2012 (PST)
It is clearly mathematically provable that it is not "more" random. Given the function will have a pseudo-random output that is distributed over a statistical normal curve, any stream of output from the same will result in exactly the same statistical normal curve. Therefore, if we call the bare function f() and the "more random" function f'(), the output will be identically spread out over the same distribution, therefore, f() = f'(). On the other hand, the "more random" function f'() does waste time in a loop.
integer r_seed;
integer f_seed;

//--------------------------//
init() {
   vector v;
   v = llGetPos();
   r_seed = llGetUnixTime()+((integer)v.x*(integer)v.y);
   f_seed = r_seed*(integer)(v.x-v.y);
}

//--------------------------//
float tanya_rand(float mag) {
   float f;
   r_seed+=2454267026;
   f_seed+=2909493974;
   r_seed=(r_seed<<29)|((r_seed>>3)&536870911);
   f_seed=(f_seed<<27)|((f_seed>>5)&134217727);
   f=(float)llAbs(r_seed)/(float)llAbs(f_seed);
   f-=(float)llFloor(f); 
   f*=mag;
   return f;
}

//--------------------------//
string trim(float num,integer n) {
    string txt;
    txt=(string)num;
    return llGetSubString(txt,0,llSubStringIndex(txt,".")+n);
}

//--------------------------//
default {

   //--------------------------//
   state_entry() {
      init();
   }

   //--------------------------//
   touch_start(integer num) {
      integer x;
      string msg;
      float avg;
      float f;

      msg="\n\nllFrand()::\n";
      avg=0.0;
      for (x=0; x<100; x++) {
         f=llFrand(100.0);
         avg+=f;
         msg+=trim(f,3)+" ";
      }
      avg/=100.0;
      msg+="\n\nllFrand() average: "+trim(avg,3);
      llOwnerSay(msg);

      msg="\n\ntanya_rand()::\n";
      avg=0.0;
      for (x=0; x<100; x++) {
         f=tanya_rand(100.0);
         avg+=f;
         msg+=trim(f,3)+" ";
      }
      avg/=100.0;
      msg+="\n\ntanya_rand() average: "+trim(avg,3);
      llOwnerSay(msg);
   }
}

-- Tanya Avon 13:26, 10 September 2012 (PDT)

^_^ then I can ax the content. The last version it will appear in is 1169816 -- Strife (talk|contribs) 13:34, 10 September 2012 (PDT)

Unclear note

What does "the process" mean in "The sequence of random numbers are shared across the entire process" in the Notes? Should one expect it to be SIM-, object- or script-"wide"?—The preceding unsigned comment was added on 18:38, 22 September 2008 by Scarabeus Kurka

By "process" they mean the entire sim. -- Strife (talk|contribs) 07:42, 23 September 2008 (PDT)

Unreferenced?

it's very odd to see the unreferenced template used in a wiki this way... I get why it was put there, but there's no reference for the internal workings of the simulators, so it's really out of place... add the fact that it's trying to link to Wikipedia and well... can we either fix the template or just note the contentious point (I agree, there's plenty of question as to just how random [or not] the native function is)?
-- Void (talk|contribs) 19:14, 11 July 2011 (PDT)

  • Well it certainly doesn't make any sense. That's not how entropy works! Adeon Writer 16:14, 17 November 2011 (PST)
  • As to the template, the porting of it from wikipedia turned out to be very involved. -- Strife (talk|contribs) 18:10, 17 November 2011 (PST)

Accurate range for holding an integer as a float

I have my doubts about the statement "If the integer is outside the range [-2^23, +2^23] it may not be accurately represented".

In my own tests, a float can accurately hold any integer between +/- 2^24 (-16777216 to +16777216).

Outside of that range, the value will approximate to the next higher or lower even number, then outside of -33554432 to +33554432 it will approximate to the next higher or lower multiple of 4, etc.

Omei Qunhua 10:05, 20 January 2013 (PST)
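
A minimal LSL sketch of the round-trip test Omei describes (an illustrative test harness, not taken from the article): cast a few integers to float and back, and report which ones survive unchanged.

// Illustrative only: integers up to +/- 2^24 should round-trip exactly;
// beyond that the float can only approximate them.
default {
   state_entry() {
      list samples = [16777215, 16777216, 16777217, 33554431, 33554433];
      integer i;
      integer n;
      integer back;
      for (i = 0; i < llGetListLength(samples); ++i) {
         n = llList2Integer(samples, i);
         back = (integer)((float)n);   // integer -> float -> integer
         llOwnerSay((string)n + " -> " + (string)back);
      }
   }
}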

You are correct; the float format has 24 bits of actual precision: float32 = [1 sign bit | 8 exponent bits | 23 fractional bits (+ leading 1 assumed)]. The original author probably missed the assumed bit in the specification for float32. They also misinterpreted the assumed bit as being random, but the spec says it doesn't include mag as a possible result. llFrand( llPow( 2, 24 ) ) only gives 2^24-1 as a max value, i.e. 23 bits. Corrected to use ±2^24 max accuracy and 23 bits of random values
-- Void (talk|contribs) 14:23, 20 January 2013 (PST)
I would say intellectual laziness on my part. *sigh* -- Strife (talk|contribs) 19:32, 22 January 2013 (PST)
A trifling correction: 2^24-1 is 24 bits, not 23. -- Natasja Kiranov 23:17, 14 September 2013 (PDT)
the value limit and number of bits do not match up because there is an imaginary assumed bit in the 24, which is always 1 and therefore cannot be random... only the 23 real bits after it are random
-- Void (talk|contribs) 01:55, 15 September 2013 (PDT)
This is conflating two things which are not the same; the number of bits in the mantissa is 23, true, but the representable range is much larger because the exponent may also vary. Your argument would only be true if the exponent were fixed, but even a moment's thought will show that cannot be the case. It is impossible that 2^24 different values can all be distinctly represented in only 23 bits. -- Natasja Kiranov 09:03, 16 September 2013 (PDT)
Stupid Chrome ate my comment. It was epic. The short of it: I understand what you are getting at. Yes, using a bigger range does give you more bits that could be 1 or 0, but it doesn't change the fact that at least 8 of them will be zero (assuming a positive range, which we will, because it simplifies things). The probability is much higher that those 8 zeros will be least significant and not most significant. This is sounding to me like Henry Ford and the Model T: you could pick any (random) color you wanted, as long as it was black. -- Strife (talk|contribs) 23:35, 16 September 2013 (PDT)
P.S. I'm thinking of writing an FUI style function to see if I can get more out of llFrand. It's theoretically possible to almost get 3 additional bits out of it (NaNs are the problem). -- Strife (talk|contribs) 23:35, 16 September 2013 (PDT)
The value limit I mentioned above is precision, rather than the min/max representation. The Frand function does not randomize the exponent portion (which is set at -1 unbiased / 126 biased), only the significand, giving a normalized value in the range [0, 1), and then multiplies by the supplied magnitude. The precision is 24 bits, but the randomized portion is always 23 bits (+ a fixed assumed 1, so not a random bit). It's a bit easier to see this in the floating point standard description at IEEE 754-1985 and related entries
ETA to Strife's thought: it should be possible to get any number of random bits with repeated calls; too bad LSL won't let us properly read NaN, so it has to be stored in int, key or other types = S
-- Void (talk|contribs) 00:29, 17 September 2013 (PDT)
Addendum: I actually had the formula wrong, it's ([sign + | exponent 0 unbiased (127 biased) | 23 random bits] -1), but the effect is the same. 24 bits of absolute precision, 1 of which is fixed, the remainder random.
-- Void (talk|contribs) 01:03, 17 September 2013 (PDT)
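
For illustration, here is the scheme Void describes written out as LSL. This is a sketch of the model under discussion, not llFrand's actual implementation (which is exactly what is in dispute); the name frand_23bit_model is made up.

// Hypothetical model only -- NOT the real implementation: 23 random bits give
// a value in [0, 1) in steps of 2^-23, which is then scaled by mag.
// 8388608.0 = 2^23.
float frand_23bit_model(float mag) {
   integer bits = (integer)llFrand(8388608.0);   // 23 random bits
   return (bits / 8388608.0) * mag;              // [0, 1) in 2^-23 steps, scaled
}

Under this model, frand_23bit_model(16777216.0) could only ever return even integers, which is the observation Natasja makes next.
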
Ah, so you have specific knowledge about the underlying implementation of llFrand()? Interesting. The method used to produce these random numbers goes directly to the heart of the matter, of course. Yet, your explanation does not match up with the observed behaviour: if it were purely a case of randomising those 23 bits to get a value in [1.0, 2.0), then subtracting 1 and multiplying by the magnitude, then it would be impossible for llFrand(1 << 24) to produce an odd number -- the pre-multiply value would always be a multiple of 2^-23. Yet such calls produce a reasonable number of odd numbers, and at one point even 68087.53, an impossibility under the explained scheme.
Regardless of one's interpretation of the above, this part from the page appears inconsistent: "Integers generated with this function cannot contain more than 23 random bits, or 24 bits of precision, yielding 2^24 possible final values." If there are only 23 random bits, then there can only be 2^23 different values. -- Natasja Kiranov 05:08, 17 September 2013 (PDT)
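
A sketch of the kind of in-world check Natasja describes (illustrative, not from the article): count how many results of (integer)llFrand(2^24) come out odd. Under the 23-bit model sketched above, the count would have to stay at zero.

// Parity check: under a pure 23-bit scheme, llFrand(16777216.0) could only
// yield even integers after truncation, so any odd results contradict that model.
default {
   touch_start(integer num) {
      integer odd;
      integer i;
      for (i = 0; i < 1000; ++i) {
         if ((integer)llFrand(16777216.0) & 1) ++odd;
      }
      llOwnerSay("odd results out of 1000: " + (string)odd);
   }
}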

Void is obviously wrong. 2^24 is a 25-bit number, and a uniformly distributed random integer between 0 and 2^24-1 has 24 full random bits. You can't get all the random bits that (integer)llFrand(0x1000000) gives by flipping a coin only 23 times; you need 24 flips, or there are some numbers that you won't be able to generate. While it's true that the float's mantissa can only hold 23 bits, the exponent adds the remaining 24th bit when being converted to integer, which is what that part of the article talks about. --Pedro Oval 06:43, 23 September 2013 (PDT)

Let me put it this way: in llFrand(0x1000000), the exponent can be either less than +23 or equal to +23. When it is equal to +23, you still have 23 mantissa bits, totaling a 24-bit number; when it is less than +23, you still have 23 bits of mantissa, of which some will be discarded when truncating to integer, but the total bits in exponent + mantissa add up to 23 bits anyway. So, the decision between the exponent being equal to +23 or less than +23 yields a 24th bit. --Pedro Oval 06:55, 23 September 2013 (PDT)
Or yet another way to look at it: in llFrand(0x1000000), the exponent equals +23 once every 2^1 times on average, so the information that the exponent equals +23 is worth log2(2^1) = 1 bit; together with the 23 bits of the mantissa, that makes 24 bits. It equals +22 once every 2^2 times on average, so the information that it equals +22 is worth log2(2^2) = 2 bits; in this case, the mantissa will be truncated, taking its highest 22 bits when converted to integer, making a total of 24 bits. If it equals +21, that information is worth 3 bits because it will happen once every 2^3 times on average, and the mantissa will be truncated to 21 bits, again making 24 bits. ... The exponent will equal 0 once every 2^24 times, making that information worth 24 bits with no need to take bits from the mantissa when converting to integer, and will be less than 0 once every 2^24 times too, making that worth 24 bits as well, with no intervention of mantissa bits either. Global average over all possible cases: 24 bits of information (24 random bits). Not 23. This fact is implementation-independent; it only uses the assumption that the numbers are really uniformly distributed, that is, that llFrand(0x1000000) will on average be in [0, 1) once every 2^24 times, in [1, 2) once every 2^24 times, in [2, 3) once every 2^24 times, in [3, 4) once every 2^24 times, in [4, 5) once every 2^24 times, etc.
Now if you discard the exponent and force it to 0 in the float, then yes, you're losing 1 bit and you're left with just the 23 mantissa bits. The uniformity of the resulting distribution would then be implementation-dependent. In particular, it's unclear, without looking at the code, whether when the exponent was originally negative as the result of llFrand(0x1000000), the mantissa still had all 23 bits of information. My guess is the answer would be no because it is probably implemented as a division between two 32-bit quantities, so it would have at most 8 random bits in that case. --Pedro Oval 10:35, 23 September 2013 (PDT)
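
Compactly, the accounting above amounts to an expected-information sum (a restatement of Pedro's argument, taking 2^-(24-e) as the probability that the exponent equals e):

\[ \sum_{e=0}^{23} 2^{-(24-e)}\bigl[(24-e) + e\bigr] \;+\; 2^{-24}\cdot 24 \;=\; 24 \text{ bits.} \]
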
Well, I think there's more to it than this; in principle the exponent provides much more than a single bit of information. The point about 24 bits as some kind of useful limit is that beyond that size not all integers can be adequately represented in the float range so the distribution becomes somewhat sparser. However, there are still more than 24 bits of information arising from the call, as testing shows (I could not have obtained the value 68087.53 in this way otherwise).
The precise behaviour comes down to the specific implementation of llFrand(), and I do not know what that is. (Although the evidence is clear that the method suggested by Void Singer above does not correspond with what is actually used.) My belief is that the generation is originally done in doubles rather than floats and then cast to a float. That suggests that more than 24 bits of random information are available (just short of 5 bits worth in the exponent (53 - 23 = 30 ~= 2^5) would indicate a bit under 28 bits in total) but some care is needed to extract them properly. -- Natasja Kiranov 22:56, 27 September 2013 (PDT)
It appears that, at least at one time (I am -- perhaps mistakenly -- going by this source: http://forge.opensimulator.org/gf/project/opensim-viewer/scmsvn/?action=browse&path=%2Flinden_release%2Flinden%2Findra%2Fllcommon%2Fllrand.cpp&view=markup&revision=100 ), this was the case, and that the underlying implementation was to use one of Boost's lagged Fibonacci generators. I can't say that I like the approach, as it does all the arithmetic in doubles and thereby loses bits of information on occasion; the approach suggested by Void Singer, applied to doubles, would be preferable. -- Natasja Kiranov 22:56, 27 September 2013 (PDT)
The sentence being discussed reads: "Integers generated with this function cannot contain more than X random bits [...]" where X was 23 according to Void and 24 according to you and me. There is an implicit assumption in it, namely the assumption of uniformity of the resulting distribution. Without that assumption, it's indeed certainly possible that you can get more than 24 bits of information on average depending on the underlying implementation. Actually, to be extra-picky, you can most likely get ~24.000000086 bits of near-uniformly distributed integers using llFrand(0x1000001). Perhaps that sentence should just be deleted and the non-uniformity explained in the preceding one that talks about the float range. --Pedro Oval 10:18, 29 September 2013 (PDT)
You want to use 0x1000002, not 0x1000001; 0x1000001 invokes rounding (I think 0x1000000 will in this case be twice as probable to come up as any other number). If it makes things easier I agree, it should be deleted. As to the distribution: to give an even distribution of integers, as the exponent portion of the float increases, the probability for that range doubles. If that is the case, if you pick a really large number, llFrand(1.7014118346E+38), half the values will be greater than or equal to 8.5070591730E+37, of which there are only 8388608 distinct choices, which leads to a huge likelihood of repeats. Or if the uniform probability stops at 0x1000000 to keep high-probability repeats from happening, then you end up with a nonuniform probability. I don't know how to document this nicely without taking the reader quite far down the rabbit hole. -- Strife (talk|contribs) 22:15, 29 September 2013 (PDT)
Right, sorry. To give a uniform distribution of integers, you certainly can't use more than llFrand(0x1000002), and probably not more than llFrand(0x1000000) either, or there will be integers you won't ever get or that you will get more often than the rest. But let's not miss the forest of utility to the users for the trees of academic correctness; the intention is to add a caveat for those who think they can do (integer)llFrand(999999999) for channel numbers, thinking they are getting 999,999,999 possible channels an attacker would need to search through, when they are getting only ~32M; they need two llFrand calls to get the whole range. For example, in that case (integer)llFrand(2997)*333667+(integer)llFrand(333667) would do the job (see the sketch at the end of this thread). I think that it's better to remove that sentence and use one similar to the preceding one. After rereading, I think it deserves its own bullet point at the main level, not as part of the one that discusses how mag is inherently converted to float. Here's my proposed wording:
  • It should be remembered that when passing llFrand an integer as the mag, it will be implicitly typecast to a float.
  • Integers outside the range [-2^24, +2^24] may not be accurately represented (this is an inherent limitation of the float type); for example, outside that range no odd integers will appear. For that reason, when converting the resulting float to integer, it is not possible to generate more than 2^25 different integers. Two llFrand calls may be needed to obtain the desired range, using a scheme like (integer)llFrand(a)*b+(integer)llFrand(b) with both a and b less than 2^24, where the product of a and b equals the target number of values.
(the 2^25 comes from 2^24 in [0,2^24) + 2^23 in [2^24,2^25) + ... + 2^17 in [2^30,2^31), which sums up to 2^25-2^17=33423360, but the sentence holds true). I'm in doubt about including the last sentence about the method to extend the range; it might sound too confusing, though it gives a practical solution. (Edit: a reference to an example would be a solution; something like: "Two llFrand calls may be needed to obtain the desired range; see examples below"). --Pedro Oval 03:53, 30 September 2013 (PDT)
Sorry, the above is incorrect. There are 2^23 integers in the floats in [2^25,2^26), not 2^22, and so on. They add up to 9*2^23 = 2^26+2^23, not to under 2^25.
I'll make the above change (with that correction) in a week if there are no objections. --Pedro Oval 12:14, 3 October 2013 (PDT)
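
For reference, here is a sketch of the two-call scheme Pedro describes above; the helper name frand2 is made up for illustration, and it assumes both factors are kept below 2^24 so each (integer) cast is exact.

// Hypothetical helper, not from the article: returns an integer in [0, a*b)
// using two llFrand calls, with both a and b below 2^24.
integer frand2(integer a, integer b) {
   return (integer)llFrand(a) * b + (integer)llFrand(b);
}

// Example from the discussion above: a channel in [0, 999999999),
// since 2997 * 333667 = 999999999.
// integer chan = frand2(2997, 333667);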

Examples

I've been looking at the example scripts provided for llFrand. The first 2 almost identical scripts were in a non-compilable state. And do we need both of them? Or was that perhaps more an exercise in how to edit the page to place 2 scripts side by side?

The 3rd example (tossing a coin) seems to be a jumble of 2 ideas, and has extensive comments some of which don't relate to the actual application. And do we really need to know the author and earlier poster?

Do we need the last example at all? The comment under func_footer tells us to use an integer cast, and explains why. So we don't need a rambling example to prove it (IMHO).

The range terminology employed, [0.0, mag), is brilliantly compact ... but really only for those with a mathematical background. Even spotting that the opening and closing parentheses differ takes some doing for the untrained eye. Keep this terminology by all means, but I do think Joe Public needs to have it spelled out without having to click on a minuscule reference icon.

I have spent 3 years helping on various scripting forums in-world, and often hear newcomers complain that they don't understand the Wiki. My extensive scrutiny of Wiki pages over the past 6 weeks has made me realise why that complaint is so common. I have so far fixed (yes, fixed, not tweaked for the sake of it, tho' I've done a bit of that too, LOL) nearly 30 example scripts most of which wouldn't even compile. I think we lose sight of our target audience in our headlong flight towards proving our technical prowess, or our ability to do a quick off-the-cuff edit.

Omei Qunhua 02:30, 21 January 2013 (PST)

I honestly think you've misjudged the majority of editors... most of whom simply happened to notice some detail was wrong or missing and fixed just that detail... their contributions are no less important; in fact I'd guess they are more important, since there are many times more of those volunteers than of the more obsessive ones. Do mistakes get made on occasion? Of course, but overall the system improves at a much more rapid rate if you train them up from their mistakes rather than beating them down for them. Even old pros make mistakes; it happens. We've already been down the road of only allowing 'perfect' edits... it ended up with a very empty/outdated wiki because the inexperienced didn't bother any more, and the experienced mostly quit because they were doing it all themselves and unable to keep up. Please, ease up a little, forgive and forget, more flies with honey and all that = )

I find it kind of amusing (not sarcastically) that you'd complain about not spelling things out to the 'Joe Public' user, while consistently replacing references to PUBLIC_CHANNEL (which is auto-linked to its value) with 0. If your goal is to make things explicit for those unfamiliar with coding, named constants are much more newbie-friendly.

I (and I'm sure quite a few people we'll never meet) appreciate the corrections and additions you have made, but the negative editorializing is a big turn-off. It's a missed opportunity to encourage others to participate. Even bad edits can be helpful by bringing attention to pages that lack care, and they provide a starting point to build from.
-- Void (talk|contribs) 21:52, 22 January 2013 (PST)

Practically all my criticism of examples on the Wiki pages has been aimed at the technically competent but utterly careless. Newbies tend to take much more care as they are scared of getting things wrong, something we could all take to heart. To my mind, leaving non-compilable examples on primary Wiki pages is bordering on the unforgivable. I did it once and was mortified.
I find this amusing (in a slightly sarcastic way) ... on the page Talk:State, when discussing the use of PUBLIC_CHANNEL vs 0, you say "I have no preference either way" but now you contradict yourself, LOL. I already explained my reasoning for preferring 0, in this special case only, on that page.
Omei Qunhua 03:17, 24 January 2013 (PST)

The two side-by-side examples are needlessly repetitive, but they don't cost us anything, nor do they detract from the article. I don't see any point in removing them. Keep them, since they drive the point home with repetition.

The last example demonstrates not only how not to do it, but also gives you the test bed to demonstrate it. People will write bad rand functions - we can't stop this - but we can help them write better rand functions by giving them some way to test them.

I happen to agree, there shouldn't be examples that don't compile; "shouldn't be" and "are no" are not the same things. Like spelling errors and grammar errors, these need to be picked up by our reviewers. I'm thinking it might be fun to try to port lslint to JavaScript and run it via Greasemonkey. Let me stress this point: as reviewers we are failing to properly police all contributions.

We shouldn't overlook that our editors have different skill sets; some are better at some things than others, and we cannot expect to get the same quality, quantity and type of work out of each and every one of them. Most important is that they are volunteers; the only thing we can force them to do is to stay away. However, if we tell everyone that all they need to do is leave the wiki a better place than when they found it, we get (hopefully) progressively better content. As someone who has written many a stub article, it is important to have a seed, a germ for the article to grow from. Without some framework, some starting point, people don't know where to start; they don't know what to do. There are a bunch of articles that have never been written, no stubs created, nothing has happened with them. Many a stub, on the other hand, has gone on to attract new content once it was created.

There is a saying about people who are promoted within their organization, that they have been promoted to their level of incompetence, which is to say, they have been promoted to the point where their incompetence is so glaring that they can't be promoted further. On the wiki that idea doesn't work; we have all been promoted to the level of senior editor, well and truly beyond our levels of competence. As such, some things get published by people who shouldn't have been let out of the mail room. It also means that those who are competent have to have the additional skill set to get along with the people from the mail room. But like any company with its many departments, the wiki has a place for all types (if only they could recognize where they fit best ~_~).

If it's a choice between no content and bad content, I'll choose bad content. Especially if it's so bad that it motivates people to correct and improve it. It is way easier to turn terrible content into better content than it is to turn no content into content. Originality is hard.

Part of the problem is we do not agree on what the wiki should be. We don't understand nor share each others vision. I don't think we are capable. If we judge each other, fight over the wiki, we will drive each other off and those not driven off will burn out. -- Strife (talk|contribs) 21:37, 25 January 2013 (PST)

Masterful diplomacy Strife :))
Usually the easiest way to turn bad content into better is to rewrite it, preserving the seed idea.
It would be interesting to hear your views on PUBLIC_CHANNEL and 0 for use in llSay etc. as discussed on the page Talk:State
Please excuse me tweaking your grammar.
Omei Qunhua 14:39, 26 January 2013 (PST)

Just for clarity's sake, I still personally have no preference for either mode. I just found your action inconsistent with the stated goal... I also somehow missed your response over on Talk:State, so I'll continue over there = )
-- Void (talk|contribs) 15:21, 27 January 2013 (PST)