Efficiency Tester

From Second Life Wiki
Revision as of 04:48, 18 October 2007

Q1: Want to see how small some code is?

A: Add three copies of your code to a script, call llGetFreeMemory to count the free space, then delete the copies one at a time. After each deletion you should see a consistent saving in free space; that saving is the code-space cost of one copy of your code.
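A minimal sketch of this technique might look like the following. The code under test here (a throwaway integer calculation) is only a placeholder; substitute your own, renaming any declared variables in each copy so the copies don't collide:

```lsl
default
{
    state_entry()
    {
        // copy 1 of the code under test
        integer a = 1; a = a * 2;
        // copy 2 of the code under test
        integer b = 1; b = b * 2;
        // copy 3 of the code under test
        integer c = 1; c = c * 2;

        llOwnerSay((string) llGetFreeMemory() + " free bytes");
    }
}
```

Save and run, note the free bytes, delete one copy, save and run again, and repeat. The free space should grow by a consistent amount per deleted copy; that amount is the code-space cost of one copy.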

Q2: Want to see how fast some code is?

A: Wrap your code in a harness like the example below, which runs it time and again and measures the consequent change in llGetTimestamp.

// IMPORTANT:
// Only perform tests in an empty region.
// To reduce contamination, be sure to wear no attachments.
// Preferably run tests in a private sim with no one else on it.
// Don't move while performing the test.
// There is a margin of error, so run the tests multiple times to estimate it.

integer time() { // count milliseconds since the day began
    string stamp = llGetTimestamp(); // "YYYY-MM-DDThh:mm:ss.ff..fZ"
    return (integer) llGetSubString(stamp, 11, 12) * 3600000 + // hh
           (integer) llGetSubString(stamp, 14, 15) * 60000 +  // mm
           llRound((float)llGetSubString(stamp, 17, -2) * 1000000.0)/1000; // ss.ff..f
}
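To see how the arithmetic works out, take a made-up timestamp of "2007-10-18T04:48:30.250Z"; time() would compute:

```lsl
// hh = 4  → 4 * 3600000  = 14400000
// mm = 48 → 48 * 60000   =  2880000
// "30.250" * 1000000.0 = 30250000; llRound(30250000) / 1000 = 30250
// total: 14400000 + 2880000 + 30250 = 17310250 milliseconds since midnight
```

Note the function reads only hours, minutes, and seconds, so results that straddle midnight would wrap; for short tests that is rarely a concern.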

default {
    state_entry() {

        // always measure how small, not only how fast

        llOwnerSay((string) llGetFreeMemory() + " free bytes of code at default.state_entry");

        // always take more than one measurement

        integer repeateds;
        for (repeateds = 0; repeateds < 3; ++repeateds)
        {

            // declare test variables

            float counter;

            // declare framework variables

            float i = 0;
            float j = 0;
            integer max = 10000; // 2ms of work takes 20 seconds to repeat 10,000 times, plus overhead

            // begin

            float t0 = time();

            // loop to measure elapsed time to run sample code

            do
            {

              // test once or more

              counter += 1;
      
            } while (++i < max);

            float t1 = time();

            // loop to measure elapsed time to run no code

            do ; while (++j < max);

            float t2 = time();

            // report average time elapsed per run

            float elapsed = ((t1 - t0) - (t2 - t1))/max;
            llOwnerSay((string) elapsed + "+-1 milliseconds passed on average in each of");
            llOwnerSay((string) max + " trials of running the code in the loop");
        }
    }
}

Launched by Xaviar Czervik, then modified by Strife Onizuka, then further edited as the history of this article shows.

Try the empty test of deleting the { counter += 1; } source line to see the astonishing inaccuracy of this instrument. The time cost of no code, as measured here, isn't always zero!

See the LSL Script Efficiency article for a less brief discussion. Please understand, we don't mean to argue for many different ways to measure the costs of code. Here we mean to build a consensus on best practices, in one deliberately short article written from a neutral point of view.