Code Racer

From Second Life Wiki
Revision as of 18:18, 19 October 2007 by Ppaatt Lynagh (talk | contribs) (link with Code Sizer - count bytes of code, and link with Efficiency Tester - count milliseconds of run time)

Introduction

Q: Want to see whether one version of code usually runs faster than another?

A: Run your code inside a test harness such as the example here, racing both versions again and again. Along the way, the harness declares a winner for each race, measuring each run by the change in llGetTimestamp.
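
The idea can be sketched outside LSL as well. Here is a minimal Python analogue of such a harness; the names race, version0, and version1 are illustrative and not part of the script below:

```python
import time

def race(version0, version1, rounds=11):
    """Race two zero-argument callables; count which finishes faster each round."""
    wins = [0, 0]
    for i in range(rounds):
        # Alternate the running order so fixed per-round overhead
        # does not always favor the same contestant.
        order = (0, 1) if i % 2 == 0 else (1, 0)
        times = {}
        for v in order:
            start = time.perf_counter()
            (version0 if v == 0 else version1)()
            times[v] = time.perf_counter() - start
        wins[0 if times[0] < times[1] else 1] += 1
    return wins

# Example: an empty body races a small loop, so version 0 should win most rounds.
wins = race(lambda: None, lambda: sum(range(1000)))
```

The LSL script below follows the same shape, but reads its clock from llGetTimestamp rather than a monotonic timer.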

Sample Results


14255 == llGetFreeMemory at default state entry
Click No = Running to stop this script after you've seen enough ...
2007-10-17T04:38:07.893983Z

37.103798 ms on average won 8 times for version 1
39.045620 ms on average won 3 times for version 0
2007-10-17T04:38:09.348014Z

38.025761 ms on average won 6 times for version 0
48.356247 ms on average won 5 times for version 1
2007-10-17T04:38:10.776157Z

38.604069 ms on average won 7 times for version 1
42.263889 ms on average won 4 times for version 0
2007-10-17T04:38:12.292078Z

39.734879 ms on average won 51 times for version 0
40.321190 ms on average won 50 times for version 1
2007-10-17T04:38:25.212252Z

38.497734 ms on average won 52 times for version 1
38.419685 ms on average won 49 times for version 0
2007-10-17T04:38:37.462428Z

40.511398 ms on average won 60 times for version 0
41.545677 ms on average won 41 times for version 1
2007-10-17T04:38:50.413441Z

41.040203 ms on average won 523 times for version 1
40.605671 ms on average won 478 times for version 0
2007-10-17T04:40:58.638995Z

40.874664 ms on average won 501 times for version 0
40.492779 ms on average won 500 times for version 1
2007-10-17T04:43:08.773074Z

40.991032 ms on average won 507 times for version 1
41.352806 ms on average won 494 times for version 0
2007-10-17T04:45:18.649320Z
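
Note the win totals in each report above: 11, 101, 1001. Each burst of scale races yields scale wins plus, unless the time sums tie exactly, one extra win from the tie-breaker, and scale grows tenfold after every three bursts. A Python sketch of that schedule (burst_schedule is an illustrative name):

```python
def burst_schedule(bursts=9):
    """Yield (scale, expected win total) per report: scale races + 1 tie-break win."""
    scale = 10
    out = []
    for i in range(bursts):
        out.append((scale, scale + 1))
        if i % 3 == 2:       # after every three bursts at one scale ...
            scale *= 10      # ... the scale grows tenfold
    return out
```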

Code

// http://wiki.secondlife.com/wiki/Code_Racer

version0()
{
}

version1()
{
}

// Count the wins.

integer wins0;
integer wins1;

// Sum the times.

float tsum0;
float tsum1;

// Start up.

startup()
{
    llOwnerSay("");

    llOwnerSay((string) llGetFreeMemory() +
        " == llGetFreeMemory at default state entry");

    llOwnerSay("Click No = Running to stop this script after you've seen enough ...");
    llOwnerSay(llGetTimestamp());
}

// Count time elapsed since the minute began.

float elapsed()
{
    string chars = llGetTimestamp(); // "YYYY-MM-DDThh:mm:ss.ff..fZ"
    return (float) llGetSubString(chars, 17, -2); // "ss.ff..f"
//  return llGetSubString(chars,
//      llStringLength("YYYY-MM-DDThh:mm:"),
//      -1 - llStringLength("Z"));
}

// Less time wins; on a tie, odd/even decides.

integer chooseWinner(float tv0, float tv1, integer runneds)
{
    if (tv0 < tv1)
    {
        return 0;
    }
    else if (tv1 < tv0)
    {
        return 1;
    }
    else // if (tv0 == tv1) // rare
    {
        return (runneds & 1);
    }
}

// Count the wins and sum the times.

declareWinner(float tv0, float tv1, integer runneds)
{

    // See negative delta time as rollover of 60 seconds per minute.
    
    if (tv0 < 0) { tv0 += 60; }
    if (tv1 < 0) { tv1 += 60; }

    // Count the wins.

    integer winner = chooseWinner(tv0, tv1, runneds);
    wins0 += (1 - winner);
    wins1 += winner;

    // Sum the times.

    tsum0 += tv0;
    tsum1 += tv1;
}

// Run one race between two versions of code.

runRace(integer runneds)
{
    if (runneds & 1)
    {
        float t0 = elapsed();
        version1();
        float t1 = elapsed();
        version0();
        float t2 = elapsed();

        declareWinner(t2 - t1, t1 - t0, runneds); // version0 ran second
    }
    else
    {
        float t0 = elapsed();
        version0();
        float t1 = elapsed();
        version1();
        float t2 = elapsed();

        declareWinner(t1 - t0, t2 - t1, runneds); // version0 ran first
    }
}

// Report the result for one version in a burst of races.

reportSpeed(list averages, list wins, integer winningVersion)
{
    llOwnerSay(llList2String(averages, winningVersion) +
        " ms on average won " +
        (string) llList2String(wins, winningVersion) + " times" +
        " for version " + (string) winningVersion);
}

// Report the result of a burst of races.

reportRace(integer scale)
{
    list wins = [wins0, wins1];
    
    list averages = [
        (1000.0 * tsum0) / scale,
        (1000.0 * tsum1) / scale];

    llOwnerSay("");
    integer winningVersion = (wins0 <= wins1); // bias to 1
    reportSpeed(averages, wins, winningVersion);
    reportSpeed(averages, wins, 1 - winningVersion);
    llOwnerSay(llGetTimestamp());
}

// Run several bursts of races.

raceRepeatedly(integer scale)
{
        
    // Repeat any measurement at least three times.
     
    integer repeateds;
    for (repeateds = 0; repeateds < 3; ++repeateds)
    {
        
        // Run a burst of races.

        wins0 = wins1 = 0;
        tsum0 = tsum1 = 0.0;
        
        integer runneds;
        for (runneds = 0; runneds < scale; ++runneds)
        {
            runRace(runneds);
        }
        
        // Resolve near ties in favour of lesser average.
        
        wins0 += (tsum0 < tsum1);
        wins1 += (tsum1 < tsum0);
        
        // Report frequently to pacify the human operator.
        
        reportRace(scale);
    }
}

// Race to measure relative run-time cost of multiple versions of code.
// Produce unreasonable answers when the run-time cost measured equals or exceeds 60 seconds.

default
{
    state_entry()
    {
        startup();
        integer scale = 10;
        while (TRUE)
        {
            raceRepeatedly(scale);
            scale *= 10;
        }
    }
}
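
Because elapsed() reads only the seconds field of the timestamp, a race that straddles a minute boundary produces a negative delta; declareWinner repairs it by adding 60. A Python sketch of those two helpers (the function names are illustrative), valid only while a single run stays under one minute:

```python
def seconds_field(timestamp):
    """Extract the fractional seconds from 'YYYY-MM-DDThh:mm:ss.ff..fZ',
    mirroring llGetSubString(chars, 17, -2) in the script."""
    return float(timestamp[17:-1])

def fix_rollover(delta):
    """A negative seconds-only delta means the minute rolled over; add 60."""
    return delta + 60.0 if delta < 0 else delta

# A run from 59.9 s to 0.1 s reads as -59.8; the fix recovers 0.2 s.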

Instructions

This instrument quickly and accurately measures the run-time difference between two fragments of code. As presented, it measures the zero difference between the two empty bodies of the functions named version0 and version1. To measure other code, insert the fragments you want to compare into the version0 and version1 functions.

Alternatives & Caveats

This instrument compares two run times quickly, roughly 100X faster than the Efficiency Tester instrument. It runs two fragments of code at a time, giving you immediate results and then progressively more accurate results over time, much as a browser slowly fetches a detailed image from the web. It does burn through a great deal of run time, as much as fifty milliseconds per version raced. Still, immediate feedback and a 100X faster finish let work proceed that would otherwise be too hard and boring to attract enough volunteers.

The Efficiency Tester instrument serves a different purpose: it adds accuracy to a measurement of a range of observed run times in as little time as possible. By simple arithmetic, running through 200 ms 1,000 times necessarily takes at least 200 s, i.e., more than 3 minutes. That instrument runs one fragment of code at a time, but runs that fragment many times to average out any distractions that may hit the server during the run. It can measure 200 ms in as little as 10 minutes, but it gives you no answer at all until after 10,000 runs and no final answer until after 30,000 runs.
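
The lower bound quoted above is simple arithmetic; a quick check, assuming a 200 ms fragment run 1,000 times:

```python
runs = 1000          # runs of the fragment under test
fragment_ms = 200    # hypothetical run time per run, in milliseconds
total_s = runs * fragment_ms / 1000.0
minutes = total_s / 60.0   # comfortably more than 3 minutes
```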

See the LSL Script Efficiency article for much discussion of the Efficiency Tester instrument, including recommendations on how to avoid distracting the server into spending run time running other code in parallel. Those same recommendations apply to any llGetTimestamp harness, including this instrument.

Please do try to find deserted places to run such benchmarks, and remember to turn them off when you finish! Otherwise you will be rudely lagging the sim for the other people sharing it with you, for however long you run the benchmark.

See Also

Scripts

Code Sizer - count bytes of code with perfect accuracy

Efficiency Tester - run as long as you please to count approximate milliseconds of run time with ever more accuracy