Performance Testers


Metric-based Test Framework

The metric-based test framework can be used to create test metrics on all sorts of data gathered by the software at run time. The base implementation offers a variety of services to save and load data, parse LLSD results, compare current performance against a baseline, and produce test reports.

The interface is implemented in llcommon/llmetricperformancetester.h. Two abstract classes are available:

  • LLMetricPerformanceTesterBasic
  • LLMetricPerformanceTesterWithSession
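
For orientation, below is a rough, abridged sketch of how these two classes relate. The method names are taken from the code sample later on this page; the authoritative declarations live in llcommon/llmetricperformancetester.h, so treat this as an approximation rather than the actual interface.

#include <fstream>

class LLSD ;  // the viewer's structured-data type used for test records

// Rough sketch only: see llcommon/llmetricperformancetester.h for the real declarations.
class LLMetricPerformanceTesterBasic
{
protected:
    // derived testers dump their current measurements into an LLSD record;
    // the framework handles saving/loading that data and the default
    // metric-by-metric, label-by-label comparison and report generation
    virtual void outputTestRecord(LLSD* sd) = 0 ;
};

// variant for testers that implement their own session-based analysis:
// custom loading of logged sessions and custom comparison/reporting
class LLMetricPerformanceTesterWithSession : public LLMetricPerformanceTesterBasic
{
protected:
    class LLTestSession ;  // per-session test data, defined by the derived tester
    virtual LLTestSession* loadTestSession(LLSD* log) = 0 ;
    virtual void compareTestSessions(std::ofstream* os) = 0 ;
};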

LLMetricPerformanceTesterBasic

The abstract class LLMetricPerformanceTesterBasic defines the general metric-based test framework. It provides most of the work needed for metric-based automated testing, such as data saving, data loading, LLSD parsing, (default) performance comparison, and final test report generation, so that creating your own tester requires only a small amount of code.

Below is the detailed doc for this class.


LLMetricPerformanceTesterWithSession


Creating/Adding Your Own Test Metrics

You need to create your own metric-based tester derived from the class LLMetricPerformanceTester. The key steps are as follows:

  • declare your own tester derived from LLMetricPerformanceTester;
  • in the constructor, insert all metric strings you will use in this tester (the insertion order does not matter), and indicate whether you want to use the default performance analyzer to generate the final test report;
  • collect the test data in your own way;
  • define the abstract virtual function outputTestRecord(LLSD* sd) to output your test data to the LLSD structure. Everything output to the LLSD in this function will be saved to the log file.
  • the final test report contains the following columns: metric_string, baseline_value, target_value, target_value - baseline_value, and 100 * target_value / baseline_value. The default performance analyzer generates this report by comparing the test results against the baseline metric by metric, label by label. If you use it, leave the two abstract virtual functions loadTestSession(LLSD* log) and compareTestSessions(std::ofstream* os) empty and you are done. If you want to generate the final test report your own way, implement it in these two functions: loadTestSession(LLSD* log) defines how to load/parse test data from the LLSD structure (the same structure you produce in outputTestRecord(LLSD* sd)), and compareTestSessions(std::ofstream* os) defines how to compare the test results and output the comparison.
  • that's it!

Below is the detailed code sample:

class YourOwnTester : public LLMetricPerformanceTester
{
public:
    YourOwnTester() ;

protected:
    /*virtual*/ void outputTestRecord(LLSD* sd) ;
    /*virtual*/ LLMetricPerformanceTester::LLTestSession* loadTestSession(LLSD* log) ;
    /*virtual*/ void compareTestSessions(std::ofstream* os) ;

private:
    //
    //need to define this only when you use your own way to analyze the performance.
    //
    class YourOwnSession : public LLTestSession
    {
    };
};

YourOwnTester::YourOwnTester() : LLMetricPerformanceTester(your-unique-tester-name-string, use-default-performance-analysis-or-not)
{
    //insert all metric strings used in the tester.
    addMetricString(metric-string-1) ;
    addMetricString(metric-string-2) ;
    addMetricString(metric-string-3) ;
    ...

    //your own other initializations
}
LLMetricPerformanceTester::LLTestSession* YourOwnTester::loadTestSession(LLSD* log) 
{
    //
    //return NULL unless you pass false to "use-default-performance-analysis-or-not".
    //

    //
    //if you define your own way to analyze the performance,
    //the format of this function will look like:
    //
    YourOwnSession* sessionp = new YourOwnSession(...) ;
    ...
    
    BOOL in_log = (*log).has(mCurLabel) ;
    while(in_log)
    {
        LLSD::String label = mCurLabel ;
        incLabel() ;
        in_log = (*log).has(mCurLabel) ;

        //insert your code here to load and process a test record
    }

    return sessionp ;
}

void YourOwnTester::compareTestSessions(std::ofstream* os)
{
    //
    //leave this empty unless you pass false to "use-default-performance-analysis-or-not".
    //

    //
    //if you define your own way to analyze the performance,
    //the format of this function will look like:
    //
    YourOwnSession* base_sessionp = dynamic_cast<YourOwnSession*>(mBaseSessionp) ;
    YourOwnSession* current_sessionp = dynamic_cast<YourOwnSession*>(mCurrentSessionp) ;
    if(!base_sessionp || !current_sessionp)
    {
        llerrs << "type of test session does not match!" << llendl ;
    }

    //insert your code here to do performance analysis
}

void YourOwnTester::outputTestRecord(LLSD* sd)
{
    //
    //insert your own code to output test results to sd
    //format like this
    (*sd)[mCurLabel][metric-string-1] = a ;
    ...
} 

You may refer to the class LLTexturePipelineTester as a sample if you run into problems creating your own metric-based tester.

How To Run Metric-based Automated Test

Running a metric-based test is similar to running the default automated test; follow the steps below:

  • Add the following parameters to the command line of the baseline viewer: -logmetrics -autologin -relaysession
  • Copy the file metric.slp, located in your secondlife log file folder, to metric_baseline.slp.
  • Add the following parameters to the command line of the viewer you want to compare against the baseline: -logmetrics -analyzeperformance -autologin -relaysession

You can find the test results in the file metric_report.csv located in your secondlife log file folder.
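
For illustration, assuming the viewer executable is launched as secondlife (the actual executable name and the location of the secondlife log file folder depend on your platform and installation), the sequence above might look like this:

# 1. run the baseline viewer and log the metrics
secondlife -logmetrics -autologin -relaysession

# 2. save the logged data as the baseline (<log-folder> stands for your secondlife log file folder)
cp <log-folder>/metric.slp <log-folder>/metric_baseline.slp

# 3. run the viewer you want to compare and analyze it against the baseline
secondlife -logmetrics -analyzeperformance -autologin -relaysession

# 4. read the comparison in <log-folder>/metric_report.csv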