Performance Testers

Metric-based Test Framework

The metric-based test framework can be used to create test metrics on all sorts of data gathered by the software at runtime. The base implementation offers a variety of services to save and load data, parse LLSD results, compare current performance against a baseline, and produce test reports.

The interface is implemented in llcommon/llmetricperformancetester.h. Two abstract classes are available:

  • LLMetricPerformanceTesterBasic
  • LLMetricPerformanceTesterWithSession

LLMetricPerformanceTesterBasic

The abstract class LLMetricPerformanceTesterBasic defines the general metric-based test framework.

This class can be inherited from directly for simple data gathering; it provides predefined methods to save, load, and compare the results of performance sessions.

Below is the detailed documentation for this class.

/**
 * @class LLMetricPerformanceTesterBasic
 * @brief Performance Metric Base Class
 */
class LL_COMMON_API LLMetricPerformanceTesterBasic
{
public:
    /**
     * @brief Creates a basic tester instance.
     * @param[in] name - Unique string identifying this tester instance.
     */
	LLMetricPerformanceTesterBasic(std::string name);
	virtual ~LLMetricPerformanceTesterBasic();

    /**
     * @return Returns true if the instance has been added to the tester map.
     * This should be checked after creating a tester instance to know whether the tester is correctly registered.
     * A tester might not be added to the map if another tester with the same name already exists.
     */
    BOOL isValid() const { return mValidInstance; }

    /**
     * @brief Write a set of test results to the log LLSD.
     */
	void outputTestResults() ;
    
    /**
     * @brief Compare the test results.
     * By default, compares the test results against the baseline one by one, item by item, 
     * in the increasing order of the LLSD record counter, starting from the first one.
     */
	virtual void analyzePerformance(std::ofstream* os, LLSD* base, LLSD* current) ;
    
    /**
     * @return Returns the number of the test metrics in this tester instance.
     */
	S32 getNumberOfMetrics() const { return mMetricStrings.size() ;}
    /**
     * @return Returns the metric name at index
     * @param[in] index - Index on the list of metrics managed by this tester instance.
     */
	std::string getMetricName(S32 index) const { return mMetricStrings[index] ;}
    
protected:
    /**
     * @return Returns the name of this tester instance.
     */
	std::string getTesterName() const { return mName ;}
    
    /**
     * @brief Insert a new metric to be managed by this tester instance.
     * @param[in] str - Unique string identifying the new metric.
     */
	void addMetric(std::string str) ;

    /**
     * @brief Compare test results, provided in 2 flavors: compare integers and compare floats.
     * @param[out] os - Formatted output string holding the compared values.
     * @param[in] metric_string - Name of the metric.
     * @param[in] v_base - Base value of the metric.
     * @param[in] v_current - Current value of the metric.
     */
	virtual void compareTestResults(std::ofstream* os, std::string metric_string, S32 v_base, S32 v_current) ;
	virtual void compareTestResults(std::ofstream* os, std::string metric_string, F32 v_base, F32 v_current) ;
    
    /**
     * @brief Reset internal record count. Count starts with 1.
     */
	void resetCurrentCount() { mCount = 1; }
    /**
     * @brief Increment internal record count.
     */
	void incrementCurrentCount() { mCount++; }
    /**
     * @return Returns the label to be used for the current count. It's "TesterName"-"Count".
     */
    std::string getCurrentLabelName() const { return llformat("%s-%d", mName.c_str(), mCount) ;}
    
    /**
     * @brief Write a test record to the LLSD. Implementers need to overload this method.
     * @param[out] sd - The LLSD record to store metric data into.
     */
	virtual void outputTestRecord(LLSD* sd) = 0 ;

private:
	void preOutputTestResults(LLSD* sd) ;
	void postOutputTestResults(LLSD* sd) ;

	std::string mName ;                         // Name of this tester instance
	S32 mCount ;                                // Current record count
    BOOL mValidInstance;                            // TRUE if the instance is managed by the map
	std::vector< std::string > mMetricStrings ; // Metrics strings

// Static members managing the collection of testers
public:	
    // Map of all the tester instances in use
	typedef std::map< std::string, LLMetricPerformanceTesterBasic* > name_tester_map_t;	
	static name_tester_map_t sTesterMap ;

    /**
     * @return Returns a pointer to the tester
     * @param[in] name - Name of the tester instance queried.
     */
	static LLMetricPerformanceTesterBasic* getTester(std::string name) ;
    /**
     * @return Returns TRUE if there's a tester defined, FALSE otherwise.
     */
	static BOOL hasMetricPerformanceTesters() { return !sTesterMap.empty() ;}
    /**
     * @brief Delete all testers and reset the tester map
     */
	static void cleanClass() ;

private:
    // Add a tester to the map. Returns false if adding fails.
	static BOOL addTester(LLMetricPerformanceTesterBasic* tester) ;    
};
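
As an illustration, here is a sketch of how calling code might retrieve a registered tester through the static tester map and write its current results to the log LLSD. The calling context is hypothetical; only the names used below come from the class declaration above.

// Hypothetical call site: look up a tester registered under a known name
// and append its current test record to the log LLSD.
if (LLMetricPerformanceTesterBasic::hasMetricPerformanceTesters())
{
    LLMetricPerformanceTesterBasic* tester =
        LLMetricPerformanceTesterBasic::getTester("your-unique-tester-name-string");
    if (tester && tester->isValid())
    {
        tester->outputTestResults(); // record is labeled "TesterName"-"Count"
    }
}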

LLMetricPerformanceTesterWithSession

The abstract class LLMetricPerformanceTesterWithSession is derived from the previous one and provides an additional session abstraction that allows the definition of ad-hoc comparison methods for reporting.

This class should be used when data needs to be collated and analyzed in specific ways.

Below is the detailed documentation for this class.


/**
 * @class LLMetricPerformanceTesterWithSession
 * @brief Performance Metric Class with custom session 
 */
class LL_COMMON_API LLMetricPerformanceTesterWithSession : public LLMetricPerformanceTesterBasic
{
public:
    /**
     * @param[in] name - Unique string identifying this tester instance.
     */
	LLMetricPerformanceTesterWithSession(std::string name);
	virtual ~LLMetricPerformanceTesterWithSession();

    /**
     * @brief Compare the test results.
     * This loads the base and current sessions and compares them using the abstract
     * virtual methods loadTestSession() and compareTestSessions().
     */
	virtual void analyzePerformance(std::ofstream* os, LLSD* base, LLSD* current) ;

protected:
    /**
     * @class LLMetricPerformanceTesterWithSession::LLTestSession
     * @brief Defines an interface for the two abstract virtual functions loadTestSession() and compareTestSessions()
     */
	class LLTestSession
        {
        public:
            virtual ~LLTestSession() ;
        };
    
    /**
     * @brief Convert an LLSD log into a test session.
     * @param[in] log - The LLSD record
     * @return Returns the record as a test session
     */
	virtual LLMetricPerformanceTesterWithSession::LLTestSession* loadTestSession(LLSD* log) = 0;
    
    /**
     * @brief Compare the base session and the target session. Assumes base and current sessions have been loaded.
     * @param[out] os - The comparison result as a standard stream
     */
	virtual void compareTestSessions(std::ofstream* os) = 0;
    
	LLTestSession* mBaseSessionp;
	LLTestSession* mCurrentSessionp;
};
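
To make the session mechanism concrete, below is a minimal sketch of a tester derived from LLMetricPerformanceTesterWithSession, assuming the default analyzePerformance() drives it. The class name, session fields, metric string, and the mData counter are illustrative; only the overridden methods and the mBaseSessionp/mCurrentSessionp members come from the interface above.

class YourSessionTester : public LLMetricPerformanceTesterWithSession
{
public:
    YourSessionTester()
    :   LLMetricPerformanceTesterWithSession("your-session-tester-name"),
        mData(0.f)
    {
        addMetric("metric-string-1");
    }

protected:
    // Illustrative session object holding the data collated from one log
    class YourSession : public LLTestSession
    {
    public:
        YourSession() : mTotal(0.f) {}
        F32 mTotal;
    };

    /*virtual*/ void outputTestRecord(LLSD* sd)
    {
        // Pack the gathered data into the LLSD record, as with the basic tester
        (*sd)[getCurrentLabelName()]["metric-string-1"] = (LLSD::Real)mData;
    }

    /*virtual*/ LLTestSession* loadTestSession(LLSD* log)
    {
        // Collate the LLSD records of one log into a session object
        YourSession* sessionp = new YourSession();
        // ... accumulate values from (*log) into sessionp->mTotal ...
        return sessionp;
    }

    /*virtual*/ void compareTestSessions(std::ofstream* os)
    {
        // Assumes analyzePerformance() has already loaded both sessions
        YourSession* base = dynamic_cast<YourSession*>(mBaseSessionp);
        YourSession* current = dynamic_cast<YourSession*>(mCurrentSessionp);
        if (base && current)
        {
            compareTestResults(os, "metric-string-1", base->mTotal, current->mTotal);
        }
    }

private:
    F32 mData; // illustrative perf counter updated by your own collection code
};

The default analyzePerformance() of this class loads the base and current logs with loadTestSession() and then calls compareTestSessions(), so these two methods are the only comparison plumbing you need to provide.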

Creating/Adding Basic Test Metrics

First, you need to create a metric-based tester class, derived from the LLMetricPerformanceTesterBasic class, that will hold your performance data. The key steps are as follows.

  • Declare your own tester derived from LLMetricPerformanceTesterBasic.
  • In the constructor, declare all the metrics you will use in this tester; the declaration order does not matter.
  • Collect the test data in your own way. The usual approach is to define an update() method that gets called and gathers the relevant performance data.
  • Define the abstract virtual method outputTestRecord(LLSD* sd) to output your test data to the LLSD structure. Everything written to the LLSD in this method is saved to the log file metric.slp in the log folder.
  • The final test report contains the following columns: metric_string, baseline_value, target_value, target_value - baseline_value, and 100 * target_value / baseline_value.

Below is a code example:

class YourOwnTester : public LLMetricPerformanceTesterBasic
{
public:
    YourOwnTester() ;
    ~YourOwnTester() ;

    // This has to get called from your code to update your perf data.
    // Note: you can create as many updateXxx() variations as your perf system requires
    void update(const S32 d1, const F32 d2) ;

protected:
    // This is required. It tells the class how to pack the data in an LLSD stream
    /*virtual*/ void outputTestRecord(LLSD* sd) ;

private:
    // Define the relevant perf gathering variables. 
    // Note: the default compare methods only support S32 and F32 comparisons. Overload compareTestResults() if you need to carry something else.
    S32 data1;
    F32 data2;
    ...
};

YourOwnTester::YourOwnTester() : LLMetricPerformanceTesterBasic("your-unique-tester-name-string")
{
    // Declare all the metrics used in the tester.
    addMetric("metric-string-1") ;
    addMetric("metric-string-2") ;
    ...

    // Your own initializations
    data1 = 0;
    data2 = 0.0f;
    ...
}

YourOwnTester::~YourOwnTester()
{
    // You likely need to invalidate the static pointer holding that test instance
    sYourTester = NULL;
}

void YourOwnTester::outputTestRecord(LLSD* sd)
{
    std::string currentLabel = getCurrentLabelName();
    // Insert your own code to output test results to sd,
    // formatted like this:
    (*sd)[currentLabel]["metric-string-1"] = (LLSD::Integer)data1;
    (*sd)[currentLabel]["metric-string-2"] = (LLSD::Real)data2;
    ...
} 

void YourOwnTester::update(const S32 d1, const F32 d2)
{
    // Do something with the input data to update your perf data
    data1 += d1;
    data2 += d2;
    ...

    // *Important* You need to call outputTestResults() when some perf gathering condition is met
    // Otherwise your data might never be saved to the log.
    if (condition)
    {
        outputTestResults();
    }
}

You may check the class LLImageCompressionTester as an example.

How To Run Metric-based Automated Test

Follow the steps below:

  • Add the following parameters to the command line for the baseline viewer: -logmetrics -autologin -relaysession
  • Copy the file metric.slp, located in your Second Life log folder, to metric_baseline.slp.
  • Add the following parameters to the command line for the viewer you want to compare against the baseline viewer: -logmetrics -analyzeperformance -autologin -relaysession

You can find the test results in the file metric_report.csv, located in your Second Life log folder.
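
For example, on Windows the two runs might be launched as shown below. The executable path is illustrative and depends on your install location; only the parameters come from the steps above.

REM Baseline run (afterwards, copy metric.slp to metric_baseline.slp in the log folder)
"C:\Program Files\SecondLifeViewer\SecondLife.exe" -logmetrics -autologin -relaysession

REM Comparison run, analyzed against the baseline
"C:\Program Files\SecondLifeViewer\SecondLife.exe" -logmetrics -analyzeperformance -autologin -relaysession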