Performance Tracking
Objectives
Output files that allow the performance of automated runs of the Viewer to be compared. This is meant to let the open source community (and beyond) focus on performance data, as well as to help automated testing in general.
Requirements
- Must be able to run in automated mode (no human intervention)
- Output one data set per run
- Mark runs with a time stamp and build / version data
- Allow trend tracking: gather a series of runs and graph them
- Allow version-to-version comparison: take a "baseline" and a "current" run and graph them
- Post those graphs in a place people can consult
Current Implementation
We implemented a performance tracking system that hooks into the existing LLFastTimer system.
Here's a rough explanation:
- The instrumented viewer (auto-viewer-benchmark) outputs performance data to a performance.slp file
- This file is an XML-formatted serialization of data collected by the existing LLFastTimer class
- LLFastTimer objects are created and deleted as before
- Deleting an LLFastTimer object updates a record in a static list of timers corresponding to this timer type (that's how LLFastTimer worked before, not specific to auto-viewer-benchmark)
- This list is polled regularly by the LLFastTimerLogThread, which outputs the data (using the XML serializer) to the performance.slp file
- When the app quits, the performance.slp file can be compared to a performance_baseline.slp file (simply a previous run's result, renamed), producing a performance_report.csv file
- That file can then be grabbed and graphed using OpenOffice or another graphing package (see the sketch after this list for a scripted alternative)
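For reference, here is a minimal Python sketch for graphing that report without opening it in OpenOffice. The column names used ("Timer", "Baseline", "Current") are assumptions for illustration only; check the header of an actual performance_report.csv and adjust.

```python
#!/usr/bin/env python
# Sketch: plot performance_report.csv instead of opening it in OpenOffice.
# The column names ("Timer", "Baseline", "Current") are assumptions -- check
# the header of a real report and adjust before using.
import csv
import matplotlib.pyplot as plt

def plot_report(path="performance_report.csv", out="performance_report.png"):
    names, baseline, current = [], [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            names.append(row["Timer"])
            baseline.append(float(row["Baseline"]))
            current.append(float(row["Current"]))

    # Side-by-side bars: baseline vs. current time for each timer.
    xs = range(len(names))
    plt.bar([x - 0.2 for x in xs], baseline, width=0.4, label="baseline")
    plt.bar([x + 0.2 for x in xs], current, width=0.4, label="current")
    plt.xticks(list(xs), names, rotation=90)
    plt.ylabel("time per timer")
    plt.legend()
    plt.tight_layout()
    plt.savefig(out)

if __name__ == "__main__":
    plot_report()
```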
The output is triggered by a set of command line options:
- -logperformance : output the performance.slp file
- -analyzeperformance : output the performance_report.csv file (assumes a performance_baseline.slp exists)
See Client parameters for details on how to run the viewer from the command line.
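As a usage illustration, a small Python driver for fully unattended runs. The viewer binary name ("secondlife"), the assumption that the .slp files land in the current working directory, and the assumption that the two options can be combined in a single run are all guesses to adapt to your setup; only the option names come from the list above.

```python
#!/usr/bin/env python
# Sketch of an unattended baseline/current run pair. Assumptions: the viewer
# binary is called "secondlife" and is on the PATH, the .slp files are written
# to the current working directory, and -logperformance / -analyzeperformance
# can be combined in one run -- adjust all of these for your build.
import shutil
import subprocess

VIEWER = "secondlife"  # assumed binary name

def run_viewer(*extra_args):
    # -logperformance makes the run write performance.slp on exit.
    subprocess.check_call([VIEWER, "-logperformance", *extra_args])

# First run: record a baseline and rename it to the expected file name.
run_viewer()
shutil.move("performance.slp", "performance_baseline.slp")

# Second run: record again and ask the viewer to compare against the baseline,
# which produces performance_report.csv on exit.
run_viewer("-analyzeperformance")
```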
TODO
Ideas on how we could improve on the current implementation:
- Output a second file per run using the same .slp format: we need a separate file for start-up and quit data, i.e. everything that is not "per frame" performance (unique per run, if you prefer), so that the analysis step becomes easier
- Use the same -logperformance command line option to output that file: no need to add another command
- Output the files with a unique build name / timestamp so we can accumulate a set of them in a single folder. The alternative would be to append data to a single run file, but that would make it harder to separate good runs from bogus ones; we may also want to overwrite some runs, or analyze only a subset of runs, etc.
- Write the analysis tools in Python: make them independent of the viewer so that we have more flexibility in how we mix and match things. This also separates the concerns more cleanly: the viewer spits out timing data and is not concerned with analysis. In addition, we could still recover some data and analysis even if the viewer crashes. A rough sketch of such a tool follows.
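Here is a minimal sketch of what such a standalone tool could look like, aimed at trend tracking across a folder of accumulated runs. The XML layout assumed for the .slp files (flat <timer name="..." time="..."/> records) is hypothetical; the real serialization produced by LLFastTimerLogThread would need to be inspected first and the parsing adapted.

```python
#!/usr/bin/env python
# Sketch of a standalone trend tool: aggregate every *.slp run found in a
# folder into one trend.csv (one row per run, one column per timer).
# The XML layout assumed here -- <timer name="..." time="..."/> elements --
# is hypothetical; inspect a real performance.slp and adapt the parsing.
import csv
import glob
import os
import xml.etree.ElementTree as ET

def read_run(path):
    """Return {timer_name: total_time} for one .slp file (hypothetical schema)."""
    totals = {}
    for node in ET.parse(path).getroot().iter("timer"):
        name = node.get("name")
        totals[name] = totals.get(name, 0.0) + float(node.get("time"))
    return totals

def write_trend(folder=".", out="trend.csv"):
    runs = sorted(glob.glob(os.path.join(folder, "*.slp")))
    data = {os.path.basename(p): read_run(p) for p in runs}
    timers = sorted({t for run in data.values() for t in run})
    with open(out, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run"] + timers)
        for run, totals in data.items():
            writer.writerow([run] + [totals.get(t, "") for t in timers])

if __name__ == "__main__":
    write_trend()
```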
Links
- Performance Testers: Describes the framework that can be used to track specific metrics within the viewer