% \newcommand{\floor}[1]{\left\lfloor {#1} \right\rfloor}
% \newcommand{\unix}{{\tt UNIX}}
% \newcommand{\dos}{{\tt DOS}}
% \newcommand{\tildegen}{\protect\raisebox{-0.12cm}{\symbol{'176}}}
\section{Introduction}
\begin{verse}
``In the computer industry, there are three kinds of lies: lies, damn lies, and
benchmarks.''
\end{verse}
This document describes a practical toolkit for evaluating the status
of code. It can be used to create programs that measure performance,
known as benchmarks, as well as various other tests, to execute them,
and to analyze their results. With little effort a user of this
toolkit can detect inefficiencies, bottlenecks, loss of functionality,
and performance degradation, compare techniques, algorithms, and
implementations, and measure progress. A user can then present the
results in a comprehensible way. The information produced also
includes a precise description of the execution environment, so that
the results can be reproduced.

Two directives must be followed in order to write bug-free code:
first, prevent the introduction of bugs in the first place; second,
find bugs as soon as they are introduced. This toolkit can be used to
build a full-blown automatic bug-detection system, also known as
regression tests. We have not used the toolkit in this way so far, but
we may exploit this option in the future.

The toolkit is designed as an independent component in a large-scale
development environment. Final products, and intermediate products
that are required to build other products, are installed into a
dedicated database referred to as {\em ROOT} and pointed to by the
environment variable {\tt \$ROOT}. While the toolkit assumes that
certain files are installed into {\em ROOT}, it allows you to redefine
the location of {\em ROOT}, and of other pieces based on that
location, through command-line options.

The toolkit consists of three parts that correspond to the three
phases required to evaluate code: (i) the creation of a hierarchy of
test or benchmark programs, (ii) the selective and controlled
execution of the programs with various input test cases, and (iii) the
analysis and profiling of their results and the conversion of the data
into more meaningful and comprehensible presentations. The following
sections describe the three parts in detail.

It must be noted that our goal here is not to replace existing tools
that already provide useful functionality for computational
experiments (e.g., gnuplot, make, perl, python). Rather, the goal is
to augment this set with new tools that build on the functionality
already available to provide a comfortable testing environment.
It is worth mentioning the \textbf{ExpLab} tool set for Computational
Experiments by Susan Hert, Lutz Kettner, Tobias Polzin, and Guido
Sch{\"a}fer from the Max-Planck-Institut f{\"u}r Informatik. There
seems to be little overlap between \textbf{ExpLab} and our toolkit;
for the most part they emphasize different aspects.
\section{Creation}
The toolkit was developed as a package in \cgal, the Computational
Geometry Algorithms and Data Structures library, to aid in the
development and maintenance of the planar map modules. As a
\cgal\ module in its own right, it adheres to the generic-programming
paradigm.

The first part consists of a couple of generic C++ classes and the
interface between them and the user code to be evaluated. Some of the
code and documentation is derived from the material developed in the
\textsc{Acs} project, parts of which are published in \cite{cgal:f-ecsca-04}.
The \ccc{Bench_option_parser} class is used to parse command line options,
interpret them, and configure the bench accordingly. The benchmark itself
is performed through the \textbf{()} operator of the \ccc{Bench} class
described next.
The \ccc{Bench} class is parameterized with a model of the
{\em Benchable} concept. This concept serves as the interface between
the measuring device and the operation you wish to measure, referred
to as the target operation hereafter. A model of this concept must
satisfy the following requirements: it must have a default
constructor, and it must provide four methods. The
\ccc{init()} method can be used to initialize data
members before the target operation is carried out. The
\ccc{clean()} method is used to clean up those data members
and any residual data of the operation after the measurement is
completed. The \ccc{sync()} method can be used to synchronize the
target operation with the time-sampling operations. While these three
methods must be provided, they may be empty. The \ccc{op()}
method performs the target operation. This operation is executed in a
controlled loop according to various criteria explained in the next
paragraph.
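
The following is a minimal sketch of a model of the {\em Benchable}
concept; the class name and the measured operation are ours, and only
the default constructor and the four methods are mandated by the
concept:
\begin{verbatim}
// A minimal model of the Benchable concept (hypothetical class name).
#include <cmath>

class Sqrt_benchable {
public:
  void init() {}    // nothing to initialize for this operation
  void clean() {}   // nothing to clean up afterwards
  void sync() {}    // nothing to synchronize
  void op()         // the target operation being measured
  { volatile double r = std::sqrt(3.141592653589793); (void) r; }
};
\end{verbatim}
The \ccc{volatile} qualifier discourages the compiler from optimizing
the call away inside the measuring loop.
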
If the estimated time-duration it takes to complete a single execution
of the target operation is large compared to the granularity of the
timing device, it is sufficient to execute the target operation a
small number of times between sampling the timing device, perhaps even
once. You can set the number of times the target operation is executed
through the \ccc{set_samples()} interface method of the
\ccc{Bench} class. On the other hand, if the estimated
time-duration of a single execution is of the same order as the
granularity of the timing device, increasing the number of executions
increases the accuracy of the measurement. With the
\ccc{set_seconds()} method of the \ccc{Bench}
class, you can set the time slot in seconds allocated for the
measurement. In this case the target operation is executed in a loop while
the number of executions is counted. When the time expires, the
counter is sampled. Then, the target operation is executed again for
as many times as counted, while measuring the time it takes to
complete the sequence.
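
The sketch below shows how such a model might be driven by the
\ccc{Bench} class. The \ccc{set_samples()} and \ccc{set_seconds()}
methods and the \textbf{()} operator are the interface documented
above; the constructor argument is an assumption:
\begin{verbatim}
// Hedged usage sketch; the constructor signature is an assumption.
Bench<Sqrt_benchable> bench("Square root");
bench.set_seconds(10);   // allocate a 10-second time slot, or instead:
// bench.set_samples(1000000); // fix the number of executions directly
bench();                 // run the measurement and print a data record
\end{verbatim}
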
\subsection*{A Simple Benchmark Program}
The basic structure of a useful benchmark program can be very simple:
its tasks are to measure the time it takes to complete a sequence of
executions of the target operation and to present the results in some
useful manner.
Below you can find the listing of a program that measures the time it
takes to compute the square root of $\pi$, and produces the results
shown in figure~\ref{sqrtResults}.
\begin{figure}[!hbp]
\begin{verbatim}
Bench Bench Ops Total Single Num Ops
Name Time Num Ops Time Op Time Per Sec
-------------------------------- -------- -------- -------- -------- --------
Square root 1 14690512 1.0000 0.0000 14690512.0000
\end{verbatim}
\caption{Square-root benchmark results.}\label{sqrtResults}
\end{figure}
The \textbf{()} operator of the \ccc{Bench} class counts the number
of times the function \ccc{sqrt()} can be executed within the allocated
time slot. Then, it executes the \ccc{sqrt()} function as many times as
counted, while measuring the time it takes to complete the sequence.
By default the allocated time slot is 1 second. The program overrides
the default with the number of seconds returned by the
\ccc{get_seconds()} method of the
\ccc{Bench_option_parser} class. The default of the latter is
1 second as well, and can be overridden with the \textbf{``-t {\em seconds}''}
command-line option.
\ccIncludeExampleCode{Benchmark/simple.cpp}
\section{Benchmark File Format}
We present a common file format for data instances and for benchmarking
\cgal\ software. It supports benchmarks on algebraic data as well as curve
and surface data. We document the file format grammar, a reference parser
implementation, and how it can be extended.
It is derived from the material developed in the \textsc{Acs} project.
\input{Benchmark/intro}
\input{Benchmark/benchmarkformat}
\input{Benchmark/grammar}
\input{Benchmark/visitor}
\input{Benchmark/extend}
\section{Execution}
This section lists the various command-line options a benchmark
program may accept, and it explains how to create a hierarchy of
programs and execute the programs in the hierarchy selectively using
an agent implemented in \textbf{Perl}.

Some pieces of the toolkit are dedicated to the development and
maintenance of the \cgal\ planar map modules. For example, some of
the command-line options listed below directly control the behavior
of the planar-map benchmarks, and it is hard to imagine how they could
be applied to other benchmarks. While these pieces should be
reimplemented or even removed altogether in order to make the
toolkit fully generic, other users may ignore them for the time being.
\subsection{Command-Line Options}
A program written with the aid of the \ccc{Bench_option_parser}
class accepts the command-line options listed below. The command-line options
must be provided after the executable name and before an optional name of an
input file. A brief description is displayed on the console in
response to the \textbf{``-h''} option.
\begin{list}{}
{
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt}
\setlength{\itemindent}{0pt}
\setlength{\leftmargin}{0.2\textwidth}
\setlength{\labelsep}{0pt}
\setlength{\labelwidth}{0.2\textwidth}
\settowidth{\listparindent}{abc}
}
\item[\bf{-b {\em options}}\hfill]
set bench options
\begin{list}{}
{
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt}
\setlength{\itemindent}{0pt}
\setlength{\leftmargin}{0.2\textwidth}
\setlength{\labelsep}{0pt}
\setlength{\labelwidth}{0.2\textwidth}
\settowidth{\listparindent}{abc}
}
\item[\bf{type\_name={\em type}}\hfill]
\item[\bf{tn={\em type}}\hfill]
set bench type to {\em type} (default all).\\
{\em type} is one of:
\begin{list}{}
{
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt}
\setlength{\itemindent}{0pt}
\setlength{\leftmargin}{0.2\textwidth}
\setlength{\labelsep}{0pt}
\setlength{\labelwidth}{0.2\textwidth}
\settowidth{\listparindent}{abc}
}
\item[\bf{i[ncrement]}\hfill] \bf{0x1}
\item[\bf{a[ggregate]}\hfill] \bf{0x2}
\item[\bf{d[isplay]}\hfill] \bf{0x4}
\end{list}
\item[\bf{type\_mask={\em mask}}\hfill]
\item[\bf{tm={\em mask}}\hfill]
set bench type mask to {\em mask}
\item[\bf{strategy\_name={\em strategy}}\hfill]
\item[\bf{sn={\em strategy}}\hfill]
set bench strategy to {\em strategy} (default all).\\
{\em strategy} is one of:
\begin{list}{}
{
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt}
\setlength{\itemindent}{0pt}
\setlength{\leftmargin}{0.2\textwidth}
\setlength{\labelsep}{0pt}
\setlength{\labelwidth}{0.2\textwidth}
\settowidth{\listparindent}{abcdefg}
}
\item[\bf{t[rapezoidal]}\hfill] \bf{0x1}
\item[\bf{n[aive]}\hfill] \bf{0x2}
\item[\bf{w[alk]}\hfill] \bf{0x4}
\item[\bf{d[ummy]}\hfill] \bf{0x8}
\end{list}
\item[\bf{strategy\_mask={\em mask}}\hfill]
\item[\bf{sm={\em mask}}\hfill]
set bench strategy mask to {\em mask}
\item[\bf{h[eader]={\em bool}}\hfill]
print header (default \textbf{true})
\item[\bf{name\_length={\em length}}\hfill]
\item[\bf{nl={\em length}}\hfill]
set the length of the name field to {\em length}
\end{list}
\item[\bf{-d {\em dir}}\hfill]
add directory {\em dir} to list of search directories
\item[\bf{-h}\hfill]
print this help message
\item[\bf{-i {\em iters}}\hfill]
set number of iterations to {\em iters} (default 0)
\item[\bf{-I {\em options}}\hfill]
set input options
\begin{list}{}
{
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt}
\setlength{\itemindent}{0pt}
\setlength{\leftmargin}{0.2\textwidth}
\setlength{\labelsep}{0pt}
\setlength{\labelwidth}{0.2\textwidth}
\settowidth{\listparindent}{abcdefg}
}
\item[\bf{f[ormat]={\em format}}\hfill]
set format to {\em format} (default \textbf{rat}).\\
{\em format} is one of:
\begin{list}{}
{
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt}
\setlength{\itemindent}{0pt}
\setlength{\leftmargin}{0.2\textwidth}
\setlength{\labelsep}{0pt}
\setlength{\labelwidth}{0.2\textwidth}
\settowidth{\listparindent}{abcdefg}
}
\item[\bf{i[nt]}\hfill] integer
\item[\bf{f[lt]}\hfill] floating point
\item[\bf{r[at]}\hfill] rational
\end{list}
\end{list}
\item[\bf{-r {\em root}}\hfill]
set {\tt \$ROOT} to {\em root} (default is the value of the
environment variable {\tt \$ROOT})
\item[\bf{-s {\em samples}}\hfill]
set number of samples to {\em samples} (default 10)
\item[\bf{-t {\em seconds}}\hfill]
set number of seconds to {\em seconds} (default 1)
\item[\bf{-v}\hfill]
toggle verbosity (default \textbf{false})
\end{list}
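
For example, the following invocation (with hypothetical executable
and input-file names) allocates a 5-second time slot, suppresses the
header, selects the integer input format, and reads \textbf{my\_input}:
\begin{verbatim}
./my_bench -t 5 -bh=false -If=int my_input
\end{verbatim}
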
\subsubsection{Input}
The sole input file, if provided, must appear after the last
command-line option on the command line. This file is searched for in
a directory search-list. The initial list consists of the current
directory followed by {\tt \$ROOT/data/Segments\_2},
{\tt \$ROOT/data/Conics\_2}, and {\tt \$ROOT/data/Polylines\_2}, in
this order, where {\tt \$ROOT} is initialized with the value of the
environment variable {\tt \$ROOT} and possibly overridden using the
\textbf{``-r {\em root}''} command-line option.
The \textbf{``-d {\em dir}''} command-line option appends the
directory \textbf{\em dir} to the end of the search list.

We are aware of the need to extend the command-line parsing to handle
multiple input files. In addition, the dedicated names
{\tt Segments\_2}, {\tt Conics\_2}, and {\tt Polylines\_2} should be
removed or extended.
The \ccc{get_input_format()} method of the
\ccc{Bench_option_parser} class returns the format provided
by the user through the \textbf{``-I format={\em format}''} command
line option, or \textbf{``-I f={\em format}''} in short.
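
A hedged sketch of how user code might dispatch on the returned
format follows; the return type and the enumerator names are
assumptions, as only \ccc{get_input_format()} itself is documented
here:
\begin{verbatim}
// Hypothetical dispatch on the parsed input format; the enumerator
// names FORMAT_INT, FORMAT_FLT, and FORMAT_RAT are assumptions.
switch (option_parser.get_input_format()) {
  case FORMAT_INT: /* read integer coordinates */        break;
  case FORMAT_FLT: /* read floating-point coordinates */ break;
  case FORMAT_RAT: /* read rational coordinates */       break;
}
\end{verbatim}
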
\subsubsection{Output}
The output produced by a single benchmark program, exemplified in
figure \ref{sqrtResults}, is a text-based table easily readable by
humans. It consists of an optional header record and a data
record. The production of the header can be suppressed with the
\textbf{``-b header=false''} command-line option, or
\textbf{``-b h=false''} in short. Occasionally a sequence of
benchmarks is performed in a row and the display of the header is
desired only once (or once per page).
A data record consists of the following fields:
\begin{description}
\item[Bench Name] the name of the benchmark.
\item[Bench Time] the allocated time-slot for the entire benchmark in
seconds (see the \textbf{``-t {\em seconds}''} option).
\item[Ops Num] the number of target operations performed within the
time slot.
\item[Total Ops Time] the time required to perform the loop of
operations consisting of \textbf{Ops Num} operations in seconds.
\item[Single Op Time] the average time required to perform a single
operation in seconds.
\item[Num Ops Per Second] the number of operations completed per second.
\end{description}
The \textbf{Bench Name} field identifies the benchmark for all
purposes; its length is 32 characters by default. The length can be
overridden with the \textbf{``-b name\_length={\em length}''}
command-line option, or \textbf{``-b nl={\em length}''} in short.
Independent tools listed in section \ref{Analysis} parse log files
that contain benchmark results, manipulate them, analyze them, and
perhaps convert them to other formats for artful presentations.
\subsection{Execution of multiple benchmarks}
The \textbf{Perl} script \textbf{cgal\_bench} selectively executes
multiple benchmarks organized in a hierarchy. It executes them in
sequence, one at a time, passing the appropriate command-line
options and input data file for each execution. It accepts a few
command-line options of its own, listed below, and reads an input file
that contains the hierarchy of the benchmarks along with the
data required to execute them. This information is represented in a
simple language derived from the Extensible Markup Language (XML).
\subsubsection{Command-line options}
\begin{list}{}
{
\setlength{\topsep}{0pt}
\setlength{\partopsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\setlength{\itemsep}{0pt}
\setlength{\itemindent}{0pt}
\setlength{\leftmargin}{0.2\textwidth}
\setlength{\labelsep}{0pt}
\setlength{\labelwidth}{0.2\textwidth}
\settowidth{\listparindent}{abc}
}
\item[\textbf{-args {\em args}}\hfill]
set additional arguments passed to the benchmark programs.
\item[\textbf{-help}\hfill]
print this help message.
\item[\textbf{-verbose {\em level}}\hfill]
set verbose level to \textbf{\em level} (default 0).
\item[\textbf{-database {\em file}}\hfill]
set database xml file to \textbf{\em file} (default
{\tt \$ROOT/bench/data/benchDb.xml}).
\item[\textbf{-filter {\em name}}\hfill]
select the bench \textbf{\em name} and its sub-benches (default all).
\end{list}
A unique prefix is sufficient to indicate the desired
option. For example, when the \textbf{``-help''} option is specified,
a brief description is displayed on the console, and the script quits
immediately after. The same behavior is achieved through the
abbreviated \textbf{``-hel''}, \textbf{``-he''}, and \textbf{``-h''}
options.
By default the script reads the file
{\tt \$ROOT/bench/data/benchDb.xml}. This can be overridden through
the \textbf{``-database {\em file}''} command-line option.
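
For example, the following invocation (the database file name is
hypothetical) raises the verbose level and restricts the run to the
bench named \textbf{bench1} and its sub-benches:
\begin{verbatim}
cgal_bench -verbose 1 -database myBenchDb.xml -filter bench1
\end{verbatim}
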
\subsubsection{Documenting the Environment}
The script automatically documents the environment in which it
performs the benchmarks, so that a benchmark can easily be rerun
(provided the same environment is still available) and the
results can be more accurately compared to the results of other
benchmarks.

The script extracts most of the information directly from the
environment. Additional configuration data that cannot be extracted
directly from the environment is extracted from the database input
file. The script prints out its findings, and only then starts
performing the benchmarks. Here is an excerpt from a sample run of
{\tt cgal\_bench}, showing the type of information extracted and
printed out.
\begin{verbatim}
Mon Mar 31 20:29:02 2003
COMPILER NAME: gcc
COMPILER INFO: gcc (GCC) 3.2
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
OS NAME: linux
OS INFO: Linux cgal 2.4.20-net1 #1 SMP Wed Feb 5 13:05:52 IST 2003 i686 unknown
PROCESSOR: 0
CPU SPEED: 999.783 MHz
CPU TYPE: Pentium III (Coppermine)
PRIMARY DATA CACHE: 256 KB
SECONDARY DATA CACHE: 0
INSTRUCTION CACHE: 0
PROCESSOR: 1
CPU SPEED: 999.783 MHz
CPU TYPE: Pentium III (Coppermine)
PRIMARY DATA CACHE: 256 KB
SECONDARY DATA CACHE: 0
INSTRUCTION CACHE: 0
MEM SIZE: 2020 MBytes
GFX BOARD: unknown
CGAL VERSION: 2.5-I-81
LEDA VERSION: 441
QT VERSION: unknown
\end{verbatim}
\subsubsection{Input File Format}
The representation of the input file is derived from XML. Its element
tag-set consists of 4 predefined element tags listed below. All other
element tags that appear in an input file without exception are names
of executables that perform benchmarks.
The following is a list of the 4 elements with the 4 predefined tags
respectively:
\begin{description}
\item[file] specifies a file.
\item[bench] specifies a hierarchy of benchmarks.
\item[clo] specifies a command-line option.
\item[class] specifies a style-sheet class.
\end{description}
A \textbf{file} element specifies a data file provided as input to a
benchmark. It may have the following attributes:
\begin{description}
\item[name] - the file name.
\item[format] - the number type.
\item[curves] - the number of curves.
\item[vertices] - the number of vertices.
\item[halfedges] - the number of halfedges.
\item[faces] - the number of faces.
\end{description}
The \textbf{name} attribute is mandatory, as it identifies the file
for all purposes. The other attributes are optional (in fact, the last
four attributes are specific to the planar-map benchmarks).
A \textbf{clo} element specifies a command-line option. It has the
following two mandatory attributes:
\begin{description}
\item[name] - the option name.
\item[string] - the option string.
\end{description}
The option name identifies the option for all purposes. The option
string is the exact argument that must appear in the command line for
that option to take effect.
A \textbf{bench} element specifies a hierarchy of benchmarks. It can
contain multiple \textbf{file}, \textbf{clo}, or \textbf{class}
elements, multiple elements that represent executables, and multiple
nested \textbf{bench} elements. The \textbf{file} attribute of a
\textbf{bench} element, if present, specifies an input data file. The
value of the \textbf{file} attribute is the name of the input file
(that is, the value of the \textbf{name} attribute of a \textbf{file}
element). Each attribute of a \textbf{bench} element that is neither
\textbf{file} nor \textbf{enable} specifies a command-line option. The
value of such an attribute is the option's value or
parameter. Command-line options are passed down through inheritance in
the benchmark hierarchy, where an option parameter specified in a
\textbf{bench} element overrides the parameter specified higher in the
hierarchy. The boolean attribute \textbf{enable} simply indicates
whether the bench hierarchy should be executed or not.
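
The fragment below, with a hypothetical executable tag
\textbf{my\_bench}, illustrates the inheritance rule; the comments
assume the \textbf{clo} element of figure \ref{database} that maps
\textbf{samples} to \textbf{-s}:
\begin{verbatim}
<bench samples="10" enable="true">
  <my_bench file="file1"/>                <!-- inherits: -s 10 -->
  <my_bench file="file2" samples="100"/>  <!-- overrides: -s 100 -->
</bench>
\end{verbatim}
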
The tag of a benchmark element is the name of an executable that
performs a benchmark. A benchmark element, just like a \textbf{bench}
element, may contain a \textbf{file} attribute to indicate an input
data file, an \textbf{enable} attribute to enable or disable the
benchmark, and multiple attributes, each indicating the parameter of a
command-line option.
A \textbf{class} element is used only while generating files for
browsing (e.g., html, php, etc.).
Figure \ref{database} lists a simple bench input-file that consists
of three benchmarks, three corresponding input files, and some
command-line options that are used to execute the benchmarks. When
this file is provided as input to the \textbf{cgal\_bench} script, the
latter parses the file, interprets its contents, and executes the
commands below in turn:
\begin{verbatim}
bench1 -s 10 -bh=true -bnl=64 file1
bench2 -s 10 -bh=false -bnl=64 file2
bench3 -s 10 -bh=false -bnl=64 file3
\end{verbatim}
\begin{figure}[!hbp]
\begin{fminipage}{\textwidth}
\begin{alltt}
<?xml version="1.0" encoding="ISO-8859-1"?>
<\textbf{\Red{bench}} name_length="64" header="false">
<\textbf{\GREEN{clo}} name="samples" string="-s"/>
<\textbf{\GREEN{clo}} name="name_length" string="-bnl="/>
<\textbf{\GREEN{clo}} name="header" string="-bh="/>
<\textbf{\GREEN{clo}} name="format" string="-If="/>
<\textbf{\GBLUE{file}} name="file1" format="rat"/>
<\textbf{\GBLUE{file}} name="file2" format="rat"/>
<\textbf{\GBLUE{file}} name="file3" format="rat"/>
<\textbf{\Red{bench}} samples="10" enable="true">
<\textbf{\Red{bench1}} file="file1" header="true"/>
<\textbf{\Maroon{bench2}} file="file2"/>
<\textbf{\Apricot{bench3}} file="file3"/>
</bench>
</bench>
\end{alltt}
\end{fminipage}
\caption{A simple database}\label{database}
\end{figure}
\section{Analysis\label{Analysis}}
The scripts in this category are intended to analyze and profile
benchmark results and to convert the resulting data into more
meaningful and comprehensible presentations. They parse log files that
contain benchmark results, interpret the data, manipulate it, analyze
it, and perhaps convert it into other formats for artful presentations.
In principle they should be able to
\begin{enumerate}
\item add new fields to the predefined fields of a benchmark
data-record, which can be useful for performing mathematical
operations on the data values or for reformatting the output for
pretty printing,
\item merge records,
\item sort the records,
\item filter out records, and
\item convert the data into other formats.
\end{enumerate}
Converting the textual results into other formats that support artful
presentations, pretty printing, and browsing capabilities is currently
deficient: this part of the toolkit consists of a single
\textbf{Perl} script that converts benchmark results into a dedicated
php script that can be quickly integrated into the local Tel-Aviv
\cgal\ web site (and is thus not very useful for other users).
The \textbf{bash} command-line below can be used to record benchmark
results in separate log files in the {\tt \~{}/logs} directory:
\begin{verbatim}
cgal_bench <flags> 2>&1 | tee ~/logs/bcgal_`date +%y%m%d%%%H%M%S`.log
\end{verbatim}
When these log files are sorted by name, which is typically the
default, it is fairly easy to point at the most recent log file.
\section{Left-turn Example}
The following example measures the performance of the left-turn predicate
in \cgal.
\ccIncludeExampleCode{Benchmark/leftturn.cpp}
The result:
\begin{verbatim}
Bench Bench Ops Total Single Num Ops
Name Time Num Ops Time Op Time Per Sec
-------------------------------- -------- -------- -------- -------- --------
Leftturn 1 90242 0.9900 0.0000 91153.5345
\end{verbatim}
\section{Acknowledgement}
The benchmark file-format and the reference parser implementation were
developed at the Max-Planck-Institut f\"ur Informatik, Saarbr\"ucken,
Germany, by Eric Berberich, Franziska Ebert, and Lutz Kettner as part
of the \textsc{Acs} project.
% conforms to the
% test agent
% performance profiling
% ------------------------------------------
% Automatic tests are arranged in a hierarchy. A set of super-tests comprises the highest level. Each super-test consists of a group of sub-tests that have something in common. For example, they all test X3D IndexedfaceSet features. Each sub-test is associated with an *.html file. The tests are executed sequentially. For each super-test, the browser is run from the command line. When the agent (script) that runs the browser, detects that the last sub-test within the current super has completed, it submits a kill signal that terminates the browser. For each sub-test the associated html file is loaded onto the browser from the command line. When the agent detects that the sub-test has terminated, it analyzes the test output to determine whether the test failed or succeeded. Next, the html associated with the next sub-test is loaded. These steps are repeated until all super-tests are run. The entire process is repeated for various configurations.
% ------------------------------------------
% 1 : the act or an instance of regressing
% 2 : a trend or shift toward a lower or less perfect state: as a :
% progressive decline of a manifestation of disease b (1) : gradual loss
% of differentiation and function by a body part especially as a
% physiological change accompanying aging (2) : gradual loss of memories
% and acquired skills c : reversion to an earlier mental or behavioral
% level d : a functional relationship between two or more correlated
% variables that is often empirically determined from data and is used
% especially to predict values of one variable when given values of the
% others <the regression of y on x is linear>; specifically : a function
% that yields the mean value of a random variable under the condition
% that one or more independent variables have specified values
% 3 : retrograde motion
% ------------------------------------------
% <benchmark> A standard program or set of programs which can be run on
% different computers to give an inaccurate measure of their
% performance.
%
% "In the computer industry, there are three kinds of lies: lies, damn
% lies, and benchmarks."
%
% A benchmark may attempt to indicate the overall power of a system by
% including a "typical" mixture of programs or it may attempt to measure
% more specific aspects of performance, like graphics, I/O or
% computation (integer or floating-point). Others measure specific tasks
% like rendering polygons, reading and writing files or performing
% operations on matrices. The most useful kind of benchmark is one which
% is tailored to a user's own typical tasks. While no one benchmark can
% fully characterize overall system performance, the results of a
% variety of realistic benchmarks can give valuable insight into
% expected real performance.
%
% Benchmarks should be carefully interpreted, you should know exactly
% which benchmark was run (name, version); exactly what configuration
% was it run on (CPU, memory, compiler options, single user/multi-user,
% peripherals, network); how does the benchmark relate to your workload?
%
% Well-known benchmarks include Whetstone, Dhrystone, Rhealstone (see
% h), the Gabriel benchmarks for Lisp, the SPECmark suite, and
% LINPACK.