WCET Tool Challenge 2008 (WCC'08)

This work has been funded in part by the ARTIST2 Network of Excellence.


Introduction

For a general introduction to the WCET Tool Challenge, please visit the public web page. This Wiki site contains the current status, advice for the participants, background information, benchmark programs with definitions of the analysis problems, and analysis results.

Contributors and editors, please note the following:

Last but not least, we also have a collection of questions and answers about the WCET Tool Challenge, the benchmarks, and the target processors.

Organization

The steering group for WCC'08 consisted of:

  • Niklas Holsti, Tidorum Ltd, contact point, (niklas DOT holsti AT tidorum DOT fi)
  • Jan Gustafsson, Mälardalen University (jan DOT gustafsson AT mdh DOT se)
  • Guillem Bernat, Rapita Systems Ltd (bernat AT rapitasystems DOT com)

Participants

The currently registered participants are as follows, listed in alphabetical order of tool name.

Tool     | Source                            | Type                                                          | More
Bound-T  | Tidorum                           | static analysis                                               | info
MTime    | Vienna University of Technology   | hybrid, measurements and analysis, also test case generation  |
OTAWA    | IRIT                              | static analysis                                               | info
RapiTime | Rapita Systems                    | hybrid, measurements and analysis                             | info
TuBound  | Vienna University of Technology   | static analysis, also source-code transformation              |
WCC      | Dortmund University of Technology | static analysis, WCET-aware C compiler                        |

Schedule

The WCET Tool Challenge 2008 was expected to follow this rough schedule:

  • March - May: Benchmarks become progressively available for analysis.
  • June 15: Closing date for analysis results to be reported at the WCET Workshop.
  • July 1: Report at the WCET Workshop.
  • … : Wiki site remains open for new benchmarks and results… for the next Challenge.

The actual schedule was somewhat protracted: only an intermediate status report could be given at the WCET Workshop. The full report will be published in the WCET Workshop Proceedings; a preprint is available.

Target Processors

The execution time and WCET of a benchmark program depends, of course, on the target processor on which the benchmark program runs (for measurements) or is assumed to run (for analysis). The difficulty of WCET analysis also depends on the target processor.

Targets for the WCET Tool Challenge

The WCET Tool Challenge has an open set of target processors, presented in the target table.

Participants are free to use the target processor or processors of their choice, as available and as supported by the participating tools. However, one common target processor is suggested (see below).

Suggested Common Target

The WCC'08 steering group proposes one common target processor, the ARM7. This is the target with the widest WCET tool support.

While the ARM7 is the (or a) recommended target, it is possible to participate in WCC'08 with some other target(s), or with only flow analysis based on source-code or intermediate code and no actual target processor.

Different ARM7 implementations have different timing. Therefore, the steering group proposes a specific ARM7 device, the LPC2138 from NXP (formerly Philips). This device has 512 KiB of on-chip flash, normally used for code, and 32 KiB of on-chip static RAM, normally used for data.

The timing of the LPC2138 is not entirely trivial since its flash-memory interface contains three 128-bit buffers that cache information (the “Memory Acceleration Module”). However, the device can be operated with these buffers disabled. Presumably, WCET analysis is easier if the buffers are disabled, so participants should analyse at least this case, but we hope that the more challenging case, with the buffers enabled, is also addressed.

The Memory Acceleration Module is described in the User Manual for the LPC2138.
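For participants who want to control this choice in their measurement runs, a minimal start-up sketch is given below. It assumes the MAMCR and MAMTIM register addresses and values as given in the LPC2138 User Manual (please verify them there before use); the function names are invented for this illustration.

    /* Sketch: select the MAM mode on the LPC2138 at start-up.
       Register addresses and values are those given in the LPC2138 User
       Manual; verify them there before relying on this code. */

    #define MAMCR  (*(volatile unsigned long *)0xE01FC000)  /* MAM control */
    #define MAMTIM (*(volatile unsigned long *)0xE01FC004)  /* MAM timing  */

    static void mam_disable(void)
    {
        MAMCR = 0;    /* 0 = MAM disabled: every fetch reads flash directly */
    }

    static void mam_enable_full(void)
    {
        MAMCR  = 0;   /* MAM must be off while the timing value is changed  */
        MAMTIM = 3;   /* flash access time in CCLK cycles; the right value
                         depends on the clock rate, see the User Manual     */
        MAMCR  = 2;   /* 2 = MAM fully enabled (the harder case to analyse) */
    }

The first function sets up the simpler analysis case suggested above; the second enables the buffers for the more challenging case.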

There are inexpensive development kits and test systems based on the LPC2138, for example the Olimex LPC-H2138 board used with the OLIMEX ARM-USB-OCD JTAG and serial interface.

The IF-DEV-LPC kit from iSYSTEM was originally suggested, and used, for WCC'08. However, iSYSTEM has discontinued this particular kit (information from iSYSTEM on August 14, 2008) and it is no longer available in single units.

Benchmark Programs

Goals

The benchmark programs for the WCET Tool Challenge 2008 are new programs, and include none of the programs that were used in the 2006 edition of the Challenge. The exclusion of 2006 benchmarks was not intentional, but happened simply because no-one made the effort to port them to the 2008 format. For WCC'08, the emphasis is on improving the definition and quality of the benchmarks by:

  • making the benchmarks more portable by proper use of C “typedef” and C99 “least-size” type-names (see the sketch after this list),
  • exactly defining the analysis problems:
    • which subprogram should be analysed,
    • for which ranges of inputs, and
    • in which context, or with what other constraints or assumptions,
  • separating test data from benchmark code, and
  • when possible, including test data that is sufficiently complete for measurement-based timing analysis.
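As a sketch of the first point, a benchmark can hide the C99 least-size type names from <stdint.h> behind its own typedefs, so that retargeting touches only one header. The type names below are invented for this illustration, not taken from any WCC'08 benchmark.

    /* Sketch: portable type definitions via C99 "least-size" names.
       The typedef names are invented for illustration. */

    #include <stdint.h>

    typedef int_least16_t  sample_t;    /* signed, at least 16 bits    */
    typedef uint_least8_t  channel_t;   /* unsigned, at least 8 bits   */
    typedef uint_least32_t counter_t;   /* unsigned, at least 32 bits  */

    /* Application code uses only these names, so its behaviour does not
       silently change when 'int' has a different width on another
       target or with another compiler. */
    sample_t filter_sample(sample_t input, channel_t channel);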

Benchmark Table

Most benchmarks are downloadable from this Wiki, but some benchmarks are derived from proprietary, commercial programs and will be provided on request to registered WCC'08 participants. See the “Availability” column in the table below.

Benchmark  | Source                            | Availability                          | Current version | Results
debie1     | Space Systems Finland Ltd         | Request from niklas.holsti@tidorum.fi | b               | results
rathijit_1 | Rathijit Sen, Saarland University | rathijit_1-a.zip                      | a               | results
rathijit_2 | Rathijit Sen, Saarland University | rathijit_2-a.zip                      | a               | results
rathijit_3 | Rathijit Sen, Saarland University | rathijit_3-a.zip                      | a               | results
rathijit_4 | Rathijit Sen, Saarland University | rathijit_4-a.zip                      | a               | results

Benchmark Structure

Overview

A major goal for WCC'08 is to define a standard structure for the benchmark programs, a structure that clearly separates the code to be analyzed from the test data that may be included, within the program or separately. Moreover, the participation of measurement-based tools means that test data must be provided for all core benchmarks. For easier comparison of analysis results from all participants, we must also clearly define the “analysis problems” for each benchmark and the results to be reported from each analysis. Here the word “analysis” includes static WCET analysis, measurement-based analysis, and pure flow analysis (analysis of feasible and infeasible execution paths).

Each WCC'08 core benchmark therefore consists of:

  1. The program itself:
    • the source code,
    • build scripts for generating binaries for various targets, at least for the suggested common target (LPC2138, IF-DEV-LPC), and
    • binaries for these targets.
  2. One or more analysis problems, each one defining:
    • the parts (subprograms) of the benchmark to be analysed or measured,
    • the conditions (inputs, initial program states) for the analysis,
    • other constraints (scenarios) on the executions to be analysed,
    • the results to be reported.
  3. Test data embedded in program code, but in separate modules (source files), not mixed in the application code.

For benchmarks that contain test data, the program (“main” function) executes the application functions with this test data, without needing actual input data from the environment. This allows measurement-based timing analysis.

By separating the test data from the application code, static WCET analysis should be able to ignore the test data, and thus be applicable to any input data, or to any input data that satisfies the conditions or constraints that may be defined for an analysis problem.
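A minimal sketch of this separation, with invented file, type and function names, could look as follows.

    /* testdata.c -- test data only, kept in its own module. */

    #include "bench.h"   /* hypothetical header defining sample_t and TEST_LEN */

    const sample_t test_input[TEST_LEN] = { 3, -7, 12, 0, 5 };  /* TEST_LEN == 5 assumed */

    /* main.c -- drives the application code with the test data, so that a
       measurement-based tool gets a complete, self-contained execution. */

    #include "bench.h"

    extern const sample_t test_input[TEST_LEN];

    int main(void)
    {
        benchmark_root(test_input, TEST_LEN);   /* the root subprogram under analysis */
        return 0;
    }

A static WCET tool can analyse benchmark_root on its own and ignore testdata.c entirely, while a measurement-based tool runs main as given.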

Program Structure

The following structure of files, folders and sub-folders is suggested, but not required, for benchmark programs.

  • A top-level folder that contains files with benchmark identification, copyright statements, distribution permissions or restrictions, and subfolders as follows:
    • code: all target-independent (portable) source-code files, with a sub-folder for each target processor (architecture):
      • <target>: all target-specific source-code files, and a sub-folder for each cross-compiler to this target:
        • <compiler>: all target- and compiler-specific source-code files, build scripts, and the binary executable for this target and compiler.

For example, in the debie1 benchmark:

  • the top-level folder “debie1” contains the Terms Of Use statement from the benchmark owner,
  • the folder “debie1/code” contains most source-code files, and
  • the folder “debie1/code/arm7-lpc2138/gcc-if07” contains the source-file “keyword.h” that adapts the benchmark to the target arm7-lpc2138 (our suggested target name for the LPC2138 chip) and the “gcc-if07” compiler (our suggested name for the version of GCC that comes with the IF-DEV-LPC kit). This folder also contains the build script and binary for this target and compiler.
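Laid out as a tree, and with invented names for the Terms Of Use file, build script and binary, this structure looks roughly as follows:

    debie1/
        terms_of_use.txt          (benchmark identification and Terms Of Use)
        code/
            *.c, *.h              (portable, target-independent sources)
            arm7-lpc2138/
                (target-specific sources, if any)
                gcc-if07/
                    keyword.h     (target- and compiler-specific adaptation)
                    build script and binary for this target and compiler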

The <target> names are defined in the target table. The <compiler> names are defined in the compiler table.

Benchmark Description Page

Each benchmark has a main description page in this Wiki. The benchmark template suggests a format for this page, but it can be adjusted to suit the properties of the benchmark. However, it should always refer to the pages that define the analysis problems and to the page that holds the analysis results. The template has sections for both.

Here is the same template as a text file, ready for copying into the Wiki page editor. Delete all the explanatory text (/ / italic text / / in the template) and fill in the <> placeholders with the description of the benchmark.

Here is the same template as a text file without the explanatory text and with shorter placeholders. This one is quicker to fill in, because there is less explanatory text to remove.

Analysis Problems

An analysis problem defines one part of the analysis of a particular benchmark. The definition of an analysis problem has the following parts:

  • Problem identification. Typically a running number and a subscript, e.g. “3a”, followed by a brief, one-line title.
  • Rationale. A brief prose description saying why this problem is interesting, either for the verification of the benchmark program (in reality), or as a test of WCET analysis in general.
  • Root subprogram. The name of the subprogram for which a WCET estimate or flow-analysis results are requested.
  • Inputs. A table of the inputs (parameters, global variables, memory-mapped input registers, results from I/O routines) and constraints on their values.
  • Other constraints. Other constraints on the execution of the root subprogram, or on the initial or final state of the benchmark program or processor, to be assumed for the analysis.
  • Flow-analysis results to be reported.
  • WCET-analysis results to be reported.
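As a purely hypothetical illustration (not taken from any WCC'08 benchmark), a filled-in problem definition could read:

    Problem 1a: Filter one block of samples.
    Rationale:  The filter runs in the most frequent interrupt handler, so
                its WCET dominates the processor load.
    Root subprogram:  filter_block
    Inputs:  parameter 'len' in the range 1 .. 64; global variable 'mode'
             equal to NORMAL.
    Other constraints:  the coefficient table has been initialised before
             filter_block is called.
    Flow-analysis results:  bounds on the number of calls of the witness
             subprogram 'update_coefficient'.
    WCET-analysis results:  a WCET bound for 'filter_block', in cycles.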

Here is a Wiki template for analysis problems.

Here is the same template as a text file, ready for copying into the Wiki page editor. Delete all the explanatory text and fill in the <> placeholders.

Here is the same template as a text file without the explanatory text and with shorter placeholders. This one is quicker to fill in, because there is less explanatory text to remove.

Analysis Results

Two types of results can be requested for an analysis problem: flow-analysis results and WCET-analysis results.

WCET-Analysis Results

WCET-analysis results are expected from all kinds of WCET tools, whether static or measurement-based, but not from pure flow-analysis tools. A WCET-analysis result is simply:

  • the WCET (bound or estimate) for a particular subprogram named in the analysis problem.

The WCET is given in the time units defined for the given target processor, usually clock cycles, and under the input conditions and other constraints defined in the analysis problem.

Flow-Analysis Results

Flow-analysis results are expected from both pure flow-analysis tools and from WCET-analysis tools. As there is no standard format (language) for describing flow-analysis results, and as many such results (e.g. loop bounds) depend on compiler-specific code optimizations, we limit flow-analysis results to a form that is simple to express and less sensitive to changes in code optimization. Moreover, we want to be able to compare the results of pure flow-analysis, covering all possible execution paths without regard to execution time, to the flow-analysis in static WCET analysis, which covers (or reports) only the execution path(s) that give the maximal execution time. Therefore, the chosen type of flow-analysis result is:

  • the (bounds on the) number of executions of a particular “witness” subprogram, called within the execution of the root subprogram of the analysis problem, and assuming that the execution time of the witness subprogram is arbitrarily large.

This number is usually insensitive to compiler optimizations, except, of course, if the compiler inlines some or all calls to the witness subprogram. Happily, inlining can usually be prevented by compiler options or by placing the witness subprogram in a different module (different source-code file, for 'C'), and the existence of inlining (absence of calls) is usually easy to detect in the output from a static WCET analysis.

The condition that the execution time of the witness subprogram is arbitrarily large forces a static WCET tool to include, in the worst-case path, the largest possible number of executions of this subprogram; the tool cannot instead choose as the worst case some other execution path with a smaller number of witness calls. This forces the static WCET tool to reveal the results of its flow analysis, as it applies to the calls of the witness subprogram.

To help static WCET analysers implement this condition, the benchmark developer may add a call of a “time-bomb” subprogram in the witness subprogram. A time-bomb subprogram is a null subprogram that can be annotated to have an arbitrarily large execution time. This makes the execution time of the witness subprogram also arbitrarily large, but still lets the WCET tool analyse the witness subprogram, which may have an important role in the application.
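A minimal sketch of this arrangement, with invented names, is shown below. The witness subprogram is placed in its own module to discourage inlining, and the time-bomb is an empty function that the WCET tool is told, through an annotation in the tool's own format, to treat as arbitrarily slow.

    /* witness.c -- kept in its own module so the compiler does not inline
       it into the root subprogram (names invented for illustration). */

    void time_bomb(void)
    {
        /* Empty on the target; annotated in the WCET tool as having an
           arbitrarily large execution time. */
    }

    void witness(void)
    {
        time_bomb();
        /* ... possibly real application work as well ... */
    }

    /* root.c -- root subprogram of the analysis problem.  The requested
       flow-analysis result is a bound on how many times 'witness' is
       executed during one call of 'root'; with the loop as written that
       bound is at most 10 executions (fewer if n is smaller). */

    void witness(void);

    void root(int n)
    {
        for (int i = 0; i < n && i < 10; i++) {
            witness();
        }
    }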

In some cases, it may be useful to extend a benchmark with some null subprograms specifically meant for use as witness subprograms, although they run a high risk of being inlined (or entirely removed by some smart compilers).

Analysis Result Forms and Templates

The results of the analyses of a benchmark are reported in a table organized into one column for each result requested in the analysis problems, and one row for each analysis, identified and sorted by the target processor and the analysis tool, in that order.

Here is a Wiki template for one benchmark.

Here is the same template as a text file, ready for copying into the Wiki page editor. Delete all the explanatory text and fill in the <> placeholders.

Here is the same template as a text file without the explanatory text and with shorter placeholders. This one is quicker to fill in, because there is less explanatory text to remove.

Wiki

This is a Wiki page where you can take notes, upload files, etc.

See the syntax page for Wiki syntax.

See the Frequently Asked Questions (FAQ).

 