
User Workflow

Setup

The setup step initializes the process, creates a folder (outputs/output_jobprefix) for the benchmarking experiments, and sets up the configuration. The configuration stores environment variables from the AutoBench package, such as which scheduler to use and the number of repetitions for the benchmark experiment, together with information such as current file names and paths, in a current_setup.json file.

autobench setup
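
For illustration, current_setup.json might record something like the sketch below; the exact keys are an assumption and may differ from the tool's actual schema:

{
  "scheduler": "slurm",
  "repetitions": 5,
  "job_prefix": "jobprefix",
  "output_dir": "outputs/output_jobprefix",
  "files": {
    "read_config": "read_config.json",
    "benchmarks": "benchmarks.json"
  }
}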

Configuration Read

The configuration read step reads the boilerplate configuration files, by default from the tests/configs/cluster folder, which contains all layers. These files are processed into an intermediate JSON file called read_config.json.

autobench config read

A custom cluster configuration path can also be passed using --path.

autobench config read --path=custom/configs/cluster
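As a rough illustration of the layered layout (the folder and file names here are hypothetical, not prescribed by AutoBench), a cluster configuration folder could contain one file per layer:

custom/configs/cluster/
├── cluster.json      cluster-level settings (name, partitions)
├── hardware.json     hardware components per partition
└── benchmark.json    benchmark knobs with placeholder values

All layers are read together and merged into the intermediate read_config.json.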

Benchmark Generation

Benchmark generation creates all possible combinations of the configuration knobs across the different layers via a Cartesian product. The result is a concrete benchmarks.json file in which the placeholders are replaced with actual values, so the file contains all possible unique benchmarks.

autobench benchmark generate

It is possible to generate specific benchmarks by specifying the cluster, partition, hardware components, and benchmark values, as shown below.

autobench benchmark generate --cluster=beast --partition=ice
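To make the Cartesian product concrete: two knobs with two values each expand into four unique benchmark entries. A hypothetical excerpt of benchmarks.json (the field names are illustrative only):

{
  "benchmarks": [
    { "cluster": "beast", "partition": "ice", "threads": 1, "size": 1024 },
    { "cluster": "beast", "partition": "ice", "threads": 1, "size": 2048 },
    { "cluster": "beast", "partition": "ice", "threads": 2, "size": 1024 },
    { "cluster": "beast", "partition": "ice", "threads": 2, "size": 2048 }
  ]
}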

Jobscript Generation

In job script generation, benchmarks.json serves as input, along with the scheduler master configuration and the job templates. From these, the job command file (run.cmd), which is used to submit a job to the scheduler, and the script file (run.sh), which contains the commands to be executed, are generated.

autobench jobscript generate
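
Assuming a SLURM scheduler, the generated pair could look roughly like the sketch below; the exact contents depend on the scheduler master configuration and the job templates:

run.cmd
  # Hypothetical job command: submits the generated script to the scheduler.
  sbatch --partition=ice --job-name=jobprefix_0001 run.sh

run.sh
  #!/bin/bash
  # Placeholders from benchmarks.json are already resolved to concrete values here.
  export OMP_NUM_THREADS=2
  ./benchmark --size 2048   # hypothetical benchmark invocation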

Jobscript Submission

The job script generation step also creates a single job submission file (submission_file.sh). It can be submitted either via the command line interface (CLI) or manually.

CLI

autobench jobscript submit --all

Manually

cd outputs/output_jobprefix
./submission_file.sh
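
Conceptually, submission_file.sh is a thin wrapper that submits each generated job in turn. A minimal sketch, assuming one run.cmd per benchmark directory (the actual layout may differ):

#!/bin/bash
# Illustrative only: submit every generated job command in turn.
for cmd in */run.cmd; do
    bash "$cmd"
done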

Postprocessing

The postprocessing step extracts benchmark KPIs from the output of a benchmark job using predefined Perl templates. It can additionally query DCDB for performance counters and energy consumption, combining these with the extracted KPIs and presenting the result in CSV format.

autobench postprocessing start
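
The resulting CSV combines the template-extracted KPIs with the DCDB metrics per job. The column names below are illustrative placeholders, not the tool's actual output schema:

job_id,benchmark,kpi_bandwidth,kpi_runtime,dcdb_energy,dcdb_avg_power
<id>,<name>,<value>,<value>,<value>,<value>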

Help

Use --help in the CLI to learn more about available commands or arguments.

autobench --help