User Workflow
Setup
The setup step initializes the process, creates a folder (outputs/output_jobprefix) for the benchmarking experiments, and sets up the configuration. The configuration includes environment variables such as which scheduler to use and the number of repetitions for the benchmark experiment from the AutoBench package, along with information such as current file names and paths, all stored in a current_setup.json file.
autobench setup
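The exact schema of current_setup.json is not documented here; the following minimal Python sketch only illustrates what the setup step might persist, and all field names and values are assumptions rather than the tool's actual format.

import json
from pathlib import Path

# Hypothetical setup state; field names and values are illustrative guesses.
setup = {
    "scheduler": "slurm",      # which scheduler to use
    "repetitions": 5,          # repetitions for the benchmark experiment
    "output_dir": "outputs/output_jobprefix",
}

out_dir = Path(setup["output_dir"])
out_dir.mkdir(parents=True, exist_ok=True)  # folder for the experiments
(out_dir / "current_setup.json").write_text(json.dumps(setup, indent=2))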
Configuration Read
In the configuration read step, the boilerplate configuration files containing all layers are read, by default from the tests/configs/cluster folder, and processed into an intermediate JSON file called read_config.json.
autobench config read
The custom cluster configuration path can also be passed using --path.
autobench config read --path=custom/configs/cluster
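As a rough illustration of this step, the sketch below reads one JSON file per layer and merges them into a single read_config.json; the one-file-per-layer layout and the merge-by-update strategy are assumptions, not AutoBench's actual logic.

import json
from pathlib import Path

config_dir = Path("tests/configs/cluster")  # or the --path value
merged = {}
for layer_file in sorted(config_dir.glob("*.json")):
    # Assumes each layer file holds a JSON object; later layers override earlier ones.
    merged.update(json.loads(layer_file.read_text()))

Path("read_config.json").write_text(json.dumps(merged, indent=2))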
Benchmark Generation
Benchmark generation involves creating all possible combinations of configuration knobs across different layers using Cartesian products, as sketched after the commands below. This process results in a concrete benchmarks.json file in which placeholders are replaced with actual values, so the JSON file contains all possible unique benchmarks.
autobench benchmark generate
It is possible to generate specific benchmarks by specifying the cluster, partition, hardware components, and benchmark values, as shown below.
autobench benchmark generate --cluster=beast --partition=ice
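The Cartesian-product expansion mentioned above can be pictured with the short Python sketch below; the knob names and values are invented for illustration and do not come from AutoBench's configuration layers.

import itertools
import json

# Hypothetical configuration knobs drawn from different layers.
knobs = {
    "cluster": ["beast"],
    "partition": ["ice", "cpu"],
    "num_threads": [1, 2, 4],
}

# Every combination of knob values becomes one concrete benchmark.
benchmarks = [dict(zip(knobs, combo)) for combo in itertools.product(*knobs.values())]

with open("benchmarks.json", "w") as f:
    json.dump(benchmarks, f, indent=2)

With the values above, this yields 1 × 2 × 3 = 6 unique benchmarks.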
Jobscript Generation
In job script generation, benchmarks.json serves as an input along with the scheduler master configuration and job templates. These inputs are used to generate the job command (run.cmd), which is used to submit a job to the scheduler, and the script file (run.sh), which contains the commands to be executed.
autobench jobscript generate
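A minimal sketch of how templates plus one benchmark entry could yield run.cmd and run.sh is shown below; the template text and placeholder names are assumptions, not AutoBench's real job templates.

from string import Template

# Hypothetical templates; the real ones come from the scheduler master
# configuration and job templates mentioned above.
cmd_template = Template("sbatch --partition=$partition run.sh\n")
script_template = Template("#!/bin/bash\nOMP_NUM_THREADS=$num_threads ./benchmark\n")

benchmark = {"partition": "ice", "num_threads": 4}  # one entry from benchmarks.json

with open("run.cmd", "w") as f:
    f.write(cmd_template.substitute(benchmark))
with open("run.sh", "w") as f:
    f.write(script_template.substitute(benchmark))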
Jobscript Submission
In job script submission, a single job submission file (submission_file.sh) is created. It can be submitted either via the command-line interface (CLI) or manually.
CLI
autobench jobscript submit --all
Manually
cd outputs/output_jobprefix
./submission_file.sh
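One plausible shape for submission_file.sh is a script that replays every generated run.cmd; the sketch below builds such a file, with the per-benchmark directory layout being an assumption.

from pathlib import Path

output_dir = Path("outputs/output_jobprefix")
lines = ["#!/bin/bash"]
for cmd_file in sorted(output_dir.glob("*/run.cmd")):
    lines.append(cmd_file.read_text().strip())  # one scheduler submission per benchmark

submission = output_dir / "submission_file.sh"
submission.write_text("\n".join(lines) + "\n")
submission.chmod(0o755)  # allows running it manually as ./submission_file.sh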
Postprocessing
The postprocessing step allows for the extraction of benchmark KPIs from the output of a benchmark job using predefined Perl templates. Additionally, it can query DCDB to extract performance counters and energy consumption, combining them with the extracted KPIs and presenting them in CSV format.
autobench postprocessing start
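The sketch below mimics the extraction step with a regular expression standing in for the predefined Perl templates; the KPI name, pattern, file names, and CSV columns are all invented for illustration.

import csv
import re

# Regex stand-in for a Perl extraction template (hypothetical KPI and format).
kpi_pattern = re.compile(r"bandwidth:\s*([\d.]+)\s*MB/s")

rows = []
with open("job.out") as f:  # hypothetical benchmark job output
    for line in f:
        match = kpi_pattern.search(line)
        if match:
            rows.append({"kpi": "bandwidth_mb_s", "value": float(match.group(1))})

# DCDB performance counters and energy consumption would be merged into
# rows here before writing the combined CSV.
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["kpi", "value"])
    writer.writeheader()
    writer.writerows(rows)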
Help
Use --help in the CLI to learn more about available commands or arguments.
autobench --help