.. _sat_competition_generation:

Competition Generation
----------------------

To generate a competition, you need to create a generation script. In this example, we will replicate the SAT competition, assuming that the solvers are in the folder ``path/to/solvers/`` and the instances are in the folder ``path/to/instances/``.

Here is an example of a generation script:

.. code:: python
    :number-lines:

    from optilog.running import RunningScenario
    from optilog.blackbox import ExecutionConstraints, RunSolver
    import random

    if __name__ == '__main__':
        N_SEEDS = 5
        running = RunningScenario(
            solvers="path/to/solvers/*",
            tasks="path/to/instances/*.cnf",
            submit_file="./enque_sge.sh",
            constraints=ExecutionConstraints(
                # Execution constraints for the competition:
                # 3600 seconds of CPU time and 32 GB of RAM per run
                s_cpu_time=3600,
                s_real_memory="32G",
                enforcer=RunSolver()
            ),
            unbuffer=False,
            seeds=[random.randint(0, 2**32 - 1) for _ in range(N_SEEDS)]
        )
        running.generate_scenario("./scenario")

As explained in the section :ref:`binary_running`, the ``RunningScenario`` class is used to generate a scenario for the competition. It takes as input the path to the solvers, the path to the instances, the path to the submission script (you can find the SGE template in the section :ref:`submit-command-examples`), the execution constraints and the unbuffer option. The execution constraints are explained in the section :ref:`execution_constraints`.

.. _sat_competition_running:

Competition Running
-------------------

Running the scenario is as simple as running the following command:

.. code:: bash

    # Submit all jobs
    $ optilog-running path/to/scenario submit

You can find more information about all the available commands in the section :ref:`running-scenario`.

.. _sat_competition_parsing:

Competition Parsing
-------------------

To parse the results of the competition, you need to create a parsing script. Here is an example of a parsing script:

.. code:: python
    :number-lines:

    from optilog.running import ParsingInfo, parse_scenario

    # For this example we trust that the model reported by the solver
    # is correct, so we do not parse it (notice the None)
    parsing_info = ParsingInfo.from_template('sat', None)

    df = parse_scenario(
        './scenario',
        parsing_info=parsing_info
    )

    # Note that for this example we compute the best known solution
    # as the best over all the executed solvers.
    # In the real competition, we would also need to take into account
    # the bounds computed from other executions.

    # Swap the column levels so that the parsed field comes first
    df.columns = df.columns.swaplevel(0, 1)

    # Get the number of seeds
    seeds = [int(seed) for seed in df.index.levels[1]]
    num_seeds = len(seeds)

    # Get the average number of solved instances per solver
    df = df['sat'].notnull().sum(axis=0) / num_seeds

    # Sort the solvers by their score (descending)
    df = df.sort_values(ascending=False)
    print(df)

You can find more information about the parsing scenario in the section :ref:`running_parsing-scenario`.
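
Beyond the average over seeds, you may also want to see how consistent each solver is across seeds. The following is a minimal sketch in plain pandas (not part of the OptiLog API) that reuses the parsed ``df`` from the script above *before* the averaging step; it assumes the same layout, with the parsed field on the first column level and the seed on the second index level:

.. code:: python

    # True wherever a solver reported a result for an (instance, seed) pair
    solved = df['sat'].notnull()

    # Group the rows by the seed index level: each cell is the number of
    # instances solved by that solver (column) under that seed (row)
    per_seed = solved.groupby(level=1).sum()
    print(per_seed)

A large spread between seeds suggests that a solver's performance is sensitive to randomization, which the averaged score above hides.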