.. _max_competition_generation:

Competition Generation
----------------------

In order to generate a competition, you will need to create a generation script. In this example, we will replicate the MaxSAT competition, assuming that the solvers are in the folder ``path/to/solvers/`` and the instances are in the folder ``path/to/instances/``.

Here is an example of a generation script:

.. code:: python
    :number-lines:

    from optilog.running import RunningScenario
    from optilog.blackbox import ExecutionConstraints, RunSolver

    if __name__ == '__main__':
        running = RunningScenario(
            solvers="path/to/solvers/*",
            tasks="path/to/instances/*.wcnf",
            submit_file="./enque_sge.sh",
            constraints=ExecutionConstraints(
                # Execution constraints for the competition
                s_cpu_time=3600,
                s_real_memory="32G",
                enforcer=RunSolver()
            ),
            unbuffer=False,
        )
        running.generate_scenario("./scenario")

As explained in the section :ref:`binary_running`, the ``RunningScenario`` class generates a scenario for the competition. It takes as input the path to the solvers, the path to the instances, the path to the submission script (you can find the SGE template in the section :ref:`submit-command-examples`), the execution constraints and the unbuffer option. The execution constraints are explained in the section :ref:`execution_constraints`.

.. _max_competition_running:

Competition Running
-------------------

Running the scenario is as simple as running the following command:

.. code:: bash

    # Submit all jobs
    $ optilog-running path/to/scenario submit

You can find more information about all the available commands in the section :ref:`running-scenario`.

.. _max_competition_parsing:

Competition Parsing
-------------------

In order to parse the results of the competition, you will need to create a parsing script. Here is an example of a parsing script:

.. code:: python
    :number-lines:

    from optilog.running import ParsingInfo, parse_scenario

    # For this example we trust that the cost reported by the solver
    # is correct, so we do not parse the model (notice the None)
    parsing_info = ParsingInfo.from_template('maxsat', None)

    df = parse_scenario(
        './scenario',
        parsing_info=parsing_info
    )

    # For this example we compute the best known solution as the best
    # among all the executed solvers. In the real competition, the bounds
    # computed from other executions must also be taken into account.

    # Swap levels so that the parsed variables become the outer column level
    df.columns = df.columns.swaplevel(0, 1)

    # Only keep the 'cost' column
    df = df['cost']

    # Compute the Virtual Best Solver (best cost per instance)
    df['VBS'] = df.min(axis=1)

    # Compute the score of each solver on each instance
    def score(row):
        return (1 + row['VBS']) / (1 + row)

    df = df.apply(score, axis=1)

    # Compute the average score over all instances
    df = df.mean()

    # Sort the solvers by their score (descending)
    df = df.sort_values(ascending=False)
    print(df)

You can find more information about the parsing scenario in the section :ref:`running_parsing-scenario`.
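To make the scoring step concrete, here is a minimal, self-contained sketch of the same computation on toy data. The solver names, instance names and costs below are made up purely for illustration:

.. code:: python
    :number-lines:

    import pandas as pd

    # Hypothetical costs reported by two solvers on three instances
    # (made-up numbers, for illustration only)
    df = pd.DataFrame({
        'solver-a': [10, 5, 7],
        'solver-b': [12, 5, 6],
    }, index=['inst1.wcnf', 'inst2.wcnf', 'inst3.wcnf'])

    # Virtual Best Solver: the best (lowest) cost per instance
    df['VBS'] = df.min(axis=1)

    # Score per instance: (1 + best cost) / (1 + solver cost)
    scores = df.apply(lambda row: (1 + row['VBS']) / (1 + row), axis=1)

    # Average score per solver, best first
    print(scores.mean().sort_values(ascending=False))

Since MaxSAT costs are minimized, a score of 1.0 means the solver matched the best known cost on that instance, and scores closer to 0 indicate worse solutions.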