Competition Generation

To generate a competition, you first need to create a generation script. In this example, we replicate the MaxSAT competition.

We will assume that the solver binaries are in the folder path/to/solvers/ and the instances are in the folder path/to/instances/.
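
Before generating the scenario, you can quickly check that these glob patterns match the files you expect. This is only a minimal sketch; the file names shown in the comments are hypothetical:

from glob import glob

# List the solver binaries and the instance files that the
# patterns used in the scenario below will pick up.
print(glob("path/to/solvers/*"))         # e.g. ['path/to/solvers/solver-a', ...]
print(glob("path/to/instances/*.wcnf"))  # e.g. ['path/to/instances/inst1.wcnf', ...]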

Here is an example of a generation script:

from optilog.running import RunningScenario
from optilog.blackbox import ExecutionConstraints, RunSolver

if __name__ == '__main__':
    running = RunningScenario(
        solvers="path/to/solvers/*",
        tasks="path/to/instances/*.wcnf",
        submit_file="./enque_sge.sh",
        constraints=ExecutionConstraints(
            # Execution constraints for the competition
            s_cpu_time=3600,
            s_real_memory="32G",
            enforcer=RunSolver()
        ),
        unbuffer=False,
    )

    running.generate_scenario("./scenario")

As explained in the section Scenario with binary programs, the RunningScenario class generates a scenario for the competition. It takes as input the path to the solvers, the path to the instances, the path to the submission script (you can find the SGE template in the section Examples for submit_file), the execution constraints, and the unbuffer option. The execution constraints are explained in the section Execution Constraints.

Competition Running

Running the scenario is as simple as executing the following command:

# Submit all jobs
$ optilog-running path/to/scenario submit

You can find more information about all the available commands in the section Running the scenario.

Competition Parsing

To parse the results of the competition, you need to create a parsing script. Here is an example:

from optilog.running import ParsingInfo, parse_scenario

# For this example we trust that the cost reported by the solver is correct,
# so we do not parse the model (notice the None)
parsing_info = ParsingInfo.from_template('maxsat', None)

df = parse_scenario(
    './scenario',
    parsing_info=parsing_info
)

# Note that for this example we compute the best known solution
# as the best of all the executed solvers.
# In the real competition, we would also need to take into account
# the bounds computed from other executions.

# Swap levels
df.columns = df.columns.swaplevel(0, 1)

# Only keep the 'cost' column
df = df['cost']

# Compute the Virtual Best Solver
df['VBS'] = df.min(axis=1)

# Compute the score of each solver
def score(row):
    return (1 + row['VBS']) / (1 + row)

df = df.apply(score, axis=1)

# Compute the average score over all instances
df = df.mean()

# Sort the solvers by their score (descending)
df = df.sort_values(ascending=False)

print(df)
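
To make the scoring steps concrete, here is a self-contained toy version of the same pipeline on hand-written data. The solver names, instance names and costs are hypothetical; the two-level column structure (solver, parsed value) mirrors what the script above expects from parse_scenario:

import pandas as pd

# Toy stand-in for the DataFrame returned by parse_scenario.
columns = pd.MultiIndex.from_product([["solver-a", "solver-b"], ["cost"]])
df = pd.DataFrame(
    [[10, 12], [5, 5], [0, 2]],
    index=["inst1.wcnf", "inst2.wcnf", "inst3.wcnf"],
    columns=columns,
)

df.columns = df.columns.swaplevel(0, 1)  # columns become ('cost', solver)
df = df['cost']                          # one 'cost' column per solver

df['VBS'] = df.min(axis=1)               # best cost per instance
df = df.apply(lambda row: (1 + row['VBS']) / (1 + row), axis=1)

# The VBS scores 1.0 on every instance by construction;
# a solver scores 1.0 only where it matches the best known cost.
print(df.mean().sort_values(ascending=False))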

You can find more information about the parsing scenario in the section Parsing an execution scenario.