Competition Generation

To generate a competition, you need to create a generation script. In this example, we will replicate the SAT Competition.

We will assume that the solvers are located in the folder path/to/solvers/ and the instances in the folder path/to/instances/.

Here is an example of a generation script:

from optilog.running import RunningScenario
from optilog.blackbox import ExecutionConstraints, RunSolver
import random

if __name__ == '__main__':
    N_SEEDS = 5
    running = RunningScenario(
        solvers="path/to/solvers/*",
        tasks="path/to/instances/*.cnf",
        submit_file="./enque_sge.sh",
        constraints=ExecutionConstraints(
            # Execution constraints for the competition
            s_cpu_time=3600,
            s_real_memory="32G",
            enforcer=RunSolver()
        ),
        unbuffer=False,
        seeds=[random.randint(0, 2**32 - 1) for _ in range(N_SEEDS)]
    )

    running.generate_scenario("./scenario")

As explained in the section Scenario with binary programs, the RunningScenario class generates the scenario for the competition. It takes as input the path to the solvers, the path to the instances, the submission script (you can find the SGE template in the section Examples for submit_file), the execution constraints, the unbuffer option and the list of seeds. The execution constraints are explained in the section Execution Constraints.
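
Note that the seeds are drawn at random, so every run of the generation script produces a different scenario. If you need reproducible scenarios, you can fix the state of the random number generator before drawing the seeds. A minimal sketch using only the standard library:

import random

N_SEEDS = 5

# Fixing the RNG state makes the drawn seeds (and therefore the
# generated scenario) identical across runs of the script
random.seed(1234)  # the concrete value is arbitrary
seeds = [random.randint(0, 2**32 - 1) for _ in range(N_SEEDS)]
print(seeds)  # prints the same list on every execution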

Competition Running

Running the scenario is as simple as executing the following command:

# Submit all jobs
$ optilog-running path/to/scenario submit

You can find more information about all the available commands in the section Running the scenario.
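
If your cluster runs SGE, as the enque_sge.sh submission script assumes, you can also monitor the submitted jobs with SGE's own tooling. For example:

# List your pending and running jobs in the SGE queue
$ qstat -u $USER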

Competition Parsing

To parse the results of the competition, you need to create a parsing script. Here is an example:

from optilog.running import ParsingInfo, parse_scenario

# For this example we trust that the model reported by the solver is correct,
# so we do not parse the model (notice the None)
parsing_info = ParsingInfo.from_template('sat', None)

df = parse_scenario(
    './scenario',
    parsing_info=parsing_info
)

# Note that for this example we compute the best known solution as the
# best result among all the executed solvers. In a real competition, we
# would also need to take into account the bounds computed from other
# executions.

# Swap the column levels so that the parsed columns can be selected
# across all solvers
df.columns = df.columns.swaplevel(0, 1)

# Get the number of seeds
seeds = [int(seed) for seed in df.index.levels[1]]
num_seeds = len(seeds)

# Get the average number of solved instances for each solver
df = df['sat'].notnull().sum(axis=0) / num_seeds

# Sort the solvers by their score (descending)
df = df.sort_values(ascending=False)

print(df)
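
The script above ranks solvers by the average number of solved instances. The actual SAT Competition ranks solvers by PAR-2: the average runtime, where each unsolved run is charged twice the timeout. The following is a minimal sketch of that computation in plain pandas; it assumes a hypothetical 'time' column with per-run CPU times (which the 'sat' template above does not parse) and uses a toy dataframe with the same layout as df right after the swaplevel step:

import pandas as pd

TIMEOUT = 3600  # matches the s_cpu_time limit of the scenario

# Toy stand-in for the parsed dataframe after swaplevel:
# columns are (parsed_key, solver), rows are (instance, seed).
# The 'time' column is hypothetical; the 'sat' template does not produce it.
columns = pd.MultiIndex.from_product([['sat', 'time'], ['solverA', 'solverB']])
index = pd.MultiIndex.from_product([['inst1.cnf', 'inst2.cnf'], [0, 1]])
df = pd.DataFrame(
    [[True, True, 12.0, 30.1],
     [True, None, 10.5, None],
     [None, True, None, 900.0],
     [True, True, 75.2, 64.8]],
    index=index, columns=columns,
)

solved = df['sat'].notnull()  # runs where the solver reported an answer
# PAR-2: keep the runtime of solved runs, charge 2 * TIMEOUT otherwise
par2 = df['time'].where(solved).fillna(2 * TIMEOUT).mean(axis=0)
print(par2.sort_values())  # lower PAR-2 is better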

You can find more information about parsing in the section Parsing an execution scenario.