OptiLog
0.5.3


Running the Slitherlink Problem

Properly setting up the experimentation environment required to evaluate a solving approach can be a time-consuming task, as well as a source of bugs that lead to wrong evaluations. OptiLog supports this process by automating as much of it as possible.

Now that we have a tuned configuration for our solve_slitherlink function (see Tuning the Slitherlink Problem), we are interested in comparing its performance against the default one. To do so, we create an execution scenario using OptiLog’s Running module:

from optilog.running import RunningScenario
from optilog.blackbox import ExecutionConstraints, RunSolver

from slitherlink import solve_slitherlink


def solve_slitherlink_no_remove(instance, seed):
    return solve_slitherlink(instance, seed, remove_unconstr_subcycles=False)


if __name__ == "__main__":
    runner = RunningScenario(
        solvers={
            "default-remove-subcycles": solve_slitherlink,
            "default-no-remove": solve_slitherlink_no_remove,
            "gga": "./configs/best-conf.sh",
        },
        tasks="./instances/test/*.txt",
        submit_file="./enque.sh",
        constraints=ExecutionConstraints(
            s_virtual_memory="6G",
            s_wall_time=300,
            enforcer=RunSolver(),
        ),
        slots=1,
        seeds=[1, 2, 3],
        unbuffer=True,
    )

    runner.generate_scenario(scenario_dir="./default_running")

First, we describe the settings of our scenario. In particular, we will run three solvers: the default configuration, which removes the unconstrained subcycles (the solve_slitherlink function); the default configuration without removing the unconstrained subcycles (the solve_slitherlink_no_remove function); and the tuned configuration found by GGA (which OptiLog automatically extracted into the script ./configs/best-conf.sh).

We generated another set of instances, different from the ones used in the Tuning process. These instances are located by expanding the glob ./instances/test/*.txt. Other options, such as the wall-clock time limit (s_wall_time) and the amount of memory available (s_virtual_memory), can be specified through ExecutionConstraints. A list of seeds is provided through the seeds parameter, so that each experiment is executed once per seed.
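To make the size of the resulting grid of runs concrete, the scenario launches one run per (solver, instance, seed) combination. A minimal sketch of this cross product, using a hypothetical three-instance test set in place of the real glob expansion:

```python
from itertools import product

# Hypothetical three-instance test set; the real scenario expands
# the glob ./instances/test/*.txt instead.
solvers = ["default-remove-subcycles", "default-no-remove", "gga"]
instances = ["108.txt", "109.txt", "130.txt"]
seeds = [1, 2, 3]

# One run is submitted for every solver/instance/seed combination
runs = list(product(solvers, instances, seeds))
print(len(runs))  # 3 solvers * 3 instances * 3 seeds = 27 runs
```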

By default, OptiLog supports two optional tools: unbuffer [DonLibes21], which automatically flushes output to the log files, and runsolver [Rou11], which constrains the resources (time and memory) available to the process. To use these tools, they must be available in the PATH.
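A quick way to verify that both tools are reachable before generating the scenario is to look them up on the PATH with Python's standard library:

```python
import shutil

# Both tools must be discoverable through PATH for OptiLog to use them
for tool in ("runsolver", "unbuffer"):
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'NOT found on PATH'}")
```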

OptiLog provides a computing-agnostic running environment: the underlying tasks are delegated to a job scheduler, such as SGE [Microsystems01] or Task Spooler [LluisBiRossell21], for execution. The special parameter submit_file points to the script in charge of submitting each job to the scheduler. See the Examples for submit_file section for example submission commands for different backends.

Finally, the generate_scenario() method creates a scenario directory containing all the settings needed to run the experiments (by default, a directory called default_running). The user can then run these experiments with the optilog-running command provided by OptiLog:

$ optilog-running default_running/ submit
Your job 8575807 ("default-remove-subcycles") has been submitted
(...)
Your job 8576306 ("gga") has been submitted

Unless a specific directory for storing the logs is indicated through the logs parameter, the directory ./default_running/logs is automatically created, with subdirectories output and error where the logs for stdout and stderr are stored.
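The log files can be inspected directly from Python once some runs have finished. A minimal sketch, assuming the default logs location described above:

```python
from pathlib import Path

# Default logs location created by the scenario above
logs = Path("default_running") / "logs"

# List whatever stdout/stderr logs exist so far; globbing a directory
# that does not exist yet simply yields nothing
for stream in ("output", "error"):
    for log_file in sorted((logs / stream).glob("*")):
        print(stream, log_file.name)
```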

Processing Experimental Results

The Running module also provides functionality to automatically process the logs of the experiments. Below we show the code that parses the logs for the Slitherlink problem, together with the output of its execution.

 1from optilog.running import ParsingInfo, parse_scenario
 2
 3pi = ParsingInfo()
 4pi.add_filter("res", r"^s (.+)", timestamp=True)
 5
 6df = parse_scenario("./default_running", parsing_info=pi)
 7print(df.head())
 8
 9
10def solved(col):
11    return (col == "YES").sum() / col.shape[0]
12
13
14def parK(col, k=10):
15    return col.fillna(1000 * 300 * k).mean()
16
17
18def stats(df, solver):
19    print("* Pctg solved:", df[(solver, "res")].agg(solved))
20    print("* PAR10:", df[(solver, "wall_time_res")].agg(parK), end="\n\n")
21
22
23print("DEFAULT (remove subcycles)")
24stats(df, "default-remove-subcycles")
25print("DEFAULT (no remove subcycles)")
26stats(df, "default-no-remove")
27print("GGA")
28stats(df, "gga")
                gga                            default-no-remove                            default-remove-subcycles
                res cpu_time_res wall_time_res               res cpu_time_res wall_time_res                      res cpu_time_res wall_time_res
instance seed
130.txt  1      YES       155.54        156.03              None          NaN           NaN                     None          NaN           NaN
108.txt  1      YES        96.19         97.24              None          NaN           NaN                      YES       264.00        269.59
145.txt  1      YES       145.39        146.96               YES       117.20        119.40                      YES       123.47        125.48
109.txt  1      YES       288.89        293.30               YES       255.70        266.47                      YES       257.62        263.16
136.txt  1     None          NaN           NaN               YES        73.44         74.40                     None          NaN           NaN
DEFAULT (remove subcycles)
* Pctg solved: 0.51
* PAR10: 1470064.2421999997

DEFAULT (no remove subcycles)
* Pctg solved: 0.43
* PAR10: 1710052.2067000002

GGA
* Pctg solved: 0.89
* PAR10: 330131.0026

The experiment logs are parsed line by line according to a set of filters described with a ParsingInfo object. OptiLog includes template parsers for SAT and MaxSAT output following the SAT Competition and MaxSAT Evaluation [BBJarvisaloM20] output formats.

In this example, a ParsingInfo object is instantiated (line 3). Line 4 adds a filter that parses the result of the experiment (s YES or s NO) and also records the time, in milliseconds, at which the result is reported. More advanced options are also supported, such as automatic casting of the parsed results and retrieval of all matches of the regular expression. The parse_scenario call (line 6) parses the results of the experiments and returns a Pandas [pdt20] dataframe with the parsed data.
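The filter added on line 4 is a plain regular expression; independently of OptiLog, its capture group can be inspected with Python's re module to see exactly what the "res" column will contain:

```python
import re

# The same pattern used in the ParsingInfo filter (line 4)
pattern = re.compile(r"^s (.+)")

# Hypothetical log lines; only the "s ..." line matches
for line in ["c search started", "s YES", "o 42"]:
    m = pattern.match(line)
    if m:
        print("res =", m.group(1))  # prints: res = YES
```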

Lines 10-20 process the experiment results using some basic Pandas functions. Finally, lines 23-28 print the final results of our experiments. In particular, we found that the automatic configuration of the solve_slitherlink function with the Tuning module and GGA allows us to solve 38 percentage points more instances than the default configuration (89% vs. 51%) and reduces the PAR10 score by about 77%.
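The parK helper above can be illustrated in isolation: unsolved runs (NaN in the dataframe, None here) are replaced by k times the 300-second time limit, expressed in milliseconds, before averaging. A minimal standalone sketch with hypothetical wall times:

```python
# Standalone sketch of the PAR-k metric used above
def par_k(times_ms, limit_s=300, k=10):
    penalty = 1000 * limit_s * k  # 3,000,000 ms for an unsolved run
    filled = [t if t is not None else penalty for t in times_ms]
    return sum(filled) / len(filled)

# Hypothetical wall times in ms; None marks a timeout
print(par_k([156030, 97240, None]))  # (156030 + 97240 + 3000000) / 3
```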


© Copyright 2023, Logic and Optimization Group.
