Running the Slitherlink Problem

Properly setting up the experimentation environment required to evaluate a solving approach can be a time-consuming task, as well as a source of bugs leading to wrong evaluations. OptiLog supports this process by automating as many of its parts as possible.

Now that we have a tuned configuration for our solve_slitherlink function (see Tuning the Slitherlink Problem), we are interested in comparing its performance against the default one. To do so, we create an execution scenario using OptiLog’s Running module:

# Note: depending on the OptiLog version, ExecutionConstraints may be
# exported from optilog.blackbox instead of optilog.running.
from optilog.running import SolverRunner, ExecutionConstraints

runner = SolverRunner(
    solvers={"default": "slitherlink.py", "gga": "./configs/best-conf.sh"},
    tasks="./instances/test/*.txt",
    scenario_dir="./default_running",
    submit_file="./enque.sh",
    constraints=ExecutionConstraints(
        memory="6G",
        wall_time=300
    ),
    slots=1,
    seeds=[1, 2, 3],
    unbuffer=False,
    runsolver=False,
)

runner.generate_scenario()

First, we describe the settings of our scenario. In particular, we will run the default configuration by executing the slitherlink.py file, and the tuned configuration found by GGA (which OptiLog automatically extracted into the script ./configs/best-conf.sh). The keys of the solvers dictionary ("default" and "gga") identify each configuration in the generated results. The problem instances to be solved are located by expanding the glob ./instances/test/*.txt. Other options can also be specified, such as the wall-clock time limit and the memory available to each run (set through the ExecutionConstraints object via wall_time and memory) and the number of slots (slots). A list of seeds is provided through the seeds parameter, so that each experiment is executed once for each seed.

By default, OptiLog is compatible with two optional tools: unbuffer [DonLibes21], to automatically flush output to the log files, and runsolver [Rou11], to constrain the resources (time and memory) available to the process. In order to use these tools, they must be available in the PATH (in this example both are disabled by passing unbuffer=False and runsolver=False).
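If you plan to enable them, a quick way to check their availability is with Python’s shutil.which (a minimal standalone sketch, not part of OptiLog itself):

import shutil

# Check that the optional external tools are reachable through the PATH.
for tool in ("unbuffer", "runsolver"):
    location = shutil.which(tool)
    print(f"{tool}: {location if location else 'not found in PATH'}")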

OptiLog provides a running environment that submits tasks to a computing-agnostic backend. This means that the underlying tasks are delegated to a job scheduler such as SGE [Microsystems01] or Task Spooler [LluisBiRossell21] in order to be executed. The script passed through submit_file (in this example ./enque.sh) is in charge of launching each task. See the Examples for submit_command section for some examples of submission commands for different backends.
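As an illustration, such a submission script could simply forward each task to the chosen scheduler. The sketch below is a hypothetical Python version targeting Task Spooler (tsp); it assumes the script receives the command to run as its arguments, which may differ from how your backend expects to be invoked:

#!/usr/bin/env python3
"""Hypothetical submission script: enqueue one task with Task Spooler."""
import subprocess
import sys

# Forward the task command (received as arguments) to tsp, which will
# queue it and run it when a slot becomes available.
subprocess.run(["tsp", *sys.argv[1:]], check=True)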

Finally, the generate_scenario() method creates a scenario directory containing all the settings needed to run the experiments (by default, a directory called default_running). The user can then easily run these experiments with the optilog-scenario command provided by OptiLog:

> optilog-scenario default_running/ launch
Your job 8575807 ("default") has been submitted
(...)
Your job 8576306 ("gga") has been submitted

Unless a specific directory for storing the logs is indicated through the logs parameter, the directory ./default_running/logs is created automatically, with subdirectories output and error where the stdout and stderr logs are stored.
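Once the runs have finished, the generated logs can be inspected directly. The following snippet is a small sketch that lists them, assuming the default layout just described:

from pathlib import Path

# Default layout: <scenario_dir>/logs/output and <scenario_dir>/logs/error
logs_dir = Path("./default_running/logs")
for stream in ("output", "error"):
    for log_file in sorted((logs_dir / stream).glob("*")):
        print(f"[{stream}] {log_file.name}")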

Processing Experimental Results

The Running module also includes functionality to automatically process the logs of the experiments. Below we show the code that parses the logs for the Slitherlink problem, followed by the output of its execution.

from optilog.running import ParsingInfo, parse_scenario

pi = ParsingInfo()
pi.add_filter("res", r"^s (.+)", timestamp=True)

df = parse_scenario("./default_running", parsing_info=pi)
print(df.head())


def solved(col):
    # Fraction of runs whose reported result is YES
    return (col == "YES").sum() / col.shape[0]


def parK(col, k=10):
    # Penalized average runtime: unsolved runs (NaN) are counted as
    # k times the 300 s wall-time limit, expressed in milliseconds
    return col.fillna(1000 * 300 * k).mean()


def stats(df, solver):
    print("* Pctg solved:", df[(solver, "res")].agg(solved))
    print("* PAR10:", df[(solver, "time_res")].agg(parK), end="\n\n")


print("DEFAULT")
stats(df, "default")
print("GGA")
stats(df, "gga")

              default            gga
                  res  time_res  res  time_res
instance seed
130.txt  1        YES  267890.0  YES  288112.0
         2        YES  265512.0  YES  166154.0
         3        YES  264812.0  YES  112420.0
108.txt  3        YES  135638.0  YES  131559.0
         2        YES  133669.0  YES   89008.0
DEFAULT
* Pctg solved: 0.51
* PAR10: 1541310.3933333333

GGA
* Pctg solved: 0.87
* PAR10: 520862.7966666667

The experiment logs are parsed line by line according to a set of filters described with a ParsingInfo object. OptiLog also includes template parsers for SAT and MaxSAT output following the SAT Competition and MaxSAT Evaluation [BBJarvisaloM20] output formats.

In this example, a ParsingInfo object is instantiated and a filter named res is added that parses the result of the experiment (s YES or s NO) and, since timestamp=True, also records the time (in milliseconds) at which the result is reported. More advanced options are also supported, such as the automatic casting of the parsed results and the retrieval of all matches of the regular expression. The parse_scenario call then parses the results of the experiments and returns a Pandas [pdt20] dataframe with the parsed data.
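To make the filter concrete, the regular expression ^s (.+) passed to add_filter simply captures everything after the leading s on a result line. This small standalone sketch (plain Python, independent of OptiLog) shows what it extracts:

import re

# Same pattern as in the "res" filter above.
pattern = re.compile(r"^s (.+)")

for line in ("s YES", "s NO", "c some comment line"):
    match = pattern.match(line)
    print(line, "->", match.group(1) if match else "no match")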

The helper functions solved, parK and stats then process the experiment results using some basic Pandas operations, and the final calls to stats print the results of our experiments. In particular, we find that the automatic configuration of the solve_slitherlink function with the Tuning module and GGA allows us to solve about 36 percentage points more instances than the default configuration (87% versus 51%), and improves the PAR10 score by about 66%.
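As a quick sanity check, these aggregate figures can be recomputed directly from the values reported above (a standalone sketch, not part of the parsing code):

# Values taken from the experiment output shown above.
default_solved, gga_solved = 0.51, 0.87
default_par10, gga_par10 = 1541310.3933333333, 520862.7966666667

print(f"Extra instances solved: {gga_solved - default_solved:.0%}")  # ~36 pct. points
print(f"PAR10 improvement: {1 - gga_par10 / default_par10:.0%}")     # ~66%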
