Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
operations_research::sat::CoreBasedOptimizer Class Reference

#include <optimization.h>

Public Member Functions

 CoreBasedOptimizer (IntegerVariable objective_var, absl::Span< const IntegerVariable > variables, absl::Span< const IntegerValue > coefficients, std::function< void()> feasible_solution_observer, Model *model)
 
SatSolver::Status Optimize ()
 
SatSolver::Status OptimizeWithSatEncoding (absl::Span< const Literal > literals, absl::Span< const IntegerVariable > vars, absl::Span< const Coefficient > coefficients, Coefficient offset)
 

Detailed Description

Same as MinimizeIntegerVariableWithLinearScanAndLazyEncoding() but uses a core-based approach instead. Note that the given objective_var is only used for reporting the lower bound/upper bound and does not need to be linked to its linear representation.

Unlike MinimizeIntegerVariableWithLinearScanAndLazyEncoding(), this function just returns the last solver status. In particular, if the status is INFEASIBLE but feasible_solution_observer() was called at least once, we are at OPTIMAL.

Definition at line 93 of file optimization.h.
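
A minimal usage sketch, assuming the class is driven directly through a Model. The toy objective x + 2*y, the NewIntegerVariable() helper and the include paths are illustrative assumptions; only the constructor and Optimize() signatures come from this page.

#include <functional>
#include <vector>

#include "ortools/sat/integer.h"
#include "ortools/sat/model.h"
#include "ortools/sat/optimization.h"

namespace operations_research::sat {

void MinimizeSumSketch() {
  Model model;

  // Hypothetical objective: minimize x + 2*y.
  const IntegerVariable x = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable y = model.Add(NewIntegerVariable(0, 10));
  // objective_var is only used to report bounds; it does not need to be
  // linked to the linear expression (see the description above).
  const IntegerVariable objective_var = model.Add(NewIntegerVariable(0, 30));

  const std::vector<IntegerVariable> vars = {x, y};
  const std::vector<IntegerValue> coeffs = {IntegerValue(1), IntegerValue(2)};

  CoreBasedOptimizer optimizer(
      objective_var, vars, coeffs,
      /*feasible_solution_observer=*/[]() {
        // Inspect or record the current solution here.
      },
      &model);

  const SatSolver::Status status = optimizer.Optimize();
  // Per the description above: INFEASIBLE after at least one observer call
  // means the last reported solution is optimal.
  (void)status;
}

}  // namespace operations_research::sat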

Constructor & Destructor Documentation

◆ CoreBasedOptimizer()

operations_research::sat::CoreBasedOptimizer::CoreBasedOptimizer ( IntegerVariable objective_var,
absl::Span< const IntegerVariable > variables,
absl::Span< const IntegerValue > coefficients,
std::function< void()> feasible_solution_observer,
Model * model )

This is used by the "stratified" approach: we only consider terms with a weight not lower than this threshold. The threshold decreases as the algorithm progresses.

Definition at line 436 of file optimization.cc.

Member Function Documentation

◆ Optimize()

SatSolver::Status operations_research::sat::CoreBasedOptimizer::Optimize ( )
Todo
(user): Change the algorithm slightly to allow resuming from the last aborted position. Currently, the search is "resumable", but it will redo some of the work already done, so it might just never find anything.

Hack: if the objective is fully Boolean, we use the OptimizeWithSatEncoding() version, as it seems to perform better.

Todo
(user): Try to understand exactly why and merge both code paths.

In some corner cases, GetOrCreateAssociatedLiteral() may return a literal that is identical to, or the negation of, another term's literal. We don't support this below, so we need to make sure this is not the case.

Todo
(user): It might be interesting to redo this kind of presolving once high-cost Booleans have been fixed, as we might have more at-most-one relations between objective literals by then.

Alternatively, we could try this (or something like it) on the literals from the cores as they are found. We should probably make sure that a core of size two, if it exists, is always added. For such a core, we can always try to see if the "at most one" can be extended.

Todo
(user): The core is returned in the same order as the assumptions, so we don't really need this map; we could just do a linear scan to recover which nodes are part of the core. This however needs to be properly unit tested before usage.

Start the algorithm.

Todo
(user): This always resets the solver to level zero. Because of that we don't resume a solve in "chunk" perfectly. Fix.

Bulk cover optimization.

Todo
(user): If the search is aborted during this phase and we solve in "chunk", we don't resume perfectly from where it was. Fix.

We assume all terms (modulo stratification) are at their lower bound.

Todo
(user): These can be simply removed from the list.

Skip fixed terms. We still keep them around for a proper lower-bound computation.

Todo
(user): we could keep an objective offset instead.

Only consider the terms above the threshold.

No assumptions with the current stratification? Use the next one.
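
A rough sketch of the term selection described in the steps above (skip fixed terms, keep only terms whose weight reaches the current stratification threshold). ObjectiveTerm and SelectTermsToAssume() are illustrative names, not the actual implementation.

#include <vector>

#include "ortools/sat/integer.h"

namespace operations_research::sat {

// Hypothetical stand-in for the real internal term representation.
struct ObjectiveTerm {
  IntegerVariable var;
  IntegerValue weight;
  IntegerValue lower_bound;  // Current lower bound of the term's variable.
  IntegerValue upper_bound;  // Current upper bound of the term's variable.
};

std::vector<ObjectiveTerm> SelectTermsToAssume(
    const std::vector<ObjectiveTerm>& terms, IntegerValue threshold) {
  std::vector<ObjectiveTerm> selected;
  for (const ObjectiveTerm& term : terms) {
    // Skip fixed terms; they only matter for the lower-bound computation.
    if (term.lower_bound == term.upper_bound) continue;
    // Stratification: only assume terms at or above the current threshold.
    if (term.weight < threshold) continue;
    selected.push_back(term);
  }
  // If this comes back empty, the caller moves on to the next (lower)
  // threshold, as described above.
  return selected;
}

}  // namespace operations_research::sat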

If there are only one or two assumptions left, we switch the algorithm.

Display the progress.

Convert integer_assumptions to Literals.

Tricky: in some rare cases, the same literal may correspond to more than one assumption. In this case, we can just pick one of them when converting a core back to term indices.
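
A small sketch of the "pick one of them" rule just described; the function name and map layout are assumptions made for illustration.

#include <vector>

#include "absl/container/flat_hash_map.h"
#include "ortools/sat/sat_base.h"

namespace operations_research::sat {

// Maps each assumption literal to one term index. If two terms end up with
// the same literal, the first one wins, which is enough to convert a core
// back to term indices.
absl::flat_hash_map<int, int> IndexAssumptions(
    const std::vector<Literal>& assumptions) {
  absl::flat_hash_map<int, int> literal_index_to_term;
  for (int i = 0; i < static_cast<int>(assumptions.size()); ++i) {
    // insert() keeps the existing entry on duplicates: "just pick one".
    literal_index_to_term.insert({assumptions[i].Index().value(), i});
  }
  return literal_index_to_term;
}

}  // namespace operations_research::sat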

Todo
(user): We can probably be smarter about the cost of the assumptions though.

Solve under the assumptions.

Todo
(user): If the "search" is interrupted while computing cores, we currently do not resume it flawlessly. We however add any cores we found before aborting.

Process the cores by creating a new variable per core and transferring the minimum weight of that core to it.

This just increases the lower bound of the corresponding node.

Todo
(user): Maybe the solver should do it right away.

Compute the min weight of all the terms in the core. The lower bound will be increased by that much because at least one assumption in the core must be true. This is also why we can start at 1 for new_var_lb.

When this happens, the core is trivially "minimized" by the new bound on this variable, so there is no point in adding it.

We will "transfer" min_weight from all the variables of the core to a new variable.

Sum variables in the core <= new_var.
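
A pseudocode-level sketch of the weight transfer described above; Term and TransferMinWeight() are illustrative names. After the call, the caller introduces new_var with weight min_weight and an initial lower bound of 1, and adds the constraint "sum of the variables in the core <= new_var".

#include <algorithm>
#include <vector>

#include "ortools/sat/integer.h"

namespace operations_research::sat {

// Hypothetical stand-in for one objective term.
struct Term {
  IntegerVariable var;
  IntegerValue weight;
};

// Returns the minimum weight over a non-empty core and removes ("transfers")
// that amount from every term in the core. At least one assumption in the
// core must be true, so the objective lower bound increases by min_weight.
IntegerValue TransferMinWeight(std::vector<Term>& core_terms) {
  IntegerValue min_weight = core_terms.front().weight;
  for (const Term& term : core_terms) {
    min_weight = std::min(min_weight, term.weight);
  }
  for (Term& term : core_terms) {
    term.weight -= min_weight;  // Any remainder stays on the original term.
  }
  return min_weight;
}

}  // namespace operations_research::sat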

Abort if we reached the time limit. Note that we still add any cores we found in case the solve is split in "chunk".

Definition at line 1028 of file optimization.cc.

◆ OptimizeWithSatEncoding()

SatSolver::Status operations_research::sat::CoreBasedOptimizer::OptimizeWithSatEncoding ( absl::Span< const Literal > literals,
absl::Span< const IntegerVariable > vars,
absl::Span< const Coefficient > coefficients,
Coefficient offset )

A different way to encode the objective as cores are found.

If the vector of literals is passed, it will be used; otherwise the passed integer variables will be encoded. In both cases, the vector used should have the same size as the coefficients vector.

It seems to be more powerful, but it isn't completely implemented yet.

Todo
(user):
  • Support resuming for interleaved search.
  • Implement all core heuristics.
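
A hedged calling sketch for the fully Boolean case described above. Leaving the integer-variable span empty when literals are provided, and passing a zero offset, are assumptions based on the description; only the signature comes from this page.

#include "absl/types/span.h"
#include "ortools/sat/optimization.h"

namespace operations_research::sat {

SatSolver::Status MinimizeWeightedLiterals(
    CoreBasedOptimizer* optimizer, absl::Span<const Literal> literals,
    absl::Span<const Coefficient> weights) {
  // literals and weights must have the same size. With a non-empty literal
  // span, the integer variables are not used, so an empty span is passed.
  return optimizer->OptimizeWithSatEncoding(
      literals, /*vars=*/absl::Span<const IntegerVariable>(), weights,
      /*offset=*/Coefficient(0));
}

}  // namespace operations_research::sat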

Create one initial node per variable with a cost.

Todo
(user): We could create EncodingNode out of IntegerVariable.
Note
the node order and the assumptions extracted from it will be stable. In particular, new nodes are appended at the end, which makes the solver more likely to find cores involving only the first assumptions. This is important at the beginning so the solver has a chance to find a lot of small non-overlapping cores without the need for a dedicated non-overlapping core finder.
Todo
(user): It could still be beneficial to add one. Experiments.

All Booleans.

Use integer encoding.

Todo
(user): This might not be ideal if there are holes in the domain. It should work by adding duplicate literals, but we should be able to be more efficient.

Initialize the bounds. This is in terms of the number of variables not at their minimal value.

This is used by the "stratified" approach.

Start the algorithm.

Note
the objective_var_ upper bound is the one from the "improving" problem, so if we have a feasible solution, it will be the best solution's objective value minus 1.

Report the improvement.

Note
we have a callback that will do the same, but doing it earlier allows us to add more information.

We adapt the stratified lower bound when the gap is small. All literals with such a weight will be in an at_most_one relationship, which will lead to a nice encoding if we find a core.

We do not have any assumptions anymore, but we still need to see if the problem is feasible or not!

Solve under the assumptions.

Todo
(user): Find multiple cores like in the "main" algorithm. This just tries to solve with assumptions not involving the newly found core.
Todo
(user): With stratification, we sometimes just spend too much time trying to find a feasible solution/prove infeasibility, and we could instead just use stratification=0 to find easy cores and improve the lower bound.

If not all assumptions were taken, continue with a lower stratified bound. Otherwise we have an optimal solution.

We have a new core.

Compute the min weight of all the nodes in the core. The lower bound will be increased by that much.

We only count an iteration when we find a core.

Definition at line 669 of file optimization.cc.


The documentation for this class was generated from the following files:
optimization.h
optimization.cc