Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
#include <optimization.h>
Public Member Functions

CoreBasedOptimizer (IntegerVariable objective_var, absl::Span< const IntegerVariable > variables, absl::Span< const IntegerValue > coefficients, std::function< void()> feasible_solution_observer, Model *model)
SatSolver::Status Optimize ()
SatSolver::Status OptimizeWithSatEncoding (absl::Span< const Literal > literals, absl::Span< const IntegerVariable > vars, absl::Span< const Coefficient > coefficients, Coefficient offset)
Same as MinimizeIntegerVariableWithLinearScanAndLazyEncoding(), but uses a core-based approach instead. Note that the given objective_var is only used for reporting the lower bound/upper bound and does not need to be linked with its linear representation.
Unlike MinimizeIntegerVariableWithLinearScanAndLazyEncoding(), this function just returns the last solver status. In particular, if the status is INFEASIBLE but feasible_solution_observer() was called, it means we are at OPTIMAL.
Definition at line 93 of file optimization.h.
operations_research::sat::CoreBasedOptimizer::CoreBasedOptimizer (
    IntegerVariable objective_var,
    absl::Span< const IntegerVariable > variables,
    absl::Span< const IntegerValue > coefficients,
    std::function< void()> feasible_solution_observer,
    Model * model )
This is used by the "stratified" approach. We will only consider terms with a weight not lower than this threshold. The threshold will decrease as the algorithm progresses.
Definition at line 436 of file optimization.cc.
SatSolver::Status operations_research::sat::CoreBasedOptimizer::Optimize ( )
Hack: If the objective is fully Boolean, we use the OptimizeWithSatEncoding() version as it seems to be better.
In some corner cases, it is possible that GetOrCreateAssociatedLiteral() returns a literal identical to, or the negation of, another term's literal. We do not support this below, so we need to make sure this is not the case.
Or alternatively, we could try this, or something like it, on the literals from the cores as they are found. We should probably make sure that, if it exists, a core of size two is always added. And for such a core, we can always try to see if the "at most one" can be extended.
Start the algorithm.
Bulk cover optimization.
We assume all terms (modulo stratification) are at their lower bounds.
Skip fixed terms. We still keep them around for a proper lower-bound computation.
Only consider the terms above the threshold.
No assumptions with the current stratification? Use the next one.
If only one or two assumptions are left, we switch the algorithm.
Display the progress.
Convert integer_assumptions to Literals.
Tricky: In some rare cases, it is possible that the same literal corresponds to more than one assumption. In this case, we can just pick one of them when converting a core back to term indices.
Solve under the assumptions.
Process the cores by creating new variables and transferring the minimum weight of each core to it.
This just increases the lower bound of the corresponding node.
Compute the min weight of all the terms in the core. The lower bound will be increased by that much because at least one assumption in the core must be true. This is also why we can start at 1 for new_var_lb.
When this happens, the core is now trivially "minimized" by the new bound on this variable, so there is no point in adding it.
We will "transfer" min_weight from all the variables of the core to a new variable.
Sum variables in the core <= new_var.
Abort if we reached the time limit. Note that we still add any cores we found, in case the solve is split into "chunks".
Definition at line 1028 of file optimization.cc.
SatSolver::Status operations_research::sat::CoreBasedOptimizer::OptimizeWithSatEncoding (
    absl::Span< const Literal > literals,
    absl::Span< const IntegerVariable > vars,
    absl::Span< const Coefficient > coefficients,
    Coefficient offset )
A different way to encode the objective as cores are found.
If the vector of literals is passed, it will be used; otherwise the passed integer variables will be encoded. In both cases, the vector used should be the same size as the coefficients vector.
It seems to be more powerful, but it isn't completely implemented yet.
Create one initial node per variable with a cost.
All Booleans.
Use integer encoding.
Initialize the bounds. These are in terms of the number of variables not at their minimal value.
This is used by the "stratified" approach.
Start the algorithm.
Report the improvement.
We adapt the stratified lower bound when the gap is small. All literals with such a weight will be in an at_most_one relationship, which will lead to a nice encoding if we find a core.
We do not have any assumptions anymore, but we still need to check whether the problem is feasible or not!
Solve under the assumptions.
If not all assumptions were taken, continue with a lower stratified bound. Otherwise we have an optimal solution.
We have a new core.
Compute the min weight of all the nodes in the core. The lower bound will be increased by that much.
We only count an iteration when we have found a core.
Definition at line 669 of file optimization.cc.