Google OR-Tools v9.12
a fast and portable software suite for combinatorial optimization
#include <var_domination.h>
Public Member Functions

void Reset(int num_variables)
void CannotDecrease(absl::Span<const int> refs, int ct_index = -1)
    All constraints should be mapped to one or more calls to these functions.
void CannotIncrease(absl::Span<const int> refs, int ct_index = -1)
void CannotMove(absl::Span<const int> refs, int ct_index = -1)
template<typename LinearProto>
void ProcessLinearConstraint(bool is_objective, const PresolveContext &context, const LinearProto &linear, int64_t min_activity, int64_t max_activity, int ct_index = -1)
    Most of the logic here deals with linear constraints.
bool Strengthen(PresolveContext *context)
int64_t CanFreelyDecreaseUntil(int ref) const
int NumDeletedConstraints() const
    Reset on each Strengthen() call.
This detects variables that can move freely in one direction, or that can move freely as long as their values do not cross a bound.
Definition at line 223 of file var_domination.h.
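The overall contract (call Reset(), map every constraint through CannotDecrease()/CannotIncrease()/CannotMove(), then query how far each variable can move) can be illustrated with a toy tracker. This is a hedged sketch of the idea only, not the OR-Tools implementation; `FreeMoveTracker` and `CannotDecreaseBelow` are invented names.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical sketch (not the OR-Tools code): track, per variable, the
// tightest value below which it can no longer decrease freely. Each
// constraint that blocks a decrease raises that limit.
struct FreeMoveTracker {
  explicit FreeMoveTracker(int num_variables)
      : can_decrease_until_(num_variables,
                            std::numeric_limits<int64_t>::min()) {}

  // Record that some constraint forbids `var` from going below `bound`.
  void CannotDecreaseBelow(int var, int64_t bound) {
    can_decrease_until_[var] = std::max(can_decrease_until_[var], bound);
  }

  // The variable can always freely decrease until this value.
  int64_t CanFreelyDecreaseUntil(int var) const {
    return can_decrease_until_[var];
  }

  std::vector<int64_t> can_decrease_until_;
};
```

A variable never mentioned by any constraint keeps the sentinel minimum, i.e. it can decrease without limit.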
int64_t operations_research::sat::DualBoundStrengthening::CanFreelyDecreaseUntil(int ref) const
inline
The given ref can always freely decrease until the returned value.
Definition at line 256 of file var_domination.h.
void operations_research::sat::DualBoundStrengthening::CannotDecrease(absl::Span<const int> refs, int ct_index = -1)
All constraints should be mapped to one or more calls to these functions.
Optim: We cache pointers to avoid refetching them in the loop.
Definition at line 564 of file var_domination.cc.
void operations_research::sat::DualBoundStrengthening::CannotIncrease(absl::Span<const int> refs, int ct_index = -1)
Optim: We cache pointers to avoid refetching them in the loop.
Definition at line 578 of file var_domination.cc.
void operations_research::sat::DualBoundStrengthening::CannotMove(absl::Span<const int> refs, int ct_index = -1)
Optim: We cache pointers to avoid refetching them in the loop.
Definition at line 592 of file var_domination.cc.
int operations_research::sat::DualBoundStrengthening::NumDeletedConstraints() const
inline
Reset on each Strengthen() call.
Definition at line 261 of file var_domination.h.
template<typename LinearProto>
void operations_research::sat::DualBoundStrengthening::ProcessLinearConstraint(bool is_objective, const PresolveContext &context, const LinearProto &linear, int64_t min_activity, int64_t max_activity, int ct_index = -1)
Most of the logic here deals with linear constraints.
Lower-bound (lb) side.
We never want to increase the objective value. Note that if the objective is lower bounded, we checked that on the lb side above.
Upper-bound (ub) side.
Definition at line 611 of file var_domination.cc.
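The activity reasoning for the lb side can be sketched with a small, hypothetical helper (`FreeDecreaseAmount` is an invented name, not an OR-Tools API): for a constraint lb <= c*x + rest with c > 0, decreasing x by d lowers the minimum activity by c*d, which stays valid while min_activity - c*d >= lb.

```cpp
#include <cstdint>

// Hypothetical helper (not part of OR-Tools): largest amount by which x,
// with positive coefficient c in a constraint lb <= c*x + rest, can
// decrease without risking a violation of lb, given the constraint's
// current minimum activity. Floor division keeps us on the safe side.
int64_t FreeDecreaseAmount(int64_t c, int64_t min_activity, int64_t lb) {
  if (min_activity <= lb) return 0;  // no slack: any decrease may violate lb
  return (min_activity - lb) / c;
}
```

For example, with c = 2, min_activity = 10 and lb = 3, x can freely decrease by 3 units before the worst-case activity could drop below the bound.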
void operations_research::sat::DualBoundStrengthening::Reset(int num_variables)
inline
Reset the class to a clean state. This must be called before processing the constraints.
Definition at line 227 of file var_domination.h.
bool operations_research::sat::DualBoundStrengthening::Strengthen(PresolveContext *context)
Once ALL constraints have been processed, call this to fix variables or reduce their domain if possible.
Fix to lb?
Fix to ub?
Here we can fix to any value in [ub_limit, lb_limit] that is compatible with the current domain. We prefer zero or the lowest possible magnitude.
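The "prefer zero or the lowest possible magnitude" rule above can be written out as a tiny sketch (hypothetical helper, ignoring domain holes; `LowestMagnitudeIn` is an invented name):

```cpp
#include <cstdint>

// Hypothetical sketch of the value-selection rule: among the values of a
// non-empty interval [lo, hi], prefer zero if it is inside, otherwise
// take the endpoint of smallest magnitude.
int64_t LowestMagnitudeIn(int64_t lo, int64_t hi) {
  if (lo <= 0 && 0 <= hi) return 0;  // zero lies in the interval
  return lo > 0 ? lo : hi;           // endpoint closest to zero
}
```

The real code additionally intersects the interval with the variable's current domain, which may have holes.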
Here we can reduce the domain, but we must be careful when the domain has holes.
For detecting near-duplicate constraints that can be made equivalent: hash -> (ct_index, modified ref).
If there is only one blocking constraint, we can simplify the problem in a few situations.
If we have an enforcement literal then we can always add the implication "not enforced" => var at its lower bound. If we also have enforced => fixed var, then var is in an affine relation with the enforcement literal and we can remove one variable.
Corner case.
Note(user): If we have enforced => var fixed, we could actually just have removed var from the constraint if it was implied by another constraint. If not, because of the new affine relation we could remove it right away.
The new affine relation added below can break the hint if hint(enf) is 0. In this case the only constraint blocking ref from decreasing [ct = enf => (var = implied)] does not apply. We can thus set the hint of positive_ref to bound to preserve the hint feasibility.
positive_ref = enf * implied + (1 - enf) * bound.
enf_var = PositiveRef(enf). positive_ref = (1 - enf_var) * implied + enf_var * bound.
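The hint-repair formula can be checked directly (a minimal sketch; `AffineValue` is an invented name, with enf restricted to {0, 1}):

```cpp
#include <cstdint>

// positive_ref = enf * implied + (1 - enf) * bound, for enf in {0, 1}:
// when the enforcement literal is true we get the implied value, and
// when it is false we fall back to the bound.
int64_t AffineValue(int64_t enf, int64_t implied, int64_t bound) {
  return enf * implied + (1 - enf) * bound;
}
```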
If we have a literal, we always add the implication. This seems like a good thing to do.
The new implication can break the hint if hint(enf) is 0 and hint(ref) is 1. In this case the only locking constraint ct does not apply and thus does not prevent decreasing the hint of ref in order to preserve the hint feasibility.
Can we add an implication not_enforced => var at its bound?
We can make enf equivalent to the constraint instead of just =>. This seems useful since internally we always use fully reified encoding.
If it is different, we have an equivalence now, and we can remove the constraint.
Extending ct = "not(ref) => encoding_lit" to an equality can break the hint only if hint(ref) = hint(encoding_lit) = 1. But in this case ct is actually not blocking ref from decreasing. We can thus set its hint to 0 to preserve the hint feasibility.
Extending ct = "not(ref) => not(encoding_lit)" to an equality can break the hint only if hint(encoding_lit) = 0 and hint(ref) = 1. But in this case ct is actually not blocking ref from decreasing. We can thus set its hint to 0 to preserve the hint feasibility.
encoding_lit may be a new variable added during this loop, so make sure we cannot go out of bounds.
The new_ct constraint ref => (var in complement) below can break the hint if hint(var) is not in complement. In this case, set the hint of ref to false. This should be safe since the only constraint blocking ref from decreasing is ct = not(ref) => (var in rhs), which does not apply when ref is true.
If we have two Booleans, each with a single blocking constraint that is the same if we "swap" the Boolean references, then we can make the Booleans equivalent. This is because they will be forced to their bad value only if it is needed by the other part of their respective blocking constraints. This is related to FindAlmostIdenticalLinearConstraints(), except that here we can deal with non-equality or enforced constraints.
Optimization: Skip if this constraint was already used as the single blocking constraint of another variable. If this is the case, it cannot be equivalent to another constraint with "var" substituted, since otherwise var would have two blocking constraints. We can safely skip it; this makes sure this is in O(num_entries) and not more.
Already present!
Corner case: We just discovered l => ct and not(l) => ct. So ct must just always be true. And since that was the only blocking constraint for l, we can just set l to an arbitrary value.
We have a true equality. The two refs can be made equivalent.
If the two hints are different then, since both refs have an equivalent blocking constraint, the constraint is actually not blocking the ref at 1 from decreasing. Hence we can set its hint to false to preserve the hint feasibility despite the new Boolean equality constraint.
We can delete one of the constraints since they are duplicates now.
Other potential cases?
If (a => b) is the only constraint blocking a literal a in the up direction, then we can set a == b!
Recover a => b where a has a unique up_lock (i.e., this constraint).
Here, we can only add the equivalence if the literal is the only one on the lhs, otherwise there are actually more locks.
If hint(a) is false we can always set it to hint(b) since this can only increase its value. If hint(a) is true then hint(b) must be true as well if the hint is feasible, due to the a => b constraint. Setting hint(a) to hint(b) is thus always safe. The opposite is true as well.
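The hint-repair argument for the a == b substitution can be verified exhaustively: after replacing hint(a) with hint(b), the implication a => b holds by construction. A minimal sketch (`Implies` and `RepairKeepsImplication` are invented names):

```cpp
// a => b as a Boolean function.
bool Implies(bool a, bool b) { return !a || b; }

// After the repair sets hint(a) = hint(b), the constraint a => b becomes
// hint(b) => hint(b), which holds for either value of the hint.
bool RepairKeepsImplication(bool hint_b) { return Implies(hint_b, hint_b); }
```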
Definition at line 725 of file var_domination.cc.