Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
operations_research::sat::DualBoundStrengthening Class Reference

#include <var_domination.h>

Public Member Functions

void Reset (int num_variables)
 
void CannotDecrease (absl::Span< const int > refs, int ct_index=-1)
 All constraints should be mapped to one or more calls to these functions.
 
void CannotIncrease (absl::Span< const int > refs, int ct_index=-1)
 
void CannotMove (absl::Span< const int > refs, int ct_index=-1)
 
template<typename LinearProto >
void ProcessLinearConstraint (bool is_objective, const PresolveContext &context, const LinearProto &linear, int64_t min_activity, int64_t max_activity, int ct_index=-1)
 Most of the logic here deals with linear constraints.
 
bool Strengthen (PresolveContext *context)
 
int64_t CanFreelyDecreaseUntil (int ref) const
 
int NumDeletedConstraints () const
 Reset on each Strengthen() call.
 

Detailed Description

This detects variables that can move freely in one direction, or that can move freely as long as their value does not cross a bound.

Todo
(user): This is actually an important step to do before scaling as it can usually reduce really large bounds!
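The idea can be sketched with a minimal per-variable lock counter (an illustrative sketch with assumed names, not the OR-Tools implementation): each constraint registers which direction it blocks for each of its variables, and a variable with no lock in one direction can move freely that way.

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch only: track, per variable, how many constraints
// block a decrease or an increase. A variable with zero locks in one
// direction can move freely in that direction, which is what dual bound
// strengthening exploits.
struct DirectionLocks {
  std::vector<int> down_locks;  // #constraints blocking a decrease
  std::vector<int> up_locks;    // #constraints blocking an increase

  explicit DirectionLocks(int num_vars)
      : down_locks(num_vars, 0), up_locks(num_vars, 0) {}

  void CannotDecrease(int var) { ++down_locks[var]; }
  void CannotIncrease(int var) { ++up_locks[var]; }
  void CannotMove(int var) {
    CannotDecrease(var);
    CannotIncrease(var);
  }

  bool CanFreelyDecrease(int var) const { return down_locks[var] == 0; }
  bool CanFreelyIncrease(int var) const { return up_locks[var] == 0; }
};
```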

Definition at line 223 of file var_domination.h.

Member Function Documentation

◆ CanFreelyDecreaseUntil()

int64_t operations_research::sat::DualBoundStrengthening::CanFreelyDecreaseUntil ( int ref) const
inline

The given ref can always freely decrease until the returned value.

Note
this does not take into account the domain of the variable.
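Because the returned value ignores the variable's domain, a caller has to combine it with the domain bound itself. A minimal caller-side sketch (the helper name is hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Hypothetical caller-side helper: since CanFreelyDecreaseUntil() does
// not take the variable's domain into account, the true lowest reachable
// value is the tighter of the returned threshold and the domain lower
// bound.
int64_t LowestReachableValue(int64_t freely_decrease_until,
                             int64_t domain_lb) {
  return std::max(freely_decrease_until, domain_lb);
}
```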

Definition at line 256 of file var_domination.h.

◆ CannotDecrease()

void operations_research::sat::DualBoundStrengthening::CannotDecrease ( absl::Span< const int > refs,
int ct_index = -1 )

All constraints should be mapped to one or more calls to these functions.

Todo
(user): No need to set locking_ct_index_[var] if num_locks_[var] > 1

Optim: We cache pointers to avoid refetching them in the loop.

Definition at line 560 of file var_domination.cc.

◆ CannotIncrease()

void operations_research::sat::DualBoundStrengthening::CannotIncrease ( absl::Span< const int > refs,
int ct_index = -1 )

Optim: We cache pointers to avoid refetching them in the loop.

Definition at line 574 of file var_domination.cc.

◆ CannotMove()

void operations_research::sat::DualBoundStrengthening::CannotMove ( absl::Span< const int > refs,
int ct_index = -1 )

Optim: We cache pointers to avoid refetching them in the loop.

Definition at line 588 of file var_domination.cc.

◆ NumDeletedConstraints()

int operations_research::sat::DualBoundStrengthening::NumDeletedConstraints ( ) const
inline

Reset on each Strengthen() call.

Definition at line 261 of file var_domination.h.

◆ ProcessLinearConstraint()

template<typename LinearProto >
void operations_research::sat::DualBoundStrengthening::ProcessLinearConstraint ( bool is_objective,
const PresolveContext & context,
const LinearProto & linear,
int64_t min_activity,
int64_t max_activity,
int ct_index = -1 )

Most of the logic here deals with linear constraints.

lb side.

We never want to increase the objective value. Note that if the objective is lower bounded, we checked that on the lb side above.

ub side.

Definition at line 607 of file var_domination.cc.

◆ Reset()

void operations_research::sat::DualBoundStrengthening::Reset ( int num_variables)
inline

Reset the class to a clean state. This must be called before processing the constraints.

Definition at line 227 of file var_domination.h.

◆ Strengthen()

bool operations_research::sat::DualBoundStrengthening::Strengthen ( PresolveContext * context)

Once ALL constraints have been processed, call this to fix variables or reduce their domain if possible.

Note
this also tightens some constraints that are the only ones blocking in one direction. Currently we only do that for implications: if we have two Booleans such that a + b <= 1, we transform that to a + b = 1 and remove one variable, since we now have an equivalence relation.
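The a + b <= 1 example can be made concrete with a toy sketch (not the OR-Tools code): once the constraint is tightened to an equality, b is the negation of a and can be substituted away.

```cpp
#include <cassert>

// Toy sketch of the transformation: once a + b <= 1 is tightened to
// a + b = 1, the relation b == 1 - a holds and one Boolean can be
// eliminated via this equivalence.
int SubstitutedB(int a) { return 1 - a; }

// The tightened equality constraint, checked directly.
bool SatisfiesTightened(int a, int b) { return a + b == 1; }
```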

Fix to lb?

Fix to ub?

Here we can fix to any value in [ub_limit, lb_limit] that is compatible with the current domain. We prefer zero or the lowest possible magnitude.

Here we can reduce the domain, but we must be careful when the domain has holes.

For detecting near-duplicate constraints that can be made equivalent: hash -> (ct_index, modified ref).

If there is only one blocking constraint, we can simplify the problem in a few situations.

Todo
(user): Cover all the cases.
Todo
(user): Fix variable right away rather than waiting for next call.

If we have an enforcement literal, then we can always add the implication "not enforced" => var at its lower bound. If we also had enforced => fixed var, then var is in an affine relation with the enforcement literal and we can remove one variable.

Todo
(user): We can also deal with more than one enforcement.

Corner case.

Note(user): If we have enforced => var fixed, we could actually just have removed var from the constraint if it was implied by another constraint. If not, because of the new affine relation we could remove it right away.

positive_ref = enf * implied + (1 - enf) * bound.

positive_ref = (1 - enf) * implied + enf * bound.
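The two affine relations above are just a select on the enforcement literal; an illustrative helper (not part of the API):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only: positive_ref = enf * implied + (1 - enf) * bound
// means the variable takes the implied value when the enforcement
// literal is true and its bound otherwise, so it is fully determined by
// the literal and can be removed.
int64_t AffineValue(bool enf, int64_t implied, int64_t bound) {
  return enf ? implied : bound;
}
```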

If we have a literal, we always add the implication. This seems like a good thing to do.

Can we add an implication not_enforced => var at its bound?

We can make enf equivalent to the constraint instead of just =>. This seems useful since internally we always use a fully reified encoding.

If it is different, we have an equivalence now, and we can remove the constraint.

If we have two Booleans, each with a single blocking constraint that is the same if we "swap" the Boolean references, then we can make the Booleans equivalent. This is because they will be forced to their bad value only if it is needed by the other part of their respective blocking constraints. This is related to FindAlmostIdenticalLinearConstraints(), except that here we can deal with non-equality or enforced constraints.

Todo
(user): Generalize to non-Boolean. Also for Boolean, we might miss some possible reductions if replacing X by 1 - X makes a constraint a near-duplicate of another.
Todo
(user): We can generalize to non-linear constraint.
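The "swap" test can be sketched with a hypothetical canonical form, sorted (var, coeff) terms after substitution; none of these names are the OR-Tools API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using Terms = std::vector<std::pair<int, int64_t>>;  // (var, coeff)

// Sort terms into a canonical order.
Terms Canonical(Terms terms) {
  std::sort(terms.begin(), terms.end());
  return terms;
}

// Substitute variable `from` by `to`, then canonicalize.
Terms CanonicalWithSwap(Terms terms, int from, int to) {
  for (auto& term : terms) {
    if (term.first == from) term.first = to;
  }
  return Canonical(std::move(terms));
}

// True if a's blocking constraint becomes b's (and vice versa) once the
// two Boolean references are swapped; the Booleans can then be made
// equivalent.
bool SameAfterSwap(const Terms& ct_a, const Terms& ct_b, int a, int b) {
  return CanonicalWithSwap(ct_a, a, b) == Canonical(ct_b) &&
         CanonicalWithSwap(ct_b, b, a) == Canonical(ct_a);
}
```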

Optimization: Skip if this constraint was already used as the single blocking constraint of another variable. If this is the case, it cannot be equivalent to another constraint with "var" substituted, since otherwise var would have two blocking constraints. We can safely skip it; this makes sure this is in O(num_entries) and not more.

Already present!

We have a true equality. The two refs can be made equivalent.

We can delete one of the constraints since they are duplicates now.

Other potential cases?

If (a => b) is the only constraint blocking a literal a in the up direction, then we can set a == b!

Recover a => b where a has a unique up_lock (i.e. this constraint).

Note
if many implications are encoded in the same bool_and, we have to be careful that a appears in just one of them.
Todo
(user): Make sure the implication graph is transitively reduced so as not to miss such reductions. More generally, this might only use the graph rather than the encoding into bool_and / at_most_one. Basically, if a => all_direct_deduction, we can transform it into a <=> all_direct_deduction if that is interesting. This could always be done on a max-2sat problem in one of the two directions. Also think about max-2sat-specific presolve.

Here, we can only add the equivalence if the literal is the only one on the lhs; otherwise there are actually more locks.

Definition at line 721 of file var_domination.cc.


The documentation for this class was generated from the following files: