Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
operations_research::sat::IntegerRoundingCutHelper Class Reference

#include <cuts.h>

Public Member Functions

 ~IntegerRoundingCutHelper ()
 
bool ComputeCut (RoundingOptions options, const CutData &base_ct, ImpliedBoundsProcessor *ib_processor=nullptr)
 Returns true on success. The cut can be accessed via cut().
 
const CutData & cut () const
 If successful, info about the last generated cut.
 
void SetSharedStatistics (SharedStatistics *stats)
 
std::string Info () const
 Single line of text that we append to the cut log line.
 

Detailed Description

Definition at line 406 of file cuts.h.

Constructor & Destructor Documentation

◆ ~IntegerRoundingCutHelper()

operations_research::sat::IntegerRoundingCutHelper::~IntegerRoundingCutHelper ( )

Definition at line 646 of file cuts.cc.

Member Function Documentation

◆ ComputeCut()

bool operations_research::sat::IntegerRoundingCutHelper::ComputeCut ( RoundingOptions options,
const CutData & base_ct,
ImpliedBoundsProcessor * ib_processor = nullptr )

Returns true on success. The cut can be accessed via cut().
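
As a usage sketch (not taken from the OR-Tools sources: the include path, the default-constructed RoundingOptions, and the way the resulting cut is consumed are illustrative assumptions):

#include "ortools/sat/cuts.h"

namespace operations_research {
namespace sat {

// Hypothetical wrapper showing the intended call pattern.
void TryIntegerRoundingCut(const CutData& base_ct,
                           ImpliedBoundsProcessor* ib_processor,
                           SharedStatistics* stats) {
  IntegerRoundingCutHelper helper;
  helper.SetSharedStatistics(stats);  // Optional: export statistics.

  RoundingOptions options;  // Illustrative: tune its fields as needed.
  if (helper.ComputeCut(options, base_ct, ib_processor)) {
    // cut() is only meaningful after a successful ComputeCut().
    const CutData& cut = helper.cut();
    (void)cut;  // In real code: add `cut` to the cut pool / LP relaxation here.
    // helper.Info() returns a one-line summary for the cut log.
  }
}

}  // namespace sat
}  // namespace operations_research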

Todo
(user): This is slow, 50% of run time on a2c1s1.pb.gz. Optimize!

Try IB before heuristic? This should be better except it can mess up the norm and the divisors.

We complement the term before trying the implied bound.

Todo
(user): We assume that this is called both with and without the option use_ib_before_heuristic, so that we can abort if no IB has been applied, since otherwise we would just redo the same computation. This is not really clean.

Our heuristic will try to generate a few different cuts, and we will keep the most violated one, scaled by the l2 norm of the relevant positions.
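
A rough reading of that score (the exact formula used by GetScaledViolation() may differ): for a candidate cut sum(a_i * x_i) <= b with LP values lp(x_i) and set of relevant positions R,

  scaled_violation = (sum(a_i * lp(x_i)) - b) / l2_norm({a_i : i in R})

and the candidate with the largest positive value wins.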

Todo
(user): Experiment to find the best value of this initial violation threshold. Note also that we use the l2 norm on the restricted positions here. Maybe we should change that? On that note, the l2 norm usage seems a bit odd since it grows with the number of terms in the cut, and often we already have a good cut that we make stronger by adding extra terms that do not change its activity.

The discussion above only concerns the initial value of best_scaled_violation. The remainder_threshold lets us skip cuts whose final efficacy is clearly lower than 1e-3 (it is only a bound, so we could still generate cuts with a lower efficacy than this).

Todo
(user): If the rhs is small and close to zero, we might want to consider different ways of complementing the variables.

There is no point trying the same divisor twice, or a divisor that is too small. Note that we use a higher threshold than remainder_threshold because we can boost the remainder thanks to our adjusting heuristic below, and also because this keeps cuts with a small range of coefficients.

Note
Because of the slacks, the initial coefficients are here too.

If we have too many divisors to try, restrict to the first ones, which should correspond to the highest LP values.

Note
Most of the time is spent here since we call this function on many linear equations, and just a few of them have a good enough scaled violation. We can spend more time afterwards to tune the cut.
Todo
(user): Avoid the quadratic algorithm? Note that we are quadratic in the number of relevant positions, not in the full cut size, but this is still too much on some problems.
Note
The function will abort right away if PositiveRemainder() is not good enough, so it is quick for bad divisors.

Try best_divisor divided by small numbers.

Retry complementation on the transformed cut.

Temporarily complement this variable.

Keep the change.

Restore.

Adjust coefficients as computed by the best GetScaledViolation().

Create the base super-additive function f().
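
For intuition, one textbook example of a super-additive rounding function is the classic MIR function below; this is an illustration only and not necessarily the family of functions actually built in cuts.cc (which also depends on options.max_scaling and the rhs remainder):

#include <algorithm>
#include <cmath>

// Classic MIR rounding function for a divisor d and rhs b (illustration only).
// With f0 = frac(b / d) assumed to be in (0, 1), the function
//   f(x) = floor(x / d) + max(0, frac(x / d) - f0) / (1 - f0)
// is nondecreasing, super-additive (f(x) + f(y) <= f(x + y)) and has f(0) = 0,
// which is what makes sum(f(a_i) * x_i) <= f(b) valid for nonnegative integer x_i
// satisfying sum(a_i * x_i) <= b.
double MirRoundingFunction(double x, double d, double b) {
  const double f0 = b / d - std::floor(b / d);
  const double r = x / d;
  const double frac = r - std::floor(r);
  return std::floor(r) + std::max(0.0, frac - f0) / (1.0 - f0);
}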

Look among all our possible functions f() for one that greedily dominates our current best one. Note that we prefer a lower scaling factor since that results in a cut with lower coefficients.

We only look at relevant positions and ignore the others. Not sure this is the best approach.

Note
The complexity seems high (100 * 2 * options.max_scaling), but this only runs on cuts that are already efficient and the inner loop tends to abort quickly. I didn't see this code in the CPU profile so far.

Use implied bounds to "lift" Booleans into the cut. This should lead to stronger cuts even if the norms might be worse.
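
As an illustration of the idea (the exact substitution performed by the ImpliedBoundsProcessor may differ): if Y >= 0 and the implied bound (X = 1) => Y >= d holds for a Boolean X, then Y can be written as d * X + slack with slack >= 0, and substituting this into the base constraint puts the Boolean X into the cut with its own coefficient, which f() can then round.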

More complementation, but for the same f. If we can do that, it probably means our heuristics above are not great.

Complementing an entry gives: [a * X <= b] -> [-a * (diff - X) <= b - a * diff]

We will compare what happens when we apply f: [f(b) - f(a) * lp(X)] -> [f(b - a * diff) - f(-a) * (diff - lp(X))].

If lp(X) is zero, then the transformation is always worse, because f(b - a * diff) >= f(b) + f(-a) * diff by super-additivity.

However, the larger lp(X) is, the better it gets: at lp(X) = diff, we have f(b) >= f(b - a * diff) + f(a * diff) >= f(b - a * diff) + f(a) * diff.
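
Spelling out both endpoints (a sketch, assuming diff is a positive integer and f is super-additive with f(0) = 0, so that f(k * y) >= k * f(y) for integer k >= 1):

At lp(X) = 0, the transformed expression satisfies
  f(b - a * diff) - f(-a) * diff >= f(b) + f(-a * diff) - f(-a) * diff >= f(b),
so it is never below the original f(b), matching "always worse" above.

At lp(X) = diff, the original expression satisfies
  f(b) - f(a) * diff >= f(b) - f(a * diff) >= f(b - a * diff),
so the transformed expression is at least as good there.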

Todo
(user): If we have a * X + b * (1 - X) <= rhs for a Boolean X, it is still unclear what the best way to apply f is and whether we should merge the terms. If there are no other terms, the best is probably f(rhs - a) * X + f(rhs - b) * (1 - X).

Avoid potential overflow here.

Definition at line 785 of file cuts.cc.

◆ cut()

const CutData & operations_research::sat::IntegerRoundingCutHelper::cut ( ) const
inline

If successful, info about the last generated cut.

Definition at line 415 of file cuts.h.

◆ Info()

std::string operations_research::sat::IntegerRoundingCutHelper::Info ( ) const
inline

Single line of text that we append to the cut log line.

Definition at line 420 of file cuts.h.

◆ SetSharedStatistics()

void operations_research::sat::IntegerRoundingCutHelper::SetSharedStatistics ( SharedStatistics * stats)
inline

Definition at line 417 of file cuts.h.


The documentation for this class was generated from the following files:

cuts.h
cuts.cc