Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
#include <cuts.h>
Public Member Functions

~IntegerRoundingCutHelper ()

bool ComputeCut (RoundingOptions options, const CutData &base_ct, ImpliedBoundsProcessor *ib_processor=nullptr)
    Returns true on success. The cut can be accessed via cut().

const CutData & cut () const
    If successful, info about the last generated cut.

void SetSharedStatistics (SharedStatistics *stats)

std::string Info () const
    Single line of text that we append to the cut log line.
operations_research::sat::IntegerRoundingCutHelper::~IntegerRoundingCutHelper ( )
bool operations_research::sat::IntegerRoundingCutHelper::ComputeCut ( RoundingOptions options, const CutData & base_ct, ImpliedBoundsProcessor * ib_processor = nullptr )
Returns true on success. The cut can be accessed via cut().
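A minimal calling sketch, not taken from the library's documentation: it assumes base_ct has already been filled with a constraint in CutData form and leaves RoundingOptions at its default values.

  #include "ortools/sat/cuts.h"  // the header shown as <cuts.h> above

  using operations_research::sat::CutData;
  using operations_research::sat::IntegerRoundingCutHelper;
  using operations_research::sat::RoundingOptions;

  // Tries to turn `base_ct` into an integer rounding cut. On success the
  // generated cut is read back via helper.cut().
  void TryRoundingCut(const CutData& base_ct) {
    IntegerRoundingCutHelper helper;
    RoundingOptions options;  // default options
    if (helper.ComputeCut(options, base_ct)) {
      const CutData& cut = helper.cut();
      // ... add `cut` to the LP relaxation / cut pool ...
      (void)cut;
    }
  }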
Try implied bounds (IB) before the heuristic? This should be better, except that it can mess up the norm and the divisors.
We complement the term before trying the implied bound.
Our heuristic tries to generate a few different cuts, and we keep the most violated one, with the violation scaled by the L2 norm of the relevant positions.
The discussion above only concerns the initial value of best_scaled_violation. The remainder_threshold allows us to not consider cuts whose final efficacy is clearly lower than 1e-3 (it is a bound, so we could still generate cuts with a lower efficacy than this).
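For reference, the standard cut efficacy this refers to is the violation at the LP point divided by the L2 norm of the cut coefficients; a minimal sketch (computed over all positions, whereas the helper restricts the norm to the relevant ones):

  #include <cmath>
  #include <vector>

  // Efficacy of the cut  sum(coeffs[i] * x[i]) <= rhs  at the LP point
  // lp_values: violation divided by the L2 norm of the coefficients.
  // A positive value means the LP point violates the cut.
  double CutEfficacy(const std::vector<double>& coeffs, double rhs,
                     const std::vector<double>& lp_values) {
    double activity = 0.0;
    double norm_squared = 0.0;
    for (int i = 0; i < static_cast<int>(coeffs.size()); ++i) {
      activity += coeffs[i] * lp_values[i];
      norm_squared += coeffs[i] * coeffs[i];
    }
    return (activity - rhs) / std::sqrt(norm_squared);
  }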
There is no point trying the same divisor twice, or a divisor that is too small. Note that we use a higher threshold than remainder_threshold because we can boost the remainder thanks to the adjusting heuristic below, and also because this keeps cuts with a small range of coefficients.
If we have too many divisors to try, we restrict ourselves to the first ones, which should correspond to the highest LP values.
Try best_divisor divided by small numbers.
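Illustrative outline only; the names below (TryDivisors, evaluate) are hypothetical stand-ins for the internal loop around GetScaledViolation() and are not part of the library API.

  #include <cstdint>
  #include <functional>
  #include <vector>

  // Hypothetical sketch of the divisor loop described above. `evaluate`
  // stands in for scoring a divisor (e.g. via GetScaledViolation()) and
  // returns true when it improves on the best cut found so far.
  void TryDivisors(const std::vector<int64_t>& candidates, int64_t min_divisor,
                   const std::function<bool(int64_t)>& evaluate) {
    int64_t previous = 0;
    int64_t best_divisor = 0;
    for (const int64_t d : candidates) {
      if (d == previous || d < min_divisor) continue;  // duplicate or too small
      previous = d;
      if (evaluate(d)) best_divisor = d;
    }
    if (best_divisor == 0) return;
    // Also try best_divisor divided by small numbers.
    for (int64_t k = 2; k <= 9 && best_divisor / k > 1; ++k) {
      evaluate(best_divisor / k);
    }
  }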
Retry complementation on the transformed cut.
Temporarily complement this variable.
Keep the change.
Restore.
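The keep-or-restore pattern these comments describe, as a hypothetical sketch (complement and violation are placeholders, not library functions; complementing the same term twice is a no-op):

  #include <functional>

  // Hypothetical sketch: try complementing each term, keep the change only
  // if it increases the cut violation, otherwise undo it.
  void TryComplementations(int num_terms,
                           const std::function<void(int)>& complement,
                           const std::function<double()>& violation) {
    double best = violation();
    for (int i = 0; i < num_terms; ++i) {
      complement(i);               // temporarily complement this variable
      const double v = violation();
      if (v > best) {
        best = v;                  // keep the change
      } else {
        complement(i);             // restore
      }
    }
  }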
Adjust coefficients as computed by the best GetScaledViolation().
Create the base super-additive function f().
Look amongst all our possible functions f() for one that greedily dominates our current best one. Note that we prefer a lower scaling factor since that results in a cut with lower coefficients.
We only look at the relevant positions and ignore the others. Not sure this is the best approach.
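The simplest super-additive rounding is plain division by the chosen divisor followed by rounding down (Chvátal-Gomory style); the library uses stronger variants, but a minimal sketch of that base idea, assuming all variables are non-negative integers, is:

  #include <cstdint>
  #include <vector>

  // Floor division that is also correct for negative numerators.
  int64_t FloorDiv(int64_t a, int64_t d) {
    int64_t q = a / d;
    if (a % d != 0 && (a < 0) != (d < 0)) --q;
    return q;
  }

  // Given sum(coeffs[i] * x[i]) <= rhs with integer x[i] >= 0, dividing by
  // `divisor` and rounding every coefficient and the rhs down stays valid.
  // Example: 3x + 5y <= 7 with divisor 3 becomes x + y <= 2.
  void RoundCut(std::vector<int64_t>& coeffs, int64_t& rhs, int64_t divisor) {
    for (int64_t& c : coeffs) c = FloorDiv(c, divisor);
    rhs = FloorDiv(rhs, divisor);
  }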
Use implied bounds to "lift" Booleans into the cut. This should lead to stronger cuts even if the norms might be worse.
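For intuition only (this is not the exact transformation performed by ImpliedBoundsProcessor): if a Boolean B implies the tighter bound X >= ib while the default lower bound is X >= lb, then X >= lb + (ib - lb) * B always holds, so writing X = lb + (ib - lb) * B + S with a new slack S >= 0 lets a term a * X of the cut be rewritten as a * lb + a * (ib - lb) * B + a * S. This puts the Boolean B into the cut, where the rounding function f can often treat a 0/1 variable better than the original integer term.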
More complementation, but for the same f. If we can do that, it probably means our heuristics above are not great.
Complementing an entry (here diff denotes the size of the variable's domain, i.e. X is in [0, diff]) gives: [a * X <= b] -> [-a * (diff - X) <= b - a * diff].
We compare what happens when we apply f: [f(b) - f(a) * lp(X)] -> [f(b - a * diff) - f(-a) * (diff - lp(X))].
If lp(X) is zero, then the transformation is always worse, because f(b - a * diff) >= f(b) + f(-a) * diff by super-additivity.
However, the larger lp(X) is, the better it gets: at lp(X) = diff we have f(b) >= f(b - a * diff) + f(a * diff) >= f(b - a * diff) + f(a) * diff.
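A small illustrative example, not from the source: take a single term 4 * X <= 7 with integer X in [0, 2] (so diff = 2) and f(x) = floor(x / 3). Without complementation the rounded term is f(4) * X <= f(7), i.e. 1 * X <= 2, with slack 2 - lp(X). After complementation it becomes f(-4) * (2 - X) <= f(7 - 8), i.e. -2 * (2 - X) <= -1, with slack 3 - 2 * lp(X). At lp(X) = 0 the complemented version is worse (slack 3 versus 2), while at lp(X) = diff = 2 it is better (slack -1 versus 0), which matches the two statements above.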
Avoid potential overflow here.
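The page does not show how this guard is implemented; a generic sketch of the kind of check one can use (the real code may instead rely on OR-Tools' saturated-arithmetic helpers):

  #include <cstdint>

  // Returns true and stores a * b into *result if the product fits in an
  // int64_t, and false on overflow. __builtin_mul_overflow is a GCC/Clang
  // builtin; MSVC would need a different implementation.
  bool SafeMul(int64_t a, int64_t b, int64_t* result) {
    return !__builtin_mul_overflow(a, b, result);
  }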