Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
operations_research::glop::RevisedSimplex Class Reference

Entry point of the revised simplex algorithm implementation. More...

#include <revised_simplex.h>

Public Member Functions

 RevisedSimplex ()
 
 RevisedSimplex (const RevisedSimplex &)=delete
 This type is neither copyable nor movable.
 
RevisedSimplex & operator= (const RevisedSimplex &)=delete
 
void SetParameters (const GlopParameters &parameters)
 Sets or gets the algorithm parameters to be used on the next Solve().
 
const GlopParameters & GetParameters () const
 
ABSL_MUST_USE_RESULT Status Solve (const LinearProgram &lp, TimeLimit *time_limit)
 
void ClearStateForNextSolve ()
 
void LoadStateForNextSolve (const BasisState &state)
 Uses the given state as a warm-start for the next Solve() call.
 
void SetStartingVariableValuesForNextSolve (const DenseRow &values)
 
void NotifyThatMatrixIsUnchangedForNextSolve ()
 
void NotifyThatMatrixIsChangedForNextSolve ()
 
RowIndex GetProblemNumRows () const
 Getters to retrieve all the information computed by the last Solve().
 
ColIndex GetProblemNumCols () const
 
ProblemStatus GetProblemStatus () const
 
Fractional GetObjectiveValue () const
 
int64_t GetNumberOfIterations () const
 
Fractional GetVariableValue (ColIndex col) const
 
Fractional GetReducedCost (ColIndex col) const
 
const DenseRow & GetReducedCosts () const
 
Fractional GetDualValue (RowIndex row) const
 
Fractional GetConstraintActivity (RowIndex row) const
 
VariableStatus GetVariableStatus (ColIndex col) const
 
ConstraintStatus GetConstraintStatus (RowIndex row) const
 
const BasisState & GetState () const
 
double DeterministicTime () const
 
bool objective_limit_reached () const
 
const DenseRow & GetPrimalRay () const
 
const DenseColumn & GetDualRay () const
 
const DenseRow & GetDualRayRowCombination () const
 This is the "dual ray" linear combination of the matrix rows.
 
ColIndex GetBasis (RowIndex row) const
 
const ScatteredRow & GetUnitRowLeftInverse (RowIndex row)
 
RowToColMapping GetBasisVector () const
 
const BasisFactorization & GetBasisFactorization () const
 
std::string StatString ()
 Returns statistics about this class as a string.
 
RowMajorSparseMatrix ComputeDictionary (const DenseRow *column_scales)
 
void ComputeBasicVariablesForState (const LinearProgram &linear_program, const BasisState &state)
 
void ClearIntegralityScales ()
 
void SetIntegralityScale (ColIndex col, Fractional scale)
 
void SetLogger (SolverLogger *logger)
 

Detailed Description

Entry point of the revised simplex algorithm implementation.

Definition at line 132 of file revised_simplex.h.

Constructor & Destructor Documentation

◆ RevisedSimplex() [1/2]

operations_research::glop::RevisedSimplex::RevisedSimplex ( )

Definition at line 103 of file revised_simplex.cc.

◆ RevisedSimplex() [2/2]

operations_research::glop::RevisedSimplex::RevisedSimplex ( const RevisedSimplex & )
delete

This type is neither copyable nor movable.

Member Function Documentation

◆ ClearIntegralityScales()

void operations_research::glop::RevisedSimplex::ClearIntegralityScales ( )
inline

This is used in a MIP context to polish the final basis. We assume that the columns for which SetIntegralityScale() has been called correspond to integral variables once multiplied by the given factor.
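As an illustrative sketch (not taken from this page: the column indices and scale factors below are hypothetical), the intended polishing pattern looks roughly like this:

// Sketch: declare which columns are integral once scaled, before Solve().
void DeclareIntegralColumns(operations_research::glop::RevisedSimplex& simplex) {
  using operations_research::glop::ColIndex;
  simplex.SetIntegralityScale(ColIndex(0), 1.0);  // Column 0 itself is integral.
  simplex.SetIntegralityScale(ColIndex(3), 2.0);  // Column 3 times 2.0 is integral.
  // Solve() may then polish the final basis using these hints.
  // Call ClearIntegralityScales() to reset before reusing the object.
}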

Definition at line 247 of file revised_simplex.h.

◆ ClearStateForNextSolve()

void operations_research::glop::RevisedSimplex::ClearStateForNextSolve ( )

Do not use the current solution as a warm-start for the next Solve(). The next Solve() will behave as if the class just got created.

Definition at line 135 of file revised_simplex.cc.

◆ ComputeBasicVariablesForState()

void operations_research::glop::RevisedSimplex::ComputeBasicVariablesForState ( const LinearProgram & linear_program,
const BasisState & state )

Initializes the matrix for the given 'linear_program' and 'state' and computes the variable values for basic variables using non-basic variables.

Definition at line 3850 of file revised_simplex.cc.

◆ ComputeDictionary()

util_intops::StrongVector< RowIndex, SparseRow > operations_research::glop::RevisedSimplex::ComputeDictionary ( const DenseRow * column_scales)

Computes the dictionary B^-1*N on the fly, row by row. Returns the resulting matrix as a vector of sparse rows so that it is easy to use on the left side of a matrix multiplication. Runs in O(num_non_zeros_in_matrix).

Todo
(user): Use row scales as well.

Definition at line 3829 of file revised_simplex.cc.

◆ DeterministicTime()

double operations_research::glop::RevisedSimplex::DeterministicTime ( ) const
Todo
(user): Count what is missing.

Definition at line 757 of file revised_simplex.cc.

◆ GetBasis()

ColIndex operations_research::glop::RevisedSimplex::GetBasis ( RowIndex row) const

Returns the index of the column in the basis and the basis factorization.

Note
the order of the columns in the basis is important since it is the one used by the various solve functions provided by the BasisFactorization class.

Definition at line 734 of file revised_simplex.cc.

◆ GetBasisFactorization()

const BasisFactorization & operations_research::glop::RevisedSimplex::GetBasisFactorization ( ) const

Definition at line 736 of file revised_simplex.cc.

◆ GetBasisVector()

RowToColMapping operations_research::glop::RevisedSimplex::GetBasisVector ( ) const
inline

Returns a copy of the basis_ vector so that outside applications (like cuts) have the correspondence between rows and columns of the dictionary.
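A minimal sketch of how an outside application might read this mapping (the function name is hypothetical; it assumes a simplex object on which Solve() has already succeeded):

// Sketch: walk the rows of the dictionary and their basic columns.
void InspectBasis(const operations_research::glop::RevisedSimplex& simplex) {
  using operations_research::glop::ColIndex;
  using operations_research::glop::RowIndex;
  const operations_research::glop::RowToColMapping basis = simplex.GetBasisVector();
  for (RowIndex row(0); row < simplex.GetProblemNumRows(); ++row) {
    const ColIndex basic_col = basis[row];  // Column that is basic for 'row'.
    // Same value as simplex.GetBasis(row).
  }
}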

Definition at line 226 of file revised_simplex.h.

◆ GetConstraintActivity()

Fractional operations_research::glop::RevisedSimplex::GetConstraintActivity ( RowIndex row) const

Note the negative sign since the slack variable is such that constraint_activity + slack_value = 0.

Definition at line 701 of file revised_simplex.cc.

◆ GetConstraintStatus()

ConstraintStatus operations_research::glop::RevisedSimplex::GetConstraintStatus ( RowIndex row) const

The status of the given constraint is the same as the status of the associated slack variable with a change of sign.

Definition at line 707 of file revised_simplex.cc.

◆ GetDualRay()

const DenseColumn & operations_research::glop::RevisedSimplex::GetDualRay ( ) const

Definition at line 724 of file revised_simplex.cc.

◆ GetDualRayRowCombination()

const DenseRow & operations_research::glop::RevisedSimplex::GetDualRayRowCombination ( ) const

This is the "dual ray" linear combination of the matrix rows.

Definition at line 729 of file revised_simplex.cc.

◆ GetDualValue()

Fractional operations_research::glop::RevisedSimplex::GetDualValue ( RowIndex row) const

Definition at line 691 of file revised_simplex.cc.

◆ GetNumberOfIterations()

int64_t operations_research::glop::RevisedSimplex::GetNumberOfIterations ( ) const

Definition at line 671 of file revised_simplex.cc.

◆ GetObjectiveValue()

Fractional operations_research::glop::RevisedSimplex::GetObjectiveValue ( ) const

Definition at line 667 of file revised_simplex.cc.

◆ GetParameters()

const GlopParameters & operations_research::glop::RevisedSimplex::GetParameters ( ) const
inline

Definition at line 142 of file revised_simplex.h.

◆ GetPrimalRay()

const DenseRow & operations_research::glop::RevisedSimplex::GetPrimalRay ( ) const

If the problem status is PRIMAL_UNBOUNDED (respectively DUAL_UNBOUNDED), then the solver has a corresponding primal (respectively dual) ray to show the unboundedness. Starting from a primal (respectively dual) feasible solution, any positive multiple of this ray can be added to the solution while keeping it feasible. Moreover, by doing so, the objective of the problem will improve and its magnitude will go to infinity.

Note
when the problem is DUAL_UNBOUNDED, the dual ray is also known as the Farkas proof of infeasibility of the problem.
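A hedged usage sketch (the function name is illustrative; it assumes Solve() has already returned on this simplex object):

// Sketch: fetch an unboundedness / infeasibility certificate after Solve().
void InspectRays(const operations_research::glop::RevisedSimplex& simplex) {
  using operations_research::glop::ProblemStatus;
  if (simplex.GetProblemStatus() == ProblemStatus::PRIMAL_UNBOUNDED) {
    // One entry per column; any positive multiple keeps feasibility and
    // improves the objective without bound.
    const operations_research::glop::DenseRow& primal_ray = simplex.GetPrimalRay();
  } else if (simplex.GetProblemStatus() == ProblemStatus::DUAL_UNBOUNDED) {
    // Farkas-style certificate of primal infeasibility, one entry per row.
    const operations_research::glop::DenseColumn& dual_ray = simplex.GetDualRay();
  }
}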

Definition at line 720 of file revised_simplex.cc.

◆ GetProblemNumCols()

ColIndex operations_research::glop::RevisedSimplex::GetProblemNumCols ( ) const

Definition at line 677 of file revised_simplex.cc.

◆ GetProblemNumRows()

RowIndex operations_research::glop::RevisedSimplex::GetProblemNumRows ( ) const

Getters to retrieve all the information computed by the last Solve().

Definition at line 675 of file revised_simplex.cc.

◆ GetProblemStatus()

ProblemStatus operations_research::glop::RevisedSimplex::GetProblemStatus ( ) const

Definition at line 663 of file revised_simplex.cc.

◆ GetReducedCost()

Fractional operations_research::glop::RevisedSimplex::GetReducedCost ( ColIndex col) const

Definition at line 683 of file revised_simplex.cc.

◆ GetReducedCosts()

const DenseRow & operations_research::glop::RevisedSimplex::GetReducedCosts ( ) const

Definition at line 687 of file revised_simplex.cc.

◆ GetState()

const BasisState & operations_research::glop::RevisedSimplex::GetState ( ) const

Definition at line 699 of file revised_simplex.cc.

◆ GetUnitRowLeftInverse()

const ScatteredRow & operations_research::glop::RevisedSimplex::GetUnitRowLeftInverse ( RowIndex row)
inline

Definition at line 220 of file revised_simplex.h.

◆ GetVariableStatus()

VariableStatus operations_research::glop::RevisedSimplex::GetVariableStatus ( ColIndex col) const

Definition at line 695 of file revised_simplex.cc.

◆ GetVariableValue()

Fractional operations_research::glop::RevisedSimplex::GetVariableValue ( ColIndex col) const

Definition at line 679 of file revised_simplex.cc.

◆ LoadStateForNextSolve()

void operations_research::glop::RevisedSimplex::LoadStateForNextSolve ( const BasisState & state)

Uses the given state as a warm-start for the next Solve() call.

We avoid marking the state as set externally if it is the same as the current one.

Todo
(user): Add comparison operator.
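A minimal warm-start sketch, assuming two related LinearProgram objects lp1 and lp2 (both hypothetical) and the TimeLimit utility from ortools/util/time_limit.h:

// Sketch: reuse the final basis of one solve as a warm-start for the next.
void SolveTwice(const operations_research::glop::LinearProgram& lp1,
                const operations_research::glop::LinearProgram& lp2) {
  operations_research::glop::RevisedSimplex simplex;
  auto time_limit = operations_research::TimeLimit::Infinite();
  if (!simplex.Solve(lp1, time_limit.get()).ok()) return;
  // Snapshot the final basis and feed it back before the next Solve().
  const operations_research::glop::BasisState state = simplex.GetState();
  simplex.LoadStateForNextSolve(state);
  if (!simplex.Solve(lp2, time_limit.get()).ok()) return;
  // Call ClearStateForNextSolve() instead to start the next solve from scratch.
}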

Definition at line 141 of file revised_simplex.cc.

◆ NotifyThatMatrixIsChangedForNextSolve()

void operations_research::glop::RevisedSimplex::NotifyThatMatrixIsChangedForNextSolve ( )

Definition at line 161 of file revised_simplex.cc.

◆ NotifyThatMatrixIsUnchangedForNextSolve()

void operations_research::glop::RevisedSimplex::NotifyThatMatrixIsUnchangedForNextSolve ( )

Advanced usage. Tells the next Solve() that the matrix inside the linear program will not change compared to the one used the last time Solve() was called. This allows us to bypass the somewhat costly check of comparing both matrices. Note that this call will be ignored if Solve() was never called or if ClearStateForNextSolve() was called.
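A hedged sketch of the intended call pattern (the bound change shown is hypothetical; only variable bounds and costs change, not the matrix coefficients):

// Sketch: incremental re-solve when only bounds/costs changed.
void ResolveWithNewBounds(operations_research::glop::RevisedSimplex& simplex,
                          operations_research::glop::LinearProgram& lp,
                          operations_research::TimeLimit* time_limit) {
  using operations_research::glop::ColIndex;
  if (!simplex.Solve(lp, time_limit).ok()) return;
  lp.SetVariableBounds(ColIndex(0), 0.0, 5.0);        // Matrix left untouched.
  simplex.NotifyThatMatrixIsUnchangedForNextSolve();  // Skip the matrix check.
  if (!simplex.Solve(lp, time_limit).ok()) return;
}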

Definition at line 157 of file revised_simplex.cc.

◆ objective_limit_reached()

bool operations_research::glop::RevisedSimplex::objective_limit_reached ( ) const
inline

Definition at line 197 of file revised_simplex.h.

◆ operator=()

RevisedSimplex & operations_research::glop::RevisedSimplex::operator= ( const RevisedSimplex & )
delete

◆ SetIntegralityScale()

void operations_research::glop::RevisedSimplex::SetIntegralityScale ( ColIndex col,
Fractional scale )

Definition at line 2620 of file revised_simplex.cc.

◆ SetLogger()

void operations_research::glop::RevisedSimplex::SetLogger ( SolverLogger * logger)
inline

Definition at line 250 of file revised_simplex.h.

◆ SetParameters()

void operations_research::glop::RevisedSimplex::SetParameters ( const GlopParameters & parameters)

Sets or gets the algorithm parameters to be used on the next Solve().
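For example (a sketch; the specific GlopParameters fields shown are assumptions, not documented on this page):

// Sketch: configure the algorithm before the next Solve().
void Configure(operations_research::glop::RevisedSimplex& simplex) {
  operations_research::glop::GlopParameters params;
  params.set_use_dual_simplex(true);            // Assumed proto field.
  params.set_max_number_of_iterations(100000);  // Assumed proto field.
  simplex.SetParameters(params);
  const operations_research::glop::GlopParameters& current = simplex.GetParameters();
}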

Definition at line 3629 of file revised_simplex.cc.

◆ SetStartingVariableValuesForNextSolve()

void operations_research::glop::RevisedSimplex::SetStartingVariableValuesForNextSolve ( const DenseRow & values)

Advanced usage. If this is called, then while constructing the initial basis we will use these values as the starting values for the FREE variables.

Definition at line 152 of file revised_simplex.cc.

◆ Solve()

Status operations_research::glop::RevisedSimplex::Solve ( const LinearProgram & lp,
TimeLimit * time_limit )

Solves the given linear program.

We accept two forms of LinearProgram:

  • The lp can be in the equations form Ax = 0 created by LinearProgram::AddSlackVariablesForAllRows(), i.e. the rightmost square submatrix of A is an identity matrix, all its columns have been marked as slack variables, and the bounds of all constraints have been set to 0.
  • If not, we will convert it internally while copying it to the internal structure used.

By default, the algorithm tries to exploit the computation done during the last Solve() call. It will analyze the differences in the new linear program and try to use the previously computed solution as a warm-start. To disable this behavior or to provide explicit warm-start data, use one of the State*() functions below.
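An end-to-end sketch under stated assumptions: the include paths, the tiny LP, and the TimeLimit::Infinite() factory are illustrative and not part of this page; the LP is passed in the general (non-slack) form and converted internally as described above:

#include <cstdio>
#include <limits>

#include "ortools/glop/revised_simplex.h"
#include "ortools/lp_data/lp_data.h"
#include "ortools/util/time_limit.h"

int main() {
  using operations_research::TimeLimit;
  using namespace operations_research::glop;

  // Minimize x + 2*y subject to x + y >= 4, 0 <= x <= 3, 0 <= y <= 3.
  LinearProgram lp;
  const ColIndex x = lp.CreateNewVariable();
  const ColIndex y = lp.CreateNewVariable();
  lp.SetVariableBounds(x, 0.0, 3.0);
  lp.SetVariableBounds(y, 0.0, 3.0);
  lp.SetObjectiveCoefficient(x, 1.0);
  lp.SetObjectiveCoefficient(y, 2.0);
  const RowIndex ct = lp.CreateNewConstraint();
  lp.SetConstraintBounds(ct, 4.0, std::numeric_limits<double>::infinity());
  lp.SetCoefficient(ct, x, 1.0);
  lp.SetCoefficient(ct, y, 1.0);

  RevisedSimplex simplex;
  auto time_limit = TimeLimit::Infinite();  // Assumed factory: no time limit.
  const Status status = simplex.Solve(lp, time_limit.get());
  if (status.ok() && simplex.GetProblemStatus() == ProblemStatus::OPTIMAL) {
    std::printf("objective = %f, x = %f, y = %f\n",
                simplex.GetObjectiveValue(),
                simplex.GetVariableValue(x),
                simplex.GetVariableValue(y));
  }
  return 0;
}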

Initialization. Note that Initialize() must be called first since it analyzes the current solver state.

In case we abort because of an error, we cannot assume that the current solution state will be in sync with all our internal data structures. In case we abort without resetting it, setting this allows us to still use the previous state info, but we will double-check everything.

Todo
(user): Avoid doing the first phase checks when we know from the incremental solve that the solution is already dual or primal feasible.

Note(user): In most cases, the matrix will already be refactorized and both Refactorize() and PermuteBasis() will do nothing. However, if the time limit is reached during the first phase, this might not be the case and RecomputeBasicVariableValues() below DCHECKs that the matrix is refactorized. This is not required, but we currently only want to recompute values from scratch when the matrix was just refactorized to maximize precision.

This is needed to display errors properly.

Test initial dual infeasibility, ignoring boxed variables. We currently refactorize/recompute the reduced costs if not already done.

Todo
(user): Not ideal in an incremental setting.

Transform problem and recompute variable values.

Optimize.

Restore original problem and recompute variable values. Note that we need the reduced cost on the fixed positions here.

Todo
(user): Note that if there were cost shifts, we just keep them until the end of the optimization.
Todo
(user): What if the solution is slightly infeasible? We shouldn't really stop. Call the primal? Use a higher tolerance? It seems we can always kind of continue and deal with the issue later. Find a way other than this + 1e-6 hack.

After the primal phase I, we need to restore the objective.

Because of shifts or perturbations, we may need to re-run a dual simplex after the primal simplex finished, or vice versa.

We alternate between solving with the primal and the dual Phase II algorithm as long as the time limit permits and we have not yet achieved the desired precision. I.e., we run iteration i if the solution from iteration i-1 was not precise after we removed the bound and cost shifts and perturbations.

NOTE(user): We may still hit the limit of max_number_of_reoptimizations(), which means the status returned can be PRIMAL_FEASIBLE or DUAL_FEASIBLE (i.e., these statuses are not necessarily a consequence of hitting a time limit).

We want to enter the loop when both num_optims and num_iterations_ are equal to the corresponding limits (to return a meaningful status when the limits are set to 0).

Run the primal simplex.

Run the dual simplex.

PrimalMinimize() or DualMinimize() always double check the result with maximum precision by refactoring the basis before exiting (except if an iteration or time limit was reached).

If SetIntegralityScale() was called, we perform a polish operation.

Remove the bound and cost shifts (or perturbations).

Note(user): Currently, we never do both at the same time, so we could be a bit faster here, but then this is quick anyway.

Todo
(user): We should also confirm the PRIMAL_UNBOUNDED or DUAL_UNBOUNDED status by checking with the other phase I that the problem is really DUAL_INFEASIBLE or PRIMAL_INFEASIBLE. For instance we currently report PRIMAL_UNBOUNDED with the primal on the problem l30.mps instead of OPTIMAL and the dual does not have issues on this problem.
Todo
(user): There is another issue on infeas/qual.mps. I think we should just check the dual ray, not really the current solution dual feasibility.

All of our tolerances are okay, but the dual ray might be fishy. This happens on l30.mps and on L1_sixm250obs.mps.gz. If the ray does not seem good enough, we might actually just be at the optimum and have trouble going down to our relatively low default tolerances.

The difference between optimal and unbounded can be thin. Say you have a free variable with no constraint and a cost of epsilon; depending on epsilon and your tolerance, this will either cause the problem to be unbounded or can be ignored.

Here, we compute the cost gain obtained by moving from the current value along the ray up to the bounds + tolerance. If this gain is < 1, it is hard to claim we are really unbounded. This is a quick heuristic to err on the side of optimality rather than unboundedness.

Validate the dual ray that proves primal infeasibility.

By taking the linear combination of the constraints, we should arrive at an infeasible <= 0 constraint using the variable bounds.

Change the status, if after the shift and perturbation removal the problem is not OPTIMAL anymore.

We use the "precise" tolerances here to try to report the best possible solution. Note however that we cannot really hope for an infeasibility lower than its corresponding residual error. Note that we already adapt the tolerance like this during the simplex execution.

Check that the return status is "precise".

Todo
(user): we currently skip the DUAL_INFEASIBLE status because the quantities are not up to date in this case.

If the user didn't provide starting variable values, then there is no need to check for super-basic variables.

Todo
(user): We should re-check for feasibility at this point and apply clean-up as needed.

Store the result for the solution getters.

If the problem is unbounded, set the objective value to +/- infinity.

Definition at line 165 of file revised_simplex.cc.

◆ StatString()

std::string operations_research::glop::RevisedSimplex::StatString ( )

Returns statistics about this class as a string.

Definition at line 3593 of file revised_simplex.cc.


The documentation for this class was generated from the following files: revised_simplex.h, revised_simplex.cc