public interface GlopParametersOrBuilder
extends com.google.protobuf.MessageOrBuilder
Modifier and Type | Method and Description |
---|---|
boolean |
getAllowSimplexAlgorithmChange()
During incremental solve, let the solver decide whether it uses the primal or
dual simplex algorithm depending on the current solution and on the new
problem.
|
int |
getBasisRefactorizationPeriod()
Number of iterations between two basis refactorizations.
|
boolean |
getChangeStatusToImprecise()
If true, the internal API will change the return status to imprecise if the
solution does not respect the internal tolerances.
|
GlopParameters.CostScalingAlgorithm |
getCostScaling()
optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING]; |
double |
getCrossoverBoundSnappingDistance()
If the starting basis contains FREE variables with bounds, we will move
any such variable to its closest bound if the distance is smaller than
this parameter.
|
double |
getDegenerateMinistepFactor()
During a degenerate iteration, the more conservative approach is to do a
step of length zero (while shifting the bound of the leaving variable).
|
int |
getDevexWeightsResetPeriod()
Devex weights will be reset to 1.0 after that number of updates.
|
double |
getDropMagnitude()
Values in the input LP lower than this will be ignored.
|
double |
getDropTolerance()
In order to increase the sparsity of the manipulated vectors, floating
point values with a magnitude smaller than this parameter are set to zero
(only in some places).
|
double |
getDualFeasibilityTolerance()
Variables whose reduced costs have an absolute value smaller than this
tolerance are not considered as entering candidates.
|
double |
getDualizerThreshold()
When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number
of constraints of the problem is more than this threshold times the number
of variables.
|
boolean |
getDualPricePrioritizeNorm()
On some problems like stp3d or pds-100 this makes a huge difference in
speed and number of iterations of the dual simplex.
|
double |
getDualSmallPivotThreshold()
Like small_pivot_threshold but for the dual simplex.
|
boolean |
getDynamicallyAdjustRefactorizationPeriod()
If this is true, then basis_refactorization_period becomes a lower bound on
the number of iterations between two refactorizations (provided there are no
numerical accuracy issues).
|
boolean |
getExploitSingletonColumnInInitialBasis()
Whether or not we exploit the singleton columns already present in the
problem when we create the initial basis.
|
GlopParameters.PricingRule |
getFeasibilityRule()
PricingRule to use during the feasibility phase.
|
double |
getHarrisToleranceRatio()
This impacts the ratio test and indicates by how much we allow a basic
variable value that we move to go out of bounds.
|
GlopParameters.InitialBasisHeuristic |
getInitialBasis()
What heuristic is used to try to replace the fixed slack columns in the
initial basis of the primal simplex.
|
double |
getInitialConditionNumberThreshold()
If our upper bound on the condition number of the initial basis (from our
heuristic or a warm start) is above this threshold, we revert to an all
slack basis.
|
boolean |
getInitializeDevexWithColumnNorms()
Whether we initialize devex weights to 1.0 or to the norms of the matrix
columns.
|
boolean |
getLogSearchProgress()
If true, logs the progress of a solve to LOG(INFO).
|
boolean |
getLogToStdout()
If true, logs will be displayed to stdout instead of using Google log info.
|
double |
getLuFactorizationPivotThreshold()
Threshold for LU-factorization: for stability reasons, the magnitude of the
chosen pivot at a given step is guaranteed to be greater than this
threshold times the maximum magnitude of all the possible pivot choices in
the same column.
|
double |
getMarkowitzSingularityThreshold()
If a pivot magnitude is smaller than this during the Markowitz LU
factorization, then the matrix is assumed to be singular.
|
int |
getMarkowitzZlatevParameter()
How many columns do we look at in the Markowitz pivoting rule to find
a good pivot.
|
double |
getMaxDeterministicTime()
Maximum deterministic time allowed to solve a problem.
|
long |
getMaxNumberOfIterations()
Maximum number of simplex iterations to solve a problem.
|
double |
getMaxNumberOfReoptimizations()
When the solution of phase II is imprecise, we re-run the phase II with the
opposite algorithm from that imprecise solution (i.e., if primal or dual
simplex was used, we use dual or primal simplex, respectively).
|
double |
getMaxTimeInSeconds()
Maximum time allowed in seconds to solve a problem.
|
double |
getMaxValidMagnitude()
Any finite values in the input LP must be below this threshold, otherwise
the model will be reported invalid.
|
double |
getMinimumAcceptablePivot()
We never follow a basis change with a pivot under this threshold.
|
int |
getNumOmpThreads()
Number of threads in the OMP parallel sections.
|
double |
getObjectiveLowerLimit()
The solver will stop as soon as it has proven that the objective is smaller
than objective_lower_limit or greater than objective_upper_limit.
|
double |
getObjectiveUpperLimit()
optional double objective_upper_limit = 41 [default = inf]; |
GlopParameters.PricingRule |
getOptimizationRule()
PricingRule to use during the optimization phase.
|
boolean |
getPerturbCostsInDualSimplex()
When this is true, then the costs are randomly perturbed before the dual
simplex is even started.
|
double |
getPreprocessorZeroTolerance()
A floating point tolerance used by the preprocessors.
|
double |
getPrimalFeasibilityTolerance()
This tolerance indicates by how much we allow the variable values to go out
of bounds and still consider the current solution primal-feasible.
|
boolean |
getProvideStrongOptimalGuarantee()
If true, then when the solver returns a solution with an OPTIMAL status,
we can guarantee that:
- The primal variables are within their bounds
|
boolean |
getPushToVertex()
If the optimization phase finishes with super-basic variables (i.e.,
variables that either 1) have bounds but are FREE in the basis, or 2) have
no bounds and are FREE in the basis at a nonzero value), then run a "push"
phase to push these variables to bounds, obtaining a vertex solution.
|
int |
getRandomSeed()
At the beginning of each solve, the random number generator used in some
part of the solver is reinitialized to this seed.
|
double |
getRatioTestZeroThreshold()
During the primal simplex (resp. dual simplex), the coefficients of the
direction (resp. update row) with a magnitude lower than this threshold are
not considered during the ratio test.
|
double |
getRecomputeEdgesNormThreshold()
Note that the threshold is a relative error on the actual norm (not the
squared one) and that edge norms are always greater than 1.
|
double |
getRecomputeReducedCostsThreshold()
We estimate the accuracy of the iteratively computed reduced costs.
|
double |
getRefactorizationThreshold()
We estimate the factorization accuracy of B during each pivot by using
the fact that we can compute the pivot coefficient in two ways:
- From direction[leaving_row]
|
double |
getRelativeCostPerturbation()
The magnitude of the cost perturbation is given by
RandomIn(1.0, 2.0) * (
relative_cost_perturbation * cost
+ relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05]; |
double |
getRelativeMaxCostPerturbation()
optional double relative_max_cost_perturbation = 55 [default = 1e-07]; |
GlopParameters.ScalingAlgorithm |
getScalingMethod()
optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION]; |
double |
getSmallPivotThreshold()
When we choose the leaving variable, we want to avoid small pivots because
they are less precise and may cause numerical instabilities.
|
double |
getSolutionFeasibilityTolerance()
When the problem status is OPTIMAL, we check the optimality using this
relative tolerance and change the status to IMPRECISE if an issue is
detected.
|
GlopParameters.SolverBehavior |
getSolveDualProblem()
Whether or not we solve the dual of the given problem.
|
boolean |
getUseDedicatedDualFeasibilityAlgorithm()
We have two possible dual phase I algorithms.
|
boolean |
getUseDualSimplex()
Whether or not we use the dual simplex algorithm instead of the primal.
|
boolean |
getUseImpliedFreePreprocessor()
If presolve runs, include the pass that detects implied free variables.
|
boolean |
getUseMiddleProductFormUpdate()
Whether or not to use the middle product form update rather than the
standard eta LU update.
|
boolean |
getUsePreprocessing()
Whether or not we use advanced preprocessing techniques.
|
boolean |
getUseScaling()
Whether or not we scale the matrix A so that the maximum coefficient on
each row and each column is 1.0.
|
boolean |
getUseTransposedMatrix()
Whether or not we keep a transposed version of the matrix A to speed-up the
pricing at the cost of extra memory and the initial transposition
computation.
|
boolean |
hasAllowSimplexAlgorithmChange()
During incremental solve, let the solver decide whether it uses the primal or
dual simplex algorithm depending on the current solution and on the new
problem.
|
boolean |
hasBasisRefactorizationPeriod()
Number of iterations between two basis refactorizations.
|
boolean |
hasChangeStatusToImprecise()
If true, the internal API will change the return status to imprecise if the
solution does not respect the internal tolerances.
|
boolean |
hasCostScaling()
optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING]; |
boolean |
hasCrossoverBoundSnappingDistance()
If the starting basis contains FREE variables with bounds, we will move
any such variable to its closest bound if the distance is smaller than
this parameter.
|
boolean |
hasDegenerateMinistepFactor()
During a degenerate iteration, the more conservative approach is to do a
step of length zero (while shifting the bound of the leaving variable).
|
boolean |
hasDevexWeightsResetPeriod()
Devex weights will be reset to 1.0 after that number of updates.
|
boolean |
hasDropMagnitude()
Values in the input LP lower than this will be ignored.
|
boolean |
hasDropTolerance()
In order to increase the sparsity of the manipulated vectors, floating
point values with a magnitude smaller than this parameter are set to zero
(only in some places).
|
boolean |
hasDualFeasibilityTolerance()
Variables whose reduced costs have an absolute value smaller than this
tolerance are not considered as entering candidates.
|
boolean |
hasDualizerThreshold()
When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number
of constraints of the problem is more than this threshold times the number
of variables.
|
boolean |
hasDualPricePrioritizeNorm()
On some problems like stp3d or pds-100 this makes a huge difference in
speed and number of iterations of the dual simplex.
|
boolean |
hasDualSmallPivotThreshold()
Like small_pivot_threshold but for the dual simplex.
|
boolean |
hasDynamicallyAdjustRefactorizationPeriod()
If this is true, then basis_refactorization_period becomes a lower bound on
the number of iterations between two refactorizations (provided there are no
numerical accuracy issues).
|
boolean |
hasExploitSingletonColumnInInitialBasis()
Whether or not we exploit the singleton columns already present in the
problem when we create the initial basis.
|
boolean |
hasFeasibilityRule()
PricingRule to use during the feasibility phase.
|
boolean |
hasHarrisToleranceRatio()
This impacts the ratio test and indicates by how much we allow a basic
variable value that we move to go out of bounds.
|
boolean |
hasInitialBasis()
What heuristic is used to try to replace the fixed slack columns in the
initial basis of the primal simplex.
|
boolean |
hasInitialConditionNumberThreshold()
If our upper bound on the condition number of the initial basis (from our
heuristic or a warm start) is above this threshold, we revert to an all
slack basis.
|
boolean |
hasInitializeDevexWithColumnNorms()
Whether we initialize devex weights to 1.0 or to the norms of the matrix
columns.
|
boolean |
hasLogSearchProgress()
If true, logs the progress of a solve to LOG(INFO).
|
boolean |
hasLogToStdout()
If true, logs will be displayed to stdout instead of using Google log info.
|
boolean |
hasLuFactorizationPivotThreshold()
Threshold for LU-factorization: for stability reasons, the magnitude of the
chosen pivot at a given step is guaranteed to be greater than this
threshold times the maximum magnitude of all the possible pivot choices in
the same column.
|
boolean |
hasMarkowitzSingularityThreshold()
If a pivot magnitude is smaller than this during the Markowitz LU
factorization, then the matrix is assumed to be singular.
|
boolean |
hasMarkowitzZlatevParameter()
How many columns do we look at in the Markowitz pivoting rule to find
a good pivot.
|
boolean |
hasMaxDeterministicTime()
Maximum deterministic time allowed to solve a problem.
|
boolean |
hasMaxNumberOfIterations()
Maximum number of simplex iterations to solve a problem.
|
boolean |
hasMaxNumberOfReoptimizations()
When the solution of phase II is imprecise, we re-run the phase II with the
opposite algorithm from that imprecise solution (i.e., if primal or dual
simplex was used, we use dual or primal simplex, respectively).
|
boolean |
hasMaxTimeInSeconds()
Maximum time allowed in seconds to solve a problem.
|
boolean |
hasMaxValidMagnitude()
Any finite values in the input LP must be below this threshold, otherwise
the model will be reported invalid.
|
boolean |
hasMinimumAcceptablePivot()
We never follow a basis change with a pivot under this threshold.
|
boolean |
hasNumOmpThreads()
Number of threads in the OMP parallel sections.
|
boolean |
hasObjectiveLowerLimit()
The solver will stop as soon as it has proven that the objective is smaller
than objective_lower_limit or greater than objective_upper_limit.
|
boolean |
hasObjectiveUpperLimit()
optional double objective_upper_limit = 41 [default = inf]; |
boolean |
hasOptimizationRule()
PricingRule to use during the optimization phase.
|
boolean |
hasPerturbCostsInDualSimplex()
When this is true, then the costs are randomly perturbed before the dual
simplex is even started.
|
boolean |
hasPreprocessorZeroTolerance()
A floating point tolerance used by the preprocessors.
|
boolean |
hasPrimalFeasibilityTolerance()
This tolerance indicates by how much we allow the variable values to go out
of bounds and still consider the current solution primal-feasible.
|
boolean |
hasProvideStrongOptimalGuarantee()
If true, then when the solver returns a solution with an OPTIMAL status,
we can guarantee that:
- The primal variables are within their bounds
|
boolean |
hasPushToVertex()
If the optimization phase finishes with super-basic variables (i.e.,
variables that either 1) have bounds but are FREE in the basis, or 2) have
no bounds and are FREE in the basis at a nonzero value), then run a "push"
phase to push these variables to bounds, obtaining a vertex solution.
|
boolean |
hasRandomSeed()
At the beginning of each solve, the random number generator used in some
part of the solver is reinitialized to this seed.
|
boolean |
hasRatioTestZeroThreshold()
During the primal simplex (resp. dual simplex), the coefficients of the
direction (resp. update row) with a magnitude lower than this threshold are
not considered during the ratio test.
|
boolean |
hasRecomputeEdgesNormThreshold()
Note that the threshold is a relative error on the actual norm (not the
squared one) and that edge norms are always greater than 1.
|
boolean |
hasRecomputeReducedCostsThreshold()
We estimate the accuracy of the iteratively computed reduced costs.
|
boolean |
hasRefactorizationThreshold()
We estimate the factorization accuracy of B during each pivot by using
the fact that we can compute the pivot coefficient in two ways:
- From direction[leaving_row]
|
boolean |
hasRelativeCostPerturbation()
The magnitude of the cost perturbation is given by
RandomIn(1.0, 2.0) * (
relative_cost_perturbation * cost
+ relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05]; |
boolean |
hasRelativeMaxCostPerturbation()
optional double relative_max_cost_perturbation = 55 [default = 1e-07]; |
boolean |
hasScalingMethod()
optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION]; |
boolean |
hasSmallPivotThreshold()
When we choose the leaving variable, we want to avoid small pivots because
they are less precise and may cause numerical instabilities.
|
boolean |
hasSolutionFeasibilityTolerance()
When the problem status is OPTIMAL, we check the optimality using this
relative tolerance and change the status to IMPRECISE if an issue is
detected.
|
boolean |
hasSolveDualProblem()
Whether or not we solve the dual of the given problem.
|
boolean |
hasUseDedicatedDualFeasibilityAlgorithm()
We have two possible dual phase I algorithms.
|
boolean |
hasUseDualSimplex()
Whether or not we use the dual simplex algorithm instead of the primal.
|
boolean |
hasUseImpliedFreePreprocessor()
If presolve runs, include the pass that detects implied free variables.
|
boolean |
hasUseMiddleProductFormUpdate()
Whether or not to use the middle product form update rather than the
standard eta LU update.
|
boolean |
hasUsePreprocessing()
Whether or not we use advanced preprocessing techniques.
|
boolean |
hasUseScaling()
Whether or not we scale the matrix A so that the maximum coefficient on
each row and each column is 1.0.
|
boolean |
hasUseTransposedMatrix()
Whether or not we keep a transposed version of the matrix A to speed-up the
pricing at the cost of extra memory and the initial transposition
computation.
|
findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
boolean hasScalingMethod()
optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
GlopParameters.ScalingAlgorithm getScalingMethod()
optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
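Every optional field on this interface follows the same protobuf has/get convention illustrated below. A minimal sketch, assuming the generated Java classes live in the com.google.ortools.glop package (the exact package is not shown on this page):

```java
import com.google.ortools.glop.GlopParameters;

// A freshly built message has no fields set explicitly.
GlopParameters params = GlopParameters.newBuilder().build();

// hasScalingMethod() reports whether scaling_method was set explicitly;
// getScalingMethod() falls back to the declared default (EQUILIBRATION)
// when it was not.
boolean isSet = params.hasScalingMethod();                          // false here
GlopParameters.ScalingAlgorithm method = params.getScalingMethod(); // EQUILIBRATION
```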
boolean hasFeasibilityRule()
PricingRule to use during the feasibility phase.
optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
GlopParameters.PricingRule getFeasibilityRule()
PricingRule to use during the feasibility phase.
optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
boolean hasOptimizationRule()
PricingRule to use during the optimization phase.
optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
GlopParameters.PricingRule getOptimizationRule()
PricingRule to use during the optimization phase.
optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
boolean hasRefactorizationThreshold()
We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways:
- From direction[leaving_row].
- From update_row[entering_column].
If the two values have a relative difference above this threshold, we trigger a refactorization.
optional double refactorization_threshold = 6 [default = 1e-09];
double getRefactorizationThreshold()
We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways:
- From direction[leaving_row].
- From update_row[entering_column].
If the two values have a relative difference above this threshold, we trigger a refactorization.
optional double refactorization_threshold = 6 [default = 1e-09];
boolean hasRecomputeReducedCostsThreshold()
We estimate the accuracy of the iteratively computed reduced costs. If it falls below this threshold, we reinitialize them from scratch. Note that such an operation is pretty fast, so we can use a low threshold. It is important to have a good accuracy here (better than the dual_feasibility_tolerance below) to be sure of the sign of such a cost.
optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
double getRecomputeReducedCostsThreshold()
We estimate the accuracy of the iteratively computed reduced costs. If it falls below this threshold, we reinitialize them from scratch. Note that such an operation is pretty fast, so we can use a low threshold. It is important to have a good accuracy here (better than the dual_feasibility_tolerance below) to be sure of the sign of such a cost.
optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
boolean hasRecomputeEdgesNormThreshold()
Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1. Recomputing norms is a really expensive operation and a large threshold is ok since this doesn't impact directly the solution but just the entering variable choice.
optional double recompute_edges_norm_threshold = 9 [default = 100];
double getRecomputeEdgesNormThreshold()
Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1. Recomputing norms is a really expensive operation and a large threshold is ok since this doesn't impact directly the solution but just the entering variable choice.
optional double recompute_edges_norm_threshold = 9 [default = 100];
boolean hasPrimalFeasibilityTolerance()
This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible. We also use the same tolerance for the error A.x - b. Note that the two errors are closely related if A is scaled in such a way that the greatest coefficient magnitude on each column is 1.0. This is also simply called feasibility tolerance in other solvers.
optional double primal_feasibility_tolerance = 10 [default = 1e-08];
double getPrimalFeasibilityTolerance()
This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible. We also use the same tolerance for the error A.x - b. Note that the two errors are closely related if A is scaled in such a way that the greatest coefficient magnitude on each column is 1.0. This is also simply called feasibility tolerance in other solvers.
optional double primal_feasibility_tolerance = 10 [default = 1e-08];
boolean hasDualFeasibilityTolerance()
Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates. That is they do not take part in deciding whether a solution is dual-feasible or not. Note that this value can temporarily increase during the execution of the algorithm if the estimated precision of the reduced costs is higher than this tolerance. Note also that we scale the costs (in the presolve step) so that the cost magnitude range contains one. This is also known as the optimality tolerance in other solvers.
optional double dual_feasibility_tolerance = 11 [default = 1e-08];
double getDualFeasibilityTolerance()
Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates. That is they do not take part in deciding whether a solution is dual-feasible or not. Note that this value can temporarily increase during the execution of the algorithm if the estimated precision of the reduced costs is higher than this tolerance. Note also that we scale the costs (in the presolve step) so that the cost magnitude range contains one. This is also known as the optimality tolerance in other solvers.
optional double dual_feasibility_tolerance = 11 [default = 1e-08];
boolean hasRatioTestZeroThreshold()
During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test. This tolerance is related to the precision at which a Solve() involving the basis matrix can be performed. TODO(user): Automatically increase it when we detect that the precision of the Solve() is worse than this.
optional double ratio_test_zero_threshold = 12 [default = 1e-09];
double getRatioTestZeroThreshold()
During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test. This tolerance is related to the precision at which a Solve() involving the basis matrix can be performed. TODO(user): Automatically increase it when we detect that the precision of the Solve() is worse than this.
optional double ratio_test_zero_threshold = 12 [default = 1e-09];
boolean hasHarrisToleranceRatio()
This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds. The value should be in [0.0, 1.0) and should be interpreted as a ratio of the primal_feasibility_tolerance. Setting this to 0.0 basically disables the Harris ratio test while setting this too close to 1.0 will make it difficult to keep the variable values inside their bounds modulo the primal_feasibility_tolerance. Note that the same comment applies to the dual simplex ratio test. There, we allow the reduced costs to be of an infeasible sign by as much as this ratio times the dual_feasibility_tolerance.
optional double harris_tolerance_ratio = 13 [default = 0.5];
double getHarrisToleranceRatio()
This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds. The value should be in [0.0, 1.0) and should be interpreted as a ratio of the primal_feasibility_tolerance. Setting this to 0.0 basically disables the Harris ratio test while setting this too close to 1.0 will make it difficult to keep the variable values inside their bounds modulo the primal_feasibility_tolerance. Note that the same comment applies to the dual simplex ratio test. There, we allow the reduced costs to be of an infeasible sign by as much as this ratio times the dual_feasibility_tolerance.
optional double harris_tolerance_ratio = 13 [default = 0.5];
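Since harris_tolerance_ratio is interpreted as a ratio of primal_feasibility_tolerance, the effective slack it grants can be computed directly. A small worked example using only the documented defaults (plain Java arithmetic, no OR-Tools call):

```java
// With the defaults documented above, the Harris ratio test lets a basic
// variable move past its bound by at most:
double primalFeasibilityTolerance = 1e-8;  // default of primal_feasibility_tolerance
double harrisToleranceRatio = 0.5;         // default of harris_tolerance_ratio
double allowedBoundViolation =
    harrisToleranceRatio * primalFeasibilityTolerance;  // 0.5 * 1e-8 = 5e-9
```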
boolean hasSmallPivotThreshold()
When we choose the leaving variable, we want to avoid small pivots because they are less precise and may cause numerical instabilities. For a pivot under this threshold times the infinity norm of the direction, we try various countermeasures in order to avoid using it.
optional double small_pivot_threshold = 14 [default = 1e-06];
double getSmallPivotThreshold()
When we choose the leaving variable, we want to avoid small pivots because they are less precise and may cause numerical instabilities. For a pivot under this threshold times the infinity norm of the direction, we try various countermeasures in order to avoid using it.
optional double small_pivot_threshold = 14 [default = 1e-06];
boolean hasMinimumAcceptablePivot()
We never follow a basis change with a pivot under this threshold.
optional double minimum_acceptable_pivot = 15 [default = 1e-06];
double getMinimumAcceptablePivot()
We never follow a basis change with a pivot under this threshold.
optional double minimum_acceptable_pivot = 15 [default = 1e-06];
boolean hasDropTolerance()
In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places). This parameter should be positive or zero.
optional double drop_tolerance = 52 [default = 1e-14];
double getDropTolerance()
In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places). This parameter should be positive or zero.
optional double drop_tolerance = 52 [default = 1e-14];
boolean hasUseScaling()
Whether or not we scale the matrix A so that the maximum coefficient on each row and each column is 1.0.
optional bool use_scaling = 16 [default = true];
boolean getUseScaling()
Whether or not we scale the matrix A so that the maximum coefficient on each row and each column is 1.0.
optional bool use_scaling = 16 [default = true];
boolean hasCostScaling()
optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
GlopParameters.CostScalingAlgorithm getCostScaling()
optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
boolean hasInitialBasis()
What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
GlopParameters.InitialBasisHeuristic getInitialBasis()
What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
boolean hasUseTransposedMatrix()
Whether or not we keep a transposed version of the matrix A to speed-up the pricing at the cost of extra memory and the initial transposition computation.
optional bool use_transposed_matrix = 18 [default = true];
boolean getUseTransposedMatrix()
Whether or not we keep a transposed version of the matrix A to speed-up the pricing at the cost of extra memory and the initial transposition computation.
optional bool use_transposed_matrix = 18 [default = true];
boolean hasBasisRefactorizationPeriod()
Number of iterations between two basis refactorizations. Note that various conditions in the algorithm may trigger a refactorization before this period is reached. Set this to 0 if you want to refactorize at each step.
optional int32 basis_refactorization_period = 19 [default = 64];
int getBasisRefactorizationPeriod()
Number of iterations between two basis refactorizations. Note that various conditions in the algorithm may trigger a refactorization before this period is reached. Set this to 0 if you want to refactorize at each step.
optional int32 basis_refactorization_period = 19 [default = 64];
boolean hasDynamicallyAdjustRefactorizationPeriod()
If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues). Depending on the estimated time to refactorize vs. the extra time spent in each solve because of the LU update, we try to balance the two.
optional bool dynamically_adjust_refactorization_period = 63 [default = true];
boolean getDynamicallyAdjustRefactorizationPeriod()
If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues). Depending on the estimated time to refactorize vs. the extra time spent in each solve because of the LU update, we try to balance the two.
optional bool dynamically_adjust_refactorization_period = 63 [default = true];
boolean hasSolveDualProblem()
Whether or not we solve the dual of the given problem. With a value of auto, the algorithm decides which approach is probably the fastest depending on the problem dimensions (see dualizer_threshold).
optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
GlopParameters.SolverBehavior getSolveDualProblem()
Whether or not we solve the dual of the given problem. With a value of auto, the algorithm decides which approach is probably the fastest depending on the problem dimensions (see dualizer_threshold).
optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
boolean hasDualizerThreshold()
When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
optional double dualizer_threshold = 21 [default = 1.5];
double getDualizerThreshold()
When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
optional double dualizer_threshold = 21 [default = 1.5];
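The two fields above are typically configured together on the builder side. A hedged sketch; the setter names are the standard protobuf counterparts of the getters documented here, and the 1.2 threshold is an arbitrary illustration value:

```java
import com.google.ortools.glop.GlopParameters;

GlopParameters params =
    GlopParameters.newBuilder()
        // Let Glop pick between the primal problem and its dual...
        .setSolveDualProblem(GlopParameters.SolverBehavior.LET_SOLVER_DECIDE)
        // ...but dualize as soon as #constraints > 1.2 * #variables.
        .setDualizerThreshold(1.2)
        .build();
```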
boolean hasSolutionFeasibilityTolerance()
When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected. The tolerance is "relative" in the sense that our thresholds are:
- tolerance * max(1.0, abs(bound)) for crossing a given bound.
- tolerance * max(1.0, abs(cost)) for an infeasible reduced cost.
- tolerance for an infeasible dual value.
optional double solution_feasibility_tolerance = 22 [default = 1e-06];
double getSolutionFeasibilityTolerance()
When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected. The tolerance is "relative" in the sense that our thresholds are:
- tolerance * max(1.0, abs(bound)) for crossing a given bound.
- tolerance * max(1.0, abs(cost)) for an infeasible reduced cost.
- tolerance for an infeasible dual value.
optional double solution_feasibility_tolerance = 22 [default = 1e-06];
boolean hasProvideStrongOptimalGuarantee()
If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that:
- The primal variables are within their bounds.
- The dual variables are within their bounds.
- If we modify each component of the right-hand side a bit and each component of the objective function a bit, then the pair (primal values, dual values) is an EXACT optimal solution of the perturbed problem.
- The modifications above are smaller than the associated tolerances as defined in the comment for solution_feasibility_tolerance (*).
(*): This is the only place where the guarantee is not tight, since we compute the upper bounds with a scalar product of the primal/dual solution and the initial problem coefficients with only double precision.
Note that whether or not this option is true, we still check the primal/dual infeasibility and objective gap. However, if it is false, we don't move the primal/dual values within their bounds and leave them untouched.
optional bool provide_strong_optimal_guarantee = 24 [default = true];
boolean getProvideStrongOptimalGuarantee()
If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that:
- The primal variables are within their bounds.
- The dual variables are within their bounds.
- If we modify each component of the right-hand side a bit and each component of the objective function a bit, then the pair (primal values, dual values) is an EXACT optimal solution of the perturbed problem.
- The modifications above are smaller than the associated tolerances as defined in the comment for solution_feasibility_tolerance (*).
(*): This is the only place where the guarantee is not tight, since we compute the upper bounds with a scalar product of the primal/dual solution and the initial problem coefficients with only double precision.
Note that whether or not this option is true, we still check the primal/dual infeasibility and objective gap. However, if it is false, we don't move the primal/dual values within their bounds and leave them untouched.
optional bool provide_strong_optimal_guarantee = 24 [default = true];
boolean hasChangeStatusToImprecise()
If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
optional bool change_status_to_imprecise = 58 [default = true];
boolean getChangeStatusToImprecise()
If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
optional bool change_status_to_imprecise = 58 [default = true];
boolean hasMaxNumberOfReoptimizations()
When the solution of phase II is imprecise, we re-run the phase II with the opposite algorithm from that imprecise solution (i.e., if primal or dual simplex was used, we use dual or primal simplex, respectively). We repeat such re-optimization until the solution is precise, or we hit this limit.
optional double max_number_of_reoptimizations = 56 [default = 40];
double getMaxNumberOfReoptimizations()
When the solution of phase II is imprecise, we re-run the phase II with the opposite algorithm from that imprecise solution (i.e., if primal or dual simplex was used, we use dual or primal simplex, respectively). We repeat such re-optimization until the solution is precise, or we hit this limit.
optional double max_number_of_reoptimizations = 56 [default = 40];
boolean hasLuFactorizationPivotThreshold()
Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column. The value must be in [0,1].
optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
double getLuFactorizationPivotThreshold()
Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column. The value must be in [0,1].
optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
boolean hasMaxTimeInSeconds()
Maximum time allowed in seconds to solve a problem.
optional double max_time_in_seconds = 26 [default = inf];
double getMaxTimeInSeconds()
Maximum time allowed in seconds to solve a problem.
optional double max_time_in_seconds = 26 [default = inf];
boolean hasMaxDeterministicTime()
Maximum deterministic time allowed to solve a problem. The deterministic time is more or less correlated with the running time, and its unit should be roughly one second (at least on a Xeon(R) CPU E5-1650 v2 @ 3.50GHz). TODO(user): Improve the correlation.
optional double max_deterministic_time = 45 [default = inf];
double getMaxDeterministicTime()
Maximum deterministic time allowed to solve a problem. The deterministic time is more or less correlated with the running time, and its unit should be roughly one second (at least on a Xeon(R) CPU E5-1650 v2 @ 3.50GHz). TODO(user): Improve the correlation.
optional double max_deterministic_time = 45 [default = inf];
boolean hasMaxNumberOfIterations()
Maximum number of simplex iterations to solve a problem. A value of -1 means no limit.
optional int64 max_number_of_iterations = 27 [default = -1];
long getMaxNumberOfIterations()
Maximum number of simplex iterations to solve a problem. A value of -1 means no limit.
optional int64 max_number_of_iterations = 27 [default = -1];
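The three limits above (wall-clock time, deterministic time, and iteration count) can be set independently and combined. A hedged builder-side sketch with arbitrary illustration values (setter names follow standard protobuf codegen):

```java
import com.google.ortools.glop.GlopParameters;

GlopParameters limits =
    GlopParameters.newBuilder()
        .setMaxTimeInSeconds(30.0)          // wall-clock budget
        .setMaxDeterministicTime(60.0)      // reproducible "deterministic" budget
        .setMaxNumberOfIterations(100_000L) // -1 (the default) means no limit
        .build();
```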
boolean hasMarkowitzZlatevParameter()
How many columns do we look at in the Markowitz pivoting rule to find a good pivot. See markowitz.h.
optional int32 markowitz_zlatev_parameter = 29 [default = 3];
int getMarkowitzZlatevParameter()
How many columns do we look at in the Markowitz pivoting rule to find a good pivot. See markowitz.h.
optional int32 markowitz_zlatev_parameter = 29 [default = 3];
boolean hasMarkowitzSingularityThreshold()
If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular. Note that this is an absolute threshold and is not relative to the other possible pivots on the same column (see lu_factorization_pivot_threshold).
optional double markowitz_singularity_threshold = 30 [default = 1e-15];
double getMarkowitzSingularityThreshold()
If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular. Note that this is an absolute threshold and is not relative to the other possible pivots on the same column (see lu_factorization_pivot_threshold).
optional double markowitz_singularity_threshold = 30 [default = 1e-15];
boolean hasUseDualSimplex()
Whether or not we use the dual simplex algorithm instead of the primal.
optional bool use_dual_simplex = 31 [default = false];
boolean getUseDualSimplex()
Whether or not we use the dual simplex algorithm instead of the primal.
optional bool use_dual_simplex = 31 [default = false];
boolean hasAllowSimplexAlgorithmChange()
During incremental solve, let the solver decide whether it uses the primal or dual simplex algorithm depending on the current solution and on the new problem. Note that even if this is true, the value of use_dual_simplex still indicates the default algorithm that the solver will use.
optional bool allow_simplex_algorithm_change = 32 [default = false];
boolean getAllowSimplexAlgorithmChange()
During incremental solve, let the solver decide whether it uses the primal or dual simplex algorithm depending on the current solution and on the new problem. Note that even if this is true, the value of use_dual_simplex still indicates the default algorithm that the solver will use.
optional bool allow_simplex_algorithm_change = 32 [default = false];
boolean hasDevexWeightsResetPeriod()
Devex weights will be reset to 1.0 after that number of updates.
optional int32 devex_weights_reset_period = 33 [default = 150];
int getDevexWeightsResetPeriod()
Devex weights will be reset to 1.0 after that number of updates.
optional int32 devex_weights_reset_period = 33 [default = 150];
boolean hasUsePreprocessing()
Whether or not we use advanced preprocessing techniques.
optional bool use_preprocessing = 34 [default = true];
boolean getUsePreprocessing()
Whether or not we use advanced preprocessing techniques.
optional bool use_preprocessing = 34 [default = true];
boolean hasUseMiddleProductFormUpdate()
Whether or not to use the middle product form update rather than the standard eta LU update. The middle product form update should be a lot more efficient (close to the Forrest-Tomlin update, a bit slower but easier to implement). See for more details: Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised simplex method", 28 January 2013, Technical Report ERGO-13-0001 http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
optional bool use_middle_product_form_update = 35 [default = true];
boolean getUseMiddleProductFormUpdate()
Whether or not to use the middle product form update rather than the standard eta LU update. The middle product form update should be a lot more efficient (close to the Forrest-Tomlin update, a bit slower but easier to implement). See for more details: Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised simplex method", 28 January 2013, Technical Report ERGO-13-0001 http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
optional bool use_middle_product_form_update = 35 [default = true];
boolean hasInitializeDevexWithColumnNorms()
Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
optional bool initialize_devex_with_column_norms = 36 [default = true];
boolean getInitializeDevexWithColumnNorms()
Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
optional bool initialize_devex_with_column_norms = 36 [default = true];
boolean hasExploitSingletonColumnInInitialBasis()
Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
boolean getExploitSingletonColumnInInitialBasis()
Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
boolean hasDualSmallPivotThreshold()
Like small_pivot_threshold but for the dual simplex. This is needed because the dual algorithm does not interpret this value in the same way. TODO(user): Clean this up and use the same small pivot detection.
optional double dual_small_pivot_threshold = 38 [default = 0.0001];
double getDualSmallPivotThreshold()
Like small_pivot_threshold but for the dual simplex. This is needed because the dual algorithm does not interpret this value in the same way. TODO(user): Clean this up and use the same small pivot detection.
optional double dual_small_pivot_threshold = 38 [default = 0.0001];
boolean hasPreprocessorZeroTolerance()
A floating point tolerance used by the preprocessors. This is used for things like detecting if two columns/rows are proportional or if an interval is empty. Note that the preprocessors also use solution_feasibility_tolerance() to detect if a problem is infeasible.
optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
double getPreprocessorZeroTolerance()
A floating point tolerance used by the preprocessors. This is used for things like detecting if two columns/rows are proportional or if an interval is empty. Note that the preprocessors also use solution_feasibility_tolerance() to detect if a problem is infeasible.
optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
boolean hasObjectiveLowerLimit()
The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit. Depending on the simplex algorithm (primal or dual) and the optimization direction, note that only one bound will be used at a time. Important: The solver does not add any tolerances to these values, and as soon as the objective (as computed by the solver, so with some imprecision) crosses one of these bounds (strictly), the search will stop. It is up to the client to add any tolerance if needed.
optional double objective_lower_limit = 40 [default = -inf];
double getObjectiveLowerLimit()
The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit. Depending on the simplex algorithm (primal or dual) and the optimization direction, note that only one bound will be used at a time. Important: The solver does not add any tolerances to these values, and as soon as the objective (as computed by the solver, so with some imprecision) crosses one of these bounds (strictly), the search will stop. It is up to the client to add any tolerance if needed.
optional double objective_lower_limit = 40 [default = -inf];
boolean hasObjectiveUpperLimit()
optional double objective_upper_limit = 41 [default = inf];
double getObjectiveUpperLimit()
optional double objective_upper_limit = 41 [default = inf];
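Because the solver adds no tolerance of its own to these limits, any slack has to be folded into the bounds by the caller. A hedged sketch with arbitrary illustration values (setter names follow standard protobuf codegen; the tolerance is a hypothetical choice by the caller):

```java
import com.google.ortools.glop.GlopParameters;

double clientTolerance = 1e-6;  // hypothetical slack chosen by the caller
GlopParameters params =
    GlopParameters.newBuilder()
        // Widen both limits slightly so that solver imprecision alone
        // does not trigger an early stop.
        .setObjectiveLowerLimit(-1000.0 - clientTolerance)
        .setObjectiveUpperLimit(1000.0 + clientTolerance)
        .build();
```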
boolean hasDegenerateMinistepFactor()
During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable). That is, the variable values are unchanged for the primal simplex or the reduced costs are unchanged for the dual simplex. However, instead of doing a step of length zero, it seems to be better on degenerate problems to do a small positive step. This is what is recommended in the EXPAND procedure described in: P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. "A practical anti-cycling procedure for linearly constrained optimization". Mathematical Programming, 45:437–474, 1989. Here, during a degenerate iteration we do a small positive step of this factor times the primal (resp. dual) tolerance. In the primal simplex, this may effectively push variable values (very slightly) further out of their bounds (resp. reduced costs for the dual simplex). Setting this to zero reverts to the more conservative approach of a zero step during degenerate iterations.
optional double degenerate_ministep_factor = 42 [default = 0.01];
double getDegenerateMinistepFactor()
During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable). That is, the variable values are unchanged for the primal simplex or the reduced costs are unchanged for the dual simplex. However, instead of doing a step of length zero, it seems to be better on degenerate problems to do a small positive step. This is what is recommended in the EXPAND procedure described in: P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. "A practical anti-cycling procedure for linearly constrained optimization". Mathematical Programming, 45:437–474, 1989. Here, during a degenerate iteration we do a small positive step of this factor times the primal (resp. dual) tolerance. In the primal simplex, this may effectively push variable values (very slightly) further out of their bounds (resp. reduced costs for the dual simplex). Setting this to zero reverts to the more conservative approach of a zero step during degenerate iterations.
optional double degenerate_ministep_factor = 42 [default = 0.01];
boolean hasRandomSeed()
At the beginning of each solve, the random number generator used in some part of the solver is reinitialized to this seed. If you change the random seed, the solver may make different choices during the solving process. Note that this may lead to a different solution, for example a different optimal basis. For some problems, the running time may vary a lot depending on small changes in the solving algorithm. Running the solver with different seeds makes benchmarks more robust when evaluating new features. Also note that the solver is fully deterministic: two runs of the same binary, on the same machine, on the exact same data and with the same parameters will go through the exact same iterations. If they hit a time limit, they might of course yield different results because one will have advanced farther than the other.
optional int32 random_seed = 43 [default = 1];
int getRandomSeed()
At the beginning of each solve, the random number generator used in some part of the solver is reinitialized to this seed. If you change the random seed, the solver may make different choices during the solving process. Note that this may lead to a different solution, for example a different optimal basis. For some problems, the running time may vary a lot depending on small changes in the solving algorithm. Running the solver with different seeds makes benchmarks more robust when evaluating new features. Also note that the solver is fully deterministic: two runs of the same binary, on the same machine, on the exact same data and with the same parameters will go through the exact same iterations. If they hit a time limit, they might of course yield different results because one will have advanced farther than the other.
optional int32 random_seed = 43 [default = 1];
boolean hasNumOmpThreads()
Number of threads in the OMP parallel sections. If left to 1, the code will not create any OMP threads and will remain single-threaded.
optional int32 num_omp_threads = 44 [default = 1];
int getNumOmpThreads()
Number of threads in the OMP parallel sections. If left to 1, the code will not create any OMP threads and will remain single-threaded.
optional int32 num_omp_threads = 44 [default = 1];
boolean hasPerturbCostsInDualSimplex()
When this is true, then the costs are randomly perturbed before the dual simplex is even started. This has been shown to improve the dual simplex performance. For a good reference, see Huangfu Q (2013) "High performance simplex solver", Ph.D. dissertation, University of Edinburgh.
optional bool perturb_costs_in_dual_simplex = 53 [default = false];
boolean getPerturbCostsInDualSimplex()
When this is true, then the costs are randomly perturbed before the dual simplex is even started. This has been shown to improve the dual simplex performance. For a good reference, see Huangfu Q (2013) "High performance simplex solver", Ph.D. dissertation, University of Edinburgh.
optional bool perturb_costs_in_dual_simplex = 53 [default = false];
boolean hasUseDedicatedDualFeasibilityAlgorithm()
We have two possible dual phase I algorithms. Both work on an LP that minimizes the sum of dual infeasibilities. One uses dedicated code (when this param is true); the other uses exactly the same code as the dual phase II but on an auxiliary problem where the variable bounds of the original problem are changed. TODO(user): For now we have both, but ideally the non-dedicated version will win since it is a lot less code to maintain.
optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
boolean getUseDedicatedDualFeasibilityAlgorithm()
We have two possible dual phase I algorithms. Both work on an LP that minimizes the sum of dual infeasibilities. One uses dedicated code (when this param is true); the other uses exactly the same code as the dual phase II but on an auxiliary problem where the variable bounds of the original problem are changed. TODO(user): For now we have both, but ideally the non-dedicated version will win since it is a lot less code to maintain.
optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
boolean hasRelativeCostPerturbation()
The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05];
double getRelativeCostPerturbation()
The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05];
boolean hasRelativeMaxCostPerturbation()
optional double relative_max_cost_perturbation = 55 [default = 1e-07];
double getRelativeMaxCostPerturbation()
optional double relative_max_cost_perturbation = 55 [default = 1e-07];
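The perturbation magnitude formula above can be evaluated directly to get a feel for the scale involved. A small worked example in plain Java, using the two documented defaults and made-up cost values:

```java
import java.util.Random;

Random random = new Random(1);
double cost = 2.0;       // |cost| of one particular column (illustration value)
double maxCost = 100.0;  // largest |cost| in the problem (illustration value)
double relativeCostPerturbation = 1e-5;     // documented default
double relativeMaxCostPerturbation = 1e-7;  // documented default

double randomIn = 1.0 + random.nextDouble();  // RandomIn(1.0, 2.0)
double magnitude =
    randomIn * (relativeCostPerturbation * cost
        + relativeMaxCostPerturbation * maxCost);
// Here the deterministic part is 2e-5 + 1e-5 = 3e-5, so the perturbation
// magnitude lands somewhere in [3e-5, 6e-5).
```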
boolean hasInitialConditionNumberThreshold()
If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all slack basis.
optional double initial_condition_number_threshold = 59 [default = 1e+50];
double getInitialConditionNumberThreshold()
If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all slack basis.
optional double initial_condition_number_threshold = 59 [default = 1e+50];
boolean hasLogSearchProgress()
If true, logs the progress of a solve to LOG(INFO). Note that the same messages can also be turned on by displaying logs at level 1 for the relevant files.
optional bool log_search_progress = 61 [default = false];
boolean getLogSearchProgress()
If true, logs the progress of a solve to LOG(INFO). Note that the same messages can also be turned on by displaying logs at level 1 for the relevant files.
optional bool log_search_progress = 61 [default = false];
boolean hasLogToStdout()
If true, logs will be displayed to stdout instead of using Google log info.
optional bool log_to_stdout = 66 [default = true];
boolean getLogToStdout()
If true, logs will be displayed to stdout instead of using Google log info.
optional bool log_to_stdout = 66 [default = true];
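These two logging fields are usually toggled together when debugging a solve. A hedged builder-side sketch (setter names follow standard protobuf codegen):

```java
import com.google.ortools.glop.GlopParameters;

GlopParameters logging =
    GlopParameters.newBuilder()
        .setLogSearchProgress(true)  // off by default
        .setLogToStdout(true)        // already the documented default
        .build();
```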
boolean hasCrossoverBoundSnappingDistance()
If the starting basis contains FREE variables with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter. The starting statuses can contain FREE variables with bounds if a user sets them like this externally. Also, any variable with an initial BASIC status that was not kept in the initial basis is marked as FREE before this step is applied. Note that by default a FREE variable is assumed to be zero unless a starting value was specified via SetStartingVariableValuesForNextSolve(). Note that, at the end of the solve, some of these FREE variables with bounds and an interior-point value might still be left in the final solution. Enable push_to_vertex to clean these up.
optional double crossover_bound_snapping_distance = 64 [default = inf];
double getCrossoverBoundSnappingDistance()
If the starting basis contains FREE variables with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter. The starting statuses can contain FREE variables with bounds if a user sets them like this externally. Also, any variable with an initial BASIC status that was not kept in the initial basis is marked as FREE before this step is applied. Note that by default a FREE variable is assumed to be zero unless a starting value was specified via SetStartingVariableValuesForNextSolve(). Note that, at the end of the solve, some of these FREE variables with bounds and an interior-point value might still be left in the final solution. Enable push_to_vertex to clean these up.
optional double crossover_bound_snapping_distance = 64 [default = inf];
boolean hasPushToVertex()
If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to bounds, obtaining a vertex solution. Note this situation can happen only if a starting value was specified via SetStartingVariableValuesForNextSolve().
optional bool push_to_vertex = 65 [default = true];
boolean getPushToVertex()
If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to bounds, obtaining a vertex solution. Note this situation can happen only if a starting value was specified via SetStartingVariableValuesForNextSolve().
optional bool push_to_vertex = 65 [default = true];
boolean hasUseImpliedFreePreprocessor()
If presolve runs, include the pass that detects implied free variables.
optional bool use_implied_free_preprocessor = 67 [default = true];
boolean getUseImpliedFreePreprocessor()
If presolve runs, include the pass that detects implied free variables.
optional bool use_implied_free_preprocessor = 67 [default = true];
boolean hasMaxValidMagnitude()
Any finite values in the input LP must be below this threshold, otherwise the model will be reported invalid. This is needed to avoid floating point overflow when evaluating bounds * coeff for instance. In practice, users shouldn't use super large values in an LP. With the default threshold, even evaluating a large constraint with variables at their bounds shouldn't cause any overflow.
optional double max_valid_magnitude = 70 [default = 1e+30];
double getMaxValidMagnitude()
Any finite values in the input LP must be below this threshold, otherwise the model will be reported invalid. This is needed to avoid floating point overflow when evaluating bounds * coeff for instance. In practice, users shouldn't use super large values in an LP. With the default threshold, even evaluating a large constraint with variables at their bounds shouldn't cause any overflow.
optional double max_valid_magnitude = 70 [default = 1e+30];
boolean hasDropMagnitude()
Values in the input LP lower than this will be ignored. This is similar to drop_tolerance but more aggressive, as this is used before scaling. This is mainly here to avoid underflow and to have simpler invariants in the code, like a * b == 0 iff a or b is zero.
optional double drop_magnitude = 71 [default = 1e-30];
double getDropMagnitude()
Values in the input LP lower than this will be ignored. This is similar to drop_tolerance but more aggressive, as this is used before scaling. This is mainly here to avoid underflow and to have simpler invariants in the code, like a * b == 0 iff a or b is zero.
optional double drop_magnitude = 71 [default = 1e-30];
boolean hasDualPricePrioritizeNorm()
On some problems like stp3d or pds-100 this makes a huge difference in speed and number of iterations of the dual simplex.
optional bool dual_price_prioritize_norm = 69 [default = false];
boolean getDualPricePrioritizeNorm()
On some problems like stp3d or pds-100 this makes a huge difference in speed and number of iterations of the dual simplex.
optional bool dual_price_prioritize_norm = 69 [default = false];
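This interface only exposes read access; in practice a GlopParameters message is built with its generated builder and handed to a solver. A hedged end-to-end sketch, assuming the OR-Tools Java wrapper's MPSolver.createSolver("GLOP") and setSolverSpecificParametersAsString(...) entry points, neither of which is documented on this page:

```java
import com.google.ortools.Loader;
import com.google.ortools.glop.GlopParameters;
import com.google.ortools.linearsolver.MPSolver;

public final class GlopParametersExample {
  public static void main(String[] args) {
    Loader.loadNativeLibraries();

    GlopParameters params =
        GlopParameters.newBuilder()
            .setUseDualSimplex(true)
            .setMaxTimeInSeconds(10.0)
            .build();

    MPSolver solver = MPSolver.createSolver("GLOP");
    // Generated protobuf messages print in text format, which is what the
    // solver-specific parameter string expects (assumption: this mirrors the
    // C++/Python wrappers).
    solver.setSolverSpecificParametersAsString(params.toString());
    // ... declare variables and constraints, then call solver.solve().
  }
}
```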