Class GlopParameters.Builder

java.lang.Object
  com.google.protobuf.AbstractMessageLite.Builder
    com.google.protobuf.AbstractMessage.Builder<GlopParameters.Builder>
      com.google.protobuf.GeneratedMessage.Builder<GlopParameters.Builder>
        com.google.ortools.glop.GlopParameters.Builder

- All Implemented Interfaces:
  GlopParametersOrBuilder, com.google.protobuf.Message.Builder, com.google.protobuf.MessageLite.Builder, com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder, Cloneable
- Enclosing class:
  GlopParameters

public static final class GlopParameters.Builder
extends com.google.protobuf.GeneratedMessage.Builder<GlopParameters.Builder>
implements GlopParametersOrBuilder

next id = 73

Protobuf type operations_research.glop.GlopParameters
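A typical use of this builder is to set a few fields and call build(); the sketch below uses setters listed in the Method Summary (the class name GlopParamsExample is illustrative, and running it requires the OR-Tools jar on the classpath):

```java
import com.google.ortools.glop.GlopParameters;

public final class GlopParamsExample {
  public static void main(String[] args) {
    // Build an immutable GlopParameters message via its Builder.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setUseDualSimplex(true)           // use dual instead of primal simplex
            .setMaxTimeInSeconds(30.0)         // time limit for a solve
            .setMaxNumberOfIterations(100000)  // simplex iteration limit
            .build();

    // hasFoo() reports whether an optional field was explicitly set.
    System.out.println(params.hasUseDualSimplex());   // true
    System.out.println(params.getMaxTimeInSeconds()); // 30.0
  }
}
```

Each setter returns the builder itself, which is what allows the chained style above.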
-
Method Summary

Modifier and Type / Method / Description

GlopParameters build()
GlopParameters buildPartial()
GlopParameters.Builder clear()

In addition, the builder defines one clearFoo() method per field, from clearAllowSimplexAlgorithmChange() through clearUseTransposedMatrix(); each returns GlopParameters.Builder and resets that field to its default. The per-field documentation is the same as that shown with the getters below.
boolean getAllowSimplexAlgorithmChange()
  During incremental solve, let the solver decide whether to use the primal or dual simplex algorithm depending on the current solution and on the new problem.
int getBasisRefactorizationPeriod()
  Number of iterations between two basis refactorizations.
boolean getChangeStatusToImprecise()
  If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
GlopParameters.CostScalingAlgorithm getCostScaling()
  optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
double getCrossoverBoundSnappingDistance()
  If the starting basis contains a FREE variable with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter.
GlopParameters getDefaultInstanceForType()
double getDegenerateMinistepFactor()
  During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable).
static com.google.protobuf.Descriptors.Descriptor getDescriptor()
com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
int getDevexWeightsResetPeriod()
  Devex weights will be reset to 1.0 after that number of updates.
double getDropMagnitude()
  Values in the input LP smaller than this will be ignored.
double getDropTolerance()
  In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places).
double getDualFeasibilityTolerance()
  Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates.
double getDualizerThreshold()
  When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
boolean getDualPricePrioritizeNorm()
  On some problems, like stp3d or pds-100, this makes a huge difference in speed and number of iterations of the dual simplex.
double getDualSmallPivotThreshold()
  Like small_pivot_threshold but for the dual simplex.
boolean getDynamicallyAdjustRefactorizationPeriod()
  If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues).
boolean getExploitSingletonColumnInInitialBasis()
  Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
GlopParameters.PricingRule getFeasibilityRule()
  PricingRule to use during the feasibility phase.
double getHarrisToleranceRatio()
  This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds.
GlopParameters.InitialBasisHeuristic getInitialBasis()
  What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
double getInitialConditionNumberThreshold()
  If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all-slack basis.
boolean getInitializeDevexWithColumnNorms()
  Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
boolean getLogSearchProgress()
  If true, logs the progress of a solve to LOG(INFO).
boolean getLogToStdout()
  If true, logs will be displayed to stdout instead of using Google log info.
double getLuFactorizationPivotThreshold()
  Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column.
double getMarkowitzSingularityThreshold()
  If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular.
int getMarkowitzZlatevParameter()
  How many columns we look at in the Markowitz pivoting rule to find a good pivot.
double getMaxDeterministicTime()
  Maximum deterministic time allowed to solve a problem.
long getMaxNumberOfIterations()
  Maximum number of simplex iterations to solve a problem.
double getMaxNumberOfReoptimizations()
  When the solution of phase II is imprecise, we re-run phase II with the opposite algorithm from the one that produced the imprecise solution (i.e., if primal or dual simplex was used, we use dual or primal simplex, respectively).
double getMaxTimeInSeconds()
  Maximum time allowed in seconds to solve a problem.
double getMaxValidMagnitude()
  Any finite value in the input LP must be below this threshold, otherwise the model will be reported invalid.
double getMinimumAcceptablePivot()
  We never follow a basis change with a pivot under this threshold.
int getNumOmpThreads()
  Number of threads in the OMP parallel sections.
double getObjectiveLowerLimit()
  The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit.
double getObjectiveUpperLimit()
  optional double objective_upper_limit = 41 [default = inf];
GlopParameters.PricingRule getOptimizationRule()
  PricingRule to use during the optimization phase.
boolean getPerturbCostsInDualSimplex()
  When this is true, the costs are randomly perturbed before the dual simplex is even started.
double getPreprocessorZeroTolerance()
  A floating point tolerance used by the preprocessors.
double getPrimalFeasibilityTolerance()
  This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible.
boolean getProvideStrongOptimalGuarantee()
  If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that: - the primal variables are within their bounds
boolean getPushToVertex()
  If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to bounds, obtaining a vertex solution.
int getRandomSeed()
  At the beginning of each solve, the random number generator used in some parts of the solver is reinitialized to this seed.
double getRatioTestZeroThreshold()
  During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test.
double getRecomputeEdgesNormThreshold()
  Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1.
double getRecomputeReducedCostsThreshold()
  We estimate the accuracy of the iteratively computed reduced costs.
double getRefactorizationThreshold()
  We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways: - from direction[leaving_row]
double getRelativeCostPerturbation()
  The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * (relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
double getRelativeMaxCostPerturbation()
  optional double relative_max_cost_perturbation = 55 [default = 1e-07];
GlopParameters.ScalingAlgorithm getScalingMethod()
  optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
double getSmallPivotThreshold()
  When we choose the leaving variable, we want to avoid small pivots because they are less precise and may cause numerical instabilities.
double getSolutionFeasibilityTolerance()
  When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected.
GlopParameters.SolverBehavior getSolveDualProblem()
  Whether or not we solve the dual of the given problem.
boolean getUseAbslRandom()
  Whether to use absl::BitGen instead of MTRandom.
boolean getUseDedicatedDualFeasibilityAlgorithm()
  We have two possible dual phase I algorithms.
boolean getUseDualSimplex()
  Whether or not we use the dual simplex algorithm instead of the primal.
boolean getUseImpliedFreePreprocessor()
  If presolve runs, include the pass that detects implied free variables.
boolean getUseMiddleProductFormUpdate()
  Whether or not to use the middle product form update rather than the standard eta LU update.
boolean getUsePreprocessing()
  Whether or not we use advanced preprocessing techniques.
boolean getUseScaling()
  Whether or not we scale the matrix A so that the maximum coefficient on each line and each column is 1.0.
boolean getUseTransposedMatrix()
  Whether or not we keep a transposed version of the matrix A to speed up the pricing at the cost of extra memory and the initial transposition computation.
For each optional field above, the builder also defines a corresponding hasFoo() method returning boolean that reports whether the field has been set (e.g. hasAllowSimplexAlgorithmChange(), hasUseDualSimplex()); each shares the field documentation shown with its getter.

protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable()

final boolean isInitialized()
GlopParameters.Builder mergeFrom(GlopParameters other)
GlopParameters.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
GlopParameters.Builder mergeFrom(com.google.protobuf.Message other)

Each field also has a setter that returns GlopParameters.Builder; its documentation is the same as that of the corresponding getter above:

setAllowSimplexAlgorithmChange(boolean value), setBasisRefactorizationPeriod(int value), setChangeStatusToImprecise(boolean value), setCostScaling(GlopParameters.CostScalingAlgorithm value), setCrossoverBoundSnappingDistance(double value), setDegenerateMinistepFactor(double value), setDevexWeightsResetPeriod(int value), setDropMagnitude(double value), setDropTolerance(double value), setDualFeasibilityTolerance(double value), setDualizerThreshold(double value), setDualPricePrioritizeNorm(boolean value), setDualSmallPivotThreshold(double value), setDynamicallyAdjustRefactorizationPeriod(boolean value), setExploitSingletonColumnInInitialBasis(boolean value), setFeasibilityRule(GlopParameters.PricingRule value), setHarrisToleranceRatio(double value), setInitialBasis(GlopParameters.InitialBasisHeuristic value), setInitialConditionNumberThreshold(double value), setInitializeDevexWithColumnNorms(boolean value), setLogSearchProgress(boolean value), setLogToStdout(boolean value), setLuFactorizationPivotThreshold(double value), setMarkowitzSingularityThreshold(double value), setMarkowitzZlatevParameter(int value), setMaxDeterministicTime(double value), setMaxNumberOfIterations(long value), setMaxNumberOfReoptimizations(double value), setMaxTimeInSeconds(double value), setMaxValidMagnitude(double value), setMinimumAcceptablePivot(double value), setNumOmpThreads(int value), setObjectiveLowerLimit(double value), setObjectiveUpperLimit(double value), setOptimizationRule(GlopParameters.PricingRule value), setPerturbCostsInDualSimplex(boolean value), setPreprocessorZeroTolerance(double value), setPrimalFeasibilityTolerance(double value), setProvideStrongOptimalGuarantee(boolean value), setPushToVertex(boolean value), setRandomSeed(int value), setRatioTestZeroThreshold(double value), setRecomputeEdgesNormThreshold(double value), setRecomputeReducedCostsThreshold(double value), setRefactorizationThreshold(double value), setRelativeCostPerturbation(double value), setRelativeMaxCostPerturbation(double value), setScalingMethod(GlopParameters.ScalingAlgorithm value), setSmallPivotThreshold(double value), setSolutionFeasibilityTolerance(double value), setSolveDualProblem(GlopParameters.SolverBehavior value), setUseAbslRandom(boolean value), setUseDedicatedDualFeasibilityAlgorithm(boolean value), setUseDualSimplex(boolean value), setUseImpliedFreePreprocessor(boolean value), setUseMiddleProductFormUpdate(boolean value), setUsePreprocessing(boolean value), setUseScaling(boolean value), setUseTransposedMatrix(boolean value)

Methods inherited from class com.google.protobuf.GeneratedMessage.Builder
addRepeatedField, clearField, clearOneof, clone, getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, getUnknownFieldSetBuilder, hasField, hasOneof, internalGetMapField, internalGetMapFieldReflection, internalGetMutableMapField, internalGetMutableMapFieldReflection, isClean, markClean, mergeUnknownFields, mergeUnknownLengthDelimitedField, mergeUnknownVarintField, newBuilderForField, onBuilt, onChanged, parseUnknownField, setField, setRepeatedField, setUnknownFields, setUnknownFieldSetBuilder, setUnknownFieldsProto3
Methods inherited from class com.google.protobuf.AbstractMessage.Builder
findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString
Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder
addAll, addAll, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, newUninitializedMessageException
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface com.google.protobuf.Message.Builder
mergeDelimitedFrom, mergeDelimitedFrom
Methods inherited from interface com.google.protobuf.MessageLite.Builder
mergeFrom
Methods inherited from interface com.google.protobuf.MessageOrBuilder
findInitializationErrors, getAllFields, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
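The mergeFrom(GlopParameters other) overload listed above combines two parameter sets: fields explicitly set on the argument overwrite the corresponding fields already present in the builder, while unset fields are left alone. A sketch (the field choices and class name MergeExample are illustrative; requires the OR-Tools jar):

```java
import com.google.ortools.glop.GlopParameters;

public final class MergeExample {
  public static void main(String[] args) {
    // A baseline configuration.
    GlopParameters defaults =
        GlopParameters.newBuilder()
            .setMaxTimeInSeconds(60.0)
            .setUseScaling(true)
            .build();

    // User overrides: only the time limit is set here.
    GlopParameters overrides =
        GlopParameters.newBuilder().setMaxTimeInSeconds(5.0).build();

    // Fields set in `overrides` replace those in the builder; the rest survive.
    GlopParameters merged = defaults.toBuilder().mergeFrom(overrides).build();

    System.out.println(merged.getMaxTimeInSeconds()); // 5.0
    System.out.println(merged.getUseScaling());       // true
  }
}
```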
-
Method Details
-
getDescriptor
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
-
internalGetFieldAccessorTable
protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable()
- Specified by:
  internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessage.Builder<GlopParameters.Builder>
-
clear
public GlopParameters.Builder clear()- Specified by:
clear
in interfacecom.google.protobuf.Message.Builder
- Specified by:
clear
in interfacecom.google.protobuf.MessageLite.Builder
- Overrides:
clear
in classcom.google.protobuf.GeneratedMessage.Builder<GlopParameters.Builder>
-
getDescriptorForType
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()- Specified by:
getDescriptorForType
in interfacecom.google.protobuf.Message.Builder
- Specified by:
getDescriptorForType
in interfacecom.google.protobuf.MessageOrBuilder
- Overrides:
getDescriptorForType
in classcom.google.protobuf.GeneratedMessage.Builder<GlopParameters.Builder>
-
getDefaultInstanceForType
public GlopParameters getDefaultInstanceForType()- Specified by:
getDefaultInstanceForType
in interfacecom.google.protobuf.MessageLiteOrBuilder
- Specified by:
getDefaultInstanceForType
in interfacecom.google.protobuf.MessageOrBuilder
-
build
public GlopParameters build()- Specified by:
build
in interfacecom.google.protobuf.Message.Builder
- Specified by:
build
in interfacecom.google.protobuf.MessageLite.Builder
-
buildPartial
public GlopParameters buildPartial()- Specified by:
buildPartial
in interfacecom.google.protobuf.Message.Builder
- Specified by:
buildPartial
in interfacecom.google.protobuf.MessageLite.Builder
-
mergeFrom
public GlopParameters.Builder mergeFrom(com.google.protobuf.Message other)- Specified by:
mergeFrom
in interfacecom.google.protobuf.Message.Builder
- Overrides:
mergeFrom
in classcom.google.protobuf.AbstractMessage.Builder<GlopParameters.Builder>
-
mergeFrom
public GlopParameters.Builder mergeFrom(GlopParameters other)
-
isInitialized
public final boolean isInitialized()- Specified by:
isInitialized
in interfacecom.google.protobuf.MessageLiteOrBuilder
- Overrides:
isInitialized
in classcom.google.protobuf.GeneratedMessage.Builder<GlopParameters.Builder>
-
mergeFrom
public GlopParameters.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws IOException - Specified by:
mergeFrom
in interfacecom.google.protobuf.Message.Builder
- Specified by:
mergeFrom
in interfacecom.google.protobuf.MessageLite.Builder
- Overrides:
mergeFrom
in classcom.google.protobuf.AbstractMessage.Builder<GlopParameters.Builder>
- Throws:
IOException
-
hasScalingMethod
public boolean hasScalingMethod()optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
- Specified by:
hasScalingMethod
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the scalingMethod field is set.
-
getScalingMethod
public GlopParameters.ScalingAlgorithm getScalingMethod()optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
- Specified by:
getScalingMethod
in interfaceGlopParametersOrBuilder
- Returns:
- The scalingMethod.
-
setScalingMethod
public GlopParameters.Builder setScalingMethod(GlopParameters.ScalingAlgorithm value)optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
- Parameters:
value
- The scalingMethod to set.- Returns:
- This builder for chaining.
-
clearScalingMethod
public GlopParameters.Builder clearScalingMethod()optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
- Returns:
- This builder for chaining.
-
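As a usage sketch, enum-valued parameters such as scaling_method are set through the builder like any other protobuf field. This is a minimal example, assuming the OR-Tools jar is on the classpath; the EQUILIBRATION value is the documented default shown above:

```java
import com.google.ortools.glop.GlopParameters;

public class ScalingExample {
  public static void main(String[] args) {
    // Build a GlopParameters message with an explicit scaling algorithm.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setScalingMethod(GlopParameters.ScalingAlgorithm.EQUILIBRATION)
            .build();
    // hasScalingMethod() is true because the field was set explicitly,
    // even though the value equals the default.
    System.out.println(params.hasScalingMethod());
  }
}
```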
hasFeasibilityRule
public boolean hasFeasibilityRule()PricingRule to use during the feasibility phase.
optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
- Specified by:
hasFeasibilityRule
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the feasibilityRule field is set.
-
getFeasibilityRule
public GlopParameters.PricingRule getFeasibilityRule()PricingRule to use during the feasibility phase.
optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
- Specified by:
getFeasibilityRule
in interfaceGlopParametersOrBuilder
- Returns:
- The feasibilityRule.
-
setFeasibilityRule
public GlopParameters.Builder setFeasibilityRule(GlopParameters.PricingRule value)PricingRule to use during the feasibility phase.
optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
- Parameters:
value
- The feasibilityRule to set.- Returns:
- This builder for chaining.
-
clearFeasibilityRule
public GlopParameters.Builder clearFeasibilityRule()PricingRule to use during the feasibility phase.
optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
- Returns:
- This builder for chaining.
-
hasOptimizationRule
public boolean hasOptimizationRule()PricingRule to use during the optimization phase.
optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
- Specified by:
hasOptimizationRule
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the optimizationRule field is set.
-
getOptimizationRule
public GlopParameters.PricingRule getOptimizationRule()PricingRule to use during the optimization phase.
optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
- Specified by:
getOptimizationRule
in interfaceGlopParametersOrBuilder
- Returns:
- The optimizationRule.
-
setOptimizationRule
public GlopParameters.Builder setOptimizationRule(GlopParameters.PricingRule value)PricingRule to use during the optimization phase.
optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
- Parameters:
value
- The optimizationRule to set.- Returns:
- This builder for chaining.
-
clearOptimizationRule
public GlopParameters.Builder clearOptimizationRule()PricingRule to use during the optimization phase.
optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
- Returns:
- This builder for chaining.
-
hasRefactorizationThreshold
public boolean hasRefactorizationThreshold()We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways: - From direction[leaving_row]. - From update_row[entering_column]. If the two values have a relative difference above this threshold, we trigger a refactorization.
optional double refactorization_threshold = 6 [default = 1e-09];
- Specified by:
hasRefactorizationThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the refactorizationThreshold field is set.
-
getRefactorizationThreshold
public double getRefactorizationThreshold()We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways: - From direction[leaving_row]. - From update_row[entering_column]. If the two values have a relative difference above this threshold, we trigger a refactorization.
optional double refactorization_threshold = 6 [default = 1e-09];
- Specified by:
getRefactorizationThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The refactorizationThreshold.
-
setRefactorizationThreshold
We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways: - From direction[leaving_row]. - From update_row[entering_column]. If the two values have a relative difference above this threshold, we trigger a refactorization.
optional double refactorization_threshold = 6 [default = 1e-09];
- Parameters:
value
- The refactorizationThreshold to set.- Returns:
- This builder for chaining.
-
clearRefactorizationThreshold
We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways: - From direction[leaving_row]. - From update_row[entering_column]. If the two values have a relative difference above this threshold, we trigger a refactorization.
optional double refactorization_threshold = 6 [default = 1e-09];
- Returns:
- This builder for chaining.
-
hasRecomputeReducedCostsThreshold
public boolean hasRecomputeReducedCostsThreshold()We estimate the accuracy of the iteratively computed reduced costs. If it falls below this threshold, we reinitialize them from scratch. Note that such an operation is pretty fast, so we can use a low threshold. It is important to have a good accuracy here (better than the dual_feasibility_tolerance below) to be sure of the sign of such a cost.
optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
- Specified by:
hasRecomputeReducedCostsThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the recomputeReducedCostsThreshold field is set.
-
getRecomputeReducedCostsThreshold
public double getRecomputeReducedCostsThreshold()We estimate the accuracy of the iteratively computed reduced costs. If it falls below this threshold, we reinitialize them from scratch. Note that such an operation is pretty fast, so we can use a low threshold. It is important to have a good accuracy here (better than the dual_feasibility_tolerance below) to be sure of the sign of such a cost.
optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
- Specified by:
getRecomputeReducedCostsThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The recomputeReducedCostsThreshold.
-
setRecomputeReducedCostsThreshold
We estimate the accuracy of the iteratively computed reduced costs. If it falls below this threshold, we reinitialize them from scratch. Note that such an operation is pretty fast, so we can use a low threshold. It is important to have a good accuracy here (better than the dual_feasibility_tolerance below) to be sure of the sign of such a cost.
optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
- Parameters:
value
- The recomputeReducedCostsThreshold to set.- Returns:
- This builder for chaining.
-
clearRecomputeReducedCostsThreshold
We estimate the accuracy of the iteratively computed reduced costs. If it falls below this threshold, we reinitialize them from scratch. Note that such an operation is pretty fast, so we can use a low threshold. It is important to have a good accuracy here (better than the dual_feasibility_tolerance below) to be sure of the sign of such a cost.
optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
- Returns:
- This builder for chaining.
-
hasRecomputeEdgesNormThreshold
public boolean hasRecomputeEdgesNormThreshold()Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1. Recomputing norms is an expensive operation, and a large threshold is acceptable since it doesn't directly impact the solution, only the entering variable choice.
optional double recompute_edges_norm_threshold = 9 [default = 100];
- Specified by:
hasRecomputeEdgesNormThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the recomputeEdgesNormThreshold field is set.
-
getRecomputeEdgesNormThreshold
public double getRecomputeEdgesNormThreshold()Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1. Recomputing norms is an expensive operation, and a large threshold is acceptable since it doesn't directly impact the solution, only the entering variable choice.
optional double recompute_edges_norm_threshold = 9 [default = 100];
- Specified by:
getRecomputeEdgesNormThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The recomputeEdgesNormThreshold.
-
setRecomputeEdgesNormThreshold
Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1. Recomputing norms is an expensive operation, and a large threshold is acceptable since it doesn't directly impact the solution, only the entering variable choice.
optional double recompute_edges_norm_threshold = 9 [default = 100];
- Parameters:
value
- The recomputeEdgesNormThreshold to set.- Returns:
- This builder for chaining.
-
clearRecomputeEdgesNormThreshold
Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1. Recomputing norms is an expensive operation, and a large threshold is acceptable since it doesn't directly impact the solution, only the entering variable choice.
optional double recompute_edges_norm_threshold = 9 [default = 100];
- Returns:
- This builder for chaining.
-
hasPrimalFeasibilityTolerance
public boolean hasPrimalFeasibilityTolerance()This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible. We also use the same tolerance for the error A.x - b. Note that the two errors are closely related if A is scaled in such a way that the greatest coefficient magnitude on each column is 1.0. This is also simply called feasibility tolerance in other solvers.
optional double primal_feasibility_tolerance = 10 [default = 1e-08];
- Specified by:
hasPrimalFeasibilityTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the primalFeasibilityTolerance field is set.
-
getPrimalFeasibilityTolerance
public double getPrimalFeasibilityTolerance()This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible. We also use the same tolerance for the error A.x - b. Note that the two errors are closely related if A is scaled in such a way that the greatest coefficient magnitude on each column is 1.0. This is also simply called feasibility tolerance in other solvers.
optional double primal_feasibility_tolerance = 10 [default = 1e-08];
- Specified by:
getPrimalFeasibilityTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- The primalFeasibilityTolerance.
-
setPrimalFeasibilityTolerance
This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible. We also use the same tolerance for the error A.x - b. Note that the two errors are closely related if A is scaled in such a way that the greatest coefficient magnitude on each column is 1.0. This is also simply called feasibility tolerance in other solvers.
optional double primal_feasibility_tolerance = 10 [default = 1e-08];
- Parameters:
value
- The primalFeasibilityTolerance to set.- Returns:
- This builder for chaining.
-
clearPrimalFeasibilityTolerance
This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible. We also use the same tolerance for the error A.x - b. Note that the two errors are closely related if A is scaled in such a way that the greatest coefficient magnitude on each column is 1.0. This is also simply called feasibility tolerance in other solvers.
optional double primal_feasibility_tolerance = 10 [default = 1e-08];
- Returns:
- This builder for chaining.
-
hasDualFeasibilityTolerance
public boolean hasDualFeasibilityTolerance()Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates. That is they do not take part in deciding whether a solution is dual-feasible or not. Note that this value can temporarily increase during the execution of the algorithm if the estimated precision of the reduced costs is higher than this tolerance. Note also that we scale the costs (in the presolve step) so that the cost magnitude range contains one. This is also known as the optimality tolerance in other solvers.
optional double dual_feasibility_tolerance = 11 [default = 1e-08];
- Specified by:
hasDualFeasibilityTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the dualFeasibilityTolerance field is set.
-
getDualFeasibilityTolerance
public double getDualFeasibilityTolerance()Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates. That is they do not take part in deciding whether a solution is dual-feasible or not. Note that this value can temporarily increase during the execution of the algorithm if the estimated precision of the reduced costs is higher than this tolerance. Note also that we scale the costs (in the presolve step) so that the cost magnitude range contains one. This is also known as the optimality tolerance in other solvers.
optional double dual_feasibility_tolerance = 11 [default = 1e-08];
- Specified by:
getDualFeasibilityTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- The dualFeasibilityTolerance.
-
setDualFeasibilityTolerance
Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates. That is they do not take part in deciding whether a solution is dual-feasible or not. Note that this value can temporarily increase during the execution of the algorithm if the estimated precision of the reduced costs is higher than this tolerance. Note also that we scale the costs (in the presolve step) so that the cost magnitude range contains one. This is also known as the optimality tolerance in other solvers.
optional double dual_feasibility_tolerance = 11 [default = 1e-08];
- Parameters:
value
- The dualFeasibilityTolerance to set.- Returns:
- This builder for chaining.
-
clearDualFeasibilityTolerance
Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates. That is they do not take part in deciding whether a solution is dual-feasible or not. Note that this value can temporarily increase during the execution of the algorithm if the estimated precision of the reduced costs is higher than this tolerance. Note also that we scale the costs (in the presolve step) so that the cost magnitude range contains one. This is also known as the optimality tolerance in other solvers.
optional double dual_feasibility_tolerance = 11 [default = 1e-08];
- Returns:
- This builder for chaining.
-
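The primal and dual feasibility tolerances above are plain double fields, and it is common to adjust them together. A hedged sketch, assuming the OR-Tools jar is available:

```java
import com.google.ortools.glop.GlopParameters;

public class ToleranceExample {
  public static void main(String[] args) {
    // Loosen both feasibility tolerances by two orders of magnitude
    // relative to the 1e-08 defaults documented above.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setPrimalFeasibilityTolerance(1e-6)
            .setDualFeasibilityTolerance(1e-6)
            .build();
    System.out.println(params.getPrimalFeasibilityTolerance());
  }
}
```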
hasRatioTestZeroThreshold
public boolean hasRatioTestZeroThreshold()During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test. This tolerance is related to the precision at which a Solve() involving the basis matrix can be performed. TODO(user): Automatically increase it when we detect that the precision of the Solve() is worse than this.
optional double ratio_test_zero_threshold = 12 [default = 1e-09];
- Specified by:
hasRatioTestZeroThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the ratioTestZeroThreshold field is set.
-
getRatioTestZeroThreshold
public double getRatioTestZeroThreshold()During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test. This tolerance is related to the precision at which a Solve() involving the basis matrix can be performed. TODO(user): Automatically increase it when we detect that the precision of the Solve() is worse than this.
optional double ratio_test_zero_threshold = 12 [default = 1e-09];
- Specified by:
getRatioTestZeroThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The ratioTestZeroThreshold.
-
setRatioTestZeroThreshold
During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test. This tolerance is related to the precision at which a Solve() involving the basis matrix can be performed. TODO(user): Automatically increase it when we detect that the precision of the Solve() is worse than this.
optional double ratio_test_zero_threshold = 12 [default = 1e-09];
- Parameters:
value
- The ratioTestZeroThreshold to set.- Returns:
- This builder for chaining.
-
clearRatioTestZeroThreshold
During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test. This tolerance is related to the precision at which a Solve() involving the basis matrix can be performed. TODO(user): Automatically increase it when we detect that the precision of the Solve() is worse than this.
optional double ratio_test_zero_threshold = 12 [default = 1e-09];
- Returns:
- This builder for chaining.
-
hasHarrisToleranceRatio
public boolean hasHarrisToleranceRatio()This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds. The value should be in [0.0, 1.0) and should be interpreted as a ratio of the primal_feasibility_tolerance. Setting this to 0.0 basically disables the Harris ratio test while setting this too close to 1.0 will make it difficult to keep the variable values inside their bounds modulo the primal_feasibility_tolerance. Note that the same comment applies to the dual simplex ratio test. There, we allow the reduced costs to be of an infeasible sign by as much as this ratio times the dual_feasibility_tolerance.
optional double harris_tolerance_ratio = 13 [default = 0.5];
- Specified by:
hasHarrisToleranceRatio
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the harrisToleranceRatio field is set.
-
getHarrisToleranceRatio
public double getHarrisToleranceRatio()This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds. The value should be in [0.0, 1.0) and should be interpreted as a ratio of the primal_feasibility_tolerance. Setting this to 0.0 basically disables the Harris ratio test while setting this too close to 1.0 will make it difficult to keep the variable values inside their bounds modulo the primal_feasibility_tolerance. Note that the same comment applies to the dual simplex ratio test. There, we allow the reduced costs to be of an infeasible sign by as much as this ratio times the dual_feasibility_tolerance.
optional double harris_tolerance_ratio = 13 [default = 0.5];
- Specified by:
getHarrisToleranceRatio
in interfaceGlopParametersOrBuilder
- Returns:
- The harrisToleranceRatio.
-
setHarrisToleranceRatio
This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds. The value should be in [0.0, 1.0) and should be interpreted as a ratio of the primal_feasibility_tolerance. Setting this to 0.0 basically disables the Harris ratio test while setting this too close to 1.0 will make it difficult to keep the variable values inside their bounds modulo the primal_feasibility_tolerance. Note that the same comment applies to the dual simplex ratio test. There, we allow the reduced costs to be of an infeasible sign by as much as this ratio times the dual_feasibility_tolerance.
optional double harris_tolerance_ratio = 13 [default = 0.5];
- Parameters:
value
- The harrisToleranceRatio to set.- Returns:
- This builder for chaining.
-
clearHarrisToleranceRatio
This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds. The value should be in [0.0, 1.0) and should be interpreted as a ratio of the primal_feasibility_tolerance. Setting this to 0.0 basically disables the Harris ratio test while setting this too close to 1.0 will make it difficult to keep the variable values inside their bounds modulo the primal_feasibility_tolerance. Note that the same comment applies to the dual simplex ratio test. There, we allow the reduced costs to be of an infeasible sign by as much as this ratio times the dual_feasibility_tolerance.
optional double harris_tolerance_ratio = 13 [default = 0.5];
- Returns:
- This builder for chaining.
-
hasSmallPivotThreshold
public boolean hasSmallPivotThreshold()When we choose the leaving variable, we want to avoid small pivots because they are the least precise and may cause numerical instabilities. For a pivot under this threshold times the infinity norm of the direction, we try various countermeasures in order to avoid using it.
optional double small_pivot_threshold = 14 [default = 1e-06];
- Specified by:
hasSmallPivotThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the smallPivotThreshold field is set.
-
getSmallPivotThreshold
public double getSmallPivotThreshold()When we choose the leaving variable, we want to avoid small pivots because they are the least precise and may cause numerical instabilities. For a pivot under this threshold times the infinity norm of the direction, we try various countermeasures in order to avoid using it.
optional double small_pivot_threshold = 14 [default = 1e-06];
- Specified by:
getSmallPivotThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The smallPivotThreshold.
-
setSmallPivotThreshold
When we choose the leaving variable, we want to avoid small pivots because they are the least precise and may cause numerical instabilities. For a pivot under this threshold times the infinity norm of the direction, we try various countermeasures in order to avoid using it.
optional double small_pivot_threshold = 14 [default = 1e-06];
- Parameters:
value
- The smallPivotThreshold to set.- Returns:
- This builder for chaining.
-
clearSmallPivotThreshold
When we choose the leaving variable, we want to avoid small pivots because they are the least precise and may cause numerical instabilities. For a pivot under this threshold times the infinity norm of the direction, we try various countermeasures in order to avoid using it.
optional double small_pivot_threshold = 14 [default = 1e-06];
- Returns:
- This builder for chaining.
-
hasMinimumAcceptablePivot
public boolean hasMinimumAcceptablePivot()We never follow a basis change with a pivot under this threshold.
optional double minimum_acceptable_pivot = 15 [default = 1e-06];
- Specified by:
hasMinimumAcceptablePivot
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the minimumAcceptablePivot field is set.
-
getMinimumAcceptablePivot
public double getMinimumAcceptablePivot()We never follow a basis change with a pivot under this threshold.
optional double minimum_acceptable_pivot = 15 [default = 1e-06];
- Specified by:
getMinimumAcceptablePivot
in interfaceGlopParametersOrBuilder
- Returns:
- The minimumAcceptablePivot.
-
setMinimumAcceptablePivot
We never follow a basis change with a pivot under this threshold.
optional double minimum_acceptable_pivot = 15 [default = 1e-06];
- Parameters:
value
- The minimumAcceptablePivot to set.- Returns:
- This builder for chaining.
-
clearMinimumAcceptablePivot
We never follow a basis change with a pivot under this threshold.
optional double minimum_acceptable_pivot = 15 [default = 1e-06];
- Returns:
- This builder for chaining.
-
hasDropTolerance
public boolean hasDropTolerance()In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places). This parameter should be positive or zero.
optional double drop_tolerance = 52 [default = 1e-14];
- Specified by:
hasDropTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the dropTolerance field is set.
-
getDropTolerance
public double getDropTolerance()In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places). This parameter should be positive or zero.
optional double drop_tolerance = 52 [default = 1e-14];
- Specified by:
getDropTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- The dropTolerance.
-
setDropTolerance
In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places). This parameter should be positive or zero.
optional double drop_tolerance = 52 [default = 1e-14];
- Parameters:
value
- The dropTolerance to set.- Returns:
- This builder for chaining.
-
clearDropTolerance
In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places). This parameter should be positive or zero.
optional double drop_tolerance = 52 [default = 1e-14];
- Returns:
- This builder for chaining.
-
hasUseScaling
public boolean hasUseScaling()Whether or not we scale the matrix A so that the maximum coefficient on each row and each column is 1.0.
optional bool use_scaling = 16 [default = true];
- Specified by:
hasUseScaling
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the useScaling field is set.
-
getUseScaling
public boolean getUseScaling()Whether or not we scale the matrix A so that the maximum coefficient on each row and each column is 1.0.
optional bool use_scaling = 16 [default = true];
- Specified by:
getUseScaling
in interfaceGlopParametersOrBuilder
- Returns:
- The useScaling.
-
setUseScaling
Whether or not we scale the matrix A so that the maximum coefficient on each row and each column is 1.0.
optional bool use_scaling = 16 [default = true];
- Parameters:
value
- The useScaling to set.- Returns:
- This builder for chaining.
-
clearUseScaling
Whether or not we scale the matrix A so that the maximum coefficient on each row and each column is 1.0.
optional bool use_scaling = 16 [default = true];
- Returns:
- This builder for chaining.
-
hasCostScaling
public boolean hasCostScaling()optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
- Specified by:
hasCostScaling
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the costScaling field is set.
-
getCostScaling
public GlopParameters.CostScalingAlgorithm getCostScaling()optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
- Specified by:
getCostScaling
in interfaceGlopParametersOrBuilder
- Returns:
- The costScaling.
-
setCostScaling
public GlopParameters.Builder setCostScaling(GlopParameters.CostScalingAlgorithm value)optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
- Parameters:
value
- The costScaling to set.- Returns:
- This builder for chaining.
-
clearCostScaling
public GlopParameters.Builder clearCostScaling()optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
- Returns:
- This builder for chaining.
-
hasInitialBasis
public boolean hasInitialBasis()What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
- Specified by:
hasInitialBasis
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the initialBasis field is set.
-
getInitialBasis
public GlopParameters.InitialBasisHeuristic getInitialBasis()What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
- Specified by:
getInitialBasis
in interfaceGlopParametersOrBuilder
- Returns:
- The initialBasis.
-
setInitialBasis
public GlopParameters.Builder setInitialBasis(GlopParameters.InitialBasisHeuristic value)What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
- Parameters:
value
- The initialBasis to set.- Returns:
- This builder for chaining.
-
clearInitialBasis
public GlopParameters.Builder clearInitialBasis()What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
- Returns:
- This builder for chaining.
-
hasUseTransposedMatrix
public boolean hasUseTransposedMatrix()Whether or not we keep a transposed version of the matrix A to speed up the pricing at the cost of extra memory and the initial transposition computation.
optional bool use_transposed_matrix = 18 [default = true];
- Specified by:
hasUseTransposedMatrix
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the useTransposedMatrix field is set.
-
getUseTransposedMatrix
public boolean getUseTransposedMatrix()Whether or not we keep a transposed version of the matrix A to speed up the pricing at the cost of extra memory and the initial transposition computation.
optional bool use_transposed_matrix = 18 [default = true];
- Specified by:
getUseTransposedMatrix
in interfaceGlopParametersOrBuilder
- Returns:
- The useTransposedMatrix.
-
setUseTransposedMatrix
Whether or not we keep a transposed version of the matrix A to speed up the pricing at the cost of extra memory and the initial transposition computation.
optional bool use_transposed_matrix = 18 [default = true];
- Parameters:
value
- The useTransposedMatrix to set.
- Returns:
- This builder for chaining.
-
clearUseTransposedMatrix
Whether or not we keep a transposed version of the matrix A to speed up the pricing, at the cost of extra memory and the initial transposition computation.
optional bool use_transposed_matrix = 18 [default = true];
- Returns:
- This builder for chaining.
-
hasBasisRefactorizationPeriod
public boolean hasBasisRefactorizationPeriod()Number of iterations between two basis refactorizations. Note that various conditions in the algorithm may trigger a refactorization before this period is reached. Set this to 0 if you want to refactorize at each step.
optional int32 basis_refactorization_period = 19 [default = 64];
- Specified by:
hasBasisRefactorizationPeriod
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the basisRefactorizationPeriod field is set.
-
getBasisRefactorizationPeriod
public int getBasisRefactorizationPeriod()Number of iterations between two basis refactorizations. Note that various conditions in the algorithm may trigger a refactorization before this period is reached. Set this to 0 if you want to refactorize at each step.
optional int32 basis_refactorization_period = 19 [default = 64];
- Specified by:
getBasisRefactorizationPeriod
in interfaceGlopParametersOrBuilder
- Returns:
- The basisRefactorizationPeriod.
-
setBasisRefactorizationPeriod
Number of iterations between two basis refactorizations. Note that various conditions in the algorithm may trigger a refactorization before this period is reached. Set this to 0 if you want to refactorize at each step.
optional int32 basis_refactorization_period = 19 [default = 64];
- Parameters:
value
- The basisRefactorizationPeriod to set.
- Returns:
- This builder for chaining.
-
clearBasisRefactorizationPeriod
Number of iterations between two basis refactorizations. Note that various conditions in the algorithm may trigger a refactorization before this period is reached. Set this to 0 if you want to refactorize at each step.
optional int32 basis_refactorization_period = 19 [default = 64];
- Returns:
- This builder for chaining.
-
hasDynamicallyAdjustRefactorizationPeriod
public boolean hasDynamicallyAdjustRefactorizationPeriod()If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues). Depending on the estimated time to refactorize vs. the extra time spent in each solve because of the LU update, we try to balance the two.
optional bool dynamically_adjust_refactorization_period = 63 [default = true];
- Specified by:
hasDynamicallyAdjustRefactorizationPeriod
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the dynamicallyAdjustRefactorizationPeriod field is set.
-
getDynamicallyAdjustRefactorizationPeriod
public boolean getDynamicallyAdjustRefactorizationPeriod()If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues). Depending on the estimated time to refactorize vs. the extra time spent in each solve because of the LU update, we try to balance the two.
optional bool dynamically_adjust_refactorization_period = 63 [default = true];
- Specified by:
getDynamicallyAdjustRefactorizationPeriod
in interfaceGlopParametersOrBuilder
- Returns:
- The dynamicallyAdjustRefactorizationPeriod.
-
setDynamicallyAdjustRefactorizationPeriod
If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues). Depending on the estimated time to refactorize vs. the extra time spent in each solve because of the LU update, we try to balance the two.
optional bool dynamically_adjust_refactorization_period = 63 [default = true];
- Parameters:
value
- The dynamicallyAdjustRefactorizationPeriod to set.
- Returns:
- This builder for chaining.
-
clearDynamicallyAdjustRefactorizationPeriod
If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues). Depending on the estimated time to refactorize vs. the extra time spent in each solve because of the LU update, we try to balance the two.
optional bool dynamically_adjust_refactorization_period = 63 [default = true];
- Returns:
- This builder for chaining.
-
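The two refactorization knobs above combine naturally: the period gives a baseline and the dynamic adjustment lets the solver stretch it. A hedged sketch of setting both through the builder (assumes the OR-Tools Java jar is on the classpath; the class name is illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public final class RefactorizationExample {
  public static void main(String[] args) {
    // Refactorize at least every 128 iterations; with dynamic adjustment on,
    // 128 is only a lower bound and the solver may stretch the period while
    // LU updates remain cheap relative to a full refactorization.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setBasisRefactorizationPeriod(128)
            .setDynamicallyAdjustRefactorizationPeriod(true)
            .build();
    System.out.println(params.getBasisRefactorizationPeriod()); // 128
  }
}
```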
hasSolveDualProblem
public boolean hasSolveDualProblem()Whether or not we solve the dual of the given problem. With a value of auto, the algorithm decides which approach is probably the fastest depending on the problem dimensions (see dualizer_threshold).
optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
- Specified by:
hasSolveDualProblem
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the solveDualProblem field is set.
-
getSolveDualProblem
Whether or not we solve the dual of the given problem. With a value of auto, the algorithm decides which approach is probably the fastest depending on the problem dimensions (see dualizer_threshold).
optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
- Specified by:
getSolveDualProblem
in interfaceGlopParametersOrBuilder
- Returns:
- The solveDualProblem.
-
setSolveDualProblem
Whether or not we solve the dual of the given problem. With a value of auto, the algorithm decides which approach is probably the fastest depending on the problem dimensions (see dualizer_threshold).
optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
- Parameters:
value
- The solveDualProblem to set.
- Returns:
- This builder for chaining.
-
clearSolveDualProblem
Whether or not we solve the dual of the given problem. With a value of auto, the algorithm decides which approach is probably the fastest depending on the problem dimensions (see dualizer_threshold).
optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
- Returns:
- This builder for chaining.
-
hasDualizerThreshold
public boolean hasDualizerThreshold()When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
optional double dualizer_threshold = 21 [default = 1.5];
- Specified by:
hasDualizerThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the dualizerThreshold field is set.
-
getDualizerThreshold
public double getDualizerThreshold()When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
optional double dualizer_threshold = 21 [default = 1.5];
- Specified by:
getDualizerThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The dualizerThreshold.
-
setDualizerThreshold
When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
optional double dualizer_threshold = 21 [default = 1.5];
- Parameters:
value
- The dualizerThreshold to set.
- Returns:
- This builder for chaining.
-
clearDualizerThreshold
When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
optional double dualizer_threshold = 21 [default = 1.5];
- Returns:
- This builder for chaining.
-
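The solve_dual_problem and dualizer_threshold fields work together: the threshold only matters when the solver is left to decide. A minimal sketch (assumes the OR-Tools Java jar is on the classpath; the class name is illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public final class DualizerExample {
  public static void main(String[] args) {
    // Let the solver decide whether to dualize, but only dualize when the
    // problem has more than twice as many constraints as variables.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setSolveDualProblem(GlopParameters.SolverBehavior.LET_SOLVER_DECIDE)
            .setDualizerThreshold(2.0)
            .build();
    System.out.println(params.getDualizerThreshold()); // 2.0
  }
}
```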
hasSolutionFeasibilityTolerance
public boolean hasSolutionFeasibilityTolerance()When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected. The tolerance is "relative" in the sense that our thresholds are: - tolerance * max(1.0, abs(bound)) for crossing a given bound. - tolerance * max(1.0, abs(cost)) for an infeasible reduced cost. - tolerance for an infeasible dual value.
optional double solution_feasibility_tolerance = 22 [default = 1e-06];
- Specified by:
hasSolutionFeasibilityTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the solutionFeasibilityTolerance field is set.
-
getSolutionFeasibilityTolerance
public double getSolutionFeasibilityTolerance()When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected. The tolerance is "relative" in the sense that our thresholds are: - tolerance * max(1.0, abs(bound)) for crossing a given bound. - tolerance * max(1.0, abs(cost)) for an infeasible reduced cost. - tolerance for an infeasible dual value.
optional double solution_feasibility_tolerance = 22 [default = 1e-06];
- Specified by:
getSolutionFeasibilityTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- The solutionFeasibilityTolerance.
-
setSolutionFeasibilityTolerance
When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected. The tolerance is "relative" in the sense that our thresholds are: - tolerance * max(1.0, abs(bound)) for crossing a given bound. - tolerance * max(1.0, abs(cost)) for an infeasible reduced cost. - tolerance for an infeasible dual value.
optional double solution_feasibility_tolerance = 22 [default = 1e-06];
- Parameters:
value
- The solutionFeasibilityTolerance to set.
- Returns:
- This builder for chaining.
-
clearSolutionFeasibilityTolerance
When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected. The tolerance is "relative" in the sense that our thresholds are: - tolerance * max(1.0, abs(bound)) for crossing a given bound. - tolerance * max(1.0, abs(cost)) for an infeasible reduced cost. - tolerance for an infeasible dual value.
optional double solution_feasibility_tolerance = 22 [default = 1e-06];
- Returns:
- This builder for chaining.
-
hasProvideStrongOptimalGuarantee
public boolean hasProvideStrongOptimalGuarantee()If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that: - The primal variables are within their bounds. - The dual variables are within their bounds. - If we modify each component of the right-hand side a bit and each component of the objective function a bit, then the pair (primal values, dual values) is an EXACT optimal solution of the perturbed problem. - The modifications above are smaller than the associated tolerances as defined in the comment for solution_feasibility_tolerance (*). (*): This is the only place where the guarantee is not tight, since we compute the upper bounds with a scalar product of the primal/dual solution and the initial problem coefficients using only double precision. Note that whether or not this option is true, we still check the primal/dual infeasibility and objective gap. However, if it is false, we don't move the primal/dual values within their bounds and leave them untouched.
optional bool provide_strong_optimal_guarantee = 24 [default = true];
- Specified by:
hasProvideStrongOptimalGuarantee
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the provideStrongOptimalGuarantee field is set.
-
getProvideStrongOptimalGuarantee
public boolean getProvideStrongOptimalGuarantee()If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that: - The primal variables are within their bounds. - The dual variables are within their bounds. - If we modify each component of the right-hand side a bit and each component of the objective function a bit, then the pair (primal values, dual values) is an EXACT optimal solution of the perturbed problem. - The modifications above are smaller than the associated tolerances as defined in the comment for solution_feasibility_tolerance (*). (*): This is the only place where the guarantee is not tight, since we compute the upper bounds with a scalar product of the primal/dual solution and the initial problem coefficients using only double precision. Note that whether or not this option is true, we still check the primal/dual infeasibility and objective gap. However, if it is false, we don't move the primal/dual values within their bounds and leave them untouched.
optional bool provide_strong_optimal_guarantee = 24 [default = true];
- Specified by:
getProvideStrongOptimalGuarantee
in interfaceGlopParametersOrBuilder
- Returns:
- The provideStrongOptimalGuarantee.
-
setProvideStrongOptimalGuarantee
If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that: - The primal variables are within their bounds. - The dual variables are within their bounds. - If we modify each component of the right-hand side a bit and each component of the objective function a bit, then the pair (primal values, dual values) is an EXACT optimal solution of the perturbed problem. - The modifications above are smaller than the associated tolerances as defined in the comment for solution_feasibility_tolerance (*). (*): This is the only place where the guarantee is not tight, since we compute the upper bounds with a scalar product of the primal/dual solution and the initial problem coefficients using only double precision. Note that whether or not this option is true, we still check the primal/dual infeasibility and objective gap. However, if it is false, we don't move the primal/dual values within their bounds and leave them untouched.
optional bool provide_strong_optimal_guarantee = 24 [default = true];
- Parameters:
value
- The provideStrongOptimalGuarantee to set.
- Returns:
- This builder for chaining.
-
clearProvideStrongOptimalGuarantee
If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that: - The primal variables are within their bounds. - The dual variables are within their bounds. - If we modify each component of the right-hand side a bit and each component of the objective function a bit, then the pair (primal values, dual values) is an EXACT optimal solution of the perturbed problem. - The modifications above are smaller than the associated tolerances as defined in the comment for solution_feasibility_tolerance (*). (*): This is the only place where the guarantee is not tight, since we compute the upper bounds with a scalar product of the primal/dual solution and the initial problem coefficients using only double precision. Note that whether or not this option is true, we still check the primal/dual infeasibility and objective gap. However, if it is false, we don't move the primal/dual values within their bounds and leave them untouched.
optional bool provide_strong_optimal_guarantee = 24 [default = true];
- Returns:
- This builder for chaining.
-
hasChangeStatusToImprecise
public boolean hasChangeStatusToImprecise()If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
optional bool change_status_to_imprecise = 58 [default = true];
- Specified by:
hasChangeStatusToImprecise
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the changeStatusToImprecise field is set.
-
getChangeStatusToImprecise
public boolean getChangeStatusToImprecise()If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
optional bool change_status_to_imprecise = 58 [default = true];
- Specified by:
getChangeStatusToImprecise
in interfaceGlopParametersOrBuilder
- Returns:
- The changeStatusToImprecise.
-
setChangeStatusToImprecise
If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
optional bool change_status_to_imprecise = 58 [default = true];
- Parameters:
value
- The changeStatusToImprecise to set.
- Returns:
- This builder for chaining.
-
clearChangeStatusToImprecise
If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
optional bool change_status_to_imprecise = 58 [default = true];
- Returns:
- This builder for chaining.
-
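The three precision-related fields above (feasibility tolerance, strong optimal guarantee, imprecise-status reporting) are typically tuned together. A hedged sketch (assumes the OR-Tools Java jar is on the classpath; the class name is illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public final class ToleranceExample {
  public static void main(String[] args) {
    // Tighten the a-posteriori feasibility check, keep the strong optimality
    // guarantee, and let the solver downgrade OPTIMAL to IMPRECISE when the
    // solution violates the internal tolerances.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setSolutionFeasibilityTolerance(1e-8)
            .setProvideStrongOptimalGuarantee(true)
            .setChangeStatusToImprecise(true)
            .build();
    System.out.println(params.getSolutionFeasibilityTolerance()); // 1.0E-8
  }
}
```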
hasMaxNumberOfReoptimizations
public boolean hasMaxNumberOfReoptimizations()When the solution of phase II is imprecise, we re-run phase II with the opposite algorithm, starting from that imprecise solution (i.e., if the primal or dual simplex was used, we use the dual or primal simplex, respectively). We repeat this re-optimization until the solution is precise or we hit this limit.
optional double max_number_of_reoptimizations = 56 [default = 40];
- Specified by:
hasMaxNumberOfReoptimizations
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the maxNumberOfReoptimizations field is set.
-
getMaxNumberOfReoptimizations
public double getMaxNumberOfReoptimizations()When the solution of phase II is imprecise, we re-run phase II with the opposite algorithm, starting from that imprecise solution (i.e., if the primal or dual simplex was used, we use the dual or primal simplex, respectively). We repeat this re-optimization until the solution is precise or we hit this limit.
optional double max_number_of_reoptimizations = 56 [default = 40];
- Specified by:
getMaxNumberOfReoptimizations
in interfaceGlopParametersOrBuilder
- Returns:
- The maxNumberOfReoptimizations.
-
setMaxNumberOfReoptimizations
When the solution of phase II is imprecise, we re-run phase II with the opposite algorithm, starting from that imprecise solution (i.e., if the primal or dual simplex was used, we use the dual or primal simplex, respectively). We repeat this re-optimization until the solution is precise or we hit this limit.
optional double max_number_of_reoptimizations = 56 [default = 40];
- Parameters:
value
- The maxNumberOfReoptimizations to set.
- Returns:
- This builder for chaining.
-
clearMaxNumberOfReoptimizations
When the solution of phase II is imprecise, we re-run phase II with the opposite algorithm, starting from that imprecise solution (i.e., if the primal or dual simplex was used, we use the dual or primal simplex, respectively). We repeat this re-optimization until the solution is precise or we hit this limit.
optional double max_number_of_reoptimizations = 56 [default = 40];
- Returns:
- This builder for chaining.
-
hasLuFactorizationPivotThreshold
public boolean hasLuFactorizationPivotThreshold()Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column. The value must be in [0,1].
optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
- Specified by:
hasLuFactorizationPivotThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the luFactorizationPivotThreshold field is set.
-
getLuFactorizationPivotThreshold
public double getLuFactorizationPivotThreshold()Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column. The value must be in [0,1].
optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
- Specified by:
getLuFactorizationPivotThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The luFactorizationPivotThreshold.
-
setLuFactorizationPivotThreshold
Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column. The value must be in [0,1].
optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
- Parameters:
value
- The luFactorizationPivotThreshold to set.
- Returns:
- This builder for chaining.
-
clearLuFactorizationPivotThreshold
Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column. The value must be in [0,1].
optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
- Returns:
- This builder for chaining.
-
hasMaxTimeInSeconds
public boolean hasMaxTimeInSeconds()Maximum time allowed in seconds to solve a problem.
optional double max_time_in_seconds = 26 [default = inf];
- Specified by:
hasMaxTimeInSeconds
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the maxTimeInSeconds field is set.
-
getMaxTimeInSeconds
public double getMaxTimeInSeconds()Maximum time allowed in seconds to solve a problem.
optional double max_time_in_seconds = 26 [default = inf];
- Specified by:
getMaxTimeInSeconds
in interfaceGlopParametersOrBuilder
- Returns:
- The maxTimeInSeconds.
-
setMaxTimeInSeconds
Maximum time allowed in seconds to solve a problem.
optional double max_time_in_seconds = 26 [default = inf];
- Parameters:
value
- The maxTimeInSeconds to set.
- Returns:
- This builder for chaining.
-
clearMaxTimeInSeconds
Maximum time allowed in seconds to solve a problem.
optional double max_time_in_seconds = 26 [default = inf];
- Returns:
- This builder for chaining.
-
hasMaxDeterministicTime
public boolean hasMaxDeterministicTime()Maximum deterministic time allowed to solve a problem. The deterministic time is more or less correlated with the running time, and one unit should correspond to roughly one second of real time (at least on a Xeon(R) CPU E5-1650 v2 @ 3.50GHz). TODO(user): Improve the correlation.
optional double max_deterministic_time = 45 [default = inf];
- Specified by:
hasMaxDeterministicTime
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the maxDeterministicTime field is set.
-
getMaxDeterministicTime
public double getMaxDeterministicTime()Maximum deterministic time allowed to solve a problem. The deterministic time is more or less correlated with the running time, and one unit should correspond to roughly one second of real time (at least on a Xeon(R) CPU E5-1650 v2 @ 3.50GHz). TODO(user): Improve the correlation.
optional double max_deterministic_time = 45 [default = inf];
- Specified by:
getMaxDeterministicTime
in interfaceGlopParametersOrBuilder
- Returns:
- The maxDeterministicTime.
-
setMaxDeterministicTime
Maximum deterministic time allowed to solve a problem. The deterministic time is more or less correlated with the running time, and one unit should correspond to roughly one second of real time (at least on a Xeon(R) CPU E5-1650 v2 @ 3.50GHz). TODO(user): Improve the correlation.
optional double max_deterministic_time = 45 [default = inf];
- Parameters:
value
- The maxDeterministicTime to set.
- Returns:
- This builder for chaining.
-
clearMaxDeterministicTime
Maximum deterministic time allowed to solve a problem. The deterministic time is more or less correlated with the running time, and one unit should correspond to roughly one second of real time (at least on a Xeon(R) CPU E5-1650 v2 @ 3.50GHz). TODO(user): Improve the correlation.
optional double max_deterministic_time = 45 [default = inf];
- Returns:
- This builder for chaining.
-
hasMaxNumberOfIterations
public boolean hasMaxNumberOfIterations()Maximum number of simplex iterations to solve a problem. A value of -1 means no limit.
optional int64 max_number_of_iterations = 27 [default = -1];
- Specified by:
hasMaxNumberOfIterations
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the maxNumberOfIterations field is set.
-
getMaxNumberOfIterations
public long getMaxNumberOfIterations()Maximum number of simplex iterations to solve a problem. A value of -1 means no limit.
optional int64 max_number_of_iterations = 27 [default = -1];
- Specified by:
getMaxNumberOfIterations
in interfaceGlopParametersOrBuilder
- Returns:
- The maxNumberOfIterations.
-
setMaxNumberOfIterations
Maximum number of simplex iterations to solve a problem. A value of -1 means no limit.
optional int64 max_number_of_iterations = 27 [default = -1];
- Parameters:
value
- The maxNumberOfIterations to set.
- Returns:
- This builder for chaining.
-
clearMaxNumberOfIterations
Maximum number of simplex iterations to solve a problem. A value of -1 means no limit.
optional int64 max_number_of_iterations = 27 [default = -1];
- Returns:
- This builder for chaining.
-
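The three limit fields above (wall-clock time, deterministic time, iteration count) can be set independently; their defaults are inf, inf, and -1 (no limit). A minimal sketch (assumes the OR-Tools Java jar is on the classpath; the class name is illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public final class LimitsExample {
  public static void main(String[] args) {
    // Cap wall-clock time, deterministic time, and simplex iterations.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setMaxTimeInSeconds(10.0)
            .setMaxDeterministicTime(5.0)
            .setMaxNumberOfIterations(100_000L)
            .build();
    System.out.println(params.getMaxNumberOfIterations()); // 100000
  }
}
```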
hasMarkowitzZlatevParameter
public boolean hasMarkowitzZlatevParameter()How many columns do we look at in the Markowitz pivoting rule to find a good pivot. See markowitz.h.
optional int32 markowitz_zlatev_parameter = 29 [default = 3];
- Specified by:
hasMarkowitzZlatevParameter
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the markowitzZlatevParameter field is set.
-
getMarkowitzZlatevParameter
public int getMarkowitzZlatevParameter()How many columns do we look at in the Markowitz pivoting rule to find a good pivot. See markowitz.h.
optional int32 markowitz_zlatev_parameter = 29 [default = 3];
- Specified by:
getMarkowitzZlatevParameter
in interfaceGlopParametersOrBuilder
- Returns:
- The markowitzZlatevParameter.
-
setMarkowitzZlatevParameter
How many columns do we look at in the Markowitz pivoting rule to find a good pivot. See markowitz.h.
optional int32 markowitz_zlatev_parameter = 29 [default = 3];
- Parameters:
value
- The markowitzZlatevParameter to set.
- Returns:
- This builder for chaining.
-
clearMarkowitzZlatevParameter
How many columns do we look at in the Markowitz pivoting rule to find a good pivot. See markowitz.h.
optional int32 markowitz_zlatev_parameter = 29 [default = 3];
- Returns:
- This builder for chaining.
-
hasMarkowitzSingularityThreshold
public boolean hasMarkowitzSingularityThreshold()If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular. Note that this is an absolute threshold and is not relative to the other possible pivots on the same column (see lu_factorization_pivot_threshold).
optional double markowitz_singularity_threshold = 30 [default = 1e-15];
- Specified by:
hasMarkowitzSingularityThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the markowitzSingularityThreshold field is set.
-
getMarkowitzSingularityThreshold
public double getMarkowitzSingularityThreshold()If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular. Note that this is an absolute threshold and is not relative to the other possible pivots on the same column (see lu_factorization_pivot_threshold).
optional double markowitz_singularity_threshold = 30 [default = 1e-15];
- Specified by:
getMarkowitzSingularityThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The markowitzSingularityThreshold.
-
setMarkowitzSingularityThreshold
If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular. Note that this is an absolute threshold and is not relative to the other possible pivots on the same column (see lu_factorization_pivot_threshold).
optional double markowitz_singularity_threshold = 30 [default = 1e-15];
- Parameters:
value
- The markowitzSingularityThreshold to set.
- Returns:
- This builder for chaining.
-
clearMarkowitzSingularityThreshold
If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular. Note that this is an absolute threshold and is not relative to the other possible pivots on the same column (see lu_factorization_pivot_threshold).
optional double markowitz_singularity_threshold = 30 [default = 1e-15];
- Returns:
- This builder for chaining.
-
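The LU pivot threshold (relative, field 25 above) and the two Markowitz parameters control the sparsity/stability trade-off of the factorization. A hedged sketch of adjusting all three (assumes the OR-Tools Java jar is on the classpath; the class name and chosen values are illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public final class FactorizationNumericsExample {
  public static void main(String[] args) {
    // Trade sparsity for stability: a larger relative pivot threshold and a
    // wider Markowitz column search, with a slightly looser singularity cutoff.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setLuFactorizationPivotThreshold(0.05)  // default 0.01, must be in [0,1]
            .setMarkowitzZlatevParameter(5)          // default 3 candidate columns
            .setMarkowitzSingularityThreshold(1e-14) // absolute; default 1e-15
            .build();
    System.out.println(params.getMarkowitzZlatevParameter()); // 5
  }
}
```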
hasUseDualSimplex
public boolean hasUseDualSimplex()Whether or not we use the dual simplex algorithm instead of the primal.
optional bool use_dual_simplex = 31 [default = false];
- Specified by:
hasUseDualSimplex
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the useDualSimplex field is set.
-
getUseDualSimplex
public boolean getUseDualSimplex()Whether or not we use the dual simplex algorithm instead of the primal.
optional bool use_dual_simplex = 31 [default = false];
- Specified by:
getUseDualSimplex
in interfaceGlopParametersOrBuilder
- Returns:
- The useDualSimplex.
-
setUseDualSimplex
Whether or not we use the dual simplex algorithm instead of the primal.
optional bool use_dual_simplex = 31 [default = false];
- Parameters:
value
- The useDualSimplex to set.
- Returns:
- This builder for chaining.
-
clearUseDualSimplex
Whether or not we use the dual simplex algorithm instead of the primal.
optional bool use_dual_simplex = 31 [default = false];
- Returns:
- This builder for chaining.
-
hasAllowSimplexAlgorithmChange
public boolean hasAllowSimplexAlgorithmChange()During an incremental solve, let the solver decide whether to use the primal or dual simplex algorithm depending on the current solution and on the new problem. Note that even if this is true, the value of use_dual_simplex still indicates the default algorithm that the solver will use.
optional bool allow_simplex_algorithm_change = 32 [default = false];
- Specified by:
hasAllowSimplexAlgorithmChange
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the allowSimplexAlgorithmChange field is set.
-
getAllowSimplexAlgorithmChange
public boolean getAllowSimplexAlgorithmChange()During an incremental solve, let the solver decide whether to use the primal or dual simplex algorithm depending on the current solution and on the new problem. Note that even if this is true, the value of use_dual_simplex still indicates the default algorithm that the solver will use.
optional bool allow_simplex_algorithm_change = 32 [default = false];
- Specified by:
getAllowSimplexAlgorithmChange
in interfaceGlopParametersOrBuilder
- Returns:
- The allowSimplexAlgorithmChange.
-
setAllowSimplexAlgorithmChange
During an incremental solve, let the solver decide whether to use the primal or dual simplex algorithm depending on the current solution and on the new problem. Note that even if this is true, the value of use_dual_simplex still indicates the default algorithm that the solver will use.
optional bool allow_simplex_algorithm_change = 32 [default = false];
- Parameters:
value
- The allowSimplexAlgorithmChange to set.
- Returns:
- This builder for chaining.
-
clearAllowSimplexAlgorithmChange
During an incremental solve, let the solver decide whether to use the primal or dual simplex algorithm depending on the current solution and on the new problem. Note that even if this is true, the value of use_dual_simplex still indicates the default algorithm that the solver will use.
optional bool allow_simplex_algorithm_change = 32 [default = false];
- Returns:
- This builder for chaining.
-
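As documented above, use_dual_simplex sets the default algorithm and allow_simplex_algorithm_change only lets the solver override it on incremental re-solves. A minimal sketch (assumes the OR-Tools Java jar is on the classpath; the class name is illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public final class AlgorithmChoiceExample {
  public static void main(String[] args) {
    // Default to the dual simplex, but allow the solver to switch back to the
    // primal simplex on incremental re-solves when that looks cheaper.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setUseDualSimplex(true)
            .setAllowSimplexAlgorithmChange(true)
            .build();
    System.out.println(params.getUseDualSimplex()); // true
  }
}
```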
hasDevexWeightsResetPeriod
public boolean hasDevexWeightsResetPeriod()Devex weights will be reset to 1.0 after that number of updates.
optional int32 devex_weights_reset_period = 33 [default = 150];
- Specified by:
hasDevexWeightsResetPeriod
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the devexWeightsResetPeriod field is set.
-
getDevexWeightsResetPeriod
public int getDevexWeightsResetPeriod()Devex weights will be reset to 1.0 after that number of updates.
optional int32 devex_weights_reset_period = 33 [default = 150];
- Specified by:
getDevexWeightsResetPeriod
in interfaceGlopParametersOrBuilder
- Returns:
- The devexWeightsResetPeriod.
-
setDevexWeightsResetPeriod
Devex weights will be reset to 1.0 after that number of updates.
optional int32 devex_weights_reset_period = 33 [default = 150];
- Parameters:
value
- The devexWeightsResetPeriod to set.
- Returns:
- This builder for chaining.
-
clearDevexWeightsResetPeriod
Devex weights will be reset to 1.0 after that number of updates.
optional int32 devex_weights_reset_period = 33 [default = 150];
- Returns:
- This builder for chaining.
-
hasUsePreprocessing
public boolean hasUsePreprocessing()Whether or not we use advanced preprocessing techniques.
optional bool use_preprocessing = 34 [default = true];
- Specified by:
hasUsePreprocessing
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the usePreprocessing field is set.
-
getUsePreprocessing
public boolean getUsePreprocessing()Whether or not we use advanced preprocessing techniques.
optional bool use_preprocessing = 34 [default = true];
- Specified by:
getUsePreprocessing
in interfaceGlopParametersOrBuilder
- Returns:
- The usePreprocessing.
-
setUsePreprocessing
Whether or not we use advanced preprocessing techniques.
optional bool use_preprocessing = 34 [default = true];
- Parameters:
value
- The usePreprocessing to set.- Returns:
- This builder for chaining.
-
clearUsePreprocessing
Whether or not we use advanced preprocessing techniques.
optional bool use_preprocessing = 34 [default = true];
- Returns:
- This builder for chaining.
-
hasUseMiddleProductFormUpdate
public boolean hasUseMiddleProductFormUpdate()Whether or not to use the middle product form update rather than the standard eta LU update. The middle product form update should be a lot more efficient (close to the Forrest-Tomlin update, a bit slower but easier to implement). For more details, see: Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised simplex method", 28 January 2013, Technical Report ERGO-13-001, http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
optional bool use_middle_product_form_update = 35 [default = true];
- Specified by:
hasUseMiddleProductFormUpdate
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the useMiddleProductFormUpdate field is set.
-
getUseMiddleProductFormUpdate
public boolean getUseMiddleProductFormUpdate()Whether or not to use the middle product form update rather than the standard eta LU update. The middle product form update should be a lot more efficient (close to the Forrest-Tomlin update, a bit slower but easier to implement). For more details, see: Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised simplex method", 28 January 2013, Technical Report ERGO-13-001, http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
optional bool use_middle_product_form_update = 35 [default = true];
- Specified by:
getUseMiddleProductFormUpdate
in interfaceGlopParametersOrBuilder
- Returns:
- The useMiddleProductFormUpdate.
-
setUseMiddleProductFormUpdate
Whether or not to use the middle product form update rather than the standard eta LU update. The middle product form update should be a lot more efficient (close to the Forrest-Tomlin update, a bit slower but easier to implement). For more details, see: Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised simplex method", 28 January 2013, Technical Report ERGO-13-001, http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
optional bool use_middle_product_form_update = 35 [default = true];
- Parameters:
value
- The useMiddleProductFormUpdate to set.- Returns:
- This builder for chaining.
-
clearUseMiddleProductFormUpdate
Whether or not to use the middle product form update rather than the standard eta LU update. The middle product form update should be a lot more efficient (close to the Forrest-Tomlin update, a bit slower but easier to implement). For more details, see: Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised simplex method", 28 January 2013, Technical Report ERGO-13-001, http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
optional bool use_middle_product_form_update = 35 [default = true];
- Returns:
- This builder for chaining.
-
hasInitializeDevexWithColumnNorms
public boolean hasInitializeDevexWithColumnNorms()Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
optional bool initialize_devex_with_column_norms = 36 [default = true];
- Specified by:
hasInitializeDevexWithColumnNorms
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the initializeDevexWithColumnNorms field is set.
-
getInitializeDevexWithColumnNorms
public boolean getInitializeDevexWithColumnNorms()Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
optional bool initialize_devex_with_column_norms = 36 [default = true];
- Specified by:
getInitializeDevexWithColumnNorms
in interfaceGlopParametersOrBuilder
- Returns:
- The initializeDevexWithColumnNorms.
-
setInitializeDevexWithColumnNorms
Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
optional bool initialize_devex_with_column_norms = 36 [default = true];
- Parameters:
value
- The initializeDevexWithColumnNorms to set.- Returns:
- This builder for chaining.
-
clearInitializeDevexWithColumnNorms
Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
optional bool initialize_devex_with_column_norms = 36 [default = true];
- Returns:
- This builder for chaining.
-
hasExploitSingletonColumnInInitialBasis
public boolean hasExploitSingletonColumnInInitialBasis()Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
- Specified by:
hasExploitSingletonColumnInInitialBasis
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the exploitSingletonColumnInInitialBasis field is set.
-
getExploitSingletonColumnInInitialBasis
public boolean getExploitSingletonColumnInInitialBasis()Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
- Specified by:
getExploitSingletonColumnInInitialBasis
in interfaceGlopParametersOrBuilder
- Returns:
- The exploitSingletonColumnInInitialBasis.
-
setExploitSingletonColumnInInitialBasis
Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
- Parameters:
value
- The exploitSingletonColumnInInitialBasis to set.- Returns:
- This builder for chaining.
-
clearExploitSingletonColumnInInitialBasis
Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
- Returns:
- This builder for chaining.
-
hasDualSmallPivotThreshold
public boolean hasDualSmallPivotThreshold()Like small_pivot_threshold but for the dual simplex. This is needed because the dual algorithm does not interpret this value in the same way. TODO(user): Clean this up and use the same small pivot detection.
optional double dual_small_pivot_threshold = 38 [default = 0.0001];
- Specified by:
hasDualSmallPivotThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the dualSmallPivotThreshold field is set.
-
getDualSmallPivotThreshold
public double getDualSmallPivotThreshold()Like small_pivot_threshold but for the dual simplex. This is needed because the dual algorithm does not interpret this value in the same way. TODO(user): Clean this up and use the same small pivot detection.
optional double dual_small_pivot_threshold = 38 [default = 0.0001];
- Specified by:
getDualSmallPivotThreshold
in interfaceGlopParametersOrBuilder
- Returns:
- The dualSmallPivotThreshold.
-
setDualSmallPivotThreshold
Like small_pivot_threshold but for the dual simplex. This is needed because the dual algorithm does not interpret this value in the same way. TODO(user): Clean this up and use the same small pivot detection.
optional double dual_small_pivot_threshold = 38 [default = 0.0001];
- Parameters:
value
- The dualSmallPivotThreshold to set.- Returns:
- This builder for chaining.
-
clearDualSmallPivotThreshold
Like small_pivot_threshold but for the dual simplex. This is needed because the dual algorithm does not interpret this value in the same way. TODO(user): Clean this up and use the same small pivot detection.
optional double dual_small_pivot_threshold = 38 [default = 0.0001];
- Returns:
- This builder for chaining.
-
hasPreprocessorZeroTolerance
public boolean hasPreprocessorZeroTolerance()A floating point tolerance used by the preprocessors. This is used for things like detecting if two columns/rows are proportional or if an interval is empty. Note that the preprocessors also use solution_feasibility_tolerance() to detect if a problem is infeasible.
optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
- Specified by:
hasPreprocessorZeroTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the preprocessorZeroTolerance field is set.
-
getPreprocessorZeroTolerance
public double getPreprocessorZeroTolerance()A floating point tolerance used by the preprocessors. This is used for things like detecting if two columns/rows are proportional or if an interval is empty. Note that the preprocessors also use solution_feasibility_tolerance() to detect if a problem is infeasible.
optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
- Specified by:
getPreprocessorZeroTolerance
in interfaceGlopParametersOrBuilder
- Returns:
- The preprocessorZeroTolerance.
-
setPreprocessorZeroTolerance
A floating point tolerance used by the preprocessors. This is used for things like detecting if two columns/rows are proportional or if an interval is empty. Note that the preprocessors also use solution_feasibility_tolerance() to detect if a problem is infeasible.
optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
- Parameters:
value
- The preprocessorZeroTolerance to set.- Returns:
- This builder for chaining.
-
clearPreprocessorZeroTolerance
A floating point tolerance used by the preprocessors. This is used for things like detecting if two columns/rows are proportional or if an interval is empty. Note that the preprocessors also use solution_feasibility_tolerance() to detect if a problem is infeasible.
optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
- Returns:
- This builder for chaining.
-
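As an illustration of how such a tolerance can be used (a hypothetical plain-Java sketch, not the preprocessor's actual code), two columns may be declared proportional when they match entrywise up to the tolerance after rescaling:

```java
public class ProportionalColumnsSketch {
    // Hypothetical check: columns a and b are considered proportional when
    // a ≈ ratio * b entrywise, up to the given zero tolerance.
    // For brevity this assumes b[0] != 0 and equal lengths.
    static boolean areProportional(double[] a, double[] b, double tolerance) {
        double ratio = a[0] / b[0];
        for (int i = 0; i < a.length; i++) {
            if (Math.abs(a[i] - ratio * b[i]) > tolerance) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[] a = {2.0, 4.0, 6.0};
        double[] b = {1.0, 2.0, 3.0};
        // a is exactly 2 * b, so this prints true with the 1e-09 default.
        System.out.println(areProportional(a, b, 1e-9));
    }
}
```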
hasObjectiveLowerLimit
public boolean hasObjectiveLowerLimit()The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit. Note that depending on the simplex algorithm (primal or dual) and the optimization direction, only one of these bounds is used at a time. Important: the solver does not add any tolerance to these values; as soon as the objective (as computed by the solver, so with some imprecision) strictly crosses one of these bounds, the search will stop. It is up to the client to add any tolerance if needed.
optional double objective_lower_limit = 40 [default = -inf];
- Specified by:
hasObjectiveLowerLimit
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the objectiveLowerLimit field is set.
-
getObjectiveLowerLimit
public double getObjectiveLowerLimit()The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit. Note that depending on the simplex algorithm (primal or dual) and the optimization direction, only one of these bounds is used at a time. Important: the solver does not add any tolerance to these values; as soon as the objective (as computed by the solver, so with some imprecision) strictly crosses one of these bounds, the search will stop. It is up to the client to add any tolerance if needed.
optional double objective_lower_limit = 40 [default = -inf];
- Specified by:
getObjectiveLowerLimit
in interfaceGlopParametersOrBuilder
- Returns:
- The objectiveLowerLimit.
-
setObjectiveLowerLimit
The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit. Note that depending on the simplex algorithm (primal or dual) and the optimization direction, only one of these bounds is used at a time. Important: the solver does not add any tolerance to these values; as soon as the objective (as computed by the solver, so with some imprecision) strictly crosses one of these bounds, the search will stop. It is up to the client to add any tolerance if needed.
optional double objective_lower_limit = 40 [default = -inf];
- Parameters:
value
- The objectiveLowerLimit to set.- Returns:
- This builder for chaining.
-
clearObjectiveLowerLimit
The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit. Note that depending on the simplex algorithm (primal or dual) and the optimization direction, only one of these bounds is used at a time. Important: the solver does not add any tolerance to these values; as soon as the objective (as computed by the solver, so with some imprecision) strictly crosses one of these bounds, the search will stop. It is up to the client to add any tolerance if needed.
optional double objective_lower_limit = 40 [default = -inf];
- Returns:
- This builder for chaining.
-
hasObjectiveUpperLimit
public boolean hasObjectiveUpperLimit()optional double objective_upper_limit = 41 [default = inf];
- Specified by:
hasObjectiveUpperLimit
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the objectiveUpperLimit field is set.
-
getObjectiveUpperLimit
public double getObjectiveUpperLimit()optional double objective_upper_limit = 41 [default = inf];
- Specified by:
getObjectiveUpperLimit
in interfaceGlopParametersOrBuilder
- Returns:
- The objectiveUpperLimit.
-
setObjectiveUpperLimit
optional double objective_upper_limit = 41 [default = inf];
- Parameters:
value
- The objectiveUpperLimit to set.- Returns:
- This builder for chaining.
-
clearObjectiveUpperLimit
optional double objective_upper_limit = 41 [default = inf];
- Returns:
- This builder for chaining.
-
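The strict-crossing semantics of the two objective limits can be sketched in plain Java (a hypothetical illustration, not the solver's code): the search stops only when the computed objective is strictly below the lower limit or strictly above the upper limit, and no tolerance is added.

```java
public class ObjectiveLimitSketch {
    // Hypothetical stopping test: stop as soon as the computed objective
    // strictly crosses either limit; the solver adds no tolerance itself.
    static boolean shouldStop(double objective, double lower, double upper) {
        return objective < lower || objective > upper;
    }

    public static void main(String[] args) {
        // With the defaults (-inf / +inf), the bounds never trigger a stop.
        System.out.println(shouldStop(1e100,
            Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY)); // false
        // A client-set lower limit of 100.0 stops once the objective drops below it.
        System.out.println(shouldStop(99.9, 100.0, Double.POSITIVE_INFINITY)); // true
    }
}
```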
hasDegenerateMinistepFactor
public boolean hasDegenerateMinistepFactor()During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable). That is, the variable values are unchanged for the primal simplex and the reduced costs are unchanged for the dual simplex. However, instead of doing a step of length zero, it seems to be better on degenerate problems to do a small positive step. This is what is recommended in the EXPAND procedure described in: P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright, "A practical anti-cycling procedure for linearly constrained optimization", Mathematical Programming, 45:437–474, 1989. Here, during a degenerate iteration we do a small positive step of this factor times the primal (resp. dual) tolerance. In the primal simplex, this may effectively push variable values (very slightly) further out of their bounds (resp. reduced costs for the dual simplex). Setting this to zero reverts to the more conservative approach of a zero step during degenerate iterations.
optional double degenerate_ministep_factor = 42 [default = 0.01];
- Specified by:
hasDegenerateMinistepFactor
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the degenerateMinistepFactor field is set.
-
getDegenerateMinistepFactor
public double getDegenerateMinistepFactor()During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable). That is, the variable values are unchanged for the primal simplex and the reduced costs are unchanged for the dual simplex. However, instead of doing a step of length zero, it seems to be better on degenerate problems to do a small positive step. This is what is recommended in the EXPAND procedure described in: P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright, "A practical anti-cycling procedure for linearly constrained optimization", Mathematical Programming, 45:437–474, 1989. Here, during a degenerate iteration we do a small positive step of this factor times the primal (resp. dual) tolerance. In the primal simplex, this may effectively push variable values (very slightly) further out of their bounds (resp. reduced costs for the dual simplex). Setting this to zero reverts to the more conservative approach of a zero step during degenerate iterations.
optional double degenerate_ministep_factor = 42 [default = 0.01];
- Specified by:
getDegenerateMinistepFactor
in interfaceGlopParametersOrBuilder
- Returns:
- The degenerateMinistepFactor.
-
setDegenerateMinistepFactor
During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable). That is, the variable values are unchanged for the primal simplex and the reduced costs are unchanged for the dual simplex. However, instead of doing a step of length zero, it seems to be better on degenerate problems to do a small positive step. This is what is recommended in the EXPAND procedure described in: P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright, "A practical anti-cycling procedure for linearly constrained optimization", Mathematical Programming, 45:437–474, 1989. Here, during a degenerate iteration we do a small positive step of this factor times the primal (resp. dual) tolerance. In the primal simplex, this may effectively push variable values (very slightly) further out of their bounds (resp. reduced costs for the dual simplex). Setting this to zero reverts to the more conservative approach of a zero step during degenerate iterations.
optional double degenerate_ministep_factor = 42 [default = 0.01];
- Parameters:
value
- The degenerateMinistepFactor to set.- Returns:
- This builder for chaining.
-
clearDegenerateMinistepFactor
During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable). That is, the variable values are unchanged for the primal simplex and the reduced costs are unchanged for the dual simplex. However, instead of doing a step of length zero, it seems to be better on degenerate problems to do a small positive step. This is what is recommended in the EXPAND procedure described in: P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright, "A practical anti-cycling procedure for linearly constrained optimization", Mathematical Programming, 45:437–474, 1989. Here, during a degenerate iteration we do a small positive step of this factor times the primal (resp. dual) tolerance. In the primal simplex, this may effectively push variable values (very slightly) further out of their bounds (resp. reduced costs for the dual simplex). Setting this to zero reverts to the more conservative approach of a zero step during degenerate iterations.
optional double degenerate_ministep_factor = 42 [default = 0.01];
- Returns:
- This builder for chaining.
-
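A minimal sketch of the step length taken during a degenerate iteration: it is simply the factor times the active tolerance. The tolerance value of 1e-8 below is only an assumption for illustration.

```java
public class MinistepSketch {
    // The degenerate step is factor * tolerance (primal or dual),
    // instead of the conservative zero-length step.
    static double ministep(double factor, double tolerance) {
        return factor * tolerance;
    }

    public static void main(String[] args) {
        double degenerateMinistepFactor = 0.01; // the default
        double primalTolerance = 1e-8;          // assumed tolerance value
        // A tiny positive step, about 1e-10 here.
        System.out.println(ministep(degenerateMinistepFactor, primalTolerance));
    }
}
```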
hasRandomSeed
public boolean hasRandomSeed()At the beginning of each solve, the random number generator used in some parts of the solver is reinitialized to this seed. If you change the random seed, the solver may make different choices during the solving process. Note that this may lead to a different solution, for example a different optimal basis. For some problems, the running time may vary a lot depending on small changes in the solving algorithm. Running the solver with different seeds makes benchmarks more robust when evaluating new features. Also note that the solver is fully deterministic: two runs of the same binary, on the same machine, on the exact same data and with the same parameters will go through the exact same iterations. If they hit a time limit, they might of course yield different results because one will have advanced farther than the other.
optional int32 random_seed = 43 [default = 1];
- Specified by:
hasRandomSeed
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the randomSeed field is set.
-
getRandomSeed
public int getRandomSeed()At the beginning of each solve, the random number generator used in some parts of the solver is reinitialized to this seed. If you change the random seed, the solver may make different choices during the solving process. Note that this may lead to a different solution, for example a different optimal basis. For some problems, the running time may vary a lot depending on small changes in the solving algorithm. Running the solver with different seeds makes benchmarks more robust when evaluating new features. Also note that the solver is fully deterministic: two runs of the same binary, on the same machine, on the exact same data and with the same parameters will go through the exact same iterations. If they hit a time limit, they might of course yield different results because one will have advanced farther than the other.
optional int32 random_seed = 43 [default = 1];
- Specified by:
getRandomSeed
in interfaceGlopParametersOrBuilder
- Returns:
- The randomSeed.
-
setRandomSeed
At the beginning of each solve, the random number generator used in some parts of the solver is reinitialized to this seed. If you change the random seed, the solver may make different choices during the solving process. Note that this may lead to a different solution, for example a different optimal basis. For some problems, the running time may vary a lot depending on small changes in the solving algorithm. Running the solver with different seeds makes benchmarks more robust when evaluating new features. Also note that the solver is fully deterministic: two runs of the same binary, on the same machine, on the exact same data and with the same parameters will go through the exact same iterations. If they hit a time limit, they might of course yield different results because one will have advanced farther than the other.
optional int32 random_seed = 43 [default = 1];
- Parameters:
value
- The randomSeed to set.- Returns:
- This builder for chaining.
-
clearRandomSeed
At the beginning of each solve, the random number generator used in some parts of the solver is reinitialized to this seed. If you change the random seed, the solver may make different choices during the solving process. Note that this may lead to a different solution, for example a different optimal basis. For some problems, the running time may vary a lot depending on small changes in the solving algorithm. Running the solver with different seeds makes benchmarks more robust when evaluating new features. Also note that the solver is fully deterministic: two runs of the same binary, on the same machine, on the exact same data and with the same parameters will go through the exact same iterations. If they hit a time limit, they might of course yield different results because one will have advanced farther than the other.
optional int32 random_seed = 43 [default = 1];
- Returns:
- This builder for chaining.
-
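The determinism guarantee is the usual seeded-PRNG behavior; a plain-Java sketch with java.util.Random (Glop itself uses its own generators, so this is only an analogy) shows that identical seeds reproduce identical sequences, while a different seed can steer the algorithm down a different path:

```java
import java.util.Random;

public class SeedDeterminismSketch {
    public static void main(String[] args) {
        // Same seed -> same pseudo-random sequence, hence identical runs.
        Random r1 = new Random(1);
        Random r2 = new Random(1);
        System.out.println(r1.nextInt(1000) == r2.nextInt(1000)); // true

        // A different seed generally produces a different sequence,
        // which may lead to a different (but still valid) optimal basis.
        Random r3 = new Random(2);
        System.out.println(new Random(1).nextInt(1000) == r3.nextInt(1000));
    }
}
```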
hasUseAbslRandom
public boolean hasUseAbslRandom()Whether to use absl::BitGen instead of MTRandom.
optional bool use_absl_random = 72 [default = false];
- Specified by:
hasUseAbslRandom
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the useAbslRandom field is set.
-
getUseAbslRandom
public boolean getUseAbslRandom()Whether to use absl::BitGen instead of MTRandom.
optional bool use_absl_random = 72 [default = false];
- Specified by:
getUseAbslRandom
in interfaceGlopParametersOrBuilder
- Returns:
- The useAbslRandom.
-
setUseAbslRandom
Whether to use absl::BitGen instead of MTRandom.
optional bool use_absl_random = 72 [default = false];
- Parameters:
value
- The useAbslRandom to set.- Returns:
- This builder for chaining.
-
clearUseAbslRandom
Whether to use absl::BitGen instead of MTRandom.
optional bool use_absl_random = 72 [default = false];
- Returns:
- This builder for chaining.
-
hasNumOmpThreads
public boolean hasNumOmpThreads()Number of threads in the OMP parallel sections. If left to 1, the code will not create any OMP threads and will remain single-threaded.
optional int32 num_omp_threads = 44 [default = 1];
- Specified by:
hasNumOmpThreads
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the numOmpThreads field is set.
-
getNumOmpThreads
public int getNumOmpThreads()Number of threads in the OMP parallel sections. If left to 1, the code will not create any OMP threads and will remain single-threaded.
optional int32 num_omp_threads = 44 [default = 1];
- Specified by:
getNumOmpThreads
in interfaceGlopParametersOrBuilder
- Returns:
- The numOmpThreads.
-
setNumOmpThreads
Number of threads in the OMP parallel sections. If left to 1, the code will not create any OMP threads and will remain single-threaded.
optional int32 num_omp_threads = 44 [default = 1];
- Parameters:
value
- The numOmpThreads to set.- Returns:
- This builder for chaining.
-
clearNumOmpThreads
Number of threads in the OMP parallel sections. If left to 1, the code will not create any OMP threads and will remain single-threaded.
optional int32 num_omp_threads = 44 [default = 1];
- Returns:
- This builder for chaining.
-
hasPerturbCostsInDualSimplex
public boolean hasPerturbCostsInDualSimplex()When this is true, the costs are randomly perturbed before the dual simplex is even started. This has been shown to improve the dual simplex performance. For a good reference, see Huangfu Q (2013), "High performance simplex solver", Ph.D. dissertation, University of Edinburgh.
optional bool perturb_costs_in_dual_simplex = 53 [default = false];
- Specified by:
hasPerturbCostsInDualSimplex
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the perturbCostsInDualSimplex field is set.
-
getPerturbCostsInDualSimplex
public boolean getPerturbCostsInDualSimplex()When this is true, the costs are randomly perturbed before the dual simplex is even started. This has been shown to improve the dual simplex performance. For a good reference, see Huangfu Q (2013), "High performance simplex solver", Ph.D. dissertation, University of Edinburgh.
optional bool perturb_costs_in_dual_simplex = 53 [default = false];
- Specified by:
getPerturbCostsInDualSimplex
in interfaceGlopParametersOrBuilder
- Returns:
- The perturbCostsInDualSimplex.
-
setPerturbCostsInDualSimplex
When this is true, the costs are randomly perturbed before the dual simplex is even started. This has been shown to improve the dual simplex performance. For a good reference, see Huangfu Q (2013), "High performance simplex solver", Ph.D. dissertation, University of Edinburgh.
optional bool perturb_costs_in_dual_simplex = 53 [default = false];
- Parameters:
value
- The perturbCostsInDualSimplex to set.- Returns:
- This builder for chaining.
-
clearPerturbCostsInDualSimplex
When this is true, the costs are randomly perturbed before the dual simplex is even started. This has been shown to improve the dual simplex performance. For a good reference, see Huangfu Q (2013), "High performance simplex solver", Ph.D. dissertation, University of Edinburgh.
optional bool perturb_costs_in_dual_simplex = 53 [default = false];
- Returns:
- This builder for chaining.
-
hasUseDedicatedDualFeasibilityAlgorithm
public boolean hasUseDedicatedDualFeasibilityAlgorithm()We have two possible dual phase I algorithms. Both work on an LP that minimizes the sum of dual infeasibilities. One uses dedicated code (when this param is true); the other uses exactly the same code as the dual phase II but on an auxiliary problem where the variable bounds of the original problem are changed. TODO(user): For now we have both, but ideally the non-dedicated version will win since it is a lot less code to maintain.
optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
- Specified by:
hasUseDedicatedDualFeasibilityAlgorithm
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the useDedicatedDualFeasibilityAlgorithm field is set.
-
getUseDedicatedDualFeasibilityAlgorithm
public boolean getUseDedicatedDualFeasibilityAlgorithm()We have two possible dual phase I algorithms. Both work on an LP that minimizes the sum of dual infeasibilities. One uses dedicated code (when this param is true); the other uses exactly the same code as the dual phase II but on an auxiliary problem where the variable bounds of the original problem are changed. TODO(user): For now we have both, but ideally the non-dedicated version will win since it is a lot less code to maintain.
optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
- Specified by:
getUseDedicatedDualFeasibilityAlgorithm
in interfaceGlopParametersOrBuilder
- Returns:
- The useDedicatedDualFeasibilityAlgorithm.
-
setUseDedicatedDualFeasibilityAlgorithm
We have two possible dual phase I algorithms. Both work on an LP that minimizes the sum of dual infeasibilities. One uses dedicated code (when this param is true); the other uses exactly the same code as the dual phase II but on an auxiliary problem where the variable bounds of the original problem are changed. TODO(user): For now we have both, but ideally the non-dedicated version will win since it is a lot less code to maintain.
optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
- Parameters:
value
- The useDedicatedDualFeasibilityAlgorithm to set.- Returns:
- This builder for chaining.
-
clearUseDedicatedDualFeasibilityAlgorithm
We have two possible dual phase I algorithms. Both work on an LP that minimizes the sum of dual infeasibilities. One uses dedicated code (when this param is true); the other uses exactly the same code as the dual phase II but on an auxiliary problem where the variable bounds of the original problem are changed. TODO(user): For now we have both, but ideally the non-dedicated version will win since it is a lot less code to maintain.
optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
- Returns:
- This builder for chaining.
-
hasRelativeCostPerturbation
public boolean hasRelativeCostPerturbation()The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05];
- Specified by:
hasRelativeCostPerturbation
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the relativeCostPerturbation field is set.
-
getRelativeCostPerturbation
public double getRelativeCostPerturbation()The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05];
- Specified by:
getRelativeCostPerturbation
in interfaceGlopParametersOrBuilder
- Returns:
- The relativeCostPerturbation.
-
setRelativeCostPerturbation
The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05];
- Parameters:
value
- The relativeCostPerturbation to set.- Returns:
- This builder for chaining.
-
clearRelativeCostPerturbation
The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
optional double relative_cost_perturbation = 54 [default = 1e-05];
- Returns:
- This builder for chaining.
-
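The magnitude formula above can be checked with a small plain-Java sketch (the helper is hypothetical; the RandomIn(1.0, 2.0) draw is modeled by a fixed value of 1.5):

```java
public class CostPerturbationSketch {
    // Magnitude formula from the parameter comment:
    //   RandomIn(1.0, 2.0) * (relative_cost_perturbation * cost
    //                         + relative_max_cost_perturbation * max_cost)
    static double magnitude(double random1to2, double cost, double maxCost,
                            double relCost, double relMaxCost) {
        return random1to2 * (relCost * cost + relMaxCost * maxCost);
    }

    public static void main(String[] args) {
        // With the defaults (1e-05 and 1e-07), a cost of 10, a max cost of 100,
        // and an assumed random draw of 1.5, the perturbation is tiny relative
        // to the cost itself (about 1.65e-4 here).
        System.out.println(magnitude(1.5, 10.0, 100.0, 1e-05, 1e-07));
    }
}
```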
hasRelativeMaxCostPerturbation
public boolean hasRelativeMaxCostPerturbation()optional double relative_max_cost_perturbation = 55 [default = 1e-07];
- Specified by:
hasRelativeMaxCostPerturbation
in interfaceGlopParametersOrBuilder
- Returns:
- Whether the relativeMaxCostPerturbation field is set.
-
getRelativeMaxCostPerturbation
public double getRelativeMaxCostPerturbation()optional double relative_max_cost_perturbation = 55 [default = 1e-07];
- Specified by:
getRelativeMaxCostPerturbation
in interfaceGlopParametersOrBuilder
- Returns:
- The relativeMaxCostPerturbation.
-
setRelativeMaxCostPerturbation
optional double relative_max_cost_perturbation = 55 [default = 1e-07];
- Parameters:
value
- The relativeMaxCostPerturbation to set.- Returns:
- This builder for chaining.
-
clearRelativeMaxCostPerturbation
public GlopParameters.Builder clearRelativeMaxCostPerturbation()
optional double relative_max_cost_perturbation = 55 [default = 1e-07];
- Returns:
- This builder for chaining.
-
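As a minimal sketch of how the two perturbation fields above are set together through the generated builder (GlopParameters is a plain protobuf message, so building it should not require loading the native OR-Tools library; the chosen values are illustrative, not recommendations):

```java
import com.google.ortools.glop.GlopParameters;

public class PerturbationParams {
    public static void main(String[] args) {
        // Use smaller cost perturbations than the defaults (1e-05 and 1e-07).
        GlopParameters params = GlopParameters.newBuilder()
            .setRelativeCostPerturbation(1e-6)
            .setRelativeMaxCostPerturbation(1e-8)
            .build();
        // has*() confirms the optional fields were explicitly set.
        System.out.println(params.hasRelativeCostPerturbation());   // true
        System.out.println(params.hasRelativeMaxCostPerturbation()); // true
    }
}
```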
hasInitialConditionNumberThreshold
public boolean hasInitialConditionNumberThreshold()
If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all slack basis.
optional double initial_condition_number_threshold = 59 [default = 1e+50];
- Specified by:
hasInitialConditionNumberThreshold
in interface GlopParametersOrBuilder
- Returns:
- Whether the initialConditionNumberThreshold field is set.
-
getInitialConditionNumberThreshold
public double getInitialConditionNumberThreshold()
If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all slack basis.
optional double initial_condition_number_threshold = 59 [default = 1e+50];
- Specified by:
getInitialConditionNumberThreshold
in interface GlopParametersOrBuilder
- Returns:
- The initialConditionNumberThreshold.
-
setInitialConditionNumberThreshold
public GlopParameters.Builder setInitialConditionNumberThreshold(double value)
If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all slack basis.
optional double initial_condition_number_threshold = 59 [default = 1e+50];
- Parameters:
value
- The initialConditionNumberThreshold to set.
- Returns:
- This builder for chaining.
-
clearInitialConditionNumberThreshold
public GlopParameters.Builder clearInitialConditionNumberThreshold()
If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all slack basis.
optional double initial_condition_number_threshold = 59 [default = 1e+50];
- Returns:
- This builder for chaining.
-
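The has/set/clear triple above follows the standard proto2 optional-field contract, sketched here with initial_condition_number_threshold (the value 1e20 is only illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public class ConditionNumberThreshold {
    public static void main(String[] args) {
        GlopParameters.Builder b = GlopParameters.newBuilder();
        // The optional field is unset until explicitly assigned...
        System.out.println(b.hasInitialConditionNumberThreshold()); // false
        // ...but the getter still returns the proto default (1e+50).
        double def = b.getInitialConditionNumberThreshold();
        b.setInitialConditionNumberThreshold(1e20);
        System.out.println(b.hasInitialConditionNumberThreshold()); // true
        // clear*() reverts to the default and marks the field unset again.
        b.clearInitialConditionNumberThreshold();
        System.out.println(b.hasInitialConditionNumberThreshold()); // false
    }
}
```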
hasLogSearchProgress
public boolean hasLogSearchProgress()
If true, logs the progress of a solve to LOG(INFO). Note that the same messages can also be turned on by displaying logs at level 1 for the relevant files.
optional bool log_search_progress = 61 [default = false];
- Specified by:
hasLogSearchProgress
in interface GlopParametersOrBuilder
- Returns:
- Whether the logSearchProgress field is set.
-
getLogSearchProgress
public boolean getLogSearchProgress()
If true, logs the progress of a solve to LOG(INFO). Note that the same messages can also be turned on by displaying logs at level 1 for the relevant files.
optional bool log_search_progress = 61 [default = false];
- Specified by:
getLogSearchProgress
in interface GlopParametersOrBuilder
- Returns:
- The logSearchProgress.
-
setLogSearchProgress
public GlopParameters.Builder setLogSearchProgress(boolean value)
If true, logs the progress of a solve to LOG(INFO). Note that the same messages can also be turned on by displaying logs at level 1 for the relevant files.
optional bool log_search_progress = 61 [default = false];
- Parameters:
value
- The logSearchProgress to set.
- Returns:
- This builder for chaining.
-
clearLogSearchProgress
public GlopParameters.Builder clearLogSearchProgress()
If true, logs the progress of a solve to LOG(INFO). Note that the same messages can also be turned on by displaying logs at level 1 for the relevant files.
optional bool log_search_progress = 61 [default = false];
- Returns:
- This builder for chaining.
-
hasLogToStdout
public boolean hasLogToStdout()
If true, logs will be displayed to stdout instead of using Google log info.
optional bool log_to_stdout = 66 [default = true];
- Specified by:
hasLogToStdout
in interface GlopParametersOrBuilder
- Returns:
- Whether the logToStdout field is set.
-
getLogToStdout
public boolean getLogToStdout()
If true, logs will be displayed to stdout instead of using Google log info.
optional bool log_to_stdout = 66 [default = true];
- Specified by:
getLogToStdout
in interface GlopParametersOrBuilder
- Returns:
- The logToStdout.
-
setLogToStdout
public GlopParameters.Builder setLogToStdout(boolean value)
If true, logs will be displayed to stdout instead of using Google log info.
optional bool log_to_stdout = 66 [default = true];
- Parameters:
value
- The logToStdout to set.
- Returns:
- This builder for chaining.
-
clearLogToStdout
public GlopParameters.Builder clearLogToStdout()
If true, logs will be displayed to stdout instead of using Google log info.
optional bool log_to_stdout = 66 [default = true];
- Returns:
- This builder for chaining.
-
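A sketch of enabling the two logging fields above together (how the resulting message is handed to a solver depends on which OR-Tools API you use, so that step is only indicated in a comment):

```java
import com.google.ortools.glop.GlopParameters;

public class LoggingParams {
    public static void main(String[] args) {
        // Turn on solver progress output; log_to_stdout already
        // defaults to true, so setting it here is just explicit.
        GlopParameters params = GlopParameters.newBuilder()
            .setLogSearchProgress(true)
            .setLogToStdout(true)
            .build();
        // The message would then typically be passed to a Glop-backed
        // solver (the exact wiring is environment-specific).
        System.out.println(params.getLogSearchProgress()); // true
    }
}
```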
hasCrossoverBoundSnappingDistance
public boolean hasCrossoverBoundSnappingDistance()
If the starting basis contains a FREE variable with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter. The starting statuses can contain FREE variables with bounds if a user set them like this externally. Also, any variable with an initial BASIC status that was not kept in the initial basis is marked as FREE before this step is applied. Note that by default a FREE variable is assumed to be zero unless a starting value was specified via SetStartingVariableValuesForNextSolve(). Note that, at the end of the solve, some of these FREE variables with bounds and an interior point value might still be left in the final solution. Enable push_to_vertex to clean these up.
optional double crossover_bound_snapping_distance = 64 [default = inf];
- Specified by:
hasCrossoverBoundSnappingDistance
in interface GlopParametersOrBuilder
- Returns:
- Whether the crossoverBoundSnappingDistance field is set.
-
getCrossoverBoundSnappingDistance
public double getCrossoverBoundSnappingDistance()
If the starting basis contains a FREE variable with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter. The starting statuses can contain FREE variables with bounds if a user set them like this externally. Also, any variable with an initial BASIC status that was not kept in the initial basis is marked as FREE before this step is applied. Note that by default a FREE variable is assumed to be zero unless a starting value was specified via SetStartingVariableValuesForNextSolve(). Note that, at the end of the solve, some of these FREE variables with bounds and an interior point value might still be left in the final solution. Enable push_to_vertex to clean these up.
optional double crossover_bound_snapping_distance = 64 [default = inf];
- Specified by:
getCrossoverBoundSnappingDistance
in interfaceGlopParametersOrBuilder
- Returns:
- The crossoverBoundSnappingDistance.
-
setCrossoverBoundSnappingDistance
public GlopParameters.Builder setCrossoverBoundSnappingDistance(double value)
If the starting basis contains a FREE variable with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter. The starting statuses can contain FREE variables with bounds if a user set them like this externally. Also, any variable with an initial BASIC status that was not kept in the initial basis is marked as FREE before this step is applied. Note that by default a FREE variable is assumed to be zero unless a starting value was specified via SetStartingVariableValuesForNextSolve(). Note that, at the end of the solve, some of these FREE variables with bounds and an interior point value might still be left in the final solution. Enable push_to_vertex to clean these up.
optional double crossover_bound_snapping_distance = 64 [default = inf];
- Parameters:
value
- The crossoverBoundSnappingDistance to set.
- Returns:
- This builder for chaining.
-
clearCrossoverBoundSnappingDistance
public GlopParameters.Builder clearCrossoverBoundSnappingDistance()
If the starting basis contains a FREE variable with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter. The starting statuses can contain FREE variables with bounds if a user set them like this externally. Also, any variable with an initial BASIC status that was not kept in the initial basis is marked as FREE before this step is applied. Note that by default a FREE variable is assumed to be zero unless a starting value was specified via SetStartingVariableValuesForNextSolve(). Note that, at the end of the solve, some of these FREE variables with bounds and an interior point value might still be left in the final solution. Enable push_to_vertex to clean these up.
optional double crossover_bound_snapping_distance = 64 [default = inf];
- Returns:
- This builder for chaining.
-
hasPushToVertex
public boolean hasPushToVertex()
If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to bounds, obtaining a vertex solution. Note that this situation can happen only if a starting value was specified via SetStartingVariableValuesForNextSolve().
optional bool push_to_vertex = 65 [default = true];
- Specified by:
hasPushToVertex
in interface GlopParametersOrBuilder
- Returns:
- Whether the pushToVertex field is set.
-
getPushToVertex
public boolean getPushToVertex()
If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to bounds, obtaining a vertex solution. Note that this situation can happen only if a starting value was specified via SetStartingVariableValuesForNextSolve().
optional bool push_to_vertex = 65 [default = true];
- Specified by:
getPushToVertex
in interface GlopParametersOrBuilder
- Returns:
- The pushToVertex.
-
setPushToVertex
public GlopParameters.Builder setPushToVertex(boolean value)
If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to bounds, obtaining a vertex solution. Note that this situation can happen only if a starting value was specified via SetStartingVariableValuesForNextSolve().
optional bool push_to_vertex = 65 [default = true];
- Parameters:
value
- The pushToVertex to set.
- Returns:
- This builder for chaining.
-
clearPushToVertex
public GlopParameters.Builder clearPushToVertex()
If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to bounds, obtaining a vertex solution. Note that this situation can happen only if a starting value was specified via SetStartingVariableValuesForNextSolve().
optional bool push_to_vertex = 65 [default = true];
- Returns:
- This builder for chaining.
-
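The two crossover-related fields above are typically adjusted together; a minimal sketch (the 1e-7 snapping distance is an illustrative value, not a recommendation):

```java
import com.google.ortools.glop.GlopParameters;

public class CrossoverParams {
    public static void main(String[] args) {
        // Snap FREE variables to a bound when they are within 1e-7 of it
        // (the default is inf, i.e. always snap), and push any remaining
        // super-basic variables to a vertex after the optimization phase.
        GlopParameters params = GlopParameters.newBuilder()
            .setCrossoverBoundSnappingDistance(1e-7)
            .setPushToVertex(true)
            .build();
        System.out.println(params.hasCrossoverBoundSnappingDistance()); // true
    }
}
```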
hasUseImpliedFreePreprocessor
public boolean hasUseImpliedFreePreprocessor()
If presolve runs, include the pass that detects implied free variables.
optional bool use_implied_free_preprocessor = 67 [default = true];
- Specified by:
hasUseImpliedFreePreprocessor
in interface GlopParametersOrBuilder
- Returns:
- Whether the useImpliedFreePreprocessor field is set.
-
getUseImpliedFreePreprocessor
public boolean getUseImpliedFreePreprocessor()
If presolve runs, include the pass that detects implied free variables.
optional bool use_implied_free_preprocessor = 67 [default = true];
- Specified by:
getUseImpliedFreePreprocessor
in interface GlopParametersOrBuilder
- Returns:
- The useImpliedFreePreprocessor.
-
setUseImpliedFreePreprocessor
public GlopParameters.Builder setUseImpliedFreePreprocessor(boolean value)
If presolve runs, include the pass that detects implied free variables.
optional bool use_implied_free_preprocessor = 67 [default = true];
- Parameters:
value
- The useImpliedFreePreprocessor to set.
- Returns:
- This builder for chaining.
-
clearUseImpliedFreePreprocessor
public GlopParameters.Builder clearUseImpliedFreePreprocessor()
If presolve runs, include the pass that detects implied free variables.
optional bool use_implied_free_preprocessor = 67 [default = true];
- Returns:
- This builder for chaining.
-
hasMaxValidMagnitude
public boolean hasMaxValidMagnitude()
Any finite value in the input LP must be below this threshold, otherwise the model will be reported invalid. This is needed to avoid floating point overflow when evaluating bounds * coeff, for instance. In practice, users shouldn't use super large values in an LP. With the default threshold, even evaluating a large constraint with variables at their bounds shouldn't cause any overflow.
optional double max_valid_magnitude = 70 [default = 1e+30];
- Specified by:
hasMaxValidMagnitude
in interface GlopParametersOrBuilder
- Returns:
- Whether the maxValidMagnitude field is set.
-
getMaxValidMagnitude
public double getMaxValidMagnitude()
Any finite value in the input LP must be below this threshold, otherwise the model will be reported invalid. This is needed to avoid floating point overflow when evaluating bounds * coeff, for instance. In practice, users shouldn't use super large values in an LP. With the default threshold, even evaluating a large constraint with variables at their bounds shouldn't cause any overflow.
optional double max_valid_magnitude = 70 [default = 1e+30];
- Specified by:
getMaxValidMagnitude
in interface GlopParametersOrBuilder
- Returns:
- The maxValidMagnitude.
-
setMaxValidMagnitude
public GlopParameters.Builder setMaxValidMagnitude(double value)
Any finite value in the input LP must be below this threshold, otherwise the model will be reported invalid. This is needed to avoid floating point overflow when evaluating bounds * coeff, for instance. In practice, users shouldn't use super large values in an LP. With the default threshold, even evaluating a large constraint with variables at their bounds shouldn't cause any overflow.
optional double max_valid_magnitude = 70 [default = 1e+30];
- Parameters:
value
- The maxValidMagnitude to set.
- Returns:
- This builder for chaining.
-
clearMaxValidMagnitude
public GlopParameters.Builder clearMaxValidMagnitude()
Any finite value in the input LP must be below this threshold, otherwise the model will be reported invalid. This is needed to avoid floating point overflow when evaluating bounds * coeff, for instance. In practice, users shouldn't use super large values in an LP. With the default threshold, even evaluating a large constraint with variables at their bounds shouldn't cause any overflow.
optional double max_valid_magnitude = 70 [default = 1e+30];
- Returns:
- This builder for chaining.
-
hasDropMagnitude
public boolean hasDropMagnitude()
Values in the input LP lower than this will be ignored. This is similar to drop_tolerance but more aggressive, as it is used before scaling. This is mainly here to avoid underflow and to keep simpler invariants in the code, like a * b == 0 iff a or b is zero.
optional double drop_magnitude = 71 [default = 1e-30];
- Specified by:
hasDropMagnitude
in interface GlopParametersOrBuilder
- Returns:
- Whether the dropMagnitude field is set.
-
getDropMagnitude
public double getDropMagnitude()
Values in the input LP lower than this will be ignored. This is similar to drop_tolerance but more aggressive, as it is used before scaling. This is mainly here to avoid underflow and to keep simpler invariants in the code, like a * b == 0 iff a or b is zero.
optional double drop_magnitude = 71 [default = 1e-30];
- Specified by:
getDropMagnitude
in interface GlopParametersOrBuilder
- Returns:
- The dropMagnitude.
-
setDropMagnitude
public GlopParameters.Builder setDropMagnitude(double value)
Values in the input LP lower than this will be ignored. This is similar to drop_tolerance but more aggressive, as it is used before scaling. This is mainly here to avoid underflow and to keep simpler invariants in the code, like a * b == 0 iff a or b is zero.
optional double drop_magnitude = 71 [default = 1e-30];
- Parameters:
value
- The dropMagnitude to set.
- Returns:
- This builder for chaining.
-
clearDropMagnitude
public GlopParameters.Builder clearDropMagnitude()
Values in the input LP lower than this will be ignored. This is similar to drop_tolerance but more aggressive, as it is used before scaling. This is mainly here to avoid underflow and to keep simpler invariants in the code, like a * b == 0 iff a or b is zero.
optional double drop_magnitude = 71 [default = 1e-30];
- Returns:
- This builder for chaining.
-
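A sketch of tightening the two magnitude thresholds above relative to their defaults of 1e+30 and 1e-30 (the chosen values are purely illustrative):

```java
import com.google.ortools.glop.GlopParameters;

public class MagnitudeParams {
    public static void main(String[] args) {
        // Finite input values at or above max_valid_magnitude make the
        // model invalid; values below drop_magnitude are treated as zero.
        GlopParameters params = GlopParameters.newBuilder()
            .setMaxValidMagnitude(1e20)
            .setDropMagnitude(1e-20)
            .build();
        System.out.println(params.getMaxValidMagnitude()); // 1.0E20
    }
}
```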
hasDualPricePrioritizeNorm
public boolean hasDualPricePrioritizeNorm()
On some problems, like stp3d or pds-100, this makes a huge difference in the speed and number of iterations of the dual simplex.
optional bool dual_price_prioritize_norm = 69 [default = false];
- Specified by:
hasDualPricePrioritizeNorm
in interface GlopParametersOrBuilder
- Returns:
- Whether the dualPricePrioritizeNorm field is set.
-
getDualPricePrioritizeNorm
public boolean getDualPricePrioritizeNorm()
On some problems, like stp3d or pds-100, this makes a huge difference in the speed and number of iterations of the dual simplex.
optional bool dual_price_prioritize_norm = 69 [default = false];
- Specified by:
getDualPricePrioritizeNorm
in interface GlopParametersOrBuilder
- Returns:
- The dualPricePrioritizeNorm.
-
setDualPricePrioritizeNorm
public GlopParameters.Builder setDualPricePrioritizeNorm(boolean value)
On some problems, like stp3d or pds-100, this makes a huge difference in the speed and number of iterations of the dual simplex.
optional bool dual_price_prioritize_norm = 69 [default = false];
- Parameters:
value
- The dualPricePrioritizeNorm to set.
- Returns:
- This builder for chaining.
-
clearDualPricePrioritizeNorm
public GlopParameters.Builder clearDualPricePrioritizeNorm()
On some problems, like stp3d or pds-100, this makes a huge difference in the speed and number of iterations of the dual simplex.
optional bool dual_price_prioritize_norm = 69 [default = false];
- Returns:
- This builder for chaining.
-