Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
operations_research::sat Namespace Reference

Classes

class  ActivityBoundHelper
 
struct  AffineExpression
 
class  AllDifferentBoundsPropagator
 
class  AllDifferentConstraint
 Implementation of AllDifferentAC(). More...
 
class  AllIntervalsHelper
 
class  ArcGraphNeighborhoodGenerator
 
struct  ArcWithLpValue
 
struct  AssignmentInfo
 Information about a variable assignment. More...
 
struct  AssignmentType
 
class  AssignmentView
 
class  AutomatonConstraint
 
struct  BaseEvent
 Internal methods and data structures, useful for testing. More...
 
class  BasicKnapsackSolver
 
struct  BinaryClause
 A binary clause. This is used by BinaryClauseManager. More...
 
class  BinaryClauseManager
 A simple class to manage a set of binary clauses. More...
 
class  BinaryImplicationGraph
 
class  BlockedClauseSimplifier
 
struct  BooleanOrIntegerLiteral
 
struct  BooleanOrIntegerVariable
 
class  BooleanXorPropagator
 
class  BoolRLTCutHelper
 
class  BoolVar
 
class  BoundedVariableElimination
 
struct  BruteForceResult
 
struct  CachedIntervalData
 
struct  CachedTaskBounds
 
class  CanonicalBooleanLinearProblem
 
class  CapacityProfile
 
class  CircuitConstraint
 
class  CircuitCoveringPropagator
 
class  CircuitPropagator
 
struct  ClauseInfo
 
class  ClauseManager
 
class  ClauseWithOneMissingHasher
 Class to help detect clauses that differ on a single literal. More...
 
class  CombinedDisjunctive
 
class  CompactVectorVector
 
class  CompiledAllDiffConstraint
 
class  CompiledBoolXorConstraint
 The violation of a bool_xor constraint is 0 or 1. More...
 
class  CompiledCircuitConstraint
 ----- CompiledCircuitConstraint ----- More...
 
class  CompiledConstraint
 View of a generic (non linear) constraint for the LsEvaluator. More...
 
class  CompiledConstraintWithProto
 
class  CompiledIntDivConstraint
 
class  CompiledIntModConstraint
 
class  CompiledIntProdConstraint
 
class  CompiledLinMaxConstraint
 
class  CompiledNoOverlap2dConstraint
 
class  CompiledReservoirConstraint
 
class  CompoundMoveBuilder
 
class  Constraint
 
class  ConstraintGraphNeighborhoodGenerator
 
class  ConstraintPropagationOrder
 
class  ContinuousProber
 
class  CoreBasedOptimizer
 
class  CoverCutHelper
 Helper to find knapsack cover cuts. More...
 
class  CpModelBuilder
 
class  CpModelMapping
 
class  CpModelPresolver
 
class  CpModelProtoWrapper
 This implements the implicit contract needed by the SatCnfReader class. More...
 
class  CpModelView
 
struct  CpSolverResponseStatisticCallbacks
 
struct  CtEvent
 
class  CumulativeConstraint
 
class  CumulativeDualFeasibleEnergyConstraint
 Implementation of AddCumulativeOverloadCheckerDff(). More...
 
class  CumulativeEnergyConstraint
 Implementation of AddCumulativeOverloadChecker(). More...
 
class  CumulativeIsAfterSubsetConstraint
 
struct  CutData
 Our cuts are always of the form linear_expression <= rhs. More...
 
class  CutDataBuilder
 Stores temporaries used to build or manipulate a CutData. More...
 
struct  CutGenerator
 
struct  CutTerm
 
struct  DebugSolution
 
class  DecompositionGraphNeighborhoodGenerator
 
struct  DelayedRootLevelDeduction
 
class  DFFComposedF2F0
 
struct  DiffnBaseEvent
 Internal methods and data structures, useful for testing. More...
 
struct  DiffnCtEvent
 
struct  DiffnEnergyEvent
 
struct  DiophantineSolution
 
class  DisjunctiveDetectablePrecedences
 
class  DisjunctiveEdgeFinding
 
class  DisjunctiveNotLast
 
class  DisjunctiveOverloadChecker
 
class  DisjunctivePrecedences
 
class  DisjunctiveSimplePrecedences
 
class  DisjunctiveWithTwoItems
 
class  DivisionPropagator
 
class  DomainDeductions
 
class  DoubleLinearExpr
 
class  DratChecker
 
class  DratProofHandler
 
class  DratWriter
 
class  DualBoundStrengthening
 
class  DualFeasibleFunctionF0
 
class  ElementEncodings
 
class  EncodingNode
 
struct  EnergyEvent
 
class  EnforcementPropagator
 This is meant as a helper to deal with enforcement for any constraint. More...
 
class  ExponentialMovingAverage
 
class  FeasibilityJumpSolver
 
class  FeasibilityPump
 
struct  FindRectanglesResult
 
class  FirstFewValues
 
class  FixedCapacityVector
 
class  FixedDivisionPropagator
 
class  FixedModuloPropagator
 
struct  FullIntegerPrecedence
 
class  GenericLiteralWatcher
 
class  GreaterThanAtLeastOneOfDetector
 
class  GreaterThanAtLeastOneOfPropagator
 
class  HittingSetOptimizer
 
class  IdentityMap
 
struct  ImpliedBoundEntry
 
class  ImpliedBounds
 
class  ImpliedBoundsProcessor
 
class  InclusionDetector
 
class  IncrementalAverage
 Manages incremental averages. More...
 
struct  IndexedInterval
 
struct  IndexReferences
 
class  Inprocessing
 
struct  IntegerDomains
 
class  IntegerEncoder
 
struct  IntegerLiteral
 
class  IntegerRoundingCutHelper
 
class  IntegerSearchHelper
 A helper class to share the code used by the different kinds of search. More...
 
class  IntegerTrail
 
class  IntervalsRepository
 
class  IntervalVar
 
class  IntVar
 
struct  ItemForPairwiseRestriction
 
class  JumpTable
 
class  LazyReasonInterface
 
class  LbTreeSearch
 
struct  LevelZeroCallbackHelper
 
class  LevelZeroEquality
 
class  LinearBooleanProblemWrapper
 This implements the implicit contract needed by the SatCnfReader class. More...
 
struct  LinearConstraint
 
class  LinearConstraintBuilder
 
class  LinearConstraintManager
 
class  LinearConstraintPropagator
 
class  LinearExpr
 
struct  LinearExpression
 
class  LinearIncrementalEvaluator
 
class  LinearModel
 
class  LinearProgrammingConstraint
 
class  LinearProgrammingConstraintCollection
 A class that stores the collection of all LP constraints in a model. More...
 
class  LinearProgrammingDispatcher
 
class  LinearPropagator
 
struct  LinearRelaxation
 
struct  LinearTerm
 
class  LinMinPropagator
 
class  Literal
 
struct  LiteralValueValue
 
struct  LiteralWithCoeff
 Represents a term in a pseudo-Boolean formula. More...
 
class  LocalBranchingLpBasedNeighborhoodGenerator
 
struct  LsCounters
 
class  LsEvaluator
 
struct  LsOptions
 The parameters used by the local search code. More...
 
struct  LsState
 
class  MaxBoundedSubsetSum
 
class  MinPropagator
 
class  Model
 
class  ModelCopy
 
struct  ModelLpValues
 
class  ModelRandomGenerator
 
class  ModelSharedTimeLimit
 The model "singleton" shared time limit. More...
 
class  MultipleCircuitConstraint
 
class  MutableUpperBoundedLinearConstraint
 
struct  Neighborhood
 Neighborhood returned by Neighborhood generators. More...
 
class  NeighborhoodGenerator
 Base class for a CpModelProto neighborhood generator. More...
 
class  NeighborhoodGeneratorHelper
 
class  NoCyclePropagator
 Enforce the fact that there is no cycle in the given directed graph. More...
 
class  NonOverlappingRectanglesDisjunctivePropagator
 
class  NonOverlappingRectanglesEnergyPropagator
 Propagates using a box energy reasoning. More...
 
class  NoOverlap2DConstraint
 
class  NoOverlapBetweenTwoIntervals
 
struct  ObjectiveDefinition
 
class  ObjectiveEncoder
 
class  ObjectiveShavingSolver
 
class  OpbReader
 
class  OrthogonalPackingInfeasibilityDetector
 
struct  OrthogonalPackingOptions
 
class  OrthogonalPackingResult
 
struct  PairwiseRestriction
 
class  PbConstraints
 
struct  PbConstraintsEnqueueHelper
 
class  Percentile
 
struct  PermutableEvent
 
struct  PermutableItem
 
struct  PostsolveClauses
 
class  PrecedenceRelations
 
class  PrecedencesPropagator
 
class  PresolveContext
 
class  PresolveTimer
 
class  Prober
 
struct  ProbingOptions
 
class  ProbingRectangle
 
class  ProductDecomposer
 Helper class to express a product as a linear constraint. More...
 
class  ProductDetector
 
class  ProductPropagator
 
class  PropagationGraph
 
struct  PropagationStatistics
 Simple class to display statistics at the end if --v=1. More...
 
class  PropagatorInterface
 Base class for CP like propagators. More...
 
class  ProtoLiteral
 
class  ProtoTrail
 
class  PseudoCosts
 
class  QuickSmallDivision
 
class  RandomIntervalSchedulingNeighborhoodGenerator
 
class  RandomPrecedenceSchedulingNeighborhoodGenerator
 
class  RandomPrecedencesPackingNeighborhoodGenerator
 
class  RandomRectanglesPackingNeighborhoodGenerator
 
struct  Rectangle
 
struct  RectangleInRange
 
class  RectanglePairwisePropagator
 Propagator that compares the boxes pairwise. More...
 
struct  ReducedDomainNeighborhood
 
class  RelaxationInducedNeighborhoodGenerator
 
class  RelaxRandomConstraintsGenerator
 
class  RelaxRandomVariablesGenerator
 
class  ReservoirConstraint
 
class  ReservoirTimeTabling
 
class  RestartPolicy
 Contains the logic to decide when to restart a SAT tree search. More...
 
class  RevIntegerValueRepository
 
class  RevIntRepository
 
class  RoundingDualFeasibleFunction
 
class  RoundingDualFeasibleFunctionPowerOfTwo
 Same as above for k = 2^log2_k. More...
 
struct  RoundingOptions
 
class  RoutingFullPathNeighborhoodGenerator
 
class  RoutingPathNeighborhoodGenerator
 
class  RoutingRandomNeighborhoodGenerator
 
class  SatClause
 
class  SatCnfReader
 
class  SatDecisionPolicy
 
class  SatPostsolver
 
struct  SatPresolveOptions
 
class  SatPresolver
 
class  SatPropagator
 Base class for all the SAT constraints. More...
 
class  SatSolver
 
class  SavedLiteral
 
class  SavedVariable
 
class  ScatteredIntegerVector
 
class  SccGraph
 
class  SchedulingConstraintHelper
 
class  SchedulingDemandHelper
 
class  SchedulingResourceWindowsNeighborhoodGenerator
 
class  SchedulingTimeWindowNeighborhoodGenerator
 
struct  SearchHeuristics
 
class  SharedBoundsManager
 
struct  SharedClasses
 
class  SharedClausesManager
 
class  SharedIncompleteSolutionManager
 
class  SharedLPSolutionRepository
 
class  SharedLsStates
 Shared set of local search states that we work on. More...
 
class  SharedResponseManager
 
class  SharedSolutionRepository
 
class  SharedStatistics
 Simple class to add statistics by name and print them at the end. More...
 
class  SharedStatTables
 Contains the table we display after the solver is done. More...
 
class  SharedTreeManager
 
class  SharedTreeWorker
 
class  SlicePackingNeighborhoodGenerator
 
class  SquarePropagator
 
class  StampingSimplifier
 
class  SubSolver
 
class  SubsolverNameFilter
 Simple class used to filter executed subsolver names. More...
 
class  SumOfAllDiffLowerBounder
 Utility class for the AllDiff cut generator. More...
 
class  SymmetryPropagator
 
class  SynchronizationPoint
 A simple wrapper to add a synchronization point in the list of subsolvers. More...
 
class  TableConstraint
 
class  TaskSet
 
struct  TaskTime
 
class  ThetaLambdaTree
 
class  TimeTableEdgeFinding
 
class  TimeTablingPerTask
 
class  TopN
 
class  TopNCuts
 
class  Trail
 
class  UniqueClauseStream
 
class  UpperBoundedLinearConstraint
 
struct  ValueLiteralPair
 A value and a literal. More...
 
class  VarDomainWrapper
 
class  VarDomination
 
class  VariableGraphNeighborhoodGenerator
 
class  VariablesAssignment
 
class  VariablesShavingSolver
 
class  VariableWithSameReasonIdentifier
 
struct  VarValue
 Stores one variable and its strategy value. More...
 
class  ZeroHalfCutHelper
 

Typedefs

using InlinedIntegerLiteralVector = absl::InlinedVector<IntegerLiteral, 2>
 
using InlinedIntegerValueVector
 
using IntegerSumLE = LinearConstraintPropagator<false>
 
using IntegerSumLE128 = LinearConstraintPropagator<true>
 

Enumerations

enum  SatFormat { DIMACS , DRAT }
 The file formats that can be used to save a list of clauses. More...
 
enum class  EnforcementStatus { IS_FALSE = 0 , CANNOT_PROPAGATE = 1 , CAN_PROPAGATE = 2 , IS_ENFORCED = 3 }
 

Functions

void SolveFzWithCpModelProto (const fz::Model &fz_model, const fz::FlatzincSatParameters &p, const std::string &sat_params, SolverLogger *logger, SolverLogger *solution_logger)
 
std::vector< Rectangle > GenerateNonConflictingRectangles (int num_rectangles, absl::BitGenRef random)
 
std::vector< RectangleInRange > MakeItemsFromRectangles (absl::Span< const Rectangle > rectangles, double slack_factor, absl::BitGenRef random)
 
std::vector< ItemForPairwiseRestriction > GenerateItemsRectanglesWithNoPairwiseConflict (const std::vector< Rectangle > &rectangles, double slack_factor, absl::BitGenRef random)
 
std::vector< ItemForPairwiseRestriction > GenerateItemsRectanglesWithNoPairwisePropagation (int num_rectangles, double slack_factor, absl::BitGenRef random)
 
bool Preprocess (absl::Span< PermutableItem > &items, std::pair< IntegerValue, IntegerValue > &bounding_box_size, int max_complexity)
 Exposed for testing.
 
BruteForceResult BruteForceOrthogonalPacking (absl::Span< const IntegerValue > sizes_x, absl::Span< const IntegerValue > sizes_y, std::pair< IntegerValue, IntegerValue > bounding_box_size, int max_complexity)
 
bool PresolveFixed2dRectangles (absl::Span< const RectangleInRange > non_fixed_boxes, std::vector< Rectangle > *fixed_boxes)
 
bool ReduceNumberofBoxes (std::vector< Rectangle > *mandatory_rectangles, std::vector< Rectangle > *optional_rectangles)
 
std::function< void(Model *)> AllDifferentBinary (const std::vector< IntegerVariable > &vars)
 
std::function< void(Model *)> AllDifferentOnBounds (const std::vector< AffineExpression > &expressions)
 
std::function< void(Model *)> AllDifferentOnBounds (const std::vector< IntegerVariable > &vars)
 
std::function< void(Model *)> AllDifferentAC (const std::vector< IntegerVariable > &variables)
 
void ExtractAssignment (const LinearBooleanProblem &problem, const SatSolver &solver, std::vector< bool > *assignment)
 
absl::Status ValidateBooleanProblem (const LinearBooleanProblem &problem)
 
CpModelProto BooleanProblemToCpModelproto (const LinearBooleanProblem &problem)
 
void ChangeOptimizationDirection (LinearBooleanProblem *problem)
 
bool LoadBooleanProblem (const LinearBooleanProblem &problem, SatSolver *solver)
 Loads a BooleanProblem into a given SatSolver instance.
 
bool LoadAndConsumeBooleanProblem (LinearBooleanProblem *problem, SatSolver *solver)
 
void UseObjectiveForSatAssignmentPreference (const LinearBooleanProblem &problem, SatSolver *solver)
 
bool AddObjectiveUpperBound (const LinearBooleanProblem &problem, Coefficient upper_bound, SatSolver *solver)
 Adds the constraint that the objective is smaller than the given upper bound.
 
bool AddObjectiveConstraint (const LinearBooleanProblem &problem, bool use_lower_bound, Coefficient lower_bound, bool use_upper_bound, Coefficient upper_bound, SatSolver *solver)
 
Coefficient ComputeObjectiveValue (const LinearBooleanProblem &problem, const std::vector< bool > &assignment)
 Returns the objective value under the current assignment.
 
bool IsAssignmentValid (const LinearBooleanProblem &problem, const std::vector< bool > &assignment)
 Checks that an assignment is valid for the given BooleanProblem.
 
std::string LinearBooleanProblemToCnfString (const LinearBooleanProblem &problem)
 
void StoreAssignment (const VariablesAssignment &assignment, BooleanAssignment *output)
 
void ExtractSubproblem (const LinearBooleanProblem &problem, const std::vector< int > &constraint_indices, LinearBooleanProblem *subproblem)
 Constructs a sub-problem formed by the constraints with given indices.
 
template<typename Graph >
Graph * GenerateGraphForSymmetryDetection (const LinearBooleanProblem &problem, std::vector< int > *initial_equivalence_classes)
 
void MakeAllLiteralsPositive (LinearBooleanProblem *problem)
 
void FindLinearBooleanProblemSymmetries (const LinearBooleanProblem &problem, std::vector< std::unique_ptr< SparsePermutation > > *generators)
 
void ApplyLiteralMappingToBooleanProblem (const util_intops::StrongVector< LiteralIndex, LiteralIndex > &mapping, LinearBooleanProblem *problem)
 
void ProbeAndSimplifyProblem (SatPostsolver *postsolver, LinearBooleanProblem *problem)
 
double AddOffsetAndScaleObjectiveValue (const LinearBooleanProblem &problem, Coefficient v)
 Adds the offset and returns the scaled version of the given objective value.
 
std::function< void(Model *)> ExactlyOnePerRowAndPerColumn (const std::vector< std::vector< Literal > > &graph)
 
void LoadSubcircuitConstraint (int num_nodes, const std::vector< int > &tails, const std::vector< int > &heads, const std::vector< Literal > &literals, Model *model, bool multiple_subcircuit_through_zero)
 
std::function< void(Model *)> CircuitCovering (const std::vector< std::vector< Literal > > &graph, const std::vector< int > &distinguished_nodes)
 
template<class IntContainer >
int ReindexArcs (IntContainer *tails, IntContainer *heads, absl::flat_hash_map< int, int > *mapping_output=nullptr)
 
int64_t OverlapOfTwoIntervals (const ConstraintProto &interval1, const ConstraintProto &interval2, absl::Span< const int64_t > solution)
 ----- CompiledNoOverlap2dConstraint -----
 
int64_t NoOverlapMinRepairDistance (const ConstraintProto &interval1, const ConstraintProto &interval2, absl::Span< const int64_t > solution)
 
void AddCircuitFlowConstraints (LinearIncrementalEvaluator &linear_evaluator, const ConstraintProto &ct_proto)
 
std::vector< IntegerValue > ToIntegerValueVector (const std::vector< int64_t > &input)
 
std::function< void(Model *)> LiteralXorIs (const std::vector< Literal > &literals, bool value)
 Enforces the XOR of a set of literals to be equal to the given value.
 
std::function< void(Model *)> GreaterThanAtLeastOneOf (IntegerVariable target_var, const absl::Span< const IntegerVariable > vars, const absl::Span< const IntegerValue > offsets, const absl::Span< const Literal > selectors, const absl::Span< const Literal > enforcements)
 
std::function< void(Model *)> PartialIsOneOfVar (IntegerVariable target_var, const std::vector< IntegerVariable > &vars, const std::vector< Literal > &selectors)
 
BoolVar Not (BoolVar x)
 
std::ostream & operator<< (std::ostream &os, const BoolVar &var)
 
std::string VarDebugString (const CpModelProto &proto, int index)
 
std::ostream & operator<< (std::ostream &os, const IntVar &var)
 
std::ostream & operator<< (std::ostream &os, const LinearExpr &e)
 
std::ostream & operator<< (std::ostream &os, const DoubleLinearExpr &e)
 
std::ostream & operator<< (std::ostream &os, const IntervalVar &var)
 
int64_t SolutionIntegerValue (const CpSolverResponse &r, const LinearExpr &expr)
 Evaluates the value of a linear expression in a solver response.
 
bool SolutionBooleanValue (const CpSolverResponse &r, BoolVar x)
 Evaluates the value of a Boolean literal in a solver response.
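
A minimal end-to-end sketch of how these two evaluation helpers are typically used. The CpModelBuilder member functions (NewIntVar, NewBoolVar, AddLessOrEqual, Maximize, Build) are documented on the class page rather than in this list; treat the exact calls as an illustration, not a normative example.

#include "ortools/sat/cp_model.h"

namespace operations_research {
namespace sat {

void SolveSmallModel() {
  CpModelBuilder builder;
  const IntVar x = builder.NewIntVar(Domain(0, 10));
  const BoolVar b = builder.NewBoolVar();
  builder.AddLessOrEqual(x, 7).OnlyEnforceIf(b);  // b => x <= 7
  builder.Maximize(x);

  const CpSolverResponse response = Solve(builder.Build());
  if (response.status() == CpSolverStatus::OPTIMAL ||
      response.status() == CpSolverStatus::FEASIBLE) {
    // Evaluate an expression and a literal against the returned response.
    const int64_t x_value = SolutionIntegerValue(response, x);
    const bool b_value = SolutionBooleanValue(response, b);
    (void)x_value;
    (void)b_value;
  }
}

}  // namespace sat
}  // namespace operations_research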
 
template<typename H >
H AbslHashValue (H h, const IntVar &i)
 -- ABSL HASHING SUPPORT ---------------------------------------------------
 
template<typename H >
H AbslHashValue (H h, const IntervalVar &i)
 
LinearExpr operator- (LinearExpr expr)
 
LinearExpr operator+ (const LinearExpr &lhs, const LinearExpr &rhs)
 
LinearExpr operator+ (LinearExpr &&lhs, const LinearExpr &rhs)
 
LinearExpr operator+ (const LinearExpr &lhs, LinearExpr &&rhs)
 
LinearExpr operator+ (LinearExpr &&lhs, LinearExpr &&rhs)
 
LinearExpr operator- (const LinearExpr &lhs, const LinearExpr &rhs)
 
LinearExpr operator- (LinearExpr &&lhs, const LinearExpr &rhs)
 
LinearExpr operator- (const LinearExpr &lhs, LinearExpr &&rhs)
 
LinearExpr operator- (LinearExpr &&lhs, LinearExpr &&rhs)
 
LinearExpr operator* (LinearExpr expr, int64_t factor)
 
LinearExpr operator* (int64_t factor, LinearExpr expr)
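
The overloads above let linear expressions be written with natural arithmetic syntax. A short sketch, again assuming the usual CpModelBuilder member functions from the class page:

  CpModelBuilder builder;
  const IntVar x = builder.NewIntVar(Domain(0, 10));
  const IntVar y = builder.NewIntVar(Domain(0, 10));

  // operator*, operator+ and operator- combine IntVar and constant terms
  // into a LinearExpr.
  const LinearExpr e = 2 * x + 3 * y - 1;
  builder.AddLessOrEqual(e, 20);
  builder.Maximize(x - y);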
 
DoubleLinearExpr operator- (DoubleLinearExpr expr)
 For DoubleLinearExpr.
 
DoubleLinearExpr operator+ (const DoubleLinearExpr &lhs, const DoubleLinearExpr &rhs)
 
DoubleLinearExpr operator+ (DoubleLinearExpr &&lhs, const DoubleLinearExpr &rhs)
 
DoubleLinearExpr operator+ (const DoubleLinearExpr &lhs, DoubleLinearExpr &&rhs)
 
DoubleLinearExpr operator+ (DoubleLinearExpr &&lhs, DoubleLinearExpr &&rhs)
 
DoubleLinearExpr operator+ (DoubleLinearExpr expr, double rhs)
 
DoubleLinearExpr operator+ (double lhs, DoubleLinearExpr expr)
 
DoubleLinearExpr operator- (const DoubleLinearExpr &lhs, const DoubleLinearExpr &rhs)
 
DoubleLinearExpr operator- (DoubleLinearExpr &&lhs, const DoubleLinearExpr &rhs)
 
DoubleLinearExpr operator- (const DoubleLinearExpr &lhs, DoubleLinearExpr &&rhs)
 
DoubleLinearExpr operator- (DoubleLinearExpr &&lhs, DoubleLinearExpr &&rhs)
 
DoubleLinearExpr operator- (DoubleLinearExpr expr, double rhs)
 
DoubleLinearExpr operator- (double lhs, DoubleLinearExpr expr)
 
DoubleLinearExpr operator* (DoubleLinearExpr expr, double factor)
 
DoubleLinearExpr operator* (double factor, DoubleLinearExpr expr)
 
bool PossibleIntegerOverflow (const CpModelProto &model, absl::Span< const int > vars, absl::Span< const int64_t > coeffs, int64_t offset)
 
std::string ValidateCpModel (const CpModelProto &model, bool after_presolve)
 
std::string ValidateInputCpModel (const SatParameters &params, const CpModelProto &model)
 
bool ConstraintIsFeasible (const CpModelProto &model, const ConstraintProto &constraint, absl::Span< const int64_t > variable_values)
 
bool SolutionIsFeasible (const CpModelProto &model, absl::Span< const int64_t > variable_values, const CpModelProto *mapping_proto, const std::vector< int > *postsolve_mapping)
 
void PropagateAutomaton (const AutomatonConstraintProto &proto, const PresolveContext &context, std::vector< absl::flat_hash_set< int64_t > > *states, std::vector< absl::flat_hash_set< int64_t > > *labels)
 Fills and propagates the set of reachable states/labels.
 
void ExpandCpModel (PresolveContext *context)
 
void FinalExpansionForLinearConstraint (PresolveContext *context)
 
Neighborhood GenerateSchedulingNeighborhoodFromIntervalPrecedences (const absl::Span< const std::pair< int, int > > precedences, const CpSolverResponse &initial_solution, const NeighborhoodGeneratorHelper &helper)
 
Neighborhood GenerateSchedulingNeighborhoodFromRelaxedIntervals (absl::Span< const int > intervals_to_relax, absl::Span< const int > variables_to_fix, const CpSolverResponse &initial_solution, absl::BitGenRef random, const NeighborhoodGeneratorHelper &helper)
 
void LoadVariables (const CpModelProto &model_proto, bool view_all_booleans_as_integers, Model *m)
 
void LoadBooleanSymmetries (const CpModelProto &model_proto, Model *m)
 
void ExtractEncoding (const CpModelProto &model_proto, Model *m)
 
void ExtractElementEncoding (const CpModelProto &model_proto, Model *m)
 
void PropagateEncodingFromEquivalenceRelations (const CpModelProto &model_proto, Model *m)
 
void DetectOptionalVariables (const CpModelProto &model_proto, Model *m)
 Automatically detect optional variables.
 
void AddFullEncodingFromSearchBranching (const CpModelProto &model_proto, Model *m)
 
void LoadBoolOrConstraint (const ConstraintProto &ct, Model *m)
 
void LoadBoolAndConstraint (const ConstraintProto &ct, Model *m)
 
void LoadAtMostOneConstraint (const ConstraintProto &ct, Model *m)
 
void LoadExactlyOneConstraint (const ConstraintProto &ct, Model *m)
 
void LoadBoolXorConstraint (const ConstraintProto &ct, Model *m)
 
void SplitAndLoadIntermediateConstraints (bool lb_required, bool ub_required, std::vector< IntegerVariable > *vars, std::vector< int64_t > *coeffs, Model *m)
 
void LoadLinearConstraint (const ConstraintProto &ct, Model *m)
 
void LoadAllDiffConstraint (const ConstraintProto &ct, Model *m)
 
void LoadIntProdConstraint (const ConstraintProto &ct, Model *m)
 
void LoadIntDivConstraint (const ConstraintProto &ct, Model *m)
 
void LoadIntModConstraint (const ConstraintProto &ct, Model *m)
 
void LoadLinMaxConstraint (const ConstraintProto &ct, Model *m)
 
void LoadNoOverlapConstraint (const ConstraintProto &ct, Model *m)
 
void LoadNoOverlap2dConstraint (const ConstraintProto &ct, Model *m)
 
void LoadCumulativeConstraint (const ConstraintProto &ct, Model *m)
 
void LoadReservoirConstraint (const ConstraintProto &ct, Model *m)
 
void LoadCircuitConstraint (const ConstraintProto &ct, Model *m)
 
void LoadRoutesConstraint (const ConstraintProto &ct, Model *m)
 
bool LoadConstraint (const ConstraintProto &ct, Model *m)
 
void LoadIntMinConstraint (const ConstraintProto &ct, Model *m)
 
void LoadIntMaxConstraint (const ConstraintProto &ct, Model *m)
 
void LoadCircuitCoveringConstraint (const ConstraintProto &ct, Model *m)
 
void PostsolveClause (const ConstraintProto &ct, std::vector< Domain > *domains)
 
void PostsolveExactlyOne (const ConstraintProto &ct, std::vector< Domain > *domains)
 
void SetEnforcementLiteralToFalse (const ConstraintProto &ct, std::vector< Domain > *domains)
 
void PostsolveLinear (const ConstraintProto &ct, std::vector< Domain > *domains)
 
void PostsolveLinMax (const ConstraintProto &ct, std::vector< Domain > *domains)
 
void PostsolveElement (const ConstraintProto &ct, std::vector< Domain > *domains)
 We only support 3 cases in the presolve currently.
 
void PostsolveIntMod (const ConstraintProto &ct, std::vector< Domain > *domains)
 We only support assigning to an affine target.
 
void PostsolveResponse (const int64_t num_variables_in_original_model, const CpModelProto &mapping_proto, const std::vector< int > &postsolve_mapping, std::vector< int64_t > *solution)
 
void FillTightenedDomainInResponse (const CpModelProto &original_model, const CpModelProto &mapping_proto, const std::vector< int > &postsolve_mapping, const std::vector< Domain > &search_domains, CpSolverResponse *response, SolverLogger *logger)
 
bool ImportModelWithBasicPresolveIntoContext (const CpModelProto &in_model, PresolveContext *context)
 
bool ImportModelAndDomainsWithBasicPresolveIntoContext (const CpModelProto &in_model, const std::vector< Domain > &domains, std::function< bool(int)> active_constraints, PresolveContext *context)
 
void CopyEverythingExceptVariablesAndConstraintsFieldsIntoContext (const CpModelProto &in_model, PresolveContext *context)
 Copies the non-constraint, non-variable part of the model.
 
CpSolverStatus PresolveCpModel (PresolveContext *context, std::vector< int > *postsolve_mapping)
 Convenient wrapper to call the full presolve.
 
void ApplyVariableMapping (const std::vector< int > &mapping, const PresolveContext &context)
 
std::vector< std::pair< int, int > > FindDuplicateConstraints (const CpModelProto &model_proto, bool ignore_enforcement)
 
std::function< BooleanOrIntegerLiteral()> ConstructUserSearchStrategy (const CpModelProto &cp_model_proto, Model *model)
 Constructs the search strategy specified in the given CpModelProto.
 
std::function< BooleanOrIntegerLiteral()> ConstructHeuristicSearchStrategy (const CpModelProto &cp_model_proto, Model *model)
 Constructs a search strategy tailored for the current model.
 
std::function< BooleanOrIntegerLiteral()> ConstructIntegerCompletionSearchStrategy (const std::vector< IntegerVariable > &variable_mapping, IntegerVariable objective_var, Model *model)
 Constructs an integer completion search strategy.
 
std::function< BooleanOrIntegerLiteral()> ConstructHintSearchStrategy (const CpModelProto &cp_model_proto, CpModelMapping *mapping, Model *model)
 Constructs a search strategy that follows the hint from the model.
 
std::function< BooleanOrIntegerLiteral()> ConstructFixedSearchStrategy (std::function< BooleanOrIntegerLiteral()> user_search, std::function< BooleanOrIntegerLiteral()> heuristic_search, std::function< BooleanOrIntegerLiteral()> integer_completion)
 
std::function< BooleanOrIntegerLiteral()> InstrumentSearchStrategy (const CpModelProto &cp_model_proto, const std::vector< IntegerVariable > &variable_mapping, std::function< BooleanOrIntegerLiteral()> instrumented_strategy, Model *model)
 
absl::flat_hash_map< std::string, SatParameters > GetNamedParameters (SatParameters base_params)
 
std::vector< SatParameters > GetFullWorkerParameters (const SatParameters &base_params, const CpModelProto &cp_model, int num_already_present, SubsolverNameFilter *filter)
 
std::vector< SatParameters > GetFirstSolutionBaseParams (const SatParameters &base_params)
 
std::vector< SatParameters > RepeatParameters (absl::Span< const SatParameters > base_params, int num_params_to_generate)
 
std::string CpSatSolverVersion ()
 Returns a string that describes the version of the solver.
 
std::string CpModelStats (const CpModelProto &model)
 Returns a string with some statistics on the given CpModelProto.
 
std::string CpSolverResponseStats (const CpSolverResponse &response, bool has_objective)
 
std::function< void(Model *)> NewFeasibleSolutionObserver (const std::function< void(const CpSolverResponse &response)> &callback)
 
std::function< void(Model *)> NewFeasibleSolutionLogCallback (const std::function< std::string(const CpSolverResponse &response)> &callback)
 
std::function< void(Model *)> NewBestBoundCallback (const std::function< void(double)> &callback)
 
std::function< SatParameters(Model *)> NewSatParameters (const std::string &params)
 
std::function< SatParameters(Model *)> NewSatParameters (const sat::SatParameters &parameters)
 
CpSolverResponse SolveCpModel (const CpModelProto &model_proto, Model *model)
 
CpSolverResponse Solve (const CpModelProto &model_proto)
 Solves the given CpModelProto and returns an instance of CpSolverResponse.
 
CpSolverResponse SolveWithParameters (const CpModelProto &model_proto, const SatParameters &params)
 Solves the given CpModelProto with the given parameters.
 
CpSolverResponse SolveWithParameters (const CpModelProto &model_proto, const std::string &params)
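
A sketch of the two usual ways to pass parameters, given a CpModelProto built elsewhere: either hand a SatParameters proto to SolveWithParameters(), or register NewSatParameters() and NewFeasibleSolutionObserver() on a Model and call SolveCpModel(). The set_max_time_in_seconds() accessor is the standard SatParameters proto setter; the include paths are the usual ones and may differ across versions.

#include "ortools/sat/cp_model.h"
#include "ortools/sat/model.h"
#include "ortools/sat/sat_parameters.pb.h"

namespace operations_research {
namespace sat {

void SolveWithTimeLimit(const CpModelProto& model_proto) {
  // Variant 1: pass a SatParameters proto directly.
  SatParameters params;
  params.set_max_time_in_seconds(10.0);
  const CpSolverResponse r1 = SolveWithParameters(model_proto, params);

  // Variant 2: attach parameters and callbacks to a Model.
  Model model;
  model.Add(NewSatParameters("max_time_in_seconds:10.0"));
  model.Add(NewFeasibleSolutionObserver([](const CpSolverResponse& response) {
    // Called on each improving feasible solution found during the search.
  }));
  const CpSolverResponse r2 = SolveCpModel(model_proto, &model);
  (void)r1;
  (void)r2;
}

}  // namespace sat
}  // namespace operations_research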
 
void LoadAndSolveCpModelForTest (const CpModelProto &model_proto, Model *model)
 
std::function< SatParameters(Model *)> NewSatParameters (const SatParameters &parameters)
 
void LoadDebugSolution (const CpModelProto &model_proto, Model *model)
 
void InitializeDebugSolution (const CpModelProto &model_proto, Model *model)
 
std::vector< int64_t > GetSolutionValues (const CpModelProto &model_proto, const Model &model)
 
IntegerVariable AddLPConstraints (bool objective_need_to_be_tight, const CpModelProto &model_proto, Model *m)
 Adds one LinearProgrammingConstraint per connected component of the model.
 
void RegisterVariableBoundsLevelZeroExport (const CpModelProto &, SharedBoundsManager *shared_bounds_manager, Model *model)
 
void RegisterVariableBoundsLevelZeroImport (const CpModelProto &model_proto, SharedBoundsManager *shared_bounds_manager, Model *model)
 
void RegisterObjectiveBestBoundExport (IntegerVariable objective_var, SharedResponseManager *shared_response_manager, Model *model)
 
void RegisterObjectiveBoundsImport (SharedResponseManager *shared_response_manager, Model *model)
 
void RegisterClausesExport (int id, SharedClausesManager *shared_clauses_manager, Model *model)
 Registers a callback that will export good clauses discovered during search.
 
int RegisterClausesLevelZeroImport (int id, SharedClausesManager *shared_clauses_manager, Model *model)
 
void LoadBaseModel (const CpModelProto &model_proto, Model *model)
 
void LoadFeasibilityPump (const CpModelProto &model_proto, Model *model)
 
void LoadCpModel (const CpModelProto &model_proto, Model *model)
 
void SolveLoadedCpModel (const CpModelProto &model_proto, Model *model)
 
void QuickSolveWithHint (const CpModelProto &model_proto, Model *model)
 
void MinimizeL1DistanceWithHint (const CpModelProto &model_proto, Model *model)
 
void PostsolveResponseWithFullSolver (int num_variables_in_original_model, CpModelProto mapping_proto, const std::vector< int > &postsolve_mapping, std::vector< int64_t > *solution)
 
void PostsolveResponseWrapper (const SatParameters &params, int num_variable_in_original_model, const CpModelProto &mapping_proto, const std::vector< int > &postsolve_mapping, std::vector< int64_t > *solution)
 
void AdaptGlobalParameters (const CpModelProto &model_proto, Model *model)
 
void FindCpModelSymmetries (const SatParameters &params, const CpModelProto &problem, std::vector< std::unique_ptr< SparsePermutation > > *generators, double deterministic_limit, SolverLogger *logger)
 
void DetectAndAddSymmetryToProto (const SatParameters &params, CpModelProto *proto, SolverLogger *logger)
 Detects symmetries and fills the symmetry field.
 
bool DetectAndExploitSymmetriesInPresolve (PresolveContext *context)
 
int64_t LinearExpressionGcd (const LinearExpressionProto &expr, int64_t gcd)
 
void DivideLinearExpression (int64_t divisor, LinearExpressionProto *expr)
 
void SetToNegatedLinearExpression (const LinearExpressionProto &input_expr, LinearExpressionProto *output_negated_expr)
 Fills the target as negated ref.
 
IndexReferences GetReferencesUsedByConstraint (const ConstraintProto &ct)
 
void GetReferencesUsedByConstraint (const ConstraintProto &ct, std::vector< int > *variables, std::vector< int > *literals)
 
void ApplyToAllLiteralIndices (const std::function< void(int *)> &f, ConstraintProto *ct)
 
void ApplyToAllVariableIndices (const std::function< void(int *)> &f, ConstraintProto *ct)
 
void ApplyToAllIntervalIndices (const std::function< void(int *)> &f, ConstraintProto *ct)
 
absl::string_view ConstraintCaseName (ConstraintProto::ConstraintCase constraint_case)
 
std::vector< int > UsedVariables (const ConstraintProto &ct)
 
std::vector< int > UsedIntervals (const ConstraintProto &ct)
 Returns the sorted list of intervals used by a constraint.
 
int64_t ComputeInnerObjective (const CpObjectiveProto &objective, absl::Span< const int64_t > solution)
 
bool ExpressionContainsSingleRef (const LinearExpressionProto &expr)
 Returns true if a linear expression can be reduced to a single ref.
 
bool ExpressionIsAffine (const LinearExpressionProto &expr)
 Checks if the expression is affine or constant.
 
int GetSingleRefFromExpression (const LinearExpressionProto &expr)
 
void AddLinearExpressionToLinearConstraint (const LinearExpressionProto &expr, int64_t coefficient, LinearConstraintProto *linear)
 
bool SafeAddLinearExpressionToLinearConstraint (const LinearExpressionProto &expr, int64_t coefficient, LinearConstraintProto *linear)
 Same method, but returns whether the addition was possible without overflowing.
 
bool LinearExpressionProtosAreEqual (const LinearExpressionProto &a, const LinearExpressionProto &b, int64_t b_scaling=1)
 Returns true iff a == b * b_scaling.
 
uint64_t FingerprintExpression (const LinearExpressionProto &lin, uint64_t seed)
 Returns a stable fingerprint of a linear expression.
 
uint64_t FingerprintModel (const CpModelProto &model, uint64_t seed=kDefaultFingerprintSeed)
 Returns a stable fingerprint of a model.
 
void SetupTextFormatPrinter (google::protobuf::TextFormat::Printer *printer)
 
bool ConvertCpModelProtoToCnf (const CpModelProto &cp_model, std::string *out)
 
int CombineSeed (int base_seed, int64_t delta)
 We assume delta >= 0 and we only use the low bit of delta.
 
int NegatedRef (int ref)
 Small utility functions to deal with negative variable/literal references.
 
int PositiveRef (int ref)
 
bool RefIsPositive (int ref)
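
In a CpModelProto, an integer reference r denotes variable r when r >= 0 and the negation of variable -r - 1 otherwise. A small sketch of these helpers under that convention (the values in the comments follow from it):

  const int x = 3;                          // variable index 3 in a CpModelProto
  const int not_x = NegatedRef(x);          // -4: the negated literal of variable 3
  const bool is_pos = RefIsPositive(not_x); // false
  const int var_index = PositiveRef(not_x); // 3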
 
bool HasEnforcementLiteral (const ConstraintProto &ct)
 Small utility functions to deal with half-reified constraints.
 
int EnforcementLiteral (const ConstraintProto &ct)
 
template<typename Set >
void InsertVariablesFromConstraint (const CpModelProto &model_proto, int index, Set &output)
 Insert variables in a constraint into a set.
 
template<typename ProtoWithDomain >
bool DomainInProtoContains (const ProtoWithDomain &proto, int64_t value)
 
template<typename ProtoWithDomain >
void FillDomainInProto (const Domain &domain, ProtoWithDomain *proto)
 Serializes a Domain into the domain field of a proto.
 
template<typename ProtoWithDomain >
Domain ReadDomainFromProto (const ProtoWithDomain &proto)
 Reads a Domain from the domain field of a proto.
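
A sketch of round-tripping a Domain through the repeated domain field of a proto such as IntegerVariableProto. The Domain constructors and UnionWith() come from ortools/util/sorted_interval_list.h; treat the exact includes as the usual ones rather than a guarantee.

#include "ortools/sat/cp_model.pb.h"
#include "ortools/util/sorted_interval_list.h"

namespace operations_research {
namespace sat {

void DomainRoundTrip() {
  IntegerVariableProto var_proto;
  const Domain d = Domain(0, 5).UnionWith(Domain(10));  // [0,5] union {10}
  FillDomainInProto(d, &var_proto);                     // writes [0,5] and [10,10]
  const Domain read_back = ReadDomainFromProto(var_proto);
  const bool contains_seven = DomainInProtoContains(var_proto, 7);  // false
  (void)read_back;
  (void)contains_seven;
}

}  // namespace sat
}  // namespace operations_research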
 
template<typename ProtoWithDomain >
std::vector< int64_t > AllValuesInDomain (const ProtoWithDomain &proto)
 
double ScaleObjectiveValue (const CpObjectiveProto &proto, int64_t value)
 Scales back an objective value to a double value from the original model.
 
int64_t ScaleInnerObjectiveValue (const CpObjectiveProto &proto, int64_t value)
 Similar to ScaleObjectiveValue() but uses the integer version.
 
double UnscaleObjectiveValue (const CpObjectiveProto &proto, double value)
 Removes the objective scaling and offset from the given value.
 
template<class ExpressionList >
bool ExpressionsContainsOnlyOneVar (const ExpressionList &exprs)
 Returns true if there is exactly one variable appearing in all the expressions.
 
template<class T >
uint64_t FingerprintRepeatedField (const google::protobuf::RepeatedField< T > &sequence, uint64_t seed)
 
template<class T >
uint64_t FingerprintSingleField (const T &field, uint64_t seed)
 
template<class M >
bool WriteModelProtoToFile (const M &proto, absl::string_view filename)
 
bool operator== (const BoolArgumentProto &lhs, const BoolArgumentProto &rhs)
 
template<typename H >
H AbslHashValue (H h, const BoolArgumentProto &m)
 
bool operator== (const LinearConstraintProto &lhs, const LinearConstraintProto &rhs)
 
template<typename H >
H AbslHashValue (H h, const LinearConstraintProto &m)
 
std::function< void(Model *)> Cumulative (const std::vector< IntervalVariable > &vars, const std::vector< AffineExpression > &demands, AffineExpression capacity, SchedulingConstraintHelper *helper)
 
std::function< void(Model *)> CumulativeTimeDecomposition (const std::vector< IntervalVariable > &vars, const std::vector< AffineExpression > &demands, AffineExpression capacity, SchedulingConstraintHelper *helper)
 
std::function< void(Model *)> CumulativeUsingReservoir (const std::vector< IntervalVariable > &vars, const std::vector< AffineExpression > &demands, AffineExpression capacity, SchedulingConstraintHelper *helper=nullptr)
 Another testing implementation, with the same assumptions as CumulativeTimeDecomposition().
 
void AddCumulativeOverloadChecker (AffineExpression capacity, SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands, Model *model)
 
void AddCumulativeOverloadCheckerDff (AffineExpression capacity, SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands, Model *model)
 
IntegerValue GetFactorT (IntegerValue rhs_remainder, IntegerValue divisor, IntegerValue max_magnitude)
 
std::function< IntegerValue(IntegerValue)> GetSuperAdditiveRoundingFunction (IntegerValue rhs_remainder, IntegerValue divisor, IntegerValue t, IntegerValue max_scaling)
 
std::function< IntegerValue(IntegerValue)> GetSuperAdditiveStrengtheningFunction (IntegerValue positive_rhs, IntegerValue min_magnitude)
 
std::function< IntegerValue(IntegerValue)> GetSuperAdditiveStrengtheningMirFunction (IntegerValue positive_rhs, IntegerValue scaling)
 
CutGenerator CreatePositiveMultiplicationCutGenerator (AffineExpression z, AffineExpression x, AffineExpression y, int linearization_level, Model *model)
 A cut generator for z = x * y (x and y >= 0).
 
LinearConstraint ComputeHyperplanAboveSquare (AffineExpression x, AffineExpression square, IntegerValue x_lb, IntegerValue x_ub, Model *model)
 
LinearConstraint ComputeHyperplanBelowSquare (AffineExpression x, AffineExpression square, IntegerValue x_value, Model *model)
 
CutGenerator CreateSquareCutGenerator (AffineExpression y, AffineExpression x, int linearization_level, Model *model)
 
CutGenerator CreateAllDifferentCutGenerator (const std::vector< AffineExpression > &exprs, Model *model)
 
CutGenerator CreateLinMaxCutGenerator (const IntegerVariable target, const std::vector< LinearExpression > &exprs, const std::vector< IntegerVariable > &z_vars, Model *model)
 
bool BuildMaxAffineUpConstraint (const LinearExpression &target, IntegerVariable var, const std::vector< std::pair< IntegerValue, IntegerValue > > &affines, Model *model, LinearConstraintBuilder *builder)
 
CutGenerator CreateMaxAffineCutGenerator (LinearExpression target, IntegerVariable var, std::vector< std::pair< IntegerValue, IntegerValue > > affines, const std::string cut_name, Model *model)
 
CutGenerator CreateCliqueCutGenerator (const std::vector< IntegerVariable > &base_variables, Model *model)
 
std::function< IntegerValue(IntegerValue)> ExtendNegativeFunction (std::function< IntegerValue(IntegerValue)> base_f, IntegerValue period)
 
void AddNonOverlappingRectangles (const std::vector< IntervalVariable > &x, const std::vector< IntervalVariable > &y, Model *model)
 
void GenerateNoOverlap2dEnergyCut (absl::Span< const std::vector< LiteralValueValue > > energies, absl::Span< int > rectangles, absl::string_view cut_name, Model *model, LinearConstraintManager *manager, SchedulingConstraintHelper *x_helper, SchedulingConstraintHelper *y_helper, SchedulingDemandHelper *y_demands_helper)
 
CutGenerator CreateNoOverlap2dEnergyCutGenerator (SchedulingConstraintHelper *x_helper, SchedulingConstraintHelper *y_helper, SchedulingDemandHelper *x_demands_helper, SchedulingDemandHelper *y_demands_helper, const std::vector< std::vector< LiteralValueValue > > &energies, Model *model)
 
void GenerateNoOvelap2dCompletionTimeCutsWithEnergy (absl::string_view cut_name, std::vector< DiffnCtEvent > events, bool use_lifting, bool skip_low_sizes, Model *model, LinearConstraintManager *manager)
 
CutGenerator CreateNoOverlap2dCompletionTimeCutGenerator (SchedulingConstraintHelper *x_helper, SchedulingConstraintHelper *y_helper, Model *model)
 
std::vector< absl::Span< int > > GetOverlappingRectangleComponents (absl::Span< const Rectangle > rectangles, absl::Span< int > active_rectangles)
 
bool ReportEnergyConflict (Rectangle bounding_box, absl::Span< const int > boxes, SchedulingConstraintHelper *x, SchedulingConstraintHelper *y)
 
bool BoxesAreInEnergyConflict (const std::vector< Rectangle > &rectangles, const std::vector< IntegerValue > &energies, absl::Span< const int > boxes, Rectangle *conflict)
 
bool AnalyzeIntervals (bool transpose, absl::Span< const int > local_boxes, absl::Span< const Rectangle > rectangles, absl::Span< const IntegerValue > rectangle_energies, IntegerValue *x_threshold, IntegerValue *y_threshold, Rectangle *conflict)
 
absl::Span< int > FilterBoxesAndRandomize (absl::Span< const Rectangle > cached_rectangles, absl::Span< int > boxes, IntegerValue threshold_x, IntegerValue threshold_y, absl::BitGenRef random)
 
absl::Span< int > FilterBoxesThatAreTooLarge (absl::Span< const Rectangle > cached_rectangles, absl::Span< const IntegerValue > energies, absl::Span< int > boxes)
 
std::ostream & operator<< (std::ostream &out, const IndexedInterval &interval)
 
void ConstructOverlappingSets (bool already_sorted, std::vector< IndexedInterval > *intervals, std::vector< std::vector< int > > *result)
 
void GetOverlappingIntervalComponents (std::vector< IndexedInterval > *intervals, std::vector< std::vector< int > > *components)
 
std::vector< int > GetIntervalArticulationPoints (std::vector< IndexedInterval > *intervals)
 
void AppendPairwiseRestrictions (absl::Span< const ItemForPairwiseRestriction > items, std::vector< PairwiseRestriction > *result)
 
void AppendPairwiseRestrictions (absl::Span< const ItemForPairwiseRestriction > items, absl::Span< const ItemForPairwiseRestriction > other_items, std::vector< PairwiseRestriction > *result)
 
IntegerValue Smallest1DIntersection (IntegerValue range_min, IntegerValue range_max, IntegerValue size, IntegerValue interval_min, IntegerValue interval_max)
 
FindRectanglesResult FindRectanglesWithEnergyConflictMC (const std::vector< RectangleInRange > &intervals, absl::BitGenRef random, double temperature, double candidate_energy_usage_factor)
 
std::string RenderDot (std::optional< Rectangle > bb, absl::Span< const Rectangle > solution)
 
std::vector< Rectangle > FindEmptySpaces (const Rectangle &bounding_box, std::vector< Rectangle > ocupied_rectangles)
 
void ReduceModuloBasis (absl::Span< const std::vector< absl::int128 > > basis, const int elements_to_consider, std::vector< absl::int128 > &v)
 
std::vector< int > GreedyFastDecreasingGcd (const absl::Span< const int64_t > coeffs)
 
DiophantineSolution SolveDiophantine (absl::Span< const int64_t > coeffs, int64_t rhs, absl::Span< const int64_t > var_lbs, absl::Span< const int64_t > var_ubs)
 
void AddDisjunctive (const std::vector< IntervalVariable > &intervals, Model *model)
 
void AddDisjunctiveWithBooleanPrecedencesOnly (const std::vector< IntervalVariable > &intervals, Model *model)
 
bool ContainsLiteral (absl::Span< const Literal > clause, Literal literal)
 
bool Resolve (absl::Span< const Literal > clause, absl::Span< const Literal > other_clause, Literal complementary_literal, VariablesAssignment *assignment, std::vector< Literal > *resolvent)
 
bool AddProblemClauses (const std::string &file_path, DratChecker *drat_checker)
 
bool AddInferedAndDeletedClauses (const std::string &file_path, DratChecker *drat_checker)
 
bool PrintClauses (const std::string &file_path, SatFormat format, absl::Span< const std::vector< Literal > > clauses, int num_variables)
 
 DEFINE_STRONG_INDEX_TYPE (ClauseIndex)
 Index of a clause (>= 0).
 
const ClauseIndex kNoClauseIndex (-1)
 
EncodingNode LazyMerge (EncodingNode *a, EncodingNode *b, SatSolver *solver)
 
void IncreaseNodeSize (EncodingNode *node, SatSolver *solver)
 
EncodingNode FullMerge (Coefficient upper_bound, EncodingNode *a, EncodingNode *b, SatSolver *solver)
 
EncodingNode * MergeAllNodesWithDeque (Coefficient upper_bound, const std::vector< EncodingNode * > &nodes, SatSolver *solver, std::deque< EncodingNode > *repository)
 
EncodingNode * LazyMergeAllNodeWithPQAndIncreaseLb (Coefficient weight, const std::vector< EncodingNode * > &nodes, SatSolver *solver, std::deque< EncodingNode > *repository)
 
void ReduceNodes (Coefficient upper_bound, Coefficient *lower_bound, std::vector< EncodingNode * > *nodes, SatSolver *solver)
 
std::vector< Literal > ExtractAssumptions (Coefficient stratified_lower_bound, const std::vector< EncodingNode * > &nodes, SatSolver *solver)
 
Coefficient ComputeCoreMinWeight (const std::vector< EncodingNode * > &nodes, const std::vector< Literal > &core)
 
Coefficient MaxNodeWeightSmallerThan (const std::vector< EncodingNode * > &nodes, Coefficient upper_bound)
 
std::vector< LiteralValueValue > TryToReconcileEncodings (const AffineExpression &size2_affine, const AffineExpression &affine, absl::Span< const ValueLiteralPair > affine_var_encoding, bool put_affine_left_in_result, IntegerEncoder *integer_encoder)
 
std::vector< LiteralValueValue > TryToReconcileSize2Encodings (const AffineExpression &left, const AffineExpression &right, IntegerEncoder *integer_encoder)
 
template<typename Storage >
 InclusionDetector (const Storage &storage) -> InclusionDetector< Storage >
 Deduction guide.
 
std::vector< IntegerVariable > NegationOf (const std::vector< IntegerVariable > &vars)
 Returns the vector of the negated variables.
 
std::ostream & operator<< (std::ostream &os, const ValueLiteralPair &p)
 
 DEFINE_STRONG_INT64_TYPE (IntegerValue)
 
constexpr IntegerValue kMaxIntegerValue (std::numeric_limits< IntegerValue::ValueType >::max() - 1)
 
constexpr IntegerValue kMinIntegerValue (-kMaxIntegerValue.value())
 
double ToDouble (IntegerValue value)
 
template<class IntType >
IntType IntTypeAbs (IntType t)
 
IntegerValue CeilRatio (IntegerValue dividend, IntegerValue positive_divisor)
 
IntegerValue FloorRatio (IntegerValue dividend, IntegerValue positive_divisor)
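
Both helpers require a strictly positive divisor and round toward negative and positive infinity respectively, unlike plain C++ integer division, which truncates toward zero. A small worked sketch (the relevant ortools/sat headers are assumed to be included):

  FloorRatio(IntegerValue(7), IntegerValue(2));    // == IntegerValue(3)
  CeilRatio(IntegerValue(7), IntegerValue(2));     // == IntegerValue(4)
  FloorRatio(IntegerValue(-7), IntegerValue(2));   // == IntegerValue(-4)
  CeilRatio(IntegerValue(-7), IntegerValue(2));    // == IntegerValue(-3)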
 
IntegerValue CapProdI (IntegerValue a, IntegerValue b)
 Overflows and saturated arithmetic.
 
IntegerValue CapSubI (IntegerValue a, IntegerValue b)
 
IntegerValue CapAddI (IntegerValue a, IntegerValue b)
 
bool ProdOverflow (IntegerValue t, IntegerValue value)
 
bool AtMinOrMaxInt64I (IntegerValue t)
 
IntegerValue PositiveRemainder (IntegerValue dividend, IntegerValue positive_divisor)
 
bool AddTo (IntegerValue a, IntegerValue *result)
 
bool AddProductTo (IntegerValue a, IntegerValue b, IntegerValue *result)
 Computes result += a * b, and returns false iff there is an overflow.
 
 DEFINE_STRONG_INDEX_TYPE (IntegerVariable)
 
const IntegerVariable kNoIntegerVariable (-1)
 
IntegerVariable NegationOf (IntegerVariable i)
 
bool VariableIsPositive (IntegerVariable i)
 
IntegerVariable PositiveVariable (IntegerVariable i)
 
 DEFINE_STRONG_INDEX_TYPE (PositiveOnlyIndex)
 Special type for storing only one thing for var and NegationOf(var).
 
PositiveOnlyIndex GetPositiveOnlyIndex (IntegerVariable var)
 
std::string IntegerTermDebugString (IntegerVariable var, IntegerValue coeff)
 
std::ostream & operator<< (std::ostream &os, IntegerLiteral i_lit)
 
std::ostream & operator<< (std::ostream &os, absl::Span< const IntegerLiteral > literals)
 
template<typename H >
H AbslHashValue (H h, const AffineExpression &e)
 
std::function< BooleanVariable(Model *)> NewBooleanVariable ()
 
std::function< IntegerVariable(Model *)> ConstantIntegerVariable (int64_t value)
 
std::function< IntegerVariable(Model *)> NewIntegerVariable (int64_t lb, int64_t ub)
 
std::function< IntegerVariable(Model *)> NewIntegerVariable (const Domain &domain)
 
IntegerVariable CreateNewIntegerVariableFromLiteral (Literal lit, Model *model)
 
std::function< IntegerVariable(Model *)> NewIntegerVariableFromLiteral (Literal lit)
 
std::function< int64_t(const Model &)> LowerBound (IntegerVariable v)
 
std::function< int64_t(const Model &)> UpperBound (IntegerVariable v)
 
std::function< bool(const Model &)> IsFixed (IntegerVariable v)
 
std::function< int64_t(const Model &)> Value (IntegerVariable v)
 This checks that the variable is fixed.
 
std::function< void(Model *)> GreaterOrEqual (IntegerVariable v, int64_t lb)
 
std::function< void(Model *)> LowerOrEqual (IntegerVariable v, int64_t ub)
 
std::function< void(Model *)> Equality (IntegerVariable v, int64_t value)
 Fix v to a given value.
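
A sketch of these low-level Model-based helpers (they operate on sat::Model and IntegerVariable, not on CpModelBuilder; the relevant ortools/sat headers are assumed to be included):

  Model model;
  const IntegerVariable x = model.Add(NewIntegerVariable(0, 100));

  model.Add(GreaterOrEqual(x, 10));   // x >= 10
  model.Add(LowerOrEqual(x, 40));     // x <= 40

  // Query the root-level bounds currently known to the solver.
  const int64_t lb = model.Get(LowerBound(x));
  const int64_t ub = model.Get(UpperBound(x));
  const bool fixed = model.Get(IsFixed(x));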
 
std::function< void(Model *)> Implication (absl::Span< const Literal > enforcement_literals, IntegerLiteral i)
 
std::function< void(Model *)> ImpliesInInterval (Literal in_interval, IntegerVariable v, int64_t lb, int64_t ub)
 in_interval => v in [lb, ub].
 
std::function< std::vector< ValueLiteralPair >(Model *)> FullyEncodeVariable (IntegerVariable var)
 
std::function< void(Model *)> IsOneOf (IntegerVariable var, const std::vector< Literal > &selectors, const std::vector< IntegerValue > &values)
 
template<typename VectorInt >
std::function< void(Model *)> WeightedSumLowerOrEqual (const std::vector< IntegerVariable > &vars, const VectorInt &coefficients, int64_t upper_bound)
 Weighted sum <= constant.
 
template<typename VectorInt >
std::function< void(Model *)> WeightedSumGreaterOrEqual (const std::vector< IntegerVariable > &vars, const VectorInt &coefficients, int64_t lower_bound)
 Weighted sum >= constant.
 
template<typename VectorInt >
std::function< void(Model *)> FixedWeightedSum (const std::vector< IntegerVariable > &vars, const VectorInt &coefficients, int64_t value)
 Weighted sum == constant.
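
A sketch of the weighted-sum helpers; the coefficients may live in any integer container accepted by the VectorInt template parameter (std::vector<int64_t> here), and the usual ortools/sat headers are assumed:

  Model model;
  const IntegerVariable x = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable y = model.Add(NewIntegerVariable(0, 10));
  const std::vector<IntegerVariable> vars = {x, y};
  const std::vector<int64_t> coeffs = {2, 3};

  model.Add(WeightedSumLowerOrEqual(vars, coeffs, 12));   // 2x + 3y <= 12
  model.Add(WeightedSumGreaterOrEqual(vars, coeffs, 4));  // 2x + 3y >= 4
  // FixedWeightedSum(vars, coeffs, value) would instead impose 2x + 3y == value.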
 
void AddWeightedSumLowerOrEqual (absl::Span< const Literal > enforcement_literals, absl::Span< const IntegerVariable > vars, absl::Span< const int64_t > coefficients, int64_t upper_bound, Model *model)
 enforcement_literals => sum <= upper_bound
 
void AddWeightedSumGreaterOrEqual (absl::Span< const Literal > enforcement_literals, absl::Span< const IntegerVariable > vars, absl::Span< const int64_t > coefficients, int64_t lower_bound, Model *model)
 enforcement_literals => sum >= lower_bound
 
std::function< void(Model *)> ConditionalWeightedSumLowerOrEqual (const std::vector< Literal > &enforcement_literals, const std::vector< IntegerVariable > &vars, const std::vector< int64_t > &coefficients, int64_t upper_bound)
 
std::function< void(Model *)> ConditionalWeightedSumGreaterOrEqual (const std::vector< Literal > &enforcement_literals, const std::vector< IntegerVariable > &vars, const std::vector< int64_t > &coefficients, int64_t upper_bound)
 
void LoadConditionalLinearConstraint (const absl::Span< const Literal > enforcement_literals, const LinearConstraint &cst, Model *model)
 LinearConstraint version.
 
void LoadLinearConstraint (const LinearConstraint &cst, Model *model)
 
void AddConditionalAffinePrecedence (const absl::Span< const Literal > enforcement_literals, AffineExpression left, AffineExpression right, Model *model)
 
template<typename VectorInt >
std::function< IntegerVariable(Model *)> NewWeightedSum (const VectorInt &coefficients, const std::vector< IntegerVariable > &vars)
 
std::function< void(Model *)> IsEqualToMinOf (IntegerVariable min_var, const std::vector< IntegerVariable > &vars)
 
std::function< void(Model *)> IsEqualToMinOf (const LinearExpression &min_expr, const std::vector< LinearExpression > &exprs)
 
std::function< void(Model *)> IsEqualToMaxOf (IntegerVariable max_var, const std::vector< IntegerVariable > &vars)
 
template<class T >
void RegisterAndTransferOwnership (Model *model, T *ct)
 
std::function< void(Model *)> ProductConstraint (AffineExpression a, AffineExpression b, AffineExpression p)
 Adds the constraint: a * b = p.
 
std::function< void(Model *)> DivisionConstraint (AffineExpression num, AffineExpression denom, AffineExpression div)
 Adds the constraint: num / denom = div. (denom > 0).
 
std::function< void(Model *)> FixedDivisionConstraint (AffineExpression a, IntegerValue b, AffineExpression c)
 Adds the constraint: a / b = c where b is a constant.
 
std::function< void(Model *)> FixedModuloConstraint (AffineExpression a, IntegerValue b, AffineExpression c)
 Adds the constraint: a % b = c where b is a constant.
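
A sketch of these arithmetic constraints on the low-level Model; AffineExpression converts implicitly from an IntegerVariable, and the usual ortools/sat headers are assumed:

  Model model;
  const IntegerVariable a = model.Add(NewIntegerVariable(0, 20));
  const IntegerVariable b = model.Add(NewIntegerVariable(1, 20));
  const IntegerVariable p = model.Add(NewIntegerVariable(0, 400));

  model.Add(ProductConstraint(a, b, p));                    // a * b == p
  model.Add(DivisionConstraint(p, b, a));                   // p / b == a (b > 0)
  model.Add(FixedModuloConstraint(a, IntegerValue(5), b));  // a % 5 == b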
 
IntegerLiteral AtMinValue (IntegerVariable var, IntegerTrail *integer_trail)
 
IntegerLiteral ChooseBestObjectiveValue (IntegerVariable var, Model *model)
 If a variable appears in the objective, branch on its best objective value.
 
IntegerLiteral GreaterOrEqualToMiddleValue (IntegerVariable var, IntegerTrail *integer_trail)
 
IntegerLiteral SplitAroundGivenValue (IntegerVariable var, IntegerValue value, Model *model)
 
IntegerLiteral SplitAroundLpValue (IntegerVariable var, Model *model)
 
IntegerLiteral SplitUsingBestSolutionValueInRepository (IntegerVariable var, const SharedSolutionRepository< int64_t > &solution_repo, Model *model)
 
std::function< BooleanOrIntegerLiteral()> FirstUnassignedVarAtItsMinHeuristic (const std::vector< IntegerVariable > &vars, Model *model)
 
std::function< BooleanOrIntegerLiteral()> MostFractionalHeuristic (Model *model)
 Choose the variable with most fractional LP value.
 
std::function< BooleanOrIntegerLiteral()> BoolPseudoCostHeuristic (Model *model)
 
std::function< BooleanOrIntegerLiteral()> LpPseudoCostHeuristic (Model *model)
 
std::function< BooleanOrIntegerLiteral()> UnassignedVarWithLowestMinAtItsMinHeuristic (const std::vector< IntegerVariable > &vars, Model *model)
 
std::function< BooleanOrIntegerLiteral()> SequentialSearch (std::vector< std::function< BooleanOrIntegerLiteral()> > heuristics)
 
std::function< BooleanOrIntegerLiteral()> SequentialValueSelection (std::vector< std::function< IntegerLiteral(IntegerVariable)> > value_selection_heuristics, std::function< BooleanOrIntegerLiteral()> var_selection_heuristic, Model *model)
 
bool LinearizedPartIsLarge (Model *model)
 
std::function< BooleanOrIntegerLiteral()> IntegerValueSelectionHeuristic (std::function< BooleanOrIntegerLiteral()> var_selection_heuristic, Model *model)
 
std::function< BooleanOrIntegerLiteral()> SatSolverHeuristic (Model *model)
 Returns the BooleanOrIntegerLiteral advised by the underlying SAT solver.
 
std::function< BooleanOrIntegerLiteral()> ShaveObjectiveLb (Model *model)
 
std::function< BooleanOrIntegerLiteral()> PseudoCost (Model *model)
 
std::function< BooleanOrIntegerLiteral()> SchedulingSearchHeuristic (Model *model)
 A simple heuristic for scheduling models.
 
std::function< BooleanOrIntegerLiteral()> DisjunctivePrecedenceSearchHeuristic (Model *model)
 
std::function< BooleanOrIntegerLiteral()> CumulativePrecedenceSearchHeuristic (Model *model)
 
std::function< BooleanOrIntegerLiteral()> RandomizeOnRestartHeuristic (bool lns_mode, Model *model)
 
std::function< BooleanOrIntegerLiteral()> FollowHint (const std::vector< BooleanOrIntegerVariable > &vars, const std::vector< IntegerValue > &values, Model *model)
 
std::function< bool()> RestartEveryKFailures (int k, SatSolver *solver)
 A restart policy that restarts every k failures.
 
std::function< bool()> SatSolverRestartPolicy (Model *model)
 A restart policy that uses the underlying sat solver's policy.
 
void ConfigureSearchHeuristics (Model *model)
 
std::vector< std::function< BooleanOrIntegerLiteral()> > CompleteHeuristics (absl::Span< const std::function< BooleanOrIntegerLiteral()> > incomplete_heuristics, const std::function< BooleanOrIntegerLiteral()> &completion_heuristic)
 
SatSolver::Status ResetAndSolveIntegerProblem (const std::vector< Literal > &assumptions, Model *model)
 
SatSolver::Status SolveIntegerProblemWithLazyEncoding (Model *model)
 
IntegerLiteral SplitDomainUsingBestSolutionValue (IntegerVariable var, Model *model)
 
IntegerValue ComputeEnergyMinInWindow (IntegerValue start_min, IntegerValue start_max, IntegerValue end_min, IntegerValue end_max, IntegerValue size_min, IntegerValue demand_min, absl::Span< const LiteralValueValue > filtered_energy, IntegerValue window_start, IntegerValue window_end)
 
void AddIntegerVariableFromIntervals (SchedulingConstraintHelper *helper, Model *model, std::vector< IntegerVariable > *vars)
 Cuts helpers.
 
void AppendVariablesFromCapacityAndDemands (const AffineExpression &capacity, SchedulingDemandHelper *demands_helper, Model *model, std::vector< IntegerVariable > *vars)
 
 DEFINE_STRONG_INDEX_TYPE (IntervalVariable)
 
const IntervalVariable kNoIntervalVariable (-1)
 
std::function< int64_t(const Model &)> MinSize (IntervalVariable v)
 
std::function< int64_t(const Model &)> MaxSize (IntervalVariable v)
 
std::function< bool(const Model &)> IsOptional (IntervalVariable v)
 
std::function< Literal(const Model &)> IsPresentLiteral (IntervalVariable v)
 
std::function< IntervalVariable(Model *)> NewInterval (int64_t min_start, int64_t max_end, int64_t size)
 
std::function< IntervalVariable(Model *)> NewInterval (IntegerVariable start, IntegerVariable end, IntegerVariable size)
 
std::function< IntervalVariable(Model *)> NewIntervalWithVariableSize (int64_t min_start, int64_t max_end, int64_t min_size, int64_t max_size)
 
std::function< IntervalVariable(Model *)> NewOptionalInterval (int64_t min_start, int64_t max_end, int64_t size, Literal is_present)
 
std::function< IntervalVariable(Model *)> NewOptionalInterval (IntegerVariable start, IntegerVariable end, IntegerVariable size, Literal is_present)
 
std::function< IntervalVariable(Model *)> NewOptionalIntervalWithVariableSize (int64_t min_start, int64_t max_end, int64_t min_size, int64_t max_size, Literal is_present)
 
double ComputeActivity (const LinearConstraint &constraint, const util_intops::StrongVector< IntegerVariable, double > &values)
 
double ComputeL2Norm (const LinearConstraint &constraint)
 Returns sqrt(sum square(coeff)).
 
IntegerValue ComputeInfinityNorm (const LinearConstraint &constraint)
 Returns the maximum absolute value of the coefficients.
 
double ScalarProduct (const LinearConstraint &ct1, const LinearConstraint &ct2)
 
void DivideByGCD (LinearConstraint *constraint)
 
void RemoveZeroTerms (LinearConstraint *constraint)
 Removes the entries with a coefficient of zero.
 
void MakeAllCoefficientsPositive (LinearConstraint *constraint)
 Makes all coefficients positive by transforming a variable to its negation.
 
void MakeAllVariablesPositive (LinearConstraint *constraint)
 Makes all variables "positive" by transforming a variable to its negation.
 
bool NoDuplicateVariable (const LinearConstraint &ct)
 Returns false if duplicate variables are found in ct.
 
LinearExpression CanonicalizeExpr (const LinearExpression &expr)
 
bool ValidateLinearConstraintForOverflow (const LinearConstraint &constraint, const IntegerTrail &integer_trail)
 
LinearExpression NegationOf (const LinearExpression &expr)
 Preserves canonicality.
 
LinearExpression PositiveVarExpr (const LinearExpression &expr)
 Returns the same expression with positive variables.
 
IntegerValue GetCoefficient (const IntegerVariable var, const LinearExpression &expr)
 
IntegerValue GetCoefficientOfPositiveVar (const IntegerVariable var, const LinearExpression &expr)
 
bool PossibleOverflow (const IntegerTrail &integer_trail, const LinearConstraint &constraint)
 
std::ostream & operator<< (std::ostream &os, const LinearConstraint &ct)
 
void CleanTermsAndFillConstraint (std::vector< std::pair< IntegerVariable, IntegerValue > > *terms, LinearExpression *output)
 
void CleanTermsAndFillConstraint (std::vector< std::pair< IntegerVariable, IntegerValue > > *terms, LinearConstraint *output)
 
std::ostream & operator<< (std::ostream &os, const EnforcementStatus &e)
 
 DEFINE_STRONG_INDEX_TYPE (EnforcementId)
 
void AppendRelaxationForEqualityEncoding (IntegerVariable var, const Model &model, LinearRelaxation *relaxation, int *num_tight, int *num_loose)
 
void AppendPartialGreaterThanEncodingRelaxation (IntegerVariable var, const Model &model, LinearRelaxation *relaxation)
 
void AppendBoolOrRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AppendBoolAndRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation, ActivityBoundHelper *activity_helper)
 
void AppendAtMostOneRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AppendExactlyOneRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
std::vector< LiteralCreateAlternativeLiteralsWithView (int num_literals, Model *model, LinearRelaxation *relaxation)
 
void AppendCircuitRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 Routing relaxation and cut generators.
 
void AppendRoutesRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AddCircuitCutGenerator (const ConstraintProto &ct, Model *m, LinearRelaxation *relaxation)
 
void AddRoutesCutGenerator (const ConstraintProto &ct, Model *m, LinearRelaxation *relaxation)
 
std::optional< int > DetectMakespan (const std::vector< IntervalVariable > &intervals, const std::vector< AffineExpression > &demands, const AffineExpression &capacity, Model *model)
 
void AppendNoOverlapRelaxationAndCutGenerator (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AppendCumulativeRelaxationAndCutGenerator (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AddCumulativeRelaxation (const AffineExpression &capacity, SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands, const std::optional< AffineExpression > &makespan, Model *model, LinearRelaxation *relaxation)
 Scheduling relaxations and cut generators.
 
void AppendNoOverlap2dRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 Adds the energetic relaxation sum(areas) <= bounding box area.
 
void AppendLinMaxRelaxationPart1 (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AppendMaxAffineRelaxation (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AddMaxAffineCutGenerator (const ConstraintProto &ct, Model *model, LinearRelaxation *relaxation)
 
void AppendLinMaxRelaxationPart2 (IntegerVariable target, const std::vector< Literal > &alternative_literals, const std::vector< LinearExpression > &exprs, Model *model, LinearRelaxation *relaxation)
 
void AppendLinearConstraintRelaxation (const ConstraintProto &ct, bool linearize_enforced_constraints, Model *model, LinearRelaxation *relaxation, ActivityBoundHelper *activity_helper)
 
void TryToLinearizeConstraint (const CpModelProto &model_proto, const ConstraintProto &ct, int linearization_level, Model *model, LinearRelaxation *relaxation, ActivityBoundHelper *helper=nullptr)
 Adds linearization of different types of constraints.
 
void AddIntProdCutGenerator (const ConstraintProto &ct, int linearization_level, Model *m, LinearRelaxation *relaxation)
 Cut generators.
 
void AppendSquareRelaxation (const ConstraintProto &ct, Model *m, LinearRelaxation *relaxation)
 
void AddSquareCutGenerator (const ConstraintProto &ct, int linearization_level, Model *m, LinearRelaxation *relaxation)
 
void AddAllDiffRelaxationAndCutGenerator (const ConstraintProto &ct, int linearization_level, Model *m, LinearRelaxation *relaxation)
 
bool IntervalIsVariable (const IntervalVariable interval, IntervalsRepository *intervals_repository)
 
void AddCumulativeCutGenerator (const AffineExpression &capacity, SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands_helper, const std::optional< AffineExpression > &makespan, Model *m, LinearRelaxation *relaxation)
 
void AddNoOverlapCutGenerator (SchedulingConstraintHelper *helper, const std::optional< AffineExpression > &makespan, Model *m, LinearRelaxation *relaxation)
 
void AddNoOverlap2dCutGenerator (const ConstraintProto &ct, Model *m, LinearRelaxation *relaxation)
 
void AddLinMaxCutGenerator (const ConstraintProto &ct, Model *m, LinearRelaxation *relaxation)
 
void AppendElementEncodingRelaxation (Model *m, LinearRelaxation *relaxation)
 
LinearRelaxation ComputeLinearRelaxation (const CpModelProto &model_proto, Model *m)
 Builds the linear relaxation of a CpModelProto.
 
std::vector< double > ScaleContinuousVariables (double scaling, double max_bound, MPModelProto *mp_model)
 
int64_t FindRationalFactor (double x, int64_t limit, double tolerance)
 
bool MakeBoundsOfIntegerVariablesInteger (const SatParameters &params, MPModelProto *mp_model, SolverLogger *logger)
 
void ChangeLargeBoundsToInfinity (double max_magnitude, MPModelProto *mp_model, SolverLogger *logger)
 
void RemoveNearZeroTerms (const SatParameters &params, MPModelProto *mp_model, SolverLogger *logger)
 
bool MPModelProtoValidationBeforeConversion (const SatParameters &params, const MPModelProto &mp_model, SolverLogger *logger)
 
std::vector< double > DetectImpliedIntegers (MPModelProto *mp_model, SolverLogger *logger)
 
double FindBestScalingAndComputeErrors (const std::vector< double > &coefficients, absl::Span< const double > lower_bounds, absl::Span< const double > upper_bounds, int64_t max_absolute_activity, double wanted_absolute_activity_precision, double *relative_coeff_error, double *scaled_sum_error)
 
bool ConvertMPModelProtoToCpModelProto (const SatParameters &params, const MPModelProto &mp_model, CpModelProto *cp_model, SolverLogger *logger)
 
bool ConvertCpModelProtoToMPModelProto (const CpModelProto &input, MPModelProto *output)
 
bool ScaleAndSetObjective (const SatParameters &params, const std::vector< std::pair< int, double > > &objective, double objective_offset, bool maximize, CpModelProto *cp_model, SolverLogger *logger)
 
bool ConvertBinaryMPModelProtoToBooleanProblem (const MPModelProto &mp_model, LinearBooleanProblem *problem)
 
void ConvertBooleanProblemToLinearProgram (const LinearBooleanProblem &problem, glop::LinearProgram *lp)
 Converts a Boolean optimization problem to its lp formulation.
 
double ComputeTrueObjectiveLowerBound (const CpModelProto &model_proto_with_floating_point_objective, const CpObjectiveProto &integer_objective, const int64_t inner_integer_objective_lower_bound)
 
void MinimizeCoreWithPropagation (TimeLimit *limit, SatSolver *solver, std::vector< Literal > *core)
 
void MinimizeCoreWithSearch (TimeLimit *limit, SatSolver *solver, std::vector< Literal > *core)
 
bool ProbeLiteral (Literal assumption, SatSolver *solver)
 
void FilterAssignedLiteral (const VariablesAssignment &assignment, std::vector< Literal > *core)
 A core cannot be all true.
 
SatSolver::Status MinimizeIntegerVariableWithLinearScanAndLazyEncoding (IntegerVariable objective_var, const std::function< void()> &feasible_solution_observer, Model *model)
 
void RestrictObjectiveDomainWithBinarySearch (IntegerVariable objective_var, const std::function< void()> &feasible_solution_observer, Model *model)
 
void PresolveBooleanLinearExpression (std::vector< Literal > *literals, std::vector< Coefficient > *coefficients, Coefficient *offset)
 
std::string ValidateParameters (const SatParameters &params)
 
bool ComputeBooleanLinearExpressionCanonicalForm (std::vector< LiteralWithCoeff > *cst, Coefficient *bound_shift, Coefficient *max_value)
 
bool ApplyLiteralMapping (const util_intops::StrongVector< LiteralIndex, LiteralIndex > &mapping, std::vector< LiteralWithCoeff > *cst, Coefficient *bound_shift, Coefficient *max_value)
 
bool BooleanLinearExpressionIsCanonical (absl::Span< const LiteralWithCoeff > cst)
 Returns true iff the Boolean linear expression is in canonical form.
 
void SimplifyCanonicalBooleanLinearConstraint (std::vector< LiteralWithCoeff > *cst, Coefficient *rhs)
 
Coefficient ComputeCanonicalRhs (Coefficient upper_bound, Coefficient bound_shift, Coefficient max_value)
 
Coefficient ComputeNegatedCanonicalRhs (Coefficient lower_bound, Coefficient bound_shift, Coefficient max_value)
 
 DEFINE_STRONG_INT64_TYPE (Coefficient)
 
const Coefficient kCoefficientMax (std::numeric_limits< Coefficient::ValueType >::max())
 
template<typename H >
AbslHashValue (H h, const LiteralWithCoeff &term)
 
std::ostream & operator<< (std::ostream &os, LiteralWithCoeff term)
 
std::function< void(Model *)> LowerOrEqual (IntegerVariable a, IntegerVariable b)
 a <= b.
 
std::function< void(Model *)> LowerOrEqualWithOffset (IntegerVariable a, IntegerVariable b, int64_t offset)
 a + offset <= b.
 
std::function< void(Model *)> AffineCoeffOneLowerOrEqualWithOffset (AffineExpression a, AffineExpression b, int64_t offset)
 a + offset <= b. (when a and b are of the form 1 * var + offset).
 
void AddConditionalSum2LowerOrEqual (absl::Span< const Literal > enforcement_literals, IntegerVariable a, IntegerVariable b, int64_t ub, Model *model)
 l => (a + b <= ub).
 
void AddConditionalSum3LowerOrEqual (absl::Span< const Literal > enforcement_literals, IntegerVariable a, IntegerVariable b, IntegerVariable c, int64_t ub, Model *model)
 
std::function< void(Model *)> GreaterOrEqual (IntegerVariable a, IntegerVariable b)
 a >= b.
 
std::function< void(Model *)> Equality (IntegerVariable a, IntegerVariable b)
 a == b.
 
std::function< void(Model *)> EqualityWithOffset (IntegerVariable a, IntegerVariable b, int64_t offset)
 a + offset == b.
 
std::function< void(Model *)> ConditionalLowerOrEqualWithOffset (IntegerVariable a, IntegerVariable b, int64_t offset, Literal is_le)
 is_le => (a + offset <= b).
 
bool LoadModelForProbing (PresolveContext *context, Model *local_model)
 
template<typename ProtoWithVarsAndCoeffs >
bool CanonicalizeLinearExpressionInternal (absl::Span< const int > enforcements, ProtoWithVarsAndCoeffs *proto, int64_t *offset, std::vector< std::pair< int, int64_t > > *tmp_terms, PresolveContext *context)
 
bool AddLinearConstraintMultiple (int64_t factor, const ConstraintProto &to_add, ConstraintProto *to_modify)
 
bool SubstituteVariable (int var, int64_t var_coeff_in_definition, const ConstraintProto &definition, ConstraintProto *ct)
 
bool FindSingleLinearDifference (const LinearConstraintProto &lin1, const LinearConstraintProto &lin2, int *var1, int64_t *coeff1, int *var2, int64_t *coeff2)
 Same as LinearsDifferAtOneTerm() below but also fills the differing terms.
 
bool ClauseIsEnforcementImpliesLiteral (absl::Span< const int > clause, absl::Span< const int > enforcement, int literal)
 
bool LinearsDifferAtOneTerm (const LinearConstraintProto &lin1, const LinearConstraintProto &lin2)
 
bool LookForTrivialSatSolution (double deterministic_time_limit, Model *model, SolverLogger *logger)
 
bool FailedLiteralProbingRound (ProbingOptions options, Model *model)
 
int SUniv (int i)
 
void RecordLPRelaxationValues (Model *model)
 Adds the current LP solution to the pool.
 
ReducedDomainNeighborhood GetRinsRensNeighborhood (const SharedResponseManager *response_manager, const SharedLPSolutionRepository *lp_solutions, SharedIncompleteSolutionManager *incomplete_solutions, double difficulty, absl::BitGenRef random)
 
void GenerateInterestingSubsets (int num_nodes, const std::vector< std::pair< int, int > > &arcs, int stop_at_num_components, std::vector< int > *subset_data, std::vector< absl::Span< const int > > *subsets)
 
void ExtractAllSubsetsFromForest (const std::vector< int > &parent, std::vector< int > *subset_data, std::vector< absl::Span< const int > > *subsets, int node_limit)
 
std::vector< int > ComputeGomoryHuTree (int num_nodes, const std::vector< ArcWithLpValue > &relevant_arcs)
 
void SymmetrizeArcs (std::vector< ArcWithLpValue > *arcs)
 
void SeparateSubtourInequalities (int num_nodes, const std::vector< int > &tails, const std::vector< int > &heads, const std::vector< Literal > &literals, absl::Span< const int64_t > demands, int64_t capacity, LinearConstraintManager *manager, Model *model)
 
CutGenerator CreateStronglyConnectedGraphCutGenerator (int num_nodes, std::vector< int > tails, std::vector< int > heads, std::vector< Literal > literals, Model *model)
 
CutGenerator CreateCVRPCutGenerator (int num_nodes, std::vector< int > tails, std::vector< int > heads, std::vector< Literal > literals, std::vector< int64_t > demands, int64_t capacity, Model *model)
 
void SeparateFlowInequalities (int num_nodes, absl::Span< const int > tails, absl::Span< const int > heads, absl::Span< const AffineExpression > arc_capacities, std::function< void(const std::vector< bool > &in_subset, IntegerValue *min_incoming_flow, IntegerValue *min_outgoing_flow)> get_flows, const util_intops::StrongVector< IntegerVariable, double > &lp_values, LinearConstraintManager *manager, Model *model)
 
CutGenerator CreateFlowCutGenerator (int num_nodes, const std::vector< int > &tails, const std::vector< int > &heads, const std::vector< AffineExpression > &arc_capacities, std::function< void(const std::vector< bool > &in_subset, IntegerValue *min_incoming_flow, IntegerValue *min_outgoing_flow)> get_flows, Model *model)
 
 DEFINE_STRONG_INDEX_TYPE (BooleanVariable)
 Index of a variable (>= 0).
 
const BooleanVariable kNoBooleanVariable (-1)
 
 DEFINE_STRONG_INDEX_TYPE (LiteralIndex)
 Index of a literal (>= 0), see Literal below.
 
const LiteralIndex kNoLiteralIndex (-1)
 
const LiteralIndex kTrueLiteralIndex (-2)
 
const LiteralIndex kFalseLiteralIndex (-3)
 
std::ostream & operator<< (std::ostream &os, Literal literal)
 
template<typename Sink , typename... T>
void AbslStringify (Sink &sink, Literal arg)
 
std::ostream & operator<< (std::ostream &os, absl::Span< const Literal > literals)
 
std::vector< LiteralLiterals (absl::Span< const int > input)
 
std::string SatStatusString (SatSolver::Status status)
 Returns a string representation of a SatSolver::Status.
 
void MinimizeCore (SatSolver *solver, std::vector< Literal > *core)
 
std::function< void(Model *)> BooleanLinearConstraint (int64_t lower_bound, int64_t upper_bound, std::vector< LiteralWithCoeff > *cst)
 
std::function< void(Model *)> CardinalityConstraint (int64_t lower_bound, int64_t upper_bound, const std::vector< Literal > &literals)
 
std::function< void(Model *)> ExactlyOneConstraint (const std::vector< Literal > &literals)
 
std::function< void(Model *)> AtMostOneConstraint (const std::vector< Literal > &literals)
 
std::function< void(Model *)> ClauseConstraint (absl::Span< const Literal > literals)
 
std::function< void(Model *)> Implication (Literal a, Literal b)
 a => b.
 
std::function< void(Model *)> Equality (Literal a, Literal b)
 a == b.
 
std::function< void(Model *)> ReifiedBoolOr (const std::vector< Literal > &literals, Literal r)
 r <=> (at least one literal is true). This is a reified clause.
 
std::function< void(Model *)> EnforcedClause (absl::Span< const Literal > enforcement_literals, absl::Span< const Literal > clause)
 enforcement_literals => clause.
 
std::function< void(Model *)> ReifiedBoolAnd (const std::vector< Literal > &literals, Literal r)
 
std::function< void(Model *)> ReifiedBoolLe (Literal a, Literal b, Literal r)
 r <=> (a <= b).
 
std::function< int64_t(const Model &)> Value (Literal l)
 This checks that the variable is fixed.
 
std::function< int64_t(const Model &)> Value (BooleanVariable b)
 This checks that the variable is fixed.
 
std::function< void(Model *)> ExcludeCurrentSolutionAndBacktrack ()
 
std::ostream & operator<< (std::ostream &os, SatSolver::Status status)
 
void GenerateCumulativeEnergeticCutsWithMakespanAndFixedCapacity (absl::string_view cut_name, const util_intops::StrongVector< IntegerVariable, double > &lp_values, std::vector< EnergyEvent > events, IntegerValue capacity, AffineExpression makespan, TimeLimit *time_limit, Model *model, LinearConstraintManager *manager)
 
void GenerateCumulativeEnergeticCuts (const std::string &cut_name, const util_intops::StrongVector< IntegerVariable, double > &lp_values, std::vector< EnergyEvent > events, const AffineExpression &capacity, TimeLimit *time_limit, Model *model, LinearConstraintManager *manager)
 
CutGenerator CreateCumulativeEnergyCutGenerator (SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands_helper, const AffineExpression &capacity, const std::optional< AffineExpression > &makespan, Model *model)
 
CutGenerator CreateNoOverlapEnergyCutGenerator (SchedulingConstraintHelper *helper, const std::optional< AffineExpression > &makespan, Model *model)
 
CutGenerator CreateCumulativeTimeTableCutGenerator (SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands_helper, const AffineExpression &capacity, Model *model)
 
void GenerateCutsBetweenPairOfNonOverlappingTasks (absl::string_view cut_name, const util_intops::StrongVector< IntegerVariable, double > &lp_values, std::vector< CachedIntervalData > events, IntegerValue capacity_max, Model *model, LinearConstraintManager *manager)
 
CutGenerator CreateCumulativePrecedenceCutGenerator (SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands_helper, const AffineExpression &capacity, Model *model)
 
CutGenerator CreateNoOverlapPrecedenceCutGenerator (SchedulingConstraintHelper *helper, Model *model)
 
bool ComputeMinSumOfWeightedEndMins (std::vector< PermutableEvent > &events, IntegerValue capacity_max, IntegerValue &min_sum_of_end_mins, IntegerValue &min_sum_of_weighted_end_mins, IntegerValue unweighted_threshold, IntegerValue weighted_threshold)
 
void GenerateShortCompletionTimeCutsWithExactBound (const std::string &cut_name, std::vector< CtEvent > events, IntegerValue capacity_max, Model *model, LinearConstraintManager *manager)
 
void GenerateCompletionTimeCutsWithEnergy (absl::string_view cut_name, std::vector< CtEvent > events, IntegerValue capacity_max, bool skip_low_sizes, Model *model, LinearConstraintManager *manager)
 
CutGenerator CreateNoOverlapCompletionTimeCutGenerator (SchedulingConstraintHelper *helper, Model *model)
 
CutGenerator CreateCumulativeCompletionTimeCutGenerator (SchedulingConstraintHelper *helper, SchedulingDemandHelper *demands_helper, const AffineExpression &capacity, Model *model)
 
bool SimplifyClause (const std::vector< Literal > &a, std::vector< Literal > *b, LiteralIndex *opposite_literal, int64_t *num_inspected_literals)
 
LiteralIndex DifferAtGivenLiteral (const std::vector< Literal > &a, const std::vector< Literal > &b, Literal l)
 
bool ComputeResolvant (Literal x, const std::vector< Literal > &a, const std::vector< Literal > &b, std::vector< Literal > *out)
 
int ComputeResolvantSize (Literal x, const std::vector< Literal > &a, const std::vector< Literal > &b)
 
void ProbeAndFindEquivalentLiteral (SatSolver *solver, SatPostsolver *postsolver, DratProofHandler *drat_proof_handler, util_intops::StrongVector< LiteralIndex, LiteralIndex > *mapping, SolverLogger *logger)
 
void SequentialLoop (std::vector< std::unique_ptr< SubSolver > > &subsolvers)
 
void DeterministicLoop (std::vector< std::unique_ptr< SubSolver > > &subsolvers, int num_threads, int batch_size, int max_num_batches)
 
void NonDeterministicLoop (std::vector< std::unique_ptr< SubSolver > > &subsolvers, const int num_threads)
 
std::vector< std::vector< int > > BasicOrbitopeExtraction (absl::Span< const std::unique_ptr< SparsePermutation > > generators)
 
std::vector< int > GetOrbits (int n, absl::Span< const std::unique_ptr< SparsePermutation > > generators)
 
std::vector< int > GetOrbitopeOrbits (int n, absl::Span< const std::vector< int > > orbitope)
 
void TransformToGeneratorOfStabilizer (int to_stabilize, std::vector< std::unique_ptr< SparsePermutation > > *generators)
 
void FillSolveStatsInResponse (Model *model, CpSolverResponse *response)
 
std::string ExtractSubSolverName (const std::string &improvement_info)
 
std::function< void(Model *)> LiteralTableConstraint (const std::vector< std::vector< Literal > > &literal_tuples, const std::vector< Literal > &line_literals)
 
template<typename IntegerType >
constexpr IntegerType IntegerTypeMinimumValue ()
 The minimal value of an envelope, for instance the envelope of the empty set.
 
template<>
constexpr IntegerValue IntegerTypeMinimumValue ()
 
void AddReservoirConstraint (std::vector< AffineExpression > times, std::vector< AffineExpression > deltas, std::vector< Literal > presences, int64_t min_level, int64_t max_level, Model *model)
 
std::string FormatCounter (int64_t num)
 Prints a positive number with separators for easier reading (ex: 1'348'065).
 
std::string FormatTable (std::vector< std::vector< std::string > > &table, int spacing)
 
void RandomizeDecisionHeuristic (absl::BitGenRef random, SatParameters *parameters)
 Randomizes the decision heuristic of the given SatParameters.
 
int64_t ModularInverse (int64_t x, int64_t m)
 
int64_t PositiveMod (int64_t x, int64_t m)
 Just returns x % m but with a result always in [0, m).
 
int64_t ProductWithModularInverse (int64_t coeff, int64_t mod, int64_t rhs)
 
bool SolveDiophantineEquationOfSizeTwo (int64_t &a, int64_t &b, int64_t &cte, int64_t &x0, int64_t &y0)
 
int64_t FloorSquareRoot (int64_t a)
 The argument must be non-negative.
 
int64_t CeilSquareRoot (int64_t a)
 
int64_t ClosestMultiple (int64_t value, int64_t base)
 
bool LinearInequalityCanBeReducedWithClosestMultiple (int64_t base, absl::Span< const int64_t > coeffs, absl::Span< const int64_t > lbs, absl::Span< const int64_t > ubs, int64_t rhs, int64_t *new_rhs)
 
int MoveOneUnprocessedLiteralLast (const absl::btree_set< LiteralIndex > &processed, int relevant_prefix_size, std::vector< Literal > *literals)
 
int WeightedPick (absl::Span< const double > input, absl::BitGenRef random)
 
void CompressTuples (absl::Span< const int64_t > domain_sizes, std::vector< std::vector< int64_t > > *tuples)
 
std::vector< std::vector< absl::InlinedVector< int64_t, 2 > > > FullyCompressTuples (absl::Span< const int64_t > domain_sizes, std::vector< std::vector< int64_t > > *tuples)
 
std::vector< absl::Span< int > > AtMostOneDecomposition (const std::vector< std::vector< int > > &graph, absl::BitGenRef random, std::vector< int > *buffer)
 
std::string FormatName (absl::string_view name)
 This is used to format our table first row entry.
 
int64_t SafeDoubleToInt64 (double value)
 
bool IsNegatableInt64 (absl::int128 x)
 Tells whether a int128 can be casted to a int64_t that can be negated.
 
template<typename IntType , bool ceil>
IntType CeilOrFloorOfRatio (IntType numerator, IntType denominator)
 
template<typename IntType >
IntType CeilOfRatio (IntType numerator, IntType denominator)
 
template<typename IntType >
IntType FloorOfRatio (IntType numerator, IntType denominator)
 
void ScanModelForDominanceDetection (PresolveContext &context, VarDomination *var_domination)
 
void ScanModelForDualBoundStrengthening (const PresolveContext &context, DualBoundStrengthening *dual_bound_strengthening)
 Scan the model so that dual_bound_strengthening.Strenghten() works.
 
bool ExploitDominanceRelations (const VarDomination &var_domination, PresolveContext *context)
 

Variables

static constexpr int kMaxProblemSize = 16
 
constexpr uint64_t kDefaultFingerprintSeed = 0xa5b85c5e198ed849
 Default seed for fingerprints.
 
for i = 0 ... k-2
 
for b [i][j] = 0 if j > i+1
 
constexpr int kObjectiveConstraint = -1
 We use some special constraint index in our variable <-> constraint graph.
 
constexpr int kAffineRelationConstraint = -2
 
constexpr int kAssumptionsConstraint = -3
 
const int kUnsatTrailIndex = -1
 A constant used by the EnqueueDecision*() API.
 
constexpr int64_t kTableAnyValue = std::numeric_limits<int64_t>::min()
 

Detailed Description

Solution Feasibility.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Typedef Documentation

◆ InlinedIntegerLiteralVector

Definition at line 286 of file integer.h.

◆ InlinedIntegerValueVector

Initial value:
absl::InlinedVector<std::pair<IntegerVariable, IntegerValue>, 2>

Definition at line 287 of file integer.h.

◆ IntegerSumLE

◆ IntegerSumLE128

Enumeration Type Documentation

◆ EnforcementStatus

An enforced constraint can be in one of these 4 states.

Note
we rely on the integer encoding to take 2 bits for optimization.
Enumerator
IS_FALSE 

One enforcement literal is false.

CANNOT_PROPAGATE 

More than two literals are unassigned.

CAN_PROPAGATE 

All enforcement literals are true but one.

IS_ENFORCED 

All enforcement literals are true.

Definition at line 50 of file linear_propagation.h.

◆ SatFormat

The file formats that can be used to save a list of clauses.

Enumerator
DIMACS 
DRAT 

Definition at line 334 of file drat_checker.h.

Function Documentation

◆ AbslHashValue() [1/6]

template<typename H >
H operations_research::sat::AbslHashValue ( H h,
const AffineExpression & e )

Definition at line 363 of file integer.h.

◆ AbslHashValue() [2/6]

template<typename H >
H operations_research::sat::AbslHashValue ( H h,
const BoolArgumentProto & m )

Definition at line 324 of file cp_model_utils.h.

◆ AbslHashValue() [3/6]

template<typename H >
H operations_research::sat::AbslHashValue ( H h,
const IntervalVar & i )

Definition at line 520 of file cp_model.h.

◆ AbslHashValue() [4/6]

template<typename H >
H operations_research::sat::AbslHashValue ( H h,
const IntVar & i )

– ABSL HASHING SUPPORT --------------------------------------------------—

Definition at line 515 of file cp_model.h.

◆ AbslHashValue() [5/6]

template<typename H >
H operations_research::sat::AbslHashValue ( H h,
const LinearConstraintProto & m )

Definition at line 346 of file cp_model_utils.h.

◆ AbslHashValue() [6/6]

template<typename H >
H operations_research::sat::AbslHashValue ( H h,
const LiteralWithCoeff & term )

Definition at line 65 of file pb_constraint.h.

◆ AbslStringify()

template<typename Sink , typename... T>
void operations_research::sat::AbslStringify ( Sink & sink,
Literal arg )

Definition at line 123 of file sat_base.h.

◆ AdaptGlobalParameters()

void operations_research::sat::AdaptGlobalParameters ( const CpModelProto & model_proto,
Model * model )

Update params.num_workers() if the old field was used.

Initialize the number of workers if set to 0.

Sometimes, hardware_concurrency will return 0. So always default to 1.

We currently only use the feasibility pump or rins/rens if it is enabled and some other parameters are not on.

Todo
(user): for now this is not deterministic so we disable it on interleave search. Fix.

We disable this if the global param asked for no LP.

Disable shared bounds if we are in single thread and we are not tightening the domains.

Definition at line 1707 of file cp_model_solver_helpers.cc.

◆ AddAllDiffRelaxationAndCutGenerator()

void operations_research::sat::AddAllDiffRelaxationAndCutGenerator ( const ConstraintProto & ct,
int linearization_level,
Model * m,
LinearRelaxation * relaxation )

Build union of affine expressions domains to check if this is a permutation.

In case of a permutation, the linear constraint is tight.

Definition at line 1545 of file linear_relaxation.cc.

◆ AddCircuitCutGenerator()

void operations_research::sat::AddCircuitCutGenerator ( const ConstraintProto & ct,
Model * m,
LinearRelaxation * relaxation )

Definition at line 575 of file linear_relaxation.cc.

◆ AddCircuitFlowConstraints()

void operations_research::sat::AddCircuitFlowConstraints ( LinearIncrementalEvaluator & linear_evaluator,
const ConstraintProto & ct_proto )

Definition at line 1445 of file constraint_violation.cc.

◆ AddConditionalAffinePrecedence()

void operations_research::sat::AddConditionalAffinePrecedence ( const absl::Span< const Literal > enforcement_literals,
AffineExpression left,
AffineExpression right,
Model * model )
inline

Definition at line 652 of file integer_expr.h.

◆ AddConditionalSum2LowerOrEqual()

void operations_research::sat::AddConditionalSum2LowerOrEqual ( absl::Span< const Literal > enforcement_literals,
IntegerVariable a,
IntegerVariable b,
int64_t ub,
Model * model )
inline

l => (a + b <= ub).

Todo
(user): Refactor to be sure we do not miss any level zero relations.

Definition at line 603 of file precedences.h.

◆ AddConditionalSum3LowerOrEqual()

void operations_research::sat::AddConditionalSum3LowerOrEqual ( absl::Span< const Literal > enforcement_literals,
IntegerVariable a,
IntegerVariable b,
IntegerVariable c,
int64_t ub,
Model * model )
inline

l => (a + b + c <= ub).

Todo
(user): Use level zero bounds to infer binary precedence relations?

Definition at line 620 of file precedences.h.

◆ AddCumulativeCutGenerator()

void operations_research::sat::AddCumulativeCutGenerator ( const AffineExpression & capacity,
SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands_helper,
const std::optional< AffineExpression > & makespan,
Model * m,
LinearRelaxation * relaxation )

Checks if at least one rectangle has a variable size, is optional, or if the demand or the capacity are variable.

Checks variable demand.

Definition at line 1611 of file linear_relaxation.cc.

◆ AddCumulativeOverloadChecker()

void operations_research::sat::AddCumulativeOverloadChecker ( AffineExpression capacity,
SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands,
Model * model )

Enforces the existence of a preemptive schedule where every task is executed inside its interval, using energy units of the resource during execution.

Important: This only uses the energies min/max and not the actual demand of a task. It can thus be used in some non-conventional situation.

All energy expression are assumed to take a non-negative value; if the energy of a task is 0, the task can run anywhere. The schedule never uses more than capacity units of energy at a given time.

This is mathematically equivalent to making a model with energy(task) different tasks with demand and size 1, but is much more efficient, since it uses O(tasks) variables instead of O(sum_{task} |energy(task)|).

Definition at line 42 of file cumulative_energy.cc.

◆ AddCumulativeOverloadCheckerDff()

void operations_research::sat::AddCumulativeOverloadCheckerDff ( AffineExpression capacity,
SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands,
Model * model )

Same as above, but applying a Dual Feasible Function (also known as a conservative scale) before looking for overload.

Definition at line 53 of file cumulative_energy.cc.

◆ AddCumulativeRelaxation()

void operations_research::sat::AddCumulativeRelaxation ( const AffineExpression & capacity,
SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands_helper,
const std::optional< AffineExpression > & makespan,
Model * model,
LinearRelaxation * relaxation )

Scheduling relaxations and cut generators.

This relaxation will compute the bounding box of all tasks in the cumulative, and add the constraint that the sum of energies of each task must fit in the capacity * span area.

Adds linearization of cumulative constraints.The second part adds an energetic equation linking the duration of all potential tasks to the actual span * capacity of the cumulative constraint. It uses the makespan to compute the span of the constraint if defined.

There are no active intervals, no need to add the relaxation.

If nothing is variable, and the coefficients cannot be reduced, the linear relaxation will already be enforced by the scheduling propagators.

Specialized case 1: sizes are fixed with a non 1 gcd and no makespan.

We can simplify the capacity only if it is fixed.

Todo
(user): We could use (capacity / demands_gcd) * demands_gcd.

Copy the decomposed energy.

The energy is defined if the vector is not empty. Let's reduce the coefficients.

We know the size is fixed.

Add the available energy of the cumulative.

Todo
(user): Implement demands_gcd != 1 && capacity is fixed.

The energy is defined if the vector is not empty.

The energy is not a decomposed product, but it could still be constant or linear. If not, a McCormick relaxation will be introduced. AddQuadraticLowerBound() supports all cases.

Create and link span_start and span_end to the starts and ends of the tasks.

Todo
(user): In some cases, we could have only one task that can be first.

Definition at line 791 of file linear_relaxation.cc.

◆ AddDisjunctive()

void operations_research::sat::AddDisjunctive ( const std::vector< IntervalVariable > & intervals,
Model * model )

Enforces a disjunctive (or no overlap) constraint on the given interval variables. The intervals are interpreted as [start, end) and the constraint enforces that no time point belongs to two intervals.

Todo
(user): This is not completely true for empty intervals (start == end). Make sure such intervals are ignored by the constraint.

Depending on the parameters, create all pair of conditional precedences.

Todo
(user): create them dynamically instead?

Experiments to use the timetable only to propagate the disjunctive.

We decided to create the propagators in this particular order, but it shouldn't matter much because of the different priorities used.

This one will not propagate anything if we added all precedence literals since the linear propagator will already do that in that case.

Only one direction is needed by this one.

Note
we keep this one even when there is just two intervals. This is because it might push a variable that is after both of the intervals using the fact that they are in disjunction.

Definition at line 39 of file disjunctive.cc.

◆ AddDisjunctiveWithBooleanPrecedencesOnly()

void operations_research::sat::AddDisjunctiveWithBooleanPrecedencesOnly ( const std::vector< IntervalVariable > & intervals,
Model * model )

Creates Boolean variables for all the possible precedences of the form (task i is before task j) and forces that, for each couple of task (i,j), either i is before j or j is before i. Do not create any other propagators.

Definition at line 155 of file disjunctive.cc.

◆ AddFullEncodingFromSearchBranching()

void operations_research::sat::AddFullEncodingFromSearchBranching ( const CpModelProto & model_proto,
Model * m )

Inspect the search strategy stored in the model, and adds a full encoding to variables appearing in a SELECT_MEDIAN_VALUE search strategy if the search branching is set to FIXED_SEARCH.

Definition at line 983 of file cp_model_loader.cc.

◆ AddInferedAndDeletedClauses()

bool operations_research::sat::AddInferedAndDeletedClauses ( const std::string & file_path,
DratChecker * drat_checker )

Adds to the given drat checker the infered and deleted clauses from the file at the given path, which must be in DRAT format. Returns true iff the file was successfully parsed.

Definition at line 564 of file drat_checker.cc.

◆ AddIntegerVariableFromIntervals()

void operations_research::sat::AddIntegerVariableFromIntervals ( SchedulingConstraintHelper * helper,
Model * model,
std::vector< IntegerVariable > * vars )

Cuts helpers.

Definition at line 1212 of file intervals.cc.

◆ AddIntProdCutGenerator()

void operations_research::sat::AddIntProdCutGenerator ( const ConstraintProto & ct,
int linearization_level,
Model * m,
LinearRelaxation * relaxation )

Cut generators.

Constraint is z == x * y.

We currently only support variables with non-negative domains.

Change signs to return to the case where all variables are a domain with non negative values only.

Definition at line 1443 of file linear_relaxation.cc.

◆ AddLinearConstraintMultiple()

bool operations_research::sat::AddLinearConstraintMultiple ( int64_t factor,
const ConstraintProto & to_add,
ConstraintProto * to_modify )

Does "to_modify += factor * to_add". Both constraint must be linear. Returns false and does not change anything in case of overflow.

Note
the enforcement literals (if any) are ignored and left untouched.

Copy to_modify terms.

Add factor * to_add and check first kind of overflow.

Merge terms, return false if we get an overflow here.

Copy terms.

Write new rhs. We want to be exact during the multiplication. Note that in practice this domain is fixed, so this will always be the case.

Definition at line 183 of file presolve_util.cc.

◆ AddLinearExpressionToLinearConstraint()

void operations_research::sat::AddLinearExpressionToLinearConstraint ( const LinearExpressionProto & expr,
int64_t coefficient,
LinearConstraintProto * linear )

Adds a linear expression proto to a linear constraint in place.

Important: The domain must already be set, otherwise the offset will be lost. We also do not do any duplicate detection, so the constraint might need presolving afterwards.

Definition at line 585 of file cp_model_utils.cc.

◆ AddLinMaxCutGenerator()

void operations_research::sat::AddLinMaxCutGenerator ( const ConstraintProto & ct,
Model * m,
LinearRelaxation * relaxation )
Todo
(user): Support linearization of general target expression.
Note
Cut generator requires all expressions to contain only positive vars.
Todo
(user): Move this out of here.

Add initial big-M linear relaxation. z_vars[i] == 1 <=> target = exprs[i].

Definition at line 1735 of file linear_relaxation.cc.

◆ AddLPConstraints()

IntegerVariable operations_research::sat::AddLPConstraints ( bool objective_need_to_be_tight,
const CpModelProto & model_proto,
Model * m )

Adds one LinearProgrammingConstraint per connected component of the model.

Non const as we will std::move() stuff out of there.

The bipartite graph of LP constraints might be disconnected: make a partition of the variables into connected components. Constraint nodes are indexed by [0..num_lp_constraints), variable nodes by [num_lp_constraints..num_lp_constraints+num_variables).

Todo
(user): look into biconnected components.
Todo
(user): Optimize memory layout.

Make sure any constraint that touch the objective is not discarded even if it is the only one in its component. This is important to propagate as much as possible the objective bound by using any bounds the LP give us on one of its components. This is critical on the zephyrus problems for instance.

Dispatch every constraint to its LinearProgrammingConstraint.

Load the constraint.

Dispatch every cut generator to its LinearProgrammingConstraint.

Add the objective.

First pass: set objective coefficients on the lp constraints, and store the cp terms in one vector per component.

Component is too small. We still need to store the objective term.

Second pass: Build the cp sub-objectives per component.

Register LP constraints. Note that this needs to be done after all the constraints have been added.

Definition at line 390 of file cp_model_solver_helpers.cc.

◆ AddMaxAffineCutGenerator()

void operations_research::sat::AddMaxAffineCutGenerator ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

If the target is constant, propagation is enough.

Definition at line 1086 of file linear_relaxation.cc.

◆ AddNonOverlappingRectangles()

void operations_research::sat::AddNonOverlappingRectangles ( const std::vector< IntervalVariable > & x,
const std::vector< IntervalVariable > & y,
Model * model )

Enforces that the boxes with corners in (x, y), (x + dx, y), (x, y + dy) and (x + dx, y + dy) do not overlap.

We must first check if the cumulative relaxation is possible.

Abort as the task would be conditioned by two literals.

We cannot use x_size as the demand of the cumulative based on the y_intervals.

We cannot use y_size as the demand of the cumulative based on the y_intervals.

Definition at line 174 of file diffn.cc.

◆ AddNoOverlap2dCutGenerator()

void operations_research::sat::AddNoOverlap2dCutGenerator ( const ConstraintProto & ct,
Model * m,
LinearRelaxation * relaxation )

Checks if at least one rectangle has a variable dimension or is optional.

Ignore absent rectangles.

Checks non-present intervals.

Checks variable sized intervals.

Definition at line 1668 of file linear_relaxation.cc.

◆ AddNoOverlapCutGenerator()

void operations_research::sat::AddNoOverlapCutGenerator ( SchedulingConstraintHelper * helper,
const std::optional< AffineExpression > & makespan,
Model * m,
LinearRelaxation * relaxation )

Checks if at least one rectangle has a variable size or is optional.

Definition at line 1645 of file linear_relaxation.cc.

◆ AddObjectiveConstraint()

bool operations_research::sat::AddObjectiveConstraint ( const LinearBooleanProblem & problem,
bool use_lower_bound,
Coefficient lower_bound,
bool use_upper_bound,
Coefficient upper_bound,
SatSolver * solver )

Adds the constraint that the objective is smaller or equals to the given upper bound.

Definition at line 349 of file boolean_problem.cc.

◆ AddObjectiveUpperBound()

bool operations_research::sat::AddObjectiveUpperBound ( const LinearBooleanProblem & problem,
Coefficient upper_bound,
SatSolver * solver )

Adds the constraint that the objective is smaller than the given upper bound.

Definition at line 341 of file boolean_problem.cc.

◆ AddOffsetAndScaleObjectiveValue()

double operations_research::sat::AddOffsetAndScaleObjectiveValue ( const LinearBooleanProblem & problem,
Coefficient v )
inline

Adds the offset and returns the scaled version of the given objective value.

Definition at line 39 of file boolean_problem.h.

◆ AddProblemClauses()

bool operations_research::sat::AddProblemClauses ( const std::string & file_path,
DratChecker * drat_checker )

Adds to the given drat checker the problem clauses from the file at the given path, which must be in DIMACS format. Returns true iff the file was successfully parsed.

Ignore empty and comment lines.

Definition at line 515 of file drat_checker.cc.

◆ AddProductTo()

bool operations_research::sat::AddProductTo ( IntegerValue a,
IntegerValue b,
IntegerValue * result )
inline

Computes result += a * b, and return false iff there is an overflow.

Definition at line 169 of file integer.h.

◆ AddReservoirConstraint()

void operations_research::sat::AddReservoirConstraint ( std::vector< AffineExpression > times,
std::vector< AffineExpression > deltas,
std::vector< Literal > presences,
int64_t min_level,
int64_t max_level,
Model * model )

Adds a reservoir constraint to the model. Note that to account for level not containing zero at time zero, we might needs to create an artificial fixed event.

This instantiate one or more ReservoirTimeTabling class to perform the propagation.

We only create a side if it can fail.

Definition at line 31 of file timetable.cc.

◆ AddRoutesCutGenerator()

void operations_research::sat::AddRoutesCutGenerator ( const ConstraintProto & ct,
Model * m,
LinearRelaxation * relaxation )

Definition at line 589 of file linear_relaxation.cc.

◆ AddSquareCutGenerator()

void operations_research::sat::AddSquareCutGenerator ( const ConstraintProto & ct,
int linearization_level,
Model * m,
LinearRelaxation * relaxation )

Constraint is square == x * x.

We currently only support variables with non-negative domains.

Change the sigh of x if its domain is non-positive.

Definition at line 1521 of file linear_relaxation.cc.

◆ AddTo()

bool operations_research::sat::AddTo ( IntegerValue a,
IntegerValue * result )
inline

Definition at line 160 of file integer.h.

◆ AddWeightedSumGreaterOrEqual()

void operations_research::sat::AddWeightedSumGreaterOrEqual ( absl::Span< const Literal > enforcement_literals,
absl::Span< const IntegerVariable > vars,
absl::Span< const int64_t > coefficients,
int64_t lower_bound,
Model * model )
inline

enforcement_literals => sum >= lower_bound

We just negate everything and use an <= constraint.

Definition at line 584 of file integer_expr.h.

◆ AddWeightedSumLowerOrEqual()

void operations_research::sat::AddWeightedSumLowerOrEqual ( absl::Span< const Literal > enforcement_literals,
absl::Span< const IntegerVariable > vars,
absl::Span< const int64_t > coefficients,
int64_t upper_bound,
Model * model )
inline

enforcement_literals => sum <= upper_bound

Linear1.

Detect precedences with 2 and 3 terms.

If value == min(expression), then we can avoid creating the sum.

Todo
(user): Deal with the case with no enforcement literal, in case the presolve was turned off?

Tricky: as we create integer literal, we might propagate stuff and the bounds might change, so if the expression_min increase with the bound we use, then the literal must be false.

Definition at line 468 of file integer_expr.h.

◆ AffineCoeffOneLowerOrEqualWithOffset()

std::function< void(Model *)> operations_research::sat::AffineCoeffOneLowerOrEqualWithOffset ( AffineExpression a,
AffineExpression b,
int64_t offset )
inline

a + offset <= b. (when a and b are of the form 1 * var + offset).

Definition at line 588 of file precedences.h.

◆ AllDifferentAC()

std::function< void(Model *)> operations_research::sat::AllDifferentAC ( const std::vector< IntegerVariable > & variables)

This constraint forces all variables to take different values. This is meant to be used as a complement to an alldifferent decomposition like AllDifferentBinary(): DO NOT USE WITHOUT ONE. Doing the filtering that the decomposition can do with an appropriate algorithm should be cheaper and yield more accurate explanations.

It uses the matching algorithm described in Regin at AAAI1994: "A filtering algorithm for constraints of difference in CSPs".

This will fully encode variables.

Definition at line 99 of file all_different.cc.

◆ AllDifferentBinary()

std::function< void(Model *)> operations_research::sat::AllDifferentBinary ( const std::vector< IntegerVariable > & vars)

Enforces that the given tuple of variables takes different values. This fully encodes all the variables and simply enforces a <= 1 constraint on each possible values.

Fully encode all the given variables and construct a mapping value -> List of literal each indicating that a given variable takes this value.

Note
we use a map to always add the constraints in the same order.

Add an at most one constraint for each value.

If the number of values is equal to the number of variables, we have a permutation. We can add a bool_or for each literals attached to a value.

Definition at line 38 of file all_different.cc.

◆ AllDifferentOnBounds() [1/2]

std::function< void(Model *)> operations_research::sat::AllDifferentOnBounds ( const std::vector< AffineExpression > & expressions)

Definition at line 72 of file all_different.cc.

◆ AllDifferentOnBounds() [2/2]

std::function< void(Model *)> operations_research::sat::AllDifferentOnBounds ( const std::vector< IntegerVariable > & vars)

Enforces that the given tuple of variables takes different values. Same as AllDifferentBinary() but use a different propagator that only enforce the so called "bound consistency" on the variable domains.

Compared to AllDifferentBinary() this doesn't require fully encoding the variables and it is also quite fast. Note that the propagation is different, this will not remove already taken values from inside a domain, but it will propagates more the domain bounds.

Definition at line 83 of file all_different.cc.

◆ AllValuesInDomain()

template<typename ProtoWithDomain >
std::vector< int64_t > operations_research::sat::AllValuesInDomain ( const ProtoWithDomain & proto)

Returns the list of values in a given domain. This will fail if the domain contains more than one millions values.

Todo
(user): work directly on the Domain class instead.

Definition at line 147 of file cp_model_utils.h.

◆ AnalyzeIntervals()

bool operations_research::sat::AnalyzeIntervals ( bool transpose,
absl::Span< const int > boxes,
absl::Span< const Rectangle > rectangles,
absl::Span< const IntegerValue > rectangle_energies,
IntegerValue * x_threshold,
IntegerValue * y_threshold,
Rectangle * conflict = nullptr )

A O(n^2) algorithm to analyze all the relevant X intervals and infer a threshold of the y size of a bounding box after which there is no point checking for energy overload.

Returns false on conflict, and fill the bounding box that caused the conflict.

If transpose is true, we analyze the relevant Y intervals instead.

First, we compute the possible x_min values (removing duplicates). We also sort the relevant tasks by their x_max.

Todo
(user): If the number of unique x_max is smaller than the number of unique x_min, it is better to do it the other way around.
Note
for the same end_max, the order change our heuristic to evaluate the max_conflict_height.

The maximum y dimension of a bounding area for which there is a potential conflict.

This is currently only used for logging.

All quantities at index j correspond to the interval [starts[j], x_max].

Sentinel.

Iterate over all boxes by increasing x_max values.

Add this box contribution to all the [starts[j], x_max] intervals.

If the new box is disjoint in y from the ones added so far, there cannot be a new conflict involving this box, so we skip until we add new boxes.

We have a conflict.

Because we currently do not have a conflict involving the new box, the only way to have one is to remove enough energy to reduce the y domain.

In this case, we need to remove at least old_energy_at_max to have a conflict.

If the new box height is above the conflict_height, do not count it now. We only need to consider conflict involving the new box.

Definition at line 226 of file diffn_util.cc.

◆ AppendAtMostOneRelaxation()

void operations_research::sat::AppendAtMostOneRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Definition at line 416 of file linear_relaxation.cc.

◆ AppendBoolAndRelaxation()

void operations_research::sat::AppendBoolAndRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation,
ActivityBoundHelper * activity_helper )
Todo
(user): These constraints can be many, and if they are not regrouped in big at most ones, then they should probably only added lazily as cuts. Regroup this with future clique-cut separation logic.
Note
for the case with only one enforcement, what we do below is already done by the clique merging code.

If we have many_literals => many_fixed literal, it is important to try to use a tight big-M if we can. This is important on neos-957323.pb.gz for instance.

We split the literal into disjoint AMO and we encode each with sum Not(literals) <= sum Not(enforcement)

Note
what we actually do is use the decomposition into at most one and add a constraint for each part rather than just adding the sum of them.
Todo
(user): More generally, do not miss the same structure if the bool_and was expanded into many clauses!
Todo
(user): It is not 100% clear that just not adding one constraint is worse. Relaxation is worse, but then we have less constraint.

Definition at line 347 of file linear_relaxation.cc.

◆ AppendBoolOrRelaxation()

void operations_research::sat::AppendBoolOrRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Definition at line 333 of file linear_relaxation.cc.

◆ AppendCircuitRelaxation()

void operations_research::sat::AppendCircuitRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Routing relaxation and cut generators.

Each node must have exactly one incoming and one outgoing arc (note that it can be the unique self-arc of this node too).

We separate the two constraints.

Definition at line 487 of file linear_relaxation.cc.

◆ AppendCumulativeRelaxationAndCutGenerator()

void operations_research::sat::AppendCumulativeRelaxationAndCutGenerator ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Adds linearization of cumulative constraints.The second part adds an energetic equation linking the duration of all potential tasks to the actual span * capacity of the cumulative constraint.

We remove the makespan data from the intervals the demands vector.

We try to linearize the energy of each task (size * demand).

We can now add the relaxation and the cut generators.

Definition at line 748 of file linear_relaxation.cc.

◆ AppendElementEncodingRelaxation()

void operations_research::sat::AppendElementEncodingRelaxation ( Model * m,
LinearRelaxation * relaxation )

If we have an exactly one between literals l_i, and each l_i => var == value_i, then we can add a strong linear relaxation: var = sum l_i * value_i.

This codes detect this and add the corresponding linear equations.

Todo
(user): We can do something similar with just an at most one, however it is harder to detect that if all literal are false then none of the implied value can be taken.

If the term has no view, we abort.

Definition at line 1785 of file linear_relaxation.cc.

◆ AppendExactlyOneRelaxation()

void operations_research::sat::AppendExactlyOneRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

We just encode the at most one part that might be partially linearized later.

Definition at line 425 of file linear_relaxation.cc.

◆ AppendLinearConstraintRelaxation()

void operations_research::sat::AppendLinearConstraintRelaxation ( const ConstraintProto & ct,
bool linearize_enforced_constraints,
Model * model,
LinearRelaxation * relaxation,
ActivityBoundHelper * activity_helper = nullptr )

Appends linear constraints to the relaxation. This also handles the relaxation of linear constraints with enforcement literals. A linear constraint lb <= ax <= ub with enforcement literals {ei} is relaxed as follows:

lb <= (Sum Negated(ei) * (lb - implied_lb)) + ax <= inf
-inf <= (Sum Negated(ei) * (ub - implied_ub)) + ax <= ub

where implied_lb and implied_ub are trivial lower and upper bounds of the constraint.
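As an illustration (the variables, bounds and literal e below are hypothetical, not from the source): for a single enforcement literal e and the constraint e => 2x + 3y >= 5 with x, y in [0, 10], the trivial implied lower bound of 2x + 3y is 0, so the relaxed form is

\[ 5 \le 5\,(1 - e) + 2x + 3y. \]

When e = 1 this is the original inequality; when e = 0 it is slack since 2x + 3y >= 0.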

Note
we ignore the holes in the domain.
Todo
(user): In LoadLinearConstraint() we already created intermediate Booleans for each disjoint interval, we should reuse them here if possible.
Todo
(user): process the "at most one" part of a == 1 separately?

Reified version.

We linearize fully reified constraints of size 1 all together for a given variable. But we need to process half-reified ones or constraints with more than one enforcement literal.

Todo
(user): Use cleaner "already loaded" logic, and mark as such constraint already encoded by code like AppendRelaxationForEqualityEncoding().

Compute min/max activity.

Everything here should have a view.

And(ei) => terms >= rhs_domain_min <=> Sum_i (~ei * (rhs_domain_min - min_activity)) + terms >= rhs_domain_min

And(ei) => terms <= rhs_domain_max <=> Sum_i (~ei * (rhs_domain_max - max_activity)) + terms <= rhs_domain_max

Definition at line 1184 of file linear_relaxation.cc.

◆ AppendLinMaxRelaxationPart1()

void operations_research::sat::AppendLinMaxRelaxationPart1 ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Adds linearization of int max constraints. Returns a vector of z vars such that: z_vars[l] == 1 <=> target = exprs[l].

Consider the Lin Max constraint with d expressions and n variables in the form:

target = max { exprs[l] = Sum_i (w_li * x_i) + b_l }, for l in {1,..,d},

where L_i and U_i are the lower and upper bounds of x_i. Let z_l be in {0,1} for all l in {1,..,d}, with target = exprs[l] when z_l = 1.

The following is a valid linearization for Lin Max:

target >= exprs[l], for all l in {1,..,d}
target <= Sum_i (w_ki * x_i) + Sum_l ((N_kl + b_l) * z_l), for all k in {1,..,d}

where N_kl is a large number defined as:

N_kl = Sum_i max((w_li - w_ki) * L_i, (w_li - w_ki) * U_i)
     = Sum_i (max corner difference for variable i, target expr k, max expr l)

Reference: "Strong mixed-integer programming formulations for trained neural networks" by Ross Anderson et al. (https://arxiv.org/pdf/1811.01988.pdf).
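A small worked instance (illustrative only, not from the source): target = max(x, 2x - 5) with x in [0, 10]. For k = 1 (expression x) and l = 2 (expression 2x - 5), N_12 = max((2-1)*0, (2-1)*10) = 10, so the k = 1 upper-bound constraint reads

\[ \text{target} \le x + (0 + 0)\,z_1 + (10 - 5)\,z_2 = x + 5\,z_2, \]

which is valid since 2x - 5 <= x + 5 on [0, 10], and tight for z_2 = 1 at x = 10.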

Todo
(user): Support linear expression as target.

We want to linearize target = max(exprs[1], exprs[2], ..., exprs[d]). Part 1: Encode target >= max(exprs[1], exprs[2], ..., exprs[d])

Definition at line 1044 of file linear_relaxation.cc.

◆ AppendLinMaxRelaxationPart2()

void operations_research::sat::AppendLinMaxRelaxationPart2 ( IntegerVariable target,
const std::vector< Literal > & alternative_literals,
const std::vector< LinearExpression > & exprs,
Model * model,
LinearRelaxation * relaxation )

Part 2: Encode upper bound on X.

Add linking constraint to the CP solver sum zi = 1 and for all i, zi => max = expr_i.

First add the CP constraints.

For the relaxation, we use different constraints with a stronger linear relaxation, as explained in the .h file.

Cache coefficients.

Todo
(user): Remove hash_map ?

Definition at line 1110 of file linear_relaxation.cc.

◆ AppendMaxAffineRelaxation()

void operations_research::sat::AppendMaxAffineRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )
Todo
(user): experiment with: 1) remove this code 2) keep this code 3) remove this code and create the cut generator at level 1.
Note
This only works if all affine expressions share the same variable.

Definition at line 1066 of file linear_relaxation.cc.

◆ AppendNoOverlap2dRelaxation()

void operations_research::sat::AppendNoOverlap2dRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Adds the energetic relaxation sum(areas) <= bounding box area.

We have only one active literal.

Not including the term if we don't have a view is ok.

Definition at line 973 of file linear_relaxation.cc.

◆ AppendNoOverlapRelaxationAndCutGenerator()

void operations_research::sat::AppendNoOverlapRelaxationAndCutGenerator ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Adds linearization of no overlap constraints. It adds an energetic equation linking the duration of all potential tasks to the actual span of the no overlap constraint.

Definition at line 711 of file linear_relaxation.cc.

◆ AppendPairwiseRestrictions() [1/2]

void operations_research::sat::AppendPairwiseRestrictions ( absl::Span< const ItemForPairwiseRestriction > items,
absl::Span< const ItemForPairwiseRestriction > other_items,
std::vector< PairwiseRestriction > * result )

Same as above, but test items against other_items and append the restrictions found to result.

Definition at line 633 of file diffn_util.cc.

◆ AppendPairwiseRestrictions() [2/2]

void operations_research::sat::AppendPairwiseRestrictions ( absl::Span< const ItemForPairwiseRestriction > items,
std::vector< PairwiseRestriction > * result )

Finds pairs of items that are either in conflict or could have their ranges shrunk to avoid conflict.

Definition at line 623 of file diffn_util.cc.

◆ AppendPartialGreaterThanEncodingRelaxation()

void operations_research::sat::AppendPartialGreaterThanEncodingRelaxation ( IntegerVariable var,
const Model & model,
LinearRelaxation * relaxation )

This is a different relaxation that uses a partial set of literals li such that (li <=> var >= xi). In which case we use the following encoding:

  • li >= l_{i+1} for all possible i. Note that the xi need to be sorted.
  • var >= min + l0 * (x0 - min) + Sum_{i>0} li * (xi - x_{i-1})
  • and same as above for NegationOf(var) for the upper bound.
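For illustration (the domain and literals below are hypothetical, not from the source): with var in [0, 10] and literals l0 <=> var >= 3 and l1 <=> var >= 7, the encoding adds l0 >= l1 and

\[ \text{var} \ge 0 + 3\,l_0 + 4\,l_1, \]

so l1 = 1 forces l0 = 1 and hence var >= 7.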

Like for AppendRelaxationForEqualityEncoding() we skip any li that do not have an integer view.

Start by the var >= side. And also add the implications between used literals.

Skip the entry if the literal doesn't have a view.

Add var <= prev_var, which is the same as var + not(prev_var) <= 1

Note
by construction, this shouldn't be able to overflow.

Do the same for the var <= side by using NegationOfVar().

Note
we do not need to add the implications between literals again.

Skip the entry if the literal doesn't have a view.

Note
by construction, this shouldn't be able to overflow.

Definition at line 262 of file linear_relaxation.cc.

◆ AppendRelaxationForEqualityEncoding()

void operations_research::sat::AppendRelaxationForEqualityEncoding ( IntegerVariable var,
const Model & model,
LinearRelaxation * relaxation,
int * num_tight,
int * num_loose )

Looks at all the encoding literals (li <=> var == value_i) that have a view and adds a linear relaxation of their relationship with var.

If the encoding is full, we can just add:

  • Sum li == 1
  • var == min_value + Sum li * (value_i - min_value)

When the set of such encoding literals does not cover the full domain of var, we do something a bit more involved. Let min_not_encoded/max_not_encoded be the min and max values of the domain of var that are NOT part of the encoding. We add:

  • Sum li <= 1
  • var >= (Sum li * value_i) + (1 - Sum li) * min_not_encoded
  • var <= (Sum li * value_i) + (1 - Sum li) * max_not_encoded

Note the special case where min_not_encoded == max_not_encoded, which more or less reduces to the full encoding, except with a different "rhs" value.

We also increment the corresponding counter if we added something. We consider the relaxation "tight" if the encoding was full or if min_not_encoded == max_not_encoded.
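A small illustration (the domain and literals are hypothetical, not from the source): var with domain [0, 10] and view-bearing encoding literals l_2 (var == 2) and l_5 (var == 5) gives min_not_encoded = 0 and max_not_encoded = 10, so we add

\[ l_2 + l_5 \le 1,\qquad \text{var} \ge 2\,l_2 + 5\,l_5,\qquad \text{var} \le 2\,l_2 + 5\,l_5 + 10\,(1 - l_2 - l_5). \]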

Note
we skip pairs that do not have an Integer view.
Todo
(user): PartialDomainEncoding() filter pair corresponding to literal set to false, however the initial variable Domain is not always updated. As a result, these min/max can be larger than in reality. Try to fix this even if in practice this is a rare occurrence, as the presolve should have propagated most of what we can.

This means that there are no non-encoded values and we have a full encoding. We subtract the minimum value to reduce its size.

It is possible that the linear1 encoding respects our overflow precondition but not the Var = sum bool * value one. In this case, we just don't encode it this way. Hopefully, most normal models will not run into this.

In this special case, the two constraints below can be merged into an equality: var = rhs + sum l_i * (value_i - rhs).

min + sum l_i * (value_i - min) <= var.

Note
this might overflow in corner cases, so we need to prevent that.

var <= max + sum l_i * (value_i - max).

Note
this might overflow in corner cases, so we need to prevent that.
empty/trivial constraints will be filtered later.

Definition at line 136 of file linear_relaxation.cc.

◆ AppendRoutesRelaxation()

void operations_research::sat::AppendRoutesRelaxation ( const ConstraintProto & ct,
Model * model,
LinearRelaxation * relaxation )

Each node except node zero must have exactly one incoming and one outgoing arc (note that it can be the unique self-arc of this node too). For node zero, the number of incoming arcs should be the same as the number of outgoing arcs.

We separate the two constraints.

Definition at line 525 of file linear_relaxation.cc.

◆ AppendSquareRelaxation()

void operations_research::sat::AppendSquareRelaxation ( const ConstraintProto & ct,
Model * m,
LinearRelaxation * relaxation )

Constraint is square == x * x.

We currently only support variables with non-negative domains.

Change the sign of x if its domain is non-positive.

Check for potential overflows.

Todo
(user): We could add all or some below_hyperplans.

The hyperplan will use x_ub - 1 and x_ub.

Definition at line 1479 of file linear_relaxation.cc.

◆ AppendVariablesFromCapacityAndDemands()

void operations_research::sat::AppendVariablesFromCapacityAndDemands ( const AffineExpression & capacity,
SchedulingDemandHelper * demands_helper,
Model * model,
std::vector< IntegerVariable > * vars )

Definition at line 1238 of file intervals.cc.

◆ ApplyLiteralMapping()

bool operations_research::sat::ApplyLiteralMapping ( const util_intops::StrongVector< LiteralIndex, LiteralIndex > & mapping,
std::vector< LiteralWithCoeff > * cst,
Coefficient * bound_shift,
Coefficient * max_value )

Maps all the literals of the given constraint using the given mapping. The mapping may map a literal index to kTrueLiteralIndex or kFalseLiteralIndex in which case the literal will be considered fixed to the appropriate value.

Note
this function also canonicalizes the constraint and updates bound_shift and max_value like ComputeBooleanLinearExpressionCanonicalForm() does.

Finally, this will return false if some integer overflow or underflow occurred during the constraint simplification.

Nothing to do if the literal is false.

Definition at line 117 of file pb_constraint.cc.

◆ ApplyLiteralMappingToBooleanProblem()

void operations_research::sat::ApplyLiteralMappingToBooleanProblem ( const util_intops::StrongVector< LiteralIndex, LiteralIndex > & mapping,
LinearBooleanProblem * problem )

Maps all the literals of the problem. Note that this converts the cost of a variable correctly, that is, if a variable with a cost is mapped to another, the cost of the latter is updated.

Preconditions: the mapping must map l and not(l) to the same variable and be of the correct size. It can also map a literal index to kTrueLiteralIndex or kFalseLiteralIndex in order to fix the variable.

First the objective.

Now the clauses.

Add bound_shift to the bounds and remove a bound if it is now trivial.

This is because ApplyLiteralMapping makes all coefficients positive.

If the constraint is always true, we just leave it empty.

Remove empty constraints.

Computes the new number of variables and sets it.

Todo
(user): The names are currently all scrambled. Do something about it so that non-fixed variables keep their names.

Definition at line 755 of file boolean_problem.cc.

◆ ApplyToAllIntervalIndices()

void operations_research::sat::ApplyToAllIntervalIndices ( const std::function< void(int *)> & f,
ConstraintProto * ct )

Definition at line 368 of file cp_model_utils.cc.

◆ ApplyToAllLiteralIndices()

void operations_research::sat::ApplyToAllLiteralIndices ( const std::function< void(int *)> & f,
ConstraintProto * ct )

Definition at line 207 of file cp_model_utils.cc.

◆ ApplyToAllVariableIndices()

void operations_research::sat::ApplyToAllVariableIndices ( const std::function< void(int *)> & function,
ConstraintProto * ct )

Applies the given function to all variables/literals/intervals indices of the constraint. This function is used in a few places to have a "generic" code dealing with constraints.

Definition at line 270 of file cp_model_utils.cc.

◆ ApplyVariableMapping()

void operations_research::sat::ApplyVariableMapping ( const std::vector< int > & mapping,
const PresolveContext & context )

Replaces all the instances of a variable i (and the literals referring to it) by mapping[i]. The definition of variable i is also moved to its new index. Variables with a negative mapping value are ignored and it is an error if such a variable is referenced anywhere (this is CHECKed).

The image of the mapping should be dense in [0, new_num_variables), this is also CHECKed.

Remap all the variable/literal references in the constraints and the enforcement literals in the variables.

Remap the objective variables.

Remap the assumptions.

Remap the search decision heuristic.

Note
we delete any heuristic related to a removed variable.

Remove strategy with empty affine expression.

Remap the solution hint. Note that after remapping, we may have duplicate variables, so we only keep the first occurrence.

We always move a hint within bounds. This also makes sure a hint of INT_MIN or INT_MAX does not overflow.

Note
if (hinted_value - r.offset) is not divisible by r.coeff, then the hint is clearly infeasible, but we still set it to a "close" value.

Move the variable definitions.

Check that all variables are used.

Definition at line 13057 of file cp_model_presolve.cc.

◆ AtMinOrMaxInt64I()

bool operations_research::sat::AtMinOrMaxInt64I ( IntegerValue t)
inline

Definition at line 121 of file integer.h.

◆ AtMinValue()

IntegerLiteral operations_research::sat::AtMinValue ( IntegerVariable var,
IntegerTrail * integer_trail )

Returns the decision corresponding to var at its lower bound. Returns an invalid literal if the variable is fixed.

Definition at line 57 of file integer_search.cc.

◆ AtMostOneConstraint()

std::function< void(Model *)> operations_research::sat::AtMostOneConstraint ( const std::vector< Literal > & literals)
inline

Definition at line 919 of file sat_solver.h.

◆ AtMostOneDecomposition()

std::vector< absl::Span< int > > operations_research::sat::AtMostOneDecomposition ( const std::vector< std::vector< int > > & graph,
absl::BitGenRef random,
std::vector< int > * buffer )

Assuming n "literal" in [0, n), and a graph such that graph[i] list the literal in [0, n) implied to false when the literal with index i is true, this returns an heuristic decomposition of the literals into disjoint at most ones.

Note(user): Symmetrize the matrix if it is not already symmetric, maybe rephrase in terms of an undirected graph and clique decomposition.
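To make the idea concrete, here is a minimal greedy sketch of such a decomposition. It is not the library's algorithm (which also uses randomization and a reusable buffer); it assumes the adjacency lists are sorted and the conflict graph is symmetric, and every helper name below is hypothetical.

#include <algorithm>
#include <vector>

// conflicts[i] lists (sorted) the literals forced to false when literal i is true.
std::vector<std::vector<int>> GreedyAmoDecomposition(
    const std::vector<std::vector<int>>& conflicts) {
  const int n = static_cast<int>(conflicts.size());
  std::vector<bool> used(n, false);
  std::vector<std::vector<int>> result;
  for (int i = 0; i < n; ++i) {
    if (used[i]) continue;
    std::vector<int> amo = {i};
    used[i] = true;
    for (const int j : conflicts[i]) {
      if (j < 0 || j >= n || used[j]) continue;
      // j can join only if it conflicts with every literal already in `amo`,
      // so that the whole set stays a valid at most one.
      bool compatible = true;
      for (const int k : amo) {
        if (k == i) continue;  // j is in conflicts[i] by construction.
        if (!std::binary_search(conflicts[k].begin(), conflicts[k].end(), j)) {
          compatible = false;
          break;
        }
      }
      if (compatible) {
        amo.push_back(j);
        used[j] = true;
      }
    }
    result.push_back(amo);
  }
  return result;
}

Literals without conflicts end up in singleton (trivial) at most ones, which matches the fact that the decomposition must cover all literals.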

Definition at line 998 of file util.cc.

◆ BasicOrbitopeExtraction()

std::vector< std::vector< int > > operations_research::sat::BasicOrbitopeExtraction ( absl::Span< const std::unique_ptr< SparsePermutation > > generators)

Given the generators of a permutation group of [0, n-1], tries to identify a grouping of the variables in a p x q matrix such that any permutation of the columns of this matrix is in the given group.

The name comes from: "Packing and Partitioning Orbitopes", Volker Kaibel, Marc E. Pfetsch, https://arxiv.org/abs/math/0603678 . Here we just detect it, independently of the constraints on the variables in this matrix. We can also detect non-Boolean orbitope.

In order to detect an orbitope, this basic algorithm requires that the generators of the orbitope only contain one or more 2-cycles (i.e. transpositions). Thus they must be involutions. The list of transpositions in the SparsePermutation must also be listed in a canonical order.

Todo
(user): Detect more than one orbitope? Note that once detected, the structure can be exploited efficiently, but for now, a more "generic" algorithm based on stabilizers should achieve the same preprocessing power, so I don't know how much we need to invest in orbitope detection.
Todo
(user): The heuristic is quite limited for now, but this works on graph20-20-1rand.mps.gz. I suspect the generators provided by the detection code follow our preconditions.

Count the number of permutations that are compositions of 2-cycles and regroup them according to the number of cycles.

Heuristic: we try to grow the orbitope that has the most potential for fixing variables.

Todo
(user): We could grow each and keep the real maximum.

We will track the elements already added so we never have duplicates.

Greedily grow the orbitope.

Start using the first permutation.

We want to find a column such that g sends it to variables not already in the orbitope matrix.

Note(user): This relies on the cycles in each permutation being ordered by smallest element first. This way we don't have to account for any row permutation of the orbitope matrix. The code that detects the symmetries of the problem should already return permutations in this canonical format.

Extract the two elements of this transposition.

We want one element to appear in matching_column_index and the other to not appear at all.

If grow is of full size, we can extend the orbitope.

Definition at line 30 of file symmetry_util.cc.

◆ BooleanLinearConstraint()

std::function< void(Model *)> operations_research::sat::BooleanLinearConstraint ( int64_t lower_bound,
int64_t upper_bound,
std::vector< LiteralWithCoeff > * cst )
inline

Model based functions.

Todo
(user): move them in another file, and unit-test them.

Definition at line 880 of file sat_solver.h.

◆ BooleanLinearExpressionIsCanonical()

bool operations_research::sat::BooleanLinearExpressionIsCanonical ( absl::Span< const LiteralWithCoeff > cst)

Returns true iff the Boolean linear expression is in canonical form.

Todo
(user): Also check for no duplicate literals + unit tests.

Definition at line 150 of file pb_constraint.cc.

◆ BooleanProblemToCpModelproto()

CpModelProto operations_research::sat::BooleanProblemToCpModelproto ( const LinearBooleanProblem & problem)

Converts a LinearBooleanProblem to a CpModelProto, which should eventually completely replace the LinearBooleanProblem proto.

Note
the new format is slightly different.

The term was coeff * (1 - var).

Definition at line 164 of file boolean_problem.cc.

◆ BoolPseudoCostHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::BoolPseudoCostHeuristic ( Model * model)

Variant used for LbTreeSearch experimentation. Note that each decision is in O(num_variables), but it is kind of ok with LbTreeSearch as we only call this for "new" decisions, not when we move around in the tree.

Only look at non-fixed booleans.

Get associated literal.

Definition at line 203 of file integer_search.cc.

◆ BoxesAreInEnergyConflict()

bool operations_research::sat::BoxesAreInEnergyConflict ( const std::vector< Rectangle > & rectangles,
const std::vector< IntegerValue > & energies,
absl::Span< const int > boxes,
Rectangle * conflict = nullptr )

Visible for testing. The algo is in O(n^4) so shouldn't be used directly. Returns true if there exists a bounding box with too much energy.

First consider all relevant intervals along the x axis.

Redo the same on the y coordinate for the current x interval which is [starts[j], x_max].

Definition at line 159 of file diffn_util.cc.

◆ BruteForceOrthogonalPacking()

BruteForceResult operations_research::sat::BruteForceOrthogonalPacking ( absl::Span< const IntegerValue > sizes_x,
absl::Span< const IntegerValue > sizes_y,
std::pair< IntegerValue, IntegerValue > bounding_box_size,
int max_complexity )

It is unlikely that preprocessing will remove half of the items, so don't lose time trying.

VLOG_EVERY_N_SEC(3, 3) << "Found a feasible packing by brute force. Dot:\n " << RenderDot(bounding_box_size, result);

Definition at line 640 of file 2d_packing_brute_force.cc.

◆ BuildMaxAffineUpConstraint()

bool operations_research::sat::BuildMaxAffineUpConstraint ( const LinearExpression & target,
IntegerVariable var,
const std::vector< std::pair< IntegerValue, IntegerValue > > & affines,
Model * model,
LinearConstraintBuilder * builder )

Helper for the affine max constraint.

This function will reset the bounds of the builder.

target <= y_at_min + (delta_y / delta_x) * (var - x_min)
delta_x * target <= delta_x * y_at_min + delta_y * (var - x_min)
-delta_y * var + delta_x * target <= delta_x * y_at_min - delta_y * x_min

Checks the rhs for overflows.

Checks target * delta_x for overflow.

Prevent to create constraints that can overflow.

Definition at line 2648 of file cuts.cc.

◆ CanonicalizeExpr()

LinearExpression operations_research::sat::CanonicalizeExpr ( const LinearExpression & expr)

Returns the same expression in the canonical form (all positive coefficients).

Definition at line 380 of file linear_constraint.cc.

◆ CanonicalizeLinearExpressionInternal()

template<typename ProtoWithVarsAndCoeffs >
bool operations_research::sat::CanonicalizeLinearExpressionInternal ( absl::Span< const int > enforcements,
ProtoWithVarsAndCoeffs * proto,
int64_t * offset,
std::vector< std::pair< int, int64_t > > * tmp_terms,
PresolveContext * context )

First regroup the terms on the same variables and sum the fixed ones.

Todo
(user): Add a quick pass to skip most of the work below if the constraint is already in canonical form?

Remove fixed variables and take the affine representative.

Note
we need to do that before we test for equality with an enforcement (they should already have been mapped).
Todo
(user): Avoid the quadratic loop for the corner case of many enforcement literal (this should be pretty rare though).

If the constraint is enforced, we can assume the variable is at 1.

We can assume the variable is at zero.

Definition at line 2340 of file presolve_context.cc.

◆ CapAddI()

IntegerValue operations_research::sat::CapAddI ( IntegerValue a,
IntegerValue b )
inline

Definition at line 113 of file integer.h.

◆ CapProdI()

IntegerValue operations_research::sat::CapProdI ( IntegerValue a,
IntegerValue b )
inline

Overflows and saturated arithmetic.

Definition at line 105 of file integer.h.

◆ CapSubI()

IntegerValue operations_research::sat::CapSubI ( IntegerValue a,
IntegerValue b )
inline

Definition at line 109 of file integer.h.

◆ CardinalityConstraint()

std::function< void(Model *)> operations_research::sat::CardinalityConstraint ( int64_t lower_bound,
int64_t upper_bound,
const std::vector< Literal > & literals )
inline

Definition at line 890 of file sat_solver.h.

◆ CeilOfRatio()

template<typename IntType >
IntType operations_research::sat::CeilOfRatio ( IntType numerator,
IntType denominator )

Definition at line 729 of file util.h.

◆ CeilOrFloorOfRatio()

template<typename IntType , bool ceil>
IntType operations_research::sat::CeilOrFloorOfRatio ( IntType numerator,
IntType denominator )

These functions are copied from MathUtils. However, the original ones are incompatible with absl::int128 as MathLimits<absl::int128>::kIsInteger == false.
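To illustrate the intended rounding directions, here is a minimal standalone sketch (not the library's code), assuming a positive denominator; the function names are hypothetical.

#include <cstdint>

int64_t FloorOfRatioSketch(int64_t num, int64_t den) {
  // C++ integer division truncates toward zero, so shift inexact negative
  // ratios down by one to get the mathematical floor.
  int64_t q = num / den;
  if (num % den != 0 && num < 0) --q;
  return q;
}

int64_t CeilOfRatioSketch(int64_t num, int64_t den) {
  // Symmetrically, shift inexact positive ratios up by one.
  int64_t q = num / den;
  if (num % den != 0 && num > 0) ++q;
  return q;
}

// Examples: CeilOfRatioSketch(7, 2) == 4, FloorOfRatioSketch(7, 2) == 3,
//           CeilOfRatioSketch(-7, 2) == -3, FloorOfRatioSketch(-7, 2) == -4.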

Definition at line 709 of file util.h.

◆ CeilRatio()

IntegerValue operations_research::sat::CeilRatio ( IntegerValue dividend,
IntegerValue positive_divisor )
inline

Definition at line 85 of file integer.h.

◆ CeilSquareRoot()

int64_t operations_research::sat::CeilSquareRoot ( int64_t a)
Todo
(user): Find better implementation?

Definition at line 265 of file util.cc.

◆ ChangeLargeBoundsToInfinity()

void operations_research::sat::ChangeLargeBoundsToInfinity ( double max_magnitude,
MPModelProto * mp_model,
SolverLogger * logger )

This function changes bounds of variables or constraints that have a magnitude greater than mip_max_valid_magnitude.

Definition at line 237 of file lp_utils.cc.

◆ ChangeOptimizationDirection()

void operations_research::sat::ChangeOptimizationDirection ( LinearBooleanProblem * problem)

Keeps the same objective but changes the optimization direction from a minimization problem to a maximization problem.

Ex: if the problem was to minimize 2 + x, the new problem will be to maximize 2 + x subject to exactly the same constraints.

We need 'auto' here to keep the open-source compilation happy (it uses the public protobuf release).

Definition at line 221 of file boolean_problem.cc.

◆ ChooseBestObjectiveValue()

IntegerLiteral operations_research::sat::ChooseBestObjectiveValue ( IntegerVariable var,
Model * model )

If a variable appears in the objective, branch on its best objective value.

Definition at line 64 of file integer_search.cc.

◆ CircuitCovering()

std::function< void(Model *)> operations_research::sat::CircuitCovering ( const std::vector< std::vector< Literal > > & graph,
const std::vector< int > & distinguished_nodes )

Definition at line 697 of file circuit.cc.

◆ ClauseConstraint()

std::function< void(Model *)> operations_research::sat::ClauseConstraint ( absl::Span< const Literal > literals)
inline

Definition at line 933 of file sat_solver.h.

◆ ClauseIsEnforcementImpliesLiteral()

bool operations_research::sat::ClauseIsEnforcementImpliesLiteral ( absl::Span< const int > clause,
absl::Span< const int > enforcement,
int literal )
inline

Specific function. Returns true if the negation of all literals in clause except literal is exactly equal to the literal of enforcement.

We assume that enforcement and negated(clause) are sorted lexicographically, or negated(enforcement) and clause. Both options work. If not, we will only return false more often. When we return true, the property is enforced.

Todo
(user): For the same complexity, we do not need to specify literal and can recover it.

Definition at line 331 of file presolve_util.h.

◆ CleanTermsAndFillConstraint() [1/2]

void operations_research::sat::CleanTermsAndFillConstraint ( std::vector< std::pair< IntegerVariable, IntegerValue > > * terms,
LinearConstraint * output )
inline

Sort and add coeff of duplicate variables. Note that a variable and its negation will appear one after another in the natural order.

Definition at line 368 of file linear_constraint.h.

◆ CleanTermsAndFillConstraint() [2/2]

void operations_research::sat::CleanTermsAndFillConstraint ( std::vector< std::pair< IntegerVariable, IntegerValue > > * terms,
LinearExpression * output )
inline

Sorts and merges duplicate IntegerVariable in the given "terms". Fills the given LinearConstraint or LinearExpression with the result.

Sort and add coeff of duplicate variables. Note that a variable and its negation will appear one after another in the natural order.

Definition at line 337 of file linear_constraint.h.

◆ ClosestMultiple()

int64_t operations_research::sat::ClosestMultiple ( int64_t value,
int64_t base )

Returns the multiple of base closest to value. If there is a tie, we return the one closest to zero. This way we have ClosestMultiple(x) = -ClosestMultiple(-x) which is important for how this is used.
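A minimal standalone sketch (not the library's implementation) of this rule, assuming base > 0 and that |value| fits in an int64_t; the function name is hypothetical.

#include <cstdint>

int64_t ClosestMultipleSketch(int64_t value, int64_t base) {
  if (value < 0) return -ClosestMultipleSketch(-value, base);
  const int64_t down = (value / base) * base;  // Largest multiple <= value.
  const int64_t up = down + base;              // Smallest multiple > value.
  // Keep `down` on a tie: it is the candidate closest to zero.
  return (up - value < value - down) ? up : down;
}

// ClosestMultipleSketch(8, 5) == 10, ClosestMultipleSketch(7, 5) == 5,
// ClosestMultipleSketch(5, 10) == 0 == -ClosestMultipleSketch(-5, 10).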

Definition at line 273 of file util.cc.

◆ CombineSeed()

int operations_research::sat::CombineSeed ( int base_seed,
int64_t delta )

We assume delta >= 0 and we only use the low bit of delta.

Definition at line 938 of file cp_model_utils.cc.

◆ CompleteHeuristics()

std::vector< std::function< BooleanOrIntegerLiteral()> > operations_research::sat::CompleteHeuristics ( absl::Span< const std::function< BooleanOrIntegerLiteral()> > incomplete_heuristics,
const std::function< BooleanOrIntegerLiteral()> & completion_heuristic )

Concatenates each input_heuristic with a default heuristic that instantiates all the problem's Boolean variables, into a new vector.

Definition at line 1307 of file integer_search.cc.

◆ CompressTuples()

void operations_research::sat::CompressTuples ( absl::Span< const int64_t > domain_sizes,
std::vector< std::vector< int64_t > > * tuples )

Remove duplicates if any.

Definition at line 454 of file util.cc.

◆ ComputeActivity()

double operations_research::sat::ComputeActivity ( const LinearConstraint & constraint,
const util_intops::StrongVector< IntegerVariable, double > & values )

Returns the activity of the given constraint. That is the current value of the linear terms.

Definition at line 165 of file linear_constraint.cc.

◆ ComputeBooleanLinearExpressionCanonicalForm()

bool operations_research::sat::ComputeBooleanLinearExpressionCanonicalForm ( std::vector< LiteralWithCoeff > * cst,
Coefficient * bound_shift,
Coefficient * max_value )

Puts the given Boolean linear expression in canonical form:

  • Merge all the literals corresponding to the same variable.
  • Remove zero coefficients.
  • Make all the coefficients positive.
  • Sort the terms by increasing coefficient values.

This function also computes:

  • max_value: the maximum possible value of the formula.
  • bound_shift: which allows updating the initial bounds. That is, if an initial pseudo-Boolean constraint was lhs < initial_pb_formula < rhs then the new one is: lhs + bound_shift < canonical_form < rhs + bound_shift

Finally, this will return false if some integer overflow or underflow occurred during the reduction to the canonical form.
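A small worked example (illustrative only, not from the source): the expression 3 x1 - 2 x2 + x1 first merges to 4 x1 - 2 x2; rewriting -2 x2 as 2 not(x2) - 2 and sorting by increasing coefficient gives the canonical form

\[ 2\,\lnot x_2 + 4\,x_1, \qquad \text{bound\_shift} = 2, \qquad \text{max\_value} = 6, \]

so an initial constraint 1 <= expr <= 5 becomes 3 <= canonical <= 7, whose upper bound is then trivially satisfied.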

Note(user): For some reason, the IntType checking doesn't work here ?! that is a bit worrying, but the code seems to behave correctly.

First, sort by literal to remove duplicate literals. This also removes terms with a zero coefficient.

Here current_literal is equal to (1 - representative).

Then, make all coefficients positive by replacing a term "-c x" into "c(1-x) - c" which is the same as "c(not x) - c".

Finally sort by increasing coefficients.

Definition at line 55 of file pb_constraint.cc.

◆ ComputeCanonicalRhs()

Coefficient operations_research::sat::ComputeCanonicalRhs ( Coefficient upper_bound,
Coefficient bound_shift,
Coefficient max_value )

From a constraint 'expr <= ub' and the result (bound_shift, max_value) of calling ComputeBooleanLinearExpressionCanonicalForm() on 'expr', this returns a new rhs such that 'canonical expression <= rhs' is an equivalent constraint. This function deals with all the possible overflow corner cases.

The result will be in [-1, max_value] where -1 means unsatisfiable and max_value means trivially satisfiable.

Positive overflow. The constraint is trivially true. This is because the canonical linear expression is in [0, max_value].

Negative overflow. The constraint is infeasible.

Definition at line 174 of file pb_constraint.cc.

◆ ComputeCoreMinWeight()

Coefficient operations_research::sat::ComputeCoreMinWeight ( const std::vector< EncodingNode * > & nodes,
const std::vector< Literal > & core )

Returns the minimum weight of the nodes in the core. Note that the literals in the core must appear in the same order as the ones in nodes.

Definition at line 560 of file encoding.cc.

◆ ComputeEnergyMinInWindow()

IntegerValue operations_research::sat::ComputeEnergyMinInWindow ( IntegerValue start_min,
IntegerValue start_max,
IntegerValue end_min,
IntegerValue end_max,
IntegerValue size_min,
IntegerValue demand_min,
absl::Span< const LiteralValueValue > filtered_energy,
IntegerValue window_start,
IntegerValue window_end )

Utilities

Returns zero if the intervals do not necessarily overlap.

Definition at line 865 of file intervals.cc.

◆ ComputeGomoryHuTree()

std::vector< int > operations_research::sat::ComputeGomoryHuTree ( int num_nodes,
const std::vector< ArcWithLpValue > & relevant_arcs )

Given a set of arcs on a directed graph with n nodes (in [0, num_nodes)), returns a "parent" vector of size n encoding a rooted Gomory-Hu tree.

Note
usually each edge in the tree is attached a max-flow value (its weight), but we don't need it here. It can be added if needed. This tree has the property that for all (s, t) pairs of nodes, if you take the minimum weight edge on the path from s to t and split the tree in two, then this is a min-cut for that pair.

IMPORTANT: This algorithm currently "symmetrizes" the graph, so we will actually have all the min-cuts that minimize sum incoming + sum outgoing lp values. The algo does not work as is on an asymmetric graph. Note however that because of flow conservation, our outgoing lp values should be the same as our incoming ones on a circuit/route constraint.

We use a simple implementation described in "Very Simple Methods for All Pairs Network Flow Analysis", Dan Gusfield, 1990, https://ranger.uta.edu/~weems/NOTES5311/LAB/LAB2SPR21/gusfield.huGomory.pdf

Initialize the graph. Note that we use only arcs with a relevant lp value, so this should be small in practice.

Compute an equivalent max-flow tree, according to the paper. This version should actually produce a Gomory-Hu cut tree.

Definition at line 517 of file routing_cuts.cc.

◆ ComputeHyperplanAboveSquare()

LinearConstraint operations_research::sat::ComputeHyperplanAboveSquare ( AffineExpression x,
AffineExpression square,
IntegerValue x_lb,
IntegerValue x_ub,
Model * model )

Above hyperplan for square = x * x: square should be below the line (x_lb, x_lb ^ 2) to (x_ub, x_ub ^ 2). The slope of that line is (ub^2 - lb^2) / (ub - lb) = ub + lb, so:

square <= (x_lb + x_ub) * x - x_lb * x_ub

This only works for positive x.
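For illustration (the bounds are hypothetical, not from the source): with x in [2, 5] the cut is

\[ \text{square} \le (2+5)\,x - 2\cdot 5 = 7x - 10, \qquad \text{e.g. at } x = 3:\ 7\cdot 3 - 10 = 11 \ge 9 = x^2. \]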

Definition at line 1954 of file cuts.cc.

◆ ComputeHyperplanBelowSquare()

LinearConstraint operations_research::sat::ComputeHyperplanBelowSquare ( AffineExpression x,
AffineExpression square,
IntegerValue x_value,
Model * model )

Below hyperplan for square = x * x: y should be above the line (x_value, x_value ^ 2) to (x_value + 1, (x_value + 1) ^ 2). The slope of that line is 2 * x_value + 1, so:

square >= below_slope * (x - x_value) + x_value ^ 2
square >= below_slope * x - x_value ^ 2 - x_value

Definition at line 1966 of file cuts.cc.

◆ ComputeInfinityNorm()

IntegerValue operations_research::sat::ComputeInfinityNorm ( const LinearConstraint & ct)

Returns the maximum absolute value of the coefficients.

Definition at line 209 of file linear_constraint.cc.

◆ ComputeInnerObjective()

int64_t operations_research::sat::ComputeInnerObjective ( const CpObjectiveProto & objective,
absl::Span< const int64_t > solution )

Computes the "inner" objective of a response that contains a solution. This is the objective without offset and scaling. Call ScaleObjectiveValue() to get the user facing objective.

Definition at line 556 of file cp_model_utils.cc.

◆ ComputeL2Norm()

double operations_research::sat::ComputeL2Norm ( const LinearConstraint & ct)

Returns sqrt(sum square(coeff)).

Definition at line 201 of file linear_constraint.cc.

◆ ComputeLinearRelaxation()

LinearRelaxation operations_research::sat::ComputeLinearRelaxation ( const CpModelProto & model_proto,
Model * m )

Builds the linear relaxation of a CpModelProto.

Collect AtMostOne to compute better Big-M.

Linearize the constraints.

Linearize the encoding of variables that are fully encoded.

We first try to linearize the values encoding.

Then we try to linearize the inequality encoding. Note that on some problems like pizza27i.mps.gz, adding both the equality and the inequality encodings is a must.

Even if the variable is fully encoded, sometimes not all its associated literals have a view (if they are not part of the original model for instance).

Todo
(user): Should we add them to the LP anyway? this isn't clear as we can sometimes create a lot of Booleans like this.
Todo
(user): This is similar to AppendRelaxationForEqualityEncoding() above. Investigate if we can merge the code.
Todo
(user): I am not sure this is still needed. Investigate and explain why or remove.

We display the stats before linearizing the at most ones.

Linearize the at most one constraints. Note that we transform them into maximal "at most ones" first and we remove redundant ones.

Note
it is okay to simply ignore the literal if it has no integer view.

We converted all at_most_one to LP constraints, so we need to clear them so that we don't do extra work in the connected component computation.

Propagate unary constraints.

Remove size one LP constraints from the main algorithms, they are not useful.

We add a clique cut generation over all Booleans of the problem.

Note
in practice this might regroup independent LP together.
Todo
(user): compute connected components of the original problem and split these cuts accordingly.
Note
it is okay to simply ignore the literal if it has no integer view.

We add a generator touching all the variables in the builder.

Definition at line 1827 of file linear_relaxation.cc.

◆ ComputeMinSumOfWeightedEndMins()

bool operations_research::sat::ComputeMinSumOfWeightedEndMins ( std::vector< PermutableEvent > & events,
IntegerValue capacity_max,
IntegerValue & min_sum_of_end_mins,
IntegerValue & min_sum_of_weighted_end_mins,
IntegerValue unweighted_threshold,
IntegerValue weighted_threshold )

Reusable storage for ComputeWeightedSumOfEndMinsForOnePermutation().

Definition at line 1095 of file scheduling_cuts.cc.

◆ ComputeNegatedCanonicalRhs()

Coefficient operations_research::sat::ComputeNegatedCanonicalRhs ( Coefficient lower_bound,
Coefficient bound_shift,
Coefficient max_value )

Same as ComputeCanonicalRhs(), but uses the initial constraint lower bound instead. From a constraint 'lb <= expression', this returns a rhs such that 'canonical expression with literals negated <= rhs'.

Note
the range is also [-1, max_value] with the same meaning.

The new bound is "max_value - (lower_bound + bound_shift)", but we must pay attention to possible overflows.

Positive overflow. The constraint is infeasible.

Negative overflow. The constraint is trivially satisfiable.

If shifted_lb <= 0 then the constraint is trivially satisfiable. We test this so we are sure that max_value - shifted_lb doesn't overflow below.

Definition at line 192 of file pb_constraint.cc.

◆ ComputeObjectiveValue()

Coefficient operations_research::sat::ComputeObjectiveValue ( const LinearBooleanProblem & problem,
const std::vector< bool > & assignment )

Returns the objective value under the current assignment.

Definition at line 359 of file boolean_problem.cc.

◆ ComputeResolvant()

bool operations_research::sat::ComputeResolvant ( Literal x,
const std::vector< Literal > & a,
const std::vector< Literal > & b,
std::vector< Literal > * out )

Visible for testing. Computes the resolvant of 'a' and 'b' obtained by performing the resolution on 'x'. If the resolvant is trivially true this returns false, otherwise it returns true and fills 'out' with the resolvant.

Note
the resolvant is just 'a' union 'b' with the literals 'x' and not(x) removed. The two clauses are assumed to be sorted, and the computed resolvant will also be sorted.

This is the basic operation when a variable is eliminated by clause distribution.

Copy remaining literals.

Definition at line 1027 of file simplification.cc.

◆ ComputeResolvantSize()

int operations_research::sat::ComputeResolvantSize ( Literal x,
const std::vector< Literal > & a,
const std::vector< Literal > & b )
Note
this function takes a big chunk of the presolve running time.

Same as ComputeResolvant() but just returns the resolvant size. Returns -1 when ComputeResolvant() returns false.

Definition at line 1062 of file simplification.cc.

◆ ComputeTrueObjectiveLowerBound()

double operations_research::sat::ComputeTrueObjectiveLowerBound ( const CpModelProto & model_proto_with_floating_point_objective,
const CpObjectiveProto & integer_objective,
int64_t inner_integer_objective_lower_bound )

Given a CpModelProto with a floating point objective, and its scaled integer version with a known lower bound, this uses the variable bounds to derive a correct lower bound on the original objective.

Note
the integer version can be way different, but then the bound is likely to be bad. For now, we solve this with a simple LP with one constraint.
Todo
(user): Code a custom algo with more precision guarantee?

Create an LP with the correct variable domain.

Add the original problem floating point objective. This is user given, so we do need to deal with duplicate entries.

Add a single constraint "integer_objective >= lower_bound".

This should be fast. However, in case of numerical difficulties, we bound the number of iterations.

Error. Hopefully this shouldn't happen.

Definition at line 1703 of file lp_utils.cc.

◆ ConditionalLowerOrEqualWithOffset()

std::function< void(Model *)> operations_research::sat::ConditionalLowerOrEqualWithOffset ( IntegerVariable a,
IntegerVariable b,
int64_t offset,
Literal is_le )
inline

is_le => (a + offset <= b).

Definition at line 656 of file precedences.h.

◆ ConditionalWeightedSumGreaterOrEqual()

std::function< void(Model *)> operations_research::sat::ConditionalWeightedSumGreaterOrEqual ( const std::vector< Literal > & enforcement_literals,
const std::vector< IntegerVariable > & vars,
const std::vector< int64_t > & coefficients,
int64_t upper_bound )
inline

Definition at line 605 of file integer_expr.h.

◆ ConditionalWeightedSumLowerOrEqual()

std::function< void(Model *)> operations_research::sat::ConditionalWeightedSumLowerOrEqual ( const std::vector< Literal > & enforcement_literals,
const std::vector< IntegerVariable > & vars,
const std::vector< int64_t > & coefficients,
int64_t upper_bound )
inline
Todo
(user): Delete once Telamon use new function.

Definition at line 596 of file integer_expr.h.

◆ ConfigureSearchHeuristics()

void operations_research::sat::ConfigureSearchHeuristics ( Model * model)

Given a base "fixed_search" function that should mainly control in which order integer variables are lazily instantiated (and at what value), this uses the current solver parameters to set the SearchHeuristics class in the given model.

Not all Booleans might appear in fixed_search(), so once there is no decision left, we fix all Booleans that are still undecided.

Todo
(user): We might want to restart if external info is available. Code a custom restart for this?

Push user search if present.

Do a portfolio with the default sat heuristics.

Use default restart policies.

Definition at line 1185 of file integer_search.cc.

◆ ConstantIntegerVariable()

std::function< IntegerVariable(Model *)> operations_research::sat::ConstantIntegerVariable ( int64_t value)
inline

Definition at line 1899 of file integer.h.

◆ ConstraintCaseName()

absl::string_view operations_research::sat::ConstraintCaseName ( ConstraintProto::ConstraintCase constraint_case)

Returns the name of the ConstraintProto::ConstraintCase oneof enum. Note(user): There is no such function in the proto API as of 16/01/2017.

Definition at line 429 of file cp_model_utils.cc.

◆ ConstraintIsFeasible()

bool operations_research::sat::ConstraintIsFeasible ( const CpModelProto & model,
const ConstraintProto & constraint,
absl::Span< const int64_t > variable_values )

Checks a single constraint for feasibility. This has some overhead, and should only be used for debugging. The full model is needed for scheduling constraints that refer to intervals.

Definition at line 1684 of file cp_model_checker.cc.

◆ ConstructFixedSearchStrategy()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::ConstructFixedSearchStrategy ( std::function< BooleanOrIntegerLiteral()> user_search,
std::function< BooleanOrIntegerLiteral()> heuristic_search,
std::function< BooleanOrIntegerLiteral()> integer_completion )

Constructs our "fixed" search strategy which start with ConstructUserSearchStrategy() but is completed by a couple of automatic heuristics.

We start with the user specified heuristic.

Definition at line 402 of file cp_model_search.cc.

◆ ConstructHeuristicSearchStrategy()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::ConstructHeuristicSearchStrategy ( const CpModelProto & cp_model_proto,
Model * model )

Constructs a search strategy tailored for the current model.

Todo
(user): Implement a routing search.

Tricky: we need to create this at level zero in case there are no linear constraints in the model at the beginning.

Todo
(user): Alternatively, support creation of SatPropagator at positive level.

Definition at line 327 of file cp_model_search.cc.

◆ ConstructHintSearchStrategy()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::ConstructHintSearchStrategy ( const CpModelProto & cp_model_proto,
CpModelMapping * mapping,
Model * model )

Constructs a search strategy that follows the hints from the model.

Definition at line 383 of file cp_model_search.cc.

◆ ConstructIntegerCompletionSearchStrategy()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::ConstructIntegerCompletionSearchStrategy ( const std::vector< IntegerVariable > & variable_mapping,
IntegerVariable objective_var,
Model * model )

Constructs an integer completion search strategy.

Make sure we try to fix the objective to its lowest value first.

Todo
(user): we could also fix terms of the objective in the right direction.

Definition at line 358 of file cp_model_search.cc.

◆ ConstructOverlappingSets()

void operations_research::sat::ConstructOverlappingSets ( bool already_sorted,
std::vector< IndexedInterval > * intervals,
std::vector< std::vector< int > > * result )

Given n fixed intervals, returns the subsets of intervals that overlap during at least one time unit. Note that we only return "maximal" subsets and filter subsets strictly included in another.

All Intervals must have a positive size.

The algo is in O(n log n) + O(result_size) which is usually O(n^2).

We do a line sweep. The "current" subset crossing the "line" at (time, time + 1) will be in (*intervals)[start_index, end_index) at the end of the loop block.

First, if there is some deletion, we will push the "old" set to the result before updating it. Otherwise, we will have a superset later, so we just continue for now.

Do not output subset of size one.

Add all the new intervals starting exactly at "time".

Definition at line 421 of file diffn_util.cc.

◆ ConstructUserSearchStrategy()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::ConstructUserSearchStrategy ( const CpModelProto & cp_model_proto,
Model * model )

Constructs the search strategy specified in the given CpModelProto.

Note
we copy strategies to keep the returned function valid independently of the lifetime of the passed vector.
Todo
(user): Improve the complexity if this becomes an issue which may be the case if we do a fixed_search.

To store equivalent variables in randomized search.

The size of the domain is not multiplied by the coeff.

The size of the domain is not multiplied by the coeff.

We need to use -value as we want the minimum valued variables. We add a random noise to improve the entropy.

We can stop scanning if the variable selection strategy is to use the first unbound variable and no randomization is needed.

Check if one active variable has been found.

Pick the winner when decisions are randomized.

Definition at line 184 of file cp_model_search.cc.

◆ ContainsLiteral()

bool operations_research::sat::ContainsLiteral ( absl::Span< const Literal > clause,
Literal literal )

Returns true if the given clause contains the given literal. This works in O(clause.size()).

Definition at line 474 of file drat_checker.cc.

◆ ConvertBinaryMPModelProtoToBooleanProblem()

bool operations_research::sat::ConvertBinaryMPModelProtoToBooleanProblem ( const MPModelProto & mp_model,
LinearBooleanProblem * problem )

Converts an integer program with only binary variables to a Boolean optimization problem. Returns false if the problem doesn't contain only binary integer variables, or if the coefficients couldn't be converted to integers with a good enough precision.

Test if the variables are binary variables. Add constraints for the fixed variables.

This will be changed to false as soon as we detect the variable to be non-binary. This is done this way so we can display a nice error message before aborting the function and returning false.

4 cases.

Binary variable. Ok.

Fixed variable at 1.

Fixed variable at 0.

No possible integer value!

Abort if the variable is not binary.

Variables needed to scale the double coefficients into int64_t.

Add all constraints.

First scale the coefficients of the constraints.

Add the bounds. Note that we do not pass them to GetBestScalingOfDoublesToInt64() because we know that the sum of absolute coefficients of the constraint fits on an int64_t. If one of the scaled bounds overflows, we don't care by how much because in this case the constraint is just trivial or unsatisfiable.

Otherwise, the constraint is not needed.

Otherwise, the constraint is not needed.

Display the error/scaling without taking into account the objective first.

Add the objective.

Display the objective error/scaling.

Note
here we set the scaling factor for the inverse operation of getting the "true" objective value from the scaled one. Hence the inverse.

If the problem was a maximization one, we need to modify the objective.

Test the precision of the conversion.

Definition at line 1460 of file lp_utils.cc.

◆ ConvertBooleanProblemToLinearProgram()

void operations_research::sat::ConvertBooleanProblemToLinearProgram ( const LinearBooleanProblem & problem,
glop::LinearProgram * lp )

Converts a Boolean optimization problem to its lp formulation.

Variable names are optional.

Objective.

Definition at line 1639 of file lp_utils.cc.

◆ ConvertCpModelProtoToCnf()

bool operations_research::sat::ConvertCpModelProtoToCnf ( const CpModelProto & cp_model,
std::string * out )

We should have no objective, only unassigned Booleans, and only bool_or and bool_and.

We can convert.

Definition at line 881 of file cp_model_utils.cc.

◆ ConvertCpModelProtoToMPModelProto()

bool operations_research::sat::ConvertCpModelProtoToMPModelProto ( const CpModelProto & input,
MPModelProto * output )

Converts a CP-SAT model to a MPModelProto one. This only works for pure linear models (otherwise it returns false). This is mainly useful for debugging or using CP-SAT presolve and then trying other MIP solvers.

Todo
(user): This first version does not even handle basic Boolean constraints. Support more constraints as needed.

Copy variables.

Copy integer or float objective.

Copy constraint.

Todo
(user): Support more constraints with enforcement.

Compute min/max activity.

term <= ub + coeff * (1 - enf);

term >= lb + coeff * (1 - enf)

Definition at line 1150 of file lp_utils.cc.

◆ ConvertMPModelProtoToCpModelProto()

bool operations_research::sat::ConvertMPModelProtoToCpModelProto ( const SatParameters & params,
const MPModelProto & mp_model,
CpModelProto * cp_model,
SolverLogger * logger )

Converts a MIP problem to a CpModel. Returns false if the coefficients couldn't be converted to integers with a good enough precision.

There is a bunch of caveats and you can find more details on the SatParameters proto documentation for the mip_* parameters.

To make sure we cannot have integer overflow, we use this bound for any unbounded variable.

Todo
(user): This could be made larger if needed, so be smarter if we have a MIP problem that we cannot "convert" because of this. Note however that we cannot go that much further because we need to make sure we will not run into overflow if we add a big linear combination of such variables. It should always be possible for a user to scale its problem so that all relevant quantities are a couple of millions. An LP/MIP solver has a similar condition in disguise because problems with a difference of more than 6 magnitudes between the variable values will likely run into numeric trouble.

Add the variables.

Deal with the corner case of a domain far away from zero.

Todo
(user): We could avoid these cases by shifting the domain of all variables to contain zero. This should also lead to a better scaling, but it has some complications with integer variables and require some post-solve.
Note
we must process the lower bound first.
the cast is "perfect" because we forbid large values.

Notify if a continuous variable has a small domain as this is likely to make an all integer solution far from a continuous one.

Add the constraints. We scale each of them individually.

Add the indicator.

Display the error/scaling on the constraints.

Since cp_model support a floating point objective, we use that. This will allow us to scale the objective a bit later so we can potentially do more domain reduction first.

If the objective is fixed to zero, we consider there is none.

Definition at line 933 of file lp_utils.cc.

◆ CopyEverythingExceptVariablesAndConstraintsFieldsIntoContext()

void operations_research::sat::CopyEverythingExceptVariablesAndConstraintsFieldsIntoContext ( const CpModelProto & in_model,
PresolveContext * context )

Copies the non-constraint, non-variable part of the model.

We make sure we do not use the old variables field.

Definition at line 12380 of file cp_model_presolve.cc.

◆ CpModelStats()

std::string operations_research::sat::CpModelStats ( const CpModelProto & model_proto)

Returns a string with some statistics on the given CpModelProto.

Public API.

Note
we only store pointers to "constant" string literals. This is slightly faster and takes less space for models with millions of constraints.

We split the linear constraints into 3 buckets as it gives more insight into the type of problem we are facing.

For pure Boolean constraints, we also display the total number of literals involved as this gives a good idea of the problem size.

We always list Boolean first.

Definition at line 161 of file cp_model_solver.cc.

◆ CpSatSolverVersion()

std::string operations_research::sat::CpSatSolverVersion ( )

Returns a string that describes the version of the solver.

Definition at line 124 of file cp_model_solver.cc.

◆ CpSolverResponseStats()

std::string operations_research::sat::CpSolverResponseStats ( const CpSolverResponse & response,
bool has_objective = true )

Returns a string with some statistics on the solver response.

If the second argument is false, we will just display NA for the objective value instead of zero. It is not really needed but it makes things a bit clearer to see that there is no objective.

Todo
(user): This is probably better named "binary_propagation", but we just output "propagations" to be consistent with sat/analyze.sh.

Definition at line 567 of file cp_model_solver.cc.

◆ CreateAllDifferentCutGenerator()

CutGenerator operations_research::sat::CreateAllDifferentCutGenerator ( const std::vector< AffineExpression > & exprs,
Model * model )

A cut generator for all_diff(xi). Let the united domain of all xi be D. The sum of any k-sized subset of the xi needs to be greater than or equal to the sum of the smallest k values in D and less than or equal to the sum of the largest k values in D. The cut generator first sorts the variables based on LP values and adds cuts of the form described above if they are violated by the LP solution. Note that all the fixed variables are ignored while generating cuts.
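A small illustration (the domain is hypothetical, not from the source): with D = {1, 2, 3, 4} and k = 2, any two of the xi satisfy

\[ 1 + 2 = 3 \;\le\; x_i + x_j \;\le\; 3 + 4 = 7. \]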

These cuts work at all levels but the generator adds too many cuts on some instances and degrades the performance, so we only use it at level

Other direction.

Definition at line 2452 of file cuts.cc.

◆ CreateAlternativeLiteralsWithView()

std::vector< Literal > operations_research::sat::CreateAlternativeLiteralsWithView ( int num_literals,
Model * model,
LinearRelaxation * relaxation )

Returns a vector of new literals in an exactly one relationship. In addition, this creates an IntegerView for all these literals and also adds the exactly one to the LinearRelaxation.

This is not supposed to happen, but it is easy enough to cover, just in case. We might however want to use encoder->GetTrueLiteral().

Todo
(user): We shouldn't need to create this view ideally. Even better, we should be able to handle Literal natively in the linear relaxation, but that is a lot of work.

Definition at line 446 of file linear_relaxation.cc.

◆ CreateCliqueCutGenerator()

CutGenerator operations_research::sat::CreateCliqueCutGenerator ( const std::vector< IntegerVariable > & base_variables,
Model * model )

Extracts the variables that have a Literal view from base variables and creates a generator that will return constraints of the form "at_most_one" between such literals.

Filter base_variables to only keep the one with a literal view, and do the conversion.

We need to express such "at most one" in term of the initial variables, so we do not use the LinearConstraintBuilder::AddLiteralTerm() here.

Add 1 - X to the linear constraint.

Definition at line 2724 of file cuts.cc.

◆ CreateCumulativeCompletionTimeCutGenerator()

CutGenerator operations_research::sat::CreateCumulativeCompletionTimeCutGenerator ( SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands_helper,
const AffineExpression & capacity,
Model * model )

Completion time cuts for the cumulative constraint. It is a simple relaxation where we replace a cumulative task with demand k and duration d by a no_overlap task with duration d * k / capacity_max.

Definition at line 1470 of file scheduling_cuts.cc.

◆ CreateCumulativeEnergyCutGenerator()

CutGenerator operations_research::sat::CreateCumulativeEnergyCutGenerator ( SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands_helper,
const AffineExpression & capacity,
const std::optional< AffineExpression > & makespan,
Model * model )

For a given set of intervals and demands, we compute the energy of each task and make sure their sum fits within the span of the intervals times the capacity.

If an interval is optional, it contributes min_demand * min_size * presence_literal amount of total energy.

If an interval is performed, we use the linear energy formulation (if defined, that is if different from a constant -1), or the McCormick relaxation of the product size * demand if not defined.

The maximum energy is capacity * span of intervals at level 0.

Todo
(user): use level 0 bounds ?

We can always skip events.

Definition at line 582 of file scheduling_cuts.cc.

◆ CreateCumulativePrecedenceCutGenerator()

CutGenerator operations_research::sat::CreateCumulativePrecedenceCutGenerator ( SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands_helper,
const AffineExpression & capacity,
Model * model )

For a given set of intervals in a cumulative constraint, we detect violated mandatory precedences and create a cut for these.

Definition at line 945 of file scheduling_cuts.cc.

◆ CreateCumulativeTimeTableCutGenerator()

CutGenerator operations_research::sat::CreateCumulativeTimeTableCutGenerator ( SchedulingConstraintHelper * helper,
SchedulingDemandHelper * demands_helper,
const AffineExpression & capacity,
Model * model )

For a given set of intervals and demands, we first compute the mandatory part of each interval as [start_max, end_min]. We use this to calculate the mandatory demand at each start_max time point for eligible intervals. Since the sum of these mandatory demands must be less than or equal to the capacity, we create a cut representing that.

If an interval is optional, it contributes min_demand * presence_literal amount of demand to the mandatory demands sum. So the final cut is generated as follows: sum(demands of always present intervals) + sum(presence_literal * min_of_demand) <= capacity.

Iterate through the intervals. If start_max < end_min, the demand is mandatory.

Ignore the interval if the linearized demand fails.

Sort events by time. It is also important for the correctness of the algorithm below that all positive events with the same time as negative events appear after them.

Reset positive event added. We do not want to create cuts for each negative event in sequence.

Create cut.

The i-th event, which is a negative event, follows a positive event. We must ignore it in our cut generation.

The demand_lp was added in case of a positive event. We need to remove it for a negative event.

Definition at line 692 of file scheduling_cuts.cc.

◆ CreateCVRPCutGenerator()

CutGenerator operations_research::sat::CreateCVRPCutGenerator ( int num_nodes,
std::vector< int > tails,
std::vector< int > heads,
std::vector< Literal > literals,
std::vector< int64_t > demands,
int64_t capacity,
Model * model )

Almost the same as CreateStronglyConnectedGraphCutGenerator(), but for each component, computes the demand needed to serve it and, depending on whether it contains the depot (node zero) or not, computes the minimum number of vehicles that need to cross the component border.

Definition at line 776 of file routing_cuts.cc.

◆ CreateFlowCutGenerator()

CutGenerator operations_research::sat::CreateFlowCutGenerator ( int num_nodes,
const std::vector< int > & tails,
const std::vector< int > & heads,
const std::vector< AffineExpression > & arc_capacities,
std::function< void(const std::vector< bool > &in_subset, IntegerValue *min_incoming_flow, IntegerValue *min_outgoing_flow)> get_flows,
Model * model )

Try to find a subset where the current LP capacity of the outgoing or incoming arc is not enough to satisfy the demands.

We support the special value -1 for tail or head, which means that the arc comes from (or is going to) outside the nodes in [0, num_nodes). Such arcs must still have a capacity assigned to them.

Todo

(user): Support general linear expression for capacities.

(user): Some models apply the same capacity to both an arc and its reverse. Also support this case.

Definition at line 938 of file routing_cuts.cc.

◆ CreateLinMaxCutGenerator()

CutGenerator operations_research::sat::CreateLinMaxCutGenerator ( IntegerVariable target,
const std::vector< LinearExpression > & exprs,
const std::vector< IntegerVariable > & z_vars,
Model * model )

Consider the Lin Max constraint with d expressions and n variables in the form: target = max {exprs[k] = Sum (wki * xi + bk)}, k in {1,..,d}. Let Li be the lower bound of xi and Ui the upper bound of xi. Let zk be in {0,1} for all k in {1,..,d}; then target = exprs[k] when zk = 1.

The following is a valid linearization for Lin Max: target >= exprs[k], for all k in {1,..,d}; target <= Sum (wli * xi) + Sum((Nlk + bk) * zk), for all l in {1,..,d}, where Nlk is a large number defined as Nlk = Sum (max((wki - wli)*Li, (wki - wli)*Ui)) = Sum (max corner difference for variable i, target expr l, max expr k).

Consider a partition I of the variables xi into the sets {1,..,d}, i.e. I(i) = j means xi is mapped to the jth index. The following inequality is a valid and sharp cut for the lin max constraint described above.

target <= Sum(i=1..n)(wI(i)i * xi + Sum(k=1..d)(MPlusCoefficient_ki * zk)) + Sum(k=1..d)(bk * zk), where MPlusCoefficient_ki = max((wki - wI(i)i) * Li, (wki - wI(i)i) * Ui) = max corner difference for variable i, target expr I(i), max expr k.

For a detailed proof of validity, refer to "Strong mixed-integer programming formulations for trained neural networks" by Ross Anderson et al. (https://arxiv.org/pdf/1811.01988.pdf).

In the cut generator, we compute the most violated partition I by computing the rhs value (wI(i)i * lp_value(xi) + Sum(k=1..d)(MPlusCoefficient_ki * zk)) for each variable for each partition index. We choose the partition index that gives the lowest rhs value for a given variable.

Note
This cut generator requires all expressions to contain only positive vars.

All expressions should only contain positive variables.

Definition at line 2563 of file cuts.cc.

◆ CreateMaxAffineCutGenerator()

CutGenerator operations_research::sat::CreateMaxAffineCutGenerator ( LinearExpression target,
IntegerVariable var,
std::vector< std::pair< IntegerValue, IntegerValue > > affines,
std::string cut_name,
Model * model )

By definition, the max of affine functions is convex. The linear polytope is bounded from below by all the affine functions, and from above by a single hyperplane that joins the two points at the extremes of the var domain and their y-values of the max of the affine functions.

Definition at line 2702 of file cuts.cc.

◆ CreateNewIntegerVariableFromLiteral()

IntegerVariable operations_research::sat::CreateNewIntegerVariableFromLiteral ( Literal lit,
Model * model )
inline

Creates a 0-1 integer variable "view" of the given literal. It will have a value of 1 when the literal is true, and 0 when the literal is false.

Definition at line 1925 of file integer.h.
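
A hypothetical usage sketch with the internal Model API (names from sat_solver.h and integer.h; the exact way the literal is obtained will depend on your setup):

  Model model;
  // Create a fresh Boolean variable and take its positive literal.
  const Literal lit(model.Add(NewBooleanVariable()), /*is_positive=*/true);
  // "view" equals 1 when "lit" is true and 0 when it is false.
  const IntegerVariable view = CreateNewIntegerVariableFromLiteral(lit, &model);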

◆ CreateNoOverlap2dCompletionTimeCutGenerator()

CutGenerator operations_research::sat::CreateNoOverlap2dCompletionTimeCutGenerator ( SchedulingConstraintHelper * x_helper,
SchedulingConstraintHelper * y_helper,
Model * model )
Todo
(user): Use demands_helper and decomposed energy.

Completion time cuts for the no_overlap_2d constraint. It actually generates the completion time cumulative cuts on both axes.

Todo
(user): It might be possible/better to use some shifted value here, but for now this code is not in the hot spot, so better be defensive and only do connected components on really disjoint rectangles.
Todo
(user): Use improved energy from demands helper.

Definition at line 562 of file diffn_cuts.cc.

◆ CreateNoOverlap2dEnergyCutGenerator()

CutGenerator operations_research::sat::CreateNoOverlap2dEnergyCutGenerator ( SchedulingConstraintHelper * x_helper,
SchedulingConstraintHelper * y_helper,
SchedulingDemandHelper * x_demands_helper,
SchedulingDemandHelper * y_demands_helper,
const std::vector< std::vector< LiteralValueValue > > & energies,
Model * model )

Energetic cuts for the no_overlap_2d constraint.

For a given set of rectangles, we compute the area of each rectangle and make sure their sum is less than the area of the bounding interval.

If an interval is optional, it contributes min_size_x * min_size_y * presence_literal amount of total area.

If an interval is performed, we use the linear area formulation (if possible), or the McCormick relaxation of the size_x * size_y.

The maximum area is the area of the bounding rectangle of all intervals at level 0.

We do not consider rectangles controlled by 2 different unassigned enforcement literals.

Todo
(user): It might be possible/better to use some shifted value here, but for now this code is not in the hot spot, so better be defensive and only do connected components on really disjoint rectangles.

Forward pass. No need to do a backward pass.

Definition at line 309 of file diffn_cuts.cc.

◆ CreateNoOverlapCompletionTimeCutGenerator()

CutGenerator operations_research::sat::CreateNoOverlapCompletionTimeCutGenerator ( SchedulingConstraintHelper * helper,
Model * model )

For a given set of intervals in a no_overlap constraint, we detect violated area based cuts from Queyranne 93 [see note in the code] and create a cut for these.

Definition at line 1424 of file scheduling_cuts.cc.

◆ CreateNoOverlapEnergyCutGenerator()

CutGenerator operations_research::sat::CreateNoOverlapEnergyCutGenerator ( SchedulingConstraintHelper * helper,
const std::optional< AffineExpression > & makespan,
Model * model )

For a given set of intervals, we first compute the min and max of all intervals. Then we create a cut that indicates that all intervals must fit in that span.

If an interval is optional, it contributes min_size * presence_literal amount of demand to the mandatory demands sum. So the final cut is generated as follows: sum(sizes of always present intervals) + sum(presence_literal * min_of_size) <= span of all intervals.

We can always skip events.

Definition at line 643 of file scheduling_cuts.cc.

◆ CreateNoOverlapPrecedenceCutGenerator()

CutGenerator operations_research::sat::CreateNoOverlapPrecedenceCutGenerator ( SchedulingConstraintHelper * helper,
Model * model )

For a given set of intervals in a no_overlap constraint, we detect violated mandatory precedences and create a cut for these.

Definition at line 977 of file scheduling_cuts.cc.

◆ CreatePositiveMultiplicationCutGenerator()

CutGenerator operations_research::sat::CreatePositiveMultiplicationCutGenerator ( AffineExpression z,
AffineExpression x,
AffineExpression y,
int linearization_level,
Model * model )

A cut generator for z = x * y (x and y >= 0).

If x or y is fixed, the McCormick equations are exact.

Check for overflow with the product of expression bounds and the product of one expression bound times the constant part of the other expression.

Todo
(user): As the bounds change monotonically, these cuts dominate any previous one. try to keep a reference to the cut and replace it. Alternatively, add an API for a level-zero bound change callback.

Cut -z + x_coeff * x + y_coeff * y <= rhs

Cut -z + x_coeff * x + y_coeff * y >= rhs

McCormick relaxation of bilinear constraints. These 4 cuts are the exact facets of the x * y polyhedron for a bounded x and y.

Each cut corresponds to a plane that contains two of the lines (x=x_lb), (x=x_ub), (y=y_lb), (y=y_ub). The easiest way to understand them is to draw the x*y curves and see the 4 planes that correspond to the convex hull of the graph.
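
For reference, the standard McCormick envelope for z = x * y with x in [x_lb, x_ub] and y in [y_lb, y_ub] consists of these four planes (the cuts produced here are of this family, possibly rescaled):

  z >= x_lb * y + y_lb * x - x_lb * y_lb
  z >= x_ub * y + y_ub * x - x_ub * y_ub
  z <= x_ub * y + y_lb * x - x_ub * y_lb
  z <= x_lb * y + y_ub * x - x_lb * y_ub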

Definition at line 1859 of file cuts.cc.

◆ CreateSquareCutGenerator()

CutGenerator operations_research::sat::CreateSquareCutGenerator ( AffineExpression y,
AffineExpression x,
int linearization_level,
Model * model )

A cut generator for y = x ^ 2 (x >= 0). It will dynamically add a linear inequality to push y closer to the parabola.

Check for potential overflows.

Definition at line 1978 of file cuts.cc.

◆ CreateStronglyConnectedGraphCutGenerator()

CutGenerator operations_research::sat::CreateStronglyConnectedGraphCutGenerator ( int num_nodes,
std::vector< int > tails,
std::vector< int > heads,
std::vector< Literal > literals,
Model * model )

We use a basic algorithm to detect components that are not connected to the rest of the graph in the LP solution, and add cuts to force some arcs to enter and leave this component from outside.

Cut generator for the circuit constraint, where in any feasible solution, the arcs that are present (variable at 1) must form a circuit through all the nodes of the graph. Self arcs are forbidden in this case.

In more generality, this currently enforces the resulting graph to be strongly connected. Note that we already assume the basic constraints to be in the LP, so we do not add any cuts for components of size 1.

Definition at line 762 of file routing_cuts.cc.

◆ Cumulative()

std::function< void(Model *)> operations_research::sat::Cumulative ( const std::vector< IntervalVariable > & vars,
const std::vector< AffineExpression > & demands,
AffineExpression capacity,
SchedulingConstraintHelper * helper = nullptr )

Adds a cumulative constraint on the given intervals, the associated demands and the capacity expressions.

Each interval represents a task to be scheduled in time such that the task consumes the resource during the time range [lb, ub) where lb and ub respectively represent the lower and upper bounds of the corresponding interval variable. The amount of resource consumed by the task is the value of its associated demand variable.

The cumulative constraint forces the set of tasks to be scheduled such that the sum of the demands of all the tasks that overlap any time point cannot exceed the capacity of the resource.

This constraint assumes that an interval can be optional or have a size of zero. The demands and the capacity can be any non-negative number.

Optimization: if one already has a helper constructed from the interval variables, it can be passed as the last argument.

Redundant constraints to ensure that the resource capacity is high enough for each task. Also ensure that no task consumes more resource than what is available. This is useful because the subsequent propagators do not filter the capacity variable very well.

If an interval can be of size zero, it currently does not count towards the capacity.

Todo
(user): Change that since we have optional interval for this.

Detect a subset of intervals that needs to be in disjunction and add a Disjunctive() constraint over them.

Todo
(user): We need to exclude intervals that can be of size zero because the disjunctive does not "ignore" them like the cumulative does. That is, the interval [2,2) will be assumed to be in disjunction with [1, 3) for instance. We need to uniformize the handling of intervals with size zero.

Liftable? We might be able to add one more interval!

Add a disjunctive constraint on the intervals in in_disjunction. Do not create the cumulative at all when all intervals must be in disjunction.

Todo
(user): Do proper experiments to see how beneficial this is; the disjunctive will propagate more but also uses slower algorithms. That said, this is more a question of optimizing the disjunctive propagation code.
Todo
(user): Another "known" idea is to detect pair of tasks that must be in disjunction and to create a Boolean to indicate which one is before the other. It shouldn't change the propagation, but may result in a faster one with smaller explanations, and the solver can also take decision on such Boolean.
Todo
(user): A better place for stuff like this could be in the presolver so that it is easier to disable and play with alternatives.

For each variable that is after a subset of task ends (i.e. like a makespan objective), we detect it and add a special constraint to propagate it.

Todo
(user): Models that include the makespan as a special interval might be better, but not everyone does that. In particular, this code allows us to get decent lower bounds on the large cumulative MiniZinc instances.
Todo
(user): this requires the precedence constraints to be already loaded, and there is no guarantee of that currently. Find a more robust way.
Todo
(user): There is a bit of code duplication with the disjunctive precedence propagator. Abstract more?

The CumulativeIsAfterSubsetConstraint() always resets the helper to the forward time direction, so it is important to also precompute the precedence relation using the same direction! This is needed in case the helper has already been used and set in the other direction.

Todo
(user): Handle generic affine relation?
Todo
(user): This can lead to many constraints. By analyzing the precedences a bit more, we could restrict that. In particular, for cases where the cumulative is always (bunch of tasks B), T, (bunch of tasks A) with task T always in the middle, we never need to explicitly list the precedence of a task in B with a task in A.
Todo
(user): If more than one variable is after the same set of intervals, we should regroup them in a single constraint rather than having two independent constraints doing the same propagation.

We have var >= end_exp.var + offset, so var >= (end_exp.var + end_exp.cte) + (offset - end_exp.cte), i.e. var >= task end + new_offset.

Propagator responsible for applying the Timetabling filtering rule. It increases the minimum of the start variables, decreases the maximum of the end variables, and increases the minimum of the capacity variable.

Propagator responsible for applying the Overload Checking filtering rule. It increases the minimum of the capacity variable.

Since we use the potential DFF conflict on demands to apply the heuristic, only do so if any demand is greater than 1.

Propagator responsible for applying the Timetable Edge Finding filtering rule. It increases the minimum of the start variables and decreases the maximum of the end variables.

Definition at line 42 of file cumulative.cc.
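
A minimal usage sketch of this Model-based constraint, assuming the NewInterval(min_start, max_end, size) helper from intervals.h and constant AffineExpressions (a hypothetical illustration; the public CpModelBuilder::AddCumulative() is the usual entry point):

  Model model;
  // Two fixed-size tasks that must be scheduled in the horizon [0, 20].
  const IntervalVariable t1 = model.Add(NewInterval(0, 20, /*size=*/4));
  const IntervalVariable t2 = model.Add(NewInterval(0, 20, /*size=*/6));
  const std::vector<IntervalVariable> intervals = {t1, t2};
  // Constant demands 2 and 3, constant capacity 4.
  const std::vector<AffineExpression> demands = {
      AffineExpression(IntegerValue(2)), AffineExpression(IntegerValue(3))};
  model.Add(Cumulative(intervals, demands, AffineExpression(IntegerValue(4))));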

◆ CumulativePrecedenceSearchHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::CumulativePrecedenceSearchHeuristic ( Model * model)

The algorithm goes as follows:

  • Build a profile of all the tasks packed to the right as long as that is feasible.
  • If we can't grow the profile, we have identified a set of tasks that all overlap if they are packed on the right, and whose sum of demands exceeds the capacity.
  • Look for two tasks in that set that can be made non-overlapping, and take a "precedence" decision between them.

We use a similar algorithm to BuildProfile() in timetable.cc.

Start and height of the currently built profile rectangle.

Remove added task ending there. Set their demand to zero.

Corner case if task is of duration zero.

Add new tasks starting here. If a task cannot be added, we have a candidate for precedence.

Todo
(user): tie-break tasks not fitting in the profile smartly.

We should have everything needed here to add a new precedence.

If packing everything to the left is feasible, continue.

We will use a bunch of heuristics to add a new precedence. All the tasks in open_tasks cannot share a time point since they exceed the capacity. Moreover, if we pack all of them to the left, they have an intersecting point. So we should be able to make two of them disjoint.

Todo
(user): If the two boxes cannot overlap because of high demand, use repo.CreateDisjunctivePrecedenceLiteral() instead.
Todo
(user): Add heuristic ordering for creating interesting precedences first.

Can we add s <= t ? All the considered tasks are intersecting if on the left.

Skip if we already have a literal created and assigned to false.

It shouldn't be possible for it to be true here, otherwise s and t would be disjoint.

This should always be true in normal usage after the SAT search has fixed all literals, but if it is not, we can just return this decision.

Make sure s could be before t.

It shouldn't be able to fail since s can be before t.

Branch on that precedence.

If no precedence can be created, and all precedences are assigned to false, we have a conflict since all these intervals must intersect but cannot fit in the capacity!

Todo
(user): We need to add the reason for demand_min and capacity_max.
Todo
(user): unfortunately we can't report it from here.
Todo
(user): add heuristic criteria; right now we stop at the first one. See above.

Definition at line 759 of file integer_search.cc.

◆ CumulativeTimeDecomposition()

std::function< void(Model *)> operations_research::sat::CumulativeTimeDecomposition ( const std::vector< IntervalVariable > & vars,
const std::vector< AffineExpression > & demands,
AffineExpression capacity,
SchedulingConstraintHelper * helper = nullptr )

Adds a simple cumulative constraint. See the comment of Cumulative() above for a definition of the constraint. This is only used for testing.

This constraint assumes that the task demands and the resource capacity are fixed to non-negative numbers.

Compute time range.

Task t consumes the resource at time if consume_condition is true.

Task t consumes the resource at time if it is present.

Task t overlaps time.

This is needed because we currently can't create a Boolean variable if the model is unsat.

The profile cannot exceed the capacity at time.

Abort if UNSAT.

Definition at line 283 of file cumulative.cc.

◆ CumulativeUsingReservoir()

std::function< void(Model *)> operations_research::sat::CumulativeUsingReservoir ( const std::vector< IntervalVariable > & vars,
const std::vector< AffineExpression > & demands,
AffineExpression capacity,
SchedulingConstraintHelper * helper )

Another testing implementation, with the same assumptions as CumulativeTimeDecomposition().

Definition at line 365 of file cumulative.cc.

◆ DEFINE_STRONG_INDEX_TYPE() [1/7]

operations_research::sat::DEFINE_STRONG_INDEX_TYPE ( BooleanVariable )

Index of a variable (>= 0).

◆ DEFINE_STRONG_INDEX_TYPE() [2/7]

operations_research::sat::DEFINE_STRONG_INDEX_TYPE ( ClauseIndex )

Index of a clause (>= 0).

◆ DEFINE_STRONG_INDEX_TYPE() [3/7]

operations_research::sat::DEFINE_STRONG_INDEX_TYPE ( EnforcementId )

◆ DEFINE_STRONG_INDEX_TYPE() [4/7]

operations_research::sat::DEFINE_STRONG_INDEX_TYPE ( IntegerVariable )

Index of an IntegerVariable.

Each time we create an IntegerVariable, we also create its negation. This is done so that internally we only store and deal with lower bounds, the upper bound being the lower bound of the negated variable.
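
A small sketch of this convention, assuming the IntegerTrail API from integer.h (hypothetical illustration):

  Model model;
  IntegerTrail* integer_trail = model.GetOrCreate<IntegerTrail>();
  const IntegerVariable x =
      integer_trail->AddIntegerVariable(IntegerValue(2), IntegerValue(7));
  // Only lower bounds are stored: the upper bound of x is minus the
  // lower bound of NegationOf(x), i.e. 7 here.
  CHECK_EQ(integer_trail->UpperBound(x),
           -integer_trail->LowerBound(NegationOf(x)));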

◆ DEFINE_STRONG_INDEX_TYPE() [5/7]

operations_research::sat::DEFINE_STRONG_INDEX_TYPE ( IntervalVariable )

◆ DEFINE_STRONG_INDEX_TYPE() [6/7]

operations_research::sat::DEFINE_STRONG_INDEX_TYPE ( LiteralIndex )

Index of a literal (>= 0), see Literal below.

◆ DEFINE_STRONG_INDEX_TYPE() [7/7]

operations_research::sat::DEFINE_STRONG_INDEX_TYPE ( PositiveOnlyIndex )

Special type for storing only one thing for var and NegationOf(var).

◆ DEFINE_STRONG_INT64_TYPE() [1/2]

operations_research::sat::DEFINE_STRONG_INT64_TYPE ( Coefficient )

The type of the integer coefficients in a pseudo-Boolean constraint. This is also used for the current value of a constraint or its bounds.

◆ DEFINE_STRONG_INT64_TYPE() [2/2]

operations_research::sat::DEFINE_STRONG_INT64_TYPE ( IntegerValue )

Value type of an integer variable. An integer variable is always bounded on both sides, and this type is also used to store the bounds [lb, ub] of the range of each integer variable.

Note
both bounds are inclusive, which allows us to write many propagation algorithms for just one of the bounds and apply them to the negated variables to get the symmetric algorithm for the other bound.

◆ DetectAndAddSymmetryToProto()

void operations_research::sat::DetectAndAddSymmetryToProto ( const SatParameters & params,
CpModelProto * proto,
SolverLogger * logger )

Detects symmetries and fills the symmetry field.

Definition at line 742 of file cp_model_symmetries.cc.

◆ DetectAndExploitSymmetriesInPresolve()

bool operations_research::sat::DetectAndExploitSymmetriesInPresolve ( PresolveContext * context)

Basic implementation of some symmetry breaking during presolve.

Currently this just tries to fix variables by detecting symmetries between Booleans in bool_and, at_most_one or exactly_one constraints.

We need to make sure the proto is up to date before computing symmetries!

Tricky: the equivalence relations are not part of the proto. We thus add them temporarily to compute the symmetry.

Remove temporary affine relation.

Collect the at most ones.

Note(user): This relies on the fact that the pointers remain stable when we add new constraints. It should be the case, but it is a bit unsafe. On the other hand, it is annoying to deal with both cases below.

We have a few heuristics. The first one only looks at the global orbits under the symmetry group and tries to infer Boolean variable fixings via symmetry breaking. Note that nothing is fixed yet; we will decide later whether to fix these Booleans or not.

Get the global orbits and their size.

Log orbit info.

First heuristic based on propagation, see the function comment.

If an at most one intersects with one or more orbits, in each intersection we can fix all but one variable to zero. For now we only test positive literals, and maximize the number of fixings.

Todo
(user): Doing that is not always good. On cod105.mps, fixing variables instead of letting the inner solver handle Boolean symmetries makes the problem unsolvable instead of easily solved. This is probably because this fixing does not exploit the full structure of these symmetries. Note however that the fixing via propagation above closes cod105 even more efficiently.

Compute how many variables we can fix with this at most one.

We count all but the first one in each orbit.

Redo a pass to copy the intersection.

We push all but the first one in each orbit.

Sparse clean up.

Orbitope approach.

This is basically the same as the generic approach, but because of the extra structure, computing the orbit of any stabilizer subgroup is easy. We look for orbits intersecting at most one constraints, so we can break symmetry by fixing variables.

Todo
(user): The same effect could be achieved by adding symmetry breaking constraints of the form "a >= b" between Booleans and letting the presolve do the reduction. This might be less code, but it is also less efficient. Similarly, when we cannot just fix variables to break symmetries, we could add these constraints, but it is unclear if we should do it all the time or not.
Todo
(user): code the generic approach with orbits and stabilizer.

HACK for flatzinc wordpress* problem.

If we have a large orbitope with one objective term per column, we break the symmetry by ordering the objective terms. This usually drastically increases the objective lower bounds we can discover.

Todo
(user): generalize somehow. See if we can exploit this in lb_tree_search directly. We also have a lot more structure than just the fact that the objective terms can be ordered. For example, if the objective is a max, we can still do that.
Todo
(user): Actually the constraint we add is really just breaking the orbitope symmetry on one line. But this line being the objective is key. We can also explicitly look for a full permutation group of the objective terms directly instead of finding the largest orbitope first.

Super simple heuristic to decide whether to use the orbitope or not.

In an orbitope with an at most one on each row, we can fix the upper right triangle. We could use a formula, but the loop is fast enough.

Todo
(user): Compute the stabilizer under the only non-fixed element and iterate!

Moreover, we can add the implication that in the orbit of distinguished_var, either everything is false, or var is at one.

This will always be kept all zero after usage.

Todo
(user): The code below requires that no variable appears twice in the same at most one. In particular lit and not(lit) cannot appear in the same at most one.

Using the orbitope orbits and the intersecting at most ones, we will be able in some cases to derive a property of the literals of one row of the orbitope. Namely:

  • All literals of that row take the same value.
  • At most one literal can be true.
  • At most one literal can be false.

See the comment below for how we can infer this.

Because in the orbitope case we have a full symmetry group of the columns, we can infer more than just using the orbits under a general permutation group. If an at most one contains two variables from the same row, we can infer: 1/ If the two variables appear positively, then there is an at most one on the full row, and we can set n - 1 variables to zero to break the symmetry. 2/ If the two variables appear negatively, then the opposite situation arises and there is at most one zero on the row; we can set n - 1 variables to one. 3/ If two literals of opposite sign appear, then the only possibilities for the row are all ones or all zeros, thus we can mark all variables as equivalent.

These properties come from the fact that when we permute a line of the orbitope in any way, the positions that end up in the at most one must never both be at one.

Note
3/ can be done without breaking any symmetry, but for 1/ and 2/ by choosing which variable is not fixed, we will break some symmetry.
Todo
(user): for 1/ and 2/ we could add an at most one constraint on the full row if it is not already there!

Note(user): On the miplib, only 1/ and 2/ happen currently. Not sure with LNS though.

An at most one touching two positions in an orbitope row can be extended to include the full row.

Note(user): I am not sure we care about that here. By symmetry, if we have an at most one touching two positions, then we should have others touching all pairs of positions. And the at most one expansion would already have extended it. So this is more FYI.

Todo
(user): if the same at most one touches more than one row, we can deduce more. It is a bit tricky and maybe not frequent enough to make a big difference. Also, as we start to fix things, at most ones might propagate by themselves.

List the rows in "at most one" by score. We will be able to fix a "triangle" of literals in order to break some of the symmetry.

Mark all the equivalence or fixed rows.

Note
this operation does not change the symmetry group.
Todo
(user): We could remove these rows from the orbitope. Note that currently this never happens on the miplib (maybe in LNS though).

If we have both properties, it means we have:

  • sum_j orbitope[row][j] <= 1
  • sum_j not(orbitope[row][j]) <= 1, which is the same as sum_j orbitope[row][j] >= num_cols - 1. This is only possible if we have two elements and we don't have row_is_all_equivalent.

We have [1, 0] or [0, 1].

No solution.

Here we proved that the row is either all ones or all zeros. This was because we had at_most_one = [x, ~y, ...] and orbitope = [x, y, ...], and by symmetry we have at_most_one = [~x, y, ...]. This holds for all pairs of positions in that row.

We use as the score the number of constraints in which variables from this row participate.

Break the symmetry by fixing at each step all but one literal to true or false. Note that each time we do that for a row, we need to exclude the non-fixed column from the rest of the row processing. We thus fix a "triangle" of literals.

This is the same as ordering the columns in some lexicographic order and using the at_most_ones to fix known positions. Note that we can still add lexicographic symmetry breaking inequalities on the columns as long as we do that in the same order as these fixings.

For correctness of the code below, reduce the orbitope.

Todo
(user): This is probably not needed if we add lexicographic constraint instead of just breaking a single row below.

Remove the first num_processed_rows.

For each of them remove the first num_processed_rows entries.

If we are left with a set of variables that can all be permuted, let's break the symmetry by ordering them.

Add orbitope[0][i] >= orbitope[0][i+1].

Definition at line 900 of file cp_model_symmetries.cc.

◆ DetectImpliedIntegers()

std::vector< double > operations_research::sat::DetectImpliedIntegers ( MPModelProto * mp_model,
SolverLogger * logger )

This will mark implied integers as such. Note that it can also discover variables of the form coeff * Integer + offset, and will change the model so that these are marked as integer. This is why we return both a scaling and an offset to transform the solution back to its original domain.

Todo
(user): Actually implement the offset part. This currently only happens on the 3 neos-46470* miplib problems where we have a non-integer rhs.

We will process all equality constraints with exactly one non-integer.

Scale the variable right away and mark it as implied integer.

Note
the constraints will be scaled later.

Update the queue of constraints with a single non-integer.

The non-integer variable was already made integer by another constraint.

Ignore non-equality here.

This will be set to the unique non-integer term of this constraint.

We are looking for a "multiplier" so that the unique non-integer term in this constraint (i.e. var * var_coeff) times this multiplier is an integer.

If this is set to zero or becomes too large, we fail to detect a new implied integer and ignore this constraint.

This actually computes the smallest multiplier to make all other terms in the constraint integer.

These "rhs" fail could be handled by shifting the variable.

We want to multiply the variable so that it is integer. We know that coeff * multiplier is an integer, so we just multiply by that.

But if a variable appears in more than one equality, we want to find the smallest integrality factor! See diameterc-msts-v40a100d5i.mps for an instance of this.

Ignore non-equality here.

Process continuous variables that only appear as the unique non-integer in a set of non-equality constraints.

Note
turning such a variable into an integer cannot in turn trigger new integer detections, so there is no point doing that in a loop.

This should be presolved and not happen.

The situation is a bit tricky here: we have a bunch of coeffs c_i, and we know that X * c_i can take integer values without changing the meaning of constraint i.

For now we take the min, and scale only if all c_i / min are integer.

Todo
(user): be smarter! we should be able to handle these cases.

Tricky: we also need the bounds of the scaled variable to be integers.

Todo
(user): If we scale more, we might be able to turn it into an integer.

Definition at line 482 of file lp_utils.cc.

◆ DetectMakespan()

std::optional< int > operations_research::sat::DetectMakespan ( const std::vector< IntervalVariable > & intervals,
const std::vector< AffineExpression > & demands,
const AffineExpression & capacity,
Model * model )

Scans the intervals of a cumulative/no_overlap constraint and its capacity (1 for the no_overlap). It returns the index of the makespan interval if found, or std::nullopt otherwise.

Currently, this requires the capacity to be fixed in order to scan for a makespan interval.

The makespan interval has the following property:

  • its end is fixed at the horizon
  • it is always present
  • its demand is the capacity of the cumulative/no_overlap.
  • its size is > 0.

These properties ensure that all other intervals end before the start of the makespan interval.

Todo
(user): Supports variable capacity.

Detect the horizon (the maximum over all intervals of their end max).

Definition at line 630 of file linear_relaxation.cc.

◆ DetectOptionalVariables()

void operations_research::sat::DetectOptionalVariables ( const CpModelProto & model_proto,
Model * m )

Automatically detect optional variables.

The variables from the objective cannot be marked as optional!

Compute for each variable the intersection of the enforcement literals of the constraints in which it appears.

Todo
(user): This deals with the simplest cases, but we could try to detect literals that imply all the constraints in which a variable appears to be false. This can be done with an LCA computation in the tree of Boolean implications (once the presolve removes cycles). Not sure if we can properly exploit that afterwards though. Do some research!

Take the intersection.

Auto-detect optional variables.

Definition at line 908 of file cp_model_loader.cc.

◆ DeterministicLoop()

void operations_research::sat::DeterministicLoop ( std::vector< std::unique_ptr< SubSolver > > & subsolvers,
int num_threads,
int batch_size,
int max_num_batches = 0 )

Similar to NonDeterministicLoop(), except this should result in a deterministic solver provided that all SubSolvers respect the Synchronize() contract.

Executes the following loop: 1/ Synchronize all in the given order. 2/ Generate and schedule up to batch_size tasks, using a heuristic to select which ones to run. 3/ Wait for all tasks to finish. 4/ Repeat until no task can be generated in step 2.

If max_num_batches is > 0, stop after that many batches.

We abort the loop after the last synchronize to properly report the final status in case max_num_batches is used.

We first generate all the tasks to run in this batch.

Note
we can't start the tasks right away, since if a task finishes before we schedule everything, we will not be deterministic.

Schedule each task.

Wait for all tasks of this batch to be done before scheduling another batch.

Update times.

Definition at line 120 of file subsolver.cc.

◆ DifferAtGivenLiteral()

LiteralIndex operations_research::sat::DifferAtGivenLiteral ( const std::vector< Literal > & a,
const std::vector< Literal > & b,
Literal l )

Visible for testing. Returns kNoLiteralIndex except if:

  • a and b differ in only one literal.
  • For a, it is the given literal l. In that case, returns the LiteralIndex of the literal in b that is not in a.

A literal of a is not in b, it must be l.

Note
this can only happen once.

A literal of b is not in a; save it. We abort if this happens twice.

Check the corner case of the difference at the end.

Definition at line 993 of file simplification.cc.
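
A hypothetical illustration (the function may expect clauses of equal size given in the simplifier's sorted representation, as below):

  const Literal x1(BooleanVariable(0), true);
  const Literal x2(BooleanVariable(1), true);
  const Literal x3(BooleanVariable(2), true);
  const Literal x4(BooleanVariable(3), true);
  // a = {x1, x2, x4} and b = {x1, x3, x4} differ only at l = x2 (in a) vs
  // x3 (in b), so the call should return x3.Index(); unrelated clauses
  // give kNoLiteralIndex.
  const LiteralIndex diff =
      DifferAtGivenLiteral({x1, x2, x4}, {x1, x3, x4}, x2);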

◆ DisjunctivePrecedenceSearchHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::DisjunctivePrecedenceSearchHeuristic ( Model * model)

The algo goes as follow:

  • For each disjunctive, consider the intervals by start time, and consider adding the first precedence between overlapping intervals.
  • Take the smallest start time amongst all disjunctives.

Compared to SchedulingSearchHeuristic(), this one takes decisions on precedences between tasks, lazily creating a precedence Boolean for the tasks in disjunction.

Note
this one is meant to be used when all Booleans have been fixed, so more as a "completion" heuristic than a fixed search one.
Todo
(user): tie break by size/start-max
Todo
(user): Use conditional lower bounds? note that in automatic search all precedence will be fixed before this is called though. In fixed search maybe we should use the other SchedulingSearchHeuristic().

Swap (a,b) if they have the same start_min.

Corner case in case b can fit before a (size zero)

Todo
(Fdid): Also compare the second part of the precedence in PrecedenceIsBetter() and not just the interval before?

Definition at line 690 of file integer_search.cc.

◆ DivideByGCD()

void operations_research::sat::DivideByGCD ( LinearConstraint * constraint)

Computes the GCD of the constraint coefficients and divides them by it. This also tightens the constraint bounds, assuming all the variables are integers.

Definition at line 261 of file linear_constraint.cc.

◆ DivideLinearExpression()

void operations_research::sat::DivideLinearExpression ( int64_t divisor,
LinearExpressionProto * expr )

Divide the expression in place by 'divisor'. It will DCHECK that 'divisor' divides all constants.

Definition at line 59 of file cp_model_utils.cc.
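
A hypothetical illustration with the proto API: the expression 4*x0 + 6*x1 + 2 divided in place by 2 becomes 2*x0 + 3*x1 + 1 (the divisor must divide every constant):

  LinearExpressionProto expr;
  expr.add_vars(0);
  expr.add_coeffs(4);
  expr.add_vars(1);
  expr.add_coeffs(6);
  expr.set_offset(2);
  DivideLinearExpression(2, &expr);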

◆ DivisionConstraint()

std::function< void(Model *)> operations_research::sat::DivisionConstraint ( AffineExpression num,
AffineExpression denom,
AffineExpression div )
inline

Adds the constraint num / denom = div (with denom > 0).

Definition at line 809 of file integer_expr.h.
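
A minimal sketch, assuming integer variables created with NewIntegerVariable() and wrapped in AffineExpressions (hypothetical illustration):

  Model model;
  const IntegerVariable num = model.Add(NewIntegerVariable(0, 100));
  const IntegerVariable denom = model.Add(NewIntegerVariable(1, 10));
  const IntegerVariable div = model.Add(NewIntegerVariable(0, 100));
  // Enforces num / denom == div; denom > 0 by construction of its domain.
  model.Add(DivisionConstraint(AffineExpression(num), AffineExpression(denom),
                               AffineExpression(div)));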

◆ DomainInProtoContains()

template<typename ProtoWithDomain >
bool operations_research::sat::DomainInProtoContains ( const ProtoWithDomain & proto,
int64_t value )

Returns true if proto.domain() contains the given value. The domain is expected to be encoded as a sorted disjoint interval list.

Definition at line 113 of file cp_model_utils.h.
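
A hypothetical illustration with an IntegerVariableProto whose domain is the union of the intervals [1, 3] and [7, 9]:

  IntegerVariableProto var;
  var.add_domain(1);
  var.add_domain(3);
  var.add_domain(7);
  var.add_domain(9);
  // DomainInProtoContains(var, 2) is true; DomainInProtoContains(var, 5) is false.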

◆ EnforcedClause()

std::function< void(Model *)> operations_research::sat::EnforcedClause ( absl::Span< const Literal > enforcement_literals,
absl::Span< const Literal > clause )
inline

enforcement_literals => clause.

Definition at line 973 of file sat_solver.h.

◆ EnforcementLiteral()

int operations_research::sat::EnforcementLiteral ( const ConstraintProto & ct)
inline

Definition at line 51 of file cp_model_utils.h.

◆ Equality() [1/3]

std::function< void(Model *)> operations_research::sat::Equality ( IntegerVariable a,
IntegerVariable b )
inline

a == b.

Definition at line 637 of file precedences.h.

◆ Equality() [2/3]

std::function< void(Model *)> operations_research::sat::Equality ( IntegerVariable v,
int64_t value )
inline

Fix v to a given value.

Definition at line 2012 of file integer.h.

◆ Equality() [3/3]

std::function< void(Model *)> operations_research::sat::Equality ( Literal a,
Literal b )
inline

a == b.

Definition at line 949 of file sat_solver.h.

◆ EqualityWithOffset()

std::function< void(Model *)> operations_research::sat::EqualityWithOffset ( IntegerVariable a,
IntegerVariable b,
int64_t offset )
inline

a + offset == b.

Definition at line 646 of file precedences.h.
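
A minimal usage sketch of these Model-based helpers, assuming variables created with NewIntegerVariable() (hypothetical illustration):

  Model model;
  const IntegerVariable a = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable b = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable c = model.Add(NewIntegerVariable(0, 20));
  model.Add(Equality(a, 5));               // Fix a to 5.
  model.Add(Equality(a, b));               // a == b, so b is also fixed to 5.
  model.Add(EqualityWithOffset(b, c, 3));  // b + 3 == c, so c is fixed to 8.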

◆ ExactlyOneConstraint()

std::function< void(Model *)> operations_research::sat::ExactlyOneConstraint ( const std::vector< Literal > & literals)
inline

Definition at line 905 of file sat_solver.h.

◆ ExactlyOnePerRowAndPerColumn()

std::function< void(Model *)> operations_research::sat::ExactlyOnePerRowAndPerColumn ( const std::vector< std::vector< Literal > > & graph)
Todo
(user): Change to a sparse API like for the function above.

Definition at line 628 of file circuit.cc.

◆ ExcludeCurrentSolutionAndBacktrack()

std::function< void(Model *)> operations_research::sat::ExcludeCurrentSolutionAndBacktrack ( )
inline

This can be used to enumerate all the solutions. After each SAT call to Solve(), calling this will reset the solver and exclude the current solution, so that the next call to Solve() will give a new solution, or UNSAT if there are no more new solutions.

Note
we only exclude the current decisions, which is an efficient way to not get the same SAT assignment.

Definition at line 1038 of file sat_solver.h.
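
A sketch of the enumeration loop at the pure SAT level (hypothetical; how the model is built and inspected depends on your setup):

  Model model;
  SatSolver* sat_solver = model.GetOrCreate<SatSolver>();
  // ... add Boolean variables and clauses to the model ...
  while (sat_solver->Solve() == SatSolver::FEASIBLE) {
    // Inspect sat_solver->Assignment() here, then exclude this solution.
    model.Add(ExcludeCurrentSolutionAndBacktrack());
  }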

◆ ExpandCpModel()

void operations_research::sat::ExpandCpModel ( PresolveContext * context)

Expands a given CpModelProto by rewriting complex constraints into simpler constraints. This is different from PresolveCpModel() as there is no reduction or simplification of the model. Furthermore, this expansion is mandatory.

None of the functions here need to be run twice. This is because we never create constraints that need to be expanded during presolve.

Make sure all domains are initialized.

Clear the precedence cache.

First pass: we look at constraints that may fully encode variables.

If we only do expansion, we do that as part of the main loop. This way we don't need to call FinalExpansionForLinearConstraint().

Update variable-constraint graph.

Early exit if the model is unsat.

Second pass. We may decide to expand constraints if all their variables are fully encoded.

Cache for variable scanning.

Update variable-constraint graph.

Early exit if the model is unsat.

The precedence cache can become invalid during presolve as it does not handle variable substitution. It is safer just to clear it at the end of the expansion phase.

Make sure the context is consistent.

Update any changed domain from the context.

Definition at line 2483 of file cp_model_expand.cc.

◆ ExploitDominanceRelations()

bool operations_research::sat::ExploitDominanceRelations ( const VarDomination & var_domination,
PresolveContext * context )

Once detected, exploit the dominance relations that appear in the same constraint. This does a full scan of the model.

Return false if the problem is infeasible.

Abort early if there is nothing to do.

Strengthening via domination. When a variable is dominated by a bunch of others, either we can do (var--, dom++) or, if we can't (i.e. all dominated variables are at their upper bound), then maybe all constraints are satisfied if var is high enough and we can also decrease it.

Temporary data that we fill/clear for each linear constraint.

Temporary data used for boolean constraints.

If (a--, b--) is valid, we can always set a to false.

If (b++, a++) is valid, then we can always set b to true.

Todo
(user): More generally, combine with probing? If a dominated variable implies one of its dominants to zero, then it can be set to zero. It seems adding the implication below should have the same effect, but currently it requires a lot of presolve rounds.

Precompute.

Returns the change magnitude in min-activity (resp. max-activity) if all the given variables are fixed to their upper bound.

Tricky: For now we skip complex domains as we are not sure they can be moved correctly.

Look for dominated var.

For strengthening using domination, just consider >= constraints.

Always transform to coeff_magnitude * current_ref + ... >=

When all dominated var are at their upper bound, we miss 'slack' to make the constraint trivially satisfiable.

Any increase such that coeff * delta >= slack makes the constraint trivial.

Note(user): It looks like even if any of the upper bounds of the dominating vars decreases, this should still be valid. Here we only decrease such a bound due to a dominance relation, so the slack when all dominating variables are at their bound should not really decrease.

Compute the delta in min-activity if all dominating vars move to their other bound.

We need to update the precomputed quantities.

Tricky: If there are holes, we can't just reduce the domain to new_ub if it is not a valid value, so we need to compute the Min() of the intersection.

We need to update the precomputed quantities.

Restore.

For any dominance relation still left (i.e. between non-fixed vars), if the variables are Boolean and X is dominated by Y, we can add (X = 1) => (Y = 1). But as soon as we do that, we break some symmetry and cannot add any incompatible relations.

Example: it is possible that X dominates Y and Y dominates X if they both appear in exactly the same constraints with the same coefficient.

Todo
(user): if both variables are in a bool_or, this will allow us to remove the dominated variable. Maybe we should exploit that to decide which implication we add. Or just remove such variables and not add the implications?
Todo
(user): generalize to non Booleans?

Increase the count for variables in the objective to account for the kObjectiveConstraint in their VarToConstraints() list.

We need to account for domains with holes, hence the ValueAtOrAfter().

We have a candidate; however, we need to make sure the dominating variable's upper bound didn't change.

Todo
(user): It looks like testing this is not really necessary. The reductions done by this class seem to be order independent.
Note
we assumed that a fixed point was reached before this is called, so modified_domains should have been empty as we entered this function. If not, the code is still correct, but we might miss some reductions; they will still likely be done later though.
Todo
(user): Is this needed?

The rest of the loop only cares about Booleans. And if this was Boolean, we would have fixed it. If it became Boolean, we wait for the next call.

Todo
(user): maybe the last point can be improved.

dom-- or var++ are now forbidden.

Todo
(user): We should probably be able to do something with this.

Definition at line 1418 of file var_domination.cc.

◆ ExpressionContainsSingleRef()

bool operations_research::sat::ExpressionContainsSingleRef ( const LinearExpressionProto & expr)

Returns true if a linear expression can be reduced to a single ref.

Definition at line 569 of file cp_model_utils.cc.

◆ ExpressionIsAffine()

bool operations_research::sat::ExpressionIsAffine ( const LinearExpressionProto & expr)

Checks if the expression is affine or constant.

Definition at line 574 of file cp_model_utils.cc.

◆ ExpressionsContainsOnlyOneVar()

template<class ExpressionList >
bool operations_research::sat::ExpressionsContainsOnlyOneVar ( const ExpressionList & exprs)

Returns true if there is exactly one variable appearing in all the expressions.

Definition at line 229 of file cp_model_utils.h.

◆ ExtendNegativeFunction()

std::function< IntegerValue(IntegerValue)> operations_research::sat::ExtendNegativeFunction ( std::function< IntegerValue(IntegerValue)> base_f,
IntegerValue period )
inline

Given a super-additive non-decreasing function f(), we periodically extend its restriction from [-period, 0] to Z. Such an extension is not always super-additive, and it is up to the caller to know when this is true or not.

Definition at line 362 of file cuts.h.

◆ ExtractAllSubsetsFromForest()

void operations_research::sat::ExtractAllSubsetsFromForest ( const std::vector< int > & parent,
std::vector< int > * subset_data,
std::vector< absl::Span< const int > > * subsets,
int node_limit = std::numeric_limits< int >::max() )

Given a set of rooted trees on n nodes represented by the parent vector, returns the n sets of nodes corresponding to all the possible subtrees. Note that the output memory is just n, as all subsets will point into the same vector.

This assumes no cycles, otherwise it will not crash but the result will not be correct.

In the TSP context, if the tree is a Gomory-Hu cut tree, this will return a set of "min-cuts" that contains a min-cut for all node pairs.

Todo
(user): This also allocates O(n) memory internally; we could reuse it from call to call if needed.

To avoid reallocating memory, since we need the spans to point inside this vector, we resize subset_data right away.

Starts by creating the corresponding graph and find the root.

Perform a dfs on the rooted tree. The subset_data will just be the node in post-order.

The node was already explored, output its subtree and pop it.

Explore.

Definition at line 464 of file routing_cuts.cc.
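
A hypothetical illustration with a single rooted tree where node 0 is the root (assumed here to be its own parent), nodes 1 and 2 are children of 0, and nodes 3 and 4 are children of 1:

  const std::vector<int> parent = {0, 0, 0, 1, 1};
  std::vector<int> subset_data;
  std::vector<absl::Span<const int>> subsets;
  ExtractAllSubsetsFromForest(parent, &subset_data, &subsets);
  // Each returned span is the node set of one subtree (in post-order),
  // e.g. the subtree rooted at node 1 is {3, 4, 1}; all spans point into
  // subset_data.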

◆ ExtractAssignment()

void operations_research::sat::ExtractAssignment ( const LinearBooleanProblem & problem,
const SatSolver & solver,
std::vector< bool > * assignment )

Copies the assignment from the solver into the given Boolean vector. Note that variables with a greater index than the given num_variables are ignored.

Definition at line 63 of file boolean_problem.cc.

◆ ExtractAssumptions()

std::vector< Literal > operations_research::sat::ExtractAssumptions ( Coefficient stratified_lower_bound,
const std::vector< EncodingNode * > & nodes,
SatSolver * solver )

Extract the assumptions from the nodes.

Definition at line 547 of file encoding.cc.

◆ ExtractElementEncoding()

void operations_research::sat::ExtractElementEncoding ( const CpModelProto & model_proto,
Model * m )

Extract element encodings from exactly_one constraints and lit => var == value constraints. This function must be called after ExtractEncoding() has been called.

Scan all exactly_one constraints and look for literal => var == value to detect element encodings.

Project the implied values onto each integer variable.

Used for logging only.

Search for variable fully covered by the literals of the exactly_one.

We use the order of literals of the exactly_one.

Encode the holes propagation (but we don't create extra literals if they are not already there). If there are non-encoded values, we also add the direct min/max propagation.

Let's not create var >= value or var <= value if they do not exist.

We do not create an extra literal if it doesn't exist.

If all literals supporting a value are false, then the value must be false. Note that such a clause is only useful if there is more than one literal supporting the value; otherwise we should already have detected the equivalence.

Todo
(user): It should be safe, otherwise the exactly_one would have duplicate literals, but I am not sure we can assume that if presolve is off.

And the <= side.

Definition at line 681 of file cp_model_loader.cc.

◆ ExtractEncoding()

void operations_research::sat::ExtractEncoding ( const CpModelProto & model_proto,
Model * m )

The logic assumes that the linear constraints have been presolved, so that equalities with a domain bound have been converted to <= or >=, and so that we never have any trivial inequalities.

Todo
(user): Regroup/presolve two encodings like b => x > 2 and, for the same Boolean, b => x > 5. These shouldn't happen if we merge linear constraints.

Extract the encodings (IntegerVariable <-> Booleans) present in the model. This effectively loads some linear constraints of size 1 that will be marked as already loaded.

Todo
(user): Debug what makes it unsat at this point.

Detection of literals equivalent to (i_var == value). We collect all the half-reified constraints lit => equality or lit => inequality for a given variable, and we will later sort them to detect equivalences.

Todo
(user): We will re-add the same implied bounds during probing, so it might not be necessary to do that here. Also, it might be too early if some of the literal view used in the LP are created later, but that should be fixable via calls to implied_bounds->NotifyNewIntegerView().

Detection of literals equivalent to (i_var >= bound). We also collect all the half-reified parts and we will sort the vector for detection of the equivalences.

Loop over all constraints and fill var_to_equalities and inequalities.

ct is a linear constraint with one term and one enforcement literal.

Detect enforcement_literal => (var >= value or var <= value).

Detect implied bounds. The test is less strict than the above test.

Detect enforcement_literal => (var == value or var != value).

Note
for domains with 2 values like [0, 1], we will detect both == 0 and != 1. Similarly, for a domain [min, max], we should detect both (== min) and (<= min), and both (== max) and (>= max).

Detect Literal <=> X >= value

Todo
(user): In these cases, we could fix the enforcement literal right away or ignore the constraint. Note that it will be done later anyway though.

Encode the half-inequalities.

Detect Literal <=> X == value and associate them in the IntegerEncoder.

Todo
(user): Fully encode variable that are almost fully encoded?
Todo
(user): Try to remove it. Normally we caught UNSAT above, but tests are very flaky (it only happens in parallel). Keeping it there for the time being.

Encode the half-equalities.

Todo
(user): delay this after PropagateEncodingFromEquivalenceRelations()? Otherwise we might create new Boolean variables for no reason. Note however, that in the presolve, we should only use the "representative" in linear constraints, so we should be fine.

If we have just a half-equality, let's not create the <=> literal but just add two implications. If we don't create holes, we don't really need the reverse literal. This way it is also possible for ExtractElementEncoding() to later detect that this literal is actually <=> to var == value, and this way we create one less Boolean for the same result.

Todo
(user): It is not 100% clear what the best encoding is, and whether we should create equivalent literals or rely on propagators instead to push bounds.

Update stats.

Definition at line 396 of file cp_model_loader.cc.

◆ ExtractSubproblem()

void operations_research::sat::ExtractSubproblem ( const LinearBooleanProblem & problem,
const std::vector< int > & constraint_indices,
LinearBooleanProblem * subproblem )

Constructs a sub-problem formed by the constraints with given indices.

Definition at line 499 of file boolean_problem.cc.

◆ ExtractSubSolverName()

std::string operations_research::sat::ExtractSubSolverName ( const std::string & improvement_info)

We assume the subsolver name is always first.

Definition at line 776 of file synchronization.cc.

◆ FailedLiteralProbingRound()

bool operations_research::sat::FailedLiteralProbingRound ( ProbingOptions options,
Model * model )

Similar to ProbeBooleanVariables() but different :-)

First, this does not consider integer variables. It doesn't do any disjunctive reasoning (i.e. changing the domain of an integer variable by intersecting it with the union of what happens when x is fixed and when not(x) is fixed).

However this should be more efficient and just work better for pure Boolean problems. On integer problems, we might also want to run this one first, and then do just one quick pass of ProbeBooleanVariables().

Note
this by itself just does one "round"; look at the code in the Inprocessing class that calls this interleaved with other reductions until a fixed point is reached.

This can fix a lot of literals via failed literal detection, that is when we detect that x => not(x) via propagation after taking x as a decision. It also uses the strongly connected component algorithm to detect equivalent literals.

It will add any detected binary clause (via hyper binary resolution) to the implication graph. See the option comments for more details.

Reset the solver in case it was already used.

When called from Inprocessing, the implication graph should already be a DAG, so these two calls should return right away. But we do need them to get the topological order if this is used in isolation.

This is only needed when options.use_queue is true.

This is only needed when options.use_queue is false.

We delay the fixing of already assigned literals until we go back to level zero.

Depending on the options, we do not use the same order. With tree look, it is better to start with "leaves" first since we try to reuse propagation as much as possible. This is also interesting to do when extracting binary clauses since we will need to propagate everyone anyway, and this should result in fewer clauses that can be removed later by transitive reduction.

However, without tree-look and without the need to extract all binary clauses, it is better to just probe the root of the binary implication graph. This is exactly what happens when we probe using the topological order.

We only use this for the queue version.

We only enqueue literals at level zero if we don't use "tree look".

Todo
(user): Instead of minimizing index in topo order (which might be nice for binary extraction), we could try to maximize reusability in some way.

Probe a literal that implies previous decision.

This is a backtrack marker, go back one level.

Fix any delayed fixed literal.

Probe an unexplored node.

The pass is finished.

Probe a literal that implies previous decision.

Note
contrary to the queue-based implementation, this does not process them in a particular order.

candidate => previous => not(candidate), so we can fix it.

This shouldn't happen if extract_binary_clauses is false. We have an equivalence.

Sync the queue with the new level.

Fix next_decision to false if not already done.

Even if we fixed something at level zero, next_decision might not be fixed! But we can fix it. It can happen because when we propagate with clauses, we might have a => b but not not(b) => not(a). Like a => b and clause (not(a), not(b), c), propagating a will set c, but propagating not(c) will not do anything.

We "delay" the fixing if we are not at level zero so that we can still reuse the current propagation work via tree look.

Todo
(user): Can we be smarter here? Maybe we can still fix the literal without going back to level zero by simply enqueuing it with no reason? it will be backtracked over, but we will still lazily fix it later.

Inspect the newly propagated literals. Depending on the options, try to extract binary clauses via hyper binary resolution and/or mark the literals on the trail so that they do not need to be probed later.

If we can extract a binary clause that subsumes the reason clause, we add the binary clause and remove the subsumed one.

Todo
(user): We could be slightly more generic and subsume some clauses that do not contain last_decision.Negated().

We need to change the reason now that the clause is cleared.

Anything not propagated by the BinaryImplicationGraph is a "new" binary clause. This is because the BinaryImplicationGraph has the highest priority of all propagators.

Note(user): This is not 100% true, since when we launch the clause propagation for one literal we do finish it before calling again the binary propagation.

Todo
(user): Think about trying to extract clause that will not get removed by transitive reduction later. If we can both extract a => c and b => c , ideally we don't want to extract a => c first if we already know that a => b.
Todo
(user): Similar to the previous point, we could find the LCA of all literals in the reason for this propagation and use it as a reason for later hyper binary resolution, like we do when this clause subsumes the reason.

If we don't extract binary clauses, we don't need to explore any of these literals until more variables are fixed.

Inspect the watcher list for last_decision. If we have a blocking literal at true (implied by the last decision), then we have subsumptions.

The intuition behind this is that if a binary clause (a,b) subsumes a clause, and we watch a.Negated() for this clause with a blocking literal b, then this watch entry will never change because we always propagate binary clauses first and the blocking literal will always be true. So after many propagations, we hope to have such a configuration, which is quite cheap to test here.

Tricky: If we have many "decisions" and we do not extract the binary clause, then the fact that last_decision => literal might not be currently encoded in the problem clauses, so if we use that relation to subsume, we should make sure it is added.

Note
it is okay to add duplicate binary clauses; we will clean that up later.

Add the binary clause if needed. Note that we change the reason to a binary one so that we never add the same clause twice.

Tricky: while last_decision would be a valid reason, we need a reason that was assigned before this literal, so we use the decision at the level where this literal was assigned which is an even better reason. Maybe it is just better to change all the reason above to a binary one so we don't have an issue here.

If the variable was true at level zero, there is no point adding the clause.

Todo
(user): We might just want to do that even more lazily by checking for detached clause while propagating here? and do a big cleanup at the end.

Display stats.

Definition at line 498 of file probing.cc.

◆ FillDomainInProto()

template<typename ProtoWithDomain >
void operations_research::sat::FillDomainInProto ( const Domain & domain,
ProtoWithDomain * proto )

Serializes a Domain into the domain field of a proto.

Definition at line 122 of file cp_model_utils.h.
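A minimal usage sketch, assuming the usual OR-Tools include paths; IntegerVariableProto is just one example of a proto with a repeated domain field:

  #include "ortools/sat/cp_model.pb.h"
  #include "ortools/sat/cp_model_utils.h"
  #include "ortools/util/sorted_interval_list.h"

  void Example() {
    // Domain {1..3, 7..9} is serialized as the flat list [1, 3, 7, 9].
    const operations_research::Domain domain =
        operations_research::Domain(1, 3).UnionWith(
            operations_research::Domain(7, 9));
    operations_research::sat::IntegerVariableProto var_proto;
    operations_research::sat::FillDomainInProto(domain, &var_proto);
  }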

◆ FillSolveStatsInResponse()

void operations_research::sat::FillSolveStatsInResponse ( Model * model,
CpSolverResponse * response )

Get the solve statistics from the associated model classes and fills the response with them.

Todo
(user): find a way to clear all stats fields that might be set by one of the callback.

Definition at line 145 of file synchronization.cc.

◆ FillTightenedDomainInResponse()

void operations_research::sat::FillTightenedDomainInResponse ( const CpModelProto & original_model,
const CpModelProto & mapping_proto,
const std::vector< int > & postsolve_mapping,
const std::vector< Domain > & search_domains,
CpSolverResponse * response,
SolverLogger * logger )

Try to postsolve, with "best effort", the reduced domains from the presolved model to the user-given model. See the documentation of the CpSolverResponse tightened_variables field for more information on the caveats.

The [0, num_vars) part will contain the tightened domains.

Start with the domain from the mapping proto. Note that by construction this should be tighter than the original variable domains.

The first test is for the corner case of presolve closing the problem, in which case there is no more info to process.

Currently, no mapping should mean all variables are in common. This happens when presolve is disabled, but we might still have more variables due to expansion, for instance.

There is also the corner case of presolve closing the problem.

This is the normal presolve case. Intersect the domain of the variables in common.

Look for affine relation, and do more intersection.

We can reduce the domain of v1 by using the affine relation and the domain of v2. We have c1 * v1 + c2 * v2 = offset.

Copy the names and replace domains.

Some stats.

Definition at line 410 of file cp_model_postsolve.cc.

◆ FilterAssignedLiteral()

void operations_research::sat::FilterAssignedLiteral ( const VariablesAssignment & assignment,
std::vector< Literal > * core )

A core cannot be all true.

Remove fixed literals from the core.

Definition at line 201 of file optimization.cc.

◆ FilterBoxesAndRandomize()

absl::Span< int > operations_research::sat::FilterBoxesAndRandomize ( absl::Span< const Rectangle > cached_rectangles,
absl::Span< int > boxes,
IntegerValue threshold_x,
IntegerValue threshold_y,
absl::BitGenRef random )

Removes boxes with a size above the thresholds. Also randomizes the order. Because we rely on various heuristics, this allows changing the order from one call to the next.

Definition at line 378 of file diffn_util.cc.

◆ FilterBoxesThatAreTooLarge()

absl::Span< int > operations_research::sat::FilterBoxesThatAreTooLarge ( absl::Span< const Rectangle > cached_rectangles,
absl::Span< const IntegerValue > energies,
absl::Span< int > boxes )

Given the total energy of all rectangles (sum of energies[box]), we know that any box with an area greater than that cannot participate in any "bounding box" conflict. As we remove such a box, the total energy decreases, so we might remove more. This works in O(n log n).

Sort the boxes by increasing area.

Remove all the large boxes until we have one with area smaller than the energy of the boxes below.

Definition at line 394 of file diffn_util.cc.
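A standalone sketch of the filtering idea described above (illustrative only, not the OR-Tools implementation; the Box struct and names are invented for the example):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Box { int64_t area; int64_t energy; };

  // Returns the boxes that can still participate in an energy conflict.
  std::vector<Box> FilterTooLargeBoxes(std::vector<Box> boxes) {
    std::sort(boxes.begin(), boxes.end(),
              [](const Box& a, const Box& b) { return a.area < b.area; });
    int64_t total_energy = 0;
    for (const Box& b : boxes) total_energy += b.energy;
    // Remove from the largest area down while a box cannot be covered by the
    // remaining energy; each removal lowers the total energy.
    while (!boxes.empty() && boxes.back().area > total_energy) {
      total_energy -= boxes.back().energy;
      boxes.pop_back();
    }
    return boxes;
  }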

◆ FinalExpansionForLinearConstraint()

void operations_research::sat::FinalExpansionForLinearConstraint ( PresolveContext * context)

Linear constraints with a complex rhs need to be expanded at the end of the presolve. We do that at the end because the presolve is allowed to simplify such constraints by updating the rhs. Also, the extra variables we create are only linked by a few constraints to the rest of the model and should not be presolvable.

Definition at line 2631 of file cp_model_expand.cc.

◆ FindBestScalingAndComputeErrors()

double operations_research::sat::FindBestScalingAndComputeErrors ( const std::vector< double > & coefficients,
absl::Span< const double > lower_bounds,
absl::Span< const double > upper_bounds,
int64_t max_absolute_activity,
double wanted_absolute_activity_precision,
double * relative_coeff_error,
double * scaled_sum_error )

Given a linear expression Sum_i c_i * X_i with each X_i in [lb_i, ub_i], this returns a scaling factor f such that:

  • 1/ the rounded expression cannot overflow given the domains of the X_i: Sum |std::round(f * c_i) * X_i| <= max_absolute_activity
  • 2/ the error is bounded: | Sum_i (std::round(f * c_i) - f * c_i) | < f * wanted_absolute_activity_precision

This also fills the exact errors made by using the returned scaling factor. The heuristics try to minimize the magnitude of the scaled expression while satisfying the requested precision.

Returns 0.0 if no scaling factor can be found under the condition 1/. Note that we try really hard to satisfy 2/ but we still return our best shot even when 2/ is not satisfied. One can check this by comparing the returned scaled_sum_error / f with wanted_absolute_activity_precision.
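A rough standalone sketch of how such a factor can be searched for with powers of two, mirroring the notes further down (names and bounds are illustrative, not the OR-Tools code):

  #include <algorithm>
  #include <cmath>
  #include <cstdint>
  #include <vector>

  double FindPowerOfTwoScaling(const std::vector<double>& coeffs,
                               const std::vector<double>& lb,
                               const std::vector<double>& ub,
                               int64_t max_abs_activity,
                               double wanted_precision) {
    double best = 0.0;
    for (double f = 1.0; f <= 1e15; f *= 2.0) {
      double activity = 0.0;   // Sum |round(f*c_i)| * max(|lb_i|, |ub_i|).
      double sum_error = 0.0;  // Sum |round(f*c_i) - f*c_i| * max(|lb_i|, |ub_i|).
      for (size_t i = 0; i < coeffs.size(); ++i) {
        const double rounded = std::round(f * coeffs[i]);
        const double bound = std::max(std::abs(lb[i]), std::abs(ub[i]));
        activity += std::abs(rounded) * bound;
        sum_error += std::abs(rounded - f * coeffs[i]) * bound;
      }
      if (activity > static_cast<double>(max_abs_activity)) break;  // 1/ would fail.
      best = f;                                        // Condition 1/ holds.
      if (sum_error < f * wanted_precision) return f;  // Condition 2/ also holds.
    }
    return best;  // Best effort: 0.0 if even f == 1 overflows.
  }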

Todo

(user): unit test this and move to fp_utils.

(user): Ideally the lower/upper should be int64_t so that we can have an exact definition for the max_absolute_activity allowed.

Starts by computing the highest possible factor.

Returns the smallest factor of the form 2^i that gives us a relative sum error of wanted_absolute_activity_precision and still makes sure we will have no integer overflow.

Important: the loop is written in such a way that ComputeScalingErrors() is called on the last factor.

Todo
(user): Make this faster.

This could happen if we always have enough precision.

Because we deal with an approximate input, scaling with a power of 2 might not be the best choice. It is also possible that the user used rational coefficients and then converted them to double (1/2, 1/3, 4/5, etc.). This scaling will recover such rational inputs and might result in smaller overall coefficients, which is good.

Note
if our current precision is already above the requested one, we choose the integer scaling if we get a better precision.

Definition at line 872 of file lp_utils.cc.

◆ FindCpModelSymmetries()

void operations_research::sat::FindCpModelSymmetries ( const SatParameters & params,
const CpModelProto & problem,
std::vector< std::unique_ptr< SparsePermutation > > * generators,
double deterministic_limit,
SolverLogger * logger )

Returns a list of generators of the symmetry group of the given problem. Each generator is a permutation of the integer range [0, n) where n is the number of variables of the problem. They are permutations of the (index representation of the) problem variables.

Note
we ignore the variables that appear in no constraint, instead of outputting the full symmetry group involving them.
Todo
(user): On SAT problems it is more powerful to detect permutations also involving the negation of the problem variables. So that we could find a symmetry x <-> not(y) for instance.
Todo
(user): As long as we only exploit symmetries involving only Boolean variables, we can make this code more efficient by not detecting symmetries involving integer variables.
Todo
(user): Change the API to not return an error when the time limit is reached.

Remove from the permutations the part not concerning the variables.

Note
some permutations may become empty, which means that we had duplicate constraints.

Because variable nodes are in a separate equivalence class from any other node, a cycle can either contain only variable nodes or none, so we just need to check one element of the cycle.

Verify that the cycle's entire support does not touch any variable.

Definition at line 644 of file cp_model_symmetries.cc.

◆ FindDuplicateConstraints()

std::vector< std::pair< int, int > > operations_research::sat::FindDuplicateConstraints ( const CpModelProto & model_proto,
bool ignore_enforcement = false )

Returns the indices of duplicate constraints in the given proto in the first element of each pair. The second element of each pair is the "representative", that is the first constraint in the proto in a set of duplicate constraints.

Empty constraints are ignored. We also do a bit more:

  • We ignore names when comparing constraints.
  • For linear constraints, we ignore the domain. This is because we can just merge them if the constraints are the same.
  • We return the special kObjectiveConstraint (< 0) representative if a linear constraint is parallel to the objective and has no enforcement literals. The domain of such a constraint can just be merged with the objective domain.

If ignore_enforcement is true, we ignore enforcement literals, but do not do the linear domain or objective special cases. This allows covering some other cases like:

  • enforced constraint duplicate of non-enforced one.
  • Two enforced constraints with singleton enforcement (vpphard).

Visible here for testing. This is meant to be called at the end of the presolve where constraints have been canonicalized.

We use a hash map that uses the underlying constraint to compute the hash and the equality for the indices.

Create a special representative for the linear objective.

Todo
(user): we could delete duplicate identical intervals, but we need to make sure references to them are updated.

Nothing we will presolve in this case.

Already present!

Definition at line 13312 of file cp_model_presolve.cc.

◆ FindEmptySpaces()

std::vector< Rectangle > operations_research::sat::FindEmptySpaces ( const Rectangle & bounding_box,
std::vector< Rectangle > ocupied_rectangles )

Given a bounding box and a list of rectangles inside that bounding box, returns a list of rectangles partitioning the empty area inside the bounding box.

Sorting is not necessary for correctness but makes it faster.

Definition at line 1568 of file diffn_util.cc.

◆ FindLinearBooleanProblemSymmetries()

void operations_research::sat::FindLinearBooleanProblemSymmetries ( const LinearBooleanProblem & problem,
std::vector< std::unique_ptr< SparsePermutation > > * generators )

Returns a list of generators of the symmetry group of the given problem. Each generator is a permutation of the integer range [0, 2n) where n is the number of variables of the problem. They are permutations of the (index representation of the) problem literals.

Remap the graph nodes to sort them by equivalence classes.

Todo
(user): inject the appropriate time limit here.

Remove from the permutations the part not concerning the literals.

Note
some permutations may become empty, which means that we had duplicate constraints.
Todo
(user): Remove them beforehand?

Verify that the cycle's entire support does not touch any variable.

Definition at line 683 of file boolean_problem.cc.

◆ FindRationalFactor()

int64_t operations_research::sat::FindRationalFactor ( double x,
int64_t limit,
double tolerance )

This uses the best rational approximation of x via continued fractions. It is probably not the best implementation, but according to the unit test, it seems to do the job.

Returns the smallest factor f such that f * abs(x) is an integer modulo the given tolerance relative to f (we use f * tolerance). It only looks for f smaller than the given limit. Returns zero if no such factor exists below the limit.

The complexity is a lot less than O(limit), but it is possible that we might miss the smallest such factor if the tolerance used is too low. This is because we only rely on the best rational approximations of x with increasing denominator.

Definition at line 133 of file lp_utils.cc.
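A standalone sketch of the continued-fraction idea (illustrative only, not the OR-Tools implementation): it scans the convergent denominators of x and checks the contract described above.

  #include <cmath>
  #include <cstdint>

  int64_t SmallRationalFactor(double x, int64_t limit, double tolerance) {
    x = std::abs(x);
    // q_prev / q_curr are the denominators of successive convergents of x.
    int64_t q_prev = 0, q_curr = 1;
    double value = x;
    for (int iter = 0; iter < 64 && q_curr <= limit; ++iter) {
      const double rounded = std::round(q_curr * x);
      if (std::abs(q_curr * x - rounded) <= q_curr * tolerance) return q_curr;
      const double frac = value - std::floor(value);
      if (frac < 1e-12) break;  // x is (numerically) rational already.
      value = 1.0 / frac;
      const int64_t a = static_cast<int64_t>(std::floor(value));
      const int64_t q_next = a * q_curr + q_prev;  // Standard recurrence.
      q_prev = q_curr;
      q_curr = q_next;
    }
    return 0;
  }

  // For instance, SmallRationalFactor(1.0 / 3.0, 100, 1e-6) returns 3.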

◆ FindRectanglesWithEnergyConflictMC()

FindRectanglesResult operations_research::sat::FindRectanglesWithEnergyConflictMC ( const std::vector< RectangleInRange > & intervals,
absl::BitGenRef random,
double temperature,
double candidate_energy_usage_factor )

Pick a change with a probability proportional to exp(- delta_E / Temp)

Definition at line 1483 of file diffn_util.cc.

◆ FindSingleLinearDifference()

bool operations_research::sat::FindSingleLinearDifference ( const LinearConstraintProto & lin1,
const LinearConstraintProto & lin2,
int * var1,
int64_t * coeff1,
int * var2,
int64_t * coeff2 )

Same as LinearsDifferAtOneTerm() below but also fills the differing terms.

Note
we can't have both undefined or the loop would have exited.

Same term, continue.

We have a diff. term i not in lin2.

term j not in lin1.

Coeffs differ. Return if we already had a diff.

Definition at line 688 of file presolve_util.cc.

◆ FingerprintExpression()

uint64_t operations_research::sat::FingerprintExpression ( const LinearExpressionProto & lin,
uint64_t seed )

Returns a stable fingerprint of a linear expression.

Definition at line 636 of file cp_model_utils.cc.

◆ FingerprintModel()

uint64_t operations_research::sat::FingerprintModel ( const CpModelProto & model,
uint64_t seed )

Returns a stable fingerprint of a model.

Fingerprint the objective.

Todo
(user): Should we fingerprint decision strategies?

Definition at line 647 of file cp_model_utils.cc.

◆ FingerprintRepeatedField()

template<class T >
uint64_t operations_research::sat::FingerprintRepeatedField ( const google::protobuf::RepeatedField< T > & sequence,
uint64_t seed )
inline

Definition at line 248 of file cp_model_utils.h.

◆ FingerprintSingleField()

template<class T >
uint64_t operations_research::sat::FingerprintSingleField ( const T & field,
uint64_t seed )
inline

Definition at line 256 of file cp_model_utils.h.

◆ FirstUnassignedVarAtItsMinHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::FirstUnassignedVarAtItsMinHeuristic ( const std::vector< IntegerVariable > & vars,
Model * model )
Todo
(user): the complexity caused by the linear scan in this heuristic and the one below is ok when search_branching is set to SAT_SEARCH because it is not executed often, but otherwise it is done for each search decision, which seems expensive. Improve.

Decision heuristic for SolveIntegerProblemWithLazyEncoding(). Returns a function that will return the literal corresponding to the fact that the first currently non-fixed variable value is <= its min. The function will return kNoLiteralIndex if all the given variables are fixed.

Note
this function will create the associated literal if needed.

Definition at line 169 of file integer_search.cc.

◆ FixedDivisionConstraint()

std::function< void(Model *)> operations_research::sat::FixedDivisionConstraint ( AffineExpression a,
IntegerValue b,
AffineExpression c )
inline

Adds the constraint: a / b = c where b is a constant.

Definition at line 828 of file integer_expr.h.

◆ FixedModuloConstraint()

std::function< void(Model *)> operations_research::sat::FixedModuloConstraint ( AffineExpression a,
IntegerValue b,
AffineExpression c )
inline

Adds the constraint: a % b = c where b is a constant.

Definition at line 842 of file integer_expr.h.

◆ FixedWeightedSum()

template<typename VectorInt >
std::function< void(Model *)> operations_research::sat::FixedWeightedSum ( const std::vector< IntegerVariable > & vars,
const VectorInt & coefficients,
int64_t value )
inline

Weighted sum == constant.

Definition at line 458 of file integer_expr.h.

◆ floor()

operations_research::sat::floor ( |P|/ 2)

Reduces v modulo the elements_to_consider first elements of the (normal form) basis. The leading coefficient of a basis element is the last one. In other terms, the basis has the form:

  A 0 0 0 0 0
  * B 0 0 0 0
  * * C 0 0 0
  .............

with non-zero pivot elements A, B, C, ..., and the reduction is performed in such a way that, for a pivot P of the basis and the corresponding entry x of v at the end of the reduction, |x| is at most floor(|P| / 2).

◆ FloorOfRatio()

template<typename IntType >
IntType operations_research::sat::FloorOfRatio ( IntType numerator,
IntType denominator )

Definition at line 734 of file util.h.

◆ FloorRatio()

IntegerValue operations_research::sat::FloorRatio ( IntegerValue dividend,
IntegerValue positive_divisor )
inline

Definition at line 94 of file integer.h.
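For intuition, a standalone sketch of floor division (rounding toward negative infinity), which is the semantic these ratio helpers are meant to provide, as opposed to C++'s truncation toward zero (illustrative code, not the library implementation):

  #include <cstdint>

  int64_t FloorDiv(int64_t dividend, int64_t positive_divisor) {
    const int64_t q = dividend / positive_divisor;  // Truncated quotient.
    const int64_t r = dividend % positive_divisor;
    return (r != 0 && dividend < 0) ? q - 1 : q;
  }

  // FloorDiv(7, 2) == 3 and FloorDiv(-7, 2) == -4, while -7 / 2 == -3 in C++.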

◆ FloorSquareRoot()

int64_t operations_research::sat::FloorSquareRoot ( int64_t a)

The argument must be non-negative.

Todo
(user): Find a better implementation? In practice, passing via double is almost always correct, but the CapProd() might be a bit slow. However this is only called when we do propagate something.

Definition at line 256 of file util.cc.

◆ FollowHint()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::FollowHint ( const std::vector< BooleanOrIntegerVariable > & vars,
const std::vector< IntegerValue > & values,
Model * model )

This is not ideal as we reserve an int for the full duration of the model even if we use this FollowHint() function just for a while. But it is an easy solution to avoid references to deleted memory in the RevIntRepository(). Note that once we backtrack, these references will disappear.

If we retake a decision at this level, we will restart from i.

If we retake a decision at this level, we will restart from i.

If the value is outside the current possible domain, we skip it.

Definition at line 1110 of file integer_search.cc.

◆ FormatCounter()

std::string operations_research::sat::FormatCounter ( int64_t num)

Prints a positive number with separators for easier reading (ex: 1'348'065).

Definition at line 49 of file util.cc.
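An illustrative standalone sketch (not the OR-Tools implementation) of inserting "'" separators every three digits, matching the 1'348'065 style shown above:

  #include <cstdint>
  #include <string>

  std::string WithSeparators(int64_t num) {
    std::string s = std::to_string(num);
    for (int pos = static_cast<int>(s.size()) - 3; pos > 0; pos -= 3) {
      s.insert(pos, "'");
    }
    return s;
  }

  // WithSeparators(1348065) == "1'348'065".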

◆ FormatName()

std::string operations_research::sat::FormatName ( absl::string_view name)
inline

This is used to format our table first row entry.

Definition at line 196 of file util.h.

◆ FormatTable()

std::string operations_research::sat::FormatTable ( std::vector< std::vector< std::string > > & table,
int spacing = 2 )

Display tabular data by auto-computing cell widths. Note that we right-align everything but the first row/col, which is assumed to be the table name and is left-aligned.

We order by name.

We currently only left align the table name.

Definition at line 77 of file util.cc.

◆ FullMerge()

EncodingNode operations_research::sat::FullMerge ( Coefficient upper_bound,
EncodingNode * a,
EncodingNode * b,
SatSolver * solver )

Merges the two given EncodingNode by creating a new node that corresponds to the sum of the two given ones. The given upper_bound is interpreted as a bound on this sum, and allows creating fewer binary variables.

Fix the variable to false because of the given upper_bound.

Fix the variable to false because of the given upper_bound.

if x <= ia and y <= ib, then x + y <= ia + ib.

if x > ia and y > ib, then x + y > ia + ib + 1.

Definition at line 388 of file encoding.cc.

◆ FullyCompressTuples()

std::vector< std::vector< absl::InlinedVector< int64_t, 2 > > > operations_research::sat::FullyCompressTuples ( absl::Span< const int64_t > domain_sizes,
std::vector< std::vector< int64_t > > * tuples )
Todo
(user): We can probably reuse the tuples memory always and never create new ones. We should also be able to code an iterative version of this. Note however that the recursion level is bounded by the number of columns, which should be small.

Similar to CompressTuples() but produces a final table where each cell is a set of values. This should result in a table that can still be encoded efficiently in SAT but with fewer tuples and thus fewer extra Booleans. Note that if a set of values is empty, it is interpreted as "any", so we can gain some space.

The passed tuples vector is used as temporary memory and is destroyed. We interpret kTableAnyValue as an "any" tuple.

Todo
(user): To reduce memory, we could return some absl::Span in the last layer instead of vector.
Todo
(user): The final compression depends on the order of the variables. For instance the table (1,1)(1,2)(1,3)(1,4)(2,3) can either be compressed as (1,*)(2,3) or (1,{1,2,4})({1,2},3). More experiments are needed to devise a better heuristic. It might for example be good to call CompressTuples() first.

Definition at line 878 of file util.cc.

◆ FullyEncodeVariable()

std::function< std::vector< ValueLiteralPair >(Model *)> operations_research::sat::FullyEncodeVariable ( IntegerVariable var)
inline

Calling model.Add(FullyEncodeVariable(var)) will create one literal per value in the domain of var (if not already done), and wire everything correctly. This also returns the full encoding, see the FullDomainEncoding() method of the IntegerEncoder class.

Definition at line 2074 of file integer.h.
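A hedged usage sketch; NewIntegerVariable() and the include paths below are assumed from the other sat headers, so adapt them to your build setup:

  #include <vector>

  #include "ortools/sat/integer.h"
  #include "ortools/sat/model.h"

  void Example() {
    using namespace operations_research::sat;
    Model model;
    // Assumed helper: creates an integer variable with domain [0, 3].
    const IntegerVariable var = model.Add(NewIntegerVariable(0, 3));
    // One literal per value in the domain; returns the full encoding.
    const std::vector<ValueLiteralPair> encoding =
        model.Add(FullyEncodeVariable(var));
    (void)encoding;
  }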

◆ GenerateCompletionTimeCutsWithEnergy()

void operations_research::sat::GenerateCompletionTimeCutsWithEnergy ( absl::string_view cut_name,
std::vector< CtEvent > events,
IntegerValue capacity_max,
bool skip_low_sizes,
Model * model,
LinearConstraintManager * manager )

We generate the cut from the Smith's rule from: M. Queyranne, Structure of a simple scheduling polyhedron, Mathematical Programming 58 (1993), 263–285

The original cut is: sum(end_min_i * duration_min_i) >= (sum(duration_min_i^2) + sum(duration_min_i)^2) / 2. We strengthen this cut by noticing that if all tasks start after S, then replacing end_min_i by (end_min_i - S) is still valid.

A second difference is that we look at a set of intervals starting after a given start_min, sorted by relative (end_lp - start_min).

Todo
(user): merge with Packing cuts.

Sort by start min to bucketize by start_min.

Skip to the next start_min value.

We look at events that start before sequence_start_min but are forced to cross this time point. In that case, we replace such an event by a truncated event starting at sequence_start_min. To do this, we reduce the size_min, align the start_min with sequence_start_min, and scale the energy down accordingly.

Build the vector of energies as the vector of sizes.

This is competing with the brute force approach. Skip cases covered by the other code.

We compute the cuts like if it was a disjunctive cut with all the duration actually equal to energy / capacity. But to keep the computation in the integer domain, we multiply by capacity everywhere instead.

We compute the efficacy in the unscaled domain where the L2 norm of the cut is exactly the square root of the sum of squared durations.

Todo
(user): Check overflow and ignore if too big.

Definition at line 1271 of file scheduling_cuts.cc.

◆ GenerateCumulativeEnergeticCuts()

void operations_research::sat::GenerateCumulativeEnergeticCuts ( const std::string & cut_name,
const util_intops::StrongVector< IntegerVariable, double > & lp_values,
std::vector< EnergyEvent > events,
const AffineExpression & capacity,
TimeLimit * time_limit,
Model * model,
LinearConstraintManager * manager )

Currently, we look at all the possible time windows, and will push all cuts in the TopNCuts object. From our observations, this generator creates only a few cuts for a given run.

The complexity of this loop is n^3. If we follow the latest research, we could implement this in n log^2(n). Still, this is not visible in the profile as we only call this method at the root node.

Compute relevant time points.

Todo
(user): We could reduce this set.

Checks the time limit if the problem is too big.

After max_end_min, all tasks can fit before window_start.

Compute the max energy available for the tasks.

Add all contributions.

Definition at line 481 of file scheduling_cuts.cc.

◆ GenerateCumulativeEnergeticCutsWithMakespanAndFixedCapacity()

void operations_research::sat::GenerateCumulativeEnergeticCutsWithMakespanAndFixedCapacity ( absl::string_view cut_name,
const util_intops::StrongVector< IntegerVariable, double > & lp_values,
std::vector< EnergyEvent > events,
IntegerValue capacity,
AffineExpression makespan,
TimeLimit * time_limit,
Model * model,
LinearConstraintManager * manager )

This cumulative energetic cut generator will split the cumulative span in 2 regions.

In the region before the min of the makespan, we will compute a more precise reachable profile and have a better estimation of the energy available between two time points. The improvement can come from two sources:

  • subset sum indicates that the max capacity cannot be reached.
  • sum of demands < max capacity.

In the region after the min of the makespan, we will use fixed_capacity * (makespan - makespan_min) as the available energy.

Checks the precondition of the code.

Currently, we look at all the possible time windows, and will push all cuts in the TopNCuts object. From our observations, this generator creates only a few cuts for a given run.

The complexity of this loop is n^3. If we follow the latest research, we could implement this in n log^2(n). Still, this is not visible in the profile as we only call this method at the root node.

Compute relevant time points.

Todo
(user): We could reduce this set.
Todo
(user): we can compute the max usage between makespan_min and makespan_max.

In practice, it stops the DP as the upper bound is reached.

Checks the time limit if the problem is too big.

After max_end_min, all tasks can fit before window_start.

Update states for the name of the generated cut.

We prefer using the makespan as the cut will tighten itself when the objective value is improved.

We reuse the min cut violation to allow some slack in the comparison between the two computed energy values.

Add contributions from all events.

Definition at line 274 of file scheduling_cuts.cc.

◆ GenerateCutsBetweenPairOfNonOverlappingTasks()

void operations_research::sat::GenerateCutsBetweenPairOfNonOverlappingTasks ( absl::string_view cut_name,
const util_intops::StrongVector< IntegerVariable, double > & lp_values,
std::vector< CachedIntervalData > events,
IntegerValue capacity_max,
Model * model,
LinearConstraintManager * manager )

Balas disjunctive cuts on 2 tasks a and b: start_1 * (duration_1 + start_min_1 - start_min_2) + start_2 * (duration_2 + start_min_2 - start_min_1) >= duration_1 * duration_2 + start_min_1 * duration_2 + start_min_2 * duration_1. From: David L. Applegate, William J. Cook: A Computational Study of the Job-Shop Scheduling Problem. INFORMS Journal on Computing, Volume 3, Number 1, Winter 1991, 149-156.

Checks hypothesis from the cut.

Encode only the interesting pairs.

interval_1.end <= interval_2.start

interval_2.end <= interval_1.start

Definition at line 849 of file scheduling_cuts.cc.

◆ GenerateGraphForSymmetryDetection()

template<typename Graph >
Graph * operations_research::sat::GenerateGraphForSymmetryDetection ( const LinearBooleanProblem & problem,
std::vector< int > * initial_equivalence_classes )

Returns a graph whose automorphisms can be mapped back to the symmetries of the given LinearBooleanProblem.

Any permutation of the graph that respects the initial_equivalence_classes output can be mapped to a symmetry of the given problem simply by taking its restriction on the first 2 * num_variables nodes and interpreting its index as a literal index. In a sense, a node with a low enough index i is in one-to-one correspondence with the literal of index i (using the index representation of literals).

The format of the initial_equivalence_classes is the same as the one described in GraphSymmetryFinder::FindSymmetries(). The classes must be dense in [0, num_classes) and any symmetry will only map nodes with the same class between each other.

First, we convert the problem to its canonical representation.

Todo
(user): reserve the memory for the graph? not sure it is worthwhile since it would require some linear scan of the problem though.

We will construct a graph with 3 different types of nodes that must be in different equivalence classes.

First, we need one node per literal with an edge between each literal and its negation.

We have two nodes for each variable.

Note
the indices are in [0, 2 * num_variables) and in one to one correspondence with the index representation of a literal.

We use 0 for their initial equivalence class, but that may be modified with the objective coefficient (see below).

Literals with different objective coeffs shouldn't be in the same class.

We need to canonicalize the objective to regroup literals corresponding to the same variables. Note that we don't care about the offset or optimization direction here, we just care about literals with the same canonical coefficient.

Then, for each constraint, we will have one or more nodes.

First we have a node for the constraint with an equivalence class depending on the rhs.

Note
Since we add nodes one by one, initial_equivalence_classes->size() gives the number of nodes at any point, which we use as next node index.

This node will also be connected to all literals of the constraint with a coefficient of 1. Literals with new coefficients will be grouped under a new node connected to the constraint_node_index.

Note
this works because a canonical constraint is sorted by increasing coefficient value (all positive).

Connect this node to the constraint node. Note that we don't technically need the arcs in both directions, but that may help a bit the algorithm to find symmetries.

Connect this node to the associated term.literal node. Note that we don't technically need the arcs in both directions, but that may help a bit the algorithm to find symmetries.

Definition at line 545 of file boolean_problem.cc.

◆ GenerateInterestingSubsets()

void operations_research::sat::GenerateInterestingSubsets ( int num_nodes,
const std::vector< std::pair< int, int > > & arcs,
int stop_at_num_components,
std::vector< int > * subset_data,
std::vector< absl::Span< const int > > * subsets )

Given a graph with nodes in [0, num_nodes) and a set of arcs (the order is important), this will:

  • Start with each node in a separate "subset".
  • Consider the arcs in order, and each time one connects two separate subsets, merge the two subsets into a new one.
  • Stop when there are only 2 subsets left.
  • Output all subsets generated this way (at most 2 * num_nodes). The subset spans will point into the subset_data vector (which will be of size exactly num_nodes).

This is a heuristic to generate interesting cuts for TSP or other graph-based constraints. We roughly follow the algorithm described in section 6 of "The Traveling Salesman Problem, A Computational Study", David L. Applegate, Robert E. Bixby, Vasek Chvatal, William J. Cook.

Note
this is mainly a "symmetric" case algo, but it does still work for the asymmetric case.

We will do a union-find by adding one by one the arcs of the LP solution in the order above. Every intermediate set during this construction will be a candidate for a cut.

In parallel to the union-find, to efficiently reconstruct these sets (at most num_nodes), we construct a "decomposition forest" of the different connected components. Note that we don't exploit any asymmetric nature of the graph here. This is exactly the algo 6.3 in the book above.
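A standalone sketch of this union-find pass (illustrative only; the real code avoids copying the members around by using the decomposition forest mentioned above):

  #include <numeric>
  #include <utility>
  #include <vector>

  struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
      std::iota(parent.begin(), parent.end(), 0);
    }
    int Find(int x) { return parent[x] == x ? x : parent[x] = Find(parent[x]); }
  };

  // Returns, for each arc that merges two components, the content of the new
  // component (at most num_nodes - 1 such merges).
  std::vector<std::vector<int>> CandidateSubsets(
      int num_nodes, const std::vector<std::pair<int, int>>& arcs) {
    UnionFind uf(num_nodes);
    std::vector<std::vector<int>> members(num_nodes);
    for (int i = 0; i < num_nodes; ++i) members[i] = {i};
    std::vector<std::vector<int>> subsets;
    for (const auto& [tail, head] : arcs) {
      const int a = uf.Find(tail), b = uf.Find(head);
      if (a == b) continue;
      // Merge b into a and record the new component as a candidate subset.
      members[a].insert(members[a].end(), members[b].begin(), members[b].end());
      members[b].clear();
      uf.parent[b] = a;
      subsets.push_back(members[a]);
    }
    return subsets;
  }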

Update the decomposition forest, note that the number of nodes is growing.

It is important that the union-find representative is the same node.

For each node in the decomposition forest, try to add a cut for the set formed by the node and its children. To do that efficiently, we first order the nodes so that for each node in a tree, the set of children forms a consecutive span in the subset_data vector. This vector just lists the nodes in the "pre-order" graph traversal order. The spans will point inside the subset_data vector, which is why we initialize it once and for all.

Definition at line 404 of file routing_cuts.cc.

◆ GenerateItemsRectanglesWithNoPairwiseConflict()

std::vector< ItemForPairwiseRestriction > operations_research::sat::GenerateItemsRectanglesWithNoPairwiseConflict ( const std::vector< Rectangle > & rectangles,
double slack_factor,
absl::BitGenRef random )

Definition at line 106 of file 2d_orthogonal_packing_testing.cc.

◆ GenerateItemsRectanglesWithNoPairwisePropagation()

std::vector< ItemForPairwiseRestriction > operations_research::sat::GenerateItemsRectanglesWithNoPairwisePropagation ( int num_rectangles,
double slack_factor,
absl::BitGenRef random )

Now run the propagator until there are no more pairwise conditions.

Remove the slack we added

Definition at line 129 of file 2d_orthogonal_packing_testing.cc.

◆ GenerateNonConflictingRectangles()

std::vector< Rectangle > operations_research::sat::GenerateNonConflictingRectangles ( int num_rectangles,
absl::BitGenRef random )

Definition at line 30 of file 2d_orthogonal_packing_testing.cc.

◆ GenerateNoOvelap2dCompletionTimeCutsWithEnergy()

void operations_research::sat::GenerateNoOvelap2dCompletionTimeCutsWithEnergy ( absl::string_view cut_name,
std::vector< DiffnCtEvent > events,
bool use_lifting,
bool skip_low_sizes,
Model * model,
LinearConstraintManager * manager )

We generate the cut from the Smith's rule from: M. Queyranne, Structure of a simple scheduling polyhedron, Mathematical Programming 58 (1993), 263–285

The original cut is: sum(end_min_i * duration_min_i) >= (sum(duration_min_i^2) + sum(duration_min_i)^2) / 2. We strengthen this cut by noticing that if all tasks start after S, then replacing end_min_i by (end_min_i - S) is still valid.

A second difference is that we look at a set of intervals starting after a given start_min, sorted by relative (end_lp - start_min).

Todo
(user): merge with Packing cuts.

Sort by start min to bucketize by start_min.

Skip to the next start_min value.

We look at events that start before sequence_start_min but are forced to cross this time point. In that case, we replace such an event by a truncated event starting at sequence_start_min. To do this, we reduce the size_min, align the start_min with sequence_start_min, and scale the energy down accordingly.

Build the vector of energies as the vector of sizes.

This is competing with the brute force approach. Skip cases covered by the other code.

For the capacity, we use the worst |y_max - y_min|, and if all the tasks so far have a fixed demand with a gcd > 1, we can round it down.

Todo
(user): Use dynamic programming to compute all possible values for the sum of demands as long as the involved numbers are small or the number of tasks are small.

We compute the cuts like if it was a disjunctive cut with all the duration actually equal to energy / capacity. But to keep the computation in the integer domain, we multiply by capacity everywhere instead.

We compute the efficacy in the unscaled domain where the L2 norm of the cut is exactly the square root of the sum of squared durations.

Todo
(user): Check overflow and ignore if too big.

Definition at line 404 of file diffn_cuts.cc.

◆ GenerateNoOverlap2dEnergyCut()

void operations_research::sat::GenerateNoOverlap2dEnergyCut ( absl::Span< const std::vector< LiteralValueValue > > energies,
absl::Span< int > rectangles,
absl::string_view cut_name,
Model * model,
LinearConstraintManager * manager,
SchedulingConstraintHelper * x_helper,
SchedulingConstraintHelper * y_helper,
SchedulingDemandHelper * y_demands_helper )

We can always skip events.

Compute y_spread.

The sum of all energies can be used to stop iterating early.

For each start time, we will keep the most violated cut generated while scanning the residual intervals.

Accumulate intervals, areas, energies and check for potential cuts.

We sort all tasks with x_start_min(task) >= x_start_min(start_index) by increasing end max.

Let's process residual tasks and evaluate the violation of the cut at each step. We follow the same structure as the cut creation code below.

Dominance rule. If the next interval also fits in [window_min, window_max]*[y_min, y_max], the cut will be stronger with the next interval/rectangle.

Checks the current area vs the sum of all energies. The area is capacity_profile.GetBoundingArea(). We can compare it to the bounding box area: (window_max - window_min) * (y_max - y_min).

Compute the violation of the potential cut.

A maximal violated cut has been found. Build it and add it to the pool.

Definition at line 138 of file diffn_cuts.cc.

◆ GenerateSchedulingNeighborhoodFromIntervalPrecedences()

Neighborhood operations_research::sat::GenerateSchedulingNeighborhoodFromIntervalPrecedences ( absl::Span< const std::pair< int, int > > precedences,
const CpSolverResponse & initial_solution,
const NeighborhoodGeneratorHelper & helper )

Helper method for the scheduling neighborhood generators. Returns a full neighborhood enriched with the set of precedences passed to the generate method.

Collect seen intervals.

Fix the presence/absence of unseen intervals.

If the interval is not enforced, we just relax it. If it belongs to an exactly one constraint, and the enforced interval is not relaxed, then propagation will force this interval to stay not enforced. Otherwise, LNS will be able to change which interval will be enforced among all alternatives.

Fix the value.

Set the current solution as a hint.

Definition at line 1863 of file cp_model_lns.cc.

◆ GenerateSchedulingNeighborhoodFromRelaxedIntervals()

Neighborhood operations_research::sat::GenerateSchedulingNeighborhoodFromRelaxedIntervals ( absl::Span< const int > intervals_to_relax,
absl::Span< const int > variables_to_fix,
const CpSolverResponse & initial_solution,
absl::BitGenRef random,
const NeighborhoodGeneratorHelper & helper )

Helper method for the scheduling neighborhood generators. Returns a neighborhood defined from the given set of intervals to relax. For each scheduling constraint, it adds strict relation order between the non-relaxed intervals.

We will extend the set with some interval that we cannot fix.

Fix the presence/absence of non-relaxed intervals.

If the interval is not enforced, we just relax it. If it belongs to an exactly one constraint, and the enforced interval is not relaxed, then propagation will force this interval to stay not enforced. Otherwise, LNS will be able to change which interval will be enforced among all alternatives.

Fix the value.

We differ from the ICAPS05 paper: we do not consider ignored intervals when generating the precedence graph, instead of building the full graph, removing intervals, and then reconstructing the precedence graph heuristically.

Fix the extra variables passed as parameters.

Set the current solution as a hint.

Definition at line 1927 of file cp_model_lns.cc.

◆ GenerateShortCompletionTimeCutsWithExactBound()

void operations_research::sat::GenerateShortCompletionTimeCutsWithExactBound ( const std::string & cut_name,
std::vector< CtEvent > events,
IntegerValue capacity_max,
Model * model,
LinearConstraintManager * manager )
Todo
(user): Improve performance
  • detect disjoint tasks (no need to crossover to the second part)
  • better caching of explored states

Sort by start min to bucketize by start_min.

Skip to the next start_min value.

We look at events that start before sequence_start_min but are forced to cross this time point. In that case, we replace such an event by a truncated event starting at sequence_start_min. To do this, we reduce the size_min and align the start_min with sequence_start_min.

Both cases with 1 or 2 tasks are trivial and independent of the order. Also, if capacity is not exceeded, pushing all ends left is a valid LP assignment.

We re-index the elements, so we will start enumerating the permutation from there. Note that if the previous i caused an abort because of the threshold, we might abort right away again!

Unweighted cuts.

Weighted cuts.

Definition at line 1138 of file scheduling_cuts.cc.

◆ GetCoefficient()

IntegerValue operations_research::sat::GetCoefficient ( IntegerVariable var,
const LinearExpression & expr )

Returns the coefficient of the variable in the expression. Works in linear time.

Note
GetCoefficient(NegationOf(var), expr) == -GetCoefficient(var, expr).

Definition at line 446 of file linear_constraint.cc.

◆ GetCoefficientOfPositiveVar()

IntegerValue operations_research::sat::GetCoefficientOfPositiveVar ( const IntegerVariable var,
const LinearExpression & expr )

Definition at line 458 of file linear_constraint.cc.

◆ GetFactorT()

IntegerValue operations_research::sat::GetFactorT ( IntegerValue rhs_remainder,
IntegerValue divisor,
IntegerValue max_magnitude )

Compute the larger t <= max_t such that t * rhs_remainder >= divisor / 2.

This is just a separate function as it is slightly faster to compute the result only once.

Visible for testing. Returns a function f on integers such that:

  • f is non-decreasing.
  • f is super-additive: f(a) + f(b) <= f(a + b)
  • 1 <= f(divisor) <= max_scaling
  • For all x, f(x * divisor) = x * f(divisor)
  • For all x, f(x * divisor + remainder) = x * f(divisor)

Preconditions:

  • 0 <= remainder < divisor.
  • 1 <= max_scaling.

This is used in IntegerRoundingCut() and is responsible for "strengthening" the cut. Just taking f(x) = x / divisor results in the non-strengthened cut, and using any function that strictly dominates this one is better.

Algorithm:

  • We first scale by a factor t so that rhs_remainder >= divisor / 2.
  • Then, if max_scaling == 2, we use the function described in "Strengthening Chvatal-Gomory cuts and Gomory fractional cuts", Adam N. Letchford, Andrea Lodi.
  • Otherwise, we use a generalization of this which is a discretized version of the classical MIR rounding function that only takes values of the form "an_integer / max_scaling". As max_scaling goes to infinity, this converges to the real-valued MIR function.
Note
for each value of max_scaling we will get a different function. And that there is no dominance relation between any of these functions. So it could be nice to try to generate a cut using different values of max_scaling.

Make sure that when we multiply the rhs or the coefficient by a factor t, we do not have an integer overflow. Note that the rhs should be counted in max_magnitude since we will apply f() on it.

Definition at line 482 of file cuts.cc.

◆ GetFirstSolutionBaseParams()

std::vector< SatParameters > operations_research::sat::GetFirstSolutionBaseParams ( const SatParameters & base_params)

Returns a vector of base parameters to specify solvers specialized to find an initial solution. This is meant to be used with RepeatParameters() and FilterParameters().

Add one feasibility jump.

Random search.

Add a second feasibility jump.

Random quick restart.

Add a linear feasibility jump. This one seems to perform worse, so we add only 1 for 2 normal LS, and we add this late.

Definition at line 924 of file cp_model_search.cc.

◆ GetFullWorkerParameters()

std::vector< SatParameters > operations_research::sat::GetFullWorkerParameters ( const SatParameters & base_params,
const CpModelProto & cp_model,
int num_already_present,
SubsolverNameFilter * filter )
Note
in flatzinc setting, we know we always have a fixed search defined.

Things to try:

  • Specialize for purely boolean problems
  • Disable linearization_level options for non linear problems
  • Fast restart in randomized search
  • Different propagation levels for scheduling constraints

Defines a set of named strategies so it is easier to read in one place the ones that are used. See below.

We only use a "fixed search" worker if some strategy is specified or if we have a scheduling model.

Todo
(user): For scheduling, this is important to find a good first solution, but afterwards it is not really great and should probably be replaced by an LNS worker.

Our current set of strategies

Todo
(user): Avoid launching two strategies if they are the same, like if there is no lp, or everything is already linearized at level 1.

Starts by adding user specified ones.

We use the default if empty.

Note
the order is important as the list can be truncated.

Hack for flatzinc. At the time of parameter setting, the objective is not expanded. So we do not know if core is applicable or not.

Remove the names that should be ignored.

Creates the diverse set of parameters with names and seed.

Do some filtering.

Todo
(user): Enable probing_search in deterministic mode. Currently it times out on small problems as the deterministic time limit never hits the sharding limit.
Todo
(user): Enable shaving search in interleave mode. Currently it does not respect ^C, and has no per-chunk time limit.

In the corner case of an empty set of variables, let's not schedule the probing as it currently just loops forever instead of returning right away.

Disable core search if there is only 1 term in the objective.

Disable subsolvers that do not implement the deterministic mode.

Todo
(user): Enable lb_tree_search in deterministic mode.

Remove subsolvers that require an objective.

Add this strategy.

In interleaved mode, we run all of them.

Todo
(user): Actually make sure the gap num_workers <-> num_heuristics is contained.

Apply the logic for how many we keep.

Derive some automatic number to leave room for LS/LNS and other strategies not taken into account here.

Definition at line 754 of file cp_model_search.cc.

◆ GetIntervalArticulationPoints()

std::vector< int > operations_research::sat::GetIntervalArticulationPoints ( std::vector< IndexedInterval > * intervals)

Similar to GetOverlappingIntervalComponents(), but returns the indices of all intervals whose removal would create one more connected component in the interval graph. Those are sorted by start. See: https://en.wikipedia.org/wiki/Glossary_of_graph_theory#articulation_point.

New connected component.

Still the same connected component. Was the previous "max" an articulation point?

We might be re-inserting the same articulation point: guard against it.

Update the max end.

Convert articulation point indices to IndexedInterval.index.

Definition at line 503 of file diffn_util.cc.

◆ GetNamedParameters()

absl::flat_hash_map< std::string, SatParameters > operations_research::sat::GetNamedParameters ( SatParameters base_params)

Returns all the named sets of parameters known to the solver. This includes our default strategies like "max_lp", "core", etc. It is visible here so that it can be reused by parameter validation.

Usually, named strategies just override a few fields from the base_params.

By default we disable the logging when we generate a set of parameters. It is possible to force it by setting it in the corresponding named parameter via the subsolver_params field.

The "default" name can be used for the base_params unchanged.

Lp variations only.

Core. Note that we disable the lp here because it is faster on the minizinc benchmark.

Todo
(user): Do more experiments, the LP with core could be useful, but we probably need to incorporate the newly created integer variables from the core algorithm into the LP.

It can be interesting to try core and lp.

We do not want to change the objective_var lb from outside as it gives better results to only use locally derived reasons in that algo.

We want to spend more time on the LP here.

We want to spend more time on the LP here.

Search variation.

Quick restart.

Todo
(user): Experiment with search_random_variable_pool_size.
Note
no dual scheduling heuristics.

Less encoding.

Base parameters for shared tree worker.

These settings don't make sense with shared tree search, turn them off as they can break things.

Base parameters for LNS worker.

We disable costly presolve/inprocessing.

Add user defined ones.

Note
this might be merged to our default ones.

Merge the named parameters with the base parameters to create the new parameters.

Fix names (we don't set them above).

Definition at line 478 of file cp_model_search.cc.

◆ GetOrbitopeOrbits()

std::vector< int > operations_research::sat::GetOrbitopeOrbits ( int n,
absl::Span< const std::vector< int > > orbitope )

Returns the orbits under the given orbitope action. Same results format as in GetOrbits(). Note that here, the orbit index is simply the row index of an element in the orbitope matrix.

Definition at line 185 of file symmetry_util.cc.

◆ GetOrbits()

std::vector< int > operations_research::sat::GetOrbits ( int n,
absl::Span< const std::unique_ptr< SparsePermutation > > generators )

Returns a vector of size n such that

  • orbits[i] == -1 iff i is never touched by the generators (singleton orbit).
  • orbits[i] = orbit_index, where orbits are numbered from 0 to num_orbits - 1

    Todo
    (user): We could reuse the internal memory if needed.
Note
there is currently no random access api like cycle[j].

Definition at line 153 of file symmetry_util.cc.
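A standalone sketch of computing such orbits; here the generators are assumed to be given as lists of cycles, which is an illustrative representation rather than the SparsePermutation API:

  #include <functional>
  #include <numeric>
  #include <vector>

  std::vector<int> Orbits(
      int n, const std::vector<std::vector<std::vector<int>>>& generator_cycles) {
    std::vector<int> parent(n);
    std::iota(parent.begin(), parent.end(), 0);
    std::function<int(int)> find = [&](int x) {
      return parent[x] == x ? x : parent[x] = find(parent[x]);
    };
    std::vector<bool> touched(n, false);
    for (const auto& permutation : generator_cycles) {
      for (const auto& cycle : permutation) {
        for (const int e : cycle) touched[e] = true;
        // Union all elements of the cycle together.
        for (size_t i = 1; i < cycle.size(); ++i) {
          parent[find(cycle[i])] = find(cycle[0]);
        }
      }
    }
    // Number the classes of touched elements; untouched elements keep -1.
    std::vector<int> orbit_index(n, -1);
    int num_orbits = 0;
    for (int i = 0; i < n; ++i) {
      if (touched[i] && find(i) == i) orbit_index[i] = num_orbits++;
    }
    for (int i = 0; i < n; ++i) {
      if (touched[i]) orbit_index[i] = orbit_index[find(i)];
    }
    return orbit_index;
  }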

◆ GetOverlappingIntervalComponents()

void operations_research::sat::GetOverlappingIntervalComponents ( std::vector< IndexedInterval > * intervals,
std::vector< std::vector< int > > * components )

Given n intervals, returns the set of connected components (using the overlap relation between 2 intervals). Components are sorted by their start, and inside a component, the intervals are also sorted by start. intervals is only sorted (by start), and not modified otherwise.

For correctness, ComparatorByStart is enough, but in unit tests we want to verify this function against another implementation, and fully defined sorting with tie-breaking makes that much easier. If that becomes a performance bottleneck:

  • One may want to sort the list outside of this function, and simply have this function DCHECK that it's sorted by start.
  • One may use stable_sort() with ComparatorByStart().

Definition at line 470 of file diffn_util.cc.
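A standalone sketch of the start-sorted sweep this relies on (illustrative types; intervals are treated here as half-open [start, end)):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Interval { int index; int64_t start; int64_t end; };

  std::vector<std::vector<int>> OverlapComponents(std::vector<Interval> intervals) {
    std::vector<std::vector<int>> components;
    if (intervals.empty()) return components;
    std::sort(intervals.begin(), intervals.end(),
              [](const Interval& a, const Interval& b) { return a.start < b.start; });
    int64_t current_max_end = intervals[0].end;
    components.push_back({intervals[0].index});
    for (size_t i = 1; i < intervals.size(); ++i) {
      // A gap before this interval closes the current component.
      if (intervals[i].start >= current_max_end) components.push_back({});
      components.back().push_back(intervals[i].index);
      current_max_end = std::max(current_max_end, intervals[i].end);
    }
    return components;
  }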

◆ GetOverlappingRectangleComponents()

std::vector< absl::Span< int > > operations_research::sat::GetOverlappingRectangleComponents ( absl::Span< const Rectangle > rectangles,
absl::Span< int > active_rectangles )

Creates a graph where two nodes are connected iff their rectangles overlap. Then partition into connected components.

This method removes all singleton components. It will modify the active_rectangles span in place.

Find the component of active_rectangles[start].

Definition at line 102 of file diffn_util.cc.

◆ GetPositiveOnlyIndex()

PositiveOnlyIndex operations_research::sat::GetPositiveOnlyIndex ( IntegerVariable var)
inline

Definition at line 199 of file integer.h.

◆ GetReferencesUsedByConstraint() [1/2]

IndexReferences operations_research::sat::GetReferencesUsedByConstraint ( const ConstraintProto & ct)

Definition at line 81 of file cp_model_utils.cc.

◆ GetReferencesUsedByConstraint() [2/2]

void operations_research::sat::GetReferencesUsedByConstraint ( const ConstraintProto & ct,
std::vector< int > * variables,
std::vector< int > * literals )

Definition at line 87 of file cp_model_utils.cc.

◆ GetRinsRensNeighborhood()

ReducedDomainNeighborhood operations_research::sat::GetRinsRensNeighborhood ( const SharedResponseManager * response_manager,
const SharedLPSolutionRepository * lp_solutions,
SharedIncompleteSolutionManager * incomplete_solutions,
double difficulty,
absl::BitGenRef random )

Helper method to create a RINS neighborhood by fixing variables that have the same value in the relaxation solution and in the current best solution from the response_manager. It prioritizes repositories in the following order to get a neighborhood.

  1. incomplete_solutions
  2. lp_solutions

If the response_manager has no solution, this generates a RENS neighborhood by ignoring the solutions and using the relaxation values. The domains of the variables are reduced to integer values around their relaxation values. If a relaxation value is integral, the domain of the variable is fixed to that value.

Using a partial LP relaxation computed by feasibility_pump or a full LP relaxation periodically dumped by linearization=2 workers is equiprobable.

Definition at line 174 of file rins.cc.

◆ GetSingleRefFromExpression()

int operations_research::sat::GetSingleRefFromExpression ( const LinearExpressionProto & expr)

Returns the reference the expression can be reduced to. It will DCHECK that ExpressionContainsSingleRef(expr) is true.

Definition at line 580 of file cp_model_utils.cc.

◆ GetSolutionValues()

std::vector< int64_t > operations_research::sat::GetSolutionValues ( const CpModelProto & model_proto,
const Model & model )

For ignored or not fully instantiated variables, we just use the lower bound.

Just use the lower bound if the variable is not fully instantiated.

Todo
(user): Checks against initial model.

Definition at line 286 of file cp_model_solver_helpers.cc.

◆ GetSuperAdditiveRoundingFunction()

std::function< IntegerValue(IntegerValue)> operations_research::sat::GetSuperAdditiveRoundingFunction ( IntegerValue rhs_remainder,
IntegerValue divisor,
IntegerValue t,
IntegerValue max_scaling )

Adjust after the multiplication by t.

Make sure we don't have an integer overflow below. Note that we assume that divisor and the maximum coeff magnitude are not too different (maybe a factor 1000 at most) so that the final result will never overflow.

Todo
(user): Use everywhere a two step computation to avoid overflow? First divide by divisor, then multiply by t. For now, we limit t so that we never have an overflow instead.

Because of our max_t limitation, the rhs_remainder might stay small.

If it is "too small" we cannot use the code below because it will not be valid. So we just divide divisor into max_scaling bucket. The rhs_remainder will be in the bucket 0.

Note(user): This seems the same as just increasing t, modulo integer overflows. Maybe we should just always do the computation like this so that we can use larger t even if coeff is close to kint64max.

We divide (size = divisor - rhs_remainder) into (max_scaling - 1) buckets and increase the function by 1 / max_scaling for each of them.

Note
for different values of max_scaling, we get a family of functions that do not dominate each other. So potentially, a max scaling as low as 2 could lead to a better cut (this is exactly the Letchford & Lodi function).

Another interesting fact is that if we want to compute the maximum alpha for a constraint with 2 terms like divisor * Y + (ratio * divisor + remainder) * X <= rhs_ratio * divisor + rhs_remainder, so that we have the cut Y + (ratio + alpha) * X <= rhs_ratio, this is the same as computing the maximum alpha such that for all integers X > 0 we have CeilRatio(alpha * divisor * X, divisor) <= CeilRatio(remainder * X - rhs_remainder, divisor). We can prove that this alpha is of the form (n - 1) / n, and that it is reached by such a function for a max_scaling of n.

Todo
(user): This function is not always maximal when size % (max_scaling - 1) == 0. Improve?

Definition at line 496 of file cuts.cc.
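
A minimal usage sketch (not taken from the library's cut code): for a base constraint sum coeffs[i] * x_i <= rhs with non-negative integer variables, any non-decreasing superadditive f with f(0) == 0 yields the valid cut sum f(coeffs[i]) * x_i <= f(rhs). The concrete values (divisor, remainder, t, max_scaling) below are illustrative assumptions, and the usual headers (<functional>, <vector>) and namespace operations_research::sat are assumed.

  // Sketch: apply the rounding function to every coefficient of a cut.
  // Here rhs = 47 = 4 * 10 + 7, so divisor = 10 and rhs_remainder = 7.
  const IntegerValue divisor(10), rhs(47), rhs_remainder(7);
  const IntegerValue t(1), max_scaling(4);
  const std::function<IntegerValue(IntegerValue)> f =
      GetSuperAdditiveRoundingFunction(rhs_remainder, divisor, t, max_scaling);
  std::vector<IntegerValue> coeffs = {IntegerValue(9), IntegerValue(23)};
  std::vector<IntegerValue> cut_coeffs;
  for (const IntegerValue c : coeffs) cut_coeffs.push_back(f(c));
  const IntegerValue cut_rhs = f(rhs);  // Right-hand side of the rounded cut.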

◆ GetSuperAdditiveStrengtheningFunction()

std::function< IntegerValue(IntegerValue)> operations_research::sat::GetSuperAdditiveStrengtheningFunction ( IntegerValue positive_rhs,
IntegerValue min_magnitude )

If we have an equation sum ci.Xi >= rhs with everything positive, and all ci are >= min_magnitude, then any ci >= rhs can be set to rhs. Also, if some ci are in [rhs - min, rhs), they can be strengthened to rhs - min.

If we apply this to the negated equation (sum -ci.Xi + sum cj.Xj <= -rhs) with potentially positive terms, this reduces to applying a super-additive function:

The ASCII plot of the function in the source is garbled here; roughly, it shows y = 0 around x = 0 and y = -rhs around x = -rhs.

Todo
(user): Extend it for ci >= max_magnitude, we can probably "lift" such coefficients.

The transformation only works if 2 * second_threshold >= positive_rhs.

Todo
(user): Limit the number of value used with scaling like above.

This should actually never happen by the definition of min_magnitude. But with it, the function is super-additive even if min_magnitude is not correct.

Todo
(user): we might want to introduce some step to reduce the final magnitude of the cut.

Definition at line 581 of file cuts.cc.

◆ GetSuperAdditiveStrengtheningMirFunction()

std::function< IntegerValue(IntegerValue)> operations_research::sat::GetSuperAdditiveStrengtheningMirFunction ( IntegerValue positive_rhs,
IntegerValue scaling )

Similar to above but with scaling of the linear part to just have at most scaling values.

Simple case, no scaling required.

We need to scale.

We divide [-positive_rhs + 1, 0] into (scaling - 1) buckets.

Definition at line 616 of file cuts.cc.

◆ GreaterOrEqual() [1/2]

std::function< void(Model *)> operations_research::sat::GreaterOrEqual ( IntegerVariable a,
IntegerVariable b )
inline

a >= b.

Definition at line 629 of file precedences.h.

◆ GreaterOrEqual() [2/2]

std::function< void(Model *)> operations_research::sat::GreaterOrEqual ( IntegerVariable v,
int64_t lb )
inline

Definition at line 1983 of file integer.h.

◆ GreaterOrEqualToMiddleValue()

IntegerLiteral operations_research::sat::GreaterOrEqualToMiddleValue ( IntegerVariable var,
IntegerTrail * integer_trail )

Returns decision corresponding to var >= lb + max(1, (ub - lb) / 2). It also CHECKs that the variable is not fixed.

Definition at line 76 of file integer_search.cc.

◆ GreaterThanAtLeastOneOf()

std::function< void(Model *)> operations_research::sat::GreaterThanAtLeastOneOf ( IntegerVariable target_var,
const absl::Span< const IntegerVariable > vars,
const absl::Span< const IntegerValue > offsets,
const absl::Span< const Literal > selectors,
const absl::Span< const Literal > enforcements )
inline

Definition at line 138 of file cp_constraints.h.

◆ GreedyFastDecreasingGcd()

std::vector< int > operations_research::sat::GreedyFastDecreasingGcd ( absl::Span< const int64_t > coeffs)

Returns an ordering of the indices of coefficients such that the GCD of its initial segments decreases fast. As the product of the 15 smallest prime numbers is the biggest fitting in an int64_t, it is guaranteed that the GCD becomes stationary after at most 15 steps. Returns an empty vector if the GCD is equal to the absolute value of one of the coefficients.

Todo
(user): The following is a heuristic to make the GCD drop as fast as possible. It might be suboptimal in general (as we could miss two coprime coefficients for instance).

initial_count is very small (proven <= 15, usually much smaller).

Definition at line 66 of file diophantine.cc.
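
A hedged usage sketch; the concrete coefficients are illustrative and the exact ordering returned is an implementation detail of the heuristic.

  // With coeffs = {4, 6, 9} the overall GCD is 1 and no |coeff| equals 1, so a
  // non-empty ordering is returned. Starting for instance from index 2
  // (coefficient 9), the prefix GCDs could be 9, then gcd(9, 4) = 1: stationary
  // after two steps.
  const std::vector<int> order = GreedyFastDecreasingGcd({4, 6, 9});
  // With coeffs = {3, 6, 9} the GCD (3) equals |coeffs[0]|, so {} is returned.
  const std::vector<int> empty = GreedyFastDecreasingGcd({3, 6, 9});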

◆ HasEnforcementLiteral()

bool operations_research::sat::HasEnforcementLiteral ( const ConstraintProto & ct)
inline

Small utility functions to deal with half-reified constraints.

Definition at line 48 of file cp_model_utils.h.

◆ Implication() [1/2]

std::function< void(Model *)> operations_research::sat::Implication ( absl::Span< const Literal > enforcement_literals,
IntegerLiteral i )
inline
Todo
(user): This is one of the rare cases where it is better to use Equality() rather than two Implications(). Maybe we should modify our internal implementation to use half-reified encoding? That is, do not propagate the direction integer-bound => literal, but just literal => integer-bound? This is the same as using different underlying variables for an integer literal and its negation.

Always true! nothing to do.

Always false.

Todo
(user): Double check what happen when we associate a trivially true or false literal.

Definition at line 2025 of file integer.h.

◆ Implication() [2/2]

std::function< void(Model *)> operations_research::sat::Implication ( Literal a,
Literal b )
inline

a => b.

Definition at line 942 of file sat_solver.h.

◆ ImpliesInInterval()

std::function< void(Model *)> operations_research::sat::ImpliesInInterval ( Literal in_interval,
IntegerVariable v,
int64_t lb,
int64_t ub )
inline

in_interval => v in [lb, ub].

Definition at line 2052 of file integer.h.
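
A hedged sketch of how these std::function<void(Model*)>-returning helpers (GreaterOrEqual(), LowerOrEqual(), ImpliesInInterval(), ...) are typically consumed, assuming the usual Model::Add() pattern of the internal API and the companion NewIntegerVariable() factory (not shown on this page):

  Model model;
  const IntegerVariable x = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable y = model.Add(NewIntegerVariable(0, 10));
  const Literal guard(model.Add(NewBooleanVariable()), true);
  model.Add(GreaterOrEqual(x, y));               // x >= y.
  model.Add(LowerOrEqual(x, 7));                 // x <= 7.
  model.Add(ImpliesInInterval(guard, y, 2, 5));  // guard => y in [2, 5].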

◆ ImportModelAndDomainsWithBasicPresolveIntoContext()

bool operations_research::sat::ImportModelAndDomainsWithBasicPresolveIntoContext ( const CpModelProto & in_model,
const std::vector< Domain > & domains,
std::function< bool(int)> active_constraints,
PresolveContext * context )

Same as ImportModelWithBasicPresolveIntoContext() except that variable domains are read from domains.

Definition at line 12365 of file cp_model_presolve.cc.

◆ ImportModelWithBasicPresolveIntoContext()

bool operations_research::sat::ImportModelWithBasicPresolveIntoContext ( const CpModelProto & in_model,
PresolveContext * context )

Copies in_model to the model in the presolve context. It performs on-the-fly simplifications and returns false if the model is proved infeasible. It reads the parameter 'ignore_names' and keeps or deletes variable and constraint names accordingly.

This should only be called on the first copy of the user given model.

Note
this reorders all constraints that use intervals to be last. We lose the user-defined order, but hopefully that should not matter too much.

Definition at line 12353 of file cp_model_presolve.cc.

◆ InclusionDetector()

template<typename Storage >
operations_research::sat::InclusionDetector ( const Storage & storage) -> InclusionDetector< Storage >

Deduction guide.

◆ IncreaseNodeSize()

void operations_research::sat::IncreaseNodeSize ( EncodingNode * node,
SatSolver * solver )

Increases the size of the given node by one. To keep all the needed relations with its children, we also need to increase their size by one, and so on recursively. Also adds all the necessary clauses linking the newly added literals.

Only one side of the constraint is mandatory (the one propagating the ones to the top of the encoding tree), and it seems more efficient not to encode the other side.

Todo
(user): Experiment more.

Integer leaf node.

Note
since we were able to increase its size, n must have children. n->GreaterThan(target) is the new literal of n.

Add a literal to a if needed. That is, now that node n can go up to its new current_ub, we may need to increase the current_ub of a.

Add a literal to b if needed.

Wire the new literal of n correctly with its two children.

if x <= ia and y <= ib then x + y <= ia + ib.

if x > ia and y > ib - 1 then x + y > ia + ib.

Case ia = a->lb() - 1; a->GreaterThan(ia) always true.

case ia == a->ub; a->GreaterThan(ia) always false.

Definition at line 289 of file encoding.cc.

◆ InitializeDebugSolution()

void operations_research::sat::InitializeDebugSolution ( const CpModelProto & model_proto,
Model * model )

This both copy the "main" DebugSolution to a local_model and also cache the value of the integer variables in that solution.

Copy the proto values.

Fill the values by integer variable.

If the solution is fully boolean (there is no integer variable), and we have a decision problem (so no new boolean should be created), we load it in the sat solver for debugging too.

The objective variable is usually not part of the proto, but it is still nice to have it, so we recompute it here.

We also register a DEBUG callback to check our reasons.

First case, this Boolean is mapped.

Second case, it is associated to IntVar >= value. We can use any of them, so if one is false, we use this one.

Note the sign is inverted: we cannot have all literals false and all integer literals true.

Definition at line 142 of file cp_model_solver_helpers.cc.

◆ InsertVariablesFromConstraint()

template<typename Set >
void operations_research::sat::InsertVariablesFromConstraint ( const CpModelProto & model_proto,
int index,
Set & output )

Insert variables in a constraint into a set.

Definition at line 104 of file cp_model_utils.h.

◆ InstrumentSearchStrategy()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::InstrumentSearchStrategy ( const CpModelProto & cp_model_proto,
const std::vector< IntegerVariable > & variable_mapping,
std::function< BooleanOrIntegerLiteral()> instrumented_strategy,
Model * model )

For debugging fixed-search: display information about the named variables' domains before taking each decision. Note that we copy the instrumented strategy so it doesn't have to outlive the returned functions like the other arguments.

Definition at line 421 of file cp_model_search.cc.

◆ IntegerTermDebugString()

std::string operations_research::sat::IntegerTermDebugString ( IntegerVariable var,
IntegerValue coeff )
inline

Definition at line 203 of file integer.h.

◆ IntegerTypeMinimumValue() [1/2]

template<typename IntegerType >
IntegerType operations_research::sat::IntegerTypeMinimumValue ( )
constexpr

The minimal value of an envelope, for instance the envelope of the empty set.

The Theta-Lambda tree can be used to implement several scheduling algorithms.

This template class is instantiated only for IntegerValue and int64_t.

The tree structure itself is a binary tree coded in a vector, where node 0 is unused, node 1 is the root, node 2 is the left child of the root, node 3 its right child, etc.

The API gives access to rightmost events that realize a given envelope.

See:
  • (0) Petr Vilim's PhD thesis "Global Constraints in Scheduling".
  • (1) Petr Vilim, "Edge Finding Filtering Algorithm for Discrete Cumulative Resources in O(kn log n)".
  • (2) Petr Vilim, "Max energy filtering algorithm for discrete cumulative resources".
  • (3) Wolf & Schrader, "O(n log n) Overload Checking for the Cumulative Constraint and Its Application".
  • (4) Kameugne & Fotso, "A cumulative not-first/not-last filtering algorithm in O(n^2 log n)".
  • (5) Ouellet & Quimper, "Time-table extended-edge-finding for the cumulative constraint".

Instead of providing one variant of the theta-tree per possible filtering algorithm, this generalization intends to provide a data structure that can fit several algorithms. This tree is based around the notion of events. It has events at its leaves that can be present or absent, and present events come with an initial_envelope, a minimal and a maximal energy. All nodes maintain values on the set of present events under them:
  • sum_energy_min(node) = sum_{leaf \in leaves(node)} energy_min(leaf)
  • envelope(node) = max_{leaf \in leaves(node)} initial_envelope(leaf) + sum_{leaf' \in leaves(node), leaf' >= leaf} energy_min(leaf').

Thus, the envelope of a leaf representing an event, when present, is initial_envelope(event) + sum_energy_min(event).

We also maintain envelope_opt, which is the maximum envelope a node could take if at most one of the events were at its maximum energy:
  • energy_delta(leaf) = energy_max(leaf) - energy_min(leaf)
  • max_energy_delta(node) = max_{leaf \in leaves(node)} energy_delta(leaf)
  • envelope_opt(node) = max_{leaf \in leaves(node)} initial_envelope(leaf) + sum_{leaf' \in leaves(node), leaf' >= leaf} energy_min(leaf') + max_{leaf' \in leaves(node), leaf' >= leaf} energy_delta(leaf').

Most articles using theta-tree variants hack Vilim's original theta tree for the disjunctive resource constraint by manipulating envelope and energy:
  • in (0), initial_envelope = start_min, energy = duration.
  • in (3), initial_envelope = C * start_min, energy = demand * duration.
  • in (5), there are several trees in parallel: initial_envelope = C * start_min or (C - h) * start_min, energy = demand * duration, h * (Horizon - start_min), or h * (end_min).
  • in (2), same as (3), but putting the max energy instead of min in lambda.
  • in OscaR's TimeTableOverloadChecker, initial_envelope = C * start_min - energy of mandatory profile before start_min, energy = demand * duration.

There is hope to unify the variants of these algorithms by abstracting the tasks away to reason only on events.

Definition at line 95 of file theta_tree.h.

◆ IntegerTypeMinimumValue() [2/2]

template<>
IntegerValue operations_research::sat::IntegerTypeMinimumValue ( )
constexpr

Definition at line 99 of file theta_tree.h.

◆ IntegerValueSelectionHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::IntegerValueSelectionHeuristic ( std::function< BooleanOrIntegerLiteral()> var_selection_heuristic,
Model * model )
Note
all these heuristics do not depend on the variable being positive or negative.
Todo
(user): Experiment more with value selection heuristics.

Changes the value of the decision returned by 'var_selection_heuristic' according to various value selection heuristics. Look at the code to know exactly which heuristics we use.

LP based value.

Note
we only do this if a big enough percentage of the problem variables appear in the LP relaxation.

Solution based value.

Objective based value.

Definition at line 363 of file integer_search.cc.

◆ IntervalIsVariable()

bool operations_research::sat::IntervalIsVariable ( const IntervalVariable interval,
IntervalsRepository * intervals_repository )

Ignore absent rectangles.

Checks non-present intervals.

Checks variable sized intervals.

Definition at line 1590 of file linear_relaxation.cc.

◆ IntTypeAbs()

template<class IntType >
IntType operations_research::sat::IntTypeAbs ( IntType t)
inline

Definition at line 81 of file integer.h.

◆ IsAssignmentValid()

bool operations_research::sat::IsAssignmentValid ( const LinearBooleanProblem & problem,
const std::vector< bool > & assignment )

Checks that an assignment is valid for the given BooleanProblem.

Check that all constraints are satisfied.

Definition at line 373 of file boolean_problem.cc.

◆ IsEqualToMaxOf()

std::function< void(Model *)> operations_research::sat::IsEqualToMaxOf ( IntegerVariable max_var,
const std::vector< IntegerVariable > & vars )
inline

Expresses the fact that an existing integer variable is equal to the maximum of other integer variables.

Definition at line 758 of file integer_expr.h.

◆ IsEqualToMinOf() [1/2]

std::function< void(Model *)> operations_research::sat::IsEqualToMinOf ( const LinearExpression & min_expr,
const std::vector< LinearExpression > & exprs )
inline

Expresses the fact that an existing integer variable is equal to the minimum of linear expressions. Assumes Canonical expressions (all positive coefficients).

Create a new variable if the expression is not just a single variable.

min_var = min_expr

Definition at line 717 of file integer_expr.h.

◆ IsEqualToMinOf() [2/2]

std::function< void(Model *)> operations_research::sat::IsEqualToMinOf ( IntegerVariable min_var,
const std::vector< IntegerVariable > & vars )
inline

Expresses the fact that an existing integer variable is equal to the minimum of other integer variables.

Definition at line 700 of file integer_expr.h.

◆ IsFixed()

std::function< bool(const Model &)> operations_research::sat::IsFixed ( IntegerVariable v)
inline

Definition at line 1967 of file integer.h.

◆ IsNegatableInt64()

bool operations_research::sat::IsNegatableInt64 ( absl::int128 x)
inline

Tells whether an int128 can be cast to an int64_t that can be negated.

Definition at line 700 of file util.h.
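
A small sketch of our reading of the one-line contract (an assumption, not library documentation): the value must fit in an int64_t and must not be the minimum value, since negating kint64min overflows. The CHECK macros and <limits> header are assumed.

  CHECK(IsNegatableInt64(absl::int128(42)));
  CHECK(!IsNegatableInt64(
      absl::int128(std::numeric_limits<int64_t>::min())));  // -min overflows.
  CHECK(!IsNegatableInt64(absl::int128(1) << 70));  // Does not fit in int64_t.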

◆ IsOneOf()

std::function< void(Model *)> operations_research::sat::IsOneOf ( IntegerVariable var,
const std::vector< Literal > & selectors,
const std::vector< IntegerValue > & values )

Expresses the fact that an existing integer variable is equal to one of the given values, each selected by a given literal.

Note
it is more efficient to call AssociateToIntegerEqualValue() with the values ordered, like we do here.

Definition at line 1656 of file integer_expr.cc.

◆ IsOptional()

std::function< bool(const Model &)> operations_research::sat::IsOptional ( IntervalVariable v)
inline

Definition at line 952 of file intervals.h.

◆ IsPresentLiteral()

std::function< Literal(const Model &)> operations_research::sat::IsPresentLiteral ( IntervalVariable v)
inline

Definition at line 958 of file intervals.h.

◆ kCoefficientMax()

const Coefficient operations_research::sat::kCoefficientMax ( std::numeric_limits< Coefficient::ValueType > ::max())

IMPORTANT: We can't use numeric_limits<Coefficient>::max() which will compile but just returns zero!!

◆ kFalseLiteralIndex()

const LiteralIndex operations_research::sat::kFalseLiteralIndex ( - 3)

◆ kMaxIntegerValue()

IntegerValue operations_research::sat::kMaxIntegerValue ( std::numeric_limits< IntegerValue::ValueType >::max() - 1)
constexpr

The max range of an integer variable is [kMinIntegerValue, kMaxIntegerValue].

It is symmetric so the set of possible ranges stays the same when we take the negation of a variable. Moreover, we need some IntegerValues that fall outside this range on both sides so that we can usually take care of integer overflow by simply doing "saturated arithmetic": if one of the bounds overflows, the two bounds will "cross" each other and we will get an empty range.

◆ kMinIntegerValue()

IntegerValue operations_research::sat::kMinIntegerValue ( -kMaxIntegerValue. value())
constexpr

◆ kNoBooleanVariable()

const BooleanVariable operations_research::sat::kNoBooleanVariable ( - 1)

◆ kNoClauseIndex()

const ClauseIndex operations_research::sat::kNoClauseIndex ( - 1)

◆ kNoIntegerVariable()

const IntegerVariable operations_research::sat::kNoIntegerVariable ( - 1)

◆ kNoIntervalVariable()

const IntervalVariable operations_research::sat::kNoIntervalVariable ( - 1)

◆ kNoLiteralIndex()

const LiteralIndex operations_research::sat::kNoLiteralIndex ( - 1)

◆ kTrueLiteralIndex()

const LiteralIndex operations_research::sat::kTrueLiteralIndex ( - 2)

Special values used in some API to indicate a literal that is always true or always false.

◆ LazyMerge()

EncodingNode operations_research::sat::LazyMerge ( EncodingNode * a,
EncodingNode * b,
SatSolver * solver )

Merges the two given EncodingNodes by creating a new node that corresponds to the sum of the two given ones. Only the left-most binary variable is created for the parent node, the other ones will be created later when needed.

Definition at line 279 of file encoding.cc.

◆ LazyMergeAllNodeWithPQAndIncreaseLb()

EncodingNode * operations_research::sat::LazyMergeAllNodeWithPQAndIncreaseLb ( Coefficient weight,
const std::vector< EncodingNode * > & nodes,
SatSolver * solver,
std::deque< EncodingNode > * repository )

Same as MergeAllNodesWithDeque() but uses a priority queue to merge nodes with smaller sizes first. This also enforces that the sum of the nodes is greater than its lower bound.

Definition at line 461 of file encoding.cc.

◆ LinearBooleanProblemToCnfString()

std::string operations_research::sat::LinearBooleanProblemToCnfString ( const LinearBooleanProblem & problem)

Note(user): This function makes a few assumptions about the format of the given LinearBooleanProblem. All constraint coefficients must be 1 (and of the form >= 1) and all objective weights must be strictly positive.

Converts a LinearBooleanProblem to the cnf file format.

Note
this only works for pure SAT problems (only clauses), max-sat or weighted max-sat problems. Returns an empty string on error.

Hack: We know that all the variables with an index greater than this have been created "artificially" in order to encode a max-sat problem into our format. Each extra variable appears only once, and was used as a slack to reify a soft clause.

This will contain the objective.

This will be the weight of the "hard" clauses in the wcnf format. It must be greater than the sum of the weights of all the soft clauses, so we will just set it to this sum + 1.

There is no direct support for an objective offset in the wcnf format. So this is not a perfect translation of the objective. It is however possible to achieve the same effect by adding a new variable x, and two soft clauses: x with weight offset, and -x with weight offset.

Todo
(user): implement this trick.

Output the rest of the objective as singleton constraints.

Since it is falsifying this clause that costs "weight", we need to take its negation.

Definition at line 403 of file boolean_problem.cc.

◆ LinearExpressionGcd()

int64_t operations_research::sat::LinearExpressionGcd ( const LinearExpressionProto & expr,
int64_t gcd = 0 )

Returns the gcd of the given LinearExpressionProto. Specifying the second argument will take the gcd with it.

Definition at line 51 of file cp_model_utils.cc.

◆ LinearExpressionProtosAreEqual()

bool operations_research::sat::LinearExpressionProtosAreEqual ( const LinearExpressionProto & a,
const LinearExpressionProto & b,
int64_t b_scaling )

Returns true iff a == b * b_scaling.

Definition at line 619 of file cp_model_utils.cc.

◆ LinearInequalityCanBeReducedWithClosestMultiple()

bool operations_research::sat::LinearInequalityCanBeReducedWithClosestMultiple ( int64_t base,
absl::Span< const int64_t > coeffs,
absl::Span< const int64_t > lbs,
absl::Span< const int64_t > ubs,
int64_t rhs,
int64_t * new_rhs )

Given a linear equation "sum coeff_i * X_i <= rhs. We can rewrite it using ClosestMultiple() as "base * new_terms + error <= rhs" where error can be bounded using the provided bounds on each variables. This will return true if the error can be ignored and this equation is completely equivalent to new_terms <= new_rhs.

This is useful for cases like 9'999 X + 10'0001 Y <= 155'000 where we have weird coefficient (maybe due to scaling). With a base of 10K, this is equivalent to X + Y <= 15.

Preconditions: All coeffs are assumed to be positive. You can easily negate all the negative coeffs and corresponding bounds before calling this.

Precompute some bounds for the equation base * X + error <= rhs.

The constraint is trivially true.

This is the max error assuming that activity > rhs.

We have: old solution valid =>
  base * X + error <= rhs
  base * X <= rhs - error
  base * X <= rhs - min_error
  X <= new_rhs

And we have: old solution invalid =>
  base * X + error >= rhs + 1
  base * X >= rhs + 1 - max_error_if_invalid
  X >= infeasibility_bound

If the two bounds can be separated, we have an equivalence !

Definition at line 280 of file util.cc.
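
A sketch of the 10K example above; the variable domains [0, 10] are an assumption added here so that the error term can be absorbed.

  int64_t new_rhs = 0;
  const bool ok = LinearInequalityCanBeReducedWithClosestMultiple(
      /*base=*/10000, /*coeffs=*/{9999, 10001}, /*lbs=*/{0, 0},
      /*ubs=*/{10, 10}, /*rhs=*/155000, &new_rhs);
  // If ok is true, the original constraint is equivalent to X + Y <= new_rhs
  // (15 here), where X and Y are the variables behind the two coefficients.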

◆ LinearizedPartIsLarge()

bool operations_research::sat::LinearizedPartIsLarge ( Model * model)

Returns true if the number of variables in the linearized part represents a large enough proportion of all the problem variables.

Definition at line 347 of file integer_search.cc.

◆ LinearsDifferAtOneTerm()

bool operations_research::sat::LinearsDifferAtOneTerm ( const LinearConstraintProto & lin1,
const LinearConstraintProto & lin2 )
inline

Returns true iff the two linear constraints differ at only a single term.

Preconditions: Constraints should be sorted by variable and have the same size.

Definition at line 352 of file presolve_util.h.

◆ Literals()

std::vector< Literal > operations_research::sat::Literals ( absl::Span< const int > input)
inline

Only used for testing to use the classical SAT notation for a literal. This allows writing Literals({+1, -4, +3}) for the clause with BooleanVariables 0 and 2 appearing positively and 3 negatively.

Definition at line 146 of file sat_base.h.
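
For instance, in a unit test one can write:

  // The clause (x0 OR NOT x3 OR x2) in classical SAT notation.
  const std::vector<Literal> clause = Literals({+1, -4, +3});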

◆ LiteralTableConstraint()

std::function< void(Model *)> operations_research::sat::LiteralTableConstraint ( const std::vector< std::vector< Literal > > & literal_tuples,
const std::vector< Literal > & line_literals )

Enforces that exactly one literal in line_literals is true, and that all literals in the corresponding line of the literal_tuples matrix are true. This constraint assumes that exactly one literal per column of the literal_tuples matrix is true.

line_literals[i] == true => literal_tuples[i][j] == true. literal_tuples[i][j] == false => line_literals[i] == false.

Exactly one selected literal is true.

If all selected literals of the lines containing a literal are false, then the literal is false.

Definition at line 30 of file table.cc.

◆ LiteralXorIs()

std::function< void(Model *)> operations_research::sat::LiteralXorIs ( const std::vector< Literal > & literals,
bool value )
inline

Enforces the XOR of a set of literals to be equal to the given value.

Definition at line 126 of file cp_constraints.h.

◆ LoadAllDiffConstraint()

void operations_research::sat::LoadAllDiffConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1512 of file cp_model_loader.cc.

◆ LoadAndConsumeBooleanProblem()

bool operations_research::sat::LoadAndConsumeBooleanProblem ( LinearBooleanProblem * problem,
SatSolver * solver )

Same as LoadBooleanProblem() but also frees the memory used by the problem during loading. This allows using less peak memory. Note that this function clears all the constraints of the given problem (not the objective though).

We will process the constraints backward so we can free the memory used by each constraint just after processing it. Because of that, we initially reverse all the constraints to add them in the same order.

Definition at line 272 of file boolean_problem.cc.

◆ LoadAndSolveCpModelForTest()

void operations_research::sat::LoadAndSolveCpModelForTest ( const CpModelProto & model_proto,
Model * model )
Todo
(user): Clean this up. Solves a CpModelProto without any processing. Only used for unit tests.

Definition at line 2597 of file cp_model_solver.cc.

◆ LoadAtMostOneConstraint()

void operations_research::sat::LoadAtMostOneConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1037 of file cp_model_loader.cc.

◆ LoadBaseModel()

void operations_research::sat::LoadBaseModel ( const CpModelProto & model_proto,
Model * model )

Simple function for the few places where we do "return unsat()".

We will add them all at once after model_proto is loaded.

Todo
(user): The core algo and symmetries seem to be problematic in some cases. See for instance: neos-691058.mps.gz. This is probably because, as we modify the model, our symmetry might be wrong? Investigate.
Todo
(user): More generally, we cannot load the symmetry if we create new Booleans and constraints that link them to some Booleans of the model. Creating Booleans related to integer variables is fine since we only deal with Boolean-only symmetry here. This is why we disable this when we have a linear relaxation, as some of them create new constraints.

Check the model is still feasible before continuing.

Fully encode variables as needed by the search strategy.

Reserve space for the precedence relations.

Load the constraints.

We propagate after each new Boolean constraint but not the integer ones. So we call FinishPropagation() manually here.

Note
we only do that in debug mode as this can be really slow on certain types of problems with millions of constraints.

Definition at line 916 of file cp_model_solver_helpers.cc.

◆ LoadBoolAndConstraint()

void operations_research::sat::LoadBoolAndConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1023 of file cp_model_loader.cc.

◆ LoadBooleanProblem()

bool operations_research::sat::LoadBooleanProblem ( const LinearBooleanProblem & problem,
SatSolver * solver )

Loads a BooleanProblem into a given SatSolver instance.

Todo
(user): Currently, the sat solver can load without any issue constraints with duplicate variables, so we just output a warning if the problem is not "valid". Make this a strong check once we have some preprocessing step to remove duplicate variables in the constraints.

Definition at line 232 of file boolean_problem.cc.

◆ LoadBooleanSymmetries()

void operations_research::sat::LoadBooleanSymmetries ( const CpModelProto & model_proto,
Model * m )

Experimental. Loads the symmetries from the proto symmetry field, as long as they only involve Booleans.

Todo
(user): We currently only have the code for Booleans; this is why we currently ignore symmetries involving integer variables.

We can currently only use symmetries that touch a subset of variables.

First, we currently only support loading symmetry between Booleans.

Tricky: Moreover, some constraints will cause extra Booleans to be created and linked with the Booleans in the constraints. We can't use any of the symmetries that touch these, since we potentially miss the component that would map these extra Booleans to each other.

Todo
(user): We could add these extra Booleans during expansion/presolve so that we have the symmetries involving them. Or maybe come up with a different solution.

A linear constraint with a complex domain might need extra Booleans to be loaded.

Note
it should be fine for the Boolean(s) in enforcement_literal though.

Convert the variable symmetry to a "literal" one.

Note
we also need to add the corresponding cycle for the negated literals.

Definition at line 306 of file cp_model_loader.cc.

◆ LoadBoolOrConstraint()

void operations_research::sat::LoadBoolOrConstraint ( const ConstraintProto & ct,
Model * m )

Constraint loading functions.

Definition at line 1010 of file cp_model_loader.cc.

◆ LoadBoolXorConstraint()

void operations_research::sat::LoadBoolXorConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1053 of file cp_model_loader.cc.

◆ LoadCircuitConstraint()

void operations_research::sat::LoadCircuitConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1650 of file cp_model_loader.cc.

◆ LoadCircuitCoveringConstraint()

void operations_research::sat::LoadCircuitCoveringConstraint ( const ConstraintProto & ct,
Model * m )

◆ LoadConditionalLinearConstraint()

void operations_research::sat::LoadConditionalLinearConstraint ( const absl::Span< const Literal > enforcement_literals,
const LinearConstraint & cst,
Model * model )
inline

LinearConstraint version.

The enforcement literals cannot be all at true.

Todo
(user): Remove the conversion!

Definition at line 616 of file integer_expr.h.

◆ LoadConstraint()

bool operations_research::sat::LoadConstraint ( const ConstraintProto & ct,
Model * m )

Calls one of the functions below. Returns false if we do not know how to load the given constraints.

Already dealt with.

Definition at line 1675 of file cp_model_loader.cc.

◆ LoadCpModel()

void operations_research::sat::LoadCpModel ( const CpModelProto & model_proto,
Model * model )

Loads a CpModelProto inside the given model. This should only be called once on a given 'Model' class.

We want to load the debug solution before the initial propagation. But at this point the objective is not loaded yet, so we will not have a value for the objective integer variable, so we do it again later.

Simple function for the few places where we do "return unsat()".

Auto detect "at least one of" constraints in the PrecedencesPropagator.

Note
we do that before we finish loading the problem (objective and LP relaxation), because propagation will be faster at this point and it should be enough for the purpose of this auto-detection.
this is already done in the presolve, but it is important to redo it here to collect literal => integer >= bound constraints that are used in many places. Without it, we don't detect them if they depend on a long chain of implications.
Todo
(user): We don't have a good deterministic time on all constraints, so this might take more time than wanted.
Note
it is important to do that after the probing.

Compute decomposed energies on demands helper.

We need to know beforehand if the objective var can just be >= terms or needs to be == terms.

Create an objective variable and its associated linear constraint if needed.

Linearize some part of the problem and register LP constraint(s).

We do not care about the <= obj for core, we only need the other side to enforce a restriction of the objective lower bound.

Todo
(user): This might still create intermediate variables to decompose the objective for no reason. Just deal directly with the objective domain in the core algo by forbidding bad assumptions? Alternatively, just ignore the core solution if it is "too" good and rely on other solvers?

Create the objective definition inside the Model so that it can be accessed by the heuristics that need it.

Note
if there is no mapping, then the variable will be kNoIntegerVariable.

Fill the objective heuristics data.

Register an objective special propagator.

Intersect the objective domain with the given one if any.

Note
we do one last propagation at level zero once all the constraints were added.

Report the initial objective variable bounds.

Watch improved objective best bounds.

Import objective bounds.

Todo
(user): Support objective bounds import in LNS and Core based search.

Initialize the search strategies.

Create the CoreBasedOptimizer class if needed.

Todo
(user): Remove code duplication with the solution_observer in SolveLoadedCpModel().

Definition at line 1063 of file cp_model_solver_helpers.cc.

◆ LoadCumulativeConstraint()

void operations_research::sat::LoadCumulativeConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1619 of file cp_model_loader.cc.

◆ LoadDebugSolution()

void operations_research::sat::LoadDebugSolution ( const CpModelProto & model_proto,
Model * model )

This should be called on the presolved model. It will read the file specified by --cp_model_load_debug_solution and properly fill the model->Get<DebugSolution>() proto vector.

Make sure we load a solution with the same number of variables as in the presolved model.

Definition at line 121 of file cp_model_solver_helpers.cc.

◆ LoadExactlyOneConstraint()

void operations_research::sat::LoadExactlyOneConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1043 of file cp_model_loader.cc.

◆ LoadFeasibilityPump()

void operations_research::sat::LoadFeasibilityPump ( const CpModelProto & model_proto,
Model * model )

Add linear constraints to Feasibility Pump.

Definition at line 1031 of file cp_model_solver_helpers.cc.

◆ LoadIntDivConstraint()

void operations_research::sat::LoadIntDivConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1554 of file cp_model_loader.cc.

◆ LoadIntMaxConstraint()

void operations_research::sat::LoadIntMaxConstraint ( const ConstraintProto & ct,
Model * m )

◆ LoadIntMinConstraint()

void operations_research::sat::LoadIntMinConstraint ( const ConstraintProto & ct,
Model * m )

◆ LoadIntModConstraint()

void operations_research::sat::LoadIntModConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1574 of file cp_model_loader.cc.

◆ LoadIntProdConstraint()

void operations_research::sat::LoadIntProdConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1519 of file cp_model_loader.cc.

◆ LoadLinearConstraint() [1/2]

void operations_research::sat::LoadLinearConstraint ( const ConstraintProto & ct,
Model * m )
Todo
(user): Actually this should never be called since we process linear1 in ExtractEncoding().

Compute the min/max to relax the bounds if needed.

Todo
(user): Reuse ComputeLinearBounds()? but then we need another loop to detect if we only have Booleans.

Load conditional precedences.

To avoid overflow in the code below, we tighten the bounds.

Load precedences.

To avoid overflow in the code below, we tighten the bounds.

Note
we detect and do not add trivial relation.

magnitude * v1 <= magnitude * v2 + rhs_max.

magnitude * v1 >= magnitude * v2 + rhs_min.

Make the terms magnitude * v1 - magnitude * v2 ...

magnitude * v1 + other_lb <= magnitude * v2 + rhs_max

magnitude * v1 + other_ub >= magnitude * v2 + rhs_min

Note
the domain/enforcement of the main constraint do not change. Same for the min/sum and max_sum. The intermediate variables are always equal to the intermediate sum, independently of the enforcement.
Todo
(user): we should probably also implement a half-reified version of this constraint.

We have a linear constraint with a complex Domain; we need to create extra Booleans.

In this case, we can create just one Boolean instead of two since one is the negation of the other.

For enforcement => var \in domain, we can potentially reuse the encoding literal directly rather than creating new ones.

Make sure all Booleans are tight when enumerating all solutions.

Definition at line 1215 of file cp_model_loader.cc.

◆ LoadLinearConstraint() [2/2]

void operations_research::sat::LoadLinearConstraint ( const LinearConstraint & cst,
Model * model )
inline

Definition at line 648 of file integer_expr.h.

◆ LoadLinMaxConstraint()

void operations_research::sat::LoadLinMaxConstraint ( const ConstraintProto & ct,
Model * m )
Todo
(user): Consider replacing the min propagator by max.

Definition at line 1586 of file cp_model_loader.cc.

◆ LoadModelForProbing()

bool operations_research::sat::LoadModelForProbing ( PresolveContext * context,
Model * local_model )

Load the constraints in a local model.

Todo
(user): The model we load does not contain affine relations! But ideally we should be able to remove all of them once we allow more complex constraints to contain linear expressions.
Todo
(user): remove code duplication with cp_model_solver. Here we also do not run the heuristic to decide which variable to fully encode.
Todo
(user): Maybe do not load slow-to-propagate constraints? For instance, we do not use any linear relaxation here.

Utility function to load the current problem into an in-memory representation that will be used for probing. Returns false if UNSAT.

Update the domain in the current CpModelProto.

Adapt some of the parameters during this probing phase.

Important: Because the model_proto does not contain affine relations or the objective, we cannot call DetectOptionalVariables()! This might wrongly detect optionality and derive bad conclusions.

Definition at line 2291 of file presolve_context.cc.

◆ LoadNoOverlap2dConstraint()

void operations_research::sat::LoadNoOverlap2dConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1609 of file cp_model_loader.cc.

◆ LoadNoOverlapConstraint()

void operations_research::sat::LoadNoOverlapConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1604 of file cp_model_loader.cc.

◆ LoadReservoirConstraint()

void operations_research::sat::LoadReservoirConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1629 of file cp_model_loader.cc.

◆ LoadRoutesConstraint()

void operations_research::sat::LoadRoutesConstraint ( const ConstraintProto & ct,
Model * m )

Definition at line 1662 of file cp_model_loader.cc.

◆ LoadSubcircuitConstraint()

void operations_research::sat::LoadSubcircuitConstraint ( int num_nodes,
const std::vector< int > & tails,
const std::vector< int > & heads,
const std::vector< Literal > & literals,
Model * model,
bool multiple_subcircuit_through_zero = false )

Model based functions. This just wraps CircuitPropagator. See the comment there to see what this does. Note that any nodes with no outgoing or no incoming arc will cause the problem to be UNSAT. One can call ReindexArcs() first to ignore such nodes.

If a node has no outgoing or no incoming arc, the model will be unsat as soon as we add the corresponding ExactlyOneConstraint().

Definition at line 647 of file circuit.cc.

◆ LoadVariables()

void operations_research::sat::LoadVariables ( const CpModelProto & model_proto,
bool view_all_booleans_as_integers,
Model * m )

Extracts all the used variables in the CpModelProto and creates a sat::Model representation for them. More precisely

  • All Boolean variables will be mapped.
  • All Interval variables will be mapped.
  • All non-Boolean variables will have a corresponding IntegerVariable, and depending on view_all_booleans_as_integers, some or all of the BooleanVariables will also have an IntegerVariable corresponding to their "integer view".

Note(user): We could create IntegerVariables on the fly as they are needed, but that loses the original variable order, which might be useful in heuristics later.

All [0, 1] variables always have a corresponding Boolean, even if it is fixed to 0 (domain == [0,0]) or fixed to 1 (domain == [1,1]).

Compute the list of positive variable reference for which we need to create an IntegerVariable.

Compute the integer variable references used by the model.

We always add a linear relaxation for circuit/route except for linearization level zero.

Add the objective variables that need to be referenceable as integers even if they are only used as Booleans.

Make sure any unused variable that is not already a Boolean is considered "used".

We want the variables in the problem order.

It is important for memory usage to reserve tight vectors, as we have many of them indexed by IntegerVariable. Unfortunately, we create intermediate IntegerVariables while loading large linear constraints, or when we have disjoint LP components. So this is a best effort at a tight upper bound.

Link any variable that has both views.

Associate with corresponding integer variable.

Create the interval variables.

Todo
(user): Fix the constant variable situation. An optional interval with constant start/end or size cannot share the same constant variable if it is used in a non-optional situation.

Definition at line 126 of file cp_model_loader.cc.

◆ LookForTrivialSatSolution()

bool operations_research::sat::LookForTrivialSatSolution ( double deterministic_time_limit,
Model * model,
SolverLogger * logger )

Try to randomly tweak the search and stop at the first conflict each time. This can sometimes find a feasible solution, but more importantly, it is a form of probing that can sometimes find small and interesting conflicts or fix variables. This seems to work well on the SAT14/app/rook-* problems and does fix more variables if run before probing.

If a feasible SAT solution is found (i.e. all Booleans assigned), then this aborts and leaves the solver with the full solution assigned.

Returns false iff the problem is UNSAT.

Hack to not have empty logger.

Reset the solver in case it was already used.

Note
this code does not care about the non-Boolean part and just tries to assign the existing Booleans.

SetParameters() resets the deterministic time to zero inside time_limit.

We randomize at the end so that the default params are executed at least once.

Restore the initial parameters.

Definition at line 418 of file probing.cc.

◆ LowerBound()

std::function< int64_t(const Model &)> operations_research::sat::LowerBound ( IntegerVariable v)
inline

Definition at line 1955 of file integer.h.

◆ LowerOrEqual() [1/2]

std::function< void(Model *)> operations_research::sat::LowerOrEqual ( IntegerVariable a,
IntegerVariable b )
inline

a <= b.

Model based functions.

Definition at line 569 of file precedences.h.

◆ LowerOrEqual() [2/2]

std::function< void(Model *)> operations_research::sat::LowerOrEqual ( IntegerVariable v,
int64_t ub )
inline

Definition at line 1998 of file integer.h.

◆ LowerOrEqualWithOffset()

std::function< void(Model *)> operations_research::sat::LowerOrEqualWithOffset ( IntegerVariable a,
IntegerVariable b,
int64_t offset )
inline

a + offset <= b.

Definition at line 577 of file precedences.h.

◆ LpPseudoCostHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::LpPseudoCostHeuristic ( Model * model)

When not reliable, we skip integer.

Todo
(user): Use strong branching when not reliable.
Todo
(user): do not branch on integer LP? However, it seems better to do that!? Maybe this is because if it has a high pseudo cost average, it is good anyway?

We delay to subsequent heuristic if the score is 0.0.

This direction works better than the reverse in the benchmarks. But always branching up seems even better.

Todo
(user): investigate.

Definition at line 234 of file integer_search.cc.

◆ MakeAllCoefficientsPositive()

void operations_research::sat::MakeAllCoefficientsPositive ( LinearConstraint * constraint)

Makes all coefficients positive by transforming a variable to its negation.

Definition at line 290 of file linear_constraint.cc.

◆ MakeAllLiteralsPositive()

void operations_research::sat::MakeAllLiteralsPositive ( LinearBooleanProblem * problem)

Modifies the given LinearBooleanProblem so that all the literals appearing inside are positive.

Objective.

Constraints.

Definition at line 648 of file boolean_problem.cc.

◆ MakeAllVariablesPositive()

void operations_research::sat::MakeAllVariablesPositive ( LinearConstraint * constraint)

Makes all variables "positive" by transforming a variable to its negation.

Definition at line 301 of file linear_constraint.cc.

◆ MakeBoundsOfIntegerVariablesInteger()

bool operations_research::sat::MakeBoundsOfIntegerVariablesInteger ( const SatParameters & params,
MPModelProto * mp_model,
SolverLogger * logger )

This simple step helps and should be done first. Returns false if the model is trivially infeasible because of crossing bounds.

Definition at line 204 of file lp_utils.cc.

◆ MakeItemsFromRectangles()

std::vector< RectangleInRange > operations_research::sat::MakeItemsFromRectangles ( absl::Span< const Rectangle > rectangles,
double slack_factor,
absl::BitGenRef random )

Definition at line 74 of file 2d_orthogonal_packing_testing.cc.

◆ MaxNodeWeightSmallerThan()

Coefficient operations_research::sat::MaxNodeWeightSmallerThan ( const std::vector< EncodingNode * > & nodes,
Coefficient upper_bound )

Returns the maximum node weight under the given upper_bound. Returns zero if no such weight exists (note that a node weight is strictly positive, so this makes sense).

Definition at line 574 of file encoding.cc.

◆ MaxSize()

std::function< int64_t(const Model &)> operations_research::sat::MaxSize ( IntervalVariable v)
inline

Definition at line 946 of file intervals.h.

◆ MergeAllNodesWithDeque()

EncodingNode * operations_research::sat::MergeAllNodesWithDeque ( Coefficient upper_bound,
const std::vector< EncodingNode * > & nodes,
SatSolver * solver,
std::deque< EncodingNode > * repository )

Merges all the given nodes two by two until there is only one left. Returns the final node which encodes the sum of all the given nodes.

Definition at line 439 of file encoding.cc.

◆ MinimizeCore()

void operations_research::sat::MinimizeCore ( SatSolver * solver,
std::vector< Literal > * core )

Tries to minimize the given UNSAT core with a really simple heuristic. The idea is to remove literals that are consequences of others in the core. We already know that in the initial order, no literal is propagated by the one before it, so we just look for propagation in the reverse order.

Important: The given SatSolver must be the one that just produced the given core.

Todo
(user): One should use MinimizeCoreWithPropagation() instead.

Definition at line 2781 of file sat_solver.cc.

◆ MinimizeCoreWithPropagation()

void operations_research::sat::MinimizeCoreWithPropagation ( TimeLimit * limit,
SatSolver * solver,
std::vector< Literal > * core )

Like MinimizeCore() with a slower but strictly better heuristic. This algorithm should produce a minimal core with respect to propagation. We put each literal of the initial core "last" at least once, so if such literal can be inferred by propagation by any subset of the other literal, it will be removed.

Note
the literals of the minimized core will stay in the same order.
Todo
(user): Avoid spending too much time trying to minimize a core.

We want each literal of the candidate core to appear last once in our propagation order. We want to do that while maximizing the reuse of the current assignment prefix, that is, minimizing the number of decisions/propagations we need to perform.

This is a "weird" API to get the subset of decisions that caused this literal to be false with reason analysis.

We want to preserve the order of literals in the response.

Definition at line 62 of file optimization.cc.

◆ MinimizeCoreWithSearch()

void operations_research::sat::MinimizeCoreWithSearch ( TimeLimit * limit,
SatSolver * solver,
std::vector< Literal > * core )
Todo
(user): tune.

Find a not yet removed literal to remove. We prefer to remove high indices since these are more likely to be of high depth.

Todo
(user): Properly use the node depth instead.

Definition at line 119 of file optimization.cc.

◆ MinimizeIntegerVariableWithLinearScanAndLazyEncoding()

SatSolver::Status operations_research::sat::MinimizeIntegerVariableWithLinearScanAndLazyEncoding ( IntegerVariable objective_var,
const std::function< void()> & feasible_solution_observer,
Model * model )

Model-based API to minimize a given IntegerVariable by solving a sequence of decision problems. Each problem is solved using SolveIntegerProblem(). Returns the status of the last solved decision problem.

The feasible_solution_observer function will be called each time a new feasible solution is found.

Note
this function will resume the search from the current state of the solver, and it is up to the client to backtrack to the root node if needed.

Simple linear scan algorithm to find the optimal.

The objective is the current lower bound of the objective_var.

We have a solution!

Restrict the objective.

Definition at line 216 of file optimization.cc.
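
A hedged usage sketch, assuming the usual Model-based helpers (NewIntegerVariable(), LowerBound()) and a model already filled with constraints elsewhere:

  Model model;
  const IntegerVariable obj = model.Add(NewIntegerVariable(0, 100));
  // ... load the constraints of the problem here ...
  const SatSolver::Status status =
      MinimizeIntegerVariableWithLinearScanAndLazyEncoding(
          obj,
          /*feasible_solution_observer=*/[&]() {
            LOG(INFO) << "objective = " << model.Get(LowerBound(obj));
          },
          &model);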

◆ MinimizeL1DistanceWithHint()

void operations_research::sat::MinimizeL1DistanceWithHint ( const CpModelProto & model_proto,
Model * model )

Solves a model with a different objective consisting of minimizing the L1 distance with the provided hint. Note that this method creates an in-memory copy of the model and loads a local Model object from the copied model.

Forward some shared class.

Todo
(user): As of now, hint repair is not supported when enumerate_all_solutions is set, since the solution is created on a different model.

Change the parameters.

Update the model to introduce penalties to go away from hinted values.

Todo
(user): For boolean variables we can avoid creating new variables.

Add a new var to represent the difference between var and value.

new_var = var - value.

abs_var = abs(new_var).

Solve optimization problem.

Definition at line 1539 of file cp_model_solver_helpers.cc.

◆ MinSize()

std::function< int64_t(const Model &)> operations_research::sat::MinSize ( IntervalVariable v)
inline

Model based functions.

Definition at line 940 of file intervals.h.

◆ ModularInverse()

int64_t operations_research::sat::ModularInverse ( int64_t x,
int64_t m )

Using the extended Euclidean algorithm, we find a and b such that a x + b m = gcd(x, m). https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm

Returns a in [0, m) such that a * x = 1 modulo m. If gcd(x, m) != 1, there is no inverse, and it returns 0.

This DCHECKs that x is in [0, m). This is integer overflow safe.

Note(user): I didn't find this in an easily usable standard library.

We only keep the last two terms of the sequences with the "^1" trick:

q = r[i-2] / r[i-1]
r[i] = r[i-2] % r[i-1]
t[i] = t[i-2] - t[i-1] * q

We always have:

  • gcd(r[i], r[i - 1]) = gcd(r[i - 1], r[i - 2])
  • x * t[i] + m * t[i - 1] = r[i]

If the gcd is not one, there is no inverse, and we return 0.

Correct the result so that it is in [0, m). Note that abs(t[i]) is known to be less than or equal to x / 2, and we have thorough unit-tests.

Definition at line 149 of file util.cc.
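
Two small sanity checks (plain arithmetic facts), using the usual CHECK macros:

  CHECK_EQ(ModularInverse(3, 10), 7);  // 3 * 7 = 21 = 1 (mod 10).
  CHECK_EQ(ModularInverse(4, 10), 0);  // gcd(4, 10) = 2, no inverse exists.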

◆ MostFractionalHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::MostFractionalHeuristic ( Model * model)

Choose the variable with most fractional LP value.

This chooses the <= value if possible.

Definition at line 181 of file integer_search.cc.

◆ MoveOneUnprocessedLiteralLast()

int operations_research::sat::MoveOneUnprocessedLiteralLast ( const absl::btree_set< LiteralIndex > & processed,
int relevant_prefix_size,
std::vector< Literal > * literals )

Context: this function is not really generic, but required to be unit-tested. It is used in a clause minimization algorithm when we try to detect if any of the clause literals can be propagated by a subset of the other literal being false. For that, we want to enqueue in the solver all the subset of size n-1.

This moves one of the unprocessed literals from literals to the last position. The function tries to do that while preserving the longest possible prefix of literals, amortized across the calls, assuming that we want to move each literal to the last position once.

For a vector of size n, if we want to call this n times so that each literal is last at least once, the sum of the size of the changed suffixes will be O(n log n). If we were to use a simpler algorithm (like moving the last unprocessed literal to the last position), this sum would be O(n^2).

Returns the size of the common prefix of literals before and after the move, or -1 if all the literals are already processed. The argument relevant_prefix_size is used as a hint since keeping more than this prefix size does not matter. The returned value will always be less than or equal to relevant_prefix_size.

To get O(n log n) size of suffixes, we will first process the last n/2 literals, we then move all of them first and process the n/2 literals left. We use the same algorithm recursively. The sum of the suffixes' size S(n) is thus S(n/2) + n + S(n/2). That gives us the correct complexity. The code below simulates one step of this algorithm and is made to be "robust" when from one call to the next, some literals have been removed (but the order of literals is preserved).

Once a prefix size has been decided, it is always better to enqueue the literal already processed first.

Definition at line 344 of file util.cc.

◆ MPModelProtoValidationBeforeConversion()

bool operations_research::sat::MPModelProtoValidationBeforeConversion ( const SatParameters & params,
const MPModelProto & mp_model,
SolverLogger * logger )

Performs some extra tests on the given MPModelProto and returns false if one is not satisfied. These are needed before trying to convert it to the native CP-SAT format.

Abort if there is a constraint type we don't currently support.

Abort if finite variable bounds or objective is too large.

Abort if finite constraint bounds or coefficients are too large.

Definition at line 418 of file lp_utils.cc.

◆ NegatedRef()

int operations_research::sat::NegatedRef ( int ref)
inline

Small utility functions to deal with negative variable/literal references.

Definition at line 43 of file cp_model_utils.h.
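
A short illustration of the reference convention these helpers rely on (assuming the usual CpModelProto encoding where the negation of variable #i is -i - 1):

  const int ref = 3;
  const int negated = operations_research::sat::NegatedRef(ref);   // negated == -4.
  // NegatedRef() is an involution, so negating twice gives back the original reference.
  const int back = operations_research::sat::NegatedRef(negated);  // back == 3.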

◆ NegationOf() [1/3]

LinearExpression operations_research::sat::NegationOf ( const LinearExpression & expr)

Preserves canonicality.

Definition at line 423 of file linear_constraint.cc.

◆ NegationOf() [2/3]

std::vector< IntegerVariable > operations_research::sat::NegationOf ( const std::vector< IntegerVariable > & vars)

Returns the vector of the negated variables.

Definition at line 51 of file integer.cc.

◆ NegationOf() [3/3]

IntegerVariable operations_research::sat::NegationOf ( IntegerVariable i)
inline

Definition at line 185 of file integer.h.

◆ NewBestBoundCallback()

std::function< void(Model *)> operations_research::sat::NewBestBoundCallback ( const std::function< void(double)> & callback)

Creates a callback that will be called on each new best objective bound found.

Note that this function is called before the update takes place.

Definition at line 1990 of file cp_model_solver.cc.

◆ NewBooleanVariable()

std::function< BooleanVariable(Model *)> operations_research::sat::NewBooleanVariable ( )
inline

Model based functions.

Note
in the model API, we simply use int64_t for the integer values, so that it is nicer for the client. Internally these are converted to IntegerValue which is typechecked.

Definition at line 1893 of file integer.h.

◆ NewFeasibleSolutionLogCallback()

std::function< void(Model *)> operations_research::sat::NewFeasibleSolutionLogCallback ( const std::function< std::string(const CpSolverResponse &response)> & callback)

Creates a callback that appends a string to the search log when reporting a new solution.

The given function will be called on each improving feasible solution found during the search. For a non-optimization problem, if the option to find all solutions was set, then this will be called on each new solution.

Definition at line 1982 of file cp_model_solver.cc.

◆ NewFeasibleSolutionObserver()

std::function< void(Model *)> operations_research::sat::NewFeasibleSolutionObserver ( const std::function< void(const CpSolverResponse &response)> & callback)

Creates a solution observer to register with the model via model.Add(NewFeasibleSolutionObserver([](response){...}));

The given function will be called on each improving feasible solution found during the search. For a non-optimization problem, if the option to find all solutions was set, then this will be called on each new solution.

WARNING: Except when enumerate_all_solutions() is true, one shouldn't rely on this to get a set of "diverse" solutions since any future change to the solver might completely kill any diversity in the set of solutions observed.

Valid usage of this includes implementing features like:

  • Enumerating all solutions via enumerate_all_solutions(). If only n solutions are needed, this can also be used to abort when this number is reached.
  • Aborting early if a good enough solution is found.
  • Displaying log progress.
  • etc...

Definition at line 1975 of file cp_model_solver.cc.
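
A minimal sketch of the intended usage (cp_model_proto stands for a CpModelProto built elsewhere and is just a placeholder here):

  Model model;
  model.Add(NewFeasibleSolutionObserver([](const CpSolverResponse& response) {
    LOG(INFO) << "New solution, objective = " << response.objective_value();
  }));
  const CpSolverResponse last_response = SolveCpModel(cp_model_proto, &model);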

◆ NewIntegerVariable() [1/2]

std::function< IntegerVariable(Model *)> operations_research::sat::NewIntegerVariable ( const Domain & domain)
inline

Definition at line 1916 of file integer.h.

◆ NewIntegerVariable() [2/2]

std::function< IntegerVariable(Model *)> operations_research::sat::NewIntegerVariable ( int64_t lb,
int64_t ub )
inline

Definition at line 1907 of file integer.h.
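
A minimal sketch of the model-based API (variable names are illustrative):

  Model model;
  const IntegerVariable x = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable y = model.Add(NewIntegerVariable(Domain::FromValues({1, 3, 5})));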

◆ NewIntegerVariableFromLiteral()

std::function< IntegerVariable(Model *)> operations_research::sat::NewIntegerVariableFromLiteral ( Literal lit)
inline
Deprecated

Definition at line 1948 of file integer.h.

◆ NewInterval() [1/2]

std::function< IntervalVariable(Model *)> operations_research::sat::NewInterval ( int64_t min_start,
int64_t max_end,
int64_t size )
inline

Definition at line 965 of file intervals.h.

◆ NewInterval() [2/2]

std::function< IntervalVariable(Model *)> operations_research::sat::NewInterval ( IntegerVariable start,
IntegerVariable end,
IntegerVariable size )
inline

Definition at line 980 of file intervals.h.

◆ NewIntervalWithVariableSize()

std::function< IntervalVariable(Model *)> operations_research::sat::NewIntervalWithVariableSize ( int64_t min_start,
int64_t max_end,
int64_t min_size,
int64_t max_size )
inline

Definition at line 988 of file intervals.h.

◆ NewOptionalInterval() [1/2]

std::function< IntervalVariable(Model *)> operations_research::sat::NewOptionalInterval ( int64_t min_start,
int64_t max_end,
int64_t size,
Literal is_present )
inline
Note
this should only be used in tests.

To not have too many solutions during enumeration, we force the start at its min value for absent intervals.

Definition at line 1000 of file intervals.h.

◆ NewOptionalInterval() [2/2]

std::function< IntervalVariable(Model *)> operations_research::sat::NewOptionalInterval ( IntegerVariable start,
IntegerVariable end,
IntegerVariable size,
Literal is_present )
inline

Definition at line 1021 of file intervals.h.

◆ NewOptionalIntervalWithVariableSize()

std::function< IntervalVariable(Model *)> operations_research::sat::NewOptionalIntervalWithVariableSize ( int64_t min_start,
int64_t max_end,
int64_t min_size,
int64_t max_size,
Literal is_present )
inline

Definition at line 1031 of file intervals.h.

◆ NewSatParameters() [1/3]

std::function< SatParameters(Model *)> operations_research::sat::NewSatParameters ( const sat::SatParameters & parameters)

Tricky: It is important to initialize the model parameters before any of the solver objects are created, so that by default they use the given parameters.

Todo
(user): A notable exception to this is the TimeLimit which is currently not initializing itself from the SatParameters in the model. It also starts counting from the time of its creation. It would be good to find a solution that is less error-prone.

Definition at line 2010 of file cp_model_solver.cc.

◆ NewSatParameters() [2/3]

std::function< SatParameters(Model *)> operations_research::sat::NewSatParameters ( const SatParameters & parameters)

◆ NewSatParameters() [3/3]

std::function< SatParameters(Model *)> operations_research::sat::NewSatParameters ( const std::string & params)
Todo
(user): Support it on android.

Creates parameters for the solver, which you can add to the model with

model->Add(NewSatParameters(parameters_as_string_or_proto))

before calling SolveCpModel().

Definition at line 1999 of file cp_model_solver.cc.
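
A usage sketch combining the proto and string overloads (cp_model_proto is a placeholder for the model to solve; max_time_in_seconds is just one example of a SatParameters field):

  Model model;
  SatParameters parameters;
  parameters.set_max_time_in_seconds(10.0);
  model.Add(NewSatParameters(parameters));
  // Or, equivalently, from a text-format string:
  //   model.Add(NewSatParameters("max_time_in_seconds:10.0"));
  const CpSolverResponse response = SolveCpModel(cp_model_proto, &model);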

◆ NewWeightedSum()

template<typename VectorInt >
std::function< IntegerVariable(Model *)> operations_research::sat::NewWeightedSum ( const VectorInt & coefficients,
const std::vector< IntegerVariable > & vars )
inline

Model-based function to create an IntegerVariable that corresponds to the given weighted sum of other IntegerVariables.

Note
this is templated so that it can seamlessly accept vector<int> or vector<int64_t>.
Todo
(user): invert the coefficients/vars arguments.

To avoid overflow in the FixedWeightedSum() constraint, we need to compute the basic bounds on the sum.

Todo
(user): deal with overflow here too!

Definition at line 669 of file integer_expr.h.
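
A usage sketch (illustrative values; the coefficient container can be vector<int> or vector<int64_t> as noted above):

  Model model;
  const std::vector<IntegerVariable> vars = {
      model.Add(NewIntegerVariable(0, 10)), model.Add(NewIntegerVariable(0, 10))};
  // sum represents 2 * vars[0] + 3 * vars[1], with bounds derived from the terms.
  const IntegerVariable sum =
      model.Add(NewWeightedSum(std::vector<int64_t>{2, 3}, vars));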

◆ NoDuplicateVariable()

bool operations_research::sat::NoDuplicateVariable ( const LinearConstraint & ct)

Returns false if duplicate variables are found in ct.

Definition at line 367 of file linear_constraint.cc.

◆ NonDeterministicLoop()

void operations_research::sat::NonDeterministicLoop ( std::vector< std::unique_ptr< SubSolver > > & subsolvers,
int num_threads )

Executes the following loop: 1/ Synchronize all in the given order. 2/ Generate and schedule one task from the current "best" subsolver. 3/ Repeat until no extra task can be generated and all tasks are done.

The complexity of each selection is in O(num_subsolvers), but that should be okay given that we don't expect more than 100 such subsolvers.

Note
it is okay to incorporate "special" subsolvers that never produce any tasks. This can be used to synchronize classes used by many subsolvers just once, for instance.

The mutex guards num_in_flight and num_in_flight_per_subsolvers. This is used to detect when the search is done.

Predicate to be used with absl::Condition to detect that num_in_flight < num_threads. Must only be called while locking mutex.

The lambdas below use little space, but there is no reason to create millions of them, so we use the blocking nature of pool.Schedule() when the queue capacity is set.

Set to true if no task is pending right now.

Wait if num_in_flight == num_threads.

To support some "advanced" cancellation of subsolves, we still call synchronize every 0.1 seconds even if there is no worker available.

Todo
(user): We could also directly register callback to set stopping Boolean to false in a few places.

The stopping condition is that we do not have anything else to generate once all the tasks are done and synchronized.

We need to do that while holding the lock since the subtask below might currently be updating the time via AddTaskDuration().

It is hard to know when new info will allow more tasks to be scheduled, so for now we just sleep for a bit. Note that in practice we will never reach here except at the end of the search, because we can always schedule LNS threads.

Schedule next task.

Definition at line 187 of file subsolver.cc.

◆ NoOverlapMinRepairDistance()

int64_t operations_research::sat::NoOverlapMinRepairDistance ( const ConstraintProto & interval1,
const ConstraintProto & interval2,
absl::Span< const int64_t > solution )

Definition at line 1212 of file constraint_violation.cc.

◆ Not()

BoolVar operations_research::sat::Not ( BoolVar x)

A convenient wrapper so we can write Not(x) instead of x.Not() which is sometimes clearer.

Definition at line 87 of file cp_model.cc.
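
For example, with the CpModelBuilder API (a sketch; the constraint chosen here is arbitrary):

  CpModelBuilder cp_model;
  const BoolVar x = cp_model.NewBoolVar();
  const BoolVar y = cp_model.NewBoolVar();
  // "x implies not y", written with Not(y) instead of y.Not().
  cp_model.AddImplication(x, Not(y));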

◆ operator*() [1/4]

DoubleLinearExpr operations_research::sat::operator* ( double factor,
DoubleLinearExpr expr )
inline

Definition at line 1318 of file cp_model.h.

◆ operator*() [2/4]

DoubleLinearExpr operations_research::sat::operator* ( DoubleLinearExpr expr,
double factor )
inline

Definition at line 1313 of file cp_model.h.

◆ operator*() [3/4]

LinearExpr operations_research::sat::operator* ( int64_t factor,
LinearExpr expr )
inline

Definition at line 1232 of file cp_model.h.

◆ operator*() [4/4]

LinearExpr operations_research::sat::operator* ( LinearExpr expr,
int64_t factor )
inline

Definition at line 1228 of file cp_model.h.

◆ operator+() [1/10]

DoubleLinearExpr operations_research::sat::operator+ ( const DoubleLinearExpr & lhs,
const DoubleLinearExpr & rhs )
inline

Definition at line 1244 of file cp_model.h.

◆ operator+() [2/10]

DoubleLinearExpr operations_research::sat::operator+ ( const DoubleLinearExpr & lhs,
DoubleLinearExpr && rhs )
inline

Definition at line 1255 of file cp_model.h.

◆ operator+() [3/10]

LinearExpr operations_research::sat::operator+ ( const LinearExpr & lhs,
const LinearExpr & rhs )
inline

Definition at line 1186 of file cp_model.h.

◆ operator+() [4/10]

LinearExpr operations_research::sat::operator+ ( const LinearExpr & lhs,
LinearExpr && rhs )
inline

Definition at line 1195 of file cp_model.h.

◆ operator+() [5/10]

DoubleLinearExpr operations_research::sat::operator+ ( double lhs,
DoubleLinearExpr expr )
inline

Definition at line 1275 of file cp_model.h.

◆ operator+() [6/10]

DoubleLinearExpr operations_research::sat::operator+ ( DoubleLinearExpr && lhs,
const DoubleLinearExpr & rhs )
inline

Definition at line 1250 of file cp_model.h.

◆ operator+() [7/10]

DoubleLinearExpr operations_research::sat::operator+ ( DoubleLinearExpr && lhs,
DoubleLinearExpr && rhs )
inline

Definition at line 1260 of file cp_model.h.

◆ operator+() [8/10]

DoubleLinearExpr operations_research::sat::operator+ ( DoubleLinearExpr expr,
double rhs )
inline

Definition at line 1271 of file cp_model.h.

◆ operator+() [9/10]

LinearExpr operations_research::sat::operator+ ( LinearExpr && lhs,
const LinearExpr & rhs )
inline

Definition at line 1191 of file cp_model.h.

◆ operator+() [10/10]

LinearExpr operations_research::sat::operator+ ( LinearExpr && lhs,
LinearExpr && rhs )
inline

Definition at line 1199 of file cp_model.h.

◆ operator-() [1/12]

DoubleLinearExpr operations_research::sat::operator- ( const DoubleLinearExpr & lhs,
const DoubleLinearExpr & rhs )
inline

Definition at line 1280 of file cp_model.h.

◆ operator-() [2/12]

DoubleLinearExpr operations_research::sat::operator- ( const DoubleLinearExpr & lhs,
DoubleLinearExpr && rhs )
inline

Definition at line 1291 of file cp_model.h.

◆ operator-() [3/12]

LinearExpr operations_research::sat::operator- ( const LinearExpr & lhs,
const LinearExpr & rhs )
inline

Definition at line 1209 of file cp_model.h.

◆ operator-() [4/12]

LinearExpr operations_research::sat::operator- ( const LinearExpr & lhs,
LinearExpr && rhs )
inline

Definition at line 1218 of file cp_model.h.

◆ operator-() [5/12]

DoubleLinearExpr operations_research::sat::operator- ( double lhs,
DoubleLinearExpr expr )
inline

Definition at line 1307 of file cp_model.h.

◆ operator-() [6/12]

DoubleLinearExpr operations_research::sat::operator- ( DoubleLinearExpr && lhs,
const DoubleLinearExpr & rhs )
inline

Definition at line 1286 of file cp_model.h.

◆ operator-() [7/12]

DoubleLinearExpr operations_research::sat::operator- ( DoubleLinearExpr && lhs,
DoubleLinearExpr && rhs )
inline

Definition at line 1297 of file cp_model.h.

◆ operator-() [8/12]

DoubleLinearExpr operations_research::sat::operator- ( DoubleLinearExpr epxr,
double rhs )
inline

Definition at line 1303 of file cp_model.h.

◆ operator-() [9/12]

DoubleLinearExpr operations_research::sat::operator- ( DoubleLinearExpr expr)
inline

For DoubleLinearExpr.

Definition at line 1239 of file cp_model.h.

◆ operator-() [10/12]

LinearExpr operations_research::sat::operator- ( LinearExpr && lhs,
const LinearExpr & rhs )
inline

Definition at line 1214 of file cp_model.h.

◆ operator-() [11/12]

LinearExpr operations_research::sat::operator- ( LinearExpr && lhs,
LinearExpr && rhs )
inline

Definition at line 1223 of file cp_model.h.

◆ operator-() [12/12]

LinearExpr operations_research::sat::operator- ( LinearExpr expr)
inline

Minimal support for "natural" API to create LinearExpr.

Note(user): This might be optimized further by optimizing LinearExpr for holding one term, or introducing a LinearTerm class, but these should mainly be used to construct small expressions. Revisit if we run into performance issues. Note that if performance becomes a bottleneck for a client, then directly writing the proto will probably be even faster.

Definition at line 1184 of file cp_model.h.

◆ operator<<() [1/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
absl::Span< const IntegerLiteral > literals )
inline

Definition at line 270 of file integer.h.

◆ operator<<() [2/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
absl::Span< const Literal > literals )
inline

Definition at line 127 of file sat_base.h.

◆ operator<<() [3/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const BoolVar & var )

Definition at line 89 of file cp_model.cc.

◆ operator<<() [4/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const DoubleLinearExpr & e )

Definition at line 488 of file cp_model.cc.

◆ operator<<() [5/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const EnforcementStatus & e )

Definition at line 52 of file linear_propagation.cc.

◆ operator<<() [6/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const IntervalVar & var )

Definition at line 641 of file cp_model.cc.

◆ operator<<() [7/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const IntVar & var )

Definition at line 171 of file cp_model.cc.

◆ operator<<() [8/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const LinearConstraint & ct )
inline

Definition at line 124 of file linear_constraint.h.

◆ operator<<() [9/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const LinearExpr & e )

Definition at line 318 of file cp_model.cc.

◆ operator<<() [10/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
const ValueLiteralPair & p )

Definition at line 65 of file integer.cc.

◆ operator<<() [11/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
IntegerLiteral i_lit )
inline

Definition at line 265 of file integer.h.

◆ operator<<() [12/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
Literal literal )
inline

Definition at line 117 of file sat_base.h.

◆ operator<<() [13/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
LiteralWithCoeff term )
inline

Definition at line 70 of file pb_constraint.h.

◆ operator<<() [14/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & os,
SatSolver::Status status )
inline

Definition at line 1058 of file sat_solver.h.

◆ operator<<() [15/15]

std::ostream & operations_research::sat::operator<< ( std::ostream & out,
const IndexedInterval & interval )

Definition at line 416 of file diffn_util.cc.

◆ operator==() [1/2]

bool operations_research::sat::operator== ( const BoolArgumentProto & lhs,
const BoolArgumentProto & rhs )
inline

hashing support.

Currently limited to a few inner types of ConstraintProto.

Definition at line 310 of file cp_model_utils.h.

◆ operator==() [2/2]

bool operations_research::sat::operator== ( const LinearConstraintProto & lhs,
const LinearConstraintProto & rhs )
inline

Definition at line 331 of file cp_model_utils.h.

◆ OverlapOfTwoIntervals()

int64_t operations_research::sat::OverlapOfTwoIntervals ( const ConstraintProto & interval1,
const ConstraintProto & interval2,
absl::Span< const int64_t > solution )

----- CompiledNoOverlap2dConstraint -----

We force a min cost of 1 to cover the case where an interval of size 0 is in the middle of another interval.

Definition at line 1187 of file constraint_violation.cc.

◆ PartialIsOneOfVar()

std::function< void(Model *)> operations_research::sat::PartialIsOneOfVar ( IntegerVariable target_var,
const std::vector< IntegerVariable > & vars,
const std::vector< Literal > & selectors )
inline

The target variable is equal to exactly one of the candidate variables. The equality is controlled by the given "selector" literals.

Note(user): This only propagates from the min/max of still-possible candidates to the min/max of the target variable. The full constraint also requires dealing with the case when one of the literals is true.

Note(user): If there are just one or two candidates, this doesn't add anything.

Propagate the min.

Propagate the max.

Definition at line 165 of file cp_constraints.h.

◆ PositiveMod()

int64_t operations_research::sat::PositiveMod ( int64_t x,
int64_t m )

Just returns x % m but with a result always in [0, m).

Definition at line 182 of file util.cc.
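
For instance, contrasting with the built-in operator% which can return negative values:

  // In C++, -7 % 3 == -1, but PositiveMod always returns a value in [0, m).
  const int64_t r = operations_research::sat::PositiveMod(-7, 3);  // r == 2.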

◆ PositiveRef()

int operations_research::sat::PositiveRef ( int ref)
inline

Definition at line 44 of file cp_model_utils.h.

◆ PositiveRemainder()

IntegerValue operations_research::sat::PositiveRemainder ( IntegerValue dividend,
IntegerValue positive_divisor )
inline

Returns dividend - FloorRatio(dividend, divisor) * divisor;

This function is around the same speed as the computation above, but it never causes integer overflow. Note also that when calling FloorRatio() then PositiveRemainder(), the compiler should optimize the modulo away and just reuse the one from the first integer division.

Definition at line 153 of file integer.h.
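
A small numeric illustration of the formula above:

  // FloorRatio(-7, 3) == -3, so the remainder is -7 - (-3) * 3 == 2.
  const IntegerValue r = PositiveRemainder(IntegerValue(-7), IntegerValue(3));
  // r == IntegerValue(2).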

◆ PositiveVarExpr()

LinearExpression operations_research::sat::PositiveVarExpr ( const LinearExpression & expr)

Returns the same expression with positive variables.

Definition at line 431 of file linear_constraint.cc.

◆ PositiveVariable()

IntegerVariable operations_research::sat::PositiveVariable ( IntegerVariable i)
inline

Definition at line 193 of file integer.h.

◆ PossibleIntegerOverflow()

bool operations_research::sat::PossibleIntegerOverflow ( const CpModelProto & model,
absl::Span< const int > vars,
absl::Span< const int64_t > coeffs,
int64_t offset = 0 )

Check if a given linear expression can create overflow. It is exposed to test new constraints created during the presolve.

Note
we use min/max with zero to disallow "alternative" terms and be sure that we cannot have an overflow if we do the computation in a different order.

In addition to computing the min/max possible sum, we also often compare it with the constraint bounds, so we do not want max - min to overflow. We might also create an intermediate variable to represent the sum.

Note
it is important to be symmetric here, as we do not want expr to pass but not -expr!

Definition at line 890 of file cp_model_checker.cc.

◆ PossibleOverflow()

bool operations_research::sat::PossibleOverflow ( const IntegerTrail & integer_trail,
const LinearConstraint & constraint )

Tests for possible overflow in the given linear constraint used for the linear relaxation. This is a bit relaxed compared to what we require for generic linear constraint that are used in our CP propagators.

If this check passes, our constraint should be safe to use in our simplification code, our cut computation, etc.

Definition at line 469 of file linear_constraint.cc.

◆ PostsolveClause()

void operations_research::sat::PostsolveClause ( const ConstraintProto & ct,
std::vector< Domain > * domains )

This postsolve is "special". If the clause is not satisfied, we fix the first literal in the clause to true (even if it was fixed to false). This makes it possible to handle more complex presolve operations used by the SAT presolver.

Also, any "free" Boolean should be fixed to some value for the subsequent postsolve steps.

We still need to assign free variables. Any value should work.

Change the value of the first variable (which was chosen at presolve).

Definition at line 38 of file cp_model_postsolve.cc.

◆ PostsolveElement()

void operations_research::sat::PostsolveElement ( const ConstraintProto & ct,
std::vector< Domain > * domains )

We only support 3 cases in the presolve currently.

Deal with non-fixed target and non-fixed index. This only happens if, whatever the value of the index and the selected variable, we can choose a valid target; so we just fix the index to its min value in this case.

If the selected variable is not fixed, we also need to fix it.

Deal with fixed index.

Deal with fixed target (and constant vars).

Definition at line 235 of file cp_model_postsolve.cc.

◆ PostsolveExactlyOne()

void operations_research::sat::PostsolveExactlyOne ( const ConstraintProto & ct,
std::vector< Domain > * domains )

Fix one at true.

Fix any free variable left at false.

Definition at line 61 of file cp_model_postsolve.cc.

◆ PostsolveIntMod()

void operations_research::sat::PostsolveIntMod ( const ConstraintProto & ct,
std::vector< Domain > * domains )

We only support assigning to an affine target.

Definition at line 319 of file cp_model_postsolve.cc.

◆ PostsolveLinear()

void operations_research::sat::PostsolveLinear ( const ConstraintProto & ct,
std::vector< Domain > * domains )

Here we simply assign all non-fixed variables to a feasible value, which should always exist by construction.

Fast track for the most common case.

The postsolve code is a bit involved: if there is more than one free variable, we have to postsolve them one by one.

Here we recompute the same domains as during the presolve. Everything is as if we were substituting the variables one by one, in reverse order: terms[i] + fixed_activity \in rhs_domains[i].

Note
these should be exactly the same computations as the ones done during presolve and should be exact. However, we have some tests that do not comply, so we don't check exactness here. Also, as long as we don't get an empty domain below, and the complexity of the domains does not explode here, we should be fine.

Choose a value for free_vars[i] that falls into rhs_domains[i] - fixed_activity. This will crash if the intersection is empty, but it shouldn't be.

Todo
(user): I am not 100% sure that the algorithm here covers all the presolve cases, so if this fails, it might indicate an issue here and not in the presolve/solver code.

Definition at line 115 of file cp_model_postsolve.cc.

◆ PostsolveLinMax()

void operations_research::sat::PostsolveLinMax ( const ConstraintProto & ct,
std::vector< Domain > * domains )

Computes the max of each expression and assigns it to the target expression. We only support post-solving the case where, whatever the values of all expressions, there will be a valid target.

In most cases all expressions are fixed, except in the corner case where one of the expressions refers to the target itself!

Definition at line 216 of file cp_model_postsolve.cc.

◆ PostsolveResponse()

void operations_research::sat::PostsolveResponse ( int64_t num_variables_in_original_model,
const CpModelProto & mapping_proto,
const std::vector< int > & postsolve_mapping,
std::vector< int64_t > * solution )

Postsolves the given response using information filled by our presolver.

This works as follows:

  • First we fix fixed variables of the mapping_model according to the solution of the presolved problem and the index mapping.
  • Then, we process the mapping constraints in "reverse" order, and unit propagate each of them when necessary. By construction this should never give rise to any conflict. And after each constraint, we should have a feasible solution to the presolved problem + all already postsolved constraints. This is the invariant we maintain.
  • Finally, we arbitrarily fix any free variables left and update the given response with the new solution.
Note
Most of the postsolve operations require the constraints to have been written in the correct way by the presolve.
Todo
(user): We could use the search strategy to fix free variables to some chosen values? The feature might never be needed though.

Read the initial variable domains, either from the fixed solution of the presolved problems or from the mapping model.

Process the constraints in reverse order.

We ignore constraints with an enforcement literal set to false. If the enforcement is still unclear, we still process the constraint.

This should never happen as we control what kind of constraint we add to the mapping_proto.

Fill the response. Maybe fix some still unfixed variable.

Definition at line 334 of file cp_model_postsolve.cc.

◆ PostsolveResponseWithFullSolver()

void operations_research::sat::PostsolveResponseWithFullSolver ( int num_variables_in_original_model,
CpModelProto mapping_proto,
const std::vector< int > & postsolve_mapping,
std::vector< int64_t > * solution )
Todo
(user): If this ever shows up in the profile, we could avoid copying the mapping_proto if we are careful about how we modify the variable domain before postsolving it. Note that 'num_variables_in_original_model' refers to the model before presolve.

Fix the correct variable in the mapping_proto.

Postsolve parameters.

Todo
(user): this problem is usually trivial, but we may still want to impose a time limit or copy some of the parameters passed by the user.

We only copy the solution from the postsolve_response to the response.

Definition at line 1649 of file cp_model_solver_helpers.cc.

◆ PostsolveResponseWrapper()

void operations_research::sat::PostsolveResponseWrapper ( const SatParameters & params,
int num_variable_in_original_model,
const CpModelProto & mapping_proto,
const std::vector< int > & postsolve_mapping,
std::vector< int64_t > * solution )

Definition at line 1693 of file cp_model_solver_helpers.cc.

◆ Preprocess()

bool operations_research::sat::Preprocess ( absl::Span< PermutableItem > & items,
std::pair< IntegerValue, IntegerValue > & bounding_box_size,
int max_complexity )

Exposed for testing.

Try to find an equivalent smaller OPP problem by fixing large items. The API is a bit unusual: it takes a reference to a mutable Span of sizes and rectangles. When this function finds an item that can be fixed, it sets the position of the PermutableItem, reorders items to put that item at the end of the span, and then resizes the span so it contains only non-fixed items.

Note
the position of input items is not used and the position of non-fixed items will not be modified by this function.

No point in optimizing an obviously infeasible instance.

No item (not even the narrowest one) fits alongside the widest item. So we care only about fitting the remaining items in the remaining space.

Definition at line 567 of file 2d_packing_brute_force.cc.

◆ PresolveBooleanLinearExpression()

void operations_research::sat::PresolveBooleanLinearExpression ( std::vector< Literal > * literals,
std::vector< Coefficient > * coefficients,
Coefficient * offset )

Transforms the given linear expression so that:

  • duplicate terms are merged.
  • terms with a literal and its negation are merged.
  • all weights are positive.
Todo
(user): Merge this with similar code like ComputeBooleanLinearExpressionCanonicalForm().

Sorting by literal index regroups duplicate or negated literals together.

Merge terms if needed.

The term is coeff * (1 - X).

Rebuild with positive coeff.

coeff * X = coeff - coeff * (1 - X).

Definition at line 855 of file optimization.cc.

◆ PresolveCpModel()

CpSolverStatus operations_research::sat::PresolveCpModel ( PresolveContext * context,
std::vector< int > * postsolve_mapping )

Convenient wrapper to call the full presolve.

Public API.

Definition at line 12596 of file cp_model_presolve.cc.

◆ PresolveFixed2dRectangles()

bool operations_research::sat::PresolveFixed2dRectangles ( absl::Span< const RectangleInRange > non_fixed_boxes,
std::vector< Rectangle > * fixed_boxes )

Given a set of fixed boxes and a set of boxes that are not yet fixed (but attributed a range), look for a more optimal set of fixed boxes that are equivalent to the initial set of fixed boxes. This uses "equivalent" in the sense that a placement of the non-fixed boxes will be non-overlapping with all other boxes if and only if it was with the original set of fixed boxes too.

This implementation compiles a set of areas that cannot be occupied by any item, then calls ReduceNumberofBoxes() to use these areas to minimize fixed_boxes.

Fixed items are only useful to constrain where the non-fixed items can be placed. This means in particular that any part of a fixed item outside the bounding box of the non-fixed items is useless. Clip them.

The whole rectangle was outside of the domain, remove it.

Add fake rectangles to build a frame around the bounding box. This makes it possible to find more areas that must be empty. The frame is as follows:

  +************
  +...........+
  +...........+
  +...........+
  ************+

All items we added to optional_boxes at this point are only to be used by the "gap between items" logic below. They are not actual optional boxes and should be removed right after the logic is applied.

Add a rectangle to optional_boxes but respecting that rectangles must remain disjoint.

Now check if there is any space that cannot be occupied by any non-fixed item.

Now look for gaps between objects that are too small to place anything.

Definition at line 34 of file 2d_rectangle_presolve.cc.

◆ PrintClauses()

bool operations_research::sat::PrintClauses ( const std::string & file_path,
SatFormat format,
absl::Span< const std::vector< Literal > > clauses,
int num_variables )

Prints the given clauses in the file at the given path, using the given file format. Returns true iff the file was successfully written.

Definition at line 606 of file drat_checker.cc.

◆ ProbeAndFindEquivalentLiteral()

void operations_research::sat::ProbeAndFindEquivalentLiteral ( SatSolver * solver,
SatPostsolver * postsolver,
DratProofHandler * drat_proof_handler,
util_intops::StrongVector< LiteralIndex, LiteralIndex > * mapping,
SolverLogger * = nullptr )

Presolver that does literals probing and finds equivalent literals by computing the strongly connected components of the graph: literal l -> literals propagated by l.

Clears the mapping if there are no equivalent literals. Otherwise, mapping[l] is the representative of the equivalent class of l. Note that mapping[l] may be equal to l.

The postsolver will be updated so it can recover a solution of the mapped problem. Note that this works on any problem the SatSolver can handle, not only pure SAT problems, but the returned mapping does need to be applied to all constraints.

We have no guarantee that the cycle of x and not(x) touch the same variables. This is because we may have more info for the literal probed later or the propagation may go only in one direction. For instance if we have two clauses (not(x1) v x2) and (not(x1) v not(x2) v x3) then x1 implies x2 and x3 but not(x3) doesn't imply anything by unit propagation.

Todo
(user): Add some constraint so that it does?

Because of this, we "merge" the cycles.

Todo
(user): check compatibility? if x ~ not(x) => unsat. but probably, the solver would have found this too? not sure...

We rely on the fact that the representative of a literal x and the one of its negation are the same variable.

If a variable in a cycle is fixed. We want to fix all of them.

We first fix all representative if one variable of the cycle is fixed. In a second pass we fix all the variable of a cycle whose representative is fixed.

Todo
(user): Fixing a variable might fix more of them by propagation, so we might not fix everything possible with these loops.

Definition at line 1145 of file simplification.cc.

◆ ProbeAndSimplifyProblem()

void operations_research::sat::ProbeAndSimplifyProblem ( SatPostsolver * postsolver,
LinearBooleanProblem * problem )

A simple preprocessing step that does basic probing and removes the equivalent literals.

A simple preprocessing step that does basic probing and removes the fixed and equivalent variables. Note that the variable indices will also be remapped in order to be dense. The given postsolver will be updated with the information needed during postsolve.

Todo
(user): expose the number of iterations as a parameter.

We can abort if no information is learned.

Fix fixed variables in the equivalence map and in the postsolver.

Remap the variables into a dense set. All the variables for which the equiv_map is not the identity are no longer needed.

Apply the variable mapping.

Definition at line 837 of file boolean_problem.cc.

◆ ProbeLiteral()

bool operations_research::sat::ProbeLiteral ( Literal assumption,
SatSolver * solver )
Note
since we only care about Booleans here, even if we have a feasible solution, it might not be feasible for the full cp_model.
Todo
(user): Still use it if the problem is Boolean only.

Definition at line 173 of file optimization.cc.

◆ ProdOverflow()

bool operations_research::sat::ProdOverflow ( IntegerValue t,
IntegerValue value )
inline

Definition at line 117 of file integer.h.

◆ ProductConstraint()

std::function< void(Model *)> operations_research::sat::ProductConstraint ( AffineExpression a,
AffineExpression b,
AffineExpression p )
inline

Adds the constraint: a * b = p.

Definition at line 786 of file integer_expr.h.
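
A usage sketch within the model-based API (this assumes, as elsewhere in this API, that an IntegerVariable converts to an AffineExpression):

  Model model;
  const IntegerVariable a = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable b = model.Add(NewIntegerVariable(0, 10));
  const IntegerVariable p = model.Add(NewIntegerVariable(0, 100));
  model.Add(ProductConstraint(a, b, p));  // Enforces a * b == p.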

◆ ProductWithModularInverse()

int64_t operations_research::sat::ProductWithModularInverse ( int64_t coeff,
int64_t mod,
int64_t rhs )

If we know that X * coeff % mod = rhs % mod, this returns c such that PositiveMod(X, mod) = c.

This requires coeff != 0, mod !=0 and gcd(coeff, mod) == 1. The result will be in [0, mod) but there is no other condition on the sign or magnitude of a and b.

This is overflow safe, and when rhs == 0 or abs(mod) == 1, it returns 0.

Make both in [0, mod).

From X * coeff % mod = rhs, we deduce that X % mod = rhs * inverse % mod.

We make the operation in 128 bits to be sure not to have any overflow here.

Definition at line 187 of file util.cc.
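
A small worked example (5 is the modular inverse of 3 modulo 7, see ModularInverse() above):

  // If X * 3 % 7 == 1, then X % 7 == 5 * 1 % 7 == 5. Indeed, X = 5 gives 15 % 7 == 1.
  const int64_t c = operations_research::sat::ProductWithModularInverse(3, 7, 1);  // c == 5.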

◆ PropagateAutomaton()

void operations_research::sat::PropagateAutomaton ( const AutomatonConstraintProto & proto,
const PresolveContext & context,
std::vector< absl::flat_hash_set< int64_t > > * states,
std::vector< absl::flat_hash_set< int64_t > > * labels )

Fills and propagates the set of reachable states/labels.

Todo
(user): Note that if we have duplicate variables controlling different time points, this might not reach the fixed point. Fix? It is not that important as the expansion takes care of this case anyway.

Forward pass.

Backward pass.

Definition at line 51 of file cp_model_expand.cc.

◆ PropagateEncodingFromEquivalenceRelations()

void operations_research::sat::PropagateEncodingFromEquivalenceRelations ( const CpModelProto & model_proto,
Model * m )

Process all affine relations of the form a*X + b*Y == cte. For each literal associated with (X >= bound) or (X == value), associate it with its corresponding relation on Y. Also do the other side.

Todo
(user): In an ideal world, all affine relations like this should be removed in the presolve.

Loop over all constraints and find affine ones.

Make sure the coefficients are positive.

Todo
(user): This is not supposed to happen, but apparently it did happen once on routing_GCM_0001_sat.fzn. Investigate and fix.

We first map the >= literals. It is important to do that first, since otherwise mapping a == literal might create the underlying >= and <= literals.

Same for the == literals.

Todo
(user): This is similar to LoadEquivalenceAC() for unreified constraints, but when the later is called, more encoding might have taken place.

Using this function deals properly with UNSAT.

Definition at line 822 of file cp_model_loader.cc.

◆ PseudoCost()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::PseudoCost ( Model * model)

Gets the branching variable using pseudo costs and combines it with a value for branching.

Todo
(user): This will be overridden by the value decision heuristic in almost all cases.

Definition at line 445 of file integer_search.cc.

◆ QuickSolveWithHint()

void operations_research::sat::QuickSolveWithHint ( const CpModelProto & model_proto,
Model * model )

Try to find a solution by following the hint and using a low conflict limit. The CpModelProto must already be loaded in the Model.

Temporarily change the parameters.

If the model was loaded with "optimize_with_core" then the objective variable is not linked to its linear expression. Because of that, we can return a solution that does not satisfy the objective domain.

Todo
(user): This is fixable, but then do we need the hint when optimizing with core?

Solve decision problem.

Restrict the objective.

This code is here to debug bad presolve during LNS that corrupt the hint.

Note
sometimes the deterministic limit is hit before the hint can be completed, so we don't report that as an error.

Tricky: we can only test this if we don't already have a feasible solution, as we do when the hint is complete.

Definition at line 1454 of file cp_model_solver_helpers.cc.

◆ RandomizeDecisionHeuristic()

void operations_research::sat::RandomizeDecisionHeuristic ( absl::BitGenRef random,
SatParameters * parameters )

Randomizes the decision heuristic of the given SatParameters.

Random preferred variable order.

Random polarity initial value.

Other random parameters.

Definition at line 106 of file util.cc.

◆ RandomizeOnRestartHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::RandomizeOnRestartHeuristic ( bool lns_mode,
Model * model )
Todo
(user): Add other policies and perform more experiments.

Add sat search + fixed_search (to complete the search).

Adds user defined search if present.

Always add heuristic search.

The higher weight for the sat policy is because this policy actually contains a lot of variation as we randomize the sat parameters.

Todo
(user): Do more experiments to find better distribution.

Value selection.

LP Based value.

Solution based value.

Min value.

Special case: Don't change the decision value.

Todo
(user): These distribution values are just guessed values. They need to be tuned.

Set some assignment preference.

Todo
(user): Also use LP value as assignment like in Bop.

Use Boolean objective as assignment preference.

Because this is a minimization problem, we prefer to assign a Boolean variable to its "low" objective value. So if a literal has a positive weight when true, we want to set it to false.

Select the variable selection heuristic.

Select the value selection heuristic.

Get the current decision.

Special case: Don't override the decision value.

Decode the decision and get the variable.

Try the selected policy.

Selected policy failed. Revert back to original decision.

Definition at line 955 of file integer_search.cc.

◆ ReadDomainFromProto()

template<typename ProtoWithDomain >
Domain operations_research::sat::ReadDomainFromProto ( const ProtoWithDomain & proto)

Reads a Domain from the domain field of a proto.

Definition at line 133 of file cp_model_utils.h.

◆ RecordLPRelaxationValues()

void operations_research::sat::RecordLPRelaxationValues ( Model * model)

Adds the current LP solution to the pool.

Todo
(user): The default of ::infinity() for variables for which we do not have any LP solution is weird and inconsistent with the ModelLpValues default, which is zero. Fix. Note that in practice, at linearization level 2, all variables will eventually have an LP relaxation value, so it shouldn't matter much to just use zero in RINS/RENS.

We only loop over the positive variables.

Definition at line 38 of file rins.cc.

◆ ReduceModuloBasis()

void operations_research::sat::ReduceModuloBasis ( absl::Span< const std::vector< absl::int128 > > basis,
const int elements_to_consider,
std::vector< absl::int128 > & v )

Definition at line 45 of file diophantine.cc.

◆ ReduceNodes()

void operations_research::sat::ReduceNodes ( Coefficient upper_bound,
Coefficient * lower_bound,
std::vector< EncodingNode * > * nodes,
SatSolver * solver )

Reduces the nodes using the now fixed literals, updates the lower bound, and returns the set of assumptions for the next round of the core-based algorithm. Returns an empty set of assumptions if everything is fixed.

Remove the left-most variables fixed to one from each node. Also update the lower_bound. Note that Reduce() needs the solver to be at the root node in order to work.

Fix the nodes' right-most variables that are above the gap. If we closed the problem, we abort and return an empty vector.

Remove the empty nodes.

Sort the nodes.

Todo
(user): with DEFAULT_ASSUMPTION_ORDER, this will lead to a somewhat weird behavior, since we will reverse the nodes at each iteration...

Definition at line 501 of file encoding.cc.

◆ ReduceNumberofBoxes()

bool operations_research::sat::ReduceNumberofBoxes ( std::vector< Rectangle > * mandatory_rectangles,
std::vector< Rectangle > * optional_rectangles )

The current implementation just greedily merges rectangles that share an edge. This is far from optimal, and there exists a polynomial optimal algorithm (see page 3 of [1]) for this problem, at least for the case where optional_rectangles is empty.

Todo
(user): improve

[1] Eppstein, David. "Graph-theoretic solutions to computational geometry problems." International Workshop on Graph-Theoretic Concepts in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

bool for is_optional

Merge two rectangles!

Definition at line 272 of file 2d_rectangle_presolve.cc.

◆ RefIsPositive()

bool operations_research::sat::RefIsPositive ( int ref)
inline

Definition at line 45 of file cp_model_utils.h.

◆ RegisterAndTransferOwnership()

template<class T >
void operations_research::sat::RegisterAndTransferOwnership ( Model * model,
T * ct )

Definition at line 781 of file integer_expr.h.

◆ RegisterClausesExport()

void operations_research::sat::RegisterClausesExport ( int id,
SharedClausesManager * shared_clauses_manager,
Model * model )

Registers a callback that will export good clauses discovered during search.

Note
this callback takes no global locks, everything operates on this worker's own clause stream, whose lock is only used by this worker, and briefly when generating a batch in SharedClausesManager::Synchronize().

Definition at line 811 of file cp_model_solver_helpers.cc.

◆ RegisterClausesLevelZeroImport()

int operations_research::sat::RegisterClausesLevelZeroImport ( int id,
SharedClausesManager * shared_clauses_manager,
Model * model )

Registers a callback to import new clauses stored in the shared_clauses_manager. These clauses are imported at level 0 of the search in the linear scan minimize function. It returns the id of the worker in the shared clause manager.

Todo
(user): Can we import them in the core worker ?

Check this clause was not already learned by this worker. We can delete the fingerprint because we should not learn an identical clause, and the global stream will not emit the same clause while any worker hasn't consumed this clause (and thus also shouldn't relearn the clause).

Definition at line 863 of file cp_model_solver_helpers.cc.

◆ RegisterObjectiveBestBoundExport()

void operations_research::sat::RegisterObjectiveBestBoundExport ( IntegerVariable objective_var,
SharedResponseManager * shared_response_manager,
Model * model )

Registers a callback that will report improving objective best bounds. It will be called each time new objective bounds are propagated at level zero.

If we are not in interleave_search we synchronize right away.

Definition at line 729 of file cp_model_solver_helpers.cc.

◆ RegisterObjectiveBoundsImport()

void operations_research::sat::RegisterObjectiveBoundsImport ( SharedResponseManager * shared_response_manager,
Model * model )

Registers a callback to import new objective bounds. It will be called each time the search main loop is back to level zero. Note that in the presence of assumptions, this will not happen until the set of assumptions is changed.

Definition at line 758 of file cp_model_solver_helpers.cc.

◆ RegisterVariableBoundsLevelZeroExport()

void operations_research::sat::RegisterVariableBoundsLevelZeroExport ( const CpModelProto & ,
SharedBoundsManager * shared_bounds_manager,
Model * model )

Registers a callback that will export variable bounds fixed at level 0 of the search. This should not be registered to a LNS search.

Inspect the modified IntegerVariables.

Todo
(user): We could imagine an API based on atomic<int64_t> that could preemptively check if these new bounds are improving.

Inspect the newly modified Booleans.

Clear for next call.

If we are not in interleave_search we synchronize right away.

The callback will just be called on NEWLY modified variables. So initially, we do want to read all variables.

Todo
(user): Find a better way? It seems nicer to register this before any variable is modified. But then we don't want to call it each time we reach level zero during probing. It should be better to only call it when a new variable has been fixed.

Definition at line 543 of file cp_model_solver_helpers.cc.

◆ RegisterVariableBoundsLevelZeroImport()

void operations_research::sat::RegisterVariableBoundsLevelZeroImport ( const CpModelProto & model_proto,
SharedBoundsManager * shared_bounds_manager,
Model * model )

Registers a callback to import new variable bounds stored in the shared_bounds_manager. These bounds are imported at level 0 of the search in the linear scan minimize function.

If this is a Boolean, fix it if not already done.

Note
it is important not to use AddUnitClause() as we do not want to propagate after each addition.

Deal with integer.

Definition at line 644 of file cp_model_solver_helpers.cc.

◆ ReifiedBoolAnd()

std::function< void(Model *)> operations_research::sat::ReifiedBoolAnd ( const std::vector< Literal > & literals,
Literal r )
inline

r <=> (all literals are true).

Note(user): we could have called ReifiedBoolOr() with everything negated.

All true => r true.

Definition at line 991 of file sat_solver.h.
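
A usage sketch with the low-level SAT model API (variable names are illustrative):

  Model model;
  const Literal a(model.Add(NewBooleanVariable()), /*is_positive=*/true);
  const Literal b(model.Add(NewBooleanVariable()), /*is_positive=*/true);
  const Literal r(model.Add(NewBooleanVariable()), /*is_positive=*/true);
  model.Add(ReifiedBoolAnd({a, b}, r));  // r <=> (a and b).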

◆ ReifiedBoolLe()

std::function< void(Model *)> operations_research::sat::ReifiedBoolLe ( Literal a,
Literal b,
Literal r )
inline

r <=> (a <= b).

r <=> (a <= b) is the same as r <=> not(a=1 and b=0). So r <=> a=0 OR b=1.

Definition at line 1007 of file sat_solver.h.

◆ ReifiedBoolOr()

std::function< void(Model *)> operations_research::sat::ReifiedBoolOr ( const std::vector< Literal > & literals,
Literal r )
inline

r <=> (at least one literal is true). This is a reified clause.

All false => r false.

Definition at line 957 of file sat_solver.h.

◆ ReindexArcs()

template<class IntContainer >
int operations_research::sat::ReindexArcs ( IntContainer * tails,
IntContainer * heads,
absl::flat_hash_map< int, int > * mapping_output = nullptr )

Changes the node indices so that we get a graph in [0, num_nodes) where every node has at least one incoming or outgoing arc. Returns the number of nodes.

Put all nodes in a set.

Compute the new indices while keeping a stable order.

Remap the arcs.

Definition at line 210 of file circuit.h.
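
For example (a sketch; the exact renumbering depends on the stable order mentioned above, and the comments assume nodes are renumbered in increasing order):

  std::vector<int> tails = {5, 9};
  std::vector<int> heads = {9, 7};
  const int num_nodes = ReindexArcs(&tails, &heads);
  // num_nodes == 3. With the renumbering 5->0, 7->1, 9->2,
  // tails becomes {0, 2} and heads becomes {2, 1}.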

◆ RemoveNearZeroTerms()

void operations_research::sat::RemoveNearZeroTerms ( const SatParameters & params,
MPModelProto * mp_model,
SolverLogger * logger )

To satisfy our scaling requirements, any term that is almost zero can just be set to zero. We need to do that before operations like DetectImpliedIntegers(), because really low coefficients can cause issues and might lead to less detection.

Having really low bounds or rhs can be problematic. We set them to zero.

Compute for each variable its current maximum magnitude. Note that we will only scale variable with a coefficient >= 1, so it is safe to use this bound.

Note
when a variable is fixed to zero, the code here removes all its coefficients. But we do not count them here.

We want the maximum absolute error from setting coefficients to zero to not exceed our wanted MIP precision. So for a binary variable we might set to zero coefficients around 1e-7. But for large domains, we need lower coefficients than that, around 1e-12 with the default params.mip_max_bound(). This also depends on the size of the constraint.

We also do the same for the objective coefficient.

Definition at line 308 of file lp_utils.cc.

◆ RemoveZeroTerms()

void operations_research::sat::RemoveZeroTerms ( LinearConstraint * constraint)

Removes the entries with a coefficient of zero.

Definition at line 278 of file linear_constraint.cc.

◆ RenderDot()

std::string operations_research::sat::RenderDot ( std::optional< Rectangle > bb,
absl::Span< const Rectangle > solution )

Render a packing solution as a Graphviz dot file. Only works in the "neato" or "fdp" Graphviz backends.

Definition at line 1543 of file diffn_util.cc.

◆ RepeatParameters()

std::vector< SatParameters > operations_research::sat::RepeatParameters ( absl::Span< const SatParameters > base_params,
int num_params_to_generate )

Given a base set of parameters, if non-empty, this repeats them (round-robin) until we get num_params_to_generate. Note that if we don't have a multiple, the first base parameters will be repeated more than the others.

Note
this will also change the random_seed of each of these parameters.

Return if we are done.

Repeat parameters until we have enough.

Definition at line 996 of file cp_model_search.cc.

◆ ReportEnergyConflict()

bool operations_research::sat::ReportEnergyConflict ( Rectangle bounding_box,
absl::Span< const int > boxes,
SchedulingConstraintHelper * x,
SchedulingConstraintHelper * y )

Checks that there is indeed a conflict for the given bounding_box and report it. This returns false for convenience as we usually want to return false on a conflict.

Todo
(user): relax the bounding box dimension to have a relaxed explanation. We can also minimize the number of required intervals.

We abort early if a subset of boxes is enough.

Todo
(user): Also relax the box if possible.

Definition at line 127 of file diffn_util.cc.

◆ ResetAndSolveIntegerProblem()

SatSolver::Status operations_research::sat::ResetAndSolveIntegerProblem ( const std::vector< Literal > & assumptions,
Model * model )

Resets the solver to the given assumptions before calling SolveIntegerProblem().

Backtrack to level zero.

Sync bounds and maybe do some inprocessing. We reuse the BeforeTakingDecision() code

Add the assumptions if any and solve.

Definition at line 1566 of file integer_search.cc.

◆ Resolve()

bool operations_research::sat::Resolve ( absl::Span< const Literal > clause,
absl::Span< const Literal > other_clause,
Literal complementary_literal,
VariablesAssignment * assignment,
std::vector< Literal > * resolvent )

Returns true if 'complementary_literal' is the unique complementary literal in the two given clauses. If so the resolvent of these clauses (i.e. their union with 'complementary_literal' and its negation removed) is set in 'resolvent'. 'clause' must contain 'complementary_literal', while 'other_clause' must contain its negation. 'assignment' must have at least as many variables as each clause, and they must all be unassigned. They are still unassigned upon return.

Temporary assignment used to do the checks below in linear time.

Revert the temporary assignment done above.

Definition at line 478 of file drat_checker.cc.

◆ RestartEveryKFailures()

std::function< bool()> operations_research::sat::RestartEveryKFailures ( int k,
SatSolver * solver )

A restart policy that restarts every k failures.

Definition at line 1157 of file integer_search.cc.

◆ RestrictObjectiveDomainWithBinarySearch()

void operations_research::sat::RestrictObjectiveDomainWithBinarySearch ( IntegerVariable objective_var,
const std::function< void()> & feasible_solution_observer,
Model * model )

Uses a low conflict limit and performs a binary search to try to restrict the domain of objective_var.

Set the requested conflict limit.

The assumption (objective <= value) for values in [unknown_min, unknown_max] reached the conflict limit.

We first refine the lower bound and then the upper bound.

Update the objective lower bound.

The objective is the current lower bound of the objective_var.

We have a solution, restrict the objective upper bound to only look for better ones now.

Definition at line 251 of file optimization.cc.

◆ SafeAddLinearExpressionToLinearConstraint()

bool operations_research::sat::SafeAddLinearExpressionToLinearConstraint ( const LinearExpressionProto & expr,
int64_t coefficient,
LinearConstraintProto * linear )

Same method, but returns whether the addition was possible without overflowing.

Definition at line 600 of file cp_model_utils.cc.

◆ SafeDoubleToInt64()

int64_t operations_research::sat::SafeDoubleToInt64 ( double value)
inline

Converts a double to int64_t and caps large magnitudes at kint64min/max. We also arbitrarily return 0 for NaNs.

Note(user): This is similar to SaturatingFloatToInt(), but we use our own since we need to open source it and the code is simple enough.

Implementation.

Definition at line 688 of file util.h.
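
A few illustrative calls, following the saturation and NaN conventions described above:

  SafeDoubleToInt64(1e300);   // Capped at std::numeric_limits<int64_t>::max().
  SafeDoubleToInt64(-1e300);  // Capped at std::numeric_limits<int64_t>::min().
  SafeDoubleToInt64(std::numeric_limits<double>::quiet_NaN());  // 0 by convention.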

◆ SatSolverHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::SatSolverHeuristic ( Model * model)

Returns the BooleanOrIntegerLiteral advised by the underlying SAT solver.

Definition at line 406 of file integer_search.cc.

◆ SatSolverRestartPolicy()

std::function< bool()> operations_research::sat::SatSolverRestartPolicy ( Model * model)

A restart policy that uses the underlying sat solver's policy.

Definition at line 1171 of file integer_search.cc.

◆ SatStatusString()

std::string operations_research::sat::SatStatusString ( SatSolver::Status status)

Returns a string representation of a SatSolver::Status.

Fallback. We don't use "default:" so that the compiler will emit an error if we forget one of the enum cases above.

Definition at line 2764 of file sat_solver.cc.

◆ ScalarProduct()

double operations_research::sat::ScalarProduct ( const LinearConstraint & constraint1,
const LinearConstraint & constraint2 )

Returns the scalar product of given constraint coefficients. This method assumes that the constraint variables are in sorted order.

Definition at line 217 of file linear_constraint.cc.

◆ ScaleAndSetObjective()

bool operations_research::sat::ScaleAndSetObjective ( const SatParameters & params,
const std::vector< std::pair< int, double > > & objective,
double objective_offset,
bool maximize,
CpModelProto * cp_model,
SolverLogger * logger )

Scales a double objective to its integer version and fills it in the proto. The variables listed in the objective must already be defined in the cp_model proto, as this uses the variable bounds to compute a proper scaling.

This uses params.mip_wanted_tolerance() and params.mip_max_activity_exponent() to compute the scaling. Note however that if the wanted tolerance is not satisfied, this still scales on a best-effort basis. The tolerance guaranteed by this automatic scaling is reported in the log.

This almost always returns true, except in really bad cases such as having infinity in the objective.

Make sure the objective is currently empty.

We filter constant terms and compute some needed quantities.

These are the parameters used for scaling the objective.

Display the objective error/scaling.

Note
here we set the scaling factor for the inverse operation of getting the "true" objective value from the scaled one. Hence the inverse.

Definition at line 1354 of file lp_utils.cc.

◆ ScaleContinuousVariables()

std::vector< double > operations_research::sat::ScaleContinuousVariables ( double scaling,
double max_bound,
MPModelProto * mp_model )

Multiplies all continuous variables by the given scaling parameter and changes the rest of the model accordingly. The returned vector contains the scaling of each variable (always 1.0 for integers) and can be used to recover a solution of the unscaled problem from one of the new scaled problems by dividing the variable values.

We usually scale a continuous variable by scaling, but if its domain would contain values larger than max_bound, then we scale so that the maximum domain magnitude equals max_bound.

Note
it is recommended to call DetectImpliedIntegers() before this function so that we do not scale variables that do not need to be scaled.
Todo
(user): Also scale the solution hint if any.

Definition at line 110 of file lp_utils.cc.

◆ ScaleInnerObjectiveValue()

int64_t operations_research::sat::ScaleInnerObjectiveValue ( const CpObjectiveProto & proto,
int64_t value )
inline

Similar to ScaleObjectiveValue() but uses the integer version.

Definition at line 172 of file cp_model_utils.h.

◆ ScaleObjectiveValue()

double operations_research::sat::ScaleObjectiveValue ( const CpObjectiveProto & proto,
int64_t value )
inline

Scales back an objective value to a double value from the original model.

Definition at line 159 of file cp_model_utils.h.

◆ ScanModelForDominanceDetection()

void operations_research::sat::ScanModelForDominanceDetection ( PresolveContext & context,
VarDomination * var_domination )

Detects the variable dominance relations within the given model. Note that to avoid doing too much work, we might miss some relations.

Ignore variables that have been substituted already or are unused.

Deal with the affine relations that are not part of the proto. Those only need to be processed in the first pass.

First scan: update the partition.

Todo
(user): Maybe we should avoid recomputing that here.

We cannot infer anything if we don't know the constraint.

Todo
(user): Handle enforcement better here.

The objective is handled like a <= constraint, or an == constraint if there is a non-trivial domain.

Important: We need to write the objective first to make sure it is up to date.

do nothing for now.

Now do two more scans.

  • phase_ = 0 initializes the candidate list, then EndFirstPhase();
  • phase_ = 1 filters it, then EndSecondPhase();

We process it like n clauses.

Todo
(user): the way we process that is a bit restrictive. By working on the implication graph we could detect more dominance relations, since if a => b we say that a++ can only be paired with b--, but it could actually be paired with any variable that, when decreased, implies b = 0. This is a bit mitigated by the fact that we regroup such implications, when we can, into big at-most-ones.

The objective is handled like a <= constraint, or an == constraint if there is a non-trivial domain.

Early abort if no possible relations can be found.

Todo
(user): We might be able to detect that nothing can be done earlier during the constraint scanning.

Some statistics.

Definition at line 1107 of file var_domination.cc.

◆ ScanModelForDualBoundStrengthening()

void operations_research::sat::ScanModelForDualBoundStrengthening ( const PresolveContext & context,
DualBoundStrengthening * dual_bound_strengthening )

Scans the model so that dual_bound_strengthening.Strengthen() works.

Ignore variables that have been substituted already or are unused.

Deal with the affine relations that are not part of the proto. Those only need to be processed in the first pass.

Todo
(user): Maybe we should avoid recomputing that here.

We cannot infer anything if we don't know the constraint.

Todo
(user): Handle enforcement better here.

The objective is handled like a <= constraint, or an == constraint if there is a non-trivial domain.

Warning
The proto objective might not be up to date, so we need to write it first.

Definition at line 1313 of file var_domination.cc.

◆ SchedulingSearchHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::SchedulingSearchHeuristic ( Model * model)

A simple heuristic for scheduling models.

Simple scheduling heuristic that looks at all the no-overlap constraints and tries to assign and perform the intervals that can be scheduled first.

To avoid scanning already fixed intervals, we use a simple reversible int.

Note(user): only the model is captured for no reason.

Variable to fix.

Information to select best.

We want to pack intervals to the left. If two have the same start_min, we prefer the one that will likely leave an easier problem for the other tasks.

Generating random noise can take time, so we use this function to delay it.

Save rev_fixed before we modify it.

Todo
(user): we should also precompute fixed precedences and only fix interval that have all their predecessors fixed.

For tasks whose presence is still unknown, our propagators should have propagated the minimum time as if they were present. So this should reflect the earliest time at which this interval can be scheduled.

Finish filling candidate.

For a variable size, we compute the minimum size once the start is fixed to time. This is needed so that we never pick, in priority, the "artificial" makespan interval at the end over intervals that still need to be scheduled.

Do not replace if we have a strict inequality now.

Setup rev_is_in_dive to be true on the next call only if there was no backtrack since the previous call.

Use the next_decision_override to fix in turn all the variables from the selected interval.

We have been trying to fix this interval for a while. Do we miss some propagation? In any case, try to see if the heuristic above would select something else.

First make sure the interval is present.

We assume that start_min is propagated by now.

We assume that end_min is propagated by now.

Everything is fixed, detach the override.

Definition at line 467 of file integer_search.cc.

◆ SeparateFlowInequalities()

void operations_research::sat::SeparateFlowInequalities ( int num_nodes,
absl::Span< const int > tails,
absl::Span< const int > heads,
absl::Span< const AffineExpression > arc_capacities,
std::function< void(const std::vector< bool > &in_subset, IntegerValue *min_incoming_flow, IntegerValue *min_outgoing_flow)> get_flows,
const util_intops::StrongVector< IntegerVariable, double > & lp_values,
LinearConstraintManager * manager,
Model * model )

This is really similar to SeparateSubtourInequalities, see the reference there.

We will collect only the arcs with a positive lp capacity value to speed up some computation below.

Often capacities have a coefficient > 1. We currently exploit this if all coefficients have a gcd > 1.

Sort the arcs by non-increasing lp_values.

Process each subset and add any violated cut.

Initialize "in_subset" and the subset demands.

We will sum the offset of all incoming/outgoing arc capacities.

Note
all arcs with a non-zero offset are part of relevant_arcs.

Compute the current flow in and out of the subset.

This can take a significant portion of the running time, which is why it is faster to do it only on the arcs with non-zero lp values: these should be linear in number, rather than the total number of arcs, which can be quadratic.

If the gcd is greater than one, because all variables are integer we can round the flow lower bound to the next multiple of the gcd.

Todo
(user): Alternatively, try MIR heuristics if the coefficients in the capacities are not all the same.

Sparse clean up.

Definition at line 794 of file routing_cuts.cc.

◆ SeparateSubtourInequalities()

void operations_research::sat::SeparateSubtourInequalities ( int num_nodes,
const std::vector< int > & tails,
const std::vector< int > & heads,
const std::vector< Literal > & literals,
absl::Span< const int64_t > demands,
int64_t capacity,
LinearConstraintManager * manager,
Model * model )

We roughly follow the algorithm described in section 6 of "The Traveling Salesman Problem, A Computational Study", David L. Applegate, Robert E. Bixby, Vasek Chvatal, William J. Cook.

Note
this is mainly a "symmetric" case algorithm, but it still works for the asymmetric case.

We will collect only the arcs with a positive lp_values to speed up some computation below.

Sort the arcs by non-increasing lp_values.

Add the depot so that we have a trivial bound on the number of vehicles.

Hack/optim: we exploit the tree structure of the subsets to not add a cut for a larger subset if we added a cut from one included in it.

Todo
(user): Currently, if we add too many not-so-relevant cuts, our generic MIP cut heuristics are way too slow on TSP/VRP problems.

Process each subset and add any violated cut.

If there were no cut added by the heuristic above, we try exact separation.

With n-1 max-flow computations from a source to all destinations, we can get the global min-cut. Here, we use a slightly more advanced algorithm that finds a min-cut for all possible pairs of nodes. This is achieved by computing a Gomory-Hu tree, still with n-1 max-flow calls.

Note(user): Compared to any min-cut, these cuts have some nice properties since they are "included" in each other. This might help when combining them within our generic IP cuts framework.

Todo
(user): I had an older version that tried the n-cuts generated during the course of the algorithm. This could also be interesting. But it is hard to tell with our current benchmark setup.

Try all interesting subset from the Gomory-Hu tree.

Exact separation of symmetric Blossom cut. We use the algorithm in the paper: "A Faster Exact Separation Algorithm for Blossom Inequalities", Adam N. Letchford, Gerhard Reinelt, Dirk Oliver Theis, 2004.

Note
the "relevant_arcs" were symmetrized above.

Definition at line 580 of file routing_cuts.cc.

◆ SequentialLoop()

void operations_research::sat::SequentialLoop ( std::vector< std::unique_ptr< SubSolver > > & subsolvers)

Same as above, but a specialized implementation for the case num_threads=1. This avoids using a ThreadPool altogether. It should have the same behavior as the functions above with num_threads=1 and batch_size=1. Note that a higher batch size will not behave in the same way, even if num_threads=1.

Definition at line 86 of file subsolver.cc.

◆ SequentialSearch()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::SequentialSearch ( std::vector< std::function< BooleanOrIntegerLiteral()> > heuristics)

Combines search heuristics in order: if the i-th one returns kNoLiteralIndex, ask the (i+1)-th. If every heuristic returned kNoLiteralIndex, returns kNoLiteralIndex.

Definition at line 290 of file integer_search.cc.
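
As an illustration, one could chain a problem-specific heuristic with the SAT solver default so that the latter only decides when the former has nothing left to fix (a sketch, assuming a Model* named model):

  std::function<BooleanOrIntegerLiteral()> heuristic = SequentialSearch(
      {SchedulingSearchHeuristic(model), SatSolverHeuristic(model)});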

◆ SequentialValueSelection()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::SequentialValueSelection ( std::vector< std::function< IntegerLiteral(IntegerVariable)> > value_selection_heuristics,
std::function< BooleanOrIntegerLiteral()> var_selection_heuristic,
Model * model )

Changes the value of the given decision by 'var_selection_heuristic'. We try to see if the decision is "associated" with an IntegerVariable, and if it is the case, we choose the new value by the first 'value_selection_heuristics' that is applicable. If none of the heuristics are applicable then the given decision by 'var_selection_heuristic' is returned.

Get the current decision.

When we are in the "stable" phase, we prefer to follow the SAT polarity heuristic.

IntegerLiteral case.

Boolean case. We try to decode the Boolean decision to see if it is associated with an integer variable.

Todo
(user): we will likely stop at the first non-fixed variable.

Sequentially try the value selection heuristics.

Definition at line 301 of file integer_search.cc.

◆ SetEnforcementLiteralToFalse()

void operations_research::sat::SetEnforcementLiteralToFalse ( const ConstraintProto & ct,
std::vector< Domain > * domains )

For now we set the first unset enforcement literal to false. There must be one.

Definition at line 92 of file cp_model_postsolve.cc.

◆ SetToNegatedLinearExpression()

void operations_research::sat::SetToNegatedLinearExpression ( const LinearExpressionProto & input_expr,
LinearExpressionProto * output_negated_expr )

Fills output_negated_expr with the negation of input_expr.

Definition at line 71 of file cp_model_utils.cc.

◆ SetupTextFormatPrinter()

void operations_research::sat::SetupTextFormatPrinter ( google::protobuf::TextFormat::Printer * printer)

We register a few custom printers to display variables and linear expressions on one line. This is especially nice for variables, where it is now easy to recover their indices from the line number.

ex:

variables { domain: [0, 1] } variables { domain: [0, 1] } variables { domain: [0, 1] }

constraints { linear { vars: [0, 1, 2] coeffs: [2, 4, 5 ] domain: [11, 11] } }

Definition at line 870 of file cp_model_utils.cc.
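
A possible use, assuming a CpModelProto named model_proto:

  google::protobuf::TextFormat::Printer printer;
  SetupTextFormatPrinter(&printer);
  std::string output;
  printer.PrintToString(model_proto, &output);  // One line per variable / linear constraint.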

◆ ShaveObjectiveLb()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::ShaveObjectiveLb ( Model * model)
Todo
(user): Do we need a mechanism to reduce the range of possible gaps when nothing gets proven? This could be a parameter or some adaptive code.

Definition at line 421 of file integer_search.cc.

◆ SimplifyCanonicalBooleanLinearConstraint()

void operations_research::sat::SimplifyCanonicalBooleanLinearConstraint ( std::vector< LiteralWithCoeff > * cst,
Coefficient * rhs )
Todo
(user): Use more complex simplification like dividing by the gcd of everyone and using less different coefficients if possible.

Given a Boolean linear constraint in canonical form, simplify its coefficients using simple heuristics.

Replace all coefficients >= rhs by rhs + 1 (these literals must actually be false). Note that the linear sum of literals remains canonical.

Todo
(user): It is probably better to remove these literals and have another constraint set them to false, from the symmetry finder perspective.

Definition at line 162 of file pb_constraint.cc.

◆ SimplifyClause()

bool operations_research::sat::SimplifyClause ( const std::vector< Literal > & a,
std::vector< Literal > * b,
LiteralIndex * opposite_literal,
int64_t * num_inspected_literals = nullptr )

Visible for testing. Returns true iff:

  • a subsumes b (subsumption): the clause a is a subset of b, in which case opposite_literal is set to -1.
  • b is strengthened by self-subsumption using a (self-subsuming resolution): the clause a, with one of its literals negated, is a subset of b, in which case opposite_literal is set to this negated literal index. Moreover, this opposite_literal is then removed from b.

If num_inspected_literals_ is not nullptr, the "complexity" of this function will be added to it in order to track the amount of work done.

Todo
(user): when a.size() << b.size(), we should use binary search instead of scanning b linearly.

Because we abort early when size_diff becomes negative, the second test in the while loop is not needed.

A literal of b is not in a; we can abort early by comparing the remaining sizes.

Definition at line 945 of file simplification.cc.

◆ Smallest1DIntersection()

IntegerValue operations_research::sat::Smallest1DIntersection ( IntegerValue range_min,
IntegerValue range_max,
IntegerValue size,
IntegerValue interval_min,
IntegerValue interval_max )

1D counterpart of RectangleInRange::GetMinimumIntersectionArea. Finds the minimum possible overlap between an interval of the given size that fits in [range_min, range_max] and a second interval [interval_min, interval_max].

If the item is on the left of the range, we get the intersection between [range_min, range_min + size] and [interval_min, interval_max].

If the item is on the right of the range, we get the intersection between [range_max - size, range_max] and [interval_min, interval_max].

Definition at line 759 of file diffn_util.cc.
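
Worked example: with range_min = 0, range_max = 6, size = 4 and interval [2, 5], placing the item as far left as possible gives [0, 4], which overlaps [2, 5] by 2, while placing it as far right as possible gives [2, 6], which overlaps by 3; the minimum possible intersection is therefore 2.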

◆ SolutionBooleanValue()

bool operations_research::sat::SolutionBooleanValue ( const CpSolverResponse & r,
BoolVar x )

Evaluates the value of a Boolean literal in a solver response.

Definition at line 1403 of file cp_model.cc.

◆ SolutionIntegerValue()

int64_t operations_research::sat::SolutionIntegerValue ( const CpSolverResponse & r,
const LinearExpr & expr )

Evaluates the value of a linear expression in a solver response.

Definition at line 1392 of file cp_model.cc.
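
Typical use together with SolutionBooleanValue() above, in a sketch where x and b were created with a CpModelBuilder named cp_model:

  const CpSolverResponse response = Solve(cp_model.Build());
  if (response.status() == CpSolverStatus::OPTIMAL ||
      response.status() == CpSolverStatus::FEASIBLE) {
    const int64_t x_value = SolutionIntegerValue(response, x);
    const bool b_value = SolutionBooleanValue(response, b);
  }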

◆ SolutionIsFeasible()

bool operations_research::sat::SolutionIsFeasible ( const CpModelProto & model,
absl::Span< const int64_t > variable_values,
const CpModelProto * mapping_proto = nullptr,
const std::vector< int > * postsolve_mapping = nullptr )

Verifies that the given variable assignment is a feasible solution of the given model. The values vector should be in one-to-one correspondence with the model.variables() list of variables.

The last two arguments are optional and help debugging a failing constraint due to presolve.

Check that all values fall in the variable domains.

Display a message to help debugging.

Check that the objective is within its domain.

Todo
(user): This is not really a "feasibility" question, but we should probably check that the response objective matches with the one we can compute here. This might better be done in another function though.

Definition at line 1691 of file cp_model_checker.cc.
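
A minimal check, assuming model_proto is a CpModelProto with three variables (the values below are placeholders):

  // One value per variable, in the same order as model_proto.variables().
  const std::vector<int64_t> values = {0, 1, 3};
  const bool feasible = SolutionIsFeasible(model_proto, values);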

◆ Solve()

CpSolverResponse operations_research::sat::Solve ( const CpModelProto & model_proto)

Solves the given CpModelProto and returns an instance of CpSolverResponse.

Definition at line 2576 of file cp_model_solver.cc.
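
A minimal end-to-end sketch using the CpModelBuilder wrapper (all names are illustrative):

  CpModelBuilder cp_model;
  const IntVar x = cp_model.NewIntVar(Domain(0, 10));
  const IntVar y = cp_model.NewIntVar(Domain(0, 10));
  cp_model.AddLessOrEqual(LinearExpr::Sum({x, y}), 12);
  cp_model.Maximize(LinearExpr::WeightedSum({x, y}, {1, 2}));

  const CpSolverResponse response = Solve(cp_model.Build());
  if (response.status() == CpSolverStatus::OPTIMAL) {
    LOG(INFO) << "x = " << SolutionIntegerValue(response, x)
              << ", y = " << SolutionIntegerValue(response, y);
  }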

◆ SolveCpModel()

CpSolverResponse operations_research::sat::SolveCpModel ( const CpModelProto & model_proto,
Model * model )

Solves the given CpModelProto.

This advanced API accepts a Model*, which allows accessing more advanced features by configuring some classes in the Model before the solve.

For instance:

  • model->Add(NewSatParameters(parameters_as_string_or_proto));
  • model->GetOrCreate<TimeLimit>()->RegisterExternalBooleanAsLimit(&stop);
  • model->Add(NewFeasibleSolutionObserver(observer));

Dump initial model?

Override parameters?

Enable the logging component.

Note
the postprocessors are executed in reverse order, so this will always dump the response just before it is returned since it is the first one we register.

Always display the final response stats if requested. This also copies the logs to the response if requested.

Always add the timing information to a response. Note that it is important to add this after the log/dump postprocessor since we execute them in reverse order.

Validate parameters.

Note
the few parameters we use before that are Booleans and thus "safe". We need to delay the validation to return a proper response.
Todo
(user): We currently reuse the MODEL_INVALID status even though it is not the best name for this. Maybe we can add a PARAMETERS_INVALID when it becomes needed. Or rename to INVALID_INPUT?

Initialize the time limit from the parameters.

Register SIGINT handler if requested by the parameters.

Internally we adapt the parameters so that things are disabled if they do not make sense.

Validate model_proto.

Todo
(user): provide an option to skip this step for speed?

Presolve and expansions.

Note
Allocating in an arena significantly speeds up destruction (free) for large messages.

Checks for hints early in case they are forced to be hard constraints.

If the hint is complete, we can use the solution checker to do more validation. Note that after the model has been validated, we are sure there are no duplicate variables in the solution hint, so we can just check the size.

If the objective was a floating point one, do some postprocessing on the final response.

Compute the true objective of the best returned solution.

Also copy the scaled objective, which must be in the mapping model. This can be useful for some clients, e.g. if they want to do multi-objective optimization in stages.

If requested, compute a correct lb from the one on the integer objective. We only do that if some error was introduced by the scaling algorithm.

To avoid small errors that can be confusing, we take the min/max with the objective value.

Check the absolute gap, and display warning if needed.

Todo
(user): Change status to IMPRECISE?

For the case where the assumptions are currently not supported, we just assume they are fixed, and will always report all of them in the UNSAT core if the problem turns out to be UNSAT.

If the mode is not degraded, we will hopefully report a small subset in case there is no feasible solution under these assumptions.

For now, just pass in all assumptions.

Clear them from the new proto.

Do the actual presolve.

Delete the context as soon as the presolve is done. Note that only postsolve_mapping and mapping_proto are needed for postsolve.

Todo
(user): reduce this function size and find a better place for this?

Collect the info we know about new_cp_model_proto bounds.

Note
this is not really needed as we should have the same information in the mapping_proto.

Intersect with the SharedBoundsManager if it exists.

Postsolve and fill the field.

Solution checking. We either check all solutions, or only the last one. Checking all solutions might be expensive if we create many.

We pass presolve data for more informative message in case the solution is not feasible.

Solution postsolving.

Map back the sufficient assumptions for infeasibility.

Truncate the solution in case model expansion added more variables.

Make sure everything stops when we have a first solution if requested.

If the model is convertible to a MIP, we dump it too.

Todo
(user): We could try to dump our linear relaxation too.

If the model is convertible to a pure SAT one, we dump it too.

If specified, we load the initial objective domain right away in the response manager. Note that the presolve will always fill it with the trivial min/max value if the user left it empty. This avoids displaying [-infinity, infinity] for the initial objective search space.

Start counting the primal integral from the current deterministic time and initial objective domain gap that we just filled.

Re-test a complete solution hint to see if it survived the presolve. If it is feasible, we load it right away.

Tricky: when we enumerate all solutions, we cannot properly exclude the current solution if we didn't find it via full propagation, so we don't load it in this case.

Todo
(user): Even for an optimization, if we load the solution right away, we might not have the same behavior, as the initial search that follows the hint will be infeasible, so the activities of the variables will be different.

To avoid duplicating code, the single-thread version reuses most of the multi-thread architecture.

Definition at line 2026 of file cp_model_solver.cc.
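
A sketch of this advanced API, configuring parameters and a solution observer on the Model before the solve (parameter values are arbitrary, and cp_model is assumed to be a CpModelBuilder):

  Model model;
  SatParameters parameters;
  parameters.set_num_workers(8);
  parameters.set_max_time_in_seconds(10.0);
  model.Add(NewSatParameters(parameters));
  model.Add(NewFeasibleSolutionObserver([](const CpSolverResponse& r) {
    LOG(INFO) << "Solution with objective " << r.objective_value();
  }));
  const CpSolverResponse response = SolveCpModel(cp_model.Build(), &model);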

◆ SolveDiophantine()

DiophantineSolution operations_research::sat::SolveDiophantine ( absl::Span< const int64_t > coeffs,
int64_t rhs,
absl::Span< const int64_t > var_lbs,
absl::Span< const int64_t > var_ubs )

x_i's satisfying sum(x_i * coeffs[pivots[i]]) = current_gcd.

Z-basis of sum(x_i * arg.coeffs(pivots[i])) = 0.

Solves current_gcd * u + coeff * v = new_gcd. Copy the coefficients as the function below modifies them.

To compute the domains, we use the triangular shape of the basis. The first one is special as it is controlled by two columns of the basis. Note that we don't try to compute exact domains, as we would need to multiply them, making the number of intervals explode. For i = 0, ..., replaced_variable_count - 1, we use the identities x[i] = special_solution[i]

  • sum(linear_basis[k][i]*y[k], max(1, i) <= k < vars.size), where y[k] is a newly created variable if 1 <= k < replaced_variable_count, and y[k] = x[pivots[k]] otherwise.
    Todo
    (user): look if there is a natural improvement.

Identities 0 and 1 both bound the first element of the basis.

Definition at line 119 of file diophantine.cc.

◆ SolveDiophantineEquationOfSizeTwo()

bool operations_research::sat::SolveDiophantineEquationOfSizeTwo ( int64_t & a,
int64_t & b,
int64_t & cte,
int64_t & x0,
int64_t & y0 )

Returns true if the equation a * X + b * Y = cte has some integer solutions. For now, we check that a and b are different from 0 and from int64_t min.

There is actually always a solution if cte % gcd(a, b) == 0. And because a, b and cte fit on an int64_t, if there is a solution, there is one with X and Y fitting on an int64_t.

We will divide everything by gcd(a, b) first, which is why we take references and the equation can change.

If there are solutions, we return one of them (x0, y0). From any such solution, the set of all solutions is given for Z integer by: X = x0 + b * Z; Y = y0 - a * Z;

Given a domain for X and Y, it is possible to compute the "exact" domain of Z with our Domain functions. Note however that this will only compute solution where both x-x0 and y-y0 do fit on an int64_t: DomainOf(x).SubtractionWith(x0).InverseMultiplicationBy(b).IntersectionWith( DomainOf(y).SubtractionWith(y0).InverseMultiplicationBy(-a))

The simple case where (0, 0) is a solution.

We solve a * X + b * Y = cte We take a valid x0 in [0, b) by considering the equation mod b.

We choose x0 of the same sign as cte.

By plugging X = x0 + b * Z We have a * (x0 + b * Z) + b * Y = cte so a * b * Z + b * Y = cte - a * x0; and y0 = (cte - a * x0) / b (with an exact division by construction).

Overflow-wise, there are two cases for cte > 0:

  • a * x0 <= cte, in this case y0 will not overflow (<= cte).
  • a * x0 > cte, in this case y0 will be in (-a, 0].

Definition at line 209 of file util.cc.
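
A small worked example: 4 * X + 6 * Y = 10 has gcd(4, 6) = 2, which divides 10, so solutions exist. A sketch of the call (the particular solution returned is implementation defined; the comments only illustrate the contract):

  int64_t a = 4, b = 6, cte = 10;
  int64_t x0 = 0, y0 = 0;
  if (SolveDiophantineEquationOfSizeTwo(a, b, cte, x0, y0)) {
    // The equation was divided by the gcd: a == 2, b == 3, cte == 5,
    // and a * x0 + b * y0 == cte holds.
    // All solutions are X = x0 + b * Z and Y = y0 - a * Z for integer Z.
  }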

◆ SolveFzWithCpModelProto()

void operations_research::sat::SolveFzWithCpModelProto ( const fz::Model & fz_model,
const fz::FlatzincSatParameters & p,
const std::string & sat_params,
SolverLogger * logger,
SolverLogger * solution_logger )

The translation is easy: we create one variable per flatzinc variable, plus possibly a number of constant variables that will be created lazily.

The CP-SAT solver checks that constraints cannot overflow during their propagation. Because of that, we trim undefined variable domains (i.e. int in minizinc) to something hopefully large enough.

Translate the constraints.

Fill the objective.

Fill the search order.

Enumerate all sat solutions.

Helps with challenge unit tests.

Computes the number of workers.

We don't support enumerating all solutions in parallel for a SAT problem. But note that we do support it for an optimization problem, since the meaning of p.all_solutions is not the same in this case.

Todo
(user): Support setting the number of workers to 0, which will then query the number of cores available. This is complex now as we still need to support the expected behavior (no flags -> 1 thread fixed search, -f -> 1 thread free search).

Specifies single-thread-specific search modes.

Time limit.

The order is important, we want the flag parameters to overwrite anything set in m.parameters.

We only need an observer if 'p.display_all_solutions' or 'p.search_all_solutions' are true.

Setup logging.

Note
we need to do that before we start calling the sat functions below that might create a SolverLogger() themselves.

Check the returned solution with the fz model checker.

Output the solution in the flatzinc official format.

Already printed otherwise.

Definition at line 1280 of file cp_model_fz_solver.cc.

◆ SolveIntegerProblemWithLazyEncoding()

SatSolver::Status operations_research::sat::SolveIntegerProblemWithLazyEncoding ( Model * model)

Only used in tests. Move to a test utility file.

This configures the model SearchHeuristics with a simple default heuristic and then call ResetAndSolveIntegerProblem() without any assumptions.

Definition at line 1584 of file integer_search.cc.

◆ SolveLoadedCpModel()

void operations_research::sat::SolveLoadedCpModel ( const CpModelProto & model_proto,
Model * model )

Solves an already loaded cp_model_proto. The final CpSolverResponse must be read from the shared_response_manager.

Todo
(user): This should be transformed so that it can be called many times and resume from the last search state as if it wasn't interrupted. That would allow us to easily interleave different heuristics in the same thread.

Make sure we are not at a positive level.

Reconfigure search heuristic if it was changed.

Extract a good subset of assumptions and add it to the response.

Optimization problem.

Todo
(user): This doesn't work with splitting in chunk for now. It shouldn't be too hard to fix.
Todo
(user): This parameter breaks the splitting in chunk of a Solve(). It should probably be moved into another SubSolver altogether.

The search is done in both case.

Todo
(user): Remove the weird translation INFEASIBLE->FEASIBLE in the function above?

Definition at line 1322 of file cp_model_solver_helpers.cc.

◆ SolveWithParameters() [1/2]

CpSolverResponse operations_research::sat::SolveWithParameters ( const CpModelProto & model_proto,
const SatParameters & params )

Solves the given CpModelProto with the given parameters.

Definition at line 2581 of file cp_model_solver.cc.
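
A short sketch, assuming cp_model is a CpModelBuilder (the time limit is arbitrary):

  SatParameters parameters;
  parameters.set_max_time_in_seconds(5.0);
  const CpSolverResponse response =
      SolveWithParameters(cp_model.Build(), parameters);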

◆ SolveWithParameters() [2/2]

CpSolverResponse operations_research::sat::SolveWithParameters ( const CpModelProto & model_proto,
const std::string & params )

Solves the given CpModelProto with the given sat parameters as string in JSon format, and returns an instance of CpSolverResponse.

Definition at line 2589 of file cp_model_solver.cc.

◆ SplitAndLoadIntermediateConstraints()

void operations_research::sat::SplitAndLoadIntermediateConstraints ( bool lb_required,
bool ub_required,
std::vector< IntegerVariable > * vars,
std::vector< int64_t > * coeffs,
Model * m )
Todo
(user): We could use a smarter way to determine buckets, like putting everyone with the same coefficient together if possible and the split is ok.

Part of LoadLinearConstraint() that we reuse to load the objective.

We split large constraints into a square root number of parts. This is to avoid a bad complexity while propagating them since our algorithm is not in O(num_changes).

Todo
(user): Alternatively, we could use an O(num_changes) propagation (a bit tricky to implement), or a decomposition into a tree with more than one level. Both require experimentation.

If we enumerate all solutions, then we want intermediate variables to be tight independently of what side is required.

Everything should be exactly divisible!

We have sum bucket_var >= lb, so we need local_vars >= bucket_var.

Similarly, bucket_var <= ub, so we need local_vars <= bucket_var

Definition at line 1149 of file cp_model_loader.cc.

◆ SplitAroundGivenValue()

IntegerLiteral operations_research::sat::SplitAroundGivenValue ( IntegerVariable var,
IntegerValue value,
Model * model )

This method first tries var <= value. If this does not reduce the domain it tries var >= value. If that also does not reduce the domain then returns an invalid literal.

Heuristic: Prefer the objective direction first. Reference: Conflict-Driven Heuristics for Mixed Integer Programming (2019) by Jakob Witzig and Ambros Gleixner.

Note
The value might be out of bounds. In that case we return kNoLiteralIndex.

Definition at line 87 of file integer_search.cc.

◆ SplitAroundLpValue()

IntegerLiteral operations_research::sat::SplitAroundLpValue ( IntegerVariable var,
Model * model )

Returns decision corresponding to var <= round(lp_value). If the variable does not appear in the LP, this method returns an invalid literal.

We only use this if the sub-lp has a solution, and depending on the value of exploit_all_lp_solution() if it is a pure-integer solution.

Todo
(user): Depending if we branch up or down, this might not exclude the LP value, which is potentially a bad thing.
Todo
(user): Why is the reduced cost doing things differently?

Because our lp solution might be from higher up in the tree, it is possible that value is now outside the domain of positive_var. In this case, this function will return an invalid literal.

Definition at line 115 of file integer_search.cc.

◆ SplitDomainUsingBestSolutionValue()

IntegerLiteral operations_research::sat::SplitDomainUsingBestSolutionValue ( IntegerVariable var,
Model * model )

Returns decision corresponding to var <= best_solution[var]. If no solution has been found, this method returns a literal with kNoIntegerVariable. This was suggested in paper: "Solution-Based Phase Saving for CP" (2018) by Emir Demirovic, Geoffrey Chu, and Peter J. Stuckey.

◆ SplitUsingBestSolutionValueInRepository()

IntegerLiteral operations_research::sat::SplitUsingBestSolutionValueInRepository ( IntegerVariable var,
const SharedSolutionRepository< int64_t > & solution_repo,
Model * model )

Definition at line 144 of file integer_search.cc.

◆ StoreAssignment()

void operations_research::sat::StoreAssignment ( const VariablesAssignment & assignment,
BooleanAssignment * output )

Store a variable assignment into the given BooleanAssignment proto.

Note
only the assigned variables are stored, so the assignment may be incomplete.

Definition at line 488 of file boolean_problem.cc.

◆ SubstituteVariable()

bool operations_research::sat::SubstituteVariable ( int var,
int64_t var_coeff_in_definition,
const ConstraintProto & definition,
ConstraintProto * ct )

Replaces the variable var in ct using the definition constraint. Currently the coefficient in the definition must be 1 or -1.

This might return false and NOT modify ConstraintProto in case of overflow or other issue with the substitution.

Get the coefficient of var in the constraint. We assume positive reference here (it should always be the case now). If we don't find var, we abort.

If var appears multiple times, we add all its coefficients.

Definition at line 233 of file presolve_util.cc.

◆ SUniv()

int operations_research::sat::SUniv ( int i)
inline

Returns the ith element of the strategy S^univ proposed by M. Luby et al. in Optimal Speedup of Las Vegas Algorithms, Information Processing Letters 1993. This is used to decide the number of conflicts allowed before the next restart. This method, used by most SAT solvers, is usually referenced as Luby. Returns 2^{k-1} when i == 2^k - 1 and SUniv(i - 2^{k-1} + 1) when 2^{k-1} <= i < 2^k - 1. The sequence is defined for i > 0 and starts with: {1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ...}

Definition at line 92 of file restart.h.
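
For instance, a Luby-style restart schedule with a base of 100 conflicts can be derived from this sequence (the multiplicative constant is a common convention, not something imposed by this function):

  // Conflict budgets of the first few restarts.
  for (int i = 1; i <= 7; ++i) {
    LOG(INFO) << 100 * SUniv(i);  // 100, 100, 200, 100, 100, 200, 400.
  }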

◆ SymmetrizeArcs()

void operations_research::sat::SymmetrizeArcs ( std::vector< ArcWithLpValue > * arcs)

Regroups and sums the lp values on duplicate arcs or reversed arcs (tail->head) and (head->tail). As a side effect, we will always have tail <= head.

Definition at line 549 of file routing_cuts.cc.

◆ ToDouble()

double operations_research::sat::ToDouble ( IntegerValue value)
inline

Definition at line 73 of file integer.h.

◆ ToIntegerValueVector()

std::vector< IntegerValue > operations_research::sat::ToIntegerValueVector ( const std::vector< int64_t > & input)
inline

Model based functions.

Definition at line 116 of file cp_constraints.h.

◆ TransformToGeneratorOfStabilizer()

void operations_research::sat::TransformToGeneratorOfStabilizer ( int to_stabilize,
std::vector< std::unique_ptr< SparsePermutation > > * generators )
inline

Given the generators for a permutation group of [0, n-1], updates them to a set of generators of the group stabilizing the given element.

Note
one can add symmetry breaking constraints by repeatedly doing: 1/ Call GetOrbits() using the current set of generators. 2/ Choose an element x0 in a large orbit (x0, .. xi ..) , and add x0 >= xi for all i. 3/ Update the set of generators to the one stabilizing x0.

This is more or less what is described in "Symmetry Breaking Inequalities from the Schreier-Sims Table", Domenico Salvagnin, https://link.springer.com/chapter/10.1007/978-3-319-93031-2_37

Todo
(user): Implement!

Definition at line 79 of file symmetry_util.h.

◆ TryToLinearizeConstraint()

void operations_research::sat::TryToLinearizeConstraint ( const CpModelProto & ,
const ConstraintProto & ct,
int linearization_level,
Model * model,
LinearRelaxation * relaxation,
ActivityBoundHelper * activity_helper )

Adds linearization of different types of constraints.

Add a static and a dynamic linear relaxation of the CP constraint to the set of linear constraints. The higher linearization_level is, the more types of constraints we encode. This method should be called only for linearization_level > 0. The static part is just called a relaxation and is computed at the root node of the search. The dynamic part is implemented through a set of linear cut generators that will be called throughout the search.

Todo

(user): In full generality, we could encode all the constraint as an LP.

(user): Add unit tests for this method.

(user): Remove and merge with model loading.

No relaxation, just a cut generator.

Add cut generators.

Todo
(user): Use the same pattern as the other 2 scheduling methods:
  • single function
  • generate helpers once

Adds an energetic relaxation (sum of areas fits in bounding box).

Adds a completion time cut generator and an energetic cut generator.

Definition at line 1317 of file linear_relaxation.cc.

◆ TryToReconcileEncodings()

std::vector< LiteralValueValue > operations_research::sat::TryToReconcileEncodings ( const AffineExpression & size2_affine,
const AffineExpression & affine,
absl::Span< const ValueLiteralPair > affine_var_encoding,
bool put_affine_left_in_result,
IntegerEncoder * integer_encoder )

If a variable has a domain of size 2, it is most likely reduced to an affine expression pointing to a variable with domain [0,1] or [-1,0]. If the original variable has been removed from the model, then there are no implied values from any exactly_one constraint to its domain. If we are lucky, one of the literals of the exactly_one constraint and its negation are used to encode the Boolean variable of the affine.

This may fail if exactly_one(l0, l1, l2, l3); l0 and l1 imply x = 0, l2 and l3 imply x = 1. In that case, one must look at the binary implications to find the missing link.

Todo
(user): Consider removing this once we are more complete in our implied bounds repository. Because if we can reconcile an encoding, then any of the literals in the at-most-one should imply a value on the Boolean view used in the size2 affine.
Todo
(user): I am not sure how this can happen since size2_affine is supposed to be non-fixed. Maybe we miss some propag. Investigate.

Build the decomposition.

Definition at line 257 of file implied_bounds.cc.

◆ TryToReconcileSize2Encodings()

std::vector< LiteralValueValue > operations_research::sat::TryToReconcileSize2Encodings ( const AffineExpression & left,
const AffineExpression & right,
IntegerEncoder * integer_encoder )

Specialized case of encoding reconciliation when both variables have a domain of size of 2.

Definition at line 301 of file implied_bounds.cc.

◆ UnassignedVarWithLowestMinAtItsMinHeuristic()

std::function< BooleanOrIntegerLiteral()> operations_research::sat::UnassignedVarWithLowestMinAtItsMinHeuristic ( const std::vector< IntegerVariable > & vars,
Model * model )

Decision heuristic for SolveIntegerProblemWithLazyEncoding(). Like FirstUnassignedVarAtItsMinHeuristic() but the function will return the literal corresponding to the fact that the currently non-assigned variable with the lowest min has a value <= this min.

Definition at line 271 of file integer_search.cc.

◆ UnscaleObjectiveValue()

double operations_research::sat::UnscaleObjectiveValue ( const CpObjectiveProto & proto,
double value )
inline

Removes the objective scaling and offset from the given value.

Definition at line 183 of file cp_model_utils.h.

◆ UpperBound()

std::function< int64_t(const Model &)> operations_research::sat::UpperBound ( IntegerVariable v)
inline

Definition at line 1961 of file integer.h.

◆ UsedIntervals()

std::vector< int > operations_research::sat::UsedIntervals ( const ConstraintProto & ct)

Returns the sorted list of intervals used by a constraint.

Definition at line 496 of file cp_model_utils.cc.

◆ UsedVariables()

std::vector< int > operations_research::sat::UsedVariables ( const ConstraintProto & ct)

Returns the sorted list of variables used by a constraint.

Note
this includes variables used as literals.

Definition at line 483 of file cp_model_utils.cc.

◆ UseObjectiveForSatAssignmentPreference()

void operations_research::sat::UseObjectiveForSatAssignmentPreference ( const LinearBooleanProblem & problem,
SatSolver * solver )

Uses the objective coefficients to drive the SAT search towards a heuristically better solution.

Because this is a minimization problem, we prefer to assign a Boolean variable to its "low" objective value. So if a literal has a positive weight when true, we want to set it to false.

Definition at line 320 of file boolean_problem.cc.

◆ ValidateBooleanProblem()

absl::Status operations_research::sat::ValidateBooleanProblem ( const LinearBooleanProblem & problem)

Tests the preconditions of the given problem (as described in the proto) and returns an error if they are not all satisfied.

Definition at line 144 of file boolean_problem.cc.

◆ ValidateCpModel()

std::string operations_research::sat::ValidateCpModel ( const CpModelProto & model,
bool after_presolve = false )

Verifies that the given model satisfies all the properties described in the proto comments. Returns an empty string if it is the case, otherwise fails at the first error and returns a human-readable description of the issue.

The extra parameter is internal and mainly for debugging. After the problem has been presolved, we have a stricter set of properties we want to enforce.

Todo
(user): Add any needed overflow validation because we are far from exhaustive. We could also run a small presolve that tightens variable bounds before the overflow check to facilitate the lives of our users, but it is some work to put in place.

We require this precondition so that we can take any linear combination of variable with coefficient in int64_t and compute the activity on an int128 with no overflow. This is useful during cut computation.

We need to validate the intervals used first, so we add these constraints here so that we can validate them in a second pass.

By default, a constraint does not support enforcement literals except if explicitly stated by setting this to true below.

Other non-generic validations.

Because some clients set fixed enforcement literals, which are supported in the presolve for all constraints, we just check that there is no non-fixed enforcement.

Extra validation for constraint using intervals.

If any of these fields are set, the domain must be set.

Check that we can transform any value in the objective domain without overflow. We only check the bounds which is enough.

Definition at line 927 of file cp_model_checker.cc.
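
Typical defensive use before calling the solver:

  const std::string error = ValidateCpModel(model_proto);
  if (!error.empty()) {
    LOG(ERROR) << "Invalid model: " << error;
    return;
  }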

◆ ValidateInputCpModel()

std::string operations_research::sat::ValidateInputCpModel ( const SatParameters & params,
const CpModelProto & model )

Some validation (in particular the floating point objective) requires to read parameters.

Todo
(user): Ideally we would have just one ValidateCpModel() function but this was introduced after many users already use ValidateCpModel() without parameters.

Definition at line 1130 of file cp_model_checker.cc.

◆ ValidateLinearConstraintForOverflow()

bool operations_research::sat::ValidateLinearConstraintForOverflow ( const LinearConstraint & constraint,
const IntegerTrail & integer_trail )
Todo
(user): Avoid duplication with PossibleIntegerOverflow() in the checker? At least make sure the code is the same.

Makes sure that any of our future computation on this constraint will not cause overflow. We use the level zero bounds and use the same definition as in PossibleIntegerOverflow() in the cp_model.proto checker.

Namely, the sum of positive terms, the sum of negative terms and their difference shouldn't overflow. Note that we don't validate the rhs, but if the bounds are properly relaxed, then this shouldn't cause any issues.

Note(user): We should avoid doing this test too often as it can be slow. At least do not do it more than once on each constraint.

Definition at line 397 of file linear_constraint.cc.

◆ ValidateParameters()

std::string operations_research::sat::ValidateParameters ( const SatParameters & params)

Verifies that the given parameters are correct. Returns an empty string if it is the case, or a human-readable error message otherwise.

Test that all floating point parameters are not NaN or +/- infinity.

Parallelism.

Todo
(user): Consider using annotations directly in the proto for these validation. It is however not open sourced.

Feasibility jump.

Violation ls.

Test LP tolerances.

Definition at line 56 of file parameters_validation.cc.

◆ Value() [1/3]

std::function< int64_t(const Model &)> operations_research::sat::Value ( BooleanVariable b)
inline

This checks that the variable is fixed.

Definition at line 1026 of file sat_solver.h.

◆ Value() [2/3]

std::function< int64_t(const Model &)> operations_research::sat::Value ( IntegerVariable v)
inline

This checks that the variable is fixed.

Definition at line 1975 of file integer.h.
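
These model-based accessors are meant to be used through Model::Get(); a sketch assuming a Model named model and an IntegerVariable v that is fixed in the current assignment:

  const int64_t value = model.Get(Value(v));       // Checks that v is fixed.
  const int64_t upper = model.Get(UpperBound(v));  // Current upper bound of v.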

◆ Value() [3/3]

std::function< int64_t(const Model &)> operations_research::sat::Value ( Literal l)
inline

This checks that the variable is fixed.

Definition at line 1017 of file sat_solver.h.

◆ VarDebugString()

std::string operations_research::sat::VarDebugString ( const CpModelProto & proto,
int index )
Todo
(user): unfortunately, we need this indirection to get a DebugString() in a const way from an index, because building an IntVar is non-const.

Returns a more readable and compact DebugString() than proto.variables(index).DebugString(). This is used by IntVar::DebugString() but also allows getting the same string from a const proto.

Special case for constant variables without names.

Todo
(user): Use domain pretty print function.

Definition at line 143 of file cp_model.cc.

◆ VariableIsPositive()

bool operations_research::sat::VariableIsPositive ( IntegerVariable i)
inline

Definition at line 189 of file integer.h.

◆ WeightedPick()

int operations_research::sat::WeightedPick ( absl::Span< const double > input,
absl::BitGenRef random )

This is equivalent to absl::discrete_distribution<std::size_t>(input.begin(), input.end())(random) but does no allocations. It is a lot faster when you need to pick just one element from a distribution, for instance.

Definition at line 382 of file util.cc.
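
A usage sketch (the weights are arbitrary):

  absl::BitGen random;
  const std::vector<double> weights = {0.1, 0.6, 0.3};
  const int index = WeightedPick(weights, random);  // Returns 1 with probability ~0.6.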

◆ WeightedSumGreaterOrEqual()

template<typename VectorInt >
std::function< void(Model *)> operations_research::sat::WeightedSumGreaterOrEqual ( const std::vector< IntegerVariable > & vars,
const VectorInt & coefficients,
int64_t lower_bound )
inline

Weighted sum >= constant.

We just negate everything and use an <= constraints.

Definition at line 447 of file integer_expr.h.

◆ WeightedSumLowerOrEqual()

template<typename VectorInt >
std::function< void(Model *)> operations_research::sat::WeightedSumLowerOrEqual ( const std::vector< IntegerVariable > & vars,
const VectorInt & coefficients,
int64_t upper_bound )
inline

Weighted sum <= constant.

Model based functions.

Definition at line 436 of file integer_expr.h.

◆ WriteModelProtoToFile()

template<class M >
bool operations_research::sat::WriteModelProtoToFile ( const M & proto,
absl::string_view filename )

Definition at line 290 of file cp_model_utils.h.

Variable Documentation

◆ b

for operations_research::sat::b = 0 if j > i+1

Definition at line 100 of file diophantine.h.

◆ i

for operations_research::sat::i = 0 ... k-2

Gives a parametric description of the solutions of the Diophantine equation with n variables: sum(coeffs[i] * x[i]) = rhs. var_lbs and var_ubs are bounds on desired values for variables x_i's.

It is known that, ignoring variable bounds, the set of solutions of such an equation is

  1. either empty if the gcd(coeffs[i]) does not divide rhs;
  2. or the sum of a special solution and an element of the kernel of the equation. In case 1, the function returns .has_solution = false. In case 2, if one coefficient is equal to the GCD of all (in absolute value), it returns .no_reformulation_needed = true. Otherwise, it behaves as follows:

The kernel of the equation has dimension n-1.

We assume we permute the variables by index_permutation, such that the first k terms have a gcd equal to the gcd of all coefficients (it is possible to do this with k <= 15). Under this assumption, we can find:

  • a special solution that is entirely supported by the k first variables;
  • a basis {b[0], b[1], ..., b[n-2]} of the kernel such that:

Definition at line 100 of file diophantine.h.

◆ kAffineRelationConstraint

int operations_research::sat::kAffineRelationConstraint = -2
constexpr

Definition at line 49 of file presolve_context.h.

◆ kAssumptionsConstraint

int operations_research::sat::kAssumptionsConstraint = -3
constexpr

Definition at line 50 of file presolve_context.h.

◆ kDefaultFingerprintSeed

uint64_t operations_research::sat::kDefaultFingerprintSeed = 0xa5b85c5e198ed849
constexpr

Default seed for fingerprints.

Definition at line 245 of file cp_model_utils.h.

◆ kMaxProblemSize

int operations_research::sat::kMaxProblemSize = 16
staticconstexpr

Definition at line 35 of file 2d_packing_brute_force.cc.

◆ kObjectiveConstraint

int operations_research::sat::kObjectiveConstraint = -1
constexpr

We use some special constraint index in our variable <-> constraint graph.

Definition at line 48 of file presolve_context.h.

◆ kTableAnyValue

int64_t operations_research::sat::kTableAnyValue = std::numeric_limits<int64_t>::min()
constexpr

This method tries to compress a list of tuples by merging complementary tuples, that is, a set of tuples that only differ on one variable and that cover the domain of this variable. In that case, it keeps only one tuple and replaces the value for that variable by any_value, the equivalent of '*' in regexps.

This method is exposed for testing purposes.

Definition at line 601 of file util.h.

◆ kUnsatTrailIndex

const int operations_research::sat::kUnsatTrailIndex = -1

A constant used by the EnqueueDecision*() API.

Definition at line 58 of file sat_solver.h.