Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
operations_research::sat::LbTreeSearch Class Reference

#include <lb_tree_search.h>

Public Member Functions

 LbTreeSearch (Model *model)
 
SatSolver::Status Search (const std::function< void()> &feasible_solution_observer)
 Explores the search space.
 

Detailed Description

Implement a "classic" MIP tree search by having an exhaustive list of open nodes.

The goal of this subsolver is to improve the objective lower bound. It is meant to be used in a multi-thread portfolio, and as such it really does not care about finding solutions; it is all about improving the lower bound.
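For orientation, here is a minimal sketch of how this class might be driven; the real portfolio wiring is more involved, and the helper function below is hypothetical. Only the two documented entry points are used: the constructor and Search().

#include <functional>

#include "ortools/sat/lb_tree_search.h"
#include "ortools/sat/model.h"
#include "ortools/sat/sat_solver.h"

namespace operations_research::sat {

// Hypothetical driver: the model must already define an objective, which
// the constructor requires (see below).
SatSolver::Status ImproveLowerBound(Model* model) {
  LbTreeSearch lb_tree_search(model);
  // The observer fires on feasible solutions; since this subsolver focuses
  // on the lower bound, it will rarely be called.
  return lb_tree_search.Search([]() { /* Record the solution. */ });
}

}  // namespace operations_research::sat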

Todo
(user): What this is doing is really similar to asking a SAT solver whether the current objective lower bound is reachable by solving a SAT problem. However, this code handles on the side all the "conflicts" of the form objective > current_lb. As a result, when the problem is UNSAT, we can bump the lower bound by more than one. We also do not completely lose everything learned so far for the next iteration.

Definition at line 59 of file lb_tree_search.h.

Constructor & Destructor Documentation

◆ LbTreeSearch()

operations_research::sat::LbTreeSearch::LbTreeSearch ( Model * model)
explicit

We should create this class only in the presence of an objective.

Todo
(user): Start with an initial variable score for all variables in the objective at their minimum value? This should emulate the first step of the core approach and give a similar bound.

Identify an LP with the same objective variable.

Todo
(user): if we have many independent LPs, this will find nothing.

We use the normal SAT search, but we bump the variable activities slightly differently. In addition to conflicts, we also bump them each time the objective lower bound increases in a sub-node.
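As a hedged illustration of that extra bump (the activity table and names below are purely hypothetical stand-ins for the solver's internal bookkeeping):

#include <map>
#include <vector>

// Stand-in variable type for illustration only.
using BooleanVariable = int;

// Besides the usual conflict-driven bumps, bump every variable appearing
// in the reason whenever the objective lower bound increases in a sub-node.
void BumpOnObjectiveLbIncrease(const std::vector<BooleanVariable>& reason_vars,
                               std::map<BooleanVariable, double>& activity,
                               double bump) {
  for (const BooleanVariable var : reason_vars) activity[var] += bump;
}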

Definition at line 50 of file lb_tree_search.cc.

Member Function Documentation

◆ Search()

SatSolver::Status operations_research::sat::LbTreeSearch::Search ( const std::function< void()> & feasible_solution_observer)

Explores the search space.

We currently restart the search tree from scratch from time to time (a sketch of the policy follows the lists below):

  • Initially, every kNumDecisionsBeforeInitialRestarts decisions, for at most kMaxNumInitialRestarts times.
  • Every time we backtrack to level zero, we count how many nodes are worse than the best known objective lower bound. If this is true for more than half of the existing nodes, we restart and clear all nodes. If this happens during the initial restarts phase, it resets the above counter and uses one of the available initial restarts.

This has two advantages:

  • It allows our "pseudo-cost" to kick in, and experimentally it results in smaller trees down the road.
  • It removes large inefficient search trees.
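A hedged pseudocode rendering of the restart policy described above (only the constant names come from the text; their values here are assumptions):

#include <cstdint>

constexpr int64_t kNumDecisionsBeforeInitialRestarts = 1000;  // Assumed value.
constexpr int kMaxNumInitialRestarts = 10;                    // Assumed value.

bool ShouldRestart(int64_t decisions_since_last_restart,
                   int num_initial_restarts_used,
                   int num_open_nodes,
                   int num_nodes_worse_than_best_lb) {
  // Initial phase: restart periodically, a bounded number of times.
  if (num_initial_restarts_used < kMaxNumInitialRestarts &&
      decisions_since_last_restart >= kNumDecisionsBeforeInitialRestarts) {
    return true;
  }
  // At level zero: restart if more than half of the existing nodes are
  // worse than the best known objective lower bound.
  return 2 * num_nodes_worse_than_best_lb > num_open_nodes;
}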
Todo
(user): a strong-branching initial start, or allowing a few decisions per node, might be a better approach.
Todo
(user): It would also be cool to exploit the reason for the LB increase even more.

If some branches already have a good lower bound, no need to call the LP on those.

Each time we are back here, we bump the activities of the variables that are part of the objective lower bound reason.

Note
this is why we prefer not to increase the level-zero lower bound of objective_var_ with the tree root lower bound, so we can exploit more reasons.
Todo
(user): This is slightly different from bumping each time we push a decision that results in an LB increase. This is also called on backjump, for instance.
Todo
(user): We also need to update pseudo cost on conflict.

Propagate upward in the tree the new objective lb.

Our branch is always greater than or equal to the level. We increase the objective_lb of the current node if needed.
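Schematically, with a hypothetical Node struct standing in for the real tree representation (the actual bookkeeping is richer), the upward pass could look like:

#include <algorithm>
#include <cstdint>
#include <vector>

struct Node {
  int64_t objective_lb = 0;     // Best known lb at this node.
  int64_t true_branch_lb = 0;   // Lb of the "literal true" child.
  int64_t false_branch_lb = 0;  // Lb of the "literal false" child.
};

// Push a new objective lb from the leaf of the current branch toward the
// root: a node's bound is at least the min of its two branch bounds, and
// it feeds the branch slot of its parent.
void PropagateObjectiveLbUpward(std::vector<Node>& branch,
                                const std::vector<bool>& took_true_branch) {
  for (int i = static_cast<int>(branch.size()) - 1; i > 0; --i) {
    Node& node = branch[i];
    node.objective_lb =
        std::max(node.objective_lb,
                 std::min(node.true_branch_lb, node.false_branch_lb));
    int64_t& parent_slot = took_true_branch[i - 1]
                               ? branch[i - 1].true_branch_lb
                               : branch[i - 1].false_branch_lb;
    parent_slot = std::max(parent_slot, node.objective_lb);
  }
  if (!branch.empty()) {
    Node& root = branch[0];
    root.objective_lb =
        std::max(root.objective_lb,
                 std::min(root.true_branch_lb, root.false_branch_lb));
  }
}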

Minor optimization: sometimes, because of the LP and cuts, the reason for objective_var_ only contains lower-level literals, so we can exploit that.

Todo
(user): No point checking that if the objective lb wasn't assigned at this level.
Todo
(user): Exploit the reasons further.

Propagate upward any new bounds.

We disable automatic LP propagation and only enable it (see the sketch below):

  • at root node
  • when we go to a new branch.
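Schematically (only the two cases above come from the source; the EnablePropagation() switch on the LP constraint is an assumption here):

#include "ortools/sat/linear_programming_constraint.h"

namespace operations_research::sat {

// Keep LP propagation off by default; turn it on only at the root or when
// entering a branch for the first time.
void ConfigureLpPropagation(LinearProgrammingConstraint* lp, int level,
                            bool entering_new_branch) {
  lp->EnablePropagation(level == 0 || entering_new_branch);
}

}  // namespace operations_research::sat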

This will import other workers' bounds if we are back at level zero. It might also decide to restart.

This is the current bound we try to improve. We cache it here to avoid grabbing the lock many times; it is also easier to follow the code if this is assumed constant for one iteration.

If the root lb increased, update the global shared objective lb.

Forget the whole tree and restart. We do this periodically at the beginning of the search, each time we cross kNumDecisionsBeforeInitialRestarts decisions since the last restart. This happens at most kMaxNumInitialRestarts times.

Periodic backtrack to level zero so we can import bounds.

Backtrack if needed.

Our algorithm stops exploring a branch as soon as its objective lower bound is greater than the root lower bound. We then backtrack to the first node in the branch that is not yet closed under this bound.
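A hedged sketch of this rule, viewing the current branch as a per-level list of node bounds (a hypothetical representation):

#include <cstdint>
#include <vector>

// Returns the target decision level: the branch is kept up to the first
// node whose bound already exceeds the root lower bound (a "closed" node),
// and that node's decision is undone. If nothing is closed, stay at the leaf.
int BacktrackLevel(const std::vector<int64_t>& node_lbs, int64_t root_lb) {
  for (int level = 0; level < static_cast<int>(node_lbs.size()); ++level) {
    if (node_lbs[level] > root_lb) return level;  // First closed node.
  }
  return static_cast<int>(node_lbs.size());  // Nothing closed.
}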

Todo
(user): If we remembered how far we can backjump for both the true and false branches, we could be more efficient.

Backtrack the solver to be in sync with current_branch_.

Dive: follow the branch with the lowest objective.

Note
we do not create new nodes here.
Todo
(user): If we have new information and our current objective bound is higher than any bound in a whole subtree, we might want to just restart this subtree exploration?

Invariants are tricky: current_branch_ contains one entry per decision taken, plus possibly a last entry for the decision we are about to take. If we don't have that last entry, it means we are about to take a new decision.

If the bound of this node is high, restart the main loop.

This will be set to the next node index.

If the variable is already fixed, we bypass the node and connect its parent directly to the relevant child.

We jump directly to the subnode. Otherwise, we will change the root.

This is probably not needed.

We never explored the other branch, so we can just clear all nodes.

Keep the saved basis.

We always make sure the root is at zero. If the root is no longer at zero, that might cause issues, so clean up.

See if we have better bounds using the current LP state.

If both lower bounds are the same, we pick the literal branch. We do that because this is the polarity that was chosen by the SAT heuristic in the first place. We tried random; it doesn't seem to work as well.
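In terms of the hypothetical Node struct from the earlier sketch, the dive's branch choice then reduces to:

// Follow the branch with the lowest bound; on ties, prefer the literal
// (true) branch, the polarity the SAT heuristic picked in the first place.
bool TakeTrueBranch(const Node& node) {
  return node.true_branch_lb <= node.false_branch_lb;
}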

If we are taking this branch for the first time, we enable the LP and make sure we solve it before taking the decision. This allows us to have proper pseudo-costs, and also to be incremental for the decision we are about to take.

We also enable the LP if we have no basis info for this node.

The decision might have become assigned, in which case we loop.

If we are not at the end, disable the LP propagation.

Take the decision.

Conflict?

Update the proper field and abort the dive if we crossed the threshold.

If a conflict occurred, we will backtrack.

Todo
(user): The code is hard to follow. Fix and merge it with the test below.

This test allows us to not take a decision when the branch is already closed (i.e., the true-branch or false-branch lb is high enough). Adding it basically changes whether we take the decision later, when we explore the branch, or right now.

I feel taking it later is better. It also avoids creating unneeded nodes. It does change the behavior on a few problems though. For instance, on irp.mps.gz, the search works better without this, whatever the random seed. Not sure why; maybe it creates more diversity?

Another difference is that if the search is done and we have a feasible solution, we will not report it because of this test (except if we are at the optimum).

This reuses or creates a node to store the basis.

Loop on backtrack or bound improvement.

Invariant: the current branch is fully assigned, the solver is in sync, and we are not on a "bad" path.

We are about to take a new decision: we will dive until the objective lower bound increases, and then create a bunch of new nodes in the tree.

By analyzing the reason for the increase, we can create fewer nodes than if we just followed the initial heuristic.
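A hedged sketch of that node-creation step (the types and callbacks are stand-ins for the real machinery):

#include <functional>
#include <vector>

// Stand-in literal type for illustration only.
using LiteralIndex = int;

// Keep only the dive decisions that appear in the reason for the objective
// lower-bound increase: one new node per kept decision, usually far fewer
// than the number of decisions the dive actually took.
void AppendNodesFromDive(
    const std::vector<LiteralIndex>& dive_decisions,
    const std::function<bool(LiteralIndex)>& in_lb_reason,
    const std::function<void(LiteralIndex)>& append_node_for_decision) {
  for (const LiteralIndex decision : dive_decisions) {
    if (in_lb_reason(decision)) append_node_for_decision(decision);
  }
}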

Todo
(user): In multithread, this changes the behavior a lot, since we dive until we beat the best shared bound. Maybe we shouldn't do that.
Todo
(user): We sometimes branch on the objective variable; this should probably be avoided.

No new decision: search done.

Todo
(user): it would be nice to mark some nodes as infeasible if this is the case. However, this could happen after many decisions, and we might only realize with the LP that one of them should have been fixed earlier, without any infeasibility in the current branch.

Analyze the reason for the objective increase. Deduce a set of new nodes to append to the tree.

Todo
(user): Try to minimize the number of decisions?

Bump activities.

Create one node per new decision.

Todo
(user): We should probably save the basis in more cases.

Update the objective of the last node in the branch, since we just improved it.

Reset the solver to a correct state since we have a subset of the current propagation. We backtrack as little as possible.

The decision level is the number of decisions taken. Decision()[level] is the decision at that level.

Update bounds with reduced costs info.

Todo
(user): Use the old optimal constraint that we just potentially backtracked over?
Todo
(user): We could do it all at once rather than in O(#decisions * size).

Definition at line 410 of file lb_tree_search.cc.


The documentation for this class was generated from the following files:

  • lb_tree_search.h
  • lb_tree_search.cc