Google OR-Tools v9.11: a fast and portable software suite for combinatorial optimization

operations_research::sat::LbTreeSearch Class Reference
#include <lb_tree_search.h>
Public Member Functions

LbTreeSearch(Model* model)

SatSolver::Status Search(const std::function<void()>& feasible_solution_observer)
    Explores the search space.
Implements a "classic" MIP tree search by keeping an exhaustive list of open nodes.
The goal of this subsolver is to improve the objective lower bound. It is meant to be used in a multi-threaded portfolio, and as such it really does not care about finding solutions; it is all about improving the lower bound.
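To make the idea concrete, here is a minimal sketch of the open-node bookkeeping such a search maintains; every name and type below is an illustrative assumption, not the OR-Tools API.

    #include <algorithm>
    #include <cstdint>
    #include <limits>
    #include <vector>

    // Hypothetical node: one objective bound per branch of the decision
    // taken at this node.
    struct Node {
      int64_t true_lb = 0;   // Lower bound if the decision literal is taken.
      int64_t false_lb = 0;  // Lower bound if its negation is taken.
      int64_t MinLb() const { return std::min(true_lb, false_lb); }
    };

    // The bound proved by the tree is the minimum over all open leaves:
    // closing or tightening any leaf can only raise it.
    int64_t ProvedLowerBound(const std::vector<Node>& nodes,
                             const std::vector<int>& open_leaves) {
      int64_t lb = std::numeric_limits<int64_t>::max();
      for (const int leaf : open_leaves) {
        lb = std::min(lb, nodes[leaf].MinLb());
      }
      return lb;
    }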
Definition at line 59 of file lb_tree_search.h.
operations_research::sat::LbTreeSearch::LbTreeSearch(Model* model) [explicit]
We should create this class only in the presence of an objective.
Identify an LP with the same objective variable.
We use the normal SAT search, but we bump the variable activities slightly differently: in addition to the conflicts, we also bump them each time the objective lower bound increases in a sub-node.
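As a sketch, assuming a simple VSIDS-style activity table (the names below are hypothetical, not the solver's internals):

    #include <vector>

    // In addition to the usual conflict-driven bump, bump every variable
    // appearing in the reason for an objective lower-bound increase.
    void BumpObjectiveReasonActivities(const std::vector<int>& reason_vars,
                                       double increment,
                                       std::vector<double>* activities) {
      for (const int var : reason_vars) {
        (*activities)[var] += increment;
      }
    }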
Definition at line 50 of file lb_tree_search.cc.
SatSolver::Status operations_research::sat::LbTreeSearch::Search(const std::function<void()>& feasible_solution_observer)
Explores the search space.
We currently restart the search tree from scratch from time to time. This has two advantages:
If some branches already have a good lower bound, there is no need to call the LP on those.
Each time we are back here, we bump the activities of the variables that are part of the objective lower-bound reason.
Propagate upward in the tree the new objective lb.
Our branch always has size greater than or equal to the level. We increase the objective_lb of the current node if needed.
Minor optimization: sometimes, because of the LP and cuts, the reason for objective_var_ only contains lower-level literals, so we can exploit that.
Propagate upward any new bounds.
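A sketch of this upward pass, reusing the hypothetical Node from the sketch above: each ancestor's branch bound is lifted to the min-bound of its child, and the pass stops as soon as nothing improves.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Node {
      int64_t true_lb = 0;
      int64_t false_lb = 0;
      int64_t MinLb() const { return std::min(true_lb, false_lb); }
    };

    // branch[i] is the node at depth i; took_true[i] is the branch taken there.
    void PropagateUpward(const std::vector<int>& branch,
                         const std::vector<bool>& took_true,
                         std::vector<Node>* nodes) {
      for (int i = static_cast<int>(branch.size()) - 2; i >= 0; --i) {
        const int64_t child_lb = (*nodes)[branch[i + 1]].MinLb();
        Node& parent = (*nodes)[branch[i]];
        int64_t& side = took_true[i] ? parent.true_lb : parent.false_lb;
        if (child_lb <= side) break;  // No improvement; ancestors unchanged.
        side = child_lb;
      }
    }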
We disable automatic LP propagation and only enable it in the specific situations described below.
This will import other workers' bounds if we are back at level zero. It might also decide to restart.
This is the current bound we try to improve. We cache it here to avoid taking the lock many times; it is also easier to follow the code if this is assumed constant for one iteration.
If the root lb increased, update the global shared objective lb.
Forget the whole tree and restart. We will do this periodically at the beginning of the search, each time we cross kNumDecisionsBeforeInitialRestarts decisions since the last restart. This will happen at most kMaxNumInitialRestarts times.
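A sketch of such a schedule; only the two constants are named in the text above, and the counters are illustrative.

    #include <cstdint>

    // kNumDecisionsBeforeInitialRestarts and kMaxNumInitialRestarts are the
    // constants named in the text; the counters here are hypothetical.
    bool ShouldDoInitialRestart(int64_t decisions_since_last_restart,
                                int num_initial_restarts_done,
                                int64_t num_decisions_before_restart,
                                int max_num_initial_restarts) {
      return num_initial_restarts_done < max_num_initial_restarts &&
             decisions_since_last_restart >= num_decisions_before_restart;
    }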
Periodic backtrack to level zero so we can import bounds.
Backtrack if needed.
Our algorithm stops exploring a branch as soon as its objective lower bound is greater than the root lower bound. We then backtrack to the first node in the branch that is not yet closed under this bound.
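Equivalently, walking down from the root we keep nodes while they are still open and cut the branch at the first closed one. A sketch, assuming each branch entry carries its node's min-bound:

    #include <cstdint>
    #include <vector>

    // Returns the depth to backtrack to: the first node of the branch whose
    // bound already exceeds the root lower bound (i.e. the node is closed).
    int BacktrackDepth(const std::vector<int64_t>& branch_min_lbs,
                       int64_t root_lb) {
      int depth = 0;
      while (depth < static_cast<int>(branch_min_lbs.size()) &&
             branch_min_lbs[depth] <= root_lb) {
        ++depth;
      }
      return depth;
    }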
Backtrack the solver to be in sync with current_branch_.
Dive: follow the branch with the lowest objective.
The invariants are tricky: current_branch_ contains one entry per decision taken, plus a last entry for the decision we are about to take. If we do not have that last entry, it means we are about to take a new decision.
If the bound of this node is high, restart the main loop.
This will be set to the next node index.
If the variable is already fixed, we bypass the node and connect its parent directly to the relevant child.
We jump directly to the subnode. Otherwise, we will change the root.
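A sketch of that splice, with hypothetical child pointers:

    #include <vector>

    struct Node {
      int true_child = -1;   // -1 means this child does not exist yet.
      int false_child = -1;
    };

    // Connects `parent` directly to the child of `node` selected by the
    // fixed value, making the bypassed node unreachable.
    void BypassFixedNode(int parent, bool parent_took_true, int node,
                         bool fixed_value, std::vector<Node>* nodes) {
      const int kept_child = fixed_value ? (*nodes)[node].true_child
                                         : (*nodes)[node].false_child;
      int& slot = parent_took_true ? (*nodes)[parent].true_child
                                   : (*nodes)[parent].false_child;
      slot = kept_child;
    }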
This is probably not needed.
We never explored the other branch, so we can just clear all nodes.
Keep the saved basis.
We always make sure the root is at zero. If the root is no longer at zero, that might cause issues; clean up.
See if we have better bounds using the current LP state.
If both lower bounds are the same, we pick the literal branch. We do that because this is the polarity that was chosen by the SAT heuristic in the first place. We tried random; it does not seem to work as well.
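The selection rule is small enough to sketch directly (hypothetical fields):

    #include <cstdint>

    struct Node {
      int64_t true_lb = 0;
      int64_t false_lb = 0;
    };

    // Follow the branch with the smaller bound; on a tie, keep the literal
    // (true) branch, i.e. the polarity chosen by the SAT heuristic.
    bool TakeTrueBranch(const Node& n) { return n.true_lb <= n.false_lb; }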
If we are taking this branch for the first time, we enable the LP and make sure we solve it before taking the decision. This allows us to have proper pseudo-costs, and also to be incremental for the decision we are about to take.
We also enable the LP if we have no basis info for this node.
The decision might have become assigned, in which case we loop.
If we are not at the end, disable the LP propagation.
Take the decision.
Conflict?
Update the proper field and abort the dive if we crossed the threshold.
If a conflict occurred, we will backtrack.
This test allows us to not take a decision when the branch is already closed (i.e., the true-branch or false-branch lb is high enough). Adding it basically changes whether we take the decision later, when we explore the branch, or right now.
I feel taking it later is better. It also avoids creating unneeded nodes. It does change the behavior on a few problems, though. For instance, on irp.mps.gz the search works better without this, whatever the random seed. Not sure why; maybe it creates more diversity?
Another difference is that if the search is done and we have a feasible solution, we will not report it because of this test (except if we are at the optimum).
This reuses or creates a node to store the basis.
Loop on backtrack or bound improvement.
Invariant: The current branch is fully assigned, and the solver is in sync. And we are not on a "bad" path.
We are about to take a new decision; what we will do is dive until the objective lower bound increases. We will then create a bunch of new nodes in the tree.
By analyzing the reason for the increase, we can create fewer nodes than if we just followed the initial heuristic.
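For instance, if the dive took ten decisions but only three of them appear in the reason for the bound increase, only those three need to become nodes. A sketch, assuming the reason is available as a set of decision literals (hypothetical representation):

    #include <unordered_set>
    #include <vector>

    // Keep, in branch order, only the dive decisions that appear in the
    // reason for the objective lower-bound increase.
    std::vector<int> DecisionsToTurnIntoNodes(
        const std::vector<int>& dive_decisions,
        const std::unordered_set<int>& reason_literals) {
      std::vector<int> kept;
      for (const int literal : dive_decisions) {
        if (reason_literals.count(literal) > 0) kept.push_back(literal);
      }
      return kept;
    }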
No new decision: search done.
Analyze the reason for the objective increase and deduce a set of new nodes to append to the tree.
Bump activities.
Create one node per new decisions.
Update the objective of the last node in the branch since we just improved that.
Reset the solver to a correct state since we have a subset of the current propagation. We backtrack as little as possible.
The decision level is the number of decisions taken. Decisions()[level] is the decision at that level.
Update bounds with reduced costs info.
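The underlying idea is standard reduced-cost tightening; a sketch for a minimization problem, with all names hypothetical:

    #include <limits>

    // For a non-basic variable at its lower bound with reduced cost rc > 0,
    // raising it by delta raises the LP objective by rc * delta. Any increase
    // beyond (best_ub - lp_obj) / rc cannot lead to a better solution, so the
    // variable's upper bound can be tightened accordingly.
    double MaxUsefulIncrease(double lp_obj, double best_ub, double rc) {
      if (rc <= 0.0) return std::numeric_limits<double>::infinity();
      return (best_ub - lp_obj) / rc;
    }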
Definition at line 410 of file lb_tree_search.cc.