Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
linear_assignment.h File Reference
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <deque>
#include <limits>
#include <memory>
#include <string>
#include <utility>
#include <vector>
#include "absl/flags/declare.h"
#include "absl/flags/flag.h"
#include "absl/strings/str_format.h"
#include "ortools/base/logging.h"
#include "ortools/graph/ebert_graph.h"
#include "ortools/util/permutation.h"
#include "ortools/util/zvector.h"


Classes

class  operations_research::LinearSumAssignment< GraphType >
 This class does not take ownership of its underlying graph.
 
class  operations_research::LinearSumAssignment< GraphType >::BipartiteLeftNodeIterator
 
class  operations_research::CostValueCycleHandler< ArcIndexType >
 
class  operations_research::ArcIndexOrderingByTailNode< GraphType >
 

Namespaces

namespace  operations_research
 In SWIG mode, we don't want anything besides these top-level includes.
 

Functions

 ABSL_DECLARE_FLAG (int64_t, assignment_alpha)
 
 ABSL_DECLARE_FLAG (int, assignment_progress_logging_period)
 
 ABSL_DECLARE_FLAG (bool, assignment_stack_order)
 

Function Documentation

◆ ABSL_DECLARE_FLAG() [1/3]

ABSL_DECLARE_FLAG(bool, assignment_stack_order)

◆ ABSL_DECLARE_FLAG() [2/3]

ABSL_DECLARE_FLAG(int, assignment_progress_logging_period)

◆ ABSL_DECLARE_FLAG() [3/3]

ABSL_DECLARE_FLAG(int64_t, assignment_alpha)

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

An implementation of a cost-scaling push-relabel algorithm for the assignment problem (minimum-cost perfect bipartite matching), from the paper of Goldberg and Kennedy (1995).

This implementation finds the minimum-cost perfect assignment in the given graph with integral edge weights set through the SetArcCost method.

The running time is O(n*m*log(nC)) where n is the number of nodes, m is the number of edges, and C is the largest magnitude of an edge cost. In principle it can be worse than the Hungarian algorithm but we don't know of any class of problems where that actually happens. An additional sqrt(n) factor could be shaved off the running time bound using the technique described in http://dx.doi.org/10.1137/S0895480194281185 (see also http://theory.stanford.edu/~robert/papers/glob_upd.ps).

Example usage:

  #include "ortools/graph/graph.h"
  #include "ortools/graph/linear_assignment.h"

  // Choose a graph implementation (we recommend StaticGraph<>).
  typedef util::StaticGraph<> Graph;

  // Define a num_nodes / 2 by num_nodes / 2 assignment problem:
  const int num_nodes = ...
  const int num_arcs = ...
  const int num_left_nodes = num_nodes / 2;
  Graph graph(num_nodes, num_arcs);
  std::vector<operations_research::CostValue> arc_costs(num_arcs);
  for (int arc = 0; arc < num_arcs; ++arc) {
    const int arc_tail = ...  // must be in [0, num_left_nodes)
    const int arc_head = ...  // must be in [num_left_nodes, num_nodes)
    graph.AddArc(arc_tail, arc_head);
    arc_costs[arc] = ...
  }

  // Build the StaticGraph. You can skip this step by using a ListGraph<>
  // instead, but then the ComputeAssignment() below will be slower. It is
  // okay if your graph is small and performance is not critical though.
  {
    std::vector<Graph::ArcIndex> arc_permutation;
    graph.Build(&arc_permutation);
    util::Permute(arc_permutation, &arc_costs);
  }

  // Construct the LinearSumAssignment.
  ::operations_research::LinearSumAssignment<Graph> a(graph, num_left_nodes);
  for (int arc = 0; arc < num_arcs; ++arc) {
    // You can also replace 'arc_costs[arc]' by something like
    //   ComputeArcCost(permutation.empty() ? arc : permutation[arc])
    // if you don't want to store the costs in arc_costs to save memory.
    a.SetArcCost(arc, arc_costs[arc]);
  }

  // Compute the optimum assignment.
  bool success = a.ComputeAssignment();
  // Retrieve the cost of the optimum assignment.
  operations_research::CostValue optimum_cost = a.GetCost();
  // Retrieve the node-node correspondence of the optimum assignment and the
  // cost of each node pairing.
  for (int left_node = 0; left_node < num_left_nodes; ++left_node) {
    const int right_node = a.GetMate(left_node);
    operations_research::CostValue node_pair_cost =
        a.GetAssignmentCost(left_node);
    ...
  }

In the following, we consider a bipartite graph G = (V = X union Y, E subset of X x Y), where V denotes the set of nodes (vertices) in the graph, E denotes the set of arcs (edges), n = |V| denotes the number of nodes in the graph, and m = |E| denotes the number of arcs in the graph.

The set of nodes is divided into two parts, X and Y, and every arc must go between a node of X and a node of Y. With each arc is associated a cost c(v, w). A matching M is a subset of E with the property that no two arcs in M have a head or tail node in common, and a perfect matching is a matching that touches every node in the graph. The cost of a matching M is the sum of the costs of all the arcs in M.
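The matching definitions above can be restated as a short check. The following is an illustrative, self-contained sketch (the Arc struct and function name are ours, not part of the library API) that verifies a set of arcs forms a perfect matching and sums its cost:

```cpp
#include <cstdint>
#include <vector>

// An arc of the bipartite graph, with left (tail) and right (head) node
// indices and an integral cost. Illustrative only; not the library's type.
struct Arc {
  int tail;
  int head;
  int64_t cost;
};

// Checks whether `matching` is a perfect matching on a bipartite graph with
// left nodes 0..num_left-1 and right nodes num_left..num_left+num_right-1,
// i.e. no two arcs share a head or tail node and every node is touched.
// On success, *total_cost is the sum of the matched arcs' costs.
bool IsPerfectMatching(const std::vector<Arc>& matching, int num_left,
                       int num_right, int64_t* total_cost) {
  if (num_left != num_right) return false;  // perfect requires |X| == |Y|
  std::vector<bool> left_used(num_left, false);
  std::vector<bool> right_used(num_right, false);
  *total_cost = 0;
  for (const Arc& arc : matching) {
    const int right = arc.head - num_left;
    if (arc.tail < 0 || arc.tail >= num_left) return false;
    if (right < 0 || right >= num_right) return false;
    // No two arcs in a matching may share a tail or head node.
    if (left_used[arc.tail] || right_used[right]) return false;
    left_used[arc.tail] = true;
    right_used[right] = true;
    *total_cost += arc.cost;
  }
  // Perfect: with |X| == |Y| and no shared nodes, |M| == |X| touches all.
  return static_cast<int>(matching.size()) == num_left;
}
```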

The assignment problem is to find a perfect matching of minimum cost in the given bipartite graph. The present algorithm reduces the assignment problem to an instance of the minimum-cost flow problem and takes advantage of special properties of the resulting minimum-cost flow problem to solve it efficiently using a push-relabel method. For more information about minimum-cost flow, see ortools/graph/min_cost_flow.h.
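To make the problem statement concrete before the push-relabel machinery, here is a tiny brute-force solver over permutations of a dense cost matrix. It is exponential-time and purely illustrative; the library itself uses the cost-scaling method described here, and the function name is hypothetical:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <numeric>
#include <vector>

// Brute-force minimum-cost perfect matching on a dense n x n cost matrix:
// try every assignment of left node i to right node perm[i]. Exponential
// time; for illustrating the problem definition only. Returns the minimum
// total cost; mate[i] is the right node matched to left node i.
int64_t BruteForceAssignment(const std::vector<std::vector<int64_t>>& cost,
                             std::vector<int>* mate) {
  const int n = static_cast<int>(cost.size());
  std::vector<int> perm(n);
  std::iota(perm.begin(), perm.end(), 0);  // start from identity permutation
  int64_t best = std::numeric_limits<int64_t>::max();
  do {
    int64_t total = 0;
    for (int i = 0; i < n; ++i) total += cost[i][perm[i]];
    if (total < best) {
      best = total;
      *mate = perm;
    }
  } while (std::next_permutation(perm.begin(), perm.end()));
  return best;
}
```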

The method used here is the cost-scaling approach for the minimum-cost circulation problem as described in [Goldberg and Tarjan] with some technical modifications:

  1. For efficiency, we solve a transportation problem instead of minimum-cost circulation. We might revisit this decision if it is important to handle problems in which no perfect matching exists.
  2. We use a modified "asymmetric" notion of epsilon-optimality in which left-to-right residual arcs are required to have reduced cost bounded below by zero and right-to-left residual arcs are required to have reduced cost bounded below by -epsilon. For each residual arc direction, the reduced-cost threshold for admissibility is epsilon/2 above the threshold for epsilon optimality.
  3. We do not limit the applicability of the relabeling operation to nodes with excess. Instead we use the double-push operation (discussed in the Goldberg and Kennedy CSA paper and Kennedy's thesis), which relabels right-side nodes just after they have been discharged.

The above differences are explained in detail in [Kennedy's thesis] and explained not quite as cleanly in [Goldberg and Kennedy's CSA paper]. Note that the thesis explanation uses a value of epsilon that is double the one used here.

Some definitions:

Active: A node is called active when it has excess. It is eligible to be pushed from. In this implementation, every active node is on the left side of the graph, where prices are determined implicitly, so no left-side relabeling is necessary before pushing from an active node. We do, however, need to compute the implications for price changes on the affected right-side nodes.

Admissible: A residual arc (one that can carry more flow) is called admissible when its reduced cost is small enough. We can push additional flow along such an arc without violating epsilon-optimality. In the case of a left-to-right residual arc, the reduced cost must be at most epsilon/2. In the case of a right-to-left residual arc, the reduced cost must be at most -epsilon/2. The careful reader will note that these thresholds are not used explicitly anywhere in this implementation; the reason is the implicit pricing of left-side nodes.

Reduced cost: Essentially an arc's reduced cost is its complementary slackness. In push-relabel algorithms this is c_p(v, w) = p(v) + c(v, w) - p(w), where p() is the node price function and c(v, w) is the cost of the arc from v to w. See min_cost_flow.h for more details.

Partial reduced cost: We maintain prices implicitly for left-side nodes in this implementation, so instead of reduced costs we work with partial reduced costs, defined as c'_p(v, w) = c(v, w) - p(w).
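The reduced-cost formulas and admissibility thresholds above can be written out directly. These helpers are illustrative restatements of the definitions, not functions from the library (the real implementation keeps left-side prices implicit and never evaluates these thresholds explicitly):

```cpp
#include <cstdint>

// Reduced cost of arc (v, w): c_p(v, w) = p(v) + c(v, w) - p(w).
int64_t ReducedCost(int64_t price_v, int64_t cost_vw, int64_t price_w) {
  return price_v + cost_vw - price_w;
}

// Partial reduced cost, with left-side prices kept implicit:
// c'_p(v, w) = c(v, w) - p(w).
int64_t PartialReducedCost(int64_t cost_vw, int64_t price_w) {
  return cost_vw - price_w;
}

// Asymmetric admissibility: a left-to-right residual arc is admissible when
// its reduced cost is at most epsilon / 2; a right-to-left residual arc when
// its reduced cost is at most -epsilon / 2.
bool AdmissibleLeftToRight(int64_t reduced_cost, int64_t epsilon) {
  return reduced_cost <= epsilon / 2;
}
bool AdmissibleRightToLeft(int64_t reduced_cost, int64_t epsilon) {
  return reduced_cost <= -epsilon / 2;
}
```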

We check at initialization time for the possibility of arithmetic overflow and warn if the given costs are too large. In many cases the bound we use to trigger the warning is pessimistic so the given problem can often be solved even if we warn that overflow is possible.
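As a sketch of what such a pessimistic check might look like (the scale factor and bound here are assumptions for illustration, not the ones the library actually uses):

```cpp
#include <cstdint>
#include <limits>

// Warn-style overflow check in the spirit described above: cost scaling
// multiplies costs by a factor related to the node count, so flag inputs
// where |max_cost| * scale_factor could exceed the int64_t range. The
// scale factor below is hypothetical, chosen only to illustrate the idea
// that the bound is pessimistic: many flagged instances still solve fine.
bool CostsMayOverflow(int64_t max_cost_magnitude, int64_t num_nodes) {
  const int64_t scale_factor = num_nodes + 1;  // hypothetical scaling factor
  if (max_cost_magnitude == 0) return false;
  return max_cost_magnitude >
         std::numeric_limits<int64_t>::max() / scale_factor;
}
```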

We don't use the interface from operations_research/algorithms/hungarian.h because we want to be able to express sparse problems efficiently.

When asked to solve the given assignment problem we return a boolean to indicate whether the given problem was feasible.

References: [ Goldberg and Kennedy's CSA paper ] A. V. Goldberg and R. Kennedy, "An Efficient Cost Scaling Algorithm for the Assignment Problem." Mathematical Programming, Vol. 71, pages 153-178, December 1995.

[ Goldberg and Tarjan ] A. V. Goldberg and R. E. Tarjan, "Finding Minimum-Cost Circulations by Successive Approximation." Mathematics of Operations Research, Vol. 15, No. 3, pages 430-466, August 1990.

[ Kennedy's thesis ] J. R. Kennedy, Jr., "Solving Unweighted and Weighted Bipartite Matching Problems in Theory and Practice." Stanford University Doctoral Dissertation, Department of Computer Science, 1995.

[ Burkard et al. ] R. Burkard, M. Dell'Amico, S. Martello, "Assignment Problems", SIAM, 2009, ISBN: 978-0898716634, http://www.amazon.com/dp/0898716632/

[ Ahuja et al. ] R. K. Ahuja, T. L. Magnanti, J. B. Orlin, "Network Flows: Theory, Algorithms, and Applications," Prentice Hall, 1993, ISBN: 978-0136175490, http://www.amazon.com/dp/013617549X.

Keywords: linear sum assignment problem, Hungarian method, Goldberg, Kennedy.