



Ford-Fulkerson algorithm

7. NETWORK FLOW I
‣ max-flow and min-cut problems
‣ Ford-Fulkerson algorithm
‣ max-flow min-cut theorem
‣ capacity-scaling algorithm
‣ shortest augmenting paths
‣ blocking-flow algorithm
‣ unit-capacity simple networks

Lecture slides by Kevin Wayne.
Copyright © 2005 Pearson-Addison Wesley. Copyright © 2013 Kevin Wayne.
http://www.cs.princeton.edu/wayne/kleinbergtardos
Last updated on Sep 8, 2013, 6:40 AM.


MAX-FLOW AND MIN-CUT PROBLEMS  (Section 7.1)

Flow network
Abstraction for material flowing through the edges.
- Digraph G = (V, E) with source s ∈ V and sink t ∈ V.
- Nonnegative integer capacity c(e) for each e ∈ E.
- Assume no parallel edges, no edge enters s, and no edge leaves t.
[figure: example flow network with integer edge capacities]

Minimum cut problem
Def. An st-cut (cut) is a partition (A, B) of the vertices with s ∈ A and t ∈ B.
Def. Its capacity is the sum of the capacities of the edges from A to B:
     cap(A, B) = Σ_{e out of A} c(e)
(edges from B to A are not counted).
[figures: three example cuts with capacities 10 + 5 + 15 = 30, 10 + 8 + 16 = 34, and 10 + 8 + 10 = 28]
Min-cut problem. Find a cut of minimum capacity.

Maximum flow problem
Def. An st-flow (flow) f is a function that satisfies:
- for each e ∈ E:  0 ≤ f(e) ≤ c(e)   [capacity]
- for each v ∈ V − {s, t}:  Σ_{e in to v} f(e) = Σ_{e out of v} f(e)   [flow conservation]
[figure: at the highlighted node v, inflow = 5 + 5 + 0 = 10 and outflow = 10 + 0 = 10]
Def. The value of a flow f is val(f) = Σ_{e out of s} f(e).
[figure: a flow of value 5 + 10 + 10 = 25]
Max-flow problem. Find a flow of maximum value.
[figure: a flow of value 8 + 10 + 10 = 28]
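The two definitions above translate directly into a few lines of code. The following Python sketch is illustrative only: the dict-based representation of capacities and flows and the tiny example network are assumptions, not part of the slides.

    # Minimal sketch (assumed representation): a network is a dict mapping
    # edge (u, v) -> capacity, and a flow is a dict mapping edge (u, v) -> flow value.

    def cut_capacity(capacity, A):
        """cap(A, B) = sum of capacities of edges leaving A (edges from B to A don't count)."""
        return sum(c for (u, v), c in capacity.items() if u in A and v not in A)

    def flow_value(flow, s):
        """val(f) = total flow leaving the source s."""
        return sum(f for (u, v), f in flow.items() if u == s)

    # Tiny hypothetical example:
    capacity = {('s', 'a'): 10, ('s', 'b'): 5, ('a', 't'): 8, ('a', 'b'): 4, ('b', 't'): 10}
    flow     = {('s', 'a'): 8,  ('s', 'b'): 5, ('a', 't'): 8, ('a', 'b'): 0, ('b', 't'): 5}
    print(cut_capacity(capacity, {'s'}))   # 10 + 5 = 15
    print(flow_value(flow, 's'))           # 8 + 5 = 13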
FORD-FULKERSON ALGORITHM  (Section 7.1)

Towards a max-flow algorithm
Greedy algorithm.
- Start with f(e) = 0 for all edges e ∈ E.
- Find an s↝t path P where each edge has f(e) < c(e).
- Augment flow along path P.
- Repeat until you get stuck.
[demo: on the example network, three greedy augmentations of value 8, 2, and 6 give an ending flow
 of value 16, but the max-flow value is 19 — the greedy algorithm can get stuck short of optimal]

Residual graph
Original edge e = (u, v) ∈ E with flow f(e) and capacity c(e).
Residual edge. "Undo" flow sent: keep e = (u, v) and its reverse e^R = (v, u).
Residual capacity:
     c_f(e) = c(e) − f(e)   if e ∈ E
     c_f(e) = f(e^R)        if e^R ∈ E
(flow on a reverse edge negates flow on the forward edge).
Residual graph G_f = (V, E_f): the residual edges with positive residual capacity,
     E_f = { e : f(e) < c(e) } ∪ { e^R : f(e) > 0 }.
Key property. f ' is a flow in G_f iff f + f ' is a flow in G.

Augmenting path
Def. An augmenting path is a simple s↝t path P in the residual graph G_f.
Def. The bottleneck capacity of an augmenting path P is the minimum residual capacity of any edge in P.
Key property. Let f be a flow and let P be an augmenting path in G_f.
Then the augmented f ' is a flow and val(f ') = val(f) + bottleneck(G_f, P).

AUGMENT (f, c, P)
  b ← bottleneck capacity of path P.
  FOREACH edge e ∈ P :
     IF (e ∈ E)  f(e) ← f(e) + b.
     ELSE        f(e^R) ← f(e^R) − b.
  RETURN f.

Ford-Fulkerson algorithm
Ford-Fulkerson augmenting path algorithm.
- Start with f(e) = 0 for all edges e ∈ E.
- Find an augmenting path P in the residual graph G_f.
- Augment flow along path P.
- Repeat until you get stuck.

FORD-FULKERSON (G, s, t, c)
  FOREACH edge e ∈ E : f(e) ← 0.
  G_f ← residual graph.
  WHILE (there exists an augmenting path P in G_f)
     f ← AUGMENT (f, c, P).
     Update G_f.
  RETURN f.
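For concreteness, here is a minimal Python sketch of the generic Ford-Fulkerson scheme above, assuming a dict-of-dicts residual-capacity map and using DFS to find augmenting paths. It is a sketch of the idea, not the course's reference implementation.

    from collections import defaultdict

    def ford_fulkerson(capacity, s, t):
        """Generic Ford-Fulkerson: repeatedly augment along any s~>t path in the
        residual graph. capacity: dict (u, v) -> c(e). Returns the max-flow value."""
        residual = defaultdict(lambda: defaultdict(int))
        for (u, v), c in capacity.items():
            residual[u][v] += c          # forward residual capacity starts at c(e); reverse edges start at 0

        def find_path():                 # DFS for any augmenting path; returns a list of edges or None
            stack, parent = [s], {s: None}
            while stack:
                u = stack.pop()
                if u == t:
                    path, v = [], t
                    while parent[v] is not None:
                        path.append((parent[v], v)); v = parent[v]
                    return path[::-1]
                for v, cf in residual[u].items():
                    if cf > 0 and v not in parent:
                        parent[v] = u
                        stack.append(v)
            return None

        value = 0
        path = find_path()
        while path is not None:
            b = min(residual[u][v] for u, v in path)     # bottleneck capacity of P
            for u, v in path:                            # augment: push b forward, credit b to reverse edges
                residual[u][v] -= b
                residual[v][u] += b
            value += b
            path = find_path()
        return value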
Ford-Fulkerson algorithm demo
[demo: on the example network, augmenting paths with bottlenecks 8, 2, 6, 2, and 1 raise the flow
 value 0 → 8 → 10 → 16 → 18 → 19; the later paths use reverse (residual) edges. When no augmenting
 path remains, the nodes reachable from s in the residual graph G_f define a cut of capacity 19,
 matching the max-flow value.]


MAX-FLOW MIN-CUT THEOREM  (Section 7.2)

Relationship between flows and cuts
Flow value lemma. Let f be any flow and let (A, B) be any cut. Then the net flow across (A, B)
equals the value of f :
     Σ_{e out of A} f(e) − Σ_{e in to A} f(e) = val(f).
[figures: for the flow of value 25, three different cuts give net flow
 5 + 10 + 10 = 25,  10 + 5 + 10 = 25,  and  (10 + 10 + 5 + 10 + 0 + 0) − (5 + 5 + 0 + 0) = 25,
 where the subtracted terms are the edges from B to A]

Pf.
     val(f) = Σ_{e out of s} f(e)
            = Σ_{v ∈ A} ( Σ_{e out of v} f(e) − Σ_{e in to v} f(e) )
              [by flow conservation, all terms except v = s are 0]
            = Σ_{e out of A} f(e) − Σ_{e in to A} f(e).  ▪
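The flow value lemma is easy to check numerically. A small sketch under the same assumed dict representation as before: for a feasible flow, the net flow across every cut (A, B) with s ∈ A and t ∈ B equals val(f).

    def net_flow_across_cut(flow, A):
        """Net flow across (A, B): flow on edges leaving A minus flow on edges entering A."""
        out_flow = sum(f for (u, v), f in flow.items() if u in A and v not in A)
        in_flow  = sum(f for (u, v), f in flow.items() if u not in A and v in A)
        return out_flow - in_flow

    # For the hypothetical flow of value 13 used earlier, every cut gives the same net flow:
    flow = {('s', 'a'): 8, ('s', 'b'): 5, ('a', 't'): 8, ('a', 'b'): 0, ('b', 't'): 5}
    assert net_flow_across_cut(flow, {'s'}) == 13
    assert net_flow_across_cut(flow, {'s', 'a'}) == 13
    assert net_flow_across_cut(flow, {'s', 'a', 'b'}) == 13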
Weak duality. Let f be any flow and (A, B) be any cut. Then val(f) ≤ cap(A, B).
Pf.
     val(f) = Σ_{e out of A} f(e) − Σ_{e in to A} f(e)      [flow value lemma]
            ≤ Σ_{e out of A} f(e)
            ≤ Σ_{e out of A} c(e)
            = cap(A, B).  ▪
[figure: a flow of value 27 and a cut of capacity 30, illustrating value of flow ≤ capacity of cut]

Max-flow min-cut theorem
Augmenting path theorem. A flow f is a max-flow iff there are no augmenting paths.
Max-flow min-cut theorem. Value of the max-flow = capacity of the min-cut.
Pf. The following three conditions are equivalent for any flow f :
  i.   There exists a cut (A, B) such that cap(A, B) = val(f).
  ii.  f is a max-flow.
  iii. There is no augmenting path with respect to f.

[ i ⇒ ii ]
- Suppose that (A, B) is a cut such that cap(A, B) = val(f).
- Then, for any flow f ', val(f ') ≤ cap(A, B) = val(f), by weak duality and the assumption.
- Thus, f is a max-flow.  ▪

[ ii ⇒ iii ]  We prove the contrapositive: ¬iii ⇒ ¬ii.
- Suppose that there is an augmenting path with respect to f.
- We can improve f by sending flow along this path.
- Thus, f is not a max-flow.  ▪

[ iii ⇒ i ]
- Let f be a flow with no augmenting paths.
- Let A be the set of nodes reachable from s in the residual graph G_f.
- By definition of A, s ∈ A.  By definition of the flow f, t ∉ A.
- Every edge e = (v, w) with v ∈ A, w ∈ B must have f(e) = c(e);
  every edge e = (v, w) with v ∈ B, w ∈ A must have f(e) = 0.
Therefore
     val(f) = Σ_{e out of A} f(e) − Σ_{e in to A} f(e)      [flow value lemma]
            = Σ_{e out of A} c(e)
            = cap(A, B).  ▪
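The iii ⇒ i step is constructive: once no augmenting path remains, the nodes reachable from s in the residual graph form one side of a minimum cut. A sketch, assuming the residual-capacity map from the Ford-Fulkerson sketch above:

    def min_cut_side(residual, s):
        """Return A = set of nodes reachable from s via edges of positive residual capacity.
        (A, V - A) is a minimum cut once no augmenting path remains."""
        A, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v, cf in residual[u].items():
                if cf > 0 and v not in A:
                    A.add(v)
                    stack.append(v)
        return A

Edges leaving A are saturated and edges entering A carry no flow, so cap(A, B) = val(f).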
RUNNING TIME AND CHOICE OF AUGMENTING PATHS  (Section 7.3)

Running time
Assumption. Capacities are integers between 1 and C.
Integrality invariant. Throughout the algorithm, the flow values f(e) and the residual
capacities c_f(e) are integers.
Theorem. The algorithm terminates in at most val(f*) ≤ n C iterations (f* denotes a max-flow).
Pf. Each augmentation increases the value by at least 1.  ▪
Corollary. The running time of Ford-Fulkerson is O(m n C).
Corollary. If C = 1, the running time of Ford-Fulkerson is O(m n).
Integrality theorem. There exists a max-flow f for which every flow value f(e) is an integer.
Pf. Since the algorithm terminates, the theorem follows from the invariant.  ▪

Bad case for Ford-Fulkerson
Q. Is the generic Ford-Fulkerson algorithm poly-time in the input size m, n, and log C ?
A. No. If the max capacity is C, the algorithm can take ≥ C iterations: alternating the
augmenting paths s→v→w→t and s→w→v→t (each sending only 1 unit of flow across the capacity-1
edge v–w) requires 2C augmenting paths.
[figure: network with edges s→v, s→w, v→t, w→t of capacity C and edge v→w of capacity 1]

Choosing good augmenting paths
Use care when selecting augmenting paths.
- Some choices lead to exponential algorithms.
- Clever choices lead to polynomial algorithms.
- If capacities are irrational, the algorithm is not even guaranteed to terminate.
Goal. Choose augmenting paths so that:
- augmenting paths can be found efficiently, and
- few iterations are needed.

Choose augmenting paths with:
- max bottleneck capacity ("fattest" path; see the sketch below),
- sufficiently large bottleneck capacity, or
- fewest number of edges.
[citations: Jack Edmonds and Richard M. Karp, "Theoretical Improvements in Algorithmic Efficiency
 for Network Flow Problems," J. ACM 19(2), April 1972, pp. 248-264; E. A. Dinic (1970, Soviet
 Union), who independently showed that augmenting along shortest augmenting paths yields an
 O(n²m) max-flow algorithm.]
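The "max bottleneck capacity" rule can be implemented with a Dijkstra-style search that always expands the node reachable with the largest bottleneck so far. This is a hedged sketch over the same assumed residual-capacity map as before, not code from the slides.

    import heapq

    def fattest_path_bottleneck(residual, s, t):
        """Maximum bottleneck capacity over all s~>t paths in the residual graph
        (Dijkstra-style search with min(bottleneck, edge) in place of summed distances)."""
        best = {s: float('inf')}
        heap = [(-float('inf'), s)]                 # max-heap via negated bottlenecks
        while heap:
            neg_b, u = heapq.heappop(heap)
            b = -neg_b
            if u == t:
                return b
            if b < best.get(u, 0):
                continue                            # stale heap entry
            for v, cf in residual[u].items():
                nb = min(b, cf)
                if cf > 0 and nb > best.get(v, 0):
                    best[v] = nb
                    heapq.heappush(heap, (-nb, v))
        return 0                                    # t unreachable: no augmenting path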
CAPACITY-SCALING ALGORITHM  (Section 7.3)

Capacity-scaling algorithm
Intuition. Choosing an augmenting path with the highest bottleneck capacity increases the flow by
the maximum possible amount in a given iteration.
- Don't worry about finding the exact highest-bottleneck path.
- Maintain a scaling parameter Δ.
- Let G_f(Δ) be the subgraph of the residual graph consisting only of arcs with capacity ≥ Δ.
[figure: a residual graph G_f with arc capacities 1, 102, 110, 122, 170, and the subgraph G_f(Δ)
 for Δ = 100, which drops the capacity-1 arc]

CAPACITY-SCALING (G, s, t, c)
  FOREACH edge e ∈ E : f(e) ← 0.
  Δ ← largest power of 2 ≤ C.
  WHILE (Δ ≥ 1)
     G_f(Δ) ← Δ-residual graph.
     WHILE (there exists an augmenting path P in G_f(Δ))
        f ← AUGMENT (f, c, P).
        Update G_f(Δ).
     Δ ← Δ / 2.
  RETURN f.

Capacity-scaling algorithm: proof of correctness
Assumption. All edge capacities are integers between 1 and C.
Integrality invariant. All flow and residual capacity values are integral.
Theorem. If the capacity-scaling algorithm terminates, then f is a max-flow.
Pf.
- By the integrality invariant, when Δ = 1, G_f(Δ) = G_f.
- Upon termination of the Δ = 1 phase, there are no augmenting paths.  ▪

Capacity-scaling algorithm: analysis of running time
LEMMA 1. The outer while loop repeats 1 + ⌈log₂ C⌉ times.
Pf. Initially C / 2 < Δ ≤ C, and Δ decreases by a factor of 2 in each iteration.  ▪
LEMMA 2. Let f be the flow at the end of a Δ-scaling phase. Then
the value of the max-flow ≤ val(f) + m Δ.   [proof below]
LEMMA 3. There are at most 2m augmentations per scaling phase.
Pf.
- Let f be the flow at the end of the previous scaling phase (which used parameter 2Δ).
- LEMMA 2  ⇒  val(f*) ≤ val(f) + 2 m Δ.
- Each augmentation in a Δ-phase increases val(f) by at least Δ.  ▪
Theorem. The scaling max-flow algorithm finds a max-flow in O(m log C) augmentations.
It can be implemented to run in O(m² log C) time.
Pf. Follows from LEMMA 1 and LEMMA 3.  ▪

LEMMA 2. Let f be the flow at the end of a Δ-scaling phase. Then
the value of the max-flow ≤ val(f) + m Δ.
Pf. We show there exists a cut (A, B) such that cap(A, B) ≤ val(f) + m Δ.
- Choose A to be the set of nodes reachable from s in G_f(Δ).
- By definition of A, s ∈ A.  By definition of the flow f, t ∉ A.
- Every edge e = (v, w) with v ∈ A, w ∈ B must have f(e) ≥ c(e) − Δ;
  every edge e = (v, w) with v ∈ B, w ∈ A must have f(e) ≤ Δ.
Therefore
     val(f) = Σ_{e out of A} f(e) − Σ_{e in to A} f(e)      [flow value lemma]
            ≥ Σ_{e out of A} (c(e) − Δ) − Σ_{e in to A} Δ
            = Σ_{e out of A} c(e) − Σ_{e out of A} Δ − Σ_{e in to A} Δ
            ≥ cap(A, B) − m Δ.  ▪
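A direct Python rendering of CAPACITY-SCALING under the same assumptions as the earlier sketches, restricting the path search to edges of residual capacity ≥ Δ. It is illustrative, not a reference implementation.

    from collections import defaultdict

    def capacity_scaling_max_flow(capacity, s, t):
        """Capacity-scaling max flow: only augment along paths whose edges all have
        residual capacity >= delta; halve delta when no such path remains."""
        residual = defaultdict(lambda: defaultdict(int))
        for (u, v), c in capacity.items():
            residual[u][v] += c

        def find_path(delta):                      # DFS restricted to G_f(delta)
            stack, parent = [s], {s: None}
            while stack:
                u = stack.pop()
                if u == t:
                    path, v = [], t
                    while parent[v] is not None:
                        path.append((parent[v], v)); v = parent[v]
                    return path[::-1]
                for v, cf in residual[u].items():
                    if cf >= delta and v not in parent:
                        parent[v] = u; stack.append(v)
            return None

        C = max(capacity.values(), default=0)
        delta = 1
        while delta * 2 <= C:                      # largest power of 2 <= C
            delta *= 2
        value = 0
        while delta >= 1:
            path = find_path(delta)
            while path is not None:
                b = min(residual[u][v] for u, v in path)
                for u, v in path:                  # augment along P
                    residual[u][v] -= b
                    residual[v][u] += b
                value += b
                path = find_path(delta)
            delta //= 2
        return value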
SHORTEST AUGMENTING PATHS  (Section 17.2)

Shortest augmenting path
Q. Which augmenting path should we choose?
A. The one with the fewest number of edges; it can be found via BFS.

SHORTEST-AUGMENTING-PATH (G, s, t, c)
  FOREACH e ∈ E : f(e) ← 0.
  G_f ← residual graph.
  WHILE (there exists an augmenting path in G_f)
     P ← BREADTH-FIRST-SEARCH (G_f, s, t).
     f ← AUGMENT (f, c, P).
     Update G_f.
  RETURN f.

Shortest augmenting path: overview of analysis
LEMMA 1. Throughout the algorithm, the length of the shortest augmenting path never decreases.
LEMMA 2. After at most m shortest-path augmentations, the length of the shortest augmenting path
strictly increases.
Theorem. The shortest-augmenting-path algorithm runs in O(m² n) time.
Pf.
- O(m + n) time to find a shortest augmenting path via BFS.
- O(m) augmentations for paths of each length k.
- If there is an augmenting path, there is a simple one, so 1 ≤ k < n.
- Hence O(m n) augmentations in total.  ▪

Shortest augmenting path: analysis
Def. Given a digraph G = (V, E) with source s, its level graph is defined by:
- ℓ(v) = number of edges in a shortest path from s to v;
- L_G = (V, E_G) is the subgraph of G that contains only those edges (v, w) ∈ E
  with ℓ(w) = ℓ(v) + 1.
[figure: a graph G and its level graph L_G, with levels ℓ = 0, 1, 2, 3]
Property. The level graph can be computed in O(m + n) time.
Pf. Run BFS; delete back and side edges.
Key property. P is a shortest s↝v path in G iff P is an s↝v path in L_G.

LEMMA 1. Throughout the algorithm, the length of the shortest augmenting path never decreases.
Pf.
- Let f and f ' be the flow before and after a shortest-path augmentation.
- Let L and L' be the level graphs of G_f and G_f '.
- Only back edges are added to G_f ', and any path with a back edge is longer than the previous
  shortest length.  ▪

LEMMA 2. After at most m shortest-path augmentations, the length of the shortest augmenting path
strictly increases.
Pf.
- At least one bottleneck edge is deleted from L after each augmentation.
- No new edge is added to L until the length of the shortest path strictly increases.  ▪

Shortest augmenting path: review of analysis
LEMMA 1 and LEMMA 2 give O(m) augmentations for paths of exactly k edges, hence O(m n)
augmentations and O(m² n) time overall.

Shortest augmenting path: improving the running time
Note. Θ(m n) augmentations are necessary on some networks, so instead try to decrease the time
per augmentation.
- Simple idea (blocking flows)  ⇒  O(m n²)        [Dinic 1970]
- Dynamic trees                 ⇒  O(m n log n)   [Sleator-Tarjan 1983]
[citation: Daniel D. Sleator and Robert E. Tarjan, "A Data Structure for Dynamic Trees,"
 Journal of Computer and System Sciences 26 (1983), pp. 362-391.]
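A Python sketch of the basic SHORTEST-AUGMENTING-PATH pseudocode above (the O(m² n) version, without the blocking-flow or dynamic-tree improvements). The BFS returns a fewest-edges augmenting path; the dict-of-dicts residual map is the same assumption as in the earlier sketches.

    from collections import defaultdict, deque

    def shortest_augmenting_path_max_flow(capacity, s, t):
        """Edmonds-Karp style: always augment along a shortest (fewest-edges)
        augmenting path, found by BFS in the residual graph."""
        residual = defaultdict(lambda: defaultdict(int))
        for (u, v), c in capacity.items():
            residual[u][v] += c

        def bfs_path():
            parent = {s: None}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v, cf in residual[u].items():
                    if cf > 0 and v not in parent:
                        parent[v] = u
                        if v == t:                       # reconstruct the path s ~> t
                            path = []
                            while parent[v] is not None:
                                path.append((parent[v], v)); v = parent[v]
                            return path[::-1]
                        queue.append(v)
            return None

        value = 0
        path = bfs_path()
        while path is not None:
            b = min(residual[u][v] for u, v in path)
            for u, v in path:
                residual[u][v] -= b
                residual[v][u] += b
            value += b
            path = bfs_path()
        return value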
BLOCKING-FLOW ALGORITHM  (Section 18.1)

Blocking-flow algorithm
Two types of augmentations.
- Normal: the length of the shortest path does not change.
- Special: the length of the shortest path strictly increases.
Phase of normal augmentations.
- Explicitly maintain the level graph L_G.
- Start at s and advance along an edge in L_G until reaching t or getting stuck.
- If t is reached, augment and update L_G.
- If stuck, retreat: delete the node from L_G and go back to the previous node.
[demo: a sequence of advance, augment, retreat, and advance steps on the level graph; the phase
 ends when s itself gets stuck, i.e., no s↝t path remains in L_G]
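The phase explicitly maintains the level graph L_G. A minimal sketch of computing it with BFS over the residual graph (same assumed representation as the earlier sketches): levels are BFS distances from s, and only edges that go from one level to the next are kept.

    from collections import deque

    def level_graph(residual, s):
        """Compute levels l(v) = BFS distance from s using edges of positive residual
        capacity; the level graph L_G keeps only edges (v, w) with l(w) = l(v) + 1."""
        level = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, cf in residual[u].items():
                if cf > 0 and v not in level:
                    level[v] = level[u] + 1
                    queue.append(v)
        edges = [(u, v) for u in level for v, cf in residual[u].items()
                 if cf > 0 and level.get(v) == level[u] + 1]
        return level, edges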
Blocking-flow algorithm (pseudocode)

INITIALIZE (G, s, t, f, c)
  L_G ← level graph of G_f.
  P ← ∅.
  GOTO ADVANCE (s).

ADVANCE (v)
  IF (v = t)
     AUGMENT (P).
     Remove saturated edges from L_G.
     P ← ∅.
     GOTO ADVANCE (s).
  IF (there exists an edge (v, w) ∈ L_G)
     Add edge (v, w) to P.
     GOTO ADVANCE (w).
  ELSE
     GOTO RETREAT (v).

RETREAT (v)
  IF (v = s)
     STOP.
  ELSE
     Delete v (and all incident edges) from L_G.
     Remove the last edge (u, v) from P.
     GOTO ADVANCE (u).
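The ADVANCE/RETREAT/AUGMENT loop can be written compactly as a DFS with a per-node edge pointer; the pointer plays the role of "retreat" by permanently skipping edges that cannot reach t in this phase. This is an illustrative sketch (names and representation are assumptions carried over from the earlier sketches), not the slides' code.

    def blocking_flow_phase(residual, level, s, t):
        """One phase: push flow along level-graph edges until no s~>t path remains
        in the level graph (i.e., a blocking flow has been found).
        Returns the total flow value pushed in this phase."""
        adj = {u: list(residual[u].keys()) for u in level}   # candidate edges per node
        ptr = {u: 0 for u in level}                          # next edge to try at u

        def dfs(u, pushed):
            if u == t:
                return pushed
            while ptr[u] < len(adj[u]):
                v = adj[u][ptr[u]]
                # advance only along level-graph edges with positive residual capacity
                if residual[u][v] > 0 and level.get(v) == level[u] + 1:
                    b = dfs(v, min(pushed, residual[u][v]))
                    if b > 0:                                # augment along the path found
                        residual[u][v] -= b
                        residual[v][u] += b
                        return b
                ptr[u] += 1                                  # retreat: edge is dead for this phase
            return 0

        total = 0
        pushed = dfs(s, float('inf'))
        while pushed > 0:
            total += pushed
            pushed = dfs(s, float('inf'))
        return total

Recomputing the level graph (previous sketch) and calling blocking_flow_phase until t is no longer reachable from s yields the O(m n²) blocking-flow algorithm analyzed below.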
Blocking-flow algorithm: analysis
LEMMA. A phase can be implemented in O(m n) time.
Pf.
- Initialization happens once per phase: O(m), using BFS.
- At most m augmentations per phase (each augmentation deletes at least one edge from L_G): O(m n) per phase.
- At most n retreats per phase (each retreat deletes one node from L_G): O(m + n) per phase.
- At most m n advances per phase (at most n advances before a retreat or an augmentation): O(m n) per phase.  ▪
Theorem. [Dinic 1970]  The blocking-flow algorithm runs in O(m n²) time.
Pf.
- By the lemma, O(m n) time per phase.
- At most n phases (as in the shortest-augmenting-path analysis).  ▪

Choosing good augmenting paths: summary
Assumption. Integer capacities between 1 and C.

  method                               augmentations    running time
  augmenting path                      n C              O(m n C)
  fattest augmenting path              m log (mC)       O(m² log n log (mC))
  capacity scaling                     m log C          O(m² log C)
  improved capacity scaling            m log C          O(m n log C)
  shortest augmenting path             m n              O(m² n)
  improved shortest augmenting path    m n              O(m n²)
  dynamic trees                        m n              O(m n log n)

Maximum flow algorithms: theory
(max-flow algorithms for sparse digraphs with m edges, integer capacities between 1 and C)

  year   method                      worst case               discovered by
  1951   simplex                     O(m³ C)                  Dantzig
  1955   augmenting path             O(m² C)                  Ford-Fulkerson
  1970   shortest augmenting path    O(m³)                    Dinic, Edmonds-Karp
  1970   fattest augmenting path     O(m² log m log (mC))     Dinic, Edmonds-Karp
  1977   blocking flow               O(m^{5/2})               Cherkasky
  1978   blocking flow               O(m^{7/3})               Galil
  1983   dynamic trees               O(m² log m)              Sleator-Tarjan
  1985   capacity scaling            O(m² log C)              Gabow
  1997   length function             O(m^{3/2} log m log C)   Goldberg-Rao
  2012   compact network             O(m² / log m)            Orlin
   ?     ?                           O(m)                     ?

Maximum flow algorithms: practice
Push-relabel algorithm (Section 7.4). [Goldberg-Tarjan 1988]
Increases flow one edge at a time instead of one augmenting path at a time.
[citation: Andrew V. Goldberg and Robert E. Tarjan, "A New Approach to the Maximum-Flow Problem,"
 J. ACM 35(4), October 1988, pp. 921-940.]
Maximum flow algorithms: practice
Warning. Worst-case running time is generally not useful for predicting or comparing max-flow
algorithm performance in practice.
Best in practice. Push-relabel method with gap relabeling: O(m^{3/2}).
[citations: Boris V. Cherkassky and Andrew V. Goldberg, "On Implementing Push-Relabel Method for
 the Maximum Flow Problem"; Ravindra K. Ahuja, Murali Kodialam, Ajay K. Mishra, and James B.
 Orlin, "Computational Investigations of Maximum Flow Algorithms," European Journal of
 Operational Research 97 (1997), pp. 509-542.]
Maximum flow algorithms: practice
Computer vision. Different max-flow algorithms work better for the dense problem instances that
arise in applications to computer vision.
[citations: Yuri Boykov and Vladimir Kolmogorov, "An Experimental Comparison of Min-Cut/Max-Flow
 Algorithms for Energy Minimization in Vision," IEEE Transactions on PAMI 26(9), Sept. 2004,
 pp. 1124-1137; Tanmay Verma and Dhruv Batra, "MaxFlow Revisited: An Empirical Comparison of
 Maxflow Algorithms for Dense Vision Problems," 2012.]
UNIT-CAPACITY SIMPLE NETWORKS

Bipartite matching
Q. Which max-flow algorithm should we use for bipartite matching?
- Generic augmenting path:  O(m f*) = O(m n).
- Capacity scaling:  O(m² log U) = O(m²).
- Shortest augmenting path:  O(m² n).
Q. Does this suggest the "more clever" algorithms are not as good as we first thought?
A. No, we just need a more clever analysis.
Next. We prove that the shortest-augmenting-path algorithm can be implemented in O(m n^{1/2})
time on such networks (a sketch of the matching-to-flow reduction follows below).
[citation: Shimon Even and R. Endre Tarjan, "Network Flow and Testing Graph Connectivity,"
 SIAM Journal on Computing, Vol. 4, No. 4, December 1975.]
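For concreteness, a sketch of the standard reduction from bipartite matching to max flow on a unit-capacity network. It reuses the shortest_augmenting_path_max_flow sketch from earlier; the node labels 's' and 't', the function names, and the small example are assumptions.

    def max_bipartite_matching(left, right, edges):
        """Reduce bipartite matching to max flow on a unit-capacity network:
        s -> every left node, every right node -> t, and each pair in `edges`,
        all with capacity 1. By the integrality theorem, the max-flow value equals
        the size of a maximum matching."""
        capacity = {}
        for u in left:
            capacity[('s', u)] = 1
        for v in right:
            capacity[(v, 't')] = 1
        for u, v in edges:
            capacity[(u, v)] = 1
        # assumes shortest_augmenting_path_max_flow from the earlier sketch is in scope
        return shortest_augmenting_path_max_flow(capacity, 's', 't')

    # Example: a maximum matching of size 2 exists (a-y, b-x).
    print(max_bipartite_matching(['a', 'b'], ['x', 'y'], [('a', 'x'), ('a', 'y'), ('b', 'x')]))  # 2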
Unit-capacity simple networks
Def. A network is a unit-capacity simple network if:
- every edge capacity is 1, and
- every node (other than s or t) has either (i) at most one entering edge or (ii) at most one
  leaving edge.
Property. Let G be a unit-capacity simple network and let f be a 0-1 flow. Then G_f is also a
unit-capacity simple network.
Ex. Bipartite matching.

Shortest augmenting path algorithm on unit-capacity simple networks.
- Normal augmentation: the length of the shortest path does not change.
- Special augmentation: the length of the shortest path strictly increases.
Theorem. [Even-Tarjan 1975]  In unit-capacity simple networks, the shortest-augmenting-path
algorithm computes a maximum flow in O(m n^{1/2}) time.
Pf.
- LEMMA 1. Each phase of normal augmentations takes O(m) time.
- LEMMA 2. After at most n^{1/2} phases, val(f) ≥ val(f*) − n^{1/2}.
- LEMMA 3. After at most n^{1/2} additional augmentations, the flow is optimal.  ▪

Phase of normal augmentations.
- Explicitly maintain the level graph L_G.
- Start at s and advance along an edge in L_G until reaching t or getting stuck.
- If t is reached, augment, update L_G, and delete all edges of the augmenting path from L_G.
- If stuck, retreat: delete the node from L_G and go back to the previous node.
[demo: advance, augment, retreat, and advance steps on the level graph, ending the phase when no
 s↝t path remains in L_G]
Unit-capacity simple networks: analysis
LEMMA 1. A phase of normal augmentations takes O(m) time.
Pf.
- O(m) to create the level graph L_G.
- O(1) per edge, since each edge is traversed and deleted at most once.
- O(1) per node, since each node is deleted at most once.  ▪

LEMMA 2. After at most n^{1/2} phases, val(f) ≥ val(f*) − n^{1/2}.
Pf.
- After n^{1/2} phases, the length of the shortest augmenting path is > n^{1/2},
  so the level graph has more than n^{1/2} levels.
- Let 1 ≤ h ≤ n^{1/2} be a level with the minimum number of nodes; then |V_h| ≤ n^{1/2}.
- Let A = { v : ℓ(v) < h } ∪ { v : ℓ(v) = h and v has ≤ 1 outgoing residual edge }.
- Then cap_f(A, B) ≤ |V_h| ≤ n^{1/2}, and therefore val(f) ≥ val(f*) − n^{1/2}.  ▪
[figure: level graph L_G for flow f with levels V_0, V_1, ..., V_h, ..., and the corresponding cut
 defined by A in the residual graph G_f]