13. RANDOMIZED ALGORITHMS
‣ contention resolution
‣ global min cut
‣ linearity of expectation
‣ max 3-satisfiability
‣ universal hashing
‣ Chernoff bounds
‣ load balancing

Lecture slides by Kevin Wayne
Copyright © 2005 Pearson Addison-Wesley
http://www.cs.princeton.edu/~wayne/kleinberg-tardos    Last updated on 2/15/17 6:04 PM

Randomization

Algorithmic design patterns. Greedy. Divide-and-conquer. Dynamic programming. Network flow. Randomization.
Randomization. Allow fair coin flip in unit time. (In practice, access to a pseudo-random number generator.)
Why randomize? Can lead to the simplest, fastest, or only known algorithm for a particular problem.
Ex. Symmetry-breaking protocols, graph algorithms, quicksort, hashing, load balancing, Monte Carlo integration, cryptography.

13. RANDOMIZED ALGORITHMS ‣ contention resolution

Contention resolution in a distributed system

Contention resolution. Given n processes P_1, …, P_n, each competing for access to a shared database. If two or more processes access the database simultaneously, all processes are locked out. Devise a protocol to ensure all processes get through on a regular basis.
Restriction. Processes can't communicate.
Challenge. Need a symmetry-breaking paradigm.

Contention resolution: randomized protocol

Protocol. Each process requests access to the database at time t with probability p = 1/n.
Claim. Let S(i, t) = event that process i succeeds in accessing the database at time t. Then 1/(e·n) ≤ Pr[S(i, t)] ≤ 1/(2n).
Pf. By independence, Pr[S(i, t)] = p (1 − p)^(n−1): process i requests access, and none of the remaining n − 1 processes requests access.
Setting p = 1/n (the value that maximizes Pr[S(i, t)]), we have Pr[S(i, t)] = (1/n)(1 − 1/n)^(n−1), which lies between 1/(e·n) and 1/(2n) by the calculus facts below.  ▪
Useful facts from calculus. As n increases from 2, the function:
(1 − 1/n)^n converges monotonically from 1/4 up to 1/e.
(1 − 1/n)^(n−1) converges monotonically from 1/2 down to 1/e.

Contention resolution: randomized protocol

Claim. The probability that process i fails to access the database in e·n rounds is at most 1/e. After e·n (c ln n) rounds, the probability of failure is at most n^(−c).
Pf. Let F(i, t) = event that process i fails to access the database in rounds 1 through t. By independence and the previous claim, Pr[F(i, t)] ≤ (1 − 1/(e·n))^t.
Choose t = ⌈e·n⌉:  Pr[F(i, t)] ≤ (1 − 1/(e·n))^⌈e·n⌉ ≤ (1 − 1/(e·n))^(e·n) ≤ 1/e.
Choose t = ⌈e·n⌉ ⌈c ln n⌉:  Pr[F(i, t)] ≤ (1/e)^(c ln n) = n^(−c).  ▪

Contention resolution: randomized protocol

Claim. The probability that all processes succeed within 2 e·n ln n rounds is ≥ 1 − 1/n.
Union bound. Given events E_1, …, E_n,  Pr[∪_i E_i] ≤ ∑_i Pr[E_i].
Pf. Let F(t) = event that at least one of the n processes fails to access the database in any of the rounds 1 through t. By the union bound and the previous slide,
Pr[F(t)] = Pr[∪_i F(i, t)] ≤ ∑_i Pr[F(i, t)] ≤ n (1 − 1/(e·n))^t.
Choosing t = 2 ⌈e·n⌉ ⌈ln n⌉ yields Pr[F(t)] ≤ n · n^(−2) = 1/n.  ▪
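To make the protocol concrete, here is a minimal simulation sketch (not from the slides; the class and variable names are illustrative): each of n processes requests access with probability 1/n per round, a round succeeds for a process exactly when it is the unique requester, and we count rounds until every process has succeeded at least once. The count should be on the order of 2 e n ln n, matching the claim above.

    import java.util.Random;

    public class ContentionResolution {
        public static void main(String[] args) {
            int n = 50;                            // number of processes
            Random rng = new Random();
            boolean[] succeeded = new boolean[n];
            int remaining = n, rounds = 0;
            while (remaining > 0) {
                rounds++;
                int requester = -1, requests = 0;
                // Each process independently requests access with probability p = 1/n.
                for (int i = 0; i < n; i++) {
                    if (rng.nextDouble() < 1.0 / n) {
                        requests++;
                        requester = i;
                    }
                }
                // Exactly one request => that process gets through this round.
                if (requests == 1 && !succeeded[requester]) {
                    succeeded[requester] = true;
                    remaining--;
                }
            }
            System.out.println("rounds = " + rounds
                + ", 2 e n ln n = " + 2 * Math.E * n * Math.log(n));
        }
    }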
13. RANDOMIZED ALGORITHMS ‣ global min cut

Global minimum cut

Global min cut. Given a connected, undirected graph G = (V, E), find a cut (A, B) of minimum cardinality.
Applications. Partitioning items in a database, identifying clusters of related documents, network reliability, network design, circuit design, TSP solvers.
Network flow solution. Replace every edge (u, v) with two antiparallel edges (u, v) and (v, u). Pick some vertex s and compute a min s–v cut separating s from each other vertex v ∈ V.
False intuition. Global min cut is harder than min s–t cut.

Contraction algorithm

Contraction algorithm. [Karger 1995]
Pick an edge e = (u, v) uniformly at random.
Contract edge e: replace u and v by a single new super-node w; preserve edges, updating endpoints of u and v to w; keep parallel edges, but delete self-loops.
Repeat until the graph has just two nodes v_1 and v_2.
Return the cut (all nodes that were contracted to form v_1).
[Figure: contracting edge (u, v) into super-node w. Animation credit: Thore Husfeldt.]

Contraction algorithm

Claim. The contraction algorithm returns a min cut with probability ≥ 2/n².
Pf. Consider a global min cut (A, B) of G. Let F be the set of edges with one endpoint in A and the other in B, and let k = |F| = size of the min cut.
In the first step, the algorithm contracts an edge in F with probability k/|E|. Every node has degree ≥ k, since otherwise (A, B) would not be a min cut, so |E| ≥ ½ k n. Thus, the algorithm contracts an edge in F with probability ≤ 2/n.
Let G′ be the graph after j iterations, with n′ = n − j super-nodes. Suppose no edge in F has been contracted. Then the min cut in G′ is still k, so |E′| ≥ ½ k n′, and the algorithm contracts an edge in F with probability ≤ 2/n′.
Let E_j = event that an edge in F is not contracted in iteration j. Then
Pr[E_1 ∩ E_2 ∩ … ∩ E_(n−2)] = Pr[E_1] × Pr[E_2 | E_1] × … × Pr[E_(n−2) | E_1 ∩ … ∩ E_(n−3)]
  ≥ (1 − 2/n)(1 − 2/(n−1)) ⋯ (1 − 2/4)(1 − 2/3)
  = ((n−2)/n)((n−3)/(n−1)) ⋯ (2/4)(1/3)
  = 2 / (n(n−1))
  ≥ 2/n².  ▪

Contraction algorithm

Amplification. To amplify the probability of success, run the contraction algorithm many times, with independent random choices.
Claim. If we repeat the contraction algorithm n² ln n times, then the probability of failing to find the global min cut is ≤ 1/n².
Pf. By independence, the probability of failure is at most
(1 − 2/n²)^(n² ln n) = [ (1 − 2/n²)^(½ n²) ]^(2 ln n) ≤ (e^(−1))^(2 ln n) = 1/n²,
using (1 − 1/x)^x ≤ 1/e.  ▪

Contraction algorithm: example execution

[Figure: six independent trials of the contraction algorithm; trial 5 finds the min cut. Credit: Thore Husfeldt.]

Global min cut: context

Remark. The overall running time is slow, since we perform Θ(n² log n) iterations and each takes Ω(m) time.
Improvement. [Karger–Stein 1996] O(n² log³ n). Early iterations are less risky than later ones: the probability of contracting an edge in the min cut hits 50% when about n/√2 nodes remain.
Run the contraction algorithm until n/√2 nodes remain.
Run the contraction algorithm twice on the resulting graph and return the best of the two cuts.
Extensions. Naturally generalizes to handle positive weights.
Best known. [Karger 2000] O(m log³ n): faster than the best known max-flow algorithm or deterministic global min cut algorithm.
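A minimal sketch of the contraction algorithm with amplification (an illustration, not code from the slides; the edge-list representation and helper names are assumptions). Each trial contracts random edges by relabeling endpoints until two super-nodes remain; drawing an edge uniformly and redrawing on self-loops is equivalent to deleting self-loops as the slides describe. The smallest cut over n² ln n trials is returned.

    import java.util.Random;

    public class KargerMinCut {
        // One trial of the contraction algorithm on a multigraph given as an edge list.
        static int contractOnce(int n, int[][] edges, Random rng) {
            int[] label = new int[n];                 // label[v] = super-node containing v
            for (int v = 0; v < n; v++) label[v] = v;
            int superNodes = n;
            while (superNodes > 2) {
                int[] e = edges[rng.nextInt(edges.length)];   // pick an edge uniformly at random
                int a = label[e[0]], b = label[e[1]];
                if (a == b) continue;                 // self-loop: ignore and redraw
                for (int v = 0; v < n; v++)           // contract: merge super-node b into a
                    if (label[v] == b) label[v] = a;
                superNodes--;
            }
            int cut = 0;                              // count edges crossing the two super-nodes
            for (int[] e : edges)
                if (label[e[0]] != label[e[1]]) cut++;
            return cut;
        }

        public static void main(String[] args) {
            // Small example: a 4-cycle with one chord; its global min cut has size 2.
            int n = 4;
            int[][] edges = { {0,1}, {1,2}, {2,3}, {3,0}, {0,2} };
            Random rng = new Random();
            int trials = (int) Math.ceil(n * n * Math.log(n));  // amplification: n^2 ln n trials
            int best = Integer.MAX_VALUE;
            for (int t = 0; t < trials; t++)
                best = Math.min(best, contractOnce(n, edges, rng));
            System.out.println("estimated global min cut = " + best);
        }
    }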
13. RANDOMIZED ALGORITHMS ‣ linearity of expectation

Expectation

Expectation. Given a discrete random variable X, its expectation E[X] is defined by
E[X] = ∑_(j≥0) j · Pr[X = j].
Waiting for a first success. A coin is heads with probability p and tails with probability 1 − p. How many independent flips X until the first heads?
E[X] = ∑_(j≥0) j · Pr[X = j] = ∑_(j≥0) j (1 − p)^(j−1) p = (p/(1 − p)) ∑_(j≥0) j (1 − p)^j = (p/(1 − p)) · ((1 − p)/p²) = 1/p.
(The event X = j means j − 1 tails followed by 1 head.)

Expectation: two properties

Useful property. If X is a 0/1 random variable, E[X] = Pr[X = 1].
Pf. E[X] = ∑_(j≥0) j · Pr[X = j] = ∑_(j=0)^(1) j · Pr[X = j] = Pr[X = 1].  ▪
Linearity of expectation. Given two random variables X and Y defined over the same probability space (not necessarily independent), E[X + Y] = E[X] + E[Y].
Benefit. Decouples a complex calculation into simpler pieces.

Guessing cards

Game. Shuffle a deck of n cards; turn them over one at a time; try to guess each card.
Memoryless guessing. No psychic abilities; can't even remember what's been turned over already. Guess a card from the full deck uniformly at random.
Claim. The expected number of correct guesses is 1. (Surprisingly effortless using linearity of expectation.)
Pf. Let X_i = 1 if the i-th prediction is correct and 0 otherwise.
Let X = number of correct guesses = X_1 + … + X_n.
E[X_i] = Pr[X_i = 1] = 1/n.
By linearity of expectation, E[X] = E[X_1] + … + E[X_n] = 1/n + … + 1/n = 1.  ▪

Guessing cards

Game. Shuffle a deck of n cards; turn them over one at a time; try to guess each card.
Guessing with memory. Guess a card uniformly at random from the cards not yet seen.
Claim. The expected number of correct guesses is Θ(log n).
Pf. Let X_i = 1 if the i-th prediction is correct and 0 otherwise.
Let X = number of correct guesses = X_1 + … + X_n.
E[X_i] = Pr[X_i = 1] = 1/(n − i + 1).
By linearity of expectation, E[X] = E[X_1] + … + E[X_n] = 1/n + … + 1/2 + 1/1 = H(n), where ln(n + 1) < H(n) < 1 + ln n.  ▪

Coupon collector

Coupon collector. Each box of cereal contains a coupon. There are n different types of coupons. Assuming all boxes are equally likely to contain each coupon, how many boxes before you have ≥ 1 coupon of each type?
Claim. The expected number of steps is Θ(n log n).
Pf. Phase j = time between collecting j and j + 1 distinct coupons.
Let X_j = number of steps you spend in phase j, and let X = total number of steps = X_0 + X_1 + … + X_(n−1).
In phase j, the probability of success per step is (n − j)/n, so the expected waiting time is E[X_j] = n/(n − j).
E[X] = ∑_(j=0)^(n−1) E[X_j] = ∑_(j=0)^(n−1) n/(n − j) = n ∑_(i=1)^(n) 1/i = n H(n).  ▪
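A quick empirical check of the coupon-collector bound (a sketch added here, not part of the slides): collect uniformly random coupon types until all n have appeared, and compare the average number of boxes over many runs to n·H(n).

    import java.util.Random;

    public class CouponCollector {
        public static void main(String[] args) {
            int n = 100, runs = 1000;
            Random rng = new Random();
            long totalSteps = 0;
            for (int r = 0; r < runs; r++) {
                boolean[] seen = new boolean[n];
                int distinct = 0;
                while (distinct < n) {                 // buy boxes until all n types collected
                    int coupon = rng.nextInt(n);       // each type equally likely
                    if (!seen[coupon]) { seen[coupon] = true; distinct++; }
                    totalSteps++;
                }
            }
            double harmonic = 0;                       // H(n) = 1 + 1/2 + ... + 1/n
            for (int i = 1; i <= n; i++) harmonic += 1.0 / i;
            System.out.println("average steps = " + (double) totalSteps / runs);
            System.out.println("n * H(n)      = " + n * harmonic);
        }
    }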
13. RANDOMIZED ALGORITHMS ‣ max 3-satisfiability

Maximum 3-satisfiability

Maximum 3-satisfiability. Given a 3-SAT formula (exactly 3 distinct literals per clause), find a truth assignment that satisfies as many clauses as possible.
[Example instance with five clauses C_1, …, C_5 over the variables x_1, …, x_4.]
Remark. NP-hard search problem.
Simple idea. Flip a coin, and set each variable true with probability ½, independently for each variable.

Maximum 3-satisfiability: analysis

Claim. Given a 3-SAT formula with k clauses, the expected number of clauses satisfied by a random assignment is 7k/8.
Pf. Consider the random variable Z_j = 1 if clause C_j is satisfied, and 0 otherwise. Let Z = number of clauses satisfied by the random assignment. By linearity of expectation,
E[Z] = ∑_(j=1)^(k) E[Z_j] = ∑_(j=1)^(k) Pr[clause C_j is satisfied] = 7k/8,
since a clause with 3 distinct literals is unsatisfied only when all 3 literals are false, which happens with probability 1/8.  ▪

The probabilistic method

Corollary. For any instance of 3-SAT, there exists a truth assignment that satisfies at least a 7/8 fraction of all clauses.
Pf. A random variable is at least its expectation some of the time.  ▪
Probabilistic method. [Paul Erdős] Prove the existence of a non-obvious property by showing that a random construction produces it with positive probability.

Maximum 3-satisfiability: analysis

Q. Can we turn this idea into a 7/8-approximation algorithm?
A. Yes (but a random variable can almost always be below its mean).
Lemma. The probability that a random assignment satisfies ≥ 7k/8 clauses is at least 1/(8k).
Pf. Let p_j be the probability that exactly j clauses are satisfied, and let p be the probability that ≥ 7k/8 clauses are satisfied. Then
7k/8 = E[Z] = ∑_(j≥0) j p_j
       = ∑_(j < 7k/8) j p_j + ∑_(j ≥ 7k/8) j p_j
       ≤ (7k/8 − 1/8) ∑_(j < 7k/8) p_j + k ∑_(j ≥ 7k/8) p_j
       ≤ (7k/8 − 1/8) · 1 + k p.
(The first inequality uses that any integer j < 7k/8 satisfies j ≤ 7k/8 − 1/8.) Rearranging terms yields p ≥ 1/(8k).  ▪

Maximum 3-satisfiability: analysis

Johnson's algorithm. Repeatedly generate random truth assignments until one of them satisfies ≥ 7k/8 clauses.
Theorem. Johnson's algorithm is a 7/8-approximation algorithm.
Pf. By the previous lemma, each iteration succeeds with probability ≥ 1/(8k). By the waiting-time bound, the expected number of trials until one satisfies ≥ 7k/8 clauses is at most 8k.  ▪
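A minimal sketch of Johnson's algorithm (illustrative; the clause encoding and the small sample instance are assumptions, not from the slides): clauses are arrays of three nonzero integers whose sign indicates negation, and random assignments are resampled until at least ⌈7k/8⌉ clauses are satisfied.

    import java.util.Random;

    public class JohnsonMax3Sat {
        // clause holds 3 literals; literal +v means x_v, literal -v means NOT x_v (variables 1..n).
        static int countSatisfied(int[][] clauses, boolean[] assignment) {
            int satisfied = 0;
            for (int[] clause : clauses) {
                for (int lit : clause) {
                    boolean value = assignment[Math.abs(lit)];
                    if (lit < 0) value = !value;
                    if (value) { satisfied++; break; }
                }
            }
            return satisfied;
        }

        public static void main(String[] args) {
            int n = 4;  // variables x_1 .. x_4; an arbitrary small 3-SAT instance for illustration
            int[][] clauses = { {2,-3,4}, {2,3,-4}, {-1,2,4}, {-1,-2,3}, {1,-2,-4} };
            int k = clauses.length;
            int target = (int) Math.ceil(7.0 * k / 8.0);
            Random rng = new Random();
            boolean[] assignment = new boolean[n + 1];
            int trials = 0, best;
            do {
                trials++;
                for (int v = 1; v <= n; v++) assignment[v] = rng.nextBoolean();  // each variable true w.p. 1/2
                best = countSatisfied(clauses, assignment);
            } while (best < target);   // expected number of trials is at most 8k
            System.out.println("satisfied " + best + " of " + k + " clauses after " + trials + " trial(s)");
        }
    }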
Maximum satisfiability

Extensions. Allow one, two, or more literals per clause. Find a maximum weighted set of satisfied clauses.
Theorem. [Asano–Williamson 2000] There exists a 0.784-approximation algorithm for MAX-SAT.
Theorem. [Karloff–Zwick 1997, Zwick+computer 2002] There exists a 7/8-approximation algorithm for the version of MAX-3SAT in which each clause has at most 3 literals.
Theorem. [Håstad 1997] Unless P = NP, there is no ρ-approximation algorithm for MAX-3SAT (and hence MAX-SAT) for any ρ > 7/8. So it is very unlikely that we can improve over the simple randomized algorithm for MAX-3SAT.

Monte Carlo vs. Las Vegas algorithms

Monte Carlo. Guaranteed to run in poly-time, likely to find the correct answer. Ex: contraction algorithm for global min cut.
Las Vegas. Guaranteed to find the correct answer, likely to run in poly-time. Ex: randomized quicksort, Johnson's MAX-3SAT algorithm.
Remark. Can always convert a Las Vegas algorithm into a Monte Carlo algorithm (stop the algorithm after a certain point), but there is no known method (in general) to convert the other way.

RP and ZPP

RP. [Monte Carlo] Decision problems solvable with one-sided error in poly-time.
One-sided error. If the correct answer is no, always return no. If the correct answer is yes, return yes with probability ≥ ½. (Can decrease the probability of a false negative to 2^(−100) by 100 independent repetitions.)
ZPP. [Las Vegas] Decision problems solvable in expected poly-time. (The running time can be unbounded, but it is fast on average.)
Theorem. P ⊆ ZPP ⊆ RP ⊆ NP.
Fundamental open questions. To what extent does randomization help? Does P = ZPP? Does ZPP = RP? Does RP = NP?
13. RANDOMIZED ALGORITHMS ‣ universal hashing

Dictionary data type

Dictionary. Given a universe U of possible elements, maintain a subset S ⊆ U so that inserting, deleting, and searching in S is efficient.
Dictionary interface.
create(): initialize a dictionary with S = ∅.
insert(u): add element u ∈ U to S.
delete(u): delete u from S (if u is currently in S).
lookup(u): is u in S?
Challenge. The universe U can be extremely large, so defining an array of size |U| is infeasible.
Applications. File systems, databases, Google, compilers, checksums, P2P networks, associative arrays, cryptography, web caching, etc.

Hashing

Hash function. h : U → { 0, 1, …, n − 1 }.
Hashing. Create an array a of length n. When processing element u, access array element a[h(u)].
Collision. When h(u) = h(v) but u ≠ v. A collision is expected after Θ(√n) random insertions (birthday paradox).
Separate chaining: a[i] stores a linked list of the elements u with h(u) = i.
[Figure: a chained hash table; each slot a[i] points to a linked list of the strings that hash to i.]

Ad-hoc hash function

Ad-hoc hash function (à la the Java string library):

int hash(String s, int n) {
   int hash = 0;
   for (int i = 0; i < s.length(); i++)
      hash = (31 * hash) + s.charAt(i);
   return hash % n;
}

Deterministic hashing. If |U| ≥ n², then for any fixed hash function h, there is a subset S ⊆ U of n elements that all hash to the same slot. Thus, Θ(n) time per lookup in the worst case.
Q. But isn't an ad-hoc hash function good enough in practice?

Algorithmic complexity attacks

When can't we live with an ad-hoc hash function?
Obvious situations: aircraft control, nuclear reactor, pacemaker, ….
Surprising situations: denial-of-service attacks. A malicious adversary learns your ad-hoc hash function (e.g., by reading the Java API) and causes a big pile-up in a single slot that grinds performance to a halt.
Real-world exploits. [Crosby–Wallach 2003]
Linux 2.4.20 kernel: save files with carefully chosen names.
Perl 5.8.0: insert carefully chosen strings into an associative array.
Bro server: send carefully chosen packets to DOS the server, using less bandwidth than a dial-up modem.

Hashing performance

Ideal hash function. Maps m elements uniformly at random to n hash slots.
Running time depends on the length of the chains. Average length of a chain = α = m/n.
Choose n ≈ m ⇒ expect O(1) per insert, lookup, or delete.
Challenge. An explicit hash function h that achieves O(1) per operation.
Approach. Use randomization for the choice of h. (The adversary knows the randomized algorithm you're using, but doesn't know the random choices that the algorithm makes.)

Universal hashing (Carter–Wegman 1980s)

A universal family of hash functions is a set H of hash functions mapping a universe U to the set { 0, 1, …, n − 1 } such that:
For any pair of elements u ≠ v:  Pr_(h ∈ H) [ h(u) = h(v) ] ≤ 1/n   (h chosen uniformly at random from H).
Can select a random h efficiently.
Can compute h(u) efficiently.

Ex. U = { a, b, c, d, e, f }, n = 2.

H = { h_1, h_2 } is not universal:
        a  b  c  d  e  f
h_1(x)  0  1  0  1  0  1
h_2(x)  0  0  0  1  1  1
Pr[h(a) = h(b)] = 1/2, but Pr[h(a) = h(c)] = 1 and Pr[h(a) = h(d)] = 0.

H = { h_1, h_2, h_3, h_4 } is universal:
        a  b  c  d  e  f
h_1(x)  0  1  0  1  0  1
h_2(x)  0  0  0  1  1  1
h_3(x)  0  0  1  0  1  1
h_4(x)  1  0  0  1  1  0
Pr[h(a) = h(b)] = 1/2, Pr[h(a) = h(c)] = 1/2, Pr[h(a) = h(d)] = 1/2, Pr[h(a) = h(e)] = 1/2, Pr[h(a) = h(f)] = 0, and Pr[h(x) = h(y)] ≤ 1/2 for every other pair as well.

Universal hashing: analysis

Proposition. Let H be a universal family of hash functions mapping a universe U to the set { 0, 1, …, n − 1 }; let h ∈ H be chosen uniformly at random from H; let S ⊆ U be a subset of size at most n; and let u ∉ S. Then the expected number of items in S that collide with u is at most 1.
Pf. For any s ∈ S, define the random variable X_s = 1 if h(s) = h(u), and 0 otherwise. Let X be the random variable counting the total number of collisions with u. Then
E_(h ∈ H)[X] = E[ ∑_(s ∈ S) X_s ] = ∑_(s ∈ S) E[X_s] = ∑_(s ∈ S) Pr[X_s = 1] ≤ ∑_(s ∈ S) 1/n = |S|/n ≤ 1,
using linearity of expectation, the fact that X_s is a 0–1 random variable, and universality.  ▪
Q. OK, but how do we design a universal class of hash functions?

Designing a universal family of hash functions

Modulus. We will use a prime number p for the size of the hash table.
Integer encoding. Identify each element u ∈ U with a base-p integer of r digits: x = (x_1, x_2, …, x_r).
Hash function. Let A = set of all r-digit, base-p integers. For each a = (a_1, a_2, …, a_r) with 0 ≤ a_i < p, define
h_a(x) = ( ∑_(i=1)^(r) a_i x_i ) mod p,
which maps the universe U to the set { 0, 1, …, p − 1 }.
Hash function family. H = { h_a : a ∈ A }.

Designing a universal family of hash functions

Theorem. H = { h_a : a ∈ A } is a universal family of hash functions.
Pf. Let x = (x_1, x_2, …, x_r) and y = (y_1, y_2, …, y_r) be two distinct elements of U. We need to show that Pr[h_a(x) = h_a(y)] ≤ 1/p.
Since x ≠ y, there exists an index j such that x_j ≠ y_j.
We have h_a(x) = h_a(y) iff
a_j (y_j − x_j) ≡ ∑_(i ≠ j) a_i (x_i − y_i)  (mod p),
that is, a_j z ≡ m (mod p) with z = y_j − x_j and m = ∑_(i ≠ j) a_i (x_i − y_i).
We can assume a was chosen uniformly at random by first selecting all coordinates a_i with i ≠ j, then selecting a_j at random; thus, we can treat a_i as fixed for all coordinates i ≠ j.
Since p is prime, a_j z ≡ m (mod p) has at most one solution among the p possibilities for a_j (see the lemma on the next slide). Thus Pr[h_a(x) = h_a(y)] ≤ 1/p.  ▪

Number theory fact

Fact. Let p be prime, and let z ≢ 0 (mod p). Then α z ≡ m (mod p) has at most one solution 0 ≤ α < p.
Pf. Suppose 0 ≤ α_1 < p and 0 ≤ α_2 < p are two different solutions.
Then (α_1 − α_2) z ≡ 0 (mod p); hence (α_1 − α_2) z is divisible by p.
Since z ≢ 0 (mod p), we know that z is not divisible by p.
It follows that (α_1 − α_2) is divisible by p (here's where we use that p is prime).
Since |α_1 − α_2| < p, this implies α_1 = α_2, a contradiction.  ▪
Bonus fact. Can replace "at most one" with "exactly one" in the above fact.
Pf idea. Euclid's algorithm.

Universal hashing: summary

Goal. Given a universe U, maintain a subset S ⊆ U so that insert, delete, and lookup are efficient.
Universal hash function family. H = { h_a : a ∈ A } with h_a(x) = ( ∑_(i=1)^(r) a_i x_i ) mod p.
Choose p so that n ≤ p ≤ 2n, where n = |S|. (Fact: there exists a prime between n and 2n; we can find one using another randomized algorithm!)
Consequence. Space used = Θ(n). Expected number of collisions per operation is ≤ 1 ⇒ O(1) time per insert, delete, or lookup.
13. RANDOMIZED ALGORITHMS ‣ Chernoff bounds

Chernoff bounds (above mean)

Theorem. Suppose X_1, …, X_n are independent 0–1 random variables. Let X = X_1 + … + X_n. Then for any µ ≥ E[X] and for any δ > 0, we have
Pr[X > (1 + δ) µ] < [ e^δ / (1 + δ)^(1+δ) ]^µ.
In other words, the sum of independent 0–1 random variables is tightly centered on the mean.
Pf. We apply a number of simple transformations. For any t > 0,
Pr[X > (1 + δ) µ] = Pr[ e^(tX) > e^(t(1+δ)µ) ] ≤ e^(−t(1+δ)µ) · E[e^(tX)],
since f(x) = e^(tx) is monotone in x, and by Markov's inequality Pr[Y > a] ≤ E[Y]/a for a nonnegative random variable Y.
Now, by the definition of X and independence,
E[e^(tX)] = E[ e^(t ∑_i X_i) ] = ∏_i E[e^(t X_i)].

Pf. (continued) Let p_i = Pr[X_i = 1]. Then
E[e^(t X_i)] = p_i e^t + (1 − p_i) e^0 = 1 + p_i (e^t − 1) ≤ e^(p_i (e^t − 1)),
using the inequality 1 + α ≤ e^α for any α ≥ 0.
Combining everything:
Pr[X > (1 + δ) µ] ≤ e^(−t(1+δ)µ) ∏_i E[e^(t X_i)] ≤ e^(−t(1+δ)µ) ∏_i e^(p_i (e^t − 1)) ≤ e^(−t(1+δ)µ) e^(µ (e^t − 1)),
where the last step uses ∑_i p_i = E[X] ≤ µ. Finally, choose t = ln(1 + δ).  ▪

Chernoff bounds (below mean)

Theorem. Suppose X_1, …, X_n are independent 0–1 random variables. Let X = X_1 + … + X_n. Then for any µ ≤ E[X] and for any 0 < δ < 1, we have
Pr[X < (1 − δ) µ] < e^(−δ² µ / 2).
Pf idea. Similar.
Remark. Not quite symmetric, since it only makes sense to consider δ < 1.
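A quick numerical sanity check (illustrative, not from the slides): for X a sum of n independent Bernoulli(q) variables, compare the empirical tail Pr[X > (1+δ)µ] with the bound (e^δ / (1+δ)^(1+δ))^µ; the chosen n, q, δ are arbitrary.

    import java.util.Random;

    public class ChernoffCheck {
        public static void main(String[] args) {
            int n = 1000, samples = 20000;
            double q = 0.01, delta = 1.0;
            double mu = n * q;                           // mu = E[X]
            Random rng = new Random();
            int exceed = 0;
            for (int s = 0; s < samples; s++) {
                int x = 0;
                for (int i = 0; i < n; i++)
                    if (rng.nextDouble() < q) x++;       // X = sum of independent 0-1 variables
                if (x > (1 + delta) * mu) exceed++;
            }
            double empirical = (double) exceed / samples;
            double bound = Math.pow(Math.exp(delta) / Math.pow(1 + delta, 1 + delta), mu);
            System.out.println("empirical Pr[X > (1+d)mu] = " + empirical);
            System.out.println("Chernoff bound            = " + bound);
        }
    }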
13. RANDOMIZED ALGORITHMS ‣ load balancing

Load balancing

Load balancing. System in which m jobs arrive in a stream and need to be processed immediately on n identical processors. Find an assignment that balances the workload across processors.
Centralized controller. Assign jobs in a round-robin manner. Each processor receives at most ⌈m/n⌉ jobs.
Decentralized controller. Assign jobs to processors uniformly at random. How likely is it that some processor is assigned "too many" jobs?

Load balancing

Analysis (case m = n). Let X_i = number of jobs assigned to processor i, and let Y_ij = 1 if job j is assigned to processor i, and 0 otherwise. We have E[Y_ij] = 1/n. Thus X_i = ∑_j Y_ij, and µ = E[X_i] = 1.
Applying the Chernoff bound with δ = c − 1 yields Pr[X_i > c] < e^(c−1) / c^c.
Let γ(n) be the number x such that x^x = n, and choose c = e γ(n). Then
Pr[X_i > c] < e^(c−1) / c^c < (e/c)^c = (1/γ(n))^(e γ(n)) = n^(−e) ≤ 1/n².
Union bound ⇒ with probability ≥ 1 − 1/n, no processor receives more than e γ(n) = Θ(log n / log log n) jobs.
Bonus fact: with high probability, some processor receives Θ(log n / log log n) jobs.

Load balancing: many jobs

Theorem. Suppose the number of jobs is m = 16 n ln n. Then on average, each of the n processors handles µ = 16 ln n jobs. With high probability, every processor will have between half and twice the average load.
Pf. Let X_i and Y_ij be as before.
Applying the Chernoff bound (above mean) with δ = 1 yields
Pr[X_i > 2µ] < (e/4)^(16 ln n) < 1/n².
Applying the Chernoff bound (below mean) with δ = ½ yields
Pr[X_i < ½ µ] < e^(−(1/2)(1/2)²(16 ln n)) = 1/n².
Union bound ⇒ every processor has load between half and twice the average with probability ≥ 1 − 2/n.  ▪
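A small balls-in-bins simulation of the decentralized controller (an illustrative sketch): throw m = n jobs at n processors uniformly at random and report the maximum load, which the analysis above says is Θ(log n / log log n) with high probability.

    import java.util.Random;

    public class LoadBalancing {
        public static void main(String[] args) {
            int n = 100000;                        // n processors, m = n jobs
            int[] load = new int[n];
            Random rng = new Random();
            for (int job = 0; job < n; job++)
                load[rng.nextInt(n)]++;            // assign each job to a uniformly random processor
            int max = 0;
            for (int x : load) max = Math.max(max, x);
            double prediction = Math.log(n) / Math.log(Math.log(n));   // Theta(log n / log log n)
            System.out.println("max load = " + max + ", log n / log log n = " + prediction);
        }
    }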