12. LOCAL SEARCH

‣ gradient descent
‣ Metropolis algorithm
‣ Hopfield neural networks
‣ maximum cut
‣ Nash equilibria

Lecture slides by Kevin Wayne
Copyright © 2005 Pearson-Addison Wesley
Copyright © 2013 Kevin Wayne
http://www.cs.princeton.edu/wayne/kleinberg-tardos
Last updated on Sep 8, 2013 6:54 AM


Coping with NP-hardness

Q. Suppose I need to solve an NP-hard problem. What should I do?
A. Theory says you're unlikely to find a poly-time algorithm. You must sacrifice one of three desired features:
- solve the problem to optimality,
- solve the problem in polynomial time,
- solve arbitrary instances of the problem.


SECTION 12.1. Gradient descent: vertex cover

Vertex cover. Given a graph G = (V, E), find a subset of nodes S of minimal cardinality such that for each (u, v) ∈ E, either u or v (or both) is in S.

Neighbor relation. S ∼ S′ if S′ can be obtained from S by adding or deleting a single node. Each vertex cover S has at most n neighbors.

Gradient descent. Start with S = V. If there is a neighbor S′ that is a vertex cover and has lower cardinality, replace S with S′.

Remark. The algorithm terminates after at most n steps, since each update decreases the size of the cover by one.

Local optimum. No neighbor is strictly better.

[figure: three example graphs]
- optimum = all nodes on left side; local optimum = all nodes on right side
- optimum = center node only; local optimum = all other nodes
- optimum = even nodes; local optimum = omit every third node

Local search. An algorithm that explores the space of possible solutions in sequential fashion, moving from a current solution to a "nearby" one.

Neighbor relation. Let S ∼ S′ be a neighbor relation for the problem.

Gradient descent. Let S denote the current solution. If there is a neighbor S′ of S with strictly lower cost, replace S with the neighbor whose cost is as small as possible.
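The vertex-cover gradient descent above admits a short sketch in Python (the function name and edge-list representation are assumptions, not from the slides): starting from S = V, it repeatedly moves to a neighboring set, obtained by deleting one node, that is still a vertex cover.

```python
def gradient_descent_vertex_cover(nodes, edges):
    """Gradient descent for vertex cover: start with S = V, then repeatedly
    delete a node whose removal still leaves a vertex cover; stop at a
    local optimum, where no single deletion keeps the cover property."""
    S = set(nodes)

    def is_cover(T):
        # T is a vertex cover iff every edge has at least one endpoint in T
        return all(u in T or v in T for (u, v) in edges)

    improved = True
    while improved:
        improved = False
        for u in sorted(S):              # try each single-node deletion
            if is_cover(S - {u}):
                S = S - {u}              # strictly smaller vertex cover
                improved = True
                break
    return S
```

On the star graph with center 0, deleting the center first strands the search at the local optimum consisting of all leaves, matching the center-node example above.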
Otherwise, terminate the algorithm.

[figure: a funnel; a jagged funnel]


SECTION 12.2. Metropolis algorithm

Metropolis algorithm.
- Simulate the behavior of a physical system according to principles of statistical mechanics.
- Globally biased toward "downhill" steps, but occasionally makes "uphill" steps to break out of local minima.
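A minimal sketch of the Metropolis rule in Python (function and parameter names are assumptions): downhill moves are always accepted, while an uphill move of ΔE > 0 is accepted only with probability e^(−ΔE/(kT)).

```python
import math
import random

def metropolis(energy, neighbors, s0, T, k=1.0, steps=10_000, rng=random):
    """Metropolis algorithm at a fixed temperature T.
    Always accepts a downhill step; accepts an uphill step with
    probability e^(-dE / (kT)), allowing escapes from local minima."""
    s = s0
    for _ in range(steps):
        s_new = rng.choice(neighbors(s))   # random perturbation S' in N(S)
        dE = energy(s_new) - energy(s)
        if dE <= 0 or rng.random() < math.exp(-dE / (k * T)):
            s = s_new
    return s
```

Simulated annealing, described later in the deck, keeps this same acceptance rule but lowers T over time according to a cooling schedule T(i).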
[figure: first page of N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines," J. Chem. Phys. 21, 1087 (1953)]
Gibbs-Boltzmann function

Gibbs-Boltzmann function. The probability of finding a physical system in a state with energy E is proportional to e^(−E/(kT)), where T > 0 is the temperature and k is a constant.
- For any temperature T > 0, this is a monotone decreasing function of the energy E: the system is more likely to be in a lower-energy state than a higher one.
- T large: high- and low-energy states have roughly the same probability.
- T small: low-energy states are much more probable.


Metropolis algorithm

Metropolis algorithm. Given a fixed temperature T, maintain a current state S.
- Randomly perturb the current state S to a new state S′ ∈ N(S).
- If E(S′) ≤ E(S), update the current state to S′. Otherwise, update the current state to S′ with probability e^(−ΔE/(kT)), where ΔE = E(S′) − E(S) > 0.

Theorem.
Let f_S(t) be the fraction of the first t steps in which the simulation is in state S. Then, assuming some technical conditions, with probability 1:

   lim_{t→∞} f_S(t) = (1/Z) e^(−E(S)/(kT)),   where Z = Σ_S e^(−E(S)/(kT)).

Intuition. The simulation spends roughly the right amount of time in each state, according to the Gibbs-Boltzmann equation.


Simulated annealing

Simulated annealing.
- T large ⇒ probability of accepting an uphill move is large.
- T small ⇒ uphill moves are almost never accepted.
- Idea: turn the knob to control T.
- Cooling schedule: T = T(i) at iteration i.

Physical analog.
- Take a solid and raise it to a high temperature; we do not expect it to maintain a nice crystal structure.
- Take a molten solid and freeze it very abruptly; we do not expect to get a perfect crystal either.
- Annealing: cool the material gradually from a high temperature, allowing it to reach equilibrium at a succession of intermediate lower temperatures.


SECTION 12.3. Hopfield neural networks

Hopfield networks. Simple model of an associative memory, in which a large collection of units are connected by an underlying network, and neighboring units try to correlate their states.

Input: graph G = (V, E) with integer (positive or negative) edge weights w.

Configuration. Node assignment s_u = ±1.

Intuition. If w_uv < 0, then u and v want to have the same state; if w_uv > 0, then u and v want different states.

Note. In general, no configuration respects all constraints.

[figure: example network with edge weights 7, 5, 6]

Def. With respect to a configuration S, edge e = (u, v) is good if w_e s_u s_v < 0. That is, if w_e < 0 then s_u = s_v; if w_e > 0, then s_u ≠ s_v.

Def. With respect to a configuration S, a node u is satisfied if the weight of incident good edges ≥ weight of incident bad edges:

   Σ_{v: e=(u,v) ∈ E} w_e s_u s_v ≤ 0.

Def. A configuration is stable if all nodes are satisfied.
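The good-edge and satisfied-node definitions translate directly into code. A sketch, with states s_u ∈ {+1, −1} stored in a list and edges as (u, v, w) triples (this representation and the function names are assumptions):

```python
def is_good(w, s_u, s_v):
    """Edge e = (u, v) is good iff w_e * s_u * s_v < 0: a negative weight
    with equal states, or a positive weight with opposite states."""
    return w * s_u * s_v < 0

def is_satisfied(u, edges, s):
    """Node u is satisfied iff the sum of w_e * s_u * s_v over incident
    edges is <= 0, i.e. incident good weight >= incident bad weight."""
    return sum(w * s[a] * s[b] for (a, b, w) in edges if u in (a, b)) <= 0

def is_stable(n, edges, s):
    """A configuration is stable iff every node is satisfied."""
    return all(is_satisfied(u, edges, s) for u in range(n))
```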
[figure: example configuration with a satisfied node and a bad edge highlighted]

Goal. Find a stable configuration, if such a configuration exists.

State-flipping algorithm. Repeatedly flip the state of an unsatisfied node.

HOPFIELD-FLIP (G, w)
________________________________________________________________
S ← arbitrary configuration.
WHILE (current configuration is not stable)
   u ← unsatisfied node.
   s_u ← −s_u.
RETURN S.
________________________________________________________________

[figure: state-flipping example — an unsatisfied node (10 − 8 > 0), another unsatisfied node (8 − 4 − 1 − 1 > 0), then a stable configuration]


State-flipping algorithm: proof of correctness

Theorem. The state-flipping algorithm terminates with a stable configuration after at most W = Σ_e |w_e| iterations.

Pf attempt. Consider the measure of progress Φ(S) = # satisfied nodes. [this measure can decrease after a single flip, so the attempt fails]

Pf. Consider the measure of progress Φ(S) = Σ_{e good} |w_e|.
- Clearly 0 ≤ Φ(S) ≤ W.
- We show Φ(S) increases by at least 1 after each flip. When u flips state:
  - all good edges incident to u become bad,
  - all bad edges incident to u become good,
  - all other edges remain the same.

   Φ(S′) = Φ(S) − Σ_{e=(u,v) ∈ E, e good} |w_e| + Σ_{e=(u,v) ∈ E, e bad} |w_e| ≥ Φ(S) + 1,

since u is unsatisfied: the incident bad weight exceeds the incident good weight, and by at least 1 because the weights are integers. ∎
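HOPFIELD-FLIP can be sketched concretely in Python (node numbering and the (u, v, w) edge representation are assumptions); by the argument above, the loop performs at most W = Σ_e |w_e| flips.

```python
def hopfield_flip(n, edges, s=None):
    """State-flipping algorithm for a Hopfield network.
    edges: list of (u, v, w) triples with integer weight w; nodes 0..n-1.
    Repeatedly flips an unsatisfied node; terminates with a stable
    configuration after at most W = sum of |w| iterations."""
    if s is None:
        s = [1] * n                      # arbitrary starting configuration

    def incident_sum(u):
        # sum of w_e * s_u * s_v over edges incident to u;
        # u is satisfied iff this sum is <= 0
        return sum(w * s[a] * s[b] for (a, b, w) in edges if u in (a, b))

    while True:
        flipped = False
        for u in range(n):
            if incident_sum(u) > 0:      # u is unsatisfied
                s[u] = -s[u]             # flip its state
                flipped = True
                break
        if not flipped:                  # all nodes satisfied: stable
            return s
```

For example, a single edge of weight −5 pulls its endpoints to the same state, while a positive-weight edge pushes them apart.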
Remark. The decision problem is trivially solvable (the answer is always yes, since the state-flipping algorithm terminates with a stable configuration), but there is no known algorithm for the search problem that runs in time polynomial in n and log W.


SECTION 12.4. Maximum cut
