Mathematical methods of optimization

Published: 14-07-2017
Convex Optimization — Boyd & Vandenberghe

1. Introduction

• mathematical optimization
• least-squares and linear programming
• convex optimization
• example
• course goals and topics
• nonlinear optimization
• brief history of convex optimization

Introduction 1–1

Mathematical optimization

(mathematical) optimization problem

    minimize    f_0(x)
    subject to  f_i(x) ≤ b_i,  i = 1, ..., m

• x = (x_1, ..., x_n): optimization variables
• f_0 : R^n → R: objective function
• f_i : R^n → R, i = 1, ..., m: constraint functions

optimal solution x* has smallest value of f_0 among all vectors that satisfy the constraints

Examples

portfolio optimization
• variables: amounts invested in different assets
• constraints: budget, max./min. investment per asset, minimum return
• objective: overall risk or return variance

device sizing in electronic circuits
• variables: device widths and lengths
• constraints: manufacturing limits, timing requirements, maximum area
• objective: power consumption

data fitting
• variables: model parameters
• constraints: prior information, parameter limits
• objective: measure of misfit or prediction error

Solving optimization problems

general optimization problem
• very difficult to solve
• methods involve some compromise, e.g., very long computation time, or not always finding the solution

exceptions: certain problem classes can be solved efficiently and reliably
• least-squares problems
• linear programming problems
• convex optimization problems

Least-squares

    minimize  ‖Ax − b‖_2^2

solving least-squares problems
• analytical solution: x* = (AᵀA)⁻¹Aᵀb
• reliable and efficient algorithms and software
• computation time proportional to n²k (A ∈ R^{k×n}); less if structured
• a mature technology

using least-squares
• least-squares problems are easy to recognize
• a few standard techniques increase flexibility (e.g., including weights, adding regularization terms)

Linear programming

    minimize    cᵀx
    subject to  aᵢᵀx ≤ bᵢ,  i = 1, ..., m
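The least-squares formula x* = (AᵀA)⁻¹Aᵀb from the slide above lends itself to a quick numerical check. The sketch below fits a line y ≈ x_0 + x_1·t to three made-up data points by forming and solving the 2×2 normal equations (AᵀA)x = Aᵀy; the data are purely illustrative and not from the slides.

```python
# Least-squares fit y ≈ x0 + x1*t via the normal equations (AᵀA)x = Aᵀy.
# The data points below are made up for illustration.
t = [0.0, 1.0, 2.0]
y = [1.0, 2.0, 4.0]

# Build A (each row [1, t_i]), then form AᵀA (2x2) and Aᵀy (2-vector).
A = [[1.0, ti] for ti in t]
AtA = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in range(2)]
       for r in range(2)]
Aty = [sum(A[i][r] * y[i] for i in range(len(A))) for r in range(2)]

# Solve the 2x2 system by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x0 = (Aty[0] * AtA[1][1] - AtA[0][1] * Aty[1]) / det
x1 = (AtA[0][0] * Aty[1] - Aty[0] * AtA[1][0]) / det

print(x0, x1)  # minimizer of ‖Ax − y‖²: x0 = 5/6, x1 = 3/2
```

For a small fixed-size system like this, solving the normal equations directly is fine; for larger or ill-conditioned problems one would use a QR- or SVD-based solver instead.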
solving linear programs
• no analytical formula for solution
• reliable and efficient algorithms and software
• computation time proportional to n²m if m ≥ n; less with structure
• a mature technology

using linear programming
• not as easy to recognize as least-squares problems
• a few standard tricks used to convert problems into linear programs (e.g., problems involving ℓ_1- or ℓ_∞-norms, piecewise-linear functions)

Convex optimization problem

    minimize    f_0(x)
    subject to  f_i(x) ≤ b_i,  i = 1, ..., m

• objective and constraint functions are convex:

    f_i(αx + βy) ≤ αf_i(x) + βf_i(y)   if α + β = 1, α ≥ 0, β ≥ 0

• includes least-squares problems and linear programs as special cases

solving convex optimization problems
• no analytical solution
• reliable and efficient algorithms
• computation time (roughly) proportional to max{n³, n²m, F}, where F is cost of evaluating the f_i's and their first and second derivatives
• almost a technology

using convex optimization
• often difficult to recognize
• many tricks for transforming problems into convex form
• surprisingly many problems can be solved via convex optimization

Example

m lamps illuminating n (small, flat) patches

[figure: lamp j with power p_j illuminates patch k at distance r_kj and angle θ_kj, producing illumination I_k]

intensity I_k at patch k depends linearly on lamp powers p_j:

    I_k = Σ_{j=1}^{m} a_kj p_j,   a_kj = r_kj^{−2} max{cos θ_kj, 0}

problem: achieve desired illumination I_des with bounded lamp powers

    minimize    max_{k=1,...,n} |log I_k − log I_des|
    subject to  0 ≤ p_j ≤ p_max,  j = 1, ..., m

how to solve?

1. use uniform power: p_j = p, vary p

2. use least-squares:

    minimize  Σ_{k=1}^{n} (I_k − I_des)²

   round p_j if p_j > p_max or p_j < 0

3. use weighted least-squares:

    minimize  Σ_{k=1}^{n} (I_k − I_des)² + Σ_{j=1}^{m} w_j (p_j − p_max/2)²

   iteratively adjust weights w_j until 0 ≤ p_j ≤ p_max
4. use linear programming:

    minimize    max_{k=1,...,n} |I_k − I_des|
    subject to  0 ≤ p_j ≤ p_max,  j = 1, ..., m

   which can be solved via linear programming

of course these are approximate (suboptimal) ‘solutions’

5. use convex optimization: problem is equivalent to

    minimize    f_0(p) = max_{k=1,...,n} h(I_k/I_des)
    subject to  0 ≤ p_j ≤ p_max,  j = 1, ..., m

with h(u) = max{u, 1/u}

[plot: h(u) = max{u, 1/u} for 0 < u ≤ 4; h(u) ≥ 1 everywhere, with minimum h(1) = 1]

f_0 is convex because the maximum of convex functions is convex

exact solution obtained with effort ≈ modest factor × least-squares effort

additional constraints: does adding 1 or 2 below complicate the problem?

1. no more than half of total power is in any 10 lamps
2. no more than half of the lamps are on (p_j > 0)

• answer: with (1), still easy to solve; with (2), extremely difficult
• moral: (untrained) intuition doesn’t always work; without the proper background very easy problems can appear quite similar to very difficult problems

Course goals and topics

goals
1. recognize/formulate problems (such as the illumination problem) as convex optimization problems
2. develop code for problems of moderate size (1000 lamps, 5000 patches)
3. characterize optimal solution (optimal power distribution), give limits of performance, etc.

topics
1. convex sets, functions, optimization problems
2. examples and applications
3. algorithms
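The convex objective f_0(p) = max_k h(I_k/I_des) from the illumination example can be evaluated and convexity-checked numerically. The sketch below uses a made-up 3-patch, 2-lamp coefficient matrix a_kj (not from the slides) and verifies the defining inequality f_0(θp + (1−θ)q) ≤ θf_0(p) + (1−θ)f_0(q) along one segment; this illustrates, not proves, convexity.

```python
# Evaluate f_0(p) = max_k h(I_k / I_des) with h(u) = max(u, 1/u) for a toy
# illumination problem. The coefficients a[k][j] and I_des are illustrative
# made-up values, not from the slides.
a = [[0.5, 0.2],   # a[k][j]: patch k, lamp j
     [0.1, 0.6],
     [0.3, 0.3]]
I_des = 1.0

def h(u):
    # h is convex on u > 0: the pointwise max of a linear and a convex function
    return max(u, 1.0 / u)

def f0(p):
    # illumination at each patch is linear in the lamp powers
    I = [sum(a[k][j] * p[j] for j in range(len(p))) for k in range(len(a))]
    return max(h(I_k / I_des) for I_k in I)

p, q = [1.0, 1.0], [3.0, 3.0]
theta = 0.5
mid = [theta * x + (1 - theta) * y for x, y in zip(p, q)]

# Jensen-style spot check of convexity along the segment from p to q:
assert f0(mid) <= theta * f0(p) + (1 - theta) * f0(q)
print(f0(p), f0(q), f0(mid))
```

In practice one would hand this objective to a convex solver (e.g., via a modeling tool) rather than evaluate it by hand; the point here is only that f_0 is a max of convex functions of p and hence convex.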
Nonlinear optimization

traditional techniques for general nonconvex problems involve compromises

local optimization methods (nonlinear programming)
• find a point that minimizes f_0 among feasible points near it
• fast, can handle large problems
• require initial guess
• provide no information about distance to (global) optimum

global optimization methods
• find the (global) solution
• worst-case complexity grows exponentially with problem size

these algorithms are often based on solving convex subproblems

Brief history of convex optimization

theory (convex analysis): ca. 1900–1970

algorithms
• 1947: simplex algorithm for linear programming (Dantzig)
• 1960s: early interior-point methods (Fiacco & McCormick, Dikin, . . . )
• 1970s: ellipsoid method and other subgradient methods
• 1980s: polynomial-time interior-point methods for linear programming (Karmarkar 1984)
• late 1980s–now: polynomial-time interior-point methods for nonlinear convex optimization (Nesterov & Nemirovski 1994)

applications
• before 1990: mostly in operations research; few in engineering
• since 1990: many new applications in engineering (control, signal processing, communications, circuit design, . . . ); new problem classes (semidefinite and second-order cone programming, robust optimization)
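The dependence of local methods on the initial guess is easy to demonstrate. Below is a minimal gradient-descent sketch (a simple stand-in for the local methods mentioned above, not a method from the slides) on a made-up nonconvex double-well function: started from x = 1 it stops at a local minimum, started from x = −1 it finds the better well, and nothing in the method reports which outcome occurred.

```python
# Gradient descent on a nonconvex 1-D function f(x) = (x² − 1)² + 0.2x,
# which has two local minima; the function, step size, and starting
# points are made up for illustration.

def f(x):
    return (x * x - 1.0) ** 2 + 0.2 * x

def df(x):
    return 4.0 * x * (x * x - 1.0) + 0.2

def descend(x, step=0.01, iters=5000):
    # plain fixed-step gradient descent: converges only to a nearby
    # stationary point, with no certificate of global optimality
    for _ in range(iters):
        x -= step * df(x)
    return x

a = descend(1.0)    # stops at the local minimum near x ≈ 0.98
b = descend(-1.0)   # stops at the global minimum near x ≈ -1.02
print(a, f(a), b, f(b))
```

Both runs terminate at valid local minimizers, but f(b) < f(a): which answer you get depends entirely on where you start, exactly the compromise the slide describes.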
2. Convex sets

• affine and convex sets
• some important examples
• operations that preserve convexity
• generalized inequalities
• separating and supporting hyperplanes
• dual cones and generalized inequalities

Convex sets 2–1

Affine set

line through x_1, x_2: all points

    x = θx_1 + (1 − θ)x_2   (θ ∈ R)

[figure: the line through x_1 and x_2, with points marked at θ = 1.2, 1, 0.6, 0, −0.2]

affine set: contains the line through any two distinct points in the set

example: solution set of linear equations {x | Ax = b}

(conversely, every affine set can be expressed as the solution set of a system of linear equations)

Convex set

line segment between x_1 and x_2: all points

    x = θx_1 + (1 − θ)x_2

with 0 ≤ θ ≤ 1

convex set: contains line segment between any two points in the set

    x_1, x_2 ∈ C, 0 ≤ θ ≤ 1  =⇒  θx_1 + (1 − θ)x_2 ∈ C

examples (one convex, two nonconvex sets)

Convex combination and convex hull

convex combination of x_1, ..., x_k: any point x of the form

    x = θ_1 x_1 + θ_2 x_2 + ··· + θ_k x_k

with θ_1 + ··· + θ_k = 1, θ_i ≥ 0

convex hull conv S: set of all convex combinations of points in S

Convex cone

conic (nonnegative) combination of x_1 and x_2: any point of the form

    x = θ_1 x_1 + θ_2 x_2

with θ_1 ≥ 0, θ_2 ≥ 0

[figure: the conic hull of x_1 and x_2, a pie-slice region with apex at the origin]

convex cone: set that contains all conic combinations of points in the set
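The line/segment distinction above can be made concrete by computing x = θx_1 + (1 − θ)x_2 for the θ values shown in the affine-set figure; θ in [0, 1] gives points of the segment, other θ give points of the line only. The two 2-D points below are arbitrary illustrative choices, not from the slides.

```python
# Points x = θ·x1 + (1−θ)·x2 on the line through x1 and x2.
# x1 and x2 are arbitrary 2-D points chosen for illustration;
# the θ values match the labels in the affine-set figure.
x1, x2 = (1.0, 2.0), (3.0, 1.0)

def combo(theta):
    # θ = 1 gives x1, θ = 0 gives x2; 0 ≤ θ ≤ 1 sweeps the segment
    return tuple(theta * a + (1 - theta) * b for a, b in zip(x1, x2))

for theta in (1.2, 1.0, 0.6, 0.0, -0.2):
    on_segment = 0.0 <= theta <= 1.0
    print(theta, combo(theta), "segment" if on_segment else "line only")

# endpoints are recovered exactly
assert combo(1.0) == x1 and combo(0.0) == x2
```

The same one-liner with θ_1, θ_2 ≥ 0 but no sum-to-one constraint would generate conic combinations instead, which is exactly the difference between the convex-hull and convex-cone definitions above.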