cha01102_ch07_182-204.qxd 12/18/10 1:57 PM Page 182
7
Optimization
CHAPTER OBJECTIVES
The primary objective of this chapter is to introduce you to how optimization can be
used to determine minima and maxima of both one-dimensional and multidimensional
functions. Specific objectives and topics covered are
Understanding why and where optimization occurs in engineering and scientific
problem solving.
Recognizing the difference between one-dimensional and multidimensional
optimization.
Distinguishing between global and local optima.
Knowing how to recast a maximization problem so that it can be solved with a
minimizing algorithm.
Being able to define the golden ratio and understand why it makes one-
dimensional optimization efficient.
Locating the optimum of a single-variable function with the golden-section search.
Locating the optimum of a single-variable function with parabolic interpolation.
Knowing how to apply the fminbnd function to determine the minimum of a
one-dimensional function.
Being able to develop MATLAB contour and surface plots to visualize two-
dimensional functions.
Knowing how to apply the fminsearch function to determine the minimum of a
multidimensional function.
YOU’VE GOT A PROBLEM
An object like a bungee jumper can be projected upward at a specified velocity. If it is subject to linear drag, its altitude as a function of time can be computed as

z = z0 + (m/c)(v0 + mg/c)(1 − e^(−(c/m)t)) − (mg/c)t    (7.1)
FIGURE 7.1
Elevation as a function of time for an object initially projected upward with an initial velocity.
where z = altitude (m) above the earth's surface (defined as z = 0), z0 = the initial altitude (m), m = mass (kg), c = a linear drag coefficient (kg/s), v0 = initial velocity (m/s), and t = time (s). Note that for this formulation, positive velocity is considered to be in the upward direction. Given the following parameter values: g = 9.81 m/s^2, z0 = 100 m, v0 = 55 m/s, m = 80 kg, and c = 15 kg/s, Eq. (7.1) can be used to calculate the jumper's altitude. As
displayed in Fig. 7.1, the jumper rises to a peak elevation of about 190 m at about t = 4 s.
Suppose that you are given the job of determining the exact time of the peak elevation.
The determination of such extreme values is referred to as optimization. This chapter will
introduce you to how the computer is used to make such determinations.
7.1 INTRODUCTION AND BACKGROUND
In the most general sense, optimization is the process of creating something that is as
effective as possible. As engineers, we must continuously design devices and products that
perform tasks in an efficient fashion for the least cost. Thus, engineers are always con-
fronting optimization problems that attempt to balance performance and limitations. In
addition, scientists have interest in optimal phenomena ranging from the peak elevation of
projectiles to the minimum free energy.
From a mathematical perspective, optimization deals with finding the maxima and
minima of a function that depends on one or more variables. The goal is to determine the
values of the variables that yield maxima or minima for the function. These can then be
substituted back into the function to compute its optimal values.
Although these solutions can sometimes be obtained analytically, most practical
optimization problems require numerical, computer solutions. From a numerical stand-
point, optimization is similar in spirit to the root-location methods we just covered in
Chaps. 5 and 6. That is, both involve guessing and searching for a point on a function. The
fundamental difference between the two types of problems is illustrated in Fig. 7.2. Root
location involves searching for the location where the function equals zero. In contrast,
optimization involves searching for the function's extreme points.
FIGURE 7.2
A function of a single variable illustrating the difference between roots and optima.
As can be seen in Fig. 7.2, the optima are the points where the curve is flat. In mathematical terms, this corresponds to the x value where the derivative f′(x) is equal to zero. Additionally, the second derivative, f″(x), indicates whether the optimum is a minimum or a maximum: if f″(x) < 0, the point is a maximum; if f″(x) > 0, the point is a minimum.
Now, understanding the relationship between roots and optima suggests a possible strategy for finding the latter. That is, you can differentiate the function and locate the root (i.e., the zero) of the new function. In fact, some optimization methods do just this by solving the root problem: f′(x) = 0.
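This derivative-root strategy can be sketched in a few lines of Python (a generic illustration of the idea, not code from this chapter): bisection is applied to f′(x) for a simple test function whose optimum is known in advance.

```python
def bisect_root(g, lo, hi, tol=1e-10):
    """Find a root of g in [lo, hi] by bisection (g must change sign there)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid          # sign change in the left half
        else:
            lo = mid          # sign change in the right half
    return 0.5 * (lo + hi)

# f(x) = (x - 2)**2 has its minimum where f'(x) = 2*(x - 2) = 0
fprime = lambda x: 2.0 * (x - 2.0)
xstar = bisect_root(fprime, 0.0, 5.0)   # optimum located at x = 2
```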
EXAMPLE 7.1 Determining the Optimum Analytically by Root Location
Problem Statement.  Determine the time and magnitude of the peak elevation based on Eq. (7.1). Use the following parameter values for your calculation: g = 9.81 m/s^2, z0 = 100 m, v0 = 55 m/s, m = 80 kg, and c = 15 kg/s.
Solution.  Equation (7.1) can be differentiated to give

dz/dt = v0 e^(−(c/m)t) − (mg/c)(1 − e^(−(c/m)t))    (E7.1.1)
Note that because v = dz/dt, this is actually the equation for the velocity. The maximum elevation occurs at the value of t that drives this equation to zero. Thus, the problem amounts to determining the root. For this case, this can be accomplished by setting the derivative to zero and solving Eq. (E7.1.1) analytically for

t = (m/c) ln(1 + c v0/(mg))

Substituting the parameters gives

t = (80/15) ln(1 + 15(55)/(80 × 9.81)) = 3.83166 s
This value along with the parameters can then be substituted into Eq. (7.1) to compute the maximum elevation as

z = 100 + (80/15)(55 + 80(9.81)/15)(1 − e^(−(15/80)3.83166)) − (80(9.81)/15)(3.83166) = 192.8609 m
We can verify that the result is a maximum by differentiating Eq. (E7.1.1) to obtain the second derivative

d^2z/dt^2 = −(c/m) v0 e^(−(c/m)t) − g e^(−(c/m)t) = −9.81 m/s^2
The fact that the second derivative is negative tells us that we have a maximum. Further,
the result makes physical sense since the acceleration should be solely equal to the force of
gravity at the maximum when the vertical velocity (and hence drag) is zero.
Although an analytical solution was possible for this case, we could have obtained the
same result using the root-location methods described in Chaps. 5 and 6. This will be left
as a homework exercise.
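The numbers in this example are easy to cross-check numerically; the Python sketch below simply transcribes the analytic formulas (the same parameter values as the example):

```python
import math

g, z0, v0, m, c = 9.81, 100.0, 55.0, 80.0, 15.0

# analytic time of the peak elevation: t = (m/c) ln(1 + c*v0/(m*g))
t_peak = (m / c) * math.log(1 + c * v0 / (m * g))

# Eq. (7.1) evaluated at the peak
z_peak = z0 + m / c * (v0 + m * g / c) * (1 - math.exp(-c / m * t_peak)) \
         - m * g / c * t_peak

# the velocity, Eq. (E7.1.1), should vanish at the peak
v_peak = v0 * math.exp(-c / m * t_peak) \
         - m * g / c * (1 - math.exp(-c / m * t_peak))
```

The results reproduce t = 3.83166 s and z = 192.8609 m, with the velocity equal to zero at the peak.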
Although it is certainly possible to approach optimization as a roots problem, a variety of direct numerical optimization methods are available. These methods are available for both one-dimensional and multidimensional problems. As the name implies, one-dimensional problems involve functions that depend on a single independent variable. As in Fig. 7.3a, the search then consists of climbing or descending one-dimensional peaks and valleys. Multidimensional problems involve functions that depend on two or more independent variables.
FIGURE 7.3
(a) One-dimensional optimization. This figure also illustrates how minimization of f (x) is
equivalent to the maximization of − f (x). (b) Two-dimensional optimization. Note that this
figure can be taken to represent either a maximization (contours increase in elevation up to
the maximum like a mountain) or a minimization (contours decrease in elevation down to the
minimum like a valley).
In the same spirit, a two-dimensional optimization can again be visualized as searching out peaks and valleys (Fig. 7.3b). However, just as in real hiking, we are not constrained to walk in a single direction; instead, the topography is examined to efficiently reach the goal.
Finally, the process of finding a maximum versus finding a minimum is essentially identical because the same value x* both minimizes f(x) and maximizes −f(x). This equivalence is illustrated graphically for a one-dimensional function in Fig. 7.3a.
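The equivalence can be demonstrated with a coarse grid search in Python (a sketch only; the 0.001 s grid spacing is an arbitrary choice of mine, not the chapter's method): the t that maximizes the altitude z(t) of Eq. (7.1) is the same t that minimizes −z(t).

```python
import math

g, z0, v0, m, c = 9.81, 100.0, 55.0, 80.0, 15.0
z = lambda t: z0 + m/c*(v0 + m*g/c)*(1 - math.exp(-c/m*t)) - m*g/c*t

ts = [i * 0.001 for i in range(8001)]        # grid on [0, 8] s
t_max = max(ts, key=z)                       # maximize z directly
t_min = min(ts, key=lambda t: -z(t))         # minimize -z instead
```

Both searches land on the same grid point near t = 3.83 s.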
In the next section, we will describe some of the more common approaches for one-
dimensional optimization. Then we will provide a brief description of how MATLAB can
be employed to determine optima for multidimensional functions.
7.2 ONE-DIMENSIONAL OPTIMIZATION
This section will describe techniques to find the minimum or maximum of a function of a single variable f(x). A useful image in this regard is the one-dimensional, roller coaster-like function depicted in Fig. 7.4. Recall from Chaps. 5 and 6 that root location was complicated by the fact that several roots can occur for a single function. Similarly, both local and global optima can occur in optimization.
A global optimum represents the very best solution. A local optimum, though not the
very best, is better than its immediate neighbors. Cases that include local optima are called
multimodal. In such cases, we will almost always be interested in finding the global optimum.
In addition, we must be concerned about mistaking a local result for the global optimum.
Just as in root location, optimization in one dimension can be divided into bracketing
and open methods. As described in the next section, the golden-section search is an example
of a bracketing method that is very similar in spirit to the bisection method for root location.
This is followed by a somewhat more sophisticated bracketing approach, parabolic interpolation. We will then show how these two methods are combined and implemented with MATLAB's fminbnd function.
FIGURE 7.4
A function that asymptotically approaches zero at plus and minus ∞ and has two maximum and
two minimum points in the vicinity of the origin. The two points to the right are local optima,
whereas the two to the left are global.
7.2.1 Golden-Section Search
In many cultures, certain numbers are ascribed magical qualities. For example, we in the West are all familiar with "lucky 7" and "Friday the 13th." Beyond such superstitious quantities, there are several well-known numbers that have such interesting and powerful mathematical properties that they could truly be called magical. The most common of these are the ratio of a circle's circumference to its diameter, π, and the base of the natural logarithm, e.
Although not as widely known, the golden ratio should surely be included in the pantheon of remarkable numbers. This quantity, which is typically represented by the Greek letter φ (pronounced "fee"), was originally defined by Euclid (ca. 300 BCE) because of its role in the construction of the pentagram, or five-pointed star. As depicted in Fig. 7.5, Euclid's definition reads: "A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser."
The actual value of the golden ratio can be derived by expressing Euclid's definition as

(ℓ1 + ℓ2)/ℓ1 = ℓ1/ℓ2    (7.2)

Multiplying by ℓ1/ℓ2 and collecting terms yields

φ^2 − φ − 1 = 0    (7.3)

where φ = ℓ1/ℓ2. The positive root of this equation is the golden ratio:

φ = (1 + √5)/2 = 1.61803398874989...    (7.4)
The golden ratio has long been considered aesthetically pleasing in Western cultures.
In addition, it arises in a variety of other contexts including biology. For our purposes, it
provides the basis for the golden-section search, a simple, general-purpose method for de-
termining the optimum of a single-variable function.
The golden-section search is similar in spirit to the bisection approach for locating roots in Chap. 5. Recall that bisection hinged on defining an interval, specified by a lower guess (xl) and an upper guess (xu), that bracketed a single root. The presence of a root between these bounds was verified by determining that f(xl) and f(xu) had different signs. The root was then estimated as the midpoint of this interval:

xr = (xl + xu)/2    (7.5)
FIGURE 7.5
Euclid’s definition of the golden ratio is based on dividing a line into two segments so that the
ratio of the whole line to the larger segment is equal to the ratio of the larger segment to the
smaller segment. This ratio is called the golden ratio.
The final step in a bisection iteration involved determining a new smaller bracket. This was done by replacing whichever of the bounds xl or xu had a function value with the same sign as f(xr). A key advantage of this approach was that the new value xr replaced one of the old bounds.
Now suppose that instead of a root, we were interested in determining the minimum of a one-dimensional function. As with bisection, we can start by defining an interval that contains a single answer. That is, the interval should contain a single minimum, and hence is called unimodal. We can adopt the same nomenclature as for bisection, where xl and xu defined the lower and upper bounds, respectively, of such an interval. However, in contrast to bisection, we need a new strategy for finding a minimum within the interval. Rather than using a single intermediate value (which is sufficient to detect a sign change, and hence a zero), we would need two intermediate function values to detect whether a minimum occurred.
The key to making this approach efficient is the wise choice of the intermediate points. As in bisection, the goal is to minimize function evaluations by replacing old values with new values. For bisection, this was accomplished by choosing the midpoint. For the golden-section search, the two intermediate points are chosen according to the golden ratio:

x1 = xl + d    (7.6)
x2 = xu − d    (7.7)

where

d = (φ − 1)(xu − xl)    (7.8)
The function is evaluated at these two interior points. Two results can occur:
1. If, as in Fig. 7.6a, f(x1) < f(x2), then f(x1) is the minimum, and the domain of x to the left of x2, from xl to x2, can be eliminated because it does not contain the minimum. For this case, x2 becomes the new xl for the next round.
2. If f(x2) < f(x1), then f(x2) is the minimum, and the domain of x to the right of x1, from x1 to xu, would be eliminated. For this case, x1 becomes the new xu for the next round.
Now, here is the real benefit from the use of the golden ratio. Because the original x1 and x2 were chosen using the golden ratio, we do not have to recalculate all the function values for the next iteration. For example, for the case illustrated in Fig. 7.6, the old x1 becomes the new x2. This means that we already have the value for the new f(x2), since it is the same as the function value at the old x1.
To complete the algorithm, we need only determine the new x1. This is done with Eq. (7.6) with d computed with Eq. (7.8) based on the new values of xl and xu. A similar approach would be used for the alternate case where the optimum fell in the left subinterval. For this case, the new x2 would be computed with Eq. (7.7).
As the iterations are repeated, the interval containing the extremum is reduced rapidly. In fact, each round the interval is reduced by a factor of φ − 1 (about 61.8%). That means that after 10 rounds, the interval is shrunk to about 0.618^10 or 0.008, that is, 0.8% of its initial length. After 20 rounds, it is about 0.0066%. This is not quite as good as the reduction achieved with bisection (50%), but this is a harder problem.
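The interval-reduction arithmetic above is easy to verify directly (a quick Python check):

```python
import math

phi = (1 + math.sqrt(5)) / 2
r = phi - 1              # reduction factor per round, about 0.618

after_10 = r ** 10       # about 0.008, i.e., 0.8% of the initial length
after_20 = r ** 20       # about 6.6e-5, i.e., about 0.0066%
```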
FIGURE 7.6
(a) The initial step of the golden-section search algorithm involves choosing two interior points
according to the golden ratio. (b) The second step involves defining a new interval that
encompasses the optimum.
EXAMPLE 7.2 Golden-Section Search
Problem Statement. Use the golden-section search to find the minimum of
f(x) = x^2/10 − 2 sin x

within the interval from xl = 0 to xu = 4.

Solution.  First, the golden ratio is used to create the two interior points:

d = 0.61803(4 − 0) = 2.4721
x1 = 0 + 2.4721 = 2.4721
x2 = 4 − 2.4721 = 1.5279
The function can be evaluated at the interior points:

f(x2) = 1.5279^2/10 − 2 sin(1.5279) = −1.7647
f(x1) = 2.4721^2/10 − 2 sin(2.4721) = −0.6300

Because f(x2) < f(x1), our best estimate of the minimum at this point is that it is located at x = 1.5279 with a value of f(x) = −1.7647. In addition, we also know that the minimum is in the interval defined by xl, x2, and x1. Thus, for the next iteration, the lower bound remains xl = 0, and x1 becomes the upper bound, that is, xu = 2.4721. In addition, the former x2 value becomes the new x1, that is, x1 = 1.5279. Further, we do not have to recalculate f(x1); it was determined on the previous iteration as f(1.5279) = −1.7647.
All that remains is to use Eqs. (7.8) and (7.7) to compute the new values of d and x2:

d = 0.61803(2.4721 − 0) = 1.5279
x2 = 2.4721 − 1.5279 = 0.9443

The function evaluation at x2 is f(0.9443) = −1.5310. Since this value is greater than the function value at x1, the minimum is f(1.5279) = −1.7647, and it is in the interval prescribed by x2, x1, and xu. The process can be repeated, with the results tabulated here:
i   xl       f(xl)     x2       f(x2)     x1       f(x1)     xu       f(xu)     d
1 0 0 1.5279 −1.7647 2.4721 −0.6300 4.0000 3.1136 2.4721
2 0 0 0.9443 −1.5310 1.5279 −1.7647 2.4721 −0.6300 1.5279
3 0.9443 −1.5310 1.5279 −1.7647 1.8885 −1.5432 2.4721 −0.6300 0.9443
4 0.9443 −1.5310 1.3050 −1.7595 1.5279 −1.7647 1.8885 −1.5432 0.5836
5 1.3050 −1.7595 1.5279 −1.7647 1.6656 −1.7136 1.8885 −1.5432 0.3607
6 1.3050 −1.7595 1.4427 −1.7755 1.5279 −1.7647 1.6656 −1.7136 0.2229
7 1.3050 −1.7595 1.3901 −1.7742 1.4427 −1.7755 1.5279 −1.7647 0.1378
8 1.3901 −1.7742 1.4427 −1.7755 1.4752 −1.7732 1.5279 −1.7647 0.0851
Note that the minimum estimate at each iteration is the interior point with the lowest function value. After the eighth iteration, the minimum occurs at x = 1.4427 with a function value of −1.7755. Thus, the result is converging on the true value of −1.7757 at x = 1.4276.
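The first two iterations of this example can be reproduced in a few lines of Python (a sketch using the same numbers as the table above):

```python
import math

f = lambda x: x**2 / 10 - 2 * math.sin(x)
phi = (1 + math.sqrt(5)) / 2

# iteration 1: interior points in [0, 4]
xl, xu = 0.0, 4.0
d = (phi - 1) * (xu - xl)
x1, x2 = xl + d, xu - d          # 2.4721 and 1.5279

# f(x2) < f(x1), so the old x1 becomes the new upper bound
xl, xu = 0.0, x1
x1_new = x2                      # old x2 is reused as the new x1
d = (phi - 1) * (xu - xl)
x2_new = xu - d                  # 0.9443
```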
Recall that for bisection (Sec. 5.4), an exact upper bound for the error can be calculated at each iteration. Using similar reasoning, an upper bound for the golden-section search can be derived as follows: Once an iteration is complete, the optimum will fall in one of two intervals. If the optimum function value is at x2, it will be in the lower interval (xl, x2, x1). If the optimum function value is at x1, it will be in the upper interval (x2, x1, xu). Because the interior points are symmetrical, either case can be used to define the error.
Looking at the upper interval (x2, x1, xu), if the true value were at the far left, the maximum distance from the estimate would be

Δxa = x1 − x2
    = xl + (φ − 1)(xu − xl) − xu + (φ − 1)(xu − xl)
    = (xl − xu) + 2(φ − 1)(xu − xl)
    = (2φ − 3)(xu − xl)

or 0.2361(xu − xl). If the true value were at the far right, the maximum distance from the estimate would be

Δxb = xu − x1
    = xu − xl − (φ − 1)(xu − xl)
    = (xu − xl) − (φ − 1)(xu − xl)
    = (2 − φ)(xu − xl)

or 0.3820(xu − xl). Therefore, this case would represent the maximum error. This result can then be normalized to the optimal value for that iteration, xopt, to yield

εa = (2 − φ) |(xu − xl)/xopt| × 100%    (7.9)
This estimate provides a basis for terminating the iterations.
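The two coefficients in this derivation, and Eq. (7.9) itself, are easy to confirm numerically (a Python sketch; the sample bracket values are taken from iteration 1 of Example 7.2):

```python
import math

phi = (1 + math.sqrt(5)) / 2
left_err = 2 * phi - 3       # about 0.2361: far-left case
right_err = 2 - phi          # about 0.3820: far-right case, the larger bound

# Eq. (7.9) for the bracket after iteration 1 of Example 7.2
xl, xu, xopt = 0.0, 2.4721, 1.5279
ea = right_err * abs((xu - xl) / xopt) * 100   # percent
```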
An M-file function for the golden-section search for minimization is presented in
Fig. 7.7. The function returns the location of the minimum, the value of the function, the
approximate error, and the number of iterations.
The M-file can be used to solve the problem from Example 7.1.
g=9.81;v0=55;m=80;c=15;z0=100;
z=@(t) -(z0+m/c*(v0+m*g/c)*(1-exp(-c/m*t))-m*g/c*t);
[xmin,fmin,ea,iter]=goldmin(z,0,8)
xmin =
    3.8317
fmin =
 -192.8609
ea =
  6.9356e-005
Notice that because this is a maximization problem, we have entered the negative of Eq. (7.1). Consequently, fmin corresponds to a maximum height of 192.8609 m.
You may be wondering why we have stressed the reduced function evaluations of the
golden-section search. Of course, for solving a single optimization, the speed savings
would be negligible. However, there are two important contexts where minimizing the
number of function evaluations can be important. These are
1. Many evaluations. There are cases where the golden-section search algorithm may be a
part of a much larger calculation. In such cases, it may be called many times. Therefore,
keeping function evaluations to a minimum could pay great dividends for such cases.
function [x,fx,ea,iter]=goldmin(f,xl,xu,es,maxit,varargin)
% goldmin: minimization golden section search
%   [x,fx,ea,iter]=goldmin(f,xl,xu,es,maxit,p1,p2,...):
%       uses golden section search to find the minimum of f
% input:
%   f = name of function
%   xl, xu = lower and upper guesses
%   es = desired relative error (default = 0.0001%)
%   maxit = maximum allowable iterations (default = 50)
%   p1,p2,... = additional parameters used by f
% output:
%   x = location of minimum
%   fx = minimum function value
%   ea = approximate relative error (%)
%   iter = number of iterations
if nargin<3,error('at least 3 input arguments required'),end
if nargin<4|isempty(es), es=0.0001;end
if nargin<5|isempty(maxit), maxit=50;end
phi=(1+sqrt(5))/2;
iter=0;
while(1)
  d = (phi-1)*(xu - xl);
  x1 = xl + d;
  x2 = xu - d;
  if f(x1,varargin{:}) < f(x2,varargin{:})
    xopt = x1;
    xl = x2;
  else
    xopt = x2;
    xu = x1;
  end
  iter=iter+1;
  if xopt~=0, ea = (2 - phi)*abs((xu - xl)/xopt)*100; end
  if ea <= es | iter >= maxit, break, end
end
x=xopt; fx=f(xopt,varargin{:});
FIGURE 7.7
An M-file to determine the minimum of a function with the golden-section search.
2. Time-consuming evaluation. For pedagogical reasons, we use simple functions in
most of our examples. You should understand that a function can be very complex
and time-consuming to evaluate. For example, optimization can be used to estimate
the parameters of a model consisting of a system of differential equations. For such
cases, the function involves time-consuming model integration. Any method that
minimizes such evaluations would be advantageous.
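For readers working outside MATLAB, the logic of goldmin translates almost line for line. Here is a rough Python port (a sketch, not the book's code; like the M-file, it recomputes both interior values each pass rather than reusing one):

```python
import math

def goldmin(f, xl, xu, es=1e-4, maxit=50):
    """Golden-section search for a minimum of f on [xl, xu].
    Returns (x, f(x), approximate relative error in %, iterations)."""
    phi = (1 + math.sqrt(5)) / 2
    ea = float("inf")
    for it in range(1, maxit + 1):
        d = (phi - 1) * (xu - xl)
        x1, x2 = xl + d, xu - d
        if f(x1) < f(x2):
            xopt, xl = x1, x2        # minimum lies in [x2, xu]
        else:
            xopt, xu = x2, x1        # minimum lies in [xl, x1]
        if xopt != 0:
            ea = (2 - phi) * abs((xu - xl) / xopt) * 100   # Eq. (7.9)
        if ea <= es:
            break
    return xopt, f(xopt), ea, it

x, fx, ea, it = goldmin(lambda x: x**2 / 10 - 2 * math.sin(x), 0, 4)
```

Applied to the test function of Example 7.2, it converges to the same minimum near x = 1.4276.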
FIGURE 7.8
Graphical depiction of parabolic interpolation.
7.2.2 Parabolic Interpolation
Parabolic interpolation takes advantage of the fact that a second-order polynomial often
provides a good approximation to the shape of f(x) near an optimum (Fig. 7.8).
Just as there is only one straight line connecting two points, there is only one parabola
connecting three points. Thus, if we have three points that jointly bracket an optimum, we
can fit a parabola to the points. Then we can differentiate it, set the result equal to zero, and
solve for an estimate of the optimal x. It can be shown through some algebraic manipula-
tions that the result is
x4 = x2 − (1/2) [(x2 − x1)^2 [f(x2) − f(x3)] − (x2 − x3)^2 [f(x2) − f(x1)]] / [(x2 − x1)[f(x2) − f(x3)] − (x2 − x3)[f(x2) − f(x1)]]    (7.10)

where x1, x2, and x3 are the initial guesses, and x4 is the value of x that corresponds to the optimum value of the parabolic fit to the guesses.
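Equation (7.10) is simple to implement. The Python sketch below (with generic test points of my own choosing, not the chapter's) recovers the vertex of a known parabola exactly:

```python
def parabolic_step(x1, x2, x3, f1, f2, f3):
    """Vertex x-coordinate of the parabola through three points, Eq. (7.10)."""
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# three points on f(x) = (x - 2)**2; the vertex is at x = 2
f = lambda x: (x - 2.0)**2
x4 = parabolic_step(0.0, 1.0, 4.0, f(0.0), f(1.0), f(4.0))
```

Because the test function is itself a parabola, one step lands on the optimum exactly.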
EXAMPLE 7.3 Parabolic Interpolation
Problem Statement. Use parabolic interpolation to approximate the minimum of
f(x) = x^2/10 − 2 sin x

with initial guesses of x1 = 0, x2 = 1, and x3 = 4.

Solution.  The function values at the three guesses can be evaluated:

x1 = 0    f(x1) = 0
x2 = 1    f(x2) = −1.5829
x3 = 4    f(x3) = 3.1136
and substituted into Eq. (7.10) to give

x4 = 1 − (1/2) [(1 − 0)^2 (−1.5829 − 3.1136) − (1 − 4)^2 (−1.5829 − 0)] / [(1 − 0)(−1.5829 − 3.1136) − (1 − 4)(−1.5829 − 0)] = 1.5055

which has a function value of f(1.5055) = −1.7691.
Next, a strategy similar to the golden-section search can be employed to determine which point should be discarded. Because the function value for the new point is lower than for the intermediate point (x2) and the new x value is to the right of the intermediate point, the lower guess (x1) is discarded. Therefore, for the next iteration:

x1 = 1         f(x1) = −1.5829
x2 = 1.5055    f(x2) = −1.7691
x3 = 4         f(x3) = 3.1136
which can be substituted into Eq. (7.10) to give

x4 = 1.5055 − (1/2) [(1.5055 − 1)^2 (−1.7691 − 3.1136) − (1.5055 − 4)^2 (−1.7691 − (−1.5829))] / [(1.5055 − 1)(−1.7691 − 3.1136) − (1.5055 − 4)(−1.7691 − (−1.5829))] = 1.4903
which has a function value of f(1.4903) =−1.7714. The process can be repeated, with the
results tabulated here:
i   x1       f(x1)      x2       f(x2)      x3       f(x3)      x4       f(x4)
1 0.0000 0.0000 1.0000 −1.5829 4.0000 3.1136 1.5055 −1.7691
2 1.0000 −1.5829 1.5055 −1.7691 4.0000 3.1136 1.4903 −1.7714
3 1.0000 −1.5829 1.4903 −1.7714 1.5055 −1.7691 1.4256 −1.7757
4 1.0000 −1.5829 1.4256 −1.7757 1.4903 −1.7714 1.4266 −1.7757
5 1.4256 −1.7757 1.4266 −1.7757 1.4903 −1.7714 1.4275 −1.7757
Thus, within five iterations, the result is converging rapidly on the true value of −1.7757
at x = 1.4276.
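The example's iterations can be reproduced with a short Python loop (a sketch; the point-replacement rule follows the strategy described above):

```python
import math

f = lambda x: x**2 / 10 - 2 * math.sin(x)
x1, x2, x3 = 0.0, 1.0, 4.0           # initial bracketing guesses

for _ in range(5):
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    x4 = x2 - 0.5 * num / den        # Eq. (7.10)
    if f(x4) < f2:                   # new point becomes the middle point
        if x4 > x2:
            x1, x2 = x2, x4
        else:
            x3, x2 = x2, x4
    else:                            # otherwise it replaces an outer guess
        if x4 > x2:
            x3 = x4
        else:
            x1 = x4
```

After five passes the middle point matches the last row of the table, about x = 1.4275.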
7.2.3 MATLAB Function: fminbnd
Recall that in Sec. 6.4 we described Brent's method for root location, which combined several root-finding methods into a single algorithm that balanced reliability with efficiency. Because of these qualities, it forms the basis for the built-in MATLAB function fzero.
Brent also developed a similar approach for one-dimensional minimization which
forms the basis for the MATLAB fminbnd function. It combines the slow, dependable
golden-section search with the faster, but possibly unreliable, parabolic interpolation. It
first attempts parabolic interpolation and keeps applying it as long as acceptable results are
obtained. If not, it uses the golden-section search to get matters in hand.
A simple expression of its syntax is

[xmin, fval] = fminbnd(function,x1,x2)

where xmin and fval are the location and value of the minimum, function is the name of the function being evaluated, and x1 and x2 are the bounds of the interval being searched.
Here is a simple MATLAB session that uses fminbnd to solve the problem from
Example 7.1.
g=9.81;v0=55;m=80;c=15;z0=100;
z=@(t) -(z0+m/c*(v0+m*g/c)*(1-exp(-c/m*t))-m*g/c*t);
[x,f]=fminbnd(z,0,8)
x =
3.8317
f =
-192.8609
As with fzero, optional parameters can be specified using optimset. For example,
we can display calculation details:
options = optimset('display','iter');
fminbnd(z,0,8,options)
Func-count x f(x) Procedure
1 3.05573 -189.759 initial
2 4.94427 -187.19 golden
3 1.88854 -171.871 golden
4 3.87544 -192.851 parabolic
5 3.85836 -192.857 parabolic
6 3.83332 -192.861 parabolic
7 3.83162 -192.861 parabolic
8 3.83166 -192.861 parabolic
9 3.83169 -192.861 parabolic
Optimization terminated:
the current x satisfies the termination criteria using
OPTIONS.TolX of 1.000000e-004
ans =
3.8317
Thus, after three iterations, the method switches from golden to parabolic, and after eight
iterations, the minimum is determined to a tolerance of 0.0001.
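The switching idea behind this hybrid can be sketched in Python (a simplified caricature of Brent's approach, assuming three bracketing guesses rather than two bounds; the real fminbnd adds further safeguards): take the parabolic step when it lands inside the bracket and makes reasonable progress, otherwise fall back to a golden-section step.

```python
import math

def hybrid_min(f, x1, x2, x3, tol=1e-8, maxit=100):
    """Minimize f given x1 < x2 < x3 with f(x2) below both endpoint values."""
    phi = (1 + math.sqrt(5)) / 2
    f1, f2, f3 = f(x1), f(x2), f(x3)
    for _ in range(maxit):
        num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
        den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
        ok = den != 0
        if ok:
            x4 = x2 - 0.5 * num / den                  # parabolic candidate
            ok = x1 < x4 < x3 and abs(x4 - x2) < 0.5 * (x3 - x1)
        if not ok:                                     # golden-section fallback
            if x3 - x2 > x2 - x1:
                x4 = x2 + (2 - phi) * (x3 - x2)
            else:
                x4 = x2 - (2 - phi) * (x2 - x1)
        f4 = f(x4)
        if x4 < x2:                                    # shrink the bracket
            if f4 < f2:
                x3, f3, x2, f2 = x2, f2, x4, f4
            else:
                x1, f1 = x4, f4
        else:
            if f4 < f2:
                x1, f1, x2, f2 = x2, f2, x4, f4
            else:
                x3, f3 = x4, f4
        if x3 - x1 < tol:
            break
    return x2, f2

# applied to the negated bungee altitude of Eq. (7.1)
g, z0, v0, m, c = 9.81, 100.0, 55.0, 80.0, 15.0
z_neg = lambda t: -(z0 + m/c*(v0 + m*g/c)*(1 - math.exp(-c/m*t)) - m*g/c*t)
t_peak, f_peak = hybrid_min(z_neg, 0.0, 3.0, 8.0)
```

On this smooth function the parabolic step is accepted almost every iteration, mirroring the fminbnd trace above.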
7.3 MULTIDIMENSIONAL OPTIMIZATION
Aside from one-dimensional functions, optimization also deals with multidimensional
functions. Recall from Fig. 7.3a that our visual image of a one-dimensional search was like
a roller coaster. For two-dimensional cases, the image becomes that of mountains and
valleys (Fig. 7.3b). As in the following example, MATLABs graphic capabilities provide
a handy means to visualize such functions.
EXAMPLE 7.4 Visualizing a Two-Dimensional Function
Problem Statement.  Use MATLAB's graphical capabilities to display the following function and visually estimate its minimum in the range −2 ≤ x1 ≤ 0 and 0 ≤ x2 ≤ 3:

f(x1, x2) = 2 + x1 − x2 + 2x1^2 + 2x1x2 + x2^2
Solution.  The following script generates contour and mesh plots of the function:

x=linspace(-2,0,40);y=linspace(0,3,40);
[X,Y] = meshgrid(x,y);
Z=2+X-Y+2*X.^2+2*X.*Y+Y.^2;
subplot(1,2,1);
cs=contour(X,Y,Z);clabel(cs);
xlabel('x_1');ylabel('x_2');
title('(a) Contour plot');grid;
subplot(1,2,2);
cs=surfc(X,Y,Z);
zmin=floor(min(Z));
zmax=ceil(max(Z));
xlabel('x_1');ylabel('x_2');zlabel('f(x_1,x_2)');
title('(b) Mesh plot');
As displayed in Fig. 7.9, both plots indicate that the function has a minimum value of about f(x1, x2) = 0 to 1 located at about x1 = −1 and x2 = 1.5.
FIGURE 7.9
(a) Contour and (b) mesh plots of a two-dimensional function.
Techniques for multidimensional unconstrained optimization can be classified in a
number of ways. For purposes of the present discussion, we will divide them depending
on whether they require derivative evaluation. Those that require derivatives are called
gradient, or descent (or ascent), methods. The approaches that do not require derivative evaluation are called nongradient, or direct, methods. As described next, the built-in MATLAB function fminsearch is a direct method.
7.3.1 MATLAB Function: fminsearch
Standard MATLAB has a function fminsearch that can be used to determine the mini-
mum of a multidimensional function. It is based on the Nelder-Mead method, which is a
direct-search method that uses only function values (does not require derivatives) and
handles non-smooth objective functions. A simple expression of its syntax is
[xmin, fval] = fminsearch(function,x0)

where xmin and fval are the location and value of the minimum, function is the name of the function being evaluated, and x0 is the initial guess. Note that x0 can be a scalar, vector, or a matrix.
Here is a simple MATLAB session that uses fminsearch to determine the minimum of the function we just graphed in Example 7.4:
f=@(x) 2+x(1)-x(2)+2*x(1)^2+2*x(1)*x(2)+x(2)^2;
[x,fval]=fminsearch(f,[-0.5,0.5])
x =
-1.0000 1.5000
fval =
0.7500
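The result is easy to confirm by hand, since setting the partial derivatives of this quadratic to zero yields a small linear system (a Python sketch of the check):

```python
# f(x1,x2) = 2 + x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2
# df/dx1 = 1 + 4*x1 + 2*x2 = 0
# df/dx2 = -1 + 2*x1 + 2*x2 = 0
# subtracting the second equation from the first gives 2 + 2*x1 = 0
x1 = -1.0
x2 = (1 - 2 * x1) / 2            # from the second equation
fval = 2 + x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2
```

This reproduces the fminsearch output: x1 = −1, x2 = 1.5, and f = 0.75.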
7.4 CASE STUDY EQUILIBRIUM AND MINIMUM POTENTIAL ENERGY
Background.  As in Fig. 7.10a, an unloaded spring can be attached to a wall mount. When a horizontal force is applied, the spring stretches. The displacement is related to the force by Hooke's law, F = kx. The potential energy of the deformed state consists of the difference between the strain energy of the spring and the work done by the force:

PE(x) = 0.5kx^2 − Fx    (7.11)
FIGURE 7.10
(a) An unloaded spring attached to a wall mount. (b) Application of a horizontal force stretches
the spring where the relationship between force and displacement is described by Hooke’s law.
Equation (7.11) defines a parabola. Since the potential energy will be at a minimum at
equilibrium, the solution for displacement can be viewed as a one-dimensional optimiza-
tion problem. Because this equation is so easy to differentiate, we can solve for the dis-
placement as x = F/k. For example, if k = 2 N/cm and F = 5 N, x = 5N/(2 N/cm) =
2.5 cm.
A more interesting two-dimensional case is shown in Fig. 7.11. In this system, there are two degrees of freedom in that the system can move both horizontally and vertically. In the same way that we approached the one-dimensional system, the equilibrium deformations are the values of x1 and x2 that minimize the potential energy:

PE(x1, x2) = 0.5ka(sqrt(x1^2 + (La − x2)^2) − La)^2
           + 0.5kb(sqrt(x1^2 + (Lb + x2)^2) − Lb)^2 − F1x1 − F2x2     (7.12)

If the parameters are ka = 9 N/cm, kb = 2 N/cm, La = 10 cm, Lb = 10 cm, F1 = 2 N, and F2 = 4 N, use MATLAB to solve for the displacements and the potential energy.
FIGURE 7.11
A two-spring system: (a) unloaded and (b) loaded.
Solution. An M-file can be developed to hold the potential energy function:
function p=PE(x,ka,kb,La,Lb,F1,F2)
PEa=0.5*ka*(sqrt(x(1)^2+(La-x(2))^2)-La)^2;
PEb=0.5*kb*(sqrt(x(1)^2+(Lb+x(2))^2)-Lb)^2;
W=F1*x(1)+F2*x(2);
p=PEa+PEb-W;
The solution can be obtained with the fminsearch function:
ka=9;kb=2;La=10;Lb=10;F1=2;F2=4;
[x,f]=fminsearch(@PE,[-0.5,0.5],[],ka,kb,La,Lb,F1,F2)
x =
4.9523 1.2769
f =
-9.6422
Thus, at equilibrium, the potential energy is −9.6422 N·cm. The connecting point is located 4.9523 cm to the right and 1.2769 cm above its original position.
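The case study result can be cross-checked outside MATLAB as well. The following Python sketch (assuming SciPy; its Nelder-Mead minimizer stands in for fminsearch) minimizes the same potential energy function:

```python
# Sketch (assumes SciPy): minimize the two-spring potential energy
# of Eq. (7.12) with Nelder-Mead, mirroring the fminsearch solution.
from math import sqrt
from scipy.optimize import minimize

def PE(x, ka, kb, La, Lb, F1, F2):
    # Strain energy of springs a and b minus the work done by the forces
    PEa = 0.5*ka*(sqrt(x[0]**2 + (La - x[1])**2) - La)**2
    PEb = 0.5*kb*(sqrt(x[0]**2 + (Lb + x[1])**2) - Lb)**2
    return PEa + PEb - F1*x[0] - F2*x[1]

ka, kb, La, Lb, F1, F2 = 9, 2, 10, 10, 2, 4
res = minimize(PE, [-0.5, 0.5], args=(ka, kb, La, Lb, F1, F2),
               method='Nelder-Mead')
print(res.x)    # approximately [4.9523, 1.2769]
print(res.fun)  # approximately -9.6422
```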
PROBLEMS
7.1 Perform three iterations of the Newton-Raphson method to determine the root of Eq. (E7.1.1). Use the parameter values from Example 7.1 along with an initial guess of t = 3 s.
7.2 Given the formula
f(x) = −x^2 + 8x − 12
(a) Determine the maximum and the corresponding value of x for this function analytically (i.e., using differentiation).
(b) Verify that Eq. (7.10) yields the same results based on initial guesses of x1 = 0, x2 = 2, and x3 = 6.
7.3 Consider the following function:
f(x) = 3 + 6x + 5x^2 + 3x^3 + 4x^4
Locate the minimum by finding the root of the derivative of this function. Use bisection with initial guesses of xl = −2 and xu = 1.
7.4 Given
f(x) = −1.5x^6 − 2x^4 + 12x
(a) Plot the function.
(b) Use analytical methods to prove that the function is concave for all values of x.
(c) Differentiate the function and then use a root-location method to solve for the maximum f(x) and the corresponding value of x.
7.5 Solve for the value of x that maximizes f(x) in Prob. 7.4 using the golden-section search. Employ initial guesses of xl = 0 and xu = 2, and perform three iterations.
7.6 Repeat Prob. 7.5, except use parabolic interpolation. Employ initial guesses of x1 = 0, x2 = 1, and x3 = 2, and perform three iterations.
7.7 Employ the following methods to find the maximum of
f(x) = 4x − 1.8x^2 + 1.2x^3 − 0.3x^4
(a) Golden-section search (xl = 2, xu = 4, εs = 1%).
(b) Parabolic interpolation (x1 = 1.75, x2 = 2, x3 = 2.5, iterations = 5).
7.8 Consider the following function:
f(x) = x^4 + 2x^3 + 8x^2 + 5x
Use analytical and graphical methods to show the function has a minimum for some value of x in the range −2 ≤ x ≤ 1.
7.9 Employ the following methods to find the minimum of the function from Prob. 7.8:
(a) Golden-section search (xl = −2, xu = 1, εs = 1%).
(b) Parabolic interpolation (x1 = −2, x2 = −1, x3 = 1, iterations = 5).
7.10 Consider the following function:
f(x) = 2x + 3/x
Perform 10 iterations of parabolic interpolation to locate the minimum. Comment on the convergence of your results (x1 = 0.1, x2 = 0.5, x3 = 5).
7.11 Develop a single script to (a) generate contour and mesh subplots of the following temperature field in a similar fashion to Example 7.4:
T(x, y) = 2x^2 + 3y^2 − 4xy − y − 3x
and (b) determine the minimum with fminsearch.
7.12 The head of a groundwater aquifer is described in Cartesian coordinates by
h(x, y) = 1/(1 + x^2 + y^2 + x + xy)
Develop a single script to (a) generate contour and mesh subplots of the function in a similar fashion to Example 7.4, and (b) determine the maximum with fminsearch.
7.13 Recent interest in competitive and recreational cycling has meant that engineers have directed their skills toward the design and testing of mountain bikes (Fig. P7.13a). Suppose that you are given the task of predicting the horizontal and vertical displacement of a bike bracketing system in response to a force. Assume the forces you must analyze can be simplified as depicted in Fig. P7.13b. You are interested in testing the response of the truss to a force exerted in any number of directions designated by the angle θ. The parameters for the problem are E = Young's modulus = 2 × 10^11 Pa, A = cross-sectional area = 0.0001 m^2, w = width = 0.44 m, ℓ = length = 0.56 m, and h = height = 0.5 m. The displacements x and y can be solved by determining the values that yield a minimum potential energy. Determine the displacements for a force of 10,000 N and a range of θ's from 0° (horizontal) to 90° (vertical).

FIGURE P7.13
(a) A mountain bike along with (b) a free-body diagram for a part of the frame.

7.14 As electric current moves through a wire (Fig. P7.14), heat generated by resistance is conducted through a layer of insulation and then convected to the surrounding air. The steady-state temperature of the wire can be computed as
T = Tair + (q/(2π))[(1/k) ln((rw + ri)/rw) + 1/(h(rw + ri))]
Determine the thickness of insulation ri (m) that minimizes the wire's temperature given the following parameters: q = heat generation rate = 75 W/m, rw = wire radius = 6 mm, k = thermal conductivity of insulation = 0.17 W/(m K), h = convective heat transfer coefficient = 12 W/(m^2 K), and Tair = air temperature = 293 K.

FIGURE P7.14
Cross-section of an insulated wire.

7.15 Develop an M-file that is expressly designed to locate a maximum with the golden-section search. In other words, set it up so that it directly finds the maximum rather than finding the minimum of −f(x). The function should have the following features:
Iterate until the relative error falls below a stopping criterion or exceeds a maximum number of iterations.
Return both the optimal x and f(x).
Test your program with the same problem as Example 7.1.
7.16 Develop an M-file to locate a minimum with the golden-section search. Rather than using the maximum iterations and Eq. (7.9) as the stopping criteria, determine the number of iterations needed to attain a desired tolerance. Test your function by solving Example 7.2 using Ea,d = 0.0001.
7.17 Develop an M-file to implement parabolic interpolation to locate a minimum. The function should have the following features:
Base it on two initial guesses, and have the program generate the third initial value at the midpoint of the interval.
Check whether the guesses bracket a maximum. If not, the function should not implement the algorithm, but should return an error message.
Iterate until the relative error falls below a stopping criterion or exceeds a maximum number of iterations.
Return both the optimal x and f(x).
Test your program with the same problem as Example 7.3.
7.18 Pressure measurements are taken at certain points behind an airfoil over time. These data best fit the curve y = 6 cos x − 1.5 sin x from x = 0 to 6 s. Use four iterations of the golden-section search to find the minimum pressure. Set xl = 2 and xu = 4.
7.19 The trajectory of a ball can be computed with
y = (tan θ0)x − (g/(2v0^2 cos^2 θ0))x^2 + y0
where y = the height (m), θ0 = the initial angle (radians), v0 = the initial velocity (m/s), g = the gravitational constant = 9.81 m/s^2, and y0 = the initial height (m). Use the golden-section search to determine the maximum height given y0 = 1 m, v0 = 25 m/s, and θ0 = 50°. Iterate until the approximate error falls below εs = 1% using initial guesses of xl = 0 and xu = 60 m.
7.20 The deflection of a uniform beam subject to a linearly increasing distributed load can be computed as
y = (w0/(120EIL))(−x^5 + 2L^2x^3 − L^4x)
Given that L = 600 cm, E = 50,000 kN/cm^2, I = 30,000 cm^4, and w0 = 2.5 kN/cm, determine the point of maximum deflection (a) graphically, (b) using the golden-section search until the approximate error falls below εs = 1% with initial guesses of xl = 0 and xu = L.
7.21 An object with a mass of 90 kg is projected upward from the surface of the earth at a velocity of 60 m/s. If the object is subject to linear drag (c = 15 kg/s), use the golden-section search to determine the maximum height the object attains.
7.22 The normal distribution is a bell-shaped curve defined by
y = e^(−x^2)
Use the golden-section search to determine the location of the inflection point of this curve for positive x.
7.23 Use the fminsearch function to determine the minimum of
f(x, y) = 2y^2 − 2.25xy − 1.75y + 1.5x^2
7.24 Use the fminsearch function to determine the maximum of
f(x, y) = 4x + 2y + x^2 − 2x^4 + 2xy − 3y^2
7.25 Given the following function:
f(x, y) = −8x + x^2 + 12y + 4y^2 − 2xy
Determine the minimum (a) graphically, (b) numerically with the fminsearch function, and (c) substitute the result of (b) back into the function to determine the minimum f(x, y).
7.26 The specific growth rate of a yeast that produces an antibiotic is a function of the food concentration c:
g = 2c/(4 + 0.8c + c^2 + 0.2c^3)
As depicted in Fig. P7.26, growth goes to zero at very low concentrations due to food limitation. It also goes to zero at high concentrations due to toxicity effects. Find the value of c at which growth is a maximum.

FIGURE P7.26
The specific growth rate of a yeast that produces an antibiotic versus the food concentration.

7.27 A compound A will be converted into B in a stirred tank reactor. The product B and unreacted A are purified in a separation unit. Unreacted A is recycled to the reactor. A process engineer has found that the initial cost of the system