Lecture notes on FINANCIAL MARKETS

Marco Li Calzi
Dipartimento di Matematica Applicata
Università “Ca’ Foscari” di Venezia
February 2002

Copyright © 2002 by Marco Li Calzi

Introduction

In 2000, when Bocconi University launched its Master in Quantitative Finance, I was asked to develop a course touching upon the many aspects of decision theory, financial economics, and microstructure that could not otherwise fit in the tight schedule of the program. Reflecting this heterogeneity, the course was dubbed “Topics in Economics” and I was given a fair amount of leeway in its development. My only constraint was that I had to choose what I thought best and then compress it into 15 lectures.

These notes detail my choices after two years of teaching “Topics in Economics” at the Master in Quantitative Finance of Bocconi University and a similar class more aptly named “Microeconomics of financial markets” at the Master of Economics and Finance of the Venice International University.

The material is arranged into 15 units, upon whose contents I make no claim of originality. Each unit corresponds to a 90-minute session. Some units (most notably, Unit 5) contain too much material, reflecting either the accretion of different choices or my desire to offer a more complete view. Unit 7 requires less time than the standard session: I usually take advantage of the time left to begin exploring Unit 8.

I have constantly kept revising my choices (and my notes), and I will most likely do so in the future as well, posting updates on my website at http://helios.unive.it/~licalzi.

Contents

1. Expected utility and stochastic dominance
   1.1 Introduction
   1.2 Decisions under risk
   1.3 Decisions under uncertainty
2. Irreversible investments and flexibility
   2.1 Introduction
   2.2 Price uncertainty
   2.3 Real options
   2.4 Assessing your real option
3. Optimal growth and repeated investments
   3.1 Introduction
   3.2 An example
   3.3 The log-optimal growth strategy
   3.4 Applications
   3.5 Excursions
4. Risk aversion and mean-variance preferences
   4.1 Risk attitude
   4.2 Risk attitude and expected utility
   4.3 Mean-variance preferences
   4.4 Risk attitude and wealth
   4.5 Risk bearing over contingent outcomes
5. Information structures and no-trade theorems
   5.1 Introduction
   5.2 The Dutch book
   5.3 The red hats puzzle
   5.4 Different degrees of knowledge
   5.5 Can we agree to disagree?
   5.6 No trade under heterogenous priors
6. Herding and informational cascades
   6.1 Introduction
   6.2 Some terminology
   6.3 Public offerings and informational cascades
   6.4 Excursions
7. Normal-CARA markets
   7.1 Introduction
   7.2 Updating normal beliefs
   7.3 CARA preferences in a normal world
   7.4 Demand for a risky asset
8. Transmission of information and rational expectations
   8.1 Introduction
   8.2 An example
   8.3 Computing a rational expectations equilibrium
   8.4 An assessment of the rational expectations model
9. Market microstructure: Kyle’s model
   9.1 Introduction
   9.2 The model
   9.3 Lessons learned
   9.4 Excursions
10. Market microstructure: Glosten and Milgrom’s model
   10.1 Introduction
   10.2 The model
   10.3 An example
   10.4 Comments on the model
   10.5 Lessons learned
11. Market microstructure: market viability
   11.1 Introduction
   11.2 An example
   11.3 Competitive versus monopolistic market making
   11.4 The basic steps in the model
   11.5 Lessons learned
12. Noise trading: limits to arbitrage
   12.1 Introduction
   12.2 A model with noise trading
   12.3 Relative returns
   12.4 An appraisal
   12.5 Excursions
13. Noise trading: simulations
   13.1 Introduction
   13.2 A simple dynamic model
   13.3 An artificial stock market
14. Behavioral finance: evidence from psychology
   14.1 Introduction
   14.2 Judgement biases
   14.3 Distortions in deriving preferences
   14.4 Framing effects
15. Behavioral finance: asset pricing
   15.1 Introduction
   15.2 Myopic loss aversion
   15.3 A partial equilibrium model
   15.4 An equilibrium pricing model

1. Expected utility and stochastic dominance

1.1 Introduction

Most decisions in finance are taken under a cloud of uncertainty. When you plan to invest your money in a long-term portfolio, you do not know how much its price will be at the time of disinvesting it. Therefore, you face a problem in choosing the “right” portfolio mix. Decision theory is that branch of economic theory which works on models to help you sort out this kind of decision. There are two basic sorts of models: the first class is concerned with what is known as decisions under risk and the second class with decisions under uncertainty.

1.2 Decisions under risk

Here is a typical decision under risk. Your investment horizon is one year. There is a family of investment funds. You must invest all of your wealth in a single fund. The return on each fund is not known with certainty, but you know its distribution of past returns. For lack of better information, you have decided to use this distribution as a proxy for the probability distribution of future returns.¹

Let us model this situation. There is a set C of consequences, typified by the one-year returns you will be able to attain. There is a set A of alternatives (i.e., the funds) out of which you must choose one. Each alternative in A is associated with a probability distribution over the consequences. For instance, assuming there are only three funds, your choice problem may be summarized by the following table.

              Fund α               Fund β               Fund γ
        return   prob.ty     return   prob.ty     return   prob.ty
         -1%       20%        -3%       55%        2.5%     100%
         +2%       40%       +10%       45%
         +5%       40%

Having described the problem, the next step is to develop a systematic way to make a choice.

Def. 1.1 (Expected utility under risk) Define a real-valued utility function u over consequences. Compute the expected value of utility for each alternative. Choose an alternative which maximizes the expected utility.
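As a numerical companion to Def. 1.1, the expected utilities of the three funds can be computed directly from the table. This is only an illustrative sketch: the dictionary layout and the helper name expected_utility are mine, not notation from the notes.

```python
import math

# Return (%) distributions of the three funds, copied from the table above.
funds = {
    "alpha": [(-1, 0.20), (2, 0.40), (5, 0.40)],
    "beta":  [(-3, 0.55), (10, 0.45)],
    "gamma": [(2.5, 1.00)],
}

def expected_utility(lottery, u):
    """Expected value of u over a finite lottery [(outcome, probability), ...]."""
    return sum(p * u(r) for r, p in lottery)

# Linear utility u(r) = r.
linear = {f: expected_utility(lot, lambda r: r) for f, lot in funds.items()}
# Concave utility u(r) = sqrt(r + 3).
concave = {f: expected_utility(lot, lambda r: math.sqrt(r + 3)) for f, lot in funds.items()}

print(linear)   # beta ranks first under linear utility
print(concave)  # gamma ranks first under the concave utility
```

The two rankings disagree, which is exactly the point of the worked example that follows.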
¹ The law requires an investment fund to warn you that past returns are not guaranteed. Trusting the distribution of past returns is a choice you make at your own peril.

How would this work in practice? Suppose that your utility function over a return of r% in the previous example is u(r) = r. The expected utility of Fund α is

    U(α) = −1 · 0.2 + 2 · 0.4 + 5 · 0.4 = 2.6.

Similarly, the expected utilities of Fund β and γ are respectively U(β) = 2.85 and U(γ) = 2.5. According to the expected utility criterion, you should go for Fund β and rank α and γ respectively second and third.

If you had a different utility function, the ranking and your final choice might change. For instance, if u(r) = √(r + 3), we find U(α) ≈ 2.31, U(β) ≈ 1.62 and U(γ) ≈ 2.35. The best choice is now γ, which was third under the previous utility function.

All of this sounds fine in class, but let us look a bit more into it. Before you can get her to use this, there are a few questions that your CEO would certainly like you to answer.

Is expected utility the “right” way to decide? Thank God (or free will), nobody can pretend to answer this. Each one of us is free to develop his own way to reach a decision. However, if you want to consider what expected utility has in it, mathematicians have developed a partial answer. Using expected utility is equivalent to taking decisions that satisfy three criteria: 1) consistency; 2) continuity; 3) independence.

Consistency means that your choices do not contradict each other. If you pick α over β and β over γ, then you will pick α over γ as well. If you pick α over β, you do not pick β over α.

Continuity means that your preferences do not change abruptly if you slightly change the probabilities affecting your decision. If you pick α over β, it must be possible to generate a third alternative α′ by slightly perturbing the probabilities of α and still like α′ better than β.

Independence is the most demanding criterion. Let α and β be two alternatives. Choose a third alternative γ.
Consider two lotteries: α′ gets you α or γ with equal probability, while β′ gets you β or γ with equal probability. If you’d pick α over β, then you should also pick α′ over β′.

If you are willing to subscribe to these three criteria simultaneously, using expected utility guarantees that you will fulfill them. On the other hand, if you adopt expected utility as your decision-making tool, you will be (knowingly or not) obeying these criteria. The answer I’d offer to your CEO is: “if you wish consistency, continuity and independence, expected utility is right”.

Caveat emptor. There are plenty of examples where very reasonable people do not want to fulfill one of the three criteria above. The most famous example originated with Allais who, among other things, got the Nobel prize in Economics in 1988. Suppose the consequences are given as payoffs in millions of Euro. Between the two alternatives

              α                        β
       payoff   prob.ty         payoff   prob.ty
         0        1%              1       100%
         1       89%
         5       10%

Allais would have picked β. Between the two alternatives

              γ                        δ
       payoff   prob.ty         payoff   prob.ty
         0       90%              0       89%
         5       10%              1       11%

he would have picked γ. You can easily check (yes, do it) that these two choices cannot simultaneously be made by someone who is willing to use the expected utility criterion.

Economists and financial economists, untroubled by this, assume that all agents abide by expected utility. This is partly for the theoretical reasons sketched above, but mostly for convenience. To describe the choices of an expected utility maximizer, an economist needs only to know the consequences, the probability distribution over consequences for each alternative, how to compute the expected value, and the utility function over the consequences. When theorizing, we’ll do as economists do: we assume knowledge of consequences, alternatives and utility functions and we compute the expected utility maximizing choice. For the moment, however, let us go back to your CEO waiting for your hard-earned wisdom to enlighten her.

What is the “right” utility function?
The utility function embeds the agent’s preferences under risk. In the example above, when the utility function was u(r) = r, the optimal choice was Fund β, which looks a lot like a risky stock fund. When the utility function was u(r) = √(r + 3), the optimal choice was Fund γ, not much different from a standard 12-month Treasury bill. It is the utility function which makes you prefer one over the other. Picking the right utility function is a matter of describing how comfortable we feel about taking (or leaving) risks. This is a tricky issue, but I’ll say more about it in Lecture 4.

Sometimes, we are lucky enough that we can make our choice without even knowing what exactly our utility function is. Suppose that consequences are monetary payoffs and assume (as is reasonable) that the utility function is increasing. Are there pairs of alternatives α and β such that α is (at least, weakly) preferred by all sorts of expected utility maximizers?

In mathematical terms, let F and G be the cumulative probability distributions respectively for α and β. What is the sufficient condition such that

    ∫ u(x) dF(x) ≥ ∫ u(x) dG(x)

for all increasing utility functions u?

Def. 1.2 (Stochastic dominance) Given two random variables α and β with respective cumulative probability distributions F and G, we say that α stochastically dominates β if F(x) ≤ G(x) for all x.

Stochastic dominance of α over β means that F(x) = P(α ≤ x) ≤ P(β ≤ x) = G(x) for all x. That is, α is less likely than β to be smaller than x. In this sense, α is less likely to be small.

If you happen to compare alternatives such that one stochastically dominates the other ones and you believe in expected utility, you can safely pick the dominating one without even worrying to find out what your “right” utility function should be. This may not happen often, but let us try not to overlook checking for this clearcut comparison.

Isn’t this “expected utility business” too artificial? Well, it might be. But we are not asking you to use expected utility to take your decisions.
Expected utility is what economists use to model your behavior under risk. If you happen to use a different route which fulfills the three criteria of consistency, continuity and independence, an economist will be able to fit your past choices to an expected utility model and guess your future choices perfectly.

We can put it down to a matter of decision procedures. Expected utility is one: it is simple to apply but it requires you to swallow the idea of a utility function. There are other procedures which lead you to choices that are compatible with expected utility maximization in a possibly more natural way.²

Here is an example of an alternative procedure. Suppose that you are a fund manager and that your compensation depends on a benchmark. Your alternatives are the different investing strategies you may follow. Each strategy will lead to a payoff at the end of the year which is to be compared against the benchmark. If you beat the benchmark, you’ll get a fixed bonus; otherwise, you will not.

The performance of the benchmark is a random variable B. Using past returns, you estimate its cumulative probability distribution H. Moreover, since you are only one of many fund managers, you assume that the performance of the benchmark is independent of which investing strategy you follow. Your best bet is to maximize the probability of getting the bonus. If your investing strategy leads to a random performance α with c.d.f. F, the probability of getting the bonus is simply

    P(α ≥ B) = ∫ P(x ≥ B) dF(x) = ∫ H(x) dF(x).

While (naturally) trying to maximize your chances of getting your bonus, you will be behaving as if (artificially) trying to maximize a utility function u(x) = H(x).

What is the “right” probability distribution for an alternative? Ah, that’s a good question. You might have read it already but, thank God (or free will), nobody can answer this. Each one of us is free to develop his own way to assess the probabilities.
In the example above, I mischievously assumed that you were willing to use the distribution of past returns, but this may sometimes be plainly wrong.

Economists have long recognized the importance of this question. Their first-cut answer is to isolate the problem by assuming that the probabilities have already been estimated by someone else and are known to the agent. Whenever this assumption holds, we speak of decision making under risk. Whenever we do not assume that probabilities are already known, we enter the realm of decisions under uncertainty.

² This is drawn from research developed in Bocconi and elsewhere. See Bordley and Li Calzi (2000) for the most recent summary.

1.3 Decisions under uncertainty

Here is a typical decision under uncertainty. Your investment horizon is one year. There is a family of investment funds. You must invest all of your wealth in a single fund. The return on each fund is not known with certainty and you do not think that the distribution of past returns is a good proxy. However, there is a list of scenarios upon which the return on your investment depends.

Let us model this situation. There is a set C of consequences, typified by the one-year returns you will be able to attain. There is a list S of possible future scenarios. There is a set A of alternatives (i.e., the funds) out of which you must choose one. Each alternative α in A is a function which tells you which consequence c you will be able to attain under scenario s: that is, α(s) = c. Assuming there are only three funds, your choice problem may be summarized by the following table.

                    Fund α    Fund β    Fund γ
        scenario    return    return    return
          s1          -1%       -3%      2.5%
          s2          +2%       -3%      2.5%
          s3          +2%       -3%      2.5%
          s4          +2%      +10%      2.5%
          s5          +5%      +10%      2.5%

Def. 1.3 (Expected utility under uncertainty) Assess probabilities for each scenario. Define a utility function over consequences. Compute the expected value of utility for each alternative over the scenarios. Choose an alternative which maximizes the expected utility.
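Def. 1.3 can be sketched in code by carrying the scenario table and a probability assessment around explicitly; the names and layout below are illustrative assumptions, not notation from the notes.

```python
# Return (%) of each fund in scenarios s1..s5, copied from the table above.
returns = {
    "alpha": [-1, 2, 2, 2, 5],
    "beta":  [-3, -3, -3, 10, 10],
    "gamma": [2.5, 2.5, 2.5, 2.5, 2.5],
}

def expected_utility(fund, probs, u=lambda r: r):
    """Def. 1.3: expected u(consequence) over the scenarios."""
    return sum(p * u(r) for r, p in zip(returns[fund], probs))

# One probability assessment brings us back to the decision-under-risk case...
p1 = [0.20, 0.30, 0.05, 0.05, 0.40]
scores1 = {f: expected_utility(f, p1) for f in returns}
# ...while a different assessment (s3 very likely) reverses the choice.
p2 = [0.05, 0.05, 0.80, 0.05, 0.05]
scores2 = {f: expected_utility(f, p2) for f in returns}

print(max(scores1, key=scores1.get))  # beta
print(max(scores2, key=scores2.get))  # gamma
```

The same utility function yields different optimal choices under the two assessments, which is the point made in the next paragraph: under uncertainty, beliefs matter alongside risk attitude.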
After you assess probabilities for each scenario, you fall back to the case of decision under risk. For instance, assessing P(s1) = 20%, P(s2) = 30%, P(s3) = P(s4) = 5%, and P(s5) = 40% gets you back to the case studied above. If your utility function were u(r) = r, the optimal choice would again be β. However, staying with the same utility function, if you’d happen to assess P(s1) = P(s2) = P(s4) = P(s5) = 5%, and P(s3) = 80%, the optimal choice would be γ.

Under uncertainty, the analysis is more refined. What matters is not only your attitude to risk (as embedded in your choice of u), but your beliefs as well (as embedded in your probability assessment).

Making scenarios explicit may matter in a surprising way, as was noted in Castagnoli (1984). Suppose the consequences are given as payoffs in millions of Euro. Consider the following decision problem under uncertainty.

                    Fund α    Fund β
        scenario    payoff    payoff
          s1           0         4
          s2           1         0
          s3           2         1
          s4           3         2
          s5           4         3

Suppose that you assess probabilities P(s1) = 1/3, and P(s2) = P(s3) = P(s4) = P(s5) = 1/6. Then β would stochastically dominate α even though the probability that α beats β is P(α ≥ β) = 2/3. Any expected utility maximizer (if using an increasing utility function) would pick β over α. However, if you are interested only in choosing whichever alternative pays more between the two, you should go for α.

References

1. R. Bordley and M. Li Calzi (2000), “Decision analysis using targets instead of utility functions”, Decisions in Economics and Finance 23, 53–74.
2. E. Castagnoli (1984), “Some remarks on stochastic dominance”, Rivista di matematica per le Scienze Economiche e Sociali 7, 15–28.

2. Irreversible investments and flexibility

2.1 Introduction

Under no uncertainty, NPV is the common way to assess an investment. (In spite of contrary advice from most academics, consultants and hence practitioners use the payback time and the IRR as well.) If you have to decide whether to undertake an investment, do so only if its NPV is positive.
If you have to pick one among many possible investments, pick the one with the greatest NPV.

When uncertainty enters the picture, the easy way out is to keep doing NPV calculations using expected payoffs instead of the actual payoffs, which are not known for sure. This might work as a first rough cut, but it could easily lead you astray. The aim of this lecture is to alert you about what you could be missing. Most of the material is drawn from Chapter 2 in Dixit and Pindyck (1994).

2.2 Price uncertainty

Consider a firm that must decide whether to invest in a widget factory. The investment is irreversible: the factory can only be used to produce widgets and, if the market for widgets should close down, the factory could not be scrapped and sold to someone else. The factory can be built at a cost of c = 1600 and will produce one widget per year forever, with zero operating cost. The current price of a widget is p0 = 200, but next year this will rise to p1 = 300 with probability q = 1/2 or drop to p1 = 100 with probability 1 − q = 1/2. After this, the price will not change anymore. The risk over the future price of widgets is fully diversifiable and therefore we use the risk-free rate of interest r = 10%.

Presented with this problem, a naive CFO would compute the expected price p = 200 from next year on. Using the expected price, the NPV of the project is

    NPV = −1600 + Σ_{t=0}^{∞} 200/(1.1)^t ≈ 600.

Since the NPV is positive, the project gets the green light and the firm invests right away.

A clever CFO would also consider the possibility of waiting one year. At the cost of giving up a profit of 200 in year t = 0, one gains the option to invest if the price rises and not to invest otherwise. The NPV for this investment policy is

    NPV = (1/2) [ −1600/1.1 + Σ_{t=1}^{∞} 300/(1.1)^t ] ≈ 773.

This is higher than 600, and therefore it is better to wait than to invest right away. The value of the flexibility to postpone the investment is 773 − 600 = 173.
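The two NPV calculations above can be verified in a few lines; the helper pv_perpetuity is my own name for the discounted perpetuity, not notation from the notes.

```python
r, c = 0.10, 1600

def pv_perpetuity(cash, r, start=0):
    """Present value at t = 0 of `cash` received every year from t = start on."""
    # sum_{t >= start} cash / (1+r)^t  =  cash * (1+r) / r / (1+r)^start
    return cash * (1 + r) / r / (1 + r) ** start

# Invest today, using the expected price of 200 in every year.
npv_now = -c + pv_perpetuity(200, r)
# Wait one year and invest only if the price rises to 300 (probability 1/2).
npv_wait = 0.5 * (-c / (1 + r) + pv_perpetuity(300, r, start=1))

print(round(npv_now), round(npv_wait))  # 600 773
```

The difference of about 173 is the value of the flexibility to postpone the investment.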
Ex. 2.1 For a different way to assess the value of flexibility, check that the opportunity to build a widget factory now and only now at a cost of c = 1600 yields the same NPV as the opportunity of building a widget factory now or next year at a cost of (about) 1980.

Ex. 2.2 Suppose that there exists a futures market for widgets, with the futures price for delivery one year from now equal to the expected future spot price of 200. Would this make us anticipate the investment decision? The answer is no. To see why, check that you could hedge away price risk by selling short futures for 11 widgets, ending up with a sure NPV of 2200. Subtract a cost of 1600 for building the factory, and you are left exactly with an NPV of 600 as before. The futures market allows the firm to get rid of the risk but does not improve the NPV of investing now.

2.3 Real options

We can view the decision to invest now or next year as the analog of an American option. An American option gives the right to buy a security any time before expiration and receive a random payoff. Here, we have the right to make an investment expenditure now or next year and receive a random NPV. Our investment option begins “in the money” (if it were exercised today, it would yield a positive NPV), but waiting is better than exercising now. This sort of situation, where the underlying security is a real investment, is known as a real option. The use of real options is getting increasingly popular in the assessment of projects under uncertainty.

Let us compute the value of our investment opportunity using the real options approach. Denote by F0 the value of the option today, and by F1 its value next year. Then F1 is a random variable, which can take value

    Σ_{t=0}^{∞} 300/(1.1)^t − 1600 ≈ 1700

with probability 1/2 (if the widget price goes up to 300) and value 0 with probability 1/2 (if it goes down). We want to find out what F0 is.
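As a quick cross-check on these numbers: since the price risk is assumed fully diversifiable, the option can also be valued by discounting its expected payoff at the risk-free rate, and this agrees with the replication argument developed next. A sketch, with variable names of my choosing:

```python
r = 0.10

# Option value next year if the price rises to 300 and the factory is built:
# a perpetuity of 300 minus the cost 1600 (truncated sum, close to 3300 - 1600).
f1_up = sum(300 / (1 + r) ** t for t in range(200)) - 1600
f1_down = 0.0   # price falls to 100: do not invest, the option expires worthless

# Discount the expected option value at the risk-free rate.
f0 = 0.5 * (f1_up + f1_down) / (1 + r)
print(round(f1_up), round(f0))  # 1700 773
```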
Using a standard trick in arbitrage theory, consider a portfolio in which one holds the investment opportunity and sells short n widgets at a price of P0. The value of this portfolio today is Π0 = F0 − nP0 = F0 − 200n. The value of the portfolio next year is Π1 = F1 − nP1. Since P1 = 300 or 100, the possible values of Π1 are 1700 − 300n or −100n. We can choose n and make the portfolio risk-free by solving 1700 − 300n = −100n, which gives n = 8.5. This number of widgets gives a sure value Π1 = −850 for the portfolio.

The return from holding this portfolio is the capital gain Π1 − Π0 minus the cost of shorting the widgets; that is, Π1 − Π0 − 170 = −850 − (F0 − 1700) − 170 = 680 − F0. Since this portfolio is risk-free, it must earn the risk-free rate of r = 10%; that is, 680 − F0 = (0.1)Π0 = 0.1(F0 − 1700), which gives F0 = 773. This is of course the same value we have already found.

2.4 Assessing your real option

Once we view an investment opportunity as a real option, we can compute its dependence on various parameters and get a better understanding. In particular, let us determine how the value of the option, and the decision to invest, depend on the cost c of the investment, on the initial price P0 of the widgets, on the magnitudes of the up and down movements in price next period, and on the probability q that the price will rise next period.

a) Cost of the investment. Using the arbitrage argument, we find (please, do it) that the short position on widgets needed to obtain a risk-free portfolio is n = 16.5 − 0.005c. Hence, Π1 = F1 − nP1 = 0.5c − 1650 and Π0 = F0 − 3300 + c. Imposing a risk-free rate of r = 10% yields

    F0 = 1500 − 0.455c,                                   (1)

which gives the value of the investment opportunity as a function of the cost c of the investment.

We can use this relationship to find out for what values of c investing today is better than investing next year. Investing today is better as long as the value V0 from investing is greater than the direct cost c plus the opportunity cost F0.
Since the NPV of the payoffs from investing today is 2200, we should invest today if 2200 > c + F0. Substituting from (1), we should invest as long as c < 1284. In the terminology of financial options, for low values of c the option is “deep in the money” and immediate exercise is preferable, because the cost of waiting (the sacrifice of the immediate profit) outweighs the benefit of waiting (the ability to decide optimally after observing whether the price has gone up or down).

b) Initial price. Fix again c = 1600 and let us now vary P0. Assume that with equal probability the price P1 will rise or fall by 50%, i.e. P1 = 1.5P0 or P1 = 0.5P0 (and remain at this level ever after). Suppose that we want to invest when the price goes up and we do not want to invest if it goes down (we will consider other options momentarily). Set up the usual portfolio and check (yes, do it) that its value is Π1 = 16.5P0 − 1600 − 1.5nP0 if the price goes up and Π1 = −0.5nP0 if the price goes down. Equating these two values, we find that n = 16.5 − (1600/P0) is the number of widgets that we need to short to make the portfolio risk-free, in which case Π1 = 800 − 8.25P0 whether the price goes up or down.

Recall that the short position requires a payment of 0.1nP0 = 1.65P0 − 160 and compute the return on this portfolio. Imposing a risk-free rate of r = 10%, we have Π1 − Π0 − (1.65P0 − 160) = 0.1Π0, which yields

    F0 = 7.5P0 − 727.                                     (2)

This value of the option to invest has been calculated assuming that we would only want to invest if the price goes up next year. However, if P0 is low enough we might never want to invest, and if P0 is high enough it might be better to invest now rather than waiting.

Let us find for which price we would never invest. From (2), we see that F0 = 0 when P0 ≈ 97. Below this level, there is no way to recoup the cost of the investment even if the price rises by 50% next year. Analogously, let us compute for which price we would always invest today.
We should invest now if the NPV of current payoffs (i.e., 11P0) exceeds the total cost 1600 + F0 of investing now. The critical price P̂0 satisfies 11P̂0 − 1600 = F0 which, after substituting from (2), gives P̂0 ≈ 249. Summarizing, the investment rule is:

    Price region          Option value             Investment rule
    if P0 ≤ 97            then F0 = 0              and you never invest;
    if 97 < P0 ≤ 249      then F0 = 7.5P0 − 727    and you invest next year if the price goes up;
    if P0 > 249           then F0 = 11P0 − 1600    and you invest today.

c) Probabilities. Fix an arbitrary P0 and let us vary q. In our standard portfolio, the number of widgets needed to construct a risk-free position is n = 8.5 and is independent of q (yes, check it). The expected price of widgets next year is E(P1) = q(1.5P0) + (1 − q)(0.5P0) = (q + 0.5)P0; therefore the expected capital gain on widgets is [E(P1) − P0]/P0 = q − 0.5. Since the long owner of a widget demands a riskless return of r = 10% but already gets a capital gain of q − 0.5, she will ask a payment of [0.1 − (q − 0.5)]P0 = (0.6 − q)P0 per widget. Setting Π1 − Π0 − (0.6 − q)nP0 = 0.1Π0 with n = 8.5 we find (for P0 > 97) that the value of the option is

    F0 = (15P0 − 1455)q.                                  (3)

For P0 ≤ 97 we would never invest and F0 = 0.

What about the decision to invest? It is better to wait than to invest today as long as F0 > V0 − c. Since V0 = P0 + Σ_{t=1}^{∞} (q + 0.5)P0/(1.1)^t = (6 + 10q)P0, it is better to wait as long as P0 < P̂0 = (1600 − 1455q)/(6 − 5q) (yes, check this). Note that P̂0 decreases as q increases: a higher probability of a price increase makes the firm more willing to invest today. Why?

d) Magnitudes. Fix q = 0.5 and let us change the magnitudes of the variations in price from 50% to 75%. This leaves E(P1) = P0 but increases the variance of P1. As usual, we construct a risk-free portfolio by shorting n widgets. The two possible values for Π1 are 19.25P0 − 1600 − 1.75nP0 if the price goes up and −0.25nP0 if the price goes down (yes, check this).
Equating these two values and solving for n gives n = 12.83 − (1067/P0), which makes Π1 = 267 − 3.21P0 irrespective of P1. Imposing a risk-free rate of r = 10% (please fill in the missing steps) yields

    F0 = 8.75P0 − 727.                                    (4)

At a price P0 = 200, this gives a value F0 = 1023 for the option to invest, significantly higher than the 773 we found earlier. Why does an increase in uncertainty increase the value of this option?

Ex. 2.3 Show that the critical initial price sufficient to warrant investing now rather than waiting is P̂0 ≈ 388, much larger than the 249 found before. Can you explain why?

References

1. A.K. Dixit and R.S. Pindyck (1994), Investment under Uncertainty, Princeton (NJ): Princeton University Press.

3. Optimal growth and repeated investments

3.1 Introduction

Standard portfolio theory treats investments as a single-period problem. You choose your investment horizon, evaluate consequences and probabilities for each investment opportunity and pick a portfolio which, for a given level of risk, maximizes the expected return over the investment horizon. The basic lesson from this static approach is that volatility is “bad” and diversification is “good”. The implicit assumptions are that you know your investment horizon and that you plan to make your investment choice once and for all.

However, when we begin working over multiperiod investment problems, some of the lessons of the static approach take a whole new flavour. The aim of this lecture is to alert you about some of the subtleties involved. Most of the material is drawn from Chapter 15 in Luenberger (1998).

3.2 An example

At each period, you are offered three investment opportunities. The following table reports their payoffs to an investment of Euro 100. An identical but independently distributed selection is offered each period, so that the payoffs to each investment are correlated within each period, but not across time.
  scenario   α payoff   β payoff   γ payoff   prob.ty
  s_1        300        0          0          1/2
  s_2        0          200        0          1/3
  s_3        0          0          600        1/6

You start with Euro 100 and can invest part or all of your money repeatedly, reinvesting your winnings at later periods. You are not allowed to go short, but you can apportion your investment over different opportunities. What should you do?

Consider the static choice over a single period. There is an obvious trade-off between pursuing the growth of the capital and avoiding the risk of losing it all. More precisely, for an investment of 100, the first lottery has an expected value of 150 and a 50% probability of losing the capital. The second lottery has an expected value of (about) 67 and a 66.6% probability of losing the capital. The third lottery has an expected value of 100 and an 83.3% probability of losing the capital. Comparing β against γ, lottery β minimizes the risk of being ruined while γ offers the higher expected return. However, note that α dominates both β and γ in both respects. If you want to maximize your expected gain, investing 100 in α is the best choice.

This intuition does not carry over to the case of a multiperiod investment. If you always invest all the current capital in α, sooner or later this investment will yield 0, and therefore you are guaranteed to lose all of your money. Instead of maximizing your return, repeatedly betting the whole capital on α guarantees your ruin.

Let us consider instead the policy of reinvesting your capital each period in a fixed-proportion portfolio (α_1, α_2, α_3), with α_i ≥ 0 for i = 1, 2, 3 and α_1 + α_2 + α_3 ≤ 1. Each of these portfolios leads to a series of (random) multiplicative factors that govern the growth of capital.

For instance, suppose that you invest Euro 100 using the (1/2, 0, 0) portfolio. With probability 50%, you obtain a favorable outcome and double your capital; with probability 50%, you obtain an unfavorable outcome and your capital is halved. Therefore, the multiplicative factors for one period are 2 and 1/2, each with probability 50%.
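A quick simulation shows what these two factors imply in the long run. A minimal sketch in Python, assuming the 50/50 moves just described:

```python
import math
import random

random.seed(0)

# Bet half the capital on lottery alpha each period: the one-period factor is
# 2 with probability 1/2 (the 300 payoff arrives) and 1/2 otherwise.
periods = 100_000
log_capital = 0.0
for _ in range(periods):
    factor = 2.0 if random.random() < 0.5 else 0.5
    log_capital += math.log(factor)

# Realized per-period growth factor; it comes out very close to 1,
# i.e. the capital fluctuates but does not grow in the long run.
print(math.exp(log_capital / periods))
```

Accumulating logarithms rather than multiplying the factors directly avoids numerical overflow over long horizons.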
Over a long series of investments following this strategy, the initial capital will be multiplied by a factor of the form

  (2) (1/2) (1/2) (2) (2) (1/2) ... (1/2) (2)

with about an equal number of 2’s and (1/2)’s. The overall factor is likely to be about 1. This means that over time the capital will tend to fluctuate up and down, but is unlikely to grow appreciably.

Suppose now to invest using the (1/4, 0, 0) portfolio. In the case of a favorable outcome, the capital grows by a multiplicative factor 3/2; in the case of an unfavorable outcome, the multiplicative factor is 3/4. Since the two outcomes are equally likely, the average multiplicative factor over two periods is (3/2)(3/4) = 9/8. Therefore, the average multiplicative factor over one period is √(9/8) ≈ 1.06066. With this strategy, your money will grow, on average, by over 6% per period.

Ex. 3.4 Prove that this is the highest rate of growth that you can attain using a (k, 0, 0) portfolio with k in [0, 1].

Ex. 3.5 Prove that a fixed-proportions strategy investing in a portfolio (α_1, α_2, α_3) with min_i α_i > 0 and max_i α_i < 1 guarantees that ruin cannot occur in finite time.

3.3 The log-optimal growth strategy

The example is representative of a large class of investment situations where a given strategy leads to a random growth process. For each period t = 1, 2, ..., let X_t denote the capital at period t. The capital evolves according to the equation

  X_t = R_t X_{t−1},                                         (5)

where R_t is the random return on the capital. We assume that the random returns R_t are independent and identically distributed. In the general capital growth process, the capital at the end of n trials is

  X_n = (R_n R_{n−1} ... R_2 R_1) X_0.
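As a numerical preview of the log-optimality criterion that names this section, the best (k, 0, 0) strategy of Ex. 3.4 can be found by maximizing the expected logarithm of the one-period factor. A minimal grid-search sketch in Python (the grid and function name are mine):

```python
import math

# Expected log-growth of the (k, 0, 0) strategy of Ex. 3.4: with probability
# 1/2 the one-period factor is 1 + 2*k (the bet triples), otherwise 1 - k.
def expected_log_growth(k):
    return 0.5 * math.log(1 + 2 * k) + 0.5 * math.log(1 - k)

# Grid search over k in [0, 1); the maximum lands at k = 1/4, the portfolio
# used in the example above.
best_k = max((i / 1000 for i in range(1000)), key=expected_log_growth)
print(best_k)                                   # 0.25
print(math.exp(expected_log_growth(best_k)))    # sqrt(9/8) ~ 1.06066
```

The per-period factor exp(E[log R]) at the optimum matches the √(9/8) computed for the (1/4, 0, 0) portfolio in the example.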
