Applied linear algebra notes

Charles L. Byrne, Department of Mathematical Sciences, University of Massachusetts Lowell
Applied and Computational Linear Algebra: A First Course

Chapter 1: Introduction

1.1 Chapter Summary
1.2 Overview of this Course
1.3 Solving Systems of Linear Equations
1.4 Imposing Constraints
1.5 Operators
1.6 Acceleration

1.1 Chapter Summary

This chapter introduces some of the topics to be considered in this course.

1.2 Overview of this Course

We shall focus here on applications that require the solution of systems of linear equations, often subject to constraints on the variables. These systems are typically large and sparse, that is, the entries of the matrices are predominantly zero. Transmission and emission tomography provide good examples of such applications. Fourier-based methods, such as filtered back-projection and the Fast Fourier Transform (FFT), are the standard tools for these applications, but statistical methods involving likelihood maximization are also employed. Because of the size of these problems and the nature of the constraints, iterative algorithms are essential.

Because the measured data is typically insufficient to specify a single unique solution, optimization methods, such as least-squares, likelihood maximization, and entropy maximization, are often part of the solution process. In the companion text "A First Course in Optimization", we present the fundamentals of optimization theory and discuss problems of optimization, in which optimizing a function of one or several variables is the primary goal. Here, in contrast, our focus is on problems of inference; optimization is not our primary concern, and is introduced to overcome the non-uniqueness of possible solutions.

1.3 Solving Systems of Linear Equations

Many of the problems we shall consider involve solving, at least approximately, systems of linear equations. When an exact solution is sought and the number of equations and the number of unknowns are small, methods such as Gaussian elimination can be used. It is common, in applications such as medical imaging, to encounter problems involving hundreds or even thousands of equations and unknowns. It is also common to prefer inexact solutions to exact ones, when the equations involve noisy, measured data. Even when the number of equations and unknowns is large, there may not be enough data to specify a unique solution, and we need to incorporate prior knowledge about the desired answer. Such is the case with medical tomographic imaging, in which the images are artificially discretized approximations of parts of the interior of the body.

1.4 Imposing Constraints

The iterative algorithms we shall investigate begin with an initial guess $x^0$ of the solution, and then generate a sequence $\{x^k\}$, converging, in the best cases, to our solution. When we use iterative methods to solve optimization problems subject to constraints, it is necessary that the limit of the sequence $\{x^k\}$ of iterates obey the constraints, but not that each of the $x^k$ do. An iterative algorithm is said to be an interior-point method if each vector $x^k$ obeys the constraints.
For example, suppose we wish to minimize $f(x)$ over all $x$ in $\mathbb{R}^J$ having non-negative entries; an interior-point iterative method would have $x^k$ non-negative for each $k$.

1.5 Operators

Most of the iterative algorithms we shall study involve an operator, that is, a function $T:\mathbb{R}^J \to \mathbb{R}^J$. The algorithms begin with an initial guess, $x^0$, and then proceed from $x^k$ to $x^{k+1} = Tx^k$. Ideally, the sequence $\{x^k\}$ converges to the solution of our optimization problem. To minimize the function $f(x)$ using a gradient descent method with fixed step-length $\gamma$, for example, the operator is

$$Tx = x - \gamma \nabla f(x).$$

In problems with non-negativity constraints our solution $x$ is required to have non-negative entries $x_j$. In such problems, the clipping operator $T$, with $(Tx)_j = \max\{x_j, 0\}$, plays an important role.

A subset $C$ of $\mathbb{R}^J$ is convex if, for any two points in $C$, the line segment connecting them is also within $C$. As we shall see, for any $x$ outside $C$, there is a point $c$ within $C$ that is closest to $x$; this point $c$ is called the orthogonal projection of $x$ onto $C$, and we write $c = P_C x$. Operators of the type $T = P_C$ play important roles in iterative algorithms. The clipping operator defined previously is of this type, for $C$ the non-negative orthant of $\mathbb{R}^J$, that is, the set

$$\mathbb{R}^J_+ = \{x \in \mathbb{R}^J \mid x_j \geq 0,\; j = 1, \dots, J\}.$$
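To make Sections 1.4 and 1.5 concrete, here is a minimal sketch (ours, not from the text) that combines the gradient descent operator with the clipping operator, giving a projected gradient iteration. The test problem, the function names, and the step-length choice are our own assumptions.

    import numpy as np

    def clip_nonnegative(x):
        # Clipping operator from Section 1.5: (Tx)_j = max{x_j, 0},
        # i.e. orthogonal projection onto the non-negative orthant.
        return np.maximum(x, 0.0)

    def projected_gradient(grad_f, x0, gamma, num_iters=500):
        # Iterate x^{k+1} = P_C(x^k - gamma * grad f(x^k)) with C = R^J_+.
        x = x0.copy()
        for _ in range(num_iters):
            x = clip_nonnegative(x - gamma * grad_f(x))
        return x

    # Illustrative problem (made up): minimize f(x) = ||Ax - b||^2 / 2
    # over x >= 0, with random test data.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    grad_f = lambda x: A.T @ (A @ x - b)        # gradient of f
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # safe fixed step-length
    print(projected_gradient(grad_f, np.zeros(5), gamma))  # all entries >= 0

Since every iterate is clipped back into the constraint set, each $x^k$ obeys the constraints, so this is an interior-point method in the sense just defined.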
1.6 Acceleration

For problems involving many variables, it is important to use algorithms that provide an acceptable approximation of the solution in a reasonable amount of time. For medical tomography image reconstruction in a clinical setting, the algorithm must reconstruct a useful image from scanning data in the time it takes for the next patient to be scanned, which is roughly fifteen minutes. Some of the algorithms we shall encounter work fine on small problems, but require far too much time when the problem is large. Figuring out ways to speed up convergence is an important part of iterative optimization. One approach we shall investigate in some detail is the use of block-iterative or partial gradient methods.

Chapter 2: An Overview of Applications

2.1 Chapter Summary
2.2 Transmission Tomography
2.2.1 Brief Description
2.2.2 The Theoretical Problem
2.2.3 The Practical Problem
2.2.4 The Discretized Problem
2.2.5 Mathematical Tools
2.3 Emission Tomography
2.3.1 Coincidence-Detection PET
2.3.2 Single-Photon Emission Tomography
2.3.3 The Line-Integral Model for PET and SPECT
2.3.4 Problems with the Line-Integral Model
2.3.5 The Stochastic Model: Discrete Poisson Emitters
2.3.6 Reconstruction as Parameter Estimation
2.3.7 X-Ray Fluorescence Computed Tomography
2.4 Magnetic Resonance Imaging
2.4.1 Alignment
2.4.2 Precession
2.4.3 Slice Isolation
2.4.4 Tipping
2.4.5 Imaging
2.4.6 The Line-Integral Approach
2.4.7 Phase Encoding
2.4.8 A New Application
2.5 Intensity Modulated Radiation Therapy
2.5.1 Brief Description
2.5.2 The Problem and the Constraints
2.5.3 Convex Feasibility and IMRT
2.6 Array Processing
2.7 A Word about Prior Information

2.1 Chapter Summary

The theory of linear algebra, applications of that theory, and the associated computations are the three threads that weave their way through this course. In this chapter we present an overview of the applications we shall study in more detail later.

2.2 Transmission Tomography

Although transmission tomography (TT) is commonly associated with medical diagnosis, it has scientific uses, such as determining the sound-speed profile in the ocean or mapping the interior of active volcanos; industrial uses, such as searching for faults in girders; and security uses, such as the scanning of cargo containers for nuclear material. Previously, when people spoke of a "CAT scan" they usually meant x-ray transmission tomography, although the term is now used by lay people to describe any of the several scanning modalities in medicine, including single-photon emission computed tomography (SPECT), positron emission tomography (PET), ultrasound, and magnetic resonance imaging (MRI).

2.2.1 Brief Description

Computer-assisted tomography (CAT) scans have revolutionized medical practice. One example of CAT is transmission tomography. The goal here is to image the spatial distribution of various matter within the body, by estimating the distribution of radiation attenuation.
At least in theory, the data are line integrals of the function of interest.

In transmission tomography, radiation, usually x-ray, is transmitted through the object being scanned. The object of interest need not be a living human being; King Tut has received a CAT scan, and industrial uses of transmission scanning are common. Recent work [235] has shown the practicality of using cosmic rays to scan cargo for hidden nuclear material; tomographic reconstruction of the scattering ability of the contents can reveal the presence of shielding. Because of their ability to penetrate granite, cosmic rays are being used to obtain transmission-tomographic three-dimensional images of the interior of active volcanos, to measure the size of the magma column and help predict the size and occurrence of eruptions.

In the simplest formulation of transmission tomography, the beams are assumed to travel along straight lines through the object, the initial intensity of the beams is known, and the intensity of the beams, as they exit the object, is measured for each line. The goal is to estimate and image the x-ray attenuation function, which correlates closely with the spatial distribution of attenuating material within the object. Unexpected absence of attenuation can indicate a broken bone, for example.

As the x-ray beam travels along its line through the body, it is weakened by the attenuating material it encounters. The reduced intensity of the exiting beam provides a measure of how much attenuation the x-ray encountered as it traveled along the line, but gives no indication of where along that line it encountered the attenuation; in theory, what we have learned is the integral of the attenuation function along the line. It is only by repeating the process with other beams along other lines that we can begin to localize the attenuation and reconstruct an image of this non-negative attenuation function. In some approaches, the lines are all in the same plane and a reconstruction of a single slice through the object is the goal; in other cases, a fully three-dimensional scanning occurs. The word "tomography" itself comes from the Greek "tomos", meaning part or slice; the word "atom" was coined to describe something supposed to be "without parts".

2.2.2 The Theoretical Problem

In theory, we will have the integral of the attenuation function along every line through the object. The Radon Transform is the operator that assigns to each attenuation function its integrals over every line. The mathematical problem is then to invert the Radon Transform, that is, to recapture the attenuation function from its line integrals. Is it always possible to determine the attenuation function from its line integrals? Yes. One way to show this is to use the Fourier transform to prove what is called the Central Slice Theorem. The reconstruction is then inversion of the Fourier transform; various methods for such inversion rely on frequency-domain filtering and back-projection.
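The text invokes the Central Slice Theorem without stating it; for reference, here is the standard statement in our own notation (the symbols $f$, $\theta$, $t$, $s$, and $\omega$ are not from the text). For a two-dimensional attenuation function $f$, the Radon transform collects the line integrals

$$(Rf)(\theta, t) = \int_{-\infty}^{\infty} f(t\cos\theta - s\sin\theta,\; t\sin\theta + s\cos\theta)\, ds,$$

and the Central Slice Theorem says that the one-dimensional Fourier transform of $(Rf)(\theta,\cdot)$ is the two-dimensional Fourier transform $\hat f$ restricted to the line through the origin at angle $\theta$:

$$\int_{-\infty}^{\infty} (Rf)(\theta, t)\, e^{-i\omega t}\, dt = \hat f(\omega\cos\theta,\; \omega\sin\theta).$$

Knowing all the line integrals thus determines $\hat f$ everywhere, and inverting the Fourier transform recovers $f$; this is why frequency-domain filtering and back-projection succeed.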
2.2.3 The Practical Problem

Practice, of course, is never quite the same as theory. The problem, as we have described it, is an over-simplification in several respects, the main one being that we never have all the line integrals. Ultimately, we will construct a discrete image, made up of finitely many pixels. Consequently, it is reasonable to assume, from the start, that the attenuation function to be estimated is well approximated by a function that is constant across small squares (or cubes), called pixels (or voxels), and that the goal is to determine these finitely many pixel values.

2.2.4 The Discretized Problem

When the problem is discretized in this way, different mathematics begins to play a role. The line integrals are replaced by finite sums, and the problem can be viewed as one of solving a large number of linear equations, subject to side constraints, such as the non-negativity of the pixel values. The Fourier transform and the Central Slice Theorem are still relevant, but in discrete form, with the fast Fourier transform (FFT) playing a major role in discrete filtered back-projection methods. This approach provides fast reconstruction, but is limited in other ways. Alternatively, we can turn to iterative algorithms for solving large systems of linear equations, subject to constraints. This approach allows for greater inclusion of the physics into the reconstruction, but can be slow; accelerating these iterative reconstruction algorithms is a major concern, as is controlling sensitivity to noise in the data.

2.2.5 Mathematical Tools

As we just saw, Fourier transformation in one and two dimensions, and frequency-domain filtering, are important tools that we need to discuss in some detail. In the discretized formulation of the problem, periodic convolution of finite vectors and its implementation using the fast Fourier transform play major roles. Because actual data is always finite, we consider the issue of under-determined problems that allow for more than one answer, and the need to include prior information to obtain reasonable reconstructions. Under-determined problems are often solved using optimization, such as maximizing the entropy or minimizing the norm of the image, subject to the data as constraints. Constraints are often described mathematically using the notion of convex sets. Finding an image satisfying several sets of constraints can often be viewed as finding a vector in the intersection of convex sets, the so-called convex feasibility problem (CFP).
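Returning to the discretized problem of Section 2.2.4: the finite sums turn reconstruction into a large linear system $Ax = b$ with non-negativity constraints. Below is a minimal sketch of one iterative method the text mentions later, the algebraic reconstruction technique (ART, also known as the Kaczmarz method). The tiny four-pixel "scanner" is invented for illustration, and the clipping step is our simple way of imposing non-negativity, not necessarily how the text's ART variants do it.

    import numpy as np

    def art(A, b, num_sweeps=50, nonneg=True):
        # Kaczmarz/ART: cycle through the equations <a_i, x> = b_i,
        # projecting the current iterate onto each hyperplane in turn;
        # optionally clip to keep the pixel values non-negative.
        x = np.zeros(A.shape[1])
        row_norms = np.sum(A * A, axis=1)
        for _ in range(num_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    x += ((b[i] - A[i] @ x) / row_norms[i]) * A[i]
            if nonneg:
                x = np.maximum(x, 0.0)   # clipping operator of Section 1.5
        return x

    # Toy "scanner": each row of A sums the pixels one ray passes through.
    A = np.array([[1., 1., 0., 0.],      # ray through pixels 0 and 1
                  [0., 0., 1., 1.],      # ray through pixels 2 and 3
                  [1., 0., 1., 0.],      # ray through pixels 0 and 2
                  [0., 1., 0., 1.]])     # ray through pixels 1 and 3
    x_true = np.array([1., 2., 3., 4.])
    print(art(A, A @ x_true))            # recovers x_true here

The system above is deliberately rank-deficient, the typical situation in tomography; ART still converges to a solution consistent with the data.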
2.3 Emission Tomography

Unlike transmission tomography, emission tomography (ET) is used only with living beings, principally humans and small animals. Although this modality was initially used to uncover pathologies, it is now used to study normal functioning, as well. In emission tomography, including positron emission tomography (PET) and single-photon emission tomography (SPECT), the patient inhales, swallows, or is injected with chemicals to which radioactive material has been chemically attached [263]. The chemicals are designed to accumulate in that specific region of the body we wish to image. For example, we may be looking for tumors in the abdomen, weakness in the heart wall, or evidence of brain activity in a selected region. In some cases, the chemicals are designed to accumulate more in healthy regions, and less so, or not at all, in unhealthy ones. The opposite may also be the case; tumors may exhibit greater avidity for certain chemicals. The patient is placed on a table surrounded by detectors that count the number of emitted photons. On the basis of where the various counts were obtained, we wish to determine the concentration of radioactivity at various locations throughout the region of interest within the patient.

Although PET and SPECT share some applications, their uses are generally determined by the nature of the chemicals that have been designed for this purpose, as well as by the half-life of the radionuclides employed. Those radioactive isotopes used in PET generally have half-lives on the order of minutes and must be manufactured on site, adding to the expense of PET. The isotopes used in SPECT have half-lives on the order of many hours, or even days, so can be manufactured off-site and can also be used in scanning procedures that extend over some appreciable period of time.

2.3.1 Coincidence-Detection PET

In a typical PET scan to detect tumors, the patient receives an injection of glucose, to which a radioactive isotope of fluorine, $^{18}$F, has been chemically attached. The radionuclide emits individual positrons, which travel, on average, between 4 mm and 2.5 cm (depending on their kinetic energy) before encountering an electron. The resulting annihilation releases two gamma-ray photons that then proceed in essentially opposite directions. Detection in the PET case means the recording of two photons at nearly the same time at two different detectors. The locations of these two detectors then provide the end points of the line segment passing, more or less, through the site of the original positron emission. Therefore, each possible pair of detectors determines a line of response (LOR). When an LOR is recorded, it is assumed that a positron was emitted somewhere along that line. The PET data consists of a chronological list of LOR that are recorded. Because the two photons detected at either end of the LOR are not detected at exactly the same time, the time difference can be used in time-of-flight PET to further localize the site of the emission to a smaller segment of perhaps 8 cm in length.

2.3.2 Single-Photon Emission Tomography

Single-photon emission computed tomography (SPECT) is similar to PET and has the same objective: to image the distribution of a radionuclide, such as technetium $^{99m}$Tc, within the body of the patient. In SPECT the radionuclide employed emits single gamma-ray photons, which then travel through the body of the patient and, in some fraction of the cases, are detected. Detections in SPECT correspond to individual sensor locations outside the body. The data in SPECT are the photon counts at each of the finitely many detector locations. Unlike PET, in SPECT lead collimators are placed in front of the gamma-camera detectors to eliminate photons arriving at oblique angles. While this helps us narrow down the possible sources of detected photons, it also reduces the number of detected photons and thereby decreases the signal-to-noise ratio.

2.3.3 The Line-Integral Model for PET and SPECT

To solve the reconstruction problem we need a model that relates the count data to the radionuclide density function. A somewhat unsophisticated, but computationally attractive, model is taken from transmission tomography: to view the count at a particular detector as the line integral of the radionuclide density function along the line from the detector that is perpendicular to the camera face. The count data then provide many such line integrals, and the reconstruction problem becomes the familiar one of estimating a function from noisy measurements of line integrals. Viewing the data as line integrals allows us to use the Fourier transform in reconstruction.
The resulting filtered back-projection (FBP) algorithm is a commonly used method for medical imaging in clinical settings.

The line-integral model for PET assumes a fixed set of possible LOR, with most LOR recording many emissions. Another approach is list-mode PET, in which detections are recorded as they occur by listing the two end points of the associated LOR. The number of potential LOR is much higher in list-mode, with most of the possible LOR being recorded only once, or not at all [173, 216, 61].

2.3.4 Problems with the Line-Integral Model

It is not really accurate, however, to view the photon counts at the detectors as line integrals. Consequently, applying filtered back-projection to the counts at each detector can lead to distorted reconstructions. There are at least three degradations that need to be corrected before FBP can be successfully applied [181]: attenuation, scatter, and spatially dependent resolution.

In the SPECT case, as in most such inverse problems, there is a trade-off to be made between careful modeling of the physical situation and computational tractability. The FBP method slights the physics in favor of computational simplicity and speed. In recent years, iterative methods that incorporate more of the physics have become competitive, such as the algebraic reconstruction technique (ART), its multiplicative variant MART, the expectation maximization maximum likelihood (MLEM or EMML) method, and the rescaled block-iterative EMML (RBI-EMML).

2.3.5 The Stochastic Model: Discrete Poisson Emitters

In iterative reconstruction we begin by discretizing the problem; that is, we imagine the region of interest within the patient to consist of finitely many tiny squares, called pixels for two-dimensional processing, or cubes, called voxels for three-dimensional processing. We imagine that each pixel has its own level of concentration of radioactivity, and these concentration levels are what we want to determine. Proportional to these concentration levels are the average rates of emission of photons. To achieve our goal we must construct a model that relates the measured counts to these concentration levels at the pixels. The standard way to do this is to adopt the model of independent Poisson emitters. Any Poisson-distributed random variable has a mean equal to its variance. The signal-to-noise ratio (SNR) is usually taken to be the ratio of the mean to the standard deviation, which, in the Poisson case, is then the square root of the mean. Consequently, the Poisson SNR increases as the mean value increases, which points to the desirability (at least, statistically speaking) of higher dosages to the patient.
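A quick numerical check of the Poisson SNR claim (the mean values below are arbitrary):

    import numpy as np

    # For a Poisson random variable, mean = variance, so
    # SNR = mean / std = mean / sqrt(mean) = sqrt(mean).
    rng = np.random.default_rng(1)
    for mean in [4.0, 100.0, 2500.0]:
        counts = rng.poisson(mean, size=1_000_000)
        snr = counts.mean() / counts.std()
        print(f"mean {mean:7.1f}: empirical SNR {snr:7.2f},"
              f" sqrt(mean) {np.sqrt(mean):7.2f}")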
2.3.6 Reconstruction as Parameter Estimation

The goal is to reconstruct the distribution of radionuclide intensity by estimating the pixel concentration levels. The pixel concentration levels can be viewed as parameters, and the data are instances of random variables, so the problem looks like a fairly standard parameter estimation problem of the sort studied in beginning statistics. One of the basic tools for statistical parameter estimation is likelihood maximization, which is playing an increasingly important role in medical imaging. There are several problems, however.

One problem is that the number of parameters is quite large, as large as the number of data values, in most cases. Standard statistical parameter estimation usually deals with the estimation of a handful of parameters. Another problem is that we do not quite know the relationship between the pixel concentration levels and the count data. The reason for this is that the probability that a photon emitted from a given pixel will be detected at a given detector will vary from one patient to the next, since whether or not a photon makes it from a given pixel to a given detector depends on the geometric relationship between detector and pixel, as well as on what is in the patient's body between these two locations. If there are ribs or skull getting in the way, the probability of making it goes down. If there are just lungs, the probability goes up. These probabilities can change during the scanning process, when the patient moves. Some motion is unavoidable, such as breathing and the beating of the heart. Determining good values of the probabilities in the absence of motion, and correcting for the effects of motion, are important parts of SPECT image reconstruction.
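Maximizing the Poisson likelihood is what the MLEM/EMML method of Section 2.3.4 does. Here is a minimal sketch under the assumption that the detection-probability matrix $P$ is known, with $P_{ij}$ the probability that an emission from pixel $j$ is detected at detector $i$; the matrix and the simulated data are made up, and real implementations add the corrections discussed above.

    import numpy as np

    def mlem(P, counts, num_iters=200):
        # MLEM/EMML for the independent-Poisson model
        # counts_i ~ Poisson((P x)_i).  The multiplicative update
        # x_j <- x_j * sum_i P_ij counts_i / (P x)_i / s_j, with
        # s_j = sum_i P_ij, keeps the iterates non-negative.
        x = np.ones(P.shape[1])           # positive initial guess
        sens = P.sum(axis=0)              # sensitivities s_j
        for _ in range(num_iters):
            proj = P @ x                  # expected counts per detector
            x *= (P.T @ (counts / proj)) / sens
        return x

    # Simulated example with an invented detection-probability matrix.
    rng = np.random.default_rng(2)
    P = rng.uniform(0.0, 1.0, size=(40, 10))
    x_true = rng.uniform(1.0, 5.0, size=10)
    counts = rng.poisson(P @ x_true)
    print(mlem(P, counts))                # estimated pixel intensities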
2.3.7 X-Ray Fluorescence Computed Tomography

X-ray fluorescence computed tomography (XFCT) is a form of emission tomography that seeks to reconstruct the spatial distribution of elements of interest within the body [191]. Unlike SPECT and PET, these elements need not be radioactive. Beams of synchrotron radiation are used to stimulate the emission of fluorescence x-rays from the atoms of the elements of interest. These fluorescence x-rays can then be detected and the distribution of the elements estimated and imaged. As with SPECT, attenuation is a problem; making things worse is the lack of information about the distribution of attenuators at the various fluorescence energies.

2.4 Magnetic Resonance Imaging

Protons have spin, which, for our purposes here, can be viewed as a charge distribution in the nucleus revolving around an axis. Associated with the resulting current is a magnetic dipole moment collinear with the axis of the spin. In elements with an odd number of protons, such as hydrogen, the nucleus itself will have a net magnetic moment. The objective in magnetic resonance imaging (MRI) is to determine the density of such elements in a volume of interest within the body. The basic idea is to use strong magnetic fields to force the individual spinning nuclei to emit signals that, while too weak to be detected alone, are detectable in the aggregate. The signals are generated by the precession that results when the axes of the magnetic dipole moments are first aligned and then perturbed.

In much of MRI, it is the distribution of hydrogen in water molecules that is the object of interest, although the imaging of phosphorus to study energy transfer in biological processing is also important. There is ongoing work using tracers containing fluorine, to target specific areas of the body and avoid background resonance. Because the magnetic properties of blood change when the blood is oxygenated, increased activity in parts of the brain can be imaged through functional MRI (fMRI). Non-radioactive isotopes of gadolinium are often injected as contrast agents because of their ability to modify certain parameters called the T1 relaxation times.

2.4.1 Alignment

In the absence of an external magnetic field, the axes of these magnetic dipole moments have random orientation, dictated mainly by thermal effects. When an external magnetic field is introduced, it induces a small fraction, about one in $10^5$, of the dipole moments to begin to align their axes with that of the external magnetic field. Only because the number of protons per unit of volume is so large do we get a significant number of moments aligned in this way. A strong external magnetic field, about 20,000 times that of the earth's, is required to produce enough alignment to generate a detectable signal.

2.4.2 Precession

When the axes of the aligned magnetic dipole moments are perturbed, they begin to precess, like a spinning top, around the axis of the external magnetic field, at the Larmor frequency, which is proportional to the intensity of the external magnetic field. If the magnetic field intensity varies spatially, then so does the Larmor frequency. Each precessing magnetic dipole moment generates a signal; taken together, they contain information about the density of the element at the various locations within the body. As we shall see, when the external magnetic field is appropriately chosen, a Fourier relationship can be established between the information extracted from the received signal and this density function.

2.4.3 Slice Isolation

When the external magnetic field is the static field, then the Larmor frequency is the same everywhere. If, instead, we impose an external magnetic field that varies spatially, then the Larmor frequency is also spatially varying. This external field is now said to include a gradient field.

2.4.4 Tipping

When a magnetic dipole moment is given a component out of its axis of alignment, it begins to precess around its axis of alignment, with frequency equal to its Larmor frequency. To create this off-axis component, we apply a radio-frequency field (rf field) for a short time. The effect of imposing this rf field is to tip the aligned magnetic dipole moment axes away from the axis of alignment, initiating precession. The dipoles that have been tipped ninety degrees out of their axis of alignment generate the strongest signal.

2.4.5 Imaging

The information we seek about the proton density function is contained within the received signal. By carefully adding gradient fields to the external field, we can make the Larmor frequency spatially varying, so that each frequency component of the received signal contains a piece of the information we seek. The proton density function is then obtained through Fourier transformations. Fourier-transform estimation and extrapolation techniques play a major role in this rapidly expanding field [157].

2.4.6 The Line-Integral Approach

By appropriately selecting the gradient field and the radio-frequency field, it is possible to create a situation in which the received signal comes primarily from dipoles along a given line in a preselected plane. Performing an FFT of the received signal gives us line integrals of the density function along lines in that plane. In this way, we obtain the three-dimensional Radon transform of the desired density function. The Central Slice Theorem for this case tells us that, in theory, we have the Fourier transform of the density function.

2.4.7 Phase Encoding

In the line-integral approach, the line-integral data is used to obtain values of the Fourier transform of the density function along lines through the origin in Fourier space. It would be more convenient for the FFT if we had Fourier-transform values on the points of a rectangular grid. We can obtain this by selecting the gradient fields to achieve phase encoding.
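As a sketch of why the Fourier relationship appears (in our notation, not the text's): a dipole in a field of intensity $B$ precesses at the Larmor frequency $\omega = \gamma B$, where $\gamma$ is the gyromagnetic ratio. If a gradient $G$ is added to the static field $B_0$, so that $B(x) = B_0 + G\cdot x$, then the demodulated signal received from a proton density $\rho$ is, up to constants,

$$s(t) = \int \rho(x)\, e^{-i\gamma (G\cdot x)\, t}\, dx = \hat\rho\big(k(t)\big), \qquad k(t) = \gamma G t,$$

so each sample of the received signal is a value of the Fourier transform of $\rho$ along a line in frequency space, and phase encoding amounts to steering $k(t)$ through the rectangular grid of points wanted for the FFT.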
2.4.8 A New Application

A recent article [262] in The Boston Globe describes a new application of MRI, as a guide for the administration of ultrasound to kill tumors and perform bloodless surgery. In MRI-guided focused ultrasound, the sound waves are focused to heat up the regions to be destroyed, and real-time MRI imaging shows the doctor where this region is located and whether the sound waves are having the desired effect. The use of this technique in other areas is also being studied: to open up the blood-brain barrier to permit chemotherapy for brain cancers; to cure hand tremors, chronic pain, and some effects of stroke, epilepsy, and Parkinson's disease; and to remove uterine fibroids.

2.5 Intensity Modulated Radiation Therapy

A fairly recent addition to the list of applications using linear algebra and the geometry of Euclidean space is intensity modulated radiation therapy (IMRT). Although it is not actually an imaging problem, intensity modulated radiation therapy is an emerging field that involves some of the same mathematical techniques used to solve the medical imaging problems discussed previously, particularly methods for solving the convex feasibility problem.

2.5.1 Brief Description

In IMRT beamlets of radiation with different intensities are transmitted into the body of the patient. Each voxel within the patient will then absorb a certain dose of radiation from each beamlet. The goal of IMRT is to direct a sufficient dosage to those regions requiring the radiation, those that are designated planned target volumes (PTV), while limiting the dosage received by the other regions, the so-called organs at risk (OAR).

2.5.2 The Problem and the Constraints

The intensities and dosages are obviously non-negative quantities. In addition, there are implementation constraints; the available treatment machine will impose its own requirements, such as a limit on the difference in intensities between adjacent beamlets. In dosage space, there will be a lower bound on the acceptable dosage delivered to those regions designated as the PTV, and an upper bound on the acceptable dosage delivered to those regions designated as the OAR. The problem is to determine the intensities of the various beamlets to achieve these somewhat conflicting goals.

2.5.3 Convex Feasibility and IMRT

The CQ algorithm [62, 63] is an iterative algorithm for solving the split feasibility problem. Because it is particularly simple to implement in many cases, it has become the focus of recent work in IMRT. In [84] Censor et al. extend the CQ algorithm to solve what they call the multiple-set split feasibility problem (MSSFP). In the sequel [82] it is shown that the constraints in IMRT can be modeled as inclusion in convex sets, and the extended CQ algorithm is used to determine dose intensities for IMRT that satisfy both dose constraints and radiation-source constraints.
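For reference, the CQ iteration is simple to state: to find $x \in C$ with $Ax \in Q$, iterate $x^{k+1} = P_C\big(x^k + \gamma A^T(P_Q(Ax^k) - Ax^k)\big)$ with $0 < \gamma < 2/\rho(A^TA)$. Below is a minimal sketch on an invented IMRT-flavored toy problem, with $C$ the non-negative beamlet intensities and $Q$ a box of dose bounds; the matrix and the bounds are illustrative, not clinical.

    import numpy as np

    def cq_algorithm(A, proj_C, proj_Q, x0, num_iters=500):
        # CQ algorithm for the split feasibility problem:
        # find x in C with Ax in Q.
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # within (0, 2/rho(A^T A))
        x = x0.copy()
        for _ in range(num_iters):
            Ax = A @ x
            x = proj_C(x + gamma * (A.T @ (proj_Q(Ax) - Ax)))
        return x

    # Toy instance: rows of A give the dose each voxel absorbs per unit
    # beamlet intensity; PTV voxels need at least 1.0, OAR at most 0.8.
    rng = np.random.default_rng(3)
    A = rng.uniform(0.0, 1.0, size=(6, 4))
    lo = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
    hi = np.array([9.0, 9.0, 9.0, 0.8, 0.8, 0.8])
    proj_C = lambda x: np.maximum(x, 0.0)         # non-negative intensities
    proj_Q = lambda d: np.clip(d, lo, hi)         # box of acceptable doses
    x = cq_algorithm(A, proj_C, proj_Q, np.zeros(4))
    print(x, A @ x)   # intensities and doses; doses meet the bounds
                      # (approximately) when the problem is feasible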
One drawback to the use of x-rays in radiation therapy is that they continue through the body after they have encountered their target. A recent technology, proton-beam therapy, directs a beam of protons at the target. Since the protons are heavy, and have mass and charge, their trajectories can be controlled in ways that x-ray trajectories cannot be. The new proton center at Massachusetts General Hospital in Boston is one of the first to have this latest technology. As with most new and expensive medical procedures, there is some debate going on about just how much of an improvement it provides, relative to other methods.

2.6 Array Processing

Passive SONAR is used to estimate the number and direction of distant sources of acoustic energy that have generated sound waves propagating through the ocean. An array, or arrangement, of sensors, called hydrophones, is deployed to measure the incoming waveforms over time and space. The data collected at the sensors is then processed to provide estimates of the waveform parameters being sought. In active SONAR, the party deploying the array is also the source of the acoustic energy, and what is sensed are the returning waveforms that have been reflected off of distant objects. Active SONAR can be used to map the ocean floor, for example. Radar is another active array-processing procedure, using reflected radio waves instead of sound to detect distant objects. Radio astronomy uses array processing and the radio waves emitted by distant sources to map the heavens.

To illustrate how array processing operates, consider Figure 2.1. Imagine a source of acoustic energy sufficiently distant from the line of sensors that the incoming wavefront is essentially planar. As the peaks and troughs of the wavefronts pass over the array of sensors, the measurements at the sensors give the elapsed time between a peak at one sensor and a peak at the next sensor, thereby giving an indication of the angle of arrival.

[FIGURE 2.1: A uniform line array sensing a plane-wave field.]

In practice, of course, there are multiple sources of acoustic energy, so each sensor receives a superposition of all the plane-wave fronts from all directions. Because the sensors are spread out in space, what each receives is slightly different from what its neighboring sensors receive, and this slight difference can be exploited to separate the spatially distinct components of the signals. What we seek is the function that describes how much energy came from each direction.

When we describe the situation mathematically, using the wave equation, we find that what is received at each sensor is a value of the Fourier transform of the function we want. Because we have only finitely many sensors, we have only finitely many values of this Fourier transform. So, we have the problem of estimating a function from finitely many values of its Fourier transform.
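A minimal numerical illustration of the preceding paragraphs, with the array spacing, source angles, and noise level all invented: a uniform line array receives two plane waves, and scanning a DFT-style steering vector over candidate angles reveals the directions of arrival.

    import numpy as np

    # Each sensor's value samples the Fourier transform of the angular
    # energy distribution; a beamformer scans candidate directions.
    num_sensors = 32
    spacing = 0.5                                  # spacing in wavelengths
    angles_true = np.deg2rad([-20.0, 35.0])
    n = np.arange(num_sensors)

    # One received snapshot: sum of plane waves plus a little noise.
    rng = np.random.default_rng(4)
    snapshot = sum(np.exp(2j * np.pi * spacing * n * np.sin(a))
                   for a in angles_true)
    snapshot += 0.05 * (rng.standard_normal(num_sensors)
                        + 1j * rng.standard_normal(num_sensors))

    # Beamformer: correlate with the steering vector for each angle.
    scan = np.deg2rad(np.linspace(-90, 90, 721))
    power = np.array([abs(np.exp(-2j * np.pi * spacing * n * np.sin(a))
                          @ snapshot) ** 2 for a in scan])
    print(np.rad2deg(scan[power.argmax()]))  # near one true angle; plotting
                                             # power vs. angle shows lobes
                                             # near both -20 and 35 degrees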
2.7 A Word about Prior Information

An important point to keep in mind when applying linear-algebraic methods to measured data is that, while the data is usually limited, the information we seek may not be lost. Although processing the data in a reasonable way may suggest otherwise, other processing methods may reveal that the desired information is still available in the data. Figure 2.2 illustrates this point.

[FIGURE 2.2: Extracting information in image reconstruction.]

The original image on the upper right of Figure 2.2 is a discrete rectangular array of intensity values simulating a slice of a head. The data was obtained by taking the two-dimensional discrete Fourier transform of the original image, and then discarding, that is, setting to zero, all these spatial frequency values, except for those in a smaller rectangular region around the origin. The problem then is under-determined. A minimum two-norm solution would seem to be a reasonable reconstruction method. The minimum two-norm solution is shown on the lower right. It is calculated simply by performing an inverse discrete Fourier transform on the array of modified discrete Fourier transform values.

The original image has relatively large values where the skull is located, but the minimum two-norm reconstruction does not want such high values; the norm involves the sum of squares of intensities, and high values contribute disproportionately to the norm. Consequently, the minimum two-norm reconstruction chooses instead to conform to the measured data by spreading what should be the skull intensities throughout the interior of the skull. The minimum two-norm reconstruction does tell us something about the original; it tells us about the existence of the skull itself, which, of course, is indeed a prominent feature of the original. However, in all likelihood, we would already know about the skull; it would be the interior that we want to know about.

Using our knowledge of the presence of a skull, which we might have obtained from the minimum two-norm reconstruction itself, we construct the prior estimate shown in the upper left. Now we use the same data as before, and calculate a minimum weighted two-norm solution, using as the weight vector the reciprocals of the values of the prior image. This minimum weighted two-norm reconstruction is shown on the lower left; it is clearly almost the same as the original image. The calculation of the minimum weighted two-norm solution can be done iteratively using the ART algorithm, as discussed in [238].

When we weight the skull area with the inverse of the prior image, we allow the reconstruction to place higher values there without having much of an effect on the overall weighted norm. In addition, the reciprocal weighting in the interior makes spreading intensity into that region costly, so the interior remains relatively clear, allowing us to see what is really present there.

When we try to reconstruct an image from limited data, it is easy to assume that the information we seek has been lost, particularly when a reasonable reconstruction method fails to reveal what we want to know. As this example, and many others, show, the information we seek is often still in the data, but needs to be brought out in a more subtle way.
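A small one-dimensional analogue of the skull experiment, with an invented signal and prior: only low-frequency DFT values are kept, and the minimum two-norm and minimum weighted two-norm solutions are computed in closed form (the text computes the weighted solution iteratively via ART; the direct formula below is our shortcut for illustration).

    import numpy as np

    N, kept = 64, 9                    # signal length; DFT rows retained
    x_true = np.zeros(N)
    x_true[[10, 53]] = 5.0             # the "skull": two bright spikes
    x_true[20:45] = 1.0                # faint "interior" structure

    F = np.fft.fft(np.eye(N))          # full DFT matrix
    rows = np.r_[0:5, N - 4:N]         # the 9 low-frequency rows
    A, d = F[rows], F[rows] @ x_true   # the limited data: d = A x_true

    # Minimum two-norm solution: x = A^H (A A^H)^{-1} d, which here is just
    # the inverse DFT of the truncated spectrum; it smears the spikes.
    x_min = (A.conj().T @ np.linalg.solve(A @ A.conj().T, d)).real

    # Weighted minimum norm: minimize sum_j x_j^2 / w_j subject to A x = d,
    # with weights w from a prior that expects large values at the spikes:
    # x = W A^H (A W A^H)^{-1} d, W = diag(w).
    w = np.ones(N); w[[10, 53]] = 25.0
    AW = A * w                         # A @ diag(w)
    x_wgt = (w * (A.conj().T @ np.linalg.solve(AW @ A.conj().T, d))).real

    print(x_min[[10, 53]])             # spikes badly underestimated
    print(x_wgt[[10, 53]])             # much closer to the true value 5.0

As in the figure, the weighted norm lets the reconstruction place large values where the prior expects them, while the reciprocal weighting elsewhere keeps spurious intensity from spreading into the interior.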
