Topics in Applied Mathematics
Purpose of Course
Mathematics was coined the “queen of sciences” by the “prince of mathematicians,” Carl Friedrich Gauss, one of the greatest mathematicians of all time. Indeed, the name of Gauss is associated with essentially all areas of mathematics, and in this respect, there is really no clear boundary between “pure mathematics” and “applied mathematics.” To ensure financial independence, Gauss decided on a stable career in astronomy, one of the oldest sciences and perhaps the most popular one during the eighteenth and nineteenth centuries. In his study of celestial motion and orbits, and in a diversity of disciplines later in his career, including (in chronological order) geodesy, magnetism, dioptrics, and actuarial science, Gauss developed a vast volume of mathematical methods and tools that are still instrumental to our current study of applied mathematics.
During the twentieth century, with the exciting development of quantum field theory, with the prosperity of the aviation industry, and with the bullish activity in financial market trading, the “queen of sciences” turned her attention to the theoretical development and numerical solution of partial differential equations (PDEs). Indeed, the nonrelativistic modeling of quantum mechanics is described by the Schrödinger equation; fluid flow, as an extension of Newtonian physics that incorporates motion and stress, is modeled by the Navier-Stokes equations; and option trading with minimum risk can be modeled by the Black-Scholes equation. All of these equations are PDEs. In general, PDEs are used to describe a wide variety of phenomena, including heat diffusion, sound wave propagation, electromagnetic wave radiation, vibration, electrostatics, electrodynamics, fluid flow, and elasticity, just to name a few. For this reason, the theoretical and numerical development of PDEs has been considered the core of applied mathematics, at least in the academic environment.
On the other hand, over the past decade, we have been facing a rapidly increasing volume of “information” content to be processed and understood. For instance, the popularity and significant impact of the open education movement (OEM) has contributed an enormous amount of educational information on the web that is difficult to sort out, due to unavoidable redundancy, occasional contradiction, extreme variation in quality, and even erroneous opinions. This motivated the founding of the Saylor Foundation courseware to provide some of the most valuable, and certainly more reliable, high-quality educational materials, with end-to-end solutions, free to all. With the recent advances of various high-tech fields and the popularity of social networking, the trend of exponential growth of easily accessible information is certainly going to continue well into the twenty-first century, and the bottleneck created by this information explosion will definitely require innovative solutions from the scientific and engineering communities, particularly those technologists with a better understanding of and a strong background in applied mathematics. In this regard, mathematics extends its influence and impact by providing innovative theory, methods, and algorithms to virtually every discipline, far beyond science and engineering, for processing, transmitting, receiving, understanding, and visualizing data sets, which could be very large or live in some high-dimensional space.
Of course, the basic mathematical tools, such as PDE methods and the least-squares approximation introduced by Gauss, remain at the core of the mathematical toolbox for applied mathematics. But other innovations and methods must be integrated into this toolbox as well. One of the most essential ideas is the notion of “frequency” of the data information. Joseph Fourier, a contemporary of Gauss, instilled this important concept into our study of physical phenomena through his innovation of trigonometric series representations, along with powerful mathematical theory and methods, significantly expanding the core of the toolbox for applied mathematics. The frequency content of a given data set facilitates the processing and understanding of the data information. Another important idea is the “multiscale” structure of data sets. Less than three decades ago, with the birth of another exciting mathematical subject, called “wavelets,” it became possible to put a data set in the wavelet domain for multiscale processing as well. On the other hand, it is unfortunate that some essential basic mathematical tools for information processing are not commonly taught in a regular applied mathematics course at the university level. Among the commonly missing ones, the topics that will be addressed in this Saylor course MA304 include information coding, data dimensionality reduction, data compression, and image manipulation.
The objective of this course is to study the basic theory and methods in the toolbox of the core of applied mathematics, with a central scheme that addresses “information processing” and with an emphasis on the manipulation of digital image data. The linear algebra of the Saylor Foundation’s MA211 and MA212 is extended to “linear analysis,” with applications to principal component analysis (PCA) and data dimensionality reduction (DDR). For data compression, the notion of entropy is introduced to quantify coding efficiency, as governed by Shannon’s Noiseless Coding theorem. The discrete Fourier transform (DFT), followed by an efficient computational algorithm called the fast Fourier transform (FFT), as well as a real-valued version of the DFT, called the discrete cosine transform (DCT), are discussed, with application to extracting the frequency content of a given discrete data set, which facilitates reduction of the entropy and thus significant improvement of the coding efficiency. The DFT can be viewed as a discrete version of the Fourier series, which will be studied in some depth, with emphasis on orthogonal projection, the positive approximate identity property of Fejér’s kernels, Parseval’s identity, and the concept of completeness. The integral version of the sequence of Fourier coefficients is called the Fourier transform (FT). Analogous to the Fourier series, the formulation of the inverse Fourier transform (IFT) is derived. The Gaussian function is applied as a sliding time-window for simultaneous time-frequency localization, with optimality guaranteed by the Uncertainty Principle. Local time-frequency basis functions are also introduced in this course by discretization of the frequency-modulated sliding time-window function at the integer lattice points.
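As a small illustration of the DFT described above (a minimal sketch assuming NumPy is available; the sample vector is an arbitrary choice, not taken from the course materials), the transform can be computed directly from its defining sum and checked against the library FFT:

```python
import numpy as np

def dft(x):
    """n-point DFT from its defining sum: X[k] = sum_j x[j] * e^{-2*pi*i*j*k/n}."""
    n = len(x)
    j = np.arange(n)
    # The (k, j) entry of W is e^{-2*pi*i*k*j/n}; W @ x evaluates all n sums at once.
    W = np.exp(-2j * np.pi * np.outer(j, j) / n)
    return W @ x

x = np.array([1.0, 2.0, 0.0, -1.0])        # n = 4, an integer power of 2
print(np.allclose(dft(x), np.fft.fft(x)))  # True: same transform; FFT is O(n log n)
```

The direct computation above costs O(n²) operations; the FFT computes the identical result in O(n log n) when n is a power of 2, which is why it is the practical algorithm of choice.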
Replacing frequency modulation by modulation with cosines avoids the Balian-Low stability restriction on the local time-frequency basis functions, with application to the elimination of the blocky artifacts caused by quantization of tiled DCT in image compression. Gaussian convolution filtering also provides the solution of the heat (partial differential) equation with the real line as the spatial domain. When this spatial domain is replaced by a bounded interval, the method of separation of variables is applied to separate the PDE into two ordinary differential equations (ODEs). Furthermore, when the two endpoints of the interval are insulated from heat loss, the solution of the spatial ODE is achieved by finding the eigenvalue and eigenvector pairs, with the same eigenvalues governing the exponential rate of decay of the solution of the time ODE. Superposition of the products of the spatial and time solutions over all eigenvalues solves the heat PDE, when the Fourier coefficients of the initial heat content are used as the coefficients of the terms of the superposition. This method is extended to the two-dimensional rectangular spatial domain, with application to image noise reduction. The method of separation of variables is also applied to solving other typical linear PDEs. Finally, multiscale data analysis is introduced and compared with the Fourier frequency approach, and the architecture of multiresolution analysis (MRA) is applied to the construction of wavelets and the formulation of the multiscale wavelet decomposition and reconstruction algorithms. The lifting scheme is also introduced to reduce the computational complexity of these algorithms, with applications to digital image manipulation for such tasks as progressive transmission, image edge extraction, and image enhancement.
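To make the separation-of-variables computation concrete, here is the standard worked example for the heat equation $u_t = c^2 u_{xx}$ on an interval $[0, L]$ with insulated endpoints, matching the description above:

```latex
% Substituting u(x,t) = X(x) T(t) into u_t = c^2 u_{xx} separates the PDE:
%   T'(t) / (c^2 T(t)) = X''(x) / X(x) = -\lambda  (a constant),
% giving two ODEs coupled only through the eigenvalue \lambda.
\[
  X'' + \lambda X = 0, \quad X'(0) = X'(L) = 0
  \;\Longrightarrow\;
  \lambda_k = \Bigl(\tfrac{k\pi}{L}\Bigr)^{2}, \quad
  X_k(x) = \cos\tfrac{k\pi x}{L}, \quad k = 0, 1, 2, \dots
\]
\[
  T_k'(t) = -c^{2}\lambda_k T_k(t)
  \;\Longrightarrow\;
  T_k(t) = e^{-c^{2}\lambda_k t},
  \qquad
  u(x,t) = \sum_{k=0}^{\infty} a_k\, e^{-c^{2}\lambda_k t}\cos\tfrac{k\pi x}{L},
\]
```

where the $a_k$ are the Fourier cosine coefficients of the initial heat distribution $u(x,0)$, exactly the superposition over all eigenvalues described in the paragraph above.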
Course Information
Primary Resources: This course is composed of a range of different free, online materials. However, the course makes primary use of the following materials:
 MIT: Professor Gilbert Strang’s Linear Algebra Lectures
 MIT: Professor Gilbert Strang’s Computational Science and Engineering I Lectures
 University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics
 The Final Exam
In order to “pass” this course, you will need to earn a 70% or higher on the Final Exam. Your score on the exam will be tabulated as soon as you complete it. If you do not pass the exam, you may take it again.
Time Commitment: Each unit includes a “time advisory” that lists the amount of time you should spend on each subunit. These should help you plan your time accordingly. It may be useful to take a look at these time advisories and to determine how much time you have over the next few weeks to complete each unit, and then to set goals for yourself. For example, Unit 1 should take you 12.5 hours. Perhaps you can sit down with your calendar and decide to complete subunit 1.1 (a total of 3.75 hours) on Monday night; subunit 1.2 (a total of 3.75 hours) on Tuesday night; etc.
Tips/Suggestions: As noted in the “Course Requirements,” there are several mathematics prerequisites for this course. If you are struggling with the mathematics as you progress through this course, consider taking a break and revisiting the applicable course listed as a prerequisite.
As you read, take careful notes on a separate sheet of paper. Mark down any important equations, formulas, and definitions that stand out to you. These notes will serve as a useful review as you prepare and study for your Final Exam.
Learning Outcomes
 Compute singular values of rectangular and singular square matrices.
 Perform singular value decomposition of rectangular matrices.
 Solve an arbitrary system of linear equations.
 Compute linear least-squares estimation.
 Compute principal components.
 Reduce data dimension.
 Compute DFT of vectors.
 Compute inverse DFT.
 Compute DCT of vectors.
 Compute inverse DCT.
 Compute two-dimensional DCT of matrix data array.
 Compute histogram of data sets.
 Formulate probability distribution based on histograms.
 Compute entropy from probability distribution.
 Construct Huffman trees and Huffman codes.
 Perform color transform.
 Outline the JPEG image compression scheme.
 Explain video compression.
 Compute Fourier series.
 Compute Fourier cosine series.
 Describe the importance of the Dirichlet and Fejér kernels.
 Apply the property of positive approximate identity to prove convergence theorems.
 Compute mean-square error of approximation by partial sums of Fourier series.
 Solve the Basel problem and its extension to higher even powers.
 Compute the Fourier transform of some simple functions.
 Compute the Fourier transform of the affine transformation of some simple functions.
 Compute the convolution of some simple functions with certain filters.
 Describe and apply the important Fourier transform property of mapping the convolution operation to the product of the Fourier transforms of the individual functions.
 Explain and apply the concept of localized Fourier and inverse Fourier transforms.
 Formulate the Fourier transform of a general Gaussian function.
 Explain the Uncertainty Principle.
 Compute the Gabor transform of some simple functions.
 Formulate local time-frequency basis functions from a given sliding time-window function.
 Formulate local cosine basis functions from a given sliding time-window function.
 Apply the Gaussian to solve the heat equation with the entire d-dimensional Euclidean space as the spatial domain, where d is any positive integer.
 Apply the method of separation of variables to separate a given linear PDE into a finite family of ODEs.
 Solve the corresponding eigenvalue problems for the spatial ODEs.
 Apply the Fourier series of the input function to formulate the superposition solution of boundary value problems.
 Give the relationship between scale and frequency for a given wavelet filter.
 Perform matrix extension to compute wavelet filters.
 Compute multiscale data representation by applying the wavelet decomposition algorithm for the Haar wavelet.
 Identify the order of vanishing moments of a given wavelet.
 Apply the wavelet decomposition and reconstruction algorithms to multiscale data analysis.
 Apply wavelets to digital image manipulation.
Course Requirements
√ Have access to a computer.
√ Have continuous broadband Internet access.
√ Have the ability/permission to install plugins (e.g. Adobe Reader or Flash) and software.
√ Have the ability to download and save files and documents to a computer.
√ Have the ability to open Microsoft Office files and documents (.doc, .ppt, .xls, etc.).
√ Have competency in the English language.
√ Have read the Saylor Student Handbook.
√ Have completed the following courses from “The Core Program” of the mathematics major: MA101: Single-Variable Calculus I; MA102: Single-Variable Calculus II; MA103: Multivariable Calculus; MA211: Linear Algebra; MA221: Differential Equations; and MA241: Real Analysis I.
√ Have completed the following courses from the “Advanced Mathematics” section of the mathematics major: MA212: Linear Algebra II; MA243: Complex Analysis; and MA222: Introduction to Partial Differential Equations.
Unit Outline

Unit 1: Linear Analysis
In Unit 1, the theory of linear algebra studied in the Saylor Foundation’s MA211 and MA212 is extended to linear analysis, in that matrices are extended to linear operators that include certain differential operators. In this unit, you will study the inner product and its corresponding norm defined on a vector space, along with their important properties that depend on the Cauchy-Schwarz inequality. In addition, you will review the eigenvalue problem, and you will study singular values with an application to spectral decomposition. This leads to the discussion of the singular value decomposition (SVD) of rectangular matrices, which allows us to generalize the inversion of nonsingular matrices, studied in the Saylor course MA211, to the “inversion” of rectangular and singular square matrices, with applications to solving arbitrary systems of linear equations and to the introduction of the method of principal component analysis (PCA). As an application of PCA, the formulation and theory of data dimensionality reduction (DDR) will also be studied in this first unit.
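As a numerical preview of the SVD and pseudoinverse computations in this unit (a sketch using NumPy; the matrix and right-hand side are illustrative placeholders, not from the course materials):

```python
import numpy as np

# A rectangular 3x2 matrix: an overdetermined system A x ~ b.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

# Full SVD: A = U @ diag(s) @ Vt, with singular values s in decreasing order.
U, s, Vt = np.linalg.svd(A)

# The pseudoinverse A^+ "inverts" the rectangular matrix: since A here has
# full column rank, A^+ A recovers the 2x2 identity.
A_pinv = np.linalg.pinv(A)
print(np.allclose(A_pinv @ A, np.eye(2)))  # True

# x = A^+ b is the least-squares solution of A x ~ b
# (and the minimum-norm one when A is rank-deficient).
x = A_pinv @ b
print(x)  # ≈ [0.667, 1.667]
```

The same `x` solves the normal equations (AᵀA)x = Aᵀb, which connects the pseudoinverse back to Gauss’s least-squares approximation mentioned in the course description.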
Unit 1 Learning Outcomes
 1.1 Inner Product and Norm Measurements

1.1.1 Definition of Inner Product
 Reading: Cambridge University Press: Marcus Pivato’s Linear Partial Differential Equations and Fourier Theory: “6A: Some Functional Analysis: Inner Products”
Link: Cambridge University Press: Marcus Pivato’s Linear Partial Differential Equations and Fourier Theory: “6A: Some Functional Analysis: Inner Products” (PDF)
Instructions: Please click on the link above to access the PDF, and study Section 6A on pages 103–105, stopping at Section 6B, to learn about inner products.
Studying this reading should take approximately 15 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Reading: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics
Link: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics (PDF)
Instructions: Please click on the link above, and then select the link “PDF version of the book” to download the text. Study Section 9.1 on pages 117–119 for a definition and examples of inner product. You will be using this text throughout the course, so you may find it helpful to save the PDF to your desktop.
Studying this reading should take approximately 15 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.1.2 Cauchy-Schwarz Inequality
 Lecture: Khan Academy’s “Derivation of Cauchy-Schwarz Inequality”
Link: Khan Academy’s “Derivation of Cauchy-Schwarz Inequality” (YouTube)
Instructions: Please click on the link above, and view the derivation of the Cauchy-Schwarz inequality for the Euclidean space.
Viewing the lecture and pausing to take notes should take approximately 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Reading: Northern Illinois University: John A. Beachy’s “Theorem 5.3: Cauchy-Schwarz Inequality” and Wikipedia’s “Cauchy-Schwarz Inequality”
Links: Northern Illinois University: John A. Beachy’s “Theorem 5.3: Cauchy-Schwarz Inequality” (HTML) and Wikipedia’s “Cauchy-Schwarz Inequality” (HTML)
Instructions: Please click on the links above, and read these webpages in their entirety to study the proof of the Cauchy-Schwarz inequality for the general inner-product space. Please note that these readings also apply to the topics outlined in subsubunits 1.1.3 and 1.1.4.
Studying the proofs in the reading materials should take approximately 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
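As a quick numerical sanity check of the inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖ in Euclidean space (an illustrative sketch with NumPy, not part of the assigned materials):

```python
import numpy as np

rng = np.random.default_rng(0)
# Verify |<u, v>| <= ||u|| * ||v|| on random vectors in R^5;
# equality holds only when u and v are linearly dependent.
for _ in range(1000):
    u = rng.standard_normal(5)
    v = rng.standard_normal(5)
    assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
print("Cauchy-Schwarz verified on 1000 random pairs")
```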

1.1.3 Norm Measurement and Angle between Vectors
 Reading: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics
Link: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics (PDF)
Instructions: Please click on the link titled “PDF version of the book” to access the text. Study Sections 9.3 through 9.6 on pages 119–135 for information on the general theory and properties of the inner product and its associated norm.
Studying this text should take approximately 1 hour and 15 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.1.4 Gram-Schmidt Orthogonalization Process
 Lecture: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 17: Orthogonal Matrices and Gram-Schmidt”
Link: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 17: Orthogonal Matrices and Gram-Schmidt” (YouTube)
Instructions: Please click on the link above, and view this entire lecture to learn about orthogonal matrices, orthonormal families, and the Gram-Schmidt procedure for finding an orthonormal family from a given linearly independent family.
Viewing this lecture and pausing to take notes should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 1.2 Eigenvalue Problems

1.2.1 Linear Transformations
 Lecture: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 30: Linear Transformations and Their Matrices”
Link: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 30: Linear Transformations and Their Matrices” (YouTube)
Instructions: Please click on the link above, and view the entire lecture on linear transformations.
Viewing this lecture and pausing to take notes should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.2.2 Bounded Linear Functionals and Operators
 Reading: Bounded Linear Functionals and Operators
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

1.2.3 Eigenvalues and Eigenspaces
 Lecture: MIT: Professor Gilbert Strang’s Computational Science and Engineering I: “Lecture 6: Eigen Values (Part 2) and Positive Definite (Part 1)”
Link: MIT: Professor Gilbert Strang’s Computational Science and Engineering I: “Lecture 6: Eigen Values (Part 2) and Positive Definite (Part 1)” (YouTube)
Instructions: Please click on the link above, and view the entire video to learn about eigenvalues.
Viewing this lecture and pausing to take notes should take approximately 1 hour and 15 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Reading: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics: “Section 7: Eigenvalues and Eigenvectors”
Link: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics (PDF)
Instructions: Please click on the link above, and select the link to download the PDF file of the text. Study Sections 7.2 and 7.3 on pages 83–86 to address eigenvalue problems.
Studying this reading should take approximately 15–20 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

1.2.4 SelfAdjoint Positive Definite Operators
 Lecture: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 27: Positive Definite Matrices”
Link: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 27: Positive Definite Matrices” (YouTube)
Instructions: Please click on the link above, and view the entire lecture on positive definite matrices.
Viewing this lecture and pausing to take notes should take approximately 1 hour and 15 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.3 Singular Value Decomposition (SVD)
 Lecture: MIT: Gilbert Strang’s Linear Algebra: “Lecture 29: Singular Value Decomposition”
Link: MIT: Gilbert Strang’s Linear Algebra: “Lecture 29: Singular Value Decomposition” (YouTube)
Instructions: Please click on the link above, and view the entire lecture to learn about singular value decomposition. Please note that this video lecture also covers the topics outlined in subsubunits 1.3.1 through 1.3.3.
Viewing this lecture and pausing to take notes should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.3.1 Normal Operators and Spectral Decomposition
 Reading: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics
Link: University of California, Davis: Isaiah Lankham, Bruno Nachtergaele, and Anne Schilling’s Linear Algebra: As an Introduction to Abstract Mathematics (PDF)
Instructions: Please click on the link above, and select the link to download the PDF version of the text. Study Sections 11.1–11.3 on pages 144–149. Please note that this reading also covers topics outlined in subsubunits 1.3.2 and 1.3.3.
Studying this reading should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.3.2 Singular Values
Note: This topic is covered by the reading assigned below subsubunit 1.3.1.

1.3.3 Reduced Singular Value Decomposition
Note: This topic is partially covered by the reading assigned below subsubunit 1.3.1.
 Reading: Reduced Singular Value Decomposition
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

1.3.4 Full Singular Value Decomposition
 Reading: Full Singular Value Decomposition
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 1.4 Principal Component Analysis (PCA)

1.4.1 Frobenius Norm Measurement
 Reading: Frobenius Norm Measurement
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

1.4.2 Principal Components for Data-Dependent Basis
 Reading: Principal Components for Data-Dependent Basis
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

1.4.3 Pseudoinverses
 Lecture: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 33: Left and Right Inverses: Pseudoinverse”
Link: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 33: Left and Right Inverses: Pseudoinverse” (YouTube)
Instructions: Please click on the link above, and view the entire lecture to learn about the topic of matrix pseudoinverses and its application to leastsquares estimation.
Viewing this lecture, pausing to take notes, and studying the material in the lecture should take approximately 3 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.4.4 Minimum-Norm Least-Squares Estimation
 Reading: Minimum-Norm Least-Squares Estimation
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

1.5 Application to Data Dimensionality Reduction
 Reading: Application to Data Dimensionality Reduction
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 1.5.1 Representation of Matrices by Sum of Norm-1 Matrices
 1.5.2 Approximation by Matrices of Lower Ranks
 1.5.3 Motivation for Data-Dimensionality Reduction
 1.5.4 Principal Components as Basis for Dimension-Reduced Data

Unit 2: Data Compression
A natural continuation of the DDR studied in Unit 1 is the subject of data compression. To prepare for this investigation, we will introduce the discrete Fourier transform (DFT). For efficient computation, we will introduce the fast Fourier transform (FFT) for computing the n-point DFT, for n equal to an integer power of 2. A real-valued version of the DFT, called the discrete cosine transform (DCT), is derived for application to image compression. The importance of the DFT and DCT lies in their ability to extract the frequency content of discrete data. A given data set may be considered as an information source, and the histogram of the source gives rise to its probability distribution, which in turn is used to define the entropy of the source. In this unit, you will study information coding, including Shannon’s Noiseless Coding theorem and the construction of the Huffman code, for reversible (or lossless) compression of the data. To significantly improve the compression efficiency, the DCT followed by an appropriate quantization may be applied to reduce the entropy. This procedure is irreversible, but certainly most effective, particularly for image and video compression. In this regard, you will study the JPEG image compression standard and the video compression scheme. This discussion includes the necessity of the color transform.
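The histogram-to-entropy chain described above can be sketched in a few lines of Python (standard library only; the symbol sequence is an arbitrary illustrative source, not from the course materials):

```python
from collections import Counter
from math import log2

data = "AABABCABAA"                 # an information source of 10 symbols

# Histogram of the source, then the empirical probability distribution.
hist = Counter(data)                # {'A': 6, 'B': 3, 'C': 1}
probs = [count / len(data) for count in hist.values()]

# Shannon entropy H = -sum p * log2(p): by the Noiseless Coding theorem,
# no lossless (prefix) code can average fewer than H bits per symbol.
entropy = -sum(p * log2(p) for p in probs)
print(round(entropy, 3))            # 1.295 bits per symbol
```

A Huffman code built from this histogram would assign shorter codewords to the frequent symbol A and longer ones to the rare symbol C, with average length approaching the entropy bound.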
Unit 2 Time Advisory
Unit 2 Learning Outcomes

2.1 Discrete and Fast Fourier Transform (FFT)
 Lecture: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 17: Orthogonal Matrices and Gram-Schmidt”
Link: MIT: Professor Gilbert Strang’s Linear Algebra: “Lecture 17: Orthogonal Matrices and Gram-Schmidt” (YouTube)
Instructions: Please click on the link above, and view the entire video to learn about orthogonal matrices and the Gram-Schmidt orthogonalization process. You may also click on the tab for “Transcript” and download the transcript to read along with the video lecture.
Viewing this lecture and pausing to take notes should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: Stanford University’s “Lecture 1: The Fourier Transform and Its Applications,” “Lecture 6: The Fourier Transform and Its Applications,” “Lecture 8: The Fourier Transform and Its Applications,” and “Lecture 8: Discrete Time Fourier Transform”
Links: YouTube: Stanford University’s “Lecture 1: The Fourier Transform and Its Applications” (YouTube), “Lecture 6: The Fourier Transform and Its Applications” (YouTube), “Lecture 8: The Fourier Transform and Its Applications” (YouTube), and “Lecture 8: Discrete Time Fourier Transform” (YouTube)
Instructions: Please click on the links above, and view the entire video lectures for an overview of the Fourier transform and its applications, including the DFT, FFT, and DCT, and in particular the use of tiled DCT for image compression. Please note that this resource also covers the topics outlined in subsubunits 2.1.1 and 2.1.3.
Viewing these lectures and pausing to take notes should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

2.1.1 Definition of DFT
Note: This topic is covered by the lectures assigned below subunit 2.1.

2.1.2 Lanczos’ Matrix Factorization
 Reading: Lanczos’ Matrix Factorization
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

2.1.3 FFT for Fast Computation
Note: This topic is covered by the lectures assigned below subunit 2.1.

2.2 Discrete Cosine Transform (DCT)
 Reading: Discrete Cosine Transform (DCT)
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 2.2.1 Derivation of DCT from DFT
 2.2.2 8-Point DCT
 2.2.3 2-Dimensional DCT
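The 8-point DCT of subsubunit 2.2.2 is an orthonormal 8×8 matrix transform; a minimal sketch assuming Python with NumPy (the name `dct_matrix` is illustrative):

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix: C[k, n] = s_k * cos(pi*k*(2n+1)/(2N)),
    with s_0 = sqrt(1/N) and s_k = sqrt(2/N) for k > 0."""
    n = np.arange(N)
    k = n.reshape(N, 1)
    C = np.cos(np.pi * k * (2 * n + 1) / (2 * N))
    C[0, :] *= np.sqrt(1.0 / N)
    C[1:, :] *= np.sqrt(2.0 / N)
    return C

C = dct_matrix(8)                                   # the 8-point DCT
x = np.array([52.0, 55, 61, 66, 70, 61, 64, 73])    # one row of image samples
y = C @ x                                           # transform coefficients
assert np.allclose(C @ C.T, np.eye(8))              # rows are orthonormal
assert np.allclose(C.T @ y, x)                      # so the transpose inverts the DCT
```

The 2-dimensional DCT of subsubunit 2.2.3 follows by applying C first to the rows and then to the columns of an 8×8 image tile.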

2.3 Information Coding
 Reading: Information Coding
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 2.3.1 Probability Distribution
 2.3.2 Histogram
 2.3.3 Entropy
 2.3.4 Binary Codes
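The chain from probability distribution to histogram to entropy (subsubunits 2.3.1–2.3.3) can be sketched in a few lines. A minimal illustration assuming Python with NumPy; `entropy_bits` is an illustrative name:

```python
import numpy as np

def entropy_bits(samples):
    """Shannon entropy H = -sum_i p_i * log2(p_i), with the probability
    distribution p estimated from the histogram of the samples."""
    _, counts = np.unique(list(samples), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

assert entropy_bits("AAAA") == 0.0                  # a constant source: 0 bits
assert abs(entropy_bits("ABABABAB") - 1.0) < 1e-12  # a fair coin: 1 bit/symbol
```

The entropy is the lower bound on the average number of bits per symbol that any binary code (subsubunit 2.3.4) can achieve.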

2.4 Data Compression Schemes
 Lecture: YouTube: National Program on Technology Enhanced Learning (NPTEL)’s “Lecture 19: Data Compression”
Link: YouTube: National Program on Technology Enhanced Learning (NPTEL)’s “Lecture 19: Data Compression” (YouTube)
Instructions: Please click on the link above, and view the entire lecture for an overview of data compression. Please note that this video also covers the topics outlined in subsubunits 2.3.3, 2.4.1, and 2.4.3.
Viewing this lecture and pausing to take notes should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

2.4.1 Lossless and Lossy Compression
Note: This topic is covered by the lecture assigned below subunit 2.4.
 Lecture: YouTube: National Programme on Technology Enhanced Learning (NPTEL)’s “Lecture 17: Lossy Image Compression: DCT” and “Lecture 18: DCT Quantization and Limitations”
Links: YouTube: National Programme on Technology Enhanced Learning (NPTEL)’s “Lecture 17: Lossy Image Compression: DCT” (YouTube) and “Lecture 18: DCT Quantization and Limitations” (YouTube)
Instructions: Please click on the links above, and view these video lectures to learn about lossy image compression. The first video is on DCT, and the second video is on quantization of DCT and its limitations.
Viewing these videos and pausing to take notes should take approximately 2 hours and 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

2.4.2 Kraft Inequality
 Reading: Kraft Inequality
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

2.4.3 Huffman Coding Scheme
Note: This topic is covered by the lecture assigned below subunit 2.4.
 Web Media: YouTube: CSLearning101’s “Huffman Coding Tutorial”
Link: YouTube: CSLearning101’s “Huffman Coding Tutorial” (YouTube)
Instructions: Please click on the link above, and view the entire video to learn about Huffman Coding.
Viewing this video and pausing to take notes should take approximately 15 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
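The greedy construction explained in the tutorial, repeatedly merging the two least-frequent subtrees, can be sketched as follows. This is a minimal illustration assuming Python; `huffman_code` is an illustrative name, and the final assertion checks the Kraft inequality of subsubunit 2.4.2:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code: repeatedly merge the two least-frequent
    subtrees; the 0/1 labels added at each merge form the codewords."""
    freq = Counter(text)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)                       # unique tiebreaker for the heap
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

code = huffman_code("aaaabbc")
# Prefix-free codes satisfy the Kraft inequality: sum 2^(-len) <= 1.
assert sum(2.0 ** -len(w) for w in code.values()) <= 1.0
# A more frequent symbol never gets a longer codeword than a rarer one.
assert len(code["a"]) <= len(code["b"]) <= len(code["c"])
```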

2.4.4 Noiseless Coding Theorem
 Reading: Noiseless Coding Theorem
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

2.5 Image and Video Compression Schemes and Standards
 Reading: John Loomis’s “JPEG Tutorial”
Link: John Loomis’s “JPEG Tutorial” (HTML)
Instructions: Please click on the link above, and read the entire webpage to study a tutorial on JPEGs. Please note that this reading also covers the topics outlined in subsubunits 2.5.1 through 2.5.7.
Studying this reading should take approximately 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: National Program on Technology Enhanced Learning (NPTEL)’s “Lecture 16: Introduction to Image and Video Coding,” “Lecture 23: Video Coding: Basic Building Blocks,” “Lecture 24: Motion Estimation Techniques,” and “Lecture 26: Video Coding Standards”
Links: YouTube: National Program on Technology Enhanced Learning (NPTEL)’s “Lecture 16: Introduction to Image and Video Coding” (YouTube), “Lecture 23: Video Coding: Basic Building Blocks” (YouTube), “Lecture 24: Motion Estimation Techniques” (YouTube), and “Lecture 26: Video Coding Standards” (YouTube)
Instructions: Please click on the links above, and view these videos (about 1 hour each) to learn about video compression methods and standards. Please note that these video lectures also cover the topics outlined in subsubunits 2.5.1 through 2.5.7.
Viewing these videos and pausing to take notes should take approximately 5 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

2.5.1 Image Compression Scheme
Note: This topic is covered by the lectures assigned below subunit 2.5.

2.5.2 Quantization
Note: This topic is covered by the lectures assigned below subunit 2.5.

2.5.3 Huffman, DPCM, and Run-Length Coding
Note: This topic is covered by the lectures assigned below subunit 2.5.

2.5.4 Decoder
Note: This topic is covered by the lectures assigned below subunit 2.5.

2.5.5 I, P, and B Video Frames
Note: This topic is covered by the lectures assigned below subunit 2.5.

2.5.6 Macro-Blocks
Note: This topic is covered by the lectures assigned below subunit 2.5.

2.5.7 Motion Search and Compensation
Note: This topic is covered by the lectures assigned below subunit 2.5.

Unit 3: Fourier Methods
The matrix transformation DFT introduced in Unit 2 is a discrete version of the Fourier series to be studied in this unit. The theory of Fourier series is very rich. For example, the partial sums of a Fourier series are orthogonal projections of the function it represents onto the corresponding subspaces of trigonometric polynomials. In addition, these partial sums can be formulated as convolutions of the function with the “Dirichlet kernels.” Since averaging the Dirichlet kernels yields the “Fejer kernels,” which constitute a positive “approximate identity,” the sequence of trigonometric polynomials obtained by convolving the function with the Fejer kernels converges, in the mean-square sense, to the function itself. Consequently, being orthogonal projections, the partial sums of the Fourier series also converge, again in the mean-square sense, to the function the series represents. This introduces the concept of completeness, which is shown to be equivalent to Parseval’s identity, with such interesting applications as solving the famous Basel problem. This unit explores extensions of the original Basel problem from powers of 2 to powers of 4 and powers of 6. The completeness property of Fourier series will be applied to solving boundary value problems of PDEs in Unit 5.
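The Basel application of Parseval’s identity mentioned above can be checked numerically. For f(x) = x on (−π, π), the Fourier sine coefficients are b_n = 2(−1)^(n+1)/n, and Parseval’s identity (1/π)∫f² = Σ b_n² reduces to Σ 1/n² = π²/6. A minimal sketch assuming Python:

```python
import math

# Parseval for f(x) = x on (-pi, pi):
#   (1/pi) * integral of x^2  =  2*pi^2/3  =  sum_n b_n^2 = sum_n 4/n^2,
# i.e. the Basel sum:  sum_{n>=1} 1/n^2 = pi^2/6.
N = 100000
basel = sum(1.0 / n**2 for n in range(1, N + 1))
# The tail of the series is about 1/N, so the partial sum is that close.
assert abs(basel - math.pi**2 / 6) < 2.0 / N
```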
Unit 3 Learning Outcomes

3.1 Fourier Series
 Reading: University of Minnesota: Professor Peter Olver’s Introduction to Partial Differential Equations: “Chapter 3: Fourier Series:” “Section 3.1: Eigensolutions to Linear Evolution Equations”
Link: University of Minnesota: Professor Peter Olver’s Introduction to Partial Differential Equations: “Chapter 3: Fourier Series:” “Section 3.1: Eigensolutions to Linear Evolution Equations” (PDF)
Instructions: Please click on the link above to access the table of contents for the text, and select the link to download the PDF of “Chapter 3: Fourier Series.” Study Section 3.1 on pages 63–71 to learn about eigensolutions to linear evolution equations.
Studying this reading should take approximately 45 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: MIT: Professor Gilbert Strang’s Computational Science and Engineering I: “Lecture 28: Fourier Series (Part 1)” and “Lecture 29: Fourier Series (Part 2)”
Links: MIT: Professor Gilbert Strang’s Computational Science and Engineering I: “Lecture 28: Fourier Series (Part 1)” (YouTube) and “Lecture 29: Fourier Series (Part 2)” (YouTube)
Instructions: Please click on the links above, and view this twopart lecture on Fourier series.
Viewing these lectures and pausing to take notes should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

3.1.1 Notion of Fourier Series
 Reading: Notion of Fourier Series
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

3.1.2 Orthogonality and Computation
 Reading: Orthogonality and Computation
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

3.2 Orthogonal Projection
 Reading: Orthogonal Projection
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 3.2.1 Pythagorean Theorem
 3.2.2 Parallelogram Law
 3.2.3 Best Mean-Square Approximation

3.3 Dirichlet’s and Fejer’s Kernels
 Reading: Dirichlet’s and Fejer’s Kernels
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 3.3.1 Partial Sums as Convolution with Dirichlet’s Kernels
 3.3.2 Cesaro Means and Derivation of Fejer’s Kernels
 3.3.3 Positive Approximate Identity

3.4 Completeness
 Reading: Completeness
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 3.4.1 Pointwise and Uniform Convergence
 3.4.2 Trigonometric Approximation

3.5 Parseval’s Identity
 Reading: Parseval’s Identity
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 3.5.1 Derivation
 3.5.2 The Basel Problem and Fourier Method
 3.5.3 Bernoulli Numbers and Euler’s Solution

Unit 4: TimeFrequency Analysis
The Fourier transform (FT) introduced in this unit is the analogue of the sequence of Fourier coefficients of the Fourier series discussed in Unit 3, in that the normalized integral over the “circle” in the definition of the Fourier coefficients is replaced by an integral over the real line to define the FT. While the Fourier series recovers the function it represents from the sequence of Fourier coefficients, it is nontrivial to justify the validity of the seemingly obvious formulation of the inverse Fourier transform (IFT) for recovering a function from its FT. This unit will introduce the notions of the localized FT (LFT) and localized IFT (LIFT). We will also establish an identity that governs the relationship between the LFT and LIFT when the sliding frequency-window function for the LIFT is the complex conjugate of the Fourier transform of the sliding time-window function for the LFT. Because the Fourier transform of a Gaussian function remains a Gaussian function, any Gaussian function can be used as a sliding time-window for simultaneous time-frequency localization. This same identity is also applied to justify the validity of the formulation of the IFT by taking the variance of the sliding Gaussian time-window to zero. Another important consequence of this identity is the Uncertainty Principle, which states that the Gaussian is the only window function that provides optimal simultaneous time-frequency localization, with the area of the time-frequency window equal to 2. Discretization of any frequency-modulated sliding time-window of the LFT at the integer lattice yields a family of local time-frequency basis functions.
Unfortunately, the Balian-Low restriction prevents any sliding time-window function, including the Gaussian, from attaining a finite area of the time-frequency window while providing stability for the family of local time-frequency basis functions, called a “frame.” This unit ends with a discussion of a way to avoid the Balian-Low restriction by replacing the frequency modulation of the sliding time-window function with modulation by certain cosine functions. More precisely, a family of stable local cosine basis functions, sometimes called Malvar “wavelets,” is introduced to achieve good time-frequency localization. As an application, the undesirable blocky artifacts of highly compressed JPEG pictures, as discussed in Unit 2, can be removed by replacing the 8-point DCT with appropriate discretized local cosine basis functions for each of the 8-by-8 image tiles.
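The key fact that the Fourier transform of a Gaussian is again a Gaussian can be verified numerically. This is a minimal sketch assuming Python with NumPy, using the convention F(ω) = ∫ f(x) e^(−iωx) dx:

```python
import numpy as np

# With F(w) = integral f(x)*exp(-i*w*x) dx, the Fourier transform of the
# Gaussian exp(-x^2/(2*s^2)) is s*sqrt(2*pi)*exp(-s^2*w^2/2):
# a Gaussian transforms into another Gaussian.
s = 1.3
x = np.linspace(-40.0, 40.0, 20001)
dx = x[1] - x[0]
f = np.exp(-x**2 / (2 * s**2))
for w in [0.0, 0.5, 1.0, 2.0]:
    numeric = np.sum(f * np.exp(-1j * w * x)) * dx   # Riemann sum for the FT
    analytic = s * np.sqrt(2 * np.pi) * np.exp(-s**2 * w**2 / 2)
    assert abs(numeric - analytic) < 1e-8
```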
Unit 4 Learning Outcomes

4.1 Fourier Transform
 Lecture: MIT: Professor Gilbert Strang’s Computational Science and Engineering: “Lecture 33: Filters, Fourier Integral Transform” and “Lecture 34: Fourier Integral Transform (Part 2)”
Links: MIT: Professor Gilbert Strang’s Computational Science and Engineering: “Lecture 33: Filters, Fourier Integral Transform” (YouTube) and “Lecture 34: Fourier Integral Transform (Part 2)” (YouTube)
Instructions: Please click on the links above, and view these video lectures to learn more about the essence of the Fourier Transform and filtering. Please note that these videos cover the topics outlined for subsubunits 4.1.1 and 4.1.2.
Viewing these lectures and pausing to take notes should take approximately 2 hours and 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

4.1.1 Definition and Essence of the Fourier Transform
 Reading: University of Minnesota: Professor Peter Olver’s Introduction to Partial Differential Equations: “Chapter 8: Fourier Transforms”
Link: University of Minnesota: Professor Peter Olver’s Introduction to Partial Differential Equations: “Chapter 8: Fourier Transforms” (PDF)
Instructions: Please click on the link above, and then select the link to “Chapter 8: Fourier Transforms” to download the PDF file. Study pages 283–298 on the concept and properties of the Fourier Transform. Note that this reading covers the topics outlined for subsubunits 4.1.1 and 4.1.2.
Studying this reading should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

4.1.2 Properties of the Fourier Transform
Note: This topic is covered by the reading and lectures assigned below subsubunit 4.1.1.

4.1.3 Sampling Theorem
 Lecture: MIT: Professor Gilbert Strang’s Computational Science and Engineering: “Lecture 36: Sampling Theorem”
Link: MIT: Professor Gilbert Strang’s Computational Science and Engineering: “Lecture 36: Sampling Theorem” (YouTube)
Instructions: Please click on the link above, and view this lecture to learn about the application of the Fourier transform and Fourier series to deriving and understanding the essence of the Sampling Theorem.
Viewing this lecture and pausing to take notes should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
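The content of the Sampling Theorem can be seen directly in code: a band-limited signal is recovered exactly from its samples by sinc interpolation. A minimal sketch assuming Python with NumPy (the signal choice is ours, for illustration):

```python
import numpy as np

# Shannon sampling theorem: a signal band-limited to |f| < fs/2 is
# recovered from its samples x(n*T), T = 1/fs, by sinc interpolation:
#   x(t) = sum_n x(n*T) * sinc((t - n*T)/T).
# np.sinc is the normalized sinc, sin(pi*u)/(pi*u).
def x(t):
    return np.sinc(t) ** 2   # band-limited: its spectrum vanishes for |f| > 1

fs = 4.0                     # well above the Nyquist rate of 2 Hz
T = 1.0 / fs
n = np.arange(-400, 401)
samples = x(n * T)
for t in [0.1, 0.37, 1.25]:
    recon = np.sum(samples * np.sinc((t - n * T) / T))
    assert abs(recon - x(t)) < 1e-4   # only truncation of the sum leaves a tiny error
```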

4.1.4 Applications of the Fourier Transform
 Lecture: YouTube: Stanford University: Department of Electrical Engineering’s “Lecture 1: The Fourier Transform and Its Applications,” “Lecture 6: The Fourier Transform and Its Applications,” and “Lecture 8: The Fourier Transform and Its Applications”
Links: YouTube: Stanford University: Department of Electrical Engineering’s “Lecture 1: The Fourier Transform and Its Applications” (YouTube), “Lecture 6: The Fourier Transform and Its Applications” (YouTube), and “Lecture 8: The Fourier Transform and Its Applications” (YouTube)
Instructions: Please click on the links above, and view the video lectures to learn about applications of the Fourier Transform.
Viewing these video lectures and pausing to take notes should take approximately 3 hours and 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

4.2 Convolution Filter and Gaussian Kernel

4.2.1 Convolution Filter
 Lecture: MIT: Professor Gilbert Strang’s Computational Science and Engineering: “Lecture 32: Convolution (Part 2), Filtering”
Link: MIT: Professor Gilbert Strang’s Computational Science and Engineering: “Lecture 32: Convolution (Part 2), Filtering” (YouTube)
Instructions: Please click on the link above, and view the video lecture on the convolution filter.
Viewing this lecture and pausing to take notes should take approximately 1 hour and 15 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
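A convolution filter in its simplest low-pass form, a moving average, can be sketched in a few lines; this illustration assumes Python with NumPy and a made-up noisy sinusoid:

```python
import numpy as np

# A low-pass convolution filter: each output sample is a weighted
# average of nearby input samples, here a 5-tap moving average.
kernel = np.ones(5) / 5.0
t = np.arange(200)
clean = np.sin(2 * np.pi * t / 50)
rng = np.random.default_rng(0)
noisy = clean + 0.2 * rng.standard_normal(t.size)
smoothed = np.convolve(noisy, kernel, mode="same")
# Averaging shrinks the noise: away from the boundary, the smoothed
# signal sits closer to the clean one than the noisy input does.
err_noisy = np.mean((noisy[10:-10] - clean[10:-10]) ** 2)
err_smooth = np.mean((smoothed[10:-10] - clean[10:-10]) ** 2)
assert err_smooth < err_noisy
```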

4.2.2 Fourier Transform of the Gaussian
 Reading: Fourier Transform of the Gaussian
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

4.2.3 Inverse Fourier Transform
 Reading: Inverse Fourier Transform
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

4.3 Localized Fourier Transform
 Reading: Localized Fourier Transform
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 4.3.1 Short-Time Fourier Transform (STFT)
 4.3.2 Gabor Transform
 4.3.3 Inverse of Localized Fourier Transform

4.4 Uncertainty Principle
 Reading: Uncertainty Principle
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 4.4.1 Time-Frequency Localization Window Measurement
 4.4.2 Gaussian as Optimal Time-Frequency Window
 4.4.3 Derivation of the Uncertainty Principle

4.5 Time-Frequency Bases
 Reading: Time-Frequency Bases
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 4.5.1 Balian-Low Restriction
 4.5.2 Frames
 4.5.3 Localized Cosine Basis
 4.5.4 Malvar Wavelets

Unit 5: PDE Methods
When the variance of the Gaussian convolution filter is replaced by ct, where c is a fixed positive constant and t is the time parameter, the convolution filtering of any input function f(x) describes the heat diffusion process with initial temperature f(x). More precisely, if u(x, t) denotes the temperature at position x and time t, then u(x, t), obtained by Gaussian convolution of the initial temperature f(x), is the solution of the heat diffusion PDE with initial condition u(x, 0) = f(x), where the constant c is the heat conductivity constant. This elegant example has little practical value, because the spatial domain is the entire x-axis, but it serves as a convincing motivation for the study of the linear PDE methods covered in this unit. To solve the same heat diffusion PDE as in this example, but with the initial heat source given on a bounded interval and with insulation at the two endpoints to avoid any heat loss, the method of “separation of variables” is introduced. This method separates the PDE into two ordinary differential equations (ODEs) that can be easily solved by appealing to the eigenvalue problem, studied in Unit 1, for linear differential operators, with eigenfunctions given by cosine functions in x whose frequencies are governed by the eigenvalues, which also dictate the rate of exponential decay in the time variable t. Superposition of the products of these eigenfunctions, with coefficients given by the Fourier coefficients of the Fourier series representation of the initial heat content, studied in Unit 3, solves this heat equation. In this unit, you will study an extension of the method of separation of variables to boundary value problems on a rectangular spatial domain, as well as the solution of other popular linear PDEs. The diffusion process can also be applied to image noise reduction.
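The separation-of-variables solution described above can be sketched numerically. This is a minimal illustration assuming Python with NumPy; the grid size, the choice f(x) = x, and the truncation at 60 modes are ours:

```python
import numpy as np

# Separation of variables for u_t = c*u_xx on [0, pi] with insulated
# endpoints (u_x = 0 at x = 0 and x = pi): the spatial eigenfunctions
# are cos(n*x), so
#   u(x, t) = a_0/2 + sum_n a_n * exp(-c*n^2*t) * cos(n*x),
# where a_n are the Fourier cosine coefficients of the initial data f.
c = 0.5
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]

def integrate(y):
    # composite trapezoid rule on the grid x
    return float(np.sum((y[:-1] + y[1:]) * 0.5) * dx)

f = x.copy()  # initial temperature f(x) = x
a = [2.0 / np.pi * integrate(f * np.cos(n * x)) for n in range(60)]

def u(t):
    s = np.full_like(x, a[0] / 2.0)
    for n in range(1, len(a)):
        s += a[n] * np.exp(-c * n**2 * t) * np.cos(n * x)
    return s

# Insulated endpoints conserve total heat: the average temperature stays
# at the average of f (= pi/2), and u flattens to that constant as t grows.
for t in [0.0, 0.1, 1.0]:
    assert abs(integrate(u(t)) / np.pi - np.pi / 2) < 1e-3
assert np.max(np.abs(u(20.0) - np.pi / 2)) < 1e-3
```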
Unit 5 Learning Outcomes

5.1 From Gaussian Convolution to Diffusion Process
 Reading: From Gaussian Convolution to Diffusion Process
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 5.1.1 Gaussian as Solution for Delta Heat Source
 5.1.2 Gaussian Convolution as Solution of Heat Equation on the Real Line
 5.1.3 Gaussian Convolution as Solution of Heat Equation on the Euclidean Space

5.2 The Method of Separation of Variables
 Reading: University of Minnesota: Peter Olver’s Introduction to Partial Differential Equations: “Chapter 4: Separation of Variables: Introduction and the Diffusion and Heat Equations”
Link: University of Minnesota: Professor Peter Olver’s Introduction to Partial Differential Equations: “Chapter 4: Separation of Variables: Introduction and the Diffusion and Heat Equations” (PDF)
Instructions: Please click on the link above, and then select the link for “Chapter 4: Separation of Variables” to download the text. Study Chapter 4 on pages 103–109 to learn about the method of separation of variables (for the special case of one spatial variable), particularly for solving the heat equation.
Studying this reading should take approximately 1 hour and 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Reading: Cambridge University Press: Professor Marcus Pivato’s Linear Partial Differential Equations and Fourier Theory
Link: Cambridge University Press: Professor Marcus Pivato’s Linear Partial Differential Equations and Fourier Theory (PDF)
Instructions: Please click on the link above to download the PDF of the text. Study Part I (on some motivating examples) and Part II (on the more general theory), particularly to learn about the abstract theory as a companion to the study of the reading by Professor Olver.
Studying this reading should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

5.2.1 Separation of Time and Spatial Variables
 Reading: Separation of Time and Spatial Variables
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

5.2.2 Superposition Solution
 Reading: Superposition Solution
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

5.3 Fourier Series Solution
 Reading: University of Minnesota: Peter Olver’s Introduction to Partial Differential Equations: “Chapter 4: Separation of Variables: Introduction and the Diffusion and Heat Equations”
Link: University of Minnesota: Professor Peter Olver’s Introduction to Partial Differential Equations: “Chapter 4: Separation of Variables: Introduction and the Diffusion and Heat Equations” (PDF)
Instructions: Please click on the link above, and then select the link to download “Chapter 4: Separation of Variables.” Study the Fourier series solution on pages 109–140.
Studying this reading should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Reading: Cambridge University Press: Professor Marcus Pivato’s Linear Partial Differential Equations and Fourier Theory
Link: Cambridge University Press: Professor Marcus Pivato’s Linear Partial Differential Equations and Fourier Theory (PDF)
Instructions: Please click on the link above to access the PDF. Study Part III (on Fourier series solutions) and Part IV (on Boundary value solutions).
Studying this reading should take approximately 3 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

5.3.1 Fourier Series Representation for Spatial Solution
Note: This topic is covered by the readings assigned below subunit 5.3.

5.3.2 Extension to Higher Dimensional Spatial Domain
 Reading: Extension to Higher Dimensional Spatial Domain
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.

5.4 Boundary Value Problems
 Reading: Boundary Value Problems
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 5.4.1 The Neumann Boundary Value Problem
 5.4.2 Anisotropic Diffusion
 5.4.3 Solution in Terms of Eigenvalue Problems

5.5 Application to Image Denoising
 Reading: Application to Image Denoising
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 5.5.1 Diffusion as Quantizer for Image Compression
 5.5.2 Diffusion for Noise Reduction
 5.5.3 Enhanced JPEG Image Compression

Unit 6: Wavelet Methods
This final unit is concerned with the study of multiscale data analysis. It will introduce you to the notions of multiresolution analysis (MRA) and the wavelet transform (WT), as well as the associated wavelet decomposition and reconstruction algorithms. The WT of a finite Fourier series is discussed to introduce the relationship between scale and frequency, in particular with a group of frequencies called a frequency band. We will also derive the inversion formula for recovering a function from its WT. The MRA architecture is demonstrated using B-spline functions. Construction of wavelets by appealing to the MRA is achieved by matrix extension. To reduce the computational complexity of the wavelet decomposition and reconstruction algorithms, you will also study lifting schemes. To extend the wavelet transform to functions of two variables, we use tensor products of the wavelets with the corresponding scaling functions of the MRA. This unit ends with embedding a digital image in the wavelet domain for image manipulation, such as progressive transmission, image edge extraction, and image enhancement.
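As a minimal, concrete instance of the decomposition and reconstruction algorithms named above, here is one level of the Haar wavelet transform. This sketch assumes Python with NumPy; Haar is the simplest orthonormal wavelet, chosen here for illustration rather than being the unit’s specific construction:

```python
import numpy as np

# One level of the Haar wavelet transform: pairwise averages give the
# coarse (low-pass) part, pairwise differences the detail (high-pass)
# part; together they reconstruct the signal exactly.
def haar_decompose(x):
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
a, d = haar_decompose(x)
assert np.allclose(haar_reconstruct(a, d), x)   # perfect reconstruction
# Orthonormal scaling preserves energy (Parseval across scales).
assert np.isclose(np.sum(a**2) + np.sum(d**2), np.sum(x**2))
```

Iterating the decomposition on the coarse part yields the multilevel pyramid used in wavelet image coding.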
Unit 6 Learning Outcomes

6.1 Time-Scale Analysis
 Reading: Time-Scale Analysis
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 6.1.1 Wavelet Transform
 6.1.2 Frequency versus Scale
 6.1.3 Partition into Frequency Bands

6.2 Multiresolution Analysis (MRA)
 Reading: Multiresolution Analysis (MRA)
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 6.2.1 Function Refinement
 6.2.2 B-spline Examples
 6.2.3 The MRA Architecture

6.3 Wavelet Construction
 Reading: Wavelet Construction
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or aware of a resource that could be used here, please submit it here.
 6.3.1 Matrix Extension
 6.3.2 Quadrature Mirror Filter
 6.3.3 Orthogonal and Bi-Orthogonal Wavelets

6.4 Wavelet Algorithms
 Reading: MIT: Professor Gilbert Strang’s “Lecture Notes: Handouts 1–16”
Link: MIT: Professor Gilbert Strang’s “Lecture Notes: Handouts 1–16” (PDF)
Instructions: Please click on the link above, and select the PDF links for the slides and handouts for 1–16. Study these lecture notes and handouts to learn about computational schemes for wavelet decomposition and reconstruction, filter banks, and the lifting scheme.
Studying these lecture slides and reflecting on the material should take approximately 4 hours.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

6.4.1 Wavelet Decomposition and Reconstruction
 Reading: Wavelet Decomposition and Reconstruction
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or are aware of a resource that could be used here, please submit it here.

6.4.2 Filter Banks
 Reading: Filter Banks
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or are aware of a resource that could be used here, please submit it here.

6.4.3 The Lifting Scheme
 Reading: The Lifting Scheme
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or are aware of a resource that could be used here, please submit it here.
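As a minimal illustration of the lifting idea (split the samples into even and odd subsequences, predict each odd sample from its even neighbor, then update the even samples so they carry local averages), here is the Haar transform in lifted form; the function names are illustrative only, not from the course materials.

```python
# Haar transform via the lifting scheme.  The in-place split/predict/update
# structure is what reduces the arithmetic cost relative to the direct
# filter-bank computation, and each step is trivially invertible.

def lift_forward(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def lift_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]   # undo update
    odd = [e + d for e, d in zip(even, detail)]          # undo predict
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

signal = [4.0, 6.0, 10.0, 12.0]
approx, detail = lift_forward(signal)
restored = lift_inverse(approx, detail)
```

Because the inverse simply runs the same steps backward with the signs flipped, perfect reconstruction holds by construction, even if the predict and update rules are replaced by more elaborate ones.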

6.5 Application to Image Coding
 Reading: Application to Image Coding
The Saylor Foundation does not yet have materials for this portion of the course. If you are interested in contributing your content to fill this gap or are aware of a resource that could be used here, please submit it here.
 6.5.1 Mapping Digital Images to the Wavelet Domain
 6.5.2 Progressive Image Transmission
 6.5.3 Lossless JPEG2000 Compression
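To hint at how an image is mapped into the wavelet domain, the sketch below applies the tensor-product (separable) construction mentioned in the unit overview: a 1D Haar split along the rows and then along the columns produces the four subbands (LL, LH, HL, HH) used in wavelet image coding. The function names and the toy image are illustrative assumptions, not part of the course materials.

```python
# Separable (tensor-product) 2D transform: apply a 1D Haar split to each
# row, then to each column of the result.  The top-left quadrant (LL)
# holds the coarse image; the other three quadrants hold edge details.
import math

def haar1d(x):
    """One Haar split, approximation half followed by detail half."""
    s = 1 / math.sqrt(2)
    half = len(x) // 2
    return ([(x[2 * i] + x[2 * i + 1]) * s for i in range(half)] +
            [(x[2 * i] - x[2 * i + 1]) * s for i in range(half)])

def haar2d(img):
    rows = [haar1d(r) for r in img]            # transform the rows
    cols = [haar1d(list(c)) for c in zip(*rows)]  # transpose, transform columns
    return [list(r) for r in zip(*cols)]       # transpose back

tiny = [[1.0, 1.0, 2.0, 2.0],
        [1.0, 1.0, 2.0, 2.0],
        [3.0, 3.0, 4.0, 4.0],
        [3.0, 3.0, 4.0, 4.0]]
coeffs = haar2d(tiny)
```

On this piecewise-constant toy image all detail subbands are zero, which is exactly the sparsity that progressive transmission and JPEG2000-style coding exploit.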