Topics in Applied Mathematics

Purpose of Course

Mathematics was called the “queen of sciences” by the “prince of mathematicians,” Carl Friedrich Gauss, one of the greatest mathematicians of all time.  Indeed, the name of Gauss is associated with essentially every area of mathematics, and in this respect there is really no clear boundary between “pure mathematics” and “applied mathematics.”  To ensure financial independence, Gauss settled on a stable career in astronomy, one of the oldest sciences and perhaps the most popular one during the eighteenth and nineteenth centuries.  In his study of celestial motion and orbits, and in a diversity of disciplines later in his career, including (in chronological order) geodesy, magnetism, dioptrics, and actuarial science, Gauss developed a vast volume of mathematical methods and tools that remain instrumental to our study of applied mathematics today.

During the twentieth century, with the exciting development of quantum field theory, the prosperity of the aviation industry, the bullish activity of financial-market trading, and so forth, the “queen of sciences” turned her attention to the theoretical development and numerical solution of partial differential equations (PDEs).  Indeed, the non-relativistic modeling of quantum mechanics is described by the Schrödinger equation; fluid flow, which extends Newtonian physics by incorporating motion and stress, is modeled by the Navier-Stokes equations; and stock-option trading with minimum risk can be modeled by the Black-Scholes equation.  All of these equations are PDEs.  In general, PDEs describe a wide variety of phenomena, including heat diffusion, sound-wave propagation, electromagnetic radiation, vibration, electrostatics, electrodynamics, fluid flow, and elasticity, to name only a few.  For this reason, the theoretical and numerical development of PDEs has been considered the core of applied mathematics, at least in the academic environment.

On the other hand, over the past decade we have faced a rapidly increasing volume of information to be processed and understood.  For instance, the popularity and significant impact of the open education movement (OEM) has contributed an enormous amount of educational information to the web, material that is difficult to sort out due to unavoidable redundancy, occasional contradiction, extreme variation in quality, and even erroneous opinions.  This motivated the founding of the Saylor Foundation courseware, which provides some of the most valuable, and certainly more reliable, high-quality educational materials, with end-to-end solutions, free to all.  With recent advances in various high-tech fields and the popularity of social networking, the exponential growth of easily accessible information will certainly continue well into the twenty-first century, and the bottleneck created by this information explosion will demand innovative solutions from the scientific and engineering communities, particularly from technologists with a strong background in and understanding of applied mathematics.  In this regard, mathematics extends its influence and impact by providing innovative theory, methods, and algorithms to virtually every discipline, far beyond science and engineering, for processing, transmitting, receiving, understanding, and visualizing data sets, which may be very large or live in a high-dimensional space.

Of course, basic mathematical tools such as PDE methods and the least-squares approximation introduced by Gauss remain at the core of the toolbox of applied mathematics, but other innovations and methods must be integrated into this toolbox as well.  One of the most essential ideas is the notion of the “frequency” content of data.  Joseph Fourier, a contemporary of Gauss, introduced this important concept into the study of physical phenomena through his innovation of trigonometric series representations, along with powerful mathematical theory and methods that significantly expanded the toolbox of applied mathematics.  The frequency content of a given data set facilitates the processing and understanding of the information it carries.  Another important idea is the “multi-scale” structure of data sets.  Less than three decades ago, the birth of another exciting mathematical subject, called “wavelets,” made it possible to move a data set into the wavelet domain for multi-scale processing as well.  Unfortunately, some essential basic mathematical tools for information processing are not commonly taught in university courses on applied mathematics.  Among the commonly missing topics, those addressed in this Saylor course, MA304, include information coding, data dimensionality reduction, data compression, and image manipulation.

The objective of this course is to study the basic theory and methods in the toolbox of the core of applied mathematics, with a central theme of “information processing” and an emphasis on the manipulation of digital image data.  The linear algebra of the Saylor Foundation’s MA211 and MA212 is extended to “linear analysis,” with applications to principal component analysis (PCA) and data dimensionality reduction (DDR).  For data compression, the notion of entropy is introduced to quantify coding efficiency, as governed by Shannon’s Noiseless Coding theorem.  The discrete Fourier transform (DFT), an efficient algorithm for computing it called the fast Fourier transform (FFT), and a real-valued version of the DFT called the discrete cosine transform (DCT) are discussed, with application to extracting the frequency content of a given discrete data set; this extraction facilitates reduction of the entropy and thus significant improvement of the coding efficiency.  The DFT can be viewed as a discrete version of the Fourier series, which is studied in some depth, with emphasis on orthogonal projection, the positive approximate-identity property of the Fejer kernels, Parseval’s identity, and the concept of completeness.  The integral version of the sequence of Fourier coefficients is called the Fourier transform (FT).  In analogy with the Fourier series, the formulation of the inverse Fourier transform (IFT) is derived by applying the Gaussian function as a sliding time-window for simultaneous time-frequency localization, with optimality guaranteed by the Uncertainty Principle.  Local time-frequency basis functions are also introduced in this course, by discretizing the frequency-modulated sliding time-window function at the integer lattice points.
Replacing frequency modulation with cosine modulation avoids the Balian-Low stability restriction on the local time-frequency basis functions, with application to eliminating the blocky artifacts caused by quantization of the tiled DCT in image compression.  Gaussian convolution filtering also provides the solution of the heat (partial differential) equation with the real line as the spatial domain.  When this spatial domain is replaced by a bounded interval, the method of separation of variables is applied to separate the PDE into two ordinary differential equations (ODEs).  Furthermore, when the two end-points of the interval are insulated against heat loss, the spatial ODE is solved by finding eigenvalue-eigenfunction pairs, with the same eigenvalues governing the exponential rate of decay of the solution of the time ODE.  Superposition of the products of the spatial and time solutions over all eigenvalues solves the heat PDE, when the Fourier coefficients of the initial heat content are used as the coefficients of the terms of the superposition.  This method is extended to the two-dimensional rectangular spatial domain, with application to image noise reduction, and is also applied to solving other typical linear PDEs.  Finally, multi-scale data analysis is introduced and compared with the Fourier frequency approach, and the architecture of multiresolution analysis (MRA) is applied to the construction of wavelets and to the formulation of the multi-scale wavelet decomposition and reconstruction algorithms.  The lifting scheme is also introduced to reduce the computational complexity of these algorithms, with applications to digital image manipulation for such tasks as progressive transmission, image edge extraction, and image enhancement.
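As a small taste of the DCT material described above, the sketch below (in Python with NumPy; it is illustrative only and not part of the course materials) builds the orthonormal 8-point DCT-II matrix used in tiled image coding, checks that its inverse is simply its transpose, and applies the two-dimensional DCT to an 8-by-8 tile.

```python
import numpy as np

# Orthonormal 8-point DCT-II matrix (a standard construction, shown here as an
# illustrative sketch): row k = 0 is scaled by sqrt(1/N), the rest by sqrt(2/N).
N = 8
C = np.array([[np.sqrt((1 if k == 0 else 2) / N)
               * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
               for n in range(N)] for k in range(N)])

# Orthogonality: C @ C.T is the identity, so the inverse DCT is just C.T.
print(np.allclose(C @ C.T, np.eye(N)))  # True

# The 2-D DCT of an 8x8 image tile A is C A C^T; for a smooth tile most of the
# energy moves to the low-frequency (upper-left) coefficients.
A = np.outer(np.arange(8.0), np.ones(8))  # a smooth linear-ramp tile
B = C @ A @ C.T
```

Applying `C.T @ B @ C` recovers the tile exactly; quantizing `B` before inversion is the lossy step of DCT-based compression discussed in Unit 2.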

Course Information

Welcome to MA304: Topics in Applied Mathematics.  Below, please find some information on the course and its requirements.
 
Primary Resources: This course comprises a range of different free, online materials.  However, the course makes primary use of the following materials:
Requirements for Completion: In order to complete this course, you will need to work through each unit and all of its assigned materials.  You will also need to complete:
  • The Final Exam
Note that you will only receive an official grade on your Final Exam.  However, in order to adequately prepare for this exam, you will need to work through the resources in each unit.
 
In order to “pass” this course, you will need to earn a 70% or higher on the Final Exam.  Your score on the exam will be tabulated as soon as you complete it.  If you do not pass the exam, you may take it again.
 
Time Commitment: Each unit includes a “time advisory” that lists the amount of time you should spend on each subunit.  These should help you plan your time accordingly.  It may be useful to take a look at these time advisories and to determine how much time you have over the next few weeks to complete each unit, and then to set goals for yourself.  For example, Unit 1 should take you 12.5 hours.  Perhaps you can sit down with your calendar and decide to complete subunit 1.1 (a total of 3.75 hours) on Monday night; subunit 1.2 (a total of 3.75 hours) on Tuesday night; etc.
 
Tips/Suggestions: As noted in the “Course Requirements,” there are several mathematics prerequisites for this course.  If you are struggling with the mathematics as you progress through this course, consider taking a break and revisiting the applicable course listed as a prerequisite.
           
As you read, take careful notes on a separate sheet of paper.  Mark down any important equations, formulas, and definitions that stand out to you.  These notes will serve as a useful review as you prepare and study for your Final Exam. 

Learning Outcomes

Upon successful completion of this course, the student will be able to:
  • Compute singular values of rectangular and singular square matrices.
  • Perform singular value decomposition of rectangular matrices.
  • Solve an arbitrary system of linear equations.
  • Compute linear least-squares estimation.
  • Compute principal components.
  • Reduce data dimension.
  • Compute the DFT of vectors.
  • Compute the inverse DFT.
  • Compute the DCT of vectors.
  • Compute the inverse DCT.
  • Compute the two-dimensional DCT of a matrix data array.
  • Compute histograms of data sets.
  • Formulate probability distributions based on histograms.
  • Compute entropy from a probability distribution.
  • Construct Huffman trees and Huffman codes.
  • Perform color transforms.
  • Outline the JPEG image compression scheme.
  • Explain video compression. 
  • Compute Fourier series.
  • Compute Fourier cosine series.
  • Describe the importance of the Dirichlet and Fejer kernels.
  • Apply the property of positive approximate identity to prove convergence theorems.
  • Compute mean-square error of approximation by partial sums of Fourier series.
  • Solve the Basel problem and its extension to higher even powers.
  • Compute the Fourier transform of some simple functions.
  • Compute the Fourier transform of the affine transformation of some simple functions.
  • Compute the convolution of some simple functions with certain filters.
  • Describe and apply the important Fourier transform property of mapping the convolution of two functions to the product of their Fourier transforms.
  • Explain and apply the concept of localized Fourier and inverse Fourier transforms.
  • Formulate the Fourier transform of a general Gaussian function.
  • Explain the Uncertainty Principle.
  • Compute the Gabor transform of some simple functions.
  • Formulate local time-frequency basis functions from a given sliding time-window function.
  • Formulate local cosine basis functions from a given sliding time-window function.
  • Apply the Gaussian to solve the heat equation with the entire d-dimensional Euclidean space as the spatial domain, where d is any positive integer.
  • Apply the method of separation of variables to separate a given linear PDE into a finite family of ODEs.
  • Solve the corresponding eigenvalue problems for the spatial ODEs.
  • Apply the Fourier series of the input function to formulate the superposition solution of boundary value problems.
  • Give the relationship between scale and frequency for a given wavelet filter.
  • Perform matrix extension to compute wavelet filters.
  • Compute multi-scale data representation by applying the wavelet decomposition algorithm for the Haar wavelet.
  • Identify the order of vanishing moments of a given wavelet.
  • Apply the wavelet decomposition and reconstruction algorithms to multi-scale data analysis.
  • Apply wavelets to digital image manipulation.

Course Requirements

In order to take this course, you must:

√    Have access to a computer.

√    Have continuous broadband Internet access.

√    Have the ability/permission to install plug-ins (e.g. Adobe Reader or Flash) and software.

√    Have the ability to download and save files and documents to a computer.

√    Have the ability to open Microsoft Office files and documents (.doc, .ppt, .xls, etc.).

√    Have competency in the English language.

√    Have read the Saylor Student Handbook.

√    Have completed the following courses from “The Core Program” of the mathematics major: MA101: Single-Variable Calculus I; MA102: Single-Variable Calculus II; MA103: Multivariable Calculus; MA211: Linear Algebra; MA221: Differential Equations; and MA241: Real Analysis I.

√    Have completed the following courses from the “Advanced Mathematics” section of the mathematics major: MA212: Linear Algebra II; MA243: Complex Analysis; and MA222: Introduction to Partial Differential Equations.

Unit Outline


  • Unit 1: Linear Analysis  

    In Unit 1, the linear algebra studied in the Saylor Foundation’s MA211 and MA212 is extended to linear analysis, in that matrices are generalized to linear operators, including certain differential operators.  In this unit, you will study the inner product and its corresponding norm defined on a vector space, along with their important properties, which depend on the Cauchy-Schwarz inequality.  In addition, you will review the eigenvalue problem, and you will study singular values with an application to spectral decomposition.  This leads to the discussion of the singular value decomposition (SVD) of rectangular matrices, which allows us to generalize the inversion of nonsingular matrices, studied in the Saylor course MA211, to the “inversion” of rectangular and singular square matrices, with applications to solving arbitrary systems of linear equations and to the method of principal component analysis (PCA).  As an application of PCA, the formulation and theory of data dimensionality reduction (DDR) will also be studied in this first unit.
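The SVD topics of this unit can be previewed numerically.  The sketch below (using NumPy; the matrix and vector are hypothetical examples, not taken from the course materials) computes singular values, solves a linear system via the pseudoinverse, and checks the best rank-1 approximation property.

```python
import numpy as np

# A 2x3 rectangular matrix with singular values 5 and 3 (an illustrative example).
A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])

# Full SVD: A = U diag(s) V^T, with s the singular values in decreasing order.
U, s, Vt = np.linalg.svd(A)

# Minimum-norm least-squares solution of A x = b via the pseudoinverse A^+.
b = np.array([1.0, 0.0])
x = np.linalg.pinv(A) @ b
print(np.allclose(A @ x, b))  # the system is consistent, so the residual is ~0

# Best rank-1 approximation (Eckart-Young): keep only the largest singular value;
# the spectral-norm error then equals the second singular value.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
print(np.isclose(np.linalg.norm(A - A1, 2), s[1]))
```

The same truncation idea, keeping only the leading singular components, underlies the PCA-based data dimensionality reduction of subunit 1.5.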

  • 1.1 Inner Product and Norm Measurements  
  • 1.1.1 Definition of Inner Product  
  • 1.1.2 Cauchy-Schwarz Inequality  
  • 1.1.3 Norm Measurement and Angle between Vectors  
  • 1.1.4 Gram-Schmidt Orthogonalization Process  
  • 1.2 Eigenvalue Problems  
  • 1.2.1 Linear Transformations  
  • 1.2.2 Bounded Linear Functionals and Operators  
  • 1.2.3 Eigenvalues and Eigenspaces  
  • 1.2.4 Self-Adjoint Positive Definite Operators  
  • 1.3 Singular Value Decomposition (SVD)  
  • 1.3.1 Normal Operators and Spectral Decomposition  
  • 1.3.2 Singular Values  

    Note: This topic is covered by the reading assigned below sub-subunit 1.3.1.

  • 1.3.3 Reduced Singular Value Decomposition  

    Note: This topic is partially covered by the reading assigned below sub-subunit 1.3.1. 

    • Reading: Reduced Singular Value Decomposition

      The Saylor Foundation does not yet have materials for this portion of the course.

  • 1.3.4 Full Singular Value Decomposition  

  • 1.4 Principal Component Analysis (PCA)  
  • 1.4.1 Frobenius Norm Measurement  

  • 1.4.2 Principal Components for Data-Dependent Basis  
  • 1.4.3 Pseudoinverses  
  • 1.4.4 Minimum-Norm Least-Squares Estimation  

  • 1.5 Application to Data Dimensionality Reduction  
  • 1.5.1 Representation of Matrices by Sum of Norm-1 Matrices  
  • 1.5.2 Approximation by Matrices of Lower Ranks  
  • 1.5.3 Motivation to Data-Dimensionality Reduction  
  • 1.5.4 Principal Components as Basis for Dimension-Reduced Data  
  • Unit 2: Data Compression  

    A natural continuation of the DDR studied in Unit 1 is the subject of data compression.  To prepare for this investigation, we will introduce the discrete Fourier transform (DFT) and, for efficient computation, the fast Fourier transform (FFT) for computing the n-point DFT when n is an integer power of 2.  A real-valued version of the DFT, called the discrete cosine transform (DCT), is derived for application to image compression.  The importance of the DFT and DCT lies in their ability to extract the frequency content of discrete data.  A given data set may be considered an information source; the histogram of the source gives rise to its probability distribution, which in turn is used to define the entropy of the source.  In this unit, you will study information coding, including Shannon’s Noiseless Coding theorem and the construction of the Huffman code, for reversible (or lossless) compression of the data.  To improve the compression efficiency significantly, the DCT followed by an appropriate quantization may be applied to reduce the entropy.  This procedure is irreversible, but it is certainly most effective, particularly for image and video compression.  In this regard, you will study the JPEG image compression standard and the video compression scheme.  This discussion includes the necessity of the color transform.
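The coding pipeline of this unit, histogram to probability distribution to entropy, followed by Huffman coding, can be sketched in a few lines of Python.  This is an illustrative sketch with a hypothetical source string, not code from the course materials; the Huffman construction merges the two least probable nodes repeatedly, and Shannon's Noiseless Coding theorem bounds the resulting average code length.

```python
import heapq
from collections import Counter
from math import log2

data = "ABRACADABRA"                    # a hypothetical information source
hist = Counter(data)                    # histogram of the source
n = len(data)
p = {sym: cnt / n for sym, cnt in hist.items()}   # probability distribution
H = -sum(q * log2(q) for q in p.values())         # entropy in bits per symbol

# Huffman tree: repeatedly merge the two least probable nodes.  Each heap node
# is (probability, tiebreak id, string of the symbols under that node).
heap = [(q, i, sym) for i, (sym, q) in enumerate(sorted(p.items()))]
heapq.heapify(heap)
codes = {sym: "" for sym in p}
i = len(heap)
while len(heap) > 1:
    q0, _, t0 = heapq.heappop(heap)
    q1, _, t1 = heapq.heappop(heap)
    for sym in t0:                      # prepend a bit to every leaf below
        codes[sym] = "0" + codes[sym]
    for sym in t1:
        codes[sym] = "1" + codes[sym]
    heapq.heappush(heap, (q0 + q1, i, t0 + t1))
    i += 1

avg_len = sum(p[sym] * len(codes[sym]) for sym in p)
print(H <= avg_len < H + 1)   # Shannon's bound on the Huffman code: True
```

The code is prefix-free by construction, so it is uniquely decodable, the property that the Kraft inequality of subunit 2.4.2 characterizes.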

  • 2.1 Discrete and Fast Fourier Transform (FFT)  
  • 2.1.1 Definition of DFT  

    Note: This topic is covered by the lectures assigned below subunit 2.1.

  • 2.1.2 Lanczos’ Matrix Factorization  

  • 2.1.3 FFT for Fast Computation  

    Note: This topic is covered by the lectures assigned below subunit 2.1.

  • 2.2 Discrete Cosine Transform (DCT)  

  • 2.2.1 Derivation of DCT from DFT  
  • 2.2.2 8-point DCT  
  • 2.2.3 2-dimensional DCT  
  • 2.3 Information Coding  

  • 2.3.1 Probability Distribution  
  • 2.3.2 Histogram  
  • 2.3.3 Entropy  
  • 2.3.4 Binary Codes  
  • 2.4 Data Compression Schemes  
  • 2.4.1 Lossless and Lossy Compression  

    Note: This topic is covered by the lecture assigned below subunit 2.4.

  • 2.4.2 Kraft Inequality  

  • 2.4.3 Huffman Coding Scheme  

    Note: This topic is covered by the lecture assigned below subunit 2.4.

  • 2.4.4 Noiseless Coding Theorem  

  • 2.5 Image and Video Compression Schemes and Standards  
  • 2.5.1 Image Compression Scheme  

    Note: This topic is covered by the lectures assigned below subunit 2.5. 

  • 2.5.2 Quantization  

    Note: This topic is covered by the lectures assigned below subunit 2.5.

  • 2.5.3 Huffman, DPCM, and Run-Length Coding  

    Note: This topic is covered by the lectures assigned below subunit 2.5. 

  • 2.5.4 Decoder  

    Note: This topic is covered by the lectures assigned below subunit 2.5. 

  • 2.5.5 I, P, and B Video Frames  

    Note: This topic is covered by the lectures assigned below subunit 2.5. 

  • 2.5.6 Macro-Blocks  

    Note: This topic is covered by the lectures assigned below subunit 2.5. 

  • 2.5.7 Motion Search and Compensation  

    Note: This topic is covered by the lectures assigned below subunit 2.5. 

  • Unit 3: Fourier Methods  

    The matrix transformation DFT introduced in Unit 2 is a discrete version of the Fourier series to be studied in this unit.  The theory of Fourier series is very rich.  For example, the partial sums of a Fourier series are orthogonal projections of the function the series represents onto the corresponding subspaces of trigonometric polynomials.  In addition, these partial sums can be formulated as convolutions of the function with the “Dirichlet kernels.”  Since averaging the Dirichlet kernels yields the “Fejer kernels,” which constitute a positive “approximate identity,” the sequence of trigonometric polynomials obtained by convolving the function with the Fejer kernels converges to the function itself in the mean-square sense.  Consequently, being orthogonal projections, the partial sums of the Fourier series also converge to the function represented by the series, again in the mean-square sense.  This introduces the concept of completeness, which is shown to be equivalent to Parseval’s identity, with such interesting applications as solving the famous Basel problem.  This unit explores the extension of the original Basel problem from powers of 2 to powers of 4 and powers of 6.  The completeness property of Fourier series will be applied to solving boundary value problems for PDEs in Unit 5.
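The Basel problem mentioned above states that the sum of the reciprocals of the squares equals pi squared over 6, a fact that follows from Parseval's identity applied to a suitable Fourier series; the even-power extension gives pi to the fourth over 90 for fourth powers.  The quick numerical check below (illustrative only) confirms both values by partial summation.

```python
import math

# Partial sums of the Basel series and its fourth-power extension.
N = 100000
s2 = sum(1.0 / n**2 for n in range(1, N + 1))   # -> pi^2 / 6
s4 = sum(1.0 / n**4 for n in range(1, N + 1))   # -> pi^4 / 90

# The tail of the 1/n^2 series is about 1/N; the 1/n^4 tail is far smaller.
print(abs(s2 - math.pi**2 / 6) < 1e-4)   # True
print(abs(s4 - math.pi**4 / 90) < 1e-10)  # True
```

The slow, roughly 1/N, convergence of the first sum is one reason the Fourier-analytic derivation via Parseval's identity is so striking: it produces the exact value in closed form.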

  • 3.1 Fourier Series  
  • 3.1.1 Notion of Fourier Series  

  • 3.1.2 Orthogonality and Computation  

  • 3.2 Orthogonal Projection  

  • 3.2.1 Pythagorean Theorem  
  • 3.2.2 Parallelogram Law  
  • 3.2.3 Best Mean-Square Approximation  
  • 3.3 Dirichlet’s and Fejer’s Kernels  

  • 3.3.1 Partial Sums as Convolution with Dirichlet’s Kernels  
  • 3.3.2 Cesaro Means and Derivation of Fejer’s Kernels  
  • 3.3.3 Positive Approximate Identity  
  • 3.4 Completeness  

  • 3.4.1 Pointwise and Uniform Convergence  
  • 3.4.2 Trigonometric Approximation  
  • 3.5 Parseval’s Identity  

  • 3.5.1 Derivation  
  • 3.5.2 The Basel Problem and Fourier Method  
  • 3.5.3 Bernoulli Numbers and Euler’s Solution  
  • Unit 4: Time-Frequency Analysis  

    The Fourier transform (FT) introduced in this unit is the analogue of the sequence of Fourier coefficients of the Fourier series discussed in Unit 3, in that the normalized integral over the “circle” in the definition of the Fourier coefficients is replaced by an integral over the real line.  While the Fourier series recovers the function it represents from its sequence of Fourier coefficients, it is non-trivial to justify the validity of the seemingly obvious formulation of the inverse Fourier transform (IFT) for recovering a function from its FT.  This unit will introduce the notions of the localized FT (LFT) and localized IFT (LIFT).  We will also establish an identity that governs the relationship between the LFT and LIFT when the sliding frequency-window function for the LIFT is the complex conjugate of the Fourier transform of the sliding time-window function for the LFT.  Because the Fourier transform of a Gaussian function is again a Gaussian function, any Gaussian function can be used as a sliding time-window for simultaneous time-frequency localization.  The same identity is also applied to justify the validity of the formulation of the IFT by taking the variance of the sliding Gaussian time-window to zero.  Another important consequence of this identity is the Uncertainty Principle, which states that the Gaussian is the only window function that provides optimal simultaneous time-frequency localization, with the area of the time-frequency window equal to 2.  Discretization of a frequency-modulated sliding time-window of the LFT at the integer lattice yields a family of local time-frequency basis functions.
    Unfortunately, the Balian-Low restriction prevents any sliding time-window function, including the Gaussian, from attaining a finite-area time-frequency window while providing stability for the family of local time-frequency basis functions, called a “frame.”  This unit ends with a discussion of a way of avoiding the Balian-Low restriction: replacing the frequency modulation of the sliding time-window function with modulation by certain cosine functions.  More precisely, a family of stable local cosine basis functions, sometimes called Malvar “wavelets,” is introduced to achieve good time-frequency localization.  As an application, the undesirable blocky artifacts of highly compressed JPEG pictures, discussed in Unit 2, can be removed by replacing the 8-point DCT with appropriately discretized local cosine basis functions for each of the 8-by-8 image tiles.
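The optimality of the Gaussian window can be checked numerically.  The sketch below (illustrative only; it assumes the convention G(w) = integral of g(t) exp(-iwt) dt and measures window width by the standard deviation of the energy density, under which the Uncertainty Principle lower bound on the product of the time and frequency spreads is 1/2, attained exactly by the Gaussian; the course's "window area equal to 2" refers to a related width measure).

```python
import numpy as np

# Gaussian window g(t) = exp(-t^2/2), sampled on a grid wide enough that the
# tails are negligible.
t = np.arange(-8.0, 8.0, 0.02)
g = np.exp(-t**2 / 2)

# Fourier transform G(w) by direct quadrature of \int g(t) e^{-iwt} dt.
w = np.arange(-8.0, 8.0, 0.02)
G = (np.exp(-1j * np.outer(w, t)) @ g) * 0.02

def spread(x, f):
    # Standard deviation of x under the density |f|^2 (zero-mean by symmetry).
    p = np.abs(f)**2
    return np.sqrt(np.sum(x**2 * p) / np.sum(p))

# Time spread x frequency spread = 1/2, the Uncertainty Principle minimum.
print(abs(spread(t, g) * spread(w, G) - 0.5) < 1e-3)  # True
```

Repeating the experiment with a non-Gaussian window, such as a rectangular or triangular one, yields a strictly larger product, which is the content of the optimality claim in subunit 4.4.2.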

  • 4.1 Fourier Transform  
  • 4.1.1 Definition and Essence of the Fourier Transform  
  • 4.1.2 Properties of the Fourier Transform  

    Note: This topic is covered by the reading and lectures assigned below sub-subunit 4.1.1.

  • 4.1.3 Sampling Theorem  
  • 4.1.4 Applications of the Fourier Transform  
  • 4.2 Convolution Filter and Gaussian Kernel  
  • 4.2.1 Convolution Filter  
  • 4.2.2 Fourier Transform of the Gaussian  

  • 4.2.3 Inverse Fourier Transform  

  • 4.3 Localized Fourier Transform  

  • 4.3.1 Short-time Fourier Transform (STFT)  
  • 4.3.2 Gabor Transform  
  • 4.3.3 Inverse of Localized Fourier Transform  
  • 4.4 Uncertainty Principle  

  • 4.4.1 Time-Frequency Localization Window Measurement  
  • 4.4.2 Gaussian as Optimal Time-Frequency Window  
  • 4.4.3 Derivation of the Uncertainty Principle  
  • 4.5 Time-Frequency Bases  
    • Reading: Time-Frequency Bases (materials not yet available)

  • 4.5.1 Balian-Low Restriction  
  • 4.5.2 Frames  
  • 4.5.3 Localized Cosine Basis  
  • 4.5.4 Malvar Wavelets  
  • Unit 5: PDE Methods  

    When the variance of the Gaussian convolution filter is replaced by ct, where c is a fixed positive constant and t is the time parameter, the convolution filtering of any input function f(x) describes the heat diffusion process with initial temperature f(x).  More precisely, if u(x, t) denotes the temperature at position x and time t, then u(x, t), obtained by the Gaussian convolution of the initial temperature f(x), is the solution of the heat diffusion PDE with initial condition u(x, 0) = f(x), where the constant c is the heat conductivity constant.  This elegant example has little practical value, because the spatial domain is the entire x-axis, but it serves as a convincing motivation for the study of the linear PDE methods in this unit.  To solve the same heat diffusion PDE, but with the initial heat source given on a bounded interval and with insulation at the two end-points to avoid any heat loss, the method of “separation of variables” is introduced.  This method separates the PDE into two ordinary differential equations (ODEs) that can be solved by appealing to the eigenvalue problem for linear differential operators, studied in Unit 1, with eigenfunctions given by cosine functions in x and with frequencies governed by the eigenvalues, which also dictate the rate of exponential decay in the time variable t.  Superposition of the products of these eigenfunctions, with coefficients given by the Fourier coefficients of the Fourier series representation of the initial heat content (studied in Unit 3), solves this heat equation.  In this unit, you will also study the extension of the method of separation of variables to boundary value problems on a rectangular spatial domain, as well as the solution of other popular linear PDEs.  Finally, the diffusion process can be applied to image noise reduction.
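
    The heat-kernel claim above can be sketched numerically in a few lines of Python (a minimal illustration; the function name and grid below are assumptions, not part of the course materials):

```python
import numpy as np

def heat_by_gaussian_convolution(f, x, t, c=1.0):
    """Approximate the solution of u_t = c * u_xx on the real line at
    time t > 0 by convolving the initial temperature f(x) with the heat
    kernel: a Gaussian of variance 2*c*t."""
    dx = x[1] - x[0]
    # Heat kernel K(x, t) = exp(-x^2 / (4*c*t)) / sqrt(4*pi*c*t)
    kernel = np.exp(-x**2 / (4.0 * c * t)) / np.sqrt(4.0 * np.pi * c * t)
    # Discrete convolution times dx approximates the integral (K * f)(x).
    return np.convolve(f(x), kernel, mode="same") * dx
```

    One quick sanity check is the semigroup property: convolving the heat kernel at time s with the kernel at time t reproduces the kernel at time s + t, so taking f itself to be a Gaussian should return a wider Gaussian.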

    Unit 5 Learning Outcomes
  • 5.1 From Gaussian Convolution to Diffusion Process  
  • 5.1.1 Gaussian as Solution for Delta Heat Source  
  • 5.1.2 Gaussian Convolution as Solution of Heat Equation for the Real-Line  
  • 5.1.3 Gaussian Convolution as Solution of Heat Equation on the Euclidean Space  
  • 5.2 The Method of Separation of Variables  
  • 5.2.1 Separation of Time and Spatial Variables  
  • 5.2.2 Superposition Solution  
    • Reading: Superposition Solution (materials not yet available)

  • 5.3 Fourier Series Solution  
  • 5.3.1 Fourier Series Representation for Spatial Solution  

    Note: This topic is covered by the lectures assigned under subunit 5.3.

  • 5.3.2 Extension to Higher Dimensional Spatial Domain  
  • 5.4 Boundary Value Problems  
    • Reading: Boundary Value Problems (materials not yet available)

  • 5.4.1 The Neumann Boundary Value Problem  
  • 5.4.2 Anisotropic Diffusion  
  • 5.4.3 Solution in Terms of Eigenvalue Problems  
  • 5.5 Application to Image De-noising  
    • Reading: Application to Image De-noising (materials not yet available)

  • 5.5.1 Diffusion as Quantizer for Image Compression  
  • 5.5.2 Diffusion for Noise Reduction  
  • 5.5.3 Enhanced JPEG Image Compression  
  • Unit 6: Wavelet Methods  

    This final unit is concerned with multi-scale data analysis.  It will introduce you to the notions of multiresolution analysis (MRA) and the wavelet transform (WT), as well as the associated wavelet decomposition and reconstruction algorithms.  The WT of a finite Fourier series is discussed to introduce the relationship between scale and frequency, in particular with a group of frequencies called a frequency band.  You will also derive the inversion formula for recovering a function from its WT.  The MRA architecture is demonstrated by using B-spline functions.  Construction of wavelets by appealing to the MRA is achieved by matrix extension.  To reduce the computational complexity of the wavelet decomposition and reconstruction algorithms, you will also study lifting schemes.  To extend the wavelet transform to functions of two variables, we use tensor products of the wavelets with the corresponding scaling functions of the MRA.  This unit ends with embedding a digital image in the wavelet domain for image manipulation, such as progressive transmission, image edge extraction, and image enhancement.
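
    The one-level decomposition and reconstruction algorithms mentioned above can be illustrated with the Haar wavelet, the simplest orthogonal case (a sketch only; the unit itself constructs wavelets from B-splines via matrix extension):

```python
import numpy as np

def haar_decompose(signal):
    """One level of Haar wavelet decomposition: split a signal of even
    length into a coarse approximation (scaled local averages) and
    detail coefficients (scaled local differences)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one level of Haar decomposition exactly, interleaving the
    recovered even- and odd-indexed samples."""
    s = np.empty(2 * len(approx))
    s[0::2] = (approx + detail) / np.sqrt(2.0)
    s[1::2] = (approx - detail) / np.sqrt(2.0)
    return s
```

    Iterating haar_decompose on the approximation coefficients yields the multi-level decomposition used in coding applications; the detail coefficients at each level carry one frequency band.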

    Unit 6 Learning Outcomes
  • 6.1 Time-Scale Analysis  
    • Reading: Time-Scale Analysis (materials not yet available)

  • 6.1.1 Wavelet Transform  
  • 6.1.2 Frequency versus Scale  
  • 6.1.3 Partition into Frequency Bands  
  • 6.2 Multiresolution Analysis (MRA)  
    • Reading: Multiresolution Analysis (MRA) (materials not yet available)

  • 6.2.1 Function Refinement  
  • 6.2.2 B-spline Examples  
  • 6.2.3 The MRA Architecture  
  • 6.3 Wavelet Construction  
    • Reading: Wavelet Construction (materials not yet available)

  • 6.3.1 Matrix Extension  
  • 6.3.2 Quadrature Mirror Filter  
  • 6.3.3 Orthogonal and Bi-Orthogonal Wavelets  
  • 6.4 Wavelet Algorithms  
    • Reading: MIT: Professor Gilbert Strang’s “Lecture Notes: Handouts 1–16”

      Link: MIT: Professor Gilbert Strang’s “Lecture Notes: Handouts 1–16” (PDF)

      Instructions:  Please click on the link above, and select the PDF links for the slides and handouts for 1–16.  Study these lecture notes and handouts to learn about computational schemes of wavelet decomposition and reconstruction, filter banks, and the lifting scheme.  
      Studying these lecture slides and reflecting on the material should take approximately 4 hours to complete.
       
      Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

  • 6.4.1 Wavelet Decomposition and Reconstruction  
  • 6.4.2 Filter Banks  
    • Reading: Filter Banks (materials not yet available)

  • 6.4.3 The Lifting Scheme  
    • Reading: The Lifting Scheme (materials not yet available)

  • 6.5 Application to Image Coding  
    • Reading: Application to Image Coding (materials not yet available)

  • 6.5.1 Mapping Digital Images to the Wavelet Domain  
  • 6.5.2 Progressive Image Transmission  
  • 6.5.3 Lossless JPEG-2000 Compression  
  • Final Exam