Numerical Analysis
Purpose of Course
Numerical analysis is the study of methods for solving problems involving continuous variables. It is a highly applied branch of mathematics and computer science, wherein abstract ideas and theories become the quantities describing things we can actually touch and see. The real number line is an abstraction where many interesting and useful ideas live, but to actually realize these ideas, we are forced to employ approximations of the real numbers. For example, consider marking a ruler at \sqrt{2}. We know that \sqrt{2} \approx 1.4142, but if we put the mark there, we know we are in error, for an infinite sequence of nonzero digits follows the 2. Even more: a number doesn’t have any width, yet any mark we make would have a width, and in that width lives an infinite number of real numbers. You may ask yourself: isn’t it sufficient to represent \sqrt{2} with 1.414? This is the kind of question this course will explore. We have been trying to answer such questions for over 2,000 years (it is said that people have given their lives for the idea of \sqrt{2}, and they certainly wouldn’t have thought 1.414 sufficient). Modern computers can perform billions of arithmetic operations per second, and predicting the path of a tropical storm can require many trillions of operations. How do we carry out such simulations, and how do our approximations affect the result? The answer to the first question is certainly colored by the second!
Numerical analysis is a broad and growing discipline with many open questions. This course is designed to be a first look at the discipline. Over the course of this semester, we will survey some of the basic problems and methods needed to simulate the solutions of ordinary differential equations. We will build the methods ourselves, starting with computer arithmetic, so that you will understand all of the pieces and how they fit together in state-of-the-art algorithms. Along the way, we will write programs to solve equations, plot curves, integrate functions, and solve initial value problems. At the end of some chapters we will suggest – in a section called “Of Things Not Covered” – some topics that would have been included if we had more time, or other avenues to explore if you are interested in the topics presented in the unit.
The prerequisites for taking this course are MA211: Linear Algebra, MA221: Differential Equations, and either MA302/CS101: Introduction to Computer Science or a background in some programming language. Programming ideas will be illustrated in pseudocode and implemented in the open-source, high-level computing environment Octave.
Numerical Analysis is the field of mathematics that applies numerical approximations in order to solve mathematical problems of continuous variables. In most cases, numerical analysis does not have the goal of finding exact answers to complex problems, as most mathematical problems cannot be solved through the application of a finite number of elementary mathematical operations. Rather, numerical analysis focuses on the development and study of algorithms that will quickly obtain approximate solutions. By analyzing these algorithms, we can evaluate their errors and stability and in turn decide when it is safe to use a particular numerical algorithm.
The first known application of the methods of numerical analysis appears on Babylonian tablet YBC 7289, which is roughly dated between 1700 and 2000 BC. (Evidence suggests that the writer was a mathematics student.) The tablet features an incised square whose sides have a length of 0.5 units and a diagonal line that connects opposite corners of the square. The diagonal line is labeled 0;42,25,35 (in sexagesimal notation), which tells us that the Babylonians thought that the square root of 2 is 1.41421296 (in decimal notation). (The actual square root of 2 is 1.41421356… to 9 decimal places.) The Babylonian value is in error by roughly 4 parts in 10 million, an accuracy that could not have been obtained by direct measurement. As the square root of 2 is an irrational number, its value can only ever be approximated. Although not known for sure, it is likely that the value for the square root of 2 was originally calculated by Heron’s method, a simple version of the Newton-Raphson method for finding successively better approximations to the roots of a function.
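Heron’s method is short enough to sketch here. The following snippet is an illustration (in Python, purely for concreteness; it is not part of the course materials): repeatedly average the current guess with s divided by the guess.

```python
def heron_sqrt(s, x=1.0, iterations=5):
    """Heron's method for sqrt(s): if x overestimates sqrt(s), then s/x
    underestimates it, so their average is a better approximation.
    (This is Newton-Raphson applied to f(x) = x**2 - s.)"""
    for _ in range(iterations):
        x = 0.5 * (x + s / x)
    return x

approx = heron_sqrt(2.0)   # converges to sqrt(2) very quickly
```

Five iterations from the crude starting guess 1.0 already agree with \sqrt{2} to machine precision, which hints at why the Babylonian value could be so accurate.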
This course will focus on the applications of the methods of numerical analysis. We will cover enough of the mathematical background to allow you to intelligently discuss the convergence, error, and stability properties of numerical analysis algorithms, but will place emphasis on solving certain classes of problems that often arise in scientific or engineering contexts. These include approximating functions, finding roots of polynomial and other nonlinear functions, solving systems of linear equations, finding eigenvalues and eigenfunctions, optimizing constrained multidimensional functions, evaluating integrals, and solving ordinary and partial differential equations.
The prerequisites for taking MA213 are MA211 (Linear Algebra), MA221 (Differential Equations), and either MA302/CS101 (Introduction to Computer Science) or a solid background in Java programming. While not necessary, treating MA222 (Partial Differential Equations) as a corequisite is advised. Many problems and methods will be presented and used within an open-source, Java-based computing environment.
Course Information
Course Designers: Professor Baccouch
Primary Resources: This course primarily relies on content from the following resources:
• University of Arkansas: Mark Arnold’s “Numerical Analysis Pages”
• University of South Florida: Autar Kaw’s “Numerical Methods Lectures”
Requirements for Completion: Viewing videos, reading and working through pages and exercises, and writing 5 programs in Octave.
Time Commitment: This course should take about 129 hours to complete.
Tips/Suggestions: As with other math courses, the only way to gain competency is to do problems. Here the problems consist of some computations, some theoretical arguments, and some computer programming. Most of the readings will require two or three times through, often with pen and paper at hand to convince yourself that you understand.
Learning Outcomes
• Show how numbers are represented on the computer, and how errors from this representation affect arithmetic.
• Analyze errors and have an understanding of error estimation.
• Be able to use polynomials in several ways to approximate both functions and data, and to match the type of polynomial approximation to a given type of problem.
• Be able to solve equations in one unknown real variable using iterative methods and to understand how long these methods take to converge to a solution.
• Derive formulas to approximate the derivative of a function at a point, and formulas to compute the definite integral of a function of one or more variables.
• Choose and apply any of several modern methods for solving systems of initial value problems based on properties of the problem.
Course Requirements
√ Have access to a computer.
√ Have continuous broadband Internet access.
√ Have the ability/permission to install plugins or software (e.g., Adobe Reader or Flash).
√ Have the ability to download and save files and documents to a computer.
√ Have the ability to open Microsoft files and documents (.doc, .ppt, .xls, etc.).
√ Be competent in the English language.
√ Have read the Saylor Student Handbook.
Unit Outline

Unit 1: Computer Arithmetic
We see how real numbers are represented in computers for scientific computation. We cannot represent all real numbers, so we must choose which finite subset of the real numbers we will use. Most modern scientific computing uses a set of floating point numbers. The properties of floating point numbers affect arithmetic; in fact, we do not even expect the computer to add two numbers correctly. We will follow these errors through simple computations and learn some basic rules and techniques for tracking errors. Finally, we will write a simple program that pays careful attention to these considerations.
Unit 1 Learning Outcomes
 1.1 Finite Representations of Real Numbers

1.1.1 Floating-Point Numbers
 Reading: University of Arkansas: Mark Arnold’s “Comparing Numbers” and “Floating Point Numbers”
Link: University of Arkansas: Mark Arnold’s “Comparing Numbers” (PDF) and “Floating Point Numbers” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. The “Comparing Numbers” reading is here to get you thinking about how we measure errors in computation. Our primary tool is the absolute difference relative to the true answer, or the relative error. When is the relative error equal to the absolute error? The Floating Point Representation Theorem and the Fundamental Axiom of Floating Point Arithmetic are the two basic rules we will use to estimate rounding errors.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Floating Point Representation”
Link: YouTube: University of South Florida: Autar Kaw’s “Floating Point Representation” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 4 lectures that have been split into 8 videos. These lectures will be useful for all of units 1.1 and 1.2.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
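To make these ideas concrete, here is a small experiment you can try yourself (sketched in Python for illustration; the course’s own programs will be in Octave). It hunts for the machine epsilon of double precision by halving until 1 + eps/2 is indistinguishable from 1, and computes the relative error of the approximation 1.414 for \sqrt{2} from the course introduction.

```python
import math

# Find machine epsilon empirically: the smallest power of 2 that is
# still distinguishable from 0 when added to 1.0.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
# For IEEE 754 double precision, eps == 2**-52.

# Relative error of representing sqrt(2) by 1.414 (about 1.5e-4,
# far larger than eps, so the ruler mark is crude by machine standards).
rel_err = abs(1.414 - math.sqrt(2)) / math.sqrt(2)
```

Comparing rel_err with eps shows why four decimal digits, adequate for a ruler, waste most of a double-precision word.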

1.1.2 Floating-Point Representation Theorem
 Reading: University of Arkansas: Mark Arnold’s “Pictures of a Toy Floating Point System”
Link: University of Arkansas: Mark Arnold’s “Pictures of a Toy Floating Point System” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. This is a plot of a 1byte (8 bits) floating point system. The floats are too close to each other on the top plot, so the subsequent plots are zoomins. We want to relate the machine epsilon from the previous reading to points on this plot. On the plot(s) identify the floating point range, and the machine epsilon.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

 1.2 Fundamental Axiom of Floating-Point Arithmetic
 Reading: University of Arkansas: Mark Arnold’s “Floating Point Arithmetic”
Link: University of Arkansas: Mark Arnold’s “Floating Point Arithmetic” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Pretend you are a (base-10) computer and add and multiply some 4-decimal-digit numbers. Prove the Fundamental Axiom of Floating Point Arithmetic using the additional hypothesis that “an arithmetic operation on two such floating point numbers returns the floating point number closest to the true value”.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
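The “pretend you are a base-10 computer” exercise above can also be simulated. The sketch below (Python, for illustration only; the helper name fl4 is my own) rounds every intermediate result to 4 significant decimal digits, mimicking a tiny decimal machine, and exposes how a small addend can be swamped in addition while multiplication stays accurate.

```python
import math

def fl4(x):
    """Round x to 4 significant decimal digits, simulating a toy
    base-10 floating point system."""
    if x == 0:
        return 0.0
    d = math.floor(math.log10(abs(x)))   # exponent of the leading digit
    return round(x, 3 - d)               # keep 4 significant digits

# Round the operands, operate, then round the result, as a machine would.
a, b = 1000.0, 0.1234
s = fl4(fl4(a) + fl4(b))   # 1000.1234 rounds back to 1000.0: b is swamped
p = fl4(fl4(a) * fl4(b))   # 123.4: the product keeps 4 good digits
```

Swamping in the sum, versus the small relative error of the product, previews the error-analysis results of section 1.3.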
 1.3 Error Analysis

1.3.1 Forward Error Analysis
 Reading: University of Arkansas: Mark Arnold’s “Error Analysis”
Link: University of Arkansas: Mark Arnold’s “Error Analysis” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. This reading exhibits both a forward rounding error analysis and a backward rounding error analysis. Make sure you clearly understand the distinction. For the vast majority of computations a forward error analysis does not provide useful error bounds (they are too pessimistic). Do a forward error analysis for the product of two real numbers; you should be able to mimic the forward analysis for a+b that is in the reading. We will use this reading for the next section.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.3.2 Backward Error Analysis
 Reading: University of Arkansas: Mark Arnold’s “What is a Solution?” and “Conditioning and Stability”
Link: University of Arkansas: Mark Arnold’s “What is a Solution?” (PDF) and “Conditioning and Stability” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. In the “Error Analysis” reading we show that if a, b and a+b are real numbers in the floating point range, then fl(a+b) is the exact sum of two numbers relatively close to a and b. Do the same for fl(ab). At this point you have seen that (i) while an addition (or subtraction) may be computed with large relative error, (ii) it is also the exact sum of two numbers very close to a and b. Thus (ii) does not guarantee a small error. You have shown that multiplication (which doesn’t over/underflow) will always be computed with small relative error. We perform error analysis both to gain insight into a method (where are its weak points?) and to predict how good our computed solution is. This reading shows how the relationship between backward error analysis and problem conditioning can give us an error estimate.
This resource should take approximately 3 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

1.3.3 Swamping and Cancellation
 Reading: University of Arkansas: Mark Arnold’s “Swamping and Cancellation” and “Three Perspectives on Cancellation”
Link: University of Arkansas: Mark Arnold’s “Swamping and Cancellation” (PDF) and “Three Perspectives on Cancellation” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Rounding errors occur for nearly every arithmetic operation, but sometimes circumstances converge to set us up for a particularly bad result: cancellation. You have seen cancellation in simple examples and in your forward error analysis. Show that there is a risk of cancellation any time we additively compute a result that is small compared to its addends.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
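A classic instance of the cancellation described above is computing 1 - cos(x) for small x, where the two terms are nearly equal. A short illustration (Python, for concreteness; the rewrite uses a trigonometric identity rather than anything specific to the reading):

```python
import math

# For small x, cos(x) is extremely close to 1, so 1 - cos(x) subtracts
# two nearly equal numbers and every significant digit cancels.
x = 1e-9
naive = 1.0 - math.cos(x)              # cos(1e-9) rounds to exactly 1.0

# Mathematically identical, but with no subtraction of nearly equal
# numbers: 1 - cos(x) = 2*sin(x/2)**2.
stable = 2.0 * math.sin(x / 2.0) ** 2  # close to the true value x**2/2 = 5e-19
```

The naive form returns 0, losing the answer entirely, while the rewritten form retains nearly full precision, exactly the kind of algebraic rearrangement the reading asks you to look for.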

1.4 Programming Case Study: The Quadratic Formula
 Reading: University of Arkansas: Mark Arnold’s “Computing the Roots of a Quadratic Polynomial” and “Implementing the Quadratic Equation”
Link: University of Arkansas: Mark Arnold’s “Computing the Roots of a Quadratic Polynomial” (PDF) and “Implementing the Quadratic Equation” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. Write a program in Octave that computes the roots of ax^2+bx+c given the real numbers a, b, and c. Make sure that your program does not divide by 0, that it does not suffer from cancellation (except possibly in b^2-4ac), and that all input variables (a, b, c) and all output variables (r1, r2, maybe others?) are described carefully. There is a good tutorial on Octave at http://en.wikibooks.org/wiki/Octave_Programming_Tutorial. Since you already have some experience programming, Octave should be relatively easy to learn (it was born as a Matlab clone, and is very much like Fortran 90; Matlab tutorials are plentiful on the web, and you might find them useful as well). As with learning most new languages, the learning curve is steepest at the start, while the syntax is unfamiliar; but never fear, Octave is quite simple.
This resource should take approximately 10 hours to complete (including the Octave download/install process).
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
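One standard cancellation-avoiding trick for this assignment is worth sketching. The snippet below (in Python rather than the Octave the assignment requires, and assuming real roots with a ≠ 0 and b ≠ 0, so it does not yet guard all the degenerate cases your program must handle) computes the “large” root with no cancellation and recovers the other from the product of the roots, r1·r2 = c/a.

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c, assuming b*b - 4*a*c >= 0, a != 0,
    b != 0. Adding the discriminant's square root with the SAME sign
    as b avoids subtracting nearly equal numbers; the second root then
    comes from the product of the roots instead of the bad subtraction."""
    disc = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(disc, b))
    return q / a, c / q

# Roots near 1e8 and 1e-8: the naive formula would lose the small root
# to cancellation, but both come out accurately here.
r_big, r_small = quadratic_roots(1.0, -1e8, 1.0)
```

Your Octave program still needs the case analysis (a = 0, b = 0, complex roots) described in the reading; this sketch only shows the cancellation-free core.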

1.5 Of Things Not Covered
 Reading: Penn State University: David Goldberg’s “What Every Computer Scientist Should Know about Floating Point Arithmetic”
Link: Penn State University: David Goldberg’s “What Every Computer Scientist Should Know about Floating Point Arithmetic” (PDF)
Instructions: Click on the link above, and download the reading using the “Download Links” menu in the upper right corner. Scan the paper so that you know what is covered. This paper is a very good resource for the IEEE 754 standard. You will want to have it as a reference as you work through the material in this course.
This resource should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

Unit 1 Assessment
 Assessment: The Saylor Foundation's "Unit 1 Assessment"
Link: The Saylor Foundation's "Unit 1 Assessment" (PDF) and "Guide to Responding” (PDF).
Instructions: Carefully answer the three questions below. You may consider them essay questions, but back up or explain with formulas and/or theorems as needed. When you are finished, you should check yourself by looking at the response guide. You may not have all of the points given in the response guide, or you may have noted something that was not included in the guide, and that may be just fine, but this is a time to reflect on differences between your response and the guide response and possibly review some material. This assessment should take about 40 minutes to complete.

Unit 2: Polynomials and Polynomial Interpolation
Some say that 75% of applied mathematics is polynomials (the other 75% being linear algebra!). We will therefore use this unit to review some things you already know about polynomials and (hopefully) introduce some new ideas as well. Here, we are primarily interested in using polynomials to approximate unknown functions, more complicated functions, or just sets of points. Finally, we will write a program to find a cubic polynomial that passes through two points with predefined slopes.
Unit 2 Learning Outcomes
 2.1 Polynomial Functions

2.1.1 Definition of a Polynomial
 Reading: University of Arkansas: Mark Arnold’s “Polynomials” and “Polynomial Evaluation”
Link: University of Arkansas: Mark Arnold’s “Polynomials” (PDF) and “Polynomial Evaluation” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdfs. Refer to your linear algebra material, if needed, and determine the dimension of the vector space P_n of all polynomials of degree n or less. Write down two distinct bases for P_n. Why do you think Horner’s method is also called synthetic division?
This resource should take approximately 3 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
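Horner’s method from the “Polynomial Evaluation” reading is compact enough to sketch here (in Python, for illustration; your course programs are in Octave):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's method.
    coeffs = [a_n, a_{n-1}, ..., a_0] in descending powers; the nested
    form (...((a_n*x + a_{n-1})*x + a_{n-2})*x + ...) + a_0 uses only
    n multiplications and n additions. The intermediate values of
    `result` are the coefficients produced by synthetic division of
    the polynomial by (x - x0), which is why the two share a name."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

value = horner([2, -3, 0, 5], 2.0)   # 2x^3 - 3x^2 + 5 at x = 2 gives 9
```

Note that the coefficient list is a representation of the polynomial in the standard ordered basis, one of the bases the exercise asks you to write down.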

2.1.2 Weierstrass Approximation Theorem
 Reading: University of Arkansas: Mark Arnold’s “The Taylor Polynomial”
Link: University of Arkansas: Mark Arnold’s “The Taylor Polynomial” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. The last two readings refer to the Weierstrass Approximation Theorem. In your own words, describe what the theorem implies about the ability of polynomials to approximate continuous functions. What do you think (i) lots of wiggles, (ii) a sharp corner, and/or (iii) a large interval of approximation means for the degree of an approximating polynomial?
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

2.1.3 Vector Spaces of Polynomials
 Reading: University of Arkansas: Mark Arnold’s “Polynomials are Vectors”
Link: University of Arkansas: Mark Arnold’s “Polynomials are Vectors" (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Notice that the definition for the inner product of two polynomials depends on an interval and a weight function. Therefore there are many popular types of polynomial inner product, and we will choose an appropriate inner product depending on our application. An inner product in a vector space defines a natural notion of length and angle. Find a Calculus III formula that relates an inner product, vector lengths, and an angle. How would you define the angle between two polynomials? We skipped some algebra when finding the best polynomial approximation to a function f; fill in the gaps.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

2.1.4 Fundamental Theorem of Algebra
 Reading: University of Arkansas: Mark Arnold’s “Fundamental Theorem of Algebra”
Link: University of Arkansas: Mark Arnold’s “Fundamental Theorem of Algebra" (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. As you see, the Fundamental Theorem of Algebra (FTA) is an existence theorem. It does not tell us how to factor a polynomial. We will discuss methods for factoring polynomials later. The FTA does give us yet another way to represent a polynomial; describe how it allows us to represent a polynomial of degree n having real coefficients using n+1 real numbers (some possibly repeated). Find a formula for the derivative of a polynomial p(x) in terms of its roots (hint: use the product rule).
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 2.2 Polynomial Interpolation

2.2.1 Taylor Polynomials
 Reading: University of Arkansas: Mark Arnold’s “The Taylor Polynomial”, “Rate of Convergence” and “A Taylor Polynomial Picture”
Link: University of Arkansas: Mark Arnold’s “The Taylor Polynomial” (PDF), “Rate of Convergence” (PDF) and “A Taylor Polynomial Picture” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of each reading. In many ways the Taylor polynomial is the central tool of mathematical physics, and hence of applied mathematics and engineering. Find a calculus or physics text or website that introduces the differential equation modeling the simple pendulum. It is fundamentally different from spring (or simple harmonic) motion. We cannot solve the differential equation for the pendulum in terms of elementary functions. Show how the small-angle approximation for the pendulum is a Taylor polynomial approximation. You should memorize the formula for the Taylor polynomial P(x) of degree n about x_0 for an arbitrary function f(x), including the error term. Show that P^{(k)}(x_0) = f^{(k)}(x_0) for k=0,1,…,n. Does this property define P?
This resource should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Taylor’s Theorem Revisited”
Link: YouTube: University of South Florida: Autar Kaw’s “Taylor’s Theorem Revisited” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. Here there are 4 lectures that have been split into 4 videos.
This resource should take approximately 1/2 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
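The small-angle approximation exercise above has a one-line numerical check. The sketch below (Python, for illustration) uses sin(θ) ≈ θ, the degree-1 Taylor polynomial of sin about 0, and compares the actual error against the Taylor error term, which for sin is bounded by |θ|³/6, the size of the next nonzero term.

```python
import math

# Small-angle approximation: sin(theta) ~ theta is the degree-1 Taylor
# polynomial of sin about 0 (also its degree-2 polynomial, since the
# x^2 term vanishes), so the error is bounded by |theta|**3 / 6.
theta = 0.1   # radians, a "small" pendulum angle
approx_error = abs(math.sin(theta) - theta)
bound = abs(theta) ** 3 / 6.0
```

At θ = 0.1 the error is under 0.0002, which is why replacing sin θ by θ turns the intractable pendulum equation into solvable simple harmonic motion.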

2.2.2 Lagrange Interpolation
 Reading: University of Arkansas: Mark Arnold’s “Lagrange Interpolation” and “FFT Tease”
Link: University of Arkansas: Mark Arnold’s “Lagrange Interpolation" (PDF) and "FFT Tease" (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. Yes, we have yet another way to represent a polynomial. Show that if the nodes are distinct, then there is exactly one polynomial of degree <= n passing through n+1 knots (hint: we know that there is at least one: the Lagrange interpolator P. Assume there is another, say Q. Think about the zeros of (P-Q)(x)…). This existence and uniqueness means n+1 knots with distinct nodes represent a polynomial: the Lagrange interpolator. The Vandermonde matrix is usually a hard matrix to work with, so the Lagrange (and other) pictures are applied more often. But there is one place where the Vandermonde picture rules: if the nodes are roots of unity (a set of n+1 points, including 1, which are equally spaced around the circle of radius 1 in the complex plane), then the Vandermonde picture is a discrete Fourier transform (DFT); we mention this because you may have heard of the FFT (one of the most important algorithms ever discovered). The FFT is simply a fast way to solve this special Vandermonde system.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Lagrangian Interpolation”
Link: YouTube: University of South Florida: Autar Kaw’s “Lagrangian Interpolation” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 4 lectures that have been split into 6 videos.
This resource should take approximately 50 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
 Lecture: YouTube: University of South Florida: Duc Nguyen’s “Discrete Fourier Transform”
Link: YouTube: University of South Florida: Duc Nguyen’s “Discrete Fourier Transform” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 3 lectures that have been split into 7 videos.
This resource should take approximately 1.6 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
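Evaluating the Lagrange interpolator directly from its definition makes the “picture” concrete. A minimal sketch (Python, for illustration; this is the plain formula, not an optimized evaluation scheme): P(x) = Σ y_i L_i(x), where each basis polynomial L_i is 1 at node x_i and 0 at every other node.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolator through the knots
    (xs[i], ys[i]) at the point x. Nodes xs must be distinct."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # builds y_i * L_i(x)
        total += term
    return total

# Three knots taken from y = x^2: the unique degree <= 2 interpolator
# is x^2 itself, so evaluation at 1.5 returns 1.5**2 = 2.25.
val = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```

The fact that the three knots reproduce x² exactly is the uniqueness result from the exercise above in action.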

2.2.3 Hermite Interpolation
 Reading: University of Arkansas: Mark Arnold’s “Hermite Interpolation”
Link: University of Arkansas: Mark Arnold’s “Hermite Interpolation” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. The Hermite interpolator really shines when we know not only the knots, but also the slopes of the curve at each knot (for example, when solving an initial value problem). What degree Hermite interpolator is required if there are two nodes? Go ahead and work out the formula for the Hermite interpolator for (x_0,y_0), (x_0,y’_0), (x_1,y_1) and (x_1,y’_1). This little polynomial is a real workhorse in computer aided design and manufacturing and differential equations!
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

2.2.4 General Polynomial Interpolation
 Reading: University of Arkansas: Mark Arnold’s “Kissing Polynomials”
Link: University of Arkansas: Mark Arnold’s “Kissing Polynomials” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. It may be that we include this section for aesthetic (or theoretical) reasons. Well, communicating an aesthetic is an important component of a mathematics course and we don’t want you underserved! Explain how the Taylor polynomial, the Lagrange polynomial, and the Hermite polynomial are all special cases of the osculating polynomial (precisely what are the interpolation conditions for each?).
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 2.3 Programming Case Study: A 2-Node Hermite Interpolator

2.3.1 Principles of Good Programming
 Reading: University of Arkansas: Mark Arnold’s “Some Programming Principles”
Link: University of Arkansas: Mark Arnold’s “Some Programming Principles” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. The reading describes several generally agreed upon principles of good code. How you balance these is up to you, but they are all worth considering when writing any program.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

2.3.2 The Interpolator
 Reading: University of Arkansas: Mark Arnold’s “Constructing a Hermite Cubic Interpolator”
Link: University of Arkansas: Mark Arnold’s “Constructing a Hermite Cubic Interpolator” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. The reading describes the 2-node Hermite interpolator in both the Lagrange and Vandermonde pictures. Write a program in Octave to compute either form of the Hermite cubic h(x) (you choose). Input will be the nodes x_0 and x_1, the function values y_0 and y_1, and the slopes y’_0 and y’_1; output will be either the coefficients of h in the standard ordered basis (Vandermonde picture), or the polynomials H_{10}, H_{11}, hat{H}_{10} and hat{H}_{11} in standard form. Use your program to plot h, and by all means: play with your program! What shapes can it make? Can you break it? Which shapes are sensitive to small changes?
This resource should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
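For orientation, here is a sketch of the 2-node Hermite cubic evaluated through basis functions (in Python rather than the Octave the assignment requires; the basis names H0, H1, K0, K1 are my own shorthand, where the reading writes H_{10}, H_{11} and the hatted polynomials):

```python
def hermite_cubic(x0, y0, s0, x1, y1, s1, x):
    """Evaluate the cubic h with h(x0)=y0, h'(x0)=s0, h(x1)=y1,
    h'(x1)=s1, written as a combination of the four cubic Hermite
    basis functions on [x0, x1]."""
    h = x1 - x0
    t = (x - x0) / h                  # normalize to t in [0, 1]
    H0 = (1 + 2 * t) * (1 - t) ** 2   # value basis: 1 at x0, 0 at x1
    H1 = t ** 2 * (3 - 2 * t)         # value basis: 0 at x0, 1 at x1
    K0 = t * (1 - t) ** 2 * h         # slope basis: slope 1 at x0
    K1 = t ** 2 * (t - 1) * h         # slope basis: slope 1 at x1
    return y0 * H0 + y1 * H1 + s0 * K0 + s1 * K1

# Data taken from f(x) = x^3 on [0, 1] (values 0 and 1, slopes 0 and 3):
# a cubic interpolator must reproduce the cubic, so h(0.5) = 0.125.
mid = hermite_cubic(0.0, 0.0, 0.0, 1.0, 1.0, 3.0, 0.5)
```

Your Octave program should return the coefficients or basis polynomials themselves, as the assignment asks; this sketch only shows the evaluation, which is handy for the plotting and experimenting step.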

2.4 Of Things Not Covered
 Reading: University of Arkansas: Mark Arnold’s “Interpolating Splines”, “Least Squares Approximation”, “Polynomial Least Squares Example”, and “Lagrange Interpolation Example”
Link: University of Arkansas: Mark Arnold’s “Interpolating Splines” (PDF), “Least Squares Approximation” (PDF), “Polynomial Least Squares Example” (PDF), and “Lagrange Interpolation Example” (PDF)
Instructions: Click on the link above, then select the appropriate links to download the PDFs of the reading. A drawback of working with polynomials is that they naturally oscillate, especially in high degree (a polynomial of degree n can have up to n-1 critical points). Furthermore, polynomials cannot form true corners, and require very high degree to approximate a corner well. Polynomial spline functions are simply piecewise polynomial functions. Polynomials can be pieced together smoothly or to form corners.
On the other hand, one might ask if interpolation is always the best approach to representing data. For example, if one suspects that there may be errors in the knot values, then a useful alternative to interpolation is least squares approximation.
This resource should take approximately 5 hours to complete.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Spline Interpolation”
Link: YouTube: University of South Florida: Autar Kaw’s “Spline Interpolation” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 4 lectures that have been split into 6 videos.
This resource should take approximately 50 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

Unit 2 Assessment
 Assessment: The Saylor Foundation's "Unit 2 Assessment"
Link: The Saylor Foundation's "Unit 2 Assessment" (PDF) and “Guide to Responding” (PDF).
Instructions: Carefully answer the three questions below. You may consider them essay questions, but back up or explain with formulas and/or theorems as needed. When you are finished, you should check yourself by looking at the response guide. You may not have all of the points given in the response guide, or you may have noted something that was not included in the guide, and that may be just fine, but this is a time to reflect on differences between your response and the guide response and possibly review some material. This assessment should take about 30 minutes to complete.

Unit 3: Nonlinear Equations of a Single Real Variable
From the very first algebra course you took, you have been asked to “find x” for many different problems. You will now learn how to pass that task on to the computer! You might not be surprised to know that there are slow-and-sure methods as well as fast-and-risky methods. We will develop tools that allow us to measure just how fast a given method converges to an answer. Some methods are best only in specialized situations, and some work well generally or in combination with others. Finally, we will write programs to compare both the safest and fastest of our methods.
Unit 3 Learning Outcomes

3.1 Roots of Nonlinear Equations
 Reading: University of Arkansas: Mark Arnold’s “Solving Equations in one Real Variable”
Link: University of Arkansas: Mark Arnold’s “Solving Equations in one Real Variable” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

3.2 The Method of Bisection
 Reading: University of Arkansas: Mark Arnold’s “Bisection”
Link: University of Arkansas: Mark Arnold’s “Bisection” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Although it may not be clear yet, the method of bisection has some exceptional properties. The fact that it can give us an upper bound on the error at each iteration is one such property. The price we pay is a relatively slow speed of convergence and the need to begin with a root-bracketing interval. There is not much to analyze in this method: our fundamental computation is to compute the sign of f(p) correctly, where p = (a + b)/2. Now it turns out that it is better to compute p as p = a + (b − a)/2. Why?
This resource should take approximately 1.5 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Bisection Method”
Link: YouTube: University of South Florida: Autar Kaw’s “Bisection Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 4 lectures that have been split into 4 videos.
This resource should take approximately 40 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
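A minimal sketch of the method in Python (the course programs are written in Octave; the names and stopping rule here are illustrative choices, not the reading’s), using the safer midpoint formula:

```python
def bisect(f, a, b, tol=1e-12, maxit=200):
    """Bisection: requires sign(f(a)) != sign(f(b)).
    The midpoint is formed as a + (b - a)/2, which always lies
    inside [a, b] in floating point, whereas (a + b)/2 can behave
    badly when a + b overflows or rounds poorly."""
    fa = f(a)
    for _ in range(maxit):
        p = a + (b - a) / 2.0          # the safer midpoint
        fp = f(p)
        if fp == 0.0 or (b - a) / 2.0 < tol:
            return p
        if (fa < 0) == (fp < 0):       # root is in [p, b]
            a, fa = p, fp
        else:                          # root is in [a, p]
            b = p
    return a + (b - a) / 2.0
```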

3.3 The Newton-Raphson Method
 Reading: University of Arkansas: Mark Arnold’s “Newton’s Method”
Link: University of Arkansas: Mark Arnold’s “Newton’s Method” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. So here is a classic and very simple use of the idea of linearization. At each step we replace f(x) with the line tangent to f at (x_p, f(x_p)). The new iterate x_{p+1} is simply the root of that tangent line. This method can be very fast, but it can also fail. Can you think of two ways it can fail?
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Newton-Raphson Method”
Link: YouTube: University of South Florida: Autar Kaw’s “Newton-Raphson Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 6 lectures that have been split into 9 videos.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
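A hedged sketch of the iteration in Python (the zero-derivative guard and the stopping rule are my own illustrative choices):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method: x_{p+1} is the root of the line tangent
    to f at (x_p, f(x_p)). It fails outright when the tangent is
    flat, and it can also wander or diverge from a bad start."""
    x = x0
    for _ in range(maxit):
        dfx = fprime(x)
        if dfx == 0.0:
            raise ZeroDivisionError("flat tangent: Newton step undefined")
        step = f(x) / dfx
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            return x
    return x
```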

3.4 The Secant Method
 Reading: University of Arkansas: Mark Arnold’s “The Secant Method” and “Newton & Secant Example”
Link: University of Arkansas: Mark Arnold’s “The Secant Method” (PDF) and “Newton & Secant Example” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the PDFs. The secant method is a Newton-like method that is general purpose. It is arguably faster than Newton’s method, but like Newton’s method, it can fail.
This resource should take approximately 1.5 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Secant Method”
Link: YouTube: University of South Florida: Autar Kaw’s “Secant Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 3 lectures that have been split into 4 videos.
This resource should take approximately 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

3.5 Order of Convergence
 Reading: University of Arkansas: Mark Arnold’s “Order of Convergence”, “Convergence of Newton’s Method” and “Convergence of the Secant Method”
Link: University of Arkansas: Mark Arnold’s “Order of Convergence” (PDF), “Convergence of Newton’s Method” (PDF) and “Convergence of the Secant Method” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the PDFs. There is a lot in this section. The idea of linearization we now associate with a Taylor polynomial. The definition of order of convergence is worth committing to memory, and it is as easy as “the new error is approximately a constant times the old error raised to the alpha.” If we have superlinear convergence, then we also have a useful (but not guaranteed) error estimate. In light of this, you should think about how you could determine when to stop a Newton or secant iteration. If someone said that Newton’s method was faster than the secant method, would you agree? How would you argue your point?
This resource should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
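The definition is easy to test numerically. This Python sketch generates the Newton errors for f(x) = x² − 2 and estimates alpha from three consecutive errors; for Newton’s method the estimate should come out near 2 (the test problem and starting point are my own choices):

```python
import math

def estimated_order(errors):
    """From e_{n+1} ~ C e_n^alpha it follows that
    alpha ~ log(e3/e2) / log(e2/e1) for consecutive errors."""
    e1, e2, e3 = errors
    return math.log(e3 / e2) / math.log(e2 / e1)

root = math.sqrt(2.0)
x = 1.5
errs = []
for _ in range(3):                       # three Newton steps
    x = x - (x * x - 2.0) / (2.0 * x)
    errs.append(abs(x - root))
alpha = estimated_order(errs)
```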

3.6 Hybrid Methods
 Reading: University of Arkansas: Mark Arnold’s “Hybrid Methods for Root Finding”
Link: University of Arkansas: Mark Arnold’s “Hybrid Methods for Root Finding” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. There are other hybrid methods for the root finding problem. The reading presents a simple example. Construct a list of what properties your ideal method would have. Does bisection combined with secant possess all of your properties? Where (if at all) does this method fall short?
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “False-Position Method”
Link: YouTube: University of South Florida: Autar Kaw’s “False-Position Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there is 1 lecture that has been split into 3 videos.
This resource should take approximately 45 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.

3.7 Programming Case Study: Compare Bisection and Secant
 Reading: University of Arkansas: Mark Arnold’s “Bisection and Secant”
Link: University of Arkansas: Mark Arnold’s “Bisection and Secant” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. So, off to write your programs! Please be careful about division by “something too close to zero”, but keep in mind the numerator may also be small… Does your output look correct? Does your stopping criterion match what is documented? Are your programs carefully documented?
This resource should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
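One way such a program might be organized — a simple safeguarded sketch in Python, not Arnold’s documented program. Note the guards against a vanishing secant denominator and against steps that leave the bracket, which address exactly the “division by something too close to zero” worry:

```python
def hybrid_secant_bisect(f, a, b, tol=1e-12, maxit=200):
    """Safeguarded root finder: keep a bracketing interval [a, b];
    try a secant step, but fall back to bisection whenever the
    secant denominator vanishes or the step leaves the bracket."""
    fa, fb = f(a), f(b)
    for _ in range(maxit):
        denom = fb - fa
        if denom != 0.0:
            x = b - fb * (b - a) / denom     # tentative secant step
        else:
            x = a + (b - a) / 2.0            # flat secant: bisect
        if not (min(a, b) < x < max(a, b)):  # left the bracket: bisect
            x = a + (b - a) / 2.0
        fx = f(x)
        if fx == 0.0 or abs(b - a) < tol:
            return x
        if (fa < 0) == (fx < 0):             # keep the bracket valid
            a, fa = x, fx
        else:
            b, fb = x, fx
    return x
```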

3.8 Of Things Not Covered
 Reading: University of Arkansas: Mark Arnold’s “Roots of a Polynomial” and “Example of a Bad Polynomial”
Link: University of Arkansas: Mark Arnold’s “Roots of a Polynomial" (PDF) and “Example of a Bad Polynomial” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Maybe you have already thought of alternatives to Newton’s or the secant method. There are alternatives. When higher order derivatives are available, or when we know that a function is smooth enough, we can use polynomials of higher order (as in Muller’s method). We can also use quadratic interpolation with the roles of x and y reversed; this is called inverse quadratic interpolation… There are many methods, even for problems with only one unknown variable; imagine what’s waiting to be invented for f(x_1, x_2, …, x_n) = 0! Even for a polynomial function of one variable, finite precision arithmetic can make for challenging computational problems; the example in the reading is an ill-posed polynomial root finding problem.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

Unit 3 Assessment
 Assessment: The Saylor Foundation’s “Unit 3 Assessment”
Link: The Saylor Foundation’s “Unit 3 Assessment” (PDF) and “Unit 3 Guide to Responding” (PDF).
Instructions: Carefully answer the two questions in this assessment. You may consider them essay questions, but back up or explain with formulas and/or theorems as needed. When you are finished, you should check yourself by looking at the response guide. You may not have all of the points given in the response guide, or you may have noted something that was not included in the guide, and that may be just fine, but this is a time to reflect on differences between your response and the guide response and possibly review some material.

Unit 4: Differentiation and Integration of Functions
Before we talk about differential equations, we should expect some calculus. In this unit, we will address one of the most fundamental challenges of floating point arithmetic: finding the slope of a tangent line. You will see that numerical differentiation is actually harder than numerical integration! That said, numerical integration is especially interesting for all of the new ideas that we can explore. We will also create an algorithm (and write a program) that adapts itself to different integration problems.
Unit 4 Learning Outcomes
 4.1 Numerical Differentiation

4.1.1 Finite Difference Formulas
 Reading: University of Arkansas: Mark Arnold’s “Numerical Differentiation is Easy”
Link: University of Arkansas: Mark Arnold’s “Numerical Differentiation is Easy” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. We will play this game again and again: If we approximate a function f by an interpolating polynomial P, then D(f) is approximately D(P). Here D is the derivative operator, and the formulas are called finite difference formulas. Every different interpolating polynomial gives a different finite difference formula for the slope of f at a point. The formulas here are given by uniformly spaced nodes, but you can make up your own formulas for any set of (pairwise distinct) nodes.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Differentiation of Continuous Functions”
Link: YouTube: University of South Florida: Autar Kaw’s “Differentiation of Continuous Functions” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 6 lectures that have been split into 9 videos.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
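For instance, the 2-node interpolant gives the forward difference and the 3-node interpolant at x − h, x, x + h gives the central difference; a Python sketch comparing the two on f = sin at x = 1:

```python
import math

def forward_diff(f, x, h):
    # from the 2-node interpolant: error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # from the 3-node interpolant at x-h, x, x+h: error O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = math.cos(1.0)
h = 1e-4
err_fwd = abs(forward_diff(math.sin, 1.0, h) - exact)
err_cen = abs(central_diff(math.sin, 1.0, h) - exact)
```

With h = 1e-4 the central formula should beat the forward formula by several orders of magnitude.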

4.1.2 In the Face of Rounding Errors
 Reading: University of Arkansas: Mark Arnold’s “Numerical Differentiation is Hard”
Link: University of Arkansas: Mark Arnold’s “Numerical Differentiation is Hard" (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Please carefully read the pdf. As easy as it was for us to find finite difference formulas, they depend on h being small. In finite precision arithmetic there is a limit to how small h can be. Are you clear about how truncation error and rounding error conspire to compromise your derivative estimate? Explain why higher order formulas ameliorate this problem.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
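You can watch the two error sources trade off in a few lines of Python (forward difference of sin at x = 1; below some h, shrinking h makes the estimate worse, because the O(h) truncation error is swamped by the O(eps/h) rounding error):

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)
# truncation error shrinks with h, but rounding error (~ eps/h) grows:
err_moderate = abs(forward_diff(math.sin, 1.0, 1e-8) - exact)
err_tiny     = abs(forward_diff(math.sin, 1.0, 1e-13) - exact)
```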
 4.2 Quadrature

4.2.1 Newton-Cotes Rules
 Reading: University of Arkansas: Mark Arnold’s “Quadrature”
Link: University of Arkansas: Mark Arnold’s “Quadrature” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Please carefully read the pdf. Again, if P approximates f, then I(P) is approximately I(f), right? Well this time the answer is… yes! Integrating a Lagrange interpolating polynomial for f can give a very good approximation for the integral of f, even in the face of rounding errors. What degree polynomial do we need if f oscillates m times over [a,b]?
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
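For example, Simpson’s rule integrates the parabola through a, (a+b)/2, and b — and, pleasantly, turns out to be exact for cubics as well. A Python sketch:

```python
def simpson(f, a, b):
    """Simpson's rule: integrate the 3-node interpolating parabola
    at a, (a+b)/2, b. By a symmetry accident it is exact for
    polynomials up to degree 3, one better than expected."""
    c = a + (b - a) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))
```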

4.2.2 Composite Newton-Cotes Rules
 Reading: University of Arkansas: Mark Arnold’s “Numerical Integration is Easy”
Link: University of Arkansas: Mark Arnold’s “Numerical Integration is Easy” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. The composite rule rules. The property that the integral from a to b can be split into the sum of the integrals from a to c and from c to b is fundamentally what makes quadrature a nice problem. The challenge now is to compute I(f) quickly. In many situations, we have values of f already computed on a uniform grid (x_{i+1} − x_i = h, for all i). Composite Simpson’s rule is a very popular choice among methods that require uniform spacing of nodes; can you argue why?
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Trapezoidal Rule”
Link: YouTube: University of South Florida: Autar Kaw’s “Trapezoidal Rule” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 8 lectures that have been split into 9 videos.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Simpson’s 1/3 Rule”
Link: YouTube: University of South Florida: Autar Kaw’s “Simpson’s 1/3 Rule” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 4 lectures that have been split into 6 videos.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
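A Python sketch of composite Simpson on a uniform grid; the weight pattern 1, 4, 2, 4, …, 2, 4, 1 comes from summing the panel rules, which is exactly the splitting property at work:

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on a uniform grid of n subintervals
    (n must be even): sum Simpson's rule over consecutive pairs."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0
```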

4.2.3 Adaptive Quadrature
 Reading: University of Arkansas: Mark Arnold’s “Adaptive Quadrature” and “Adaptive Quadrature: recursive m-file”
Link: University of Arkansas: Mark Arnold’s “Adaptive Quadrature” (PDF) and “Adaptive Quadrature: recursive m-file” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. If we do have the freedom to evaluate f at arbitrary points in [a,b], then maybe a uniform grid is not optimum. There may be regions where f varies little, while on other regions f oscillates a lot. How does the adaptive quadrature idea deal with such a function? Why do we require the error to be halved when we branch to a lower level? Do you think our (wish) is more or less likely to be true as we branch to lower levels? Could we create an adaptive method without an error estimate?
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
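One common way to organize the recursion, sketched in Python. The factor 15 and the tol/2 halving follow the standard Simpson-based error estimate and may differ in detail from the m-file in the reading; note that halving the tolerance on each branch keeps the per-interval error allowances summing to the original tol:

```python
def simpson(f, a, b, fa, fc, fb):
    # Simpson's rule with the three function values supplied
    return (b - a) / 6.0 * (fa + 4.0 * fc + fb)

def adaptive_simpson(f, a, b, tol):
    """Recursive adaptive Simpson: compare one coarse estimate with
    two half-interval estimates; if they agree to within ~15*tol,
    accept (with a Richardson correction), else recurse on each half
    with tolerance tol/2."""
    c = a + (b - a) / 2.0
    fa, fb, fc = f(a), f(b), f(c)
    def recurse(a, b, c, fa, fb, fc, whole, tol):
        l = a + (c - a) / 2.0
        r = c + (b - c) / 2.0
        fl, fr = f(l), f(r)
        left = simpson(f, a, c, fa, fl, fc)
        right = simpson(f, c, b, fc, fr, fb)
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, c, l, fa, fc, fl, left, tol / 2.0) +
                recurse(c, b, r, fc, fb, fr, right, tol / 2.0))
    return recurse(a, b, c, fa, fb, fc, simpson(f, a, b, fa, fc, fb), tol)
```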

4.3 Of Things Not Covered
 Reading: University of Arkansas: Mark Arnold’s “Monte-Carlo Integration”
Link: University of Arkansas: Mark Arnold’s “Monte-Carlo Integration” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. There are two families of integration techniques that we have omitted. One, Gaussian quadrature, requires that we can evaluate f at arbitrary points in [a,b], but achieves roughly double the accuracy of the uniformly spaced Newton-Cotes methods. The other type of method is philosophically opposite to adaptive or Gaussian quadrature: evaluate f at random points (not uniform, not optimal, not smart: random). As you might guess, this type of method cannot compete with those we have discussed… unless we are integrating a function of many variables over a high dimensional region. These Monte-Carlo methods are methods of last resort, and as such are very important indeed!
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Gauss Quadrature Rule”
Link: YouTube: University of South Florida: Autar Kaw’s “Gauss Quadrature Rule” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 7 lectures that have been split into 9 videos.
This resource should take approximately 1.25 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
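A Python sketch of the idea, estimating pi as four times the area of a quarter disk by sampling random points in the unit square. The error decays like 1/sqrt(n) regardless of dimension, which is exactly why the method survives in high dimensions (the seed is fixed here only to make the run reproducible):

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi = 4 * (area of the quarter disk x^2 + y^2 <= 1
    inside the unit square) by uniform random sampling."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n
```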

4.4 Programming Case Study: Recursion and Adaptive Quadrature
 Reading: University of Arkansas: Mark Arnold’s “Adaptive Quadrature Program”
Link: University of Arkansas: Mark Arnold’s “Adaptive Quadrature Program” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Write a recursive adaptive quadrature program that meets the specifications required on the assignment.
This resource should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

Unit 4 Assessment
 Assessment: The Saylor Foundation's "Unit 4 Assessment"
Link: The Saylor Foundation's "Unit 4 Assessment" (PDF) and “Unit 4 Guide to Responding” (PDF).
Instructions: Carefully answer the two questions in this assessment. You may consider them essay questions, but back up or explain with formulas and/or theorems as needed. When you are finished, you should check yourself by looking at the response guide. You may not have all of the points given in the response guide, or you may have noted something that was not included in the guide, and that may be just fine, but this is a time to reflect on differences between your response and the guide response and possibly review some material.

Unit 5: Differential Equations
The subject addressed in this unit is where we have been headed from the start of this course. We now have the tools we need to understand and construct methods to solve ordinary differential equations. We will begin by carefully defining our problem, if and when it has a solution, and how we mean to approximate that solution. Some methods will have memory; some will not. Some can adapt to the problem; some cannot. Some problems are particularly troublesome to some methods. We will develop many methods, and the art of the numerical analyst is to match a given problem to an appropriate method. Finally, we will implement an adaptive hybrid method and test it on a challenging problem.
Unit 5 Learning Outcomes
 5.1 Initial Value Problems

5.1.1 Definition and Slope Field
 Reading: University of Arkansas: Mark Arnold’s “The IVP and Euler’s Method”
Link: University of Arkansas: Mark Arnold’s “The IVP and Euler’s Method” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the first three paragraphs of the pdf. If it helps, I think of an initial value problem as analogous to tracking the path of a leaf floating in a stream on a windless day. The current of the stream is our slope field, f(t,y). Can you think of other analogies?
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

5.1.2 WellPosed Initial Value Problems
 Reading: University of Arkansas: Mark Arnold’s “Can We Solve This IVP?”
Link: University of Arkansas: Mark Arnold’s “Can We Solve This IVP?” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. We have talked about the conditioning of a problem. In fact, the idea of conditioning is predated by this idea of ‘well-posedness’. A problem is either well-posed or ill-posed. The conditioning of a problem is a finer measure of its difficulty. Here is the connection: a problem is ill-posed if it is infinitely ill-conditioned. Justify that statement. Then explain why the existence of a bounded partial derivative of f with respect to y is sufficient for a continuous IVP to be well-posed.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
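The connection can be sketched as follows (this is the standard Lipschitz/Gronwall argument, stated from memory rather than quoted from the reading):

```latex
\text{If } |f(t,y_1) - f(t,y_2)| \le L\,|y_1 - y_2| \text{ on } [a,b]\times\mathbb{R},
\text{ then any two solutions satisfy }
|y(t) - \tilde{y}(t)| \;\le\; e^{L(t-a)}\,|y(a) - \tilde{y}(a)|.
```

So a data error is amplified by at most the finite factor e^{L(b−a)}; an ill-posed problem is one for which no such finite amplification factor exists, i.e. it is infinitely ill-conditioned.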

5.2 All Initial Value Problems Are First Order (If You Like)
 Reading: University of Arkansas: Mark Arnold’s “Systems of Differential Equations” and “Example: High Order to First Order”
Link: University of Arkansas: Mark Arnold’s “Systems of Differential Equations” (PDF) and “Example: High Order to First Order” (PDF)
Instructions: Click on the link above, then select the appropriate link to download PDFs of the readings. What we are working on here is a first order IVP. Explain why the ability to solve first order systems of IVPs essentially gives the ability to solve arbitrarily high order IVPs (or systems thereof). Remind yourself that when we are developing methods for first order IVPs, we are developing something much more general.
This resource should take approximately 3 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Higher Order/Coupled Differential Equations”
Link: YouTube: University of South Florida: Autar Kaw’s “Higher Order/Coupled Differential Equations” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 4 lectures that have been split into 8 videos.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
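For example, y'' = −y with u = [y, y'] becomes the first order system u' = [y', −y], and any method for first order systems then applies componentwise. A Python sketch (the oscillator example and the Euler stepper are illustrative choices, not from the reading):

```python
import math

def F(t, u):
    """Right-hand side of the first order system equivalent to
    y'' = -y, written with u = [y, y']."""
    y, yp = u
    return [yp, -y]

def euler_system(F, t0, u0, h, nsteps):
    """Euler's method applied componentwise to a system u' = F(t, u)."""
    t, u = t0, list(u0)
    for _ in range(nsteps):
        du = F(t, u)
        u = [ui + h * dui for ui, dui in zip(u, du)]
        t += h
    return u

# y(0) = 0, y'(0) = 1 gives y = sin(t); step to t = 1
y, yp = euler_system(F, 0.0, [0.0, 1.0], 1e-3, 1000)
```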

5.3 SingleStep Methods
 Reading: University of Arkansas: Mark Arnold’s “The IVP and Euler’s Method” and “A Picture Euler’s Method”
Link: University of Arkansas: Mark Arnold’s “The IVP and Euler’s Method” (PDF) and “A Picture Euler’s Method” (PDF)
Instructions: Click on the link above, then select the appropriate link to download the PDFs of the reading. Carefully study paragraphs 4 and 5 of the first pdf, including the pseudocode. The second reading should help you get a visual understanding. Euler’s method, while not used in practice except by people who don’t know any better (or who don’t need to worry about execution time or accuracy), is an excellent method to study. Why? It is as simple as one can get, and every time stepping method is a descendant of Euler’s method.
This resource should take approximately 3 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Euler’s Method”
Link: YouTube: University of South Florida: Autar Kaw’s “Euler’s Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 4 lectures that have been split into 4 videos.
This resource should take approximately 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
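A Python sketch of the pseudocode, applied to y' = y (whose exact solution e^t lets us watch the error halve as h halves, consistent with a method of order 1):

```python
import math

def euler(f, a, b, y0, n):
    """Euler's method for y' = f(t, y), y(a) = y0, with n uniform
    steps of size h: follow the tangent line for one step, repeat."""
    h = (b - a) / n
    t, y = a, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

err1 = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100) - math.e)
err2 = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 200) - math.e)
```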
 5.3.1 Taylor Methods

5.3.1.1 Euler’s Method
 Reading: University of Arkansas: Mark Arnold’s “The IVP and Euler’s Method”
Link: University of Arkansas: Mark Arnold’s “The IVP and Euler’s Method” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Finally, read the remainder of the pdf. Pay careful attention to the error bound. Unfortunately, the error in Euler’s method can grow exponentially in time (notice the e^{L(t_i − a)} term in the formula). In addition, when we include rounding errors, we see that this problem cannot be completely solved by making h small. Find an optimal h for a given IVP assuming you know a Lipschitz constant L and an upper bound M for |y''| on [a,b]. While I hope you are successful, note that we typically do not know L or M… A rule of thumb for Euler’s method is that we should not take h smaller than sqrt{(machine epsilon)}, but taking h that small will typically require very many time steps.
This resource should take approximately 1.5 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
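For reference, a hedged reconstruction of the kind of bound in question (the notation in the reading may differ; here L is a Lipschitz constant, M bounds |y''| on [a,b], and δ is the rounding error per step):

```latex
|y(t_i) - w_i| \;\le\; \frac{1}{L}\left(\frac{hM}{2} + \frac{\delta}{h}\right)
\left(e^{L(t_i - a)} - 1\right),
\qquad \text{minimized at } h = \sqrt{2\delta/M}.
```

The hM/2 term is the truncation error shrinking with h, the δ/h term is the rounding error growing as h shrinks, and their sum is what the optimal h balances.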

5.3.1.2 Local Truncation Error
 Reading: University of Arkansas: Mark Arnold’s “Single Step Methods and Local Truncation Error”
Link: University of Arkansas: Mark Arnold’s “Single Step Methods and Local Truncation Error” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. A natural approach to addressing the drawbacks of Euler’s method would be to construct a more accurate alternative. We will measure accuracy here by local truncation error. Find the local truncation error for Euler’s method. Our first higher order method is the Taylor method of order 2. The Taylor method of order n is not any more difficult to understand. Explain why the Taylor methods of order n > 1 are not general purpose. As an exercise, please find a formula for d^3/dt^3 f(t,y) in terms of partial derivatives of f.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
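As a check on the first exercise, Taylor expansion about t_i gives (assuming y'' is continuous; this is the standard computation, not quoted from the reading):

```latex
\tau_{i+1}(h) \;=\; \frac{y(t_{i+1}) - y(t_i)}{h} - f\bigl(t_i, y(t_i)\bigr)
\;=\; \frac{h}{2}\,y''(\xi_i), \qquad \xi_i \in (t_i, t_{i+1}),
```

so Euler’s method has local truncation error O(h), i.e. it is a method of order 1.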
 5.3.2 Runge-Kutta Methods

5.3.2.1 Runge-Kutta Methods of Order 2
 Reading: University of Arkansas: Mark Arnold’s “Runge-Kutta Methods Order 2”
Link: University of Arkansas: Mark Arnold’s “Runge-Kutta Methods Order 2” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. The Runge-Kutta (RK) methods give us higher order methods without the need for derivatives of f. This makes these methods tremendously important. How would you describe the RK method with respect to the average derivative of f over the interval [t_i, t_i + h]?
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Runge-Kutta 2nd Order Method”
Link: YouTube: University of South Florida: Autar Kaw’s “Runge-Kutta 2^{nd} Order Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 6 lectures that have been split into 8 videos.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
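One way to answer the reading's question about the average derivative: Heun's variant of RK2 averages the slope sampled at the two ends of the step, approximating the mean value of y’ over [t_i, t_i + h] without ever differentiating f. A sketch, again using the illustrative IVP y’ = t + 2y, y(0) = 2:

```python
import math

def f(t, y):
    return t + 2 * y  # illustrative IVP from this unit's exercises, y(0) = 2

def rk2_step(t, w, h):
    # Heun / modified Euler: average the slopes at both ends of the step.
    # This matches Taylor order-2 accuracy with no derivatives of f.
    k1 = f(t, w)
    k2 = f(t + h, w + h * k1)
    return w + h / 2 * (k1 + k2)

exact = lambda t: 2.25 * math.exp(2 * t) - t / 2 - 0.25
w1 = rk2_step(0.0, 2.0, 0.1)   # w1 = 2.445, versus y(0.1) = 2.44815...
```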

5.3.2.2 Higher Order Runge-Kutta Methods
 Reading: University of Arkansas: Mark Arnold’s “Runge-Kutta Methods”
Link: University of Arkansas: Mark Arnold’s “Runge-Kutta Methods” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. Here we describe the RK methods as sampling f in the interval [t_i, t_i + h] in order to approximate the average value of y’ over that interval. This subject is too rich to explore very deeply in this course, but notice that as the local truncation error becomes higher order, the number of function evaluations grows faster than linearly.
This resource should take approximately 1.5 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Runge-Kutta 4th Order Methods”
Link: YouTube: University of South Florida: Autar Kaw’s “Runge-Kutta 4th Order Methods” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 2 lectures that have been split into 3 videos.
This resource should take approximately 30 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
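The classical fourth-order method samples f four times per step. A minimal sketch (the IVP is again our illustrative y’ = t + 2y, y(0) = 2):

```python
import math

def f(t, y):
    return t + 2 * y  # illustrative IVP, y(0) = 2

def rk4_step(t, w, h):
    # Classical RK4: four samples of f per step; local truncation error O(h^5).
    k1 = f(t, w)
    k2 = f(t + h / 2, w + h / 2 * k1)
    k3 = f(t + h / 2, w + h / 2 * k2)
    k4 = f(t + h, w + h * k3)
    return w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

w1 = rk4_step(0.0, 2.0, 0.1)                # 2.44815
exact01 = 2.25 * math.exp(0.2) - 0.05 - 0.25  # y(0.1) = 2.4481562...
```

Four function evaluations buy order 4 here, but as the reading notes, pushing the order higher costs more than proportionally many evaluations.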

5.3.2.3 Adaptive Runge-Kutta Methods
 Reading: University of Arkansas: Mark Arnold’s “Adaptive Runge-Kutta Methods”
Link: University of Arkansas: Mark Arnold’s “Adaptive Runge-Kutta Methods” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. We saw with the adaptive quadrature method that an error estimate provided the opportunity to adapt. There, the adaptation was the subdivision of an interval of integration; here, the adaptation will be of our step size, h. The idea is very general, and this is an active area of research. We estimate y(t+h) using two different methods, and a truncation error analysis provides a new suggested step size. This reading provides the details for a famous method that combines an RK method of order 4 with one of order 5. As an exercise, carry out the analysis for a pair of methods of orders 2 and 3.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
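A minimal sketch of the adaptation idea, using an Euler/Heun (order 1/2) pair rather than the order 4/5 pair from the reading; the tolerance and the 0.9 safety factor are illustrative choices, not values from the reading.

```python
def f(t, y):
    return t + 2 * y  # illustrative IVP, y(0) = 2

def adaptive_step(t, w, h, tol=1e-4):
    # Two estimates of y(t+h): Euler (order 1) and Heun (order 2).
    k1 = f(t, w)
    k2 = f(t + h, w + h * k1)
    w_low = w + h * k1               # Euler
    w_high = w + h / 2 * (k1 + k2)   # Heun
    err = abs(w_high - w_low)        # estimates the low-order step's error
    # Truncation-error analysis suggests scaling h by (tol/err)^(1/order):
    h_new = 0.9 * h * (tol / err) ** 0.5 if err > 0 else 2 * h
    accepted = err <= tol
    return (w_high if accepted else w), h_new, accepted
```

Starting from (t, w) = (0, 2), a step of h = 0.1 is rejected and a much smaller h is suggested; a step of h = 0.001 is accepted and h is allowed to grow.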
 5.4 Multi-Step Methods

5.4.1 Adams-Bashforth Methods
 Reading: University of Arkansas: Mark Arnold’s “Multistep Methods”
Link: University of Arkansas: Mark Arnold’s “Multistep Methods” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the first 3 paragraphs of the pdf. The multistep methods have a memory. This is what can give them a high-order local truncation error with fewer function evaluations. But for the first m - 1 steps of an m-step method, the history is incomplete. In this sense, the RK methods are prior to, or more fundamental than, the multistep methods. The notation here may be confusing, so compute a few steps of the explicit method of order 2 (Adams-Bashforth 2-step) for the IVP y’(t) = t + 2y, y(0) = 2, using h = 0.1. What order RK method should be used to generate w_1?
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
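One way to carry out the suggested computation, with w_1 generated by an order-2 RK method (Heun) so the starting value does not degrade the overall order; the loop bound and variable names are our own choices.

```python
def f(t, y):
    return t + 2 * y  # the exercise's IVP, y(0) = 2

h, t0, w0 = 0.1, 0.0, 2.0

# RK2 (Heun) start: w_1 = 2.445.
k1 = f(t0, w0)
k2 = f(t0 + h, w0 + h * k1)
w1 = w0 + h / 2 * (k1 + k2)

# Adams-Bashforth 2-step: w_{i+1} = w_i + (h/2)(3 f_i - f_{i-1}).
w = [w0, w1]
for i in range(1, 4):
    t_i = t0 + i * h
    w.append(w[i] + h / 2 * (3 * f(t_i, w[i]) - f(t_i - h, w[i - 1])))
# w[2] = 2.9935, versus the exact y(0.2) = 3.00660...
```

Each AB2 step costs only one new evaluation of f, reusing the remembered value from the previous step.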

5.4.2 Adams-Moulton Methods
 Reading: University of Arkansas: Mark Arnold’s “Multistep Methods”
Link: University of Arkansas: Mark Arnold’s “Multistep Methods” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Read the remainder of the pdf carefully. The implicit multistep methods are not explicitly solved for w_{i+1} (it appears on both sides of the equation, and as an argument to f). This is part of the reason we investigated solving equations early in this course. We could use a few iterations of the Newton or secant method to approximate w_{i+1} (possibly using w_i as an initial guess). Conceptually this is not a problem, but it requires costly function evaluations. Compute a few steps of the implicit method of order 2 (Adams-Moulton 2-step) for the IVP y’(t) = t + 2y, y(0) = 2, using h = 0.1. Use 2 secant iterations to compute w_2, using w_1 and your w_2 from the previous unit as the starting values.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
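A sketch of resolving one implicit step by secant iteration. Naming conventions for the Adams-Moulton family vary, so this sketch uses the order-2 implicit (trapezoidal) formula as the corrector equation; the starting values w_1 = 2.445 and w_2 = 2.9935 are carried over from the explicit computation suggested in the previous unit.

```python
def f(t, y):
    return t + 2 * y  # the exercise's IVP, y(0) = 2

h, t1, t2 = 0.1, 0.1, 0.2
w1 = 2.445        # from the RK2 start in the previous unit
w2_ab = 2.9935    # explicit Adams-Bashforth 2-step value for w_2

def g(w):
    # The implicit order-2 step defines w_2 as the root of g(w) = 0.
    return w - w1 - h / 2 * (f(t2, w) + f(t1, w1))

a, b = w1, w2_ab
for _ in range(2):           # two secant iterations, as the exercise asks
    if g(b) == g(a):
        break                # guard: secant step undefined / already converged
    a, b = b, b - g(b) * (b - a) / (g(b) - g(a))
w2 = b
```

Because f is linear in y here, g is linear in w and the secant method lands on the root w_2 = 3.005 in a single iteration; for nonlinear f, each iteration costs a fresh evaluation of f, which is the expense the reading warns about.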

5.4.3 Predictor-Corrector Methods
 Reading: University of Arkansas: Mark Arnold’s “Adaptive Multistep Methods”
Link: University of Arkansas: Mark Arnold’s “Adaptive Multistep Methods” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. In the last unit we met a fundamental difficulty associated with implicit methods: the need to solve an equation at each time step. We will see later that implicit methods are so important that we should not dismiss them simply because of this complication. One rather elegant approach is to use an explicit method to predict w_{i+1}, and then to correct the prediction by replacing w_{i+1} on the right-hand side with that prediction. Naturally enough, this is called a predictor-corrector method, and since we are using two distinct methods to arrive at w_{i+1}, we can implement a step size correction if we like. Compute a few steps of a predictor-corrector method using an Adams-Bashforth 2-step and an Adams-Moulton 2-step for the IVP y’(t) = t + 2y, y(0) = 2, using (a constant) h = 0.1. Discuss the difficulties of changing the step size for a multistep method.
This resource should take approximately 2 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
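A sketch of the whole predictor-corrector loop on the exercise's IVP: Adams-Bashforth 2-step predicts, and the prediction is substituted into the right-hand side of an order-2 implicit (trapezoidal) corrector. The pairing and the RK2 starting step are our own choices for illustration.

```python
import math

def f(t, y):
    return t + 2 * y  # the exercise's IVP, y(0) = 2

h, w = 0.1, [2.0]
k1 = f(0.0, w[0])
w.append(w[0] + h / 2 * (k1 + f(h, w[0] + h * k1)))   # RK2 start: w_1

for i in range(1, 5):                                  # advance to t = 0.5
    t_i = i * h
    # Predict with AB2, then correct by evaluating f at the prediction:
    pred = w[i] + h / 2 * (3 * f(t_i, w[i]) - f(t_i - h, w[i - 1]))
    w.append(w[i] + h / 2 * (f(t_i + h, pred) + f(t_i, w[i])))

exact_half = 2.25 * math.exp(1.0) - 0.5   # y(0.5) = 5.6161...
```

No equation solving is needed: each step costs two evaluations of f (one for the predictor's new slope, one at the prediction), and the predictor/corrector disagreement is available as a free error estimate for step-size control.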

5.5 Stiff Problems
 Reading: University of Arkansas: Mark Arnold’s “Stiff Problems”
Link: University of Arkansas: Mark Arnold’s “Stiff Problems” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf. We have seen that adaptive step sizes can lead to much greater efficiency than uniform steps. Stiff differential equations can easily fool some methods, forcing them to take small step sizes even when there is little or no variation in the solution.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
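A small illustration (our own example, not from the reading) of why stiffness forces explicit methods to take tiny steps: on the stiff scalar problem y’ = -100y, y(0) = 1, the true solution decays almost instantly, yet explicit Euler with h = 0.1 oscillates and blows up, while backward (implicit) Euler decays for any h > 0.

```python
lam, h, n = -100.0, 0.1, 10   # illustrative stiff problem y' = lam*y, y(0) = 1

w_explicit = 1.0
w_implicit = 1.0
for _ in range(n):
    # Explicit Euler multiplies by (1 + h*lam) = -9 each step: blow-up.
    w_explicit = w_explicit + h * lam * w_explicit
    # Backward Euler solves w_new = w + h*lam*w_new, i.e. divides by (1 - h*lam) = 11.
    w_implicit = w_implicit / (1 - h * lam)
```

The explicit iterate grows like 9^n while the implicit one decays like 11^(-n); to behave at all, explicit Euler would need h < 0.02 even though the solution is essentially zero after the first instant.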

5.6 Stability of Methods
 Reading: Scholarpedia: John Butcher’s “Stability”
Link: Scholarpedia: John Butcher’s “Stability” (HTML)
Instructions: Click on the link above and read the article. All things being equal, we would like to use stable methods; but of course, we would also like to use the fastest, most general methods. Regions of stability give us a necessary condition that bounds the time step. Methods with large stability regions are especially important in the case of stiff systems, and such methods are usually implicit.
This resource should take approximately 1 hour to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
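Membership in a stability region is easy to check numerically. For the test problem y’ = λy, a one-step method produces w_{i+1} = R(hλ)w_i, and the step is stable when |R(hλ)| ≤ 1; Euler has R(z) = 1 + z, and classical RK4 has R(z) = 1 + z + z²/2 + z³/6 + z⁴/24, whose real-axis stability boundary is near z = -2.785. A sketch:

```python
def R_euler(z):
    # Euler on y' = lam*y: w_{i+1} = (1 + h*lam) w_i.
    return 1 + z

def R_rk4(z):
    # Classical RK4 stability polynomial.
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# For lam = -100, Euler's region |1 + z| <= 1 demands h < 2/100 = 0.02:
inside = abs(R_euler(-100 * 0.019))   # < 1: h = 0.019 is stable
outside = abs(R_euler(-100 * 0.021))  # > 1: h = 0.021 is not
```

RK4's longer real interval (out to about -2.785) buys a somewhat larger stable step than Euler's, but for genuinely stiff problems only the large (often unbounded) regions of implicit methods remove the restriction.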

5.7 Shooting Methods for Boundary Value Problems
 Reading: University of Arkansas: Mark Arnold’s “Shooting Methods”
Link: University of Arkansas: Mark Arnold’s “Shooting Methods” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf.
This resource should take approximately 1 hour to complete.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Shooting Method”
Link: YouTube: University of South Florida: Autar Kaw’s “Shooting Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 3 lectures that have been split into 6 videos.
This resource should take approximately 50 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
 Lecture: YouTube: University of South Florida: Autar Kaw’s “Finite Difference Method”
Link: YouTube: University of South Florida: Autar Kaw’s “Finite Difference Method” (YouTube)
Instructions: Click on the link above, then watch the video lectures in the chapter listed above. In this case there are 3 lectures that have been split into 6 videos.
This resource should take approximately 50 minutes to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpages above.
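The shooting idea can be sketched in a few lines. We pick a hypothetical linear boundary value problem y'' = 6t, y(0) = 0, y(1) = 1 (exact solution y = t³), rewrite it as the first-order system u’ = v, v’ = 6t, and adjust the unknown initial slope s = y’(0) by the secant method until the integrated solution hits the far boundary value. The BVP, step size, and initial slope guesses are all our own illustrative choices.

```python
def integrate(s, h=0.01):
    # RK4 on the system u' = v, v' = 6t from t = 0 to 1; returns y(1)
    # for the trial initial slope s.
    def F(t, u, v):
        return v, 6 * t
    t, u, v = 0.0, 0.0, s
    for _ in range(round(1.0 / h)):
        k1u, k1v = F(t, u, v)
        k2u, k2v = F(t + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = F(t + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = F(t + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return u

def miss(s):
    return integrate(s) - 1.0   # how far y(1) misses the boundary value

a, b = 0.5, 1.5                 # two trial slopes to start the secant method
for _ in range(5):
    if miss(b) == miss(a):
        break
    a, b = b, b - miss(b) * (b - a) / (miss(b) - miss(a))
slope = b                       # converges to the true y'(0) = 0
```

Because this BVP is linear, miss(s) is linear in s and the secant method converges essentially in one iteration; for nonlinear BVPs the same loop simply runs longer.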

5.8 Programming Case Study: A Predictor-Corrector Implementation
 Reading: University of Arkansas: Mark Arnold’s “Predictor-Corrector Program”
Link: University of Arkansas: Mark Arnold’s “Predictor-Corrector Program” (PDF)
Instructions: Click on the link above, then select the appropriate link to download a pdf of the reading. Carefully read the pdf.
This resource should take approximately 4 hours to complete.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.

Unit 5 Assessment
 Assessment: The Saylor Foundation's "Unit 5 Assessment"
Link: The Saylor Foundation's "Unit 5 Assessment" (PDF) and “Guide to Responding” (PDF).
Instructions: Carefully answer the four questions in the assessment. The first three are entirely computational, and while it may be tedious, it will be good for you to do them by hand. If you keep 7 or more significant (decimal) digits throughout your computations, your results should match the answers given in the guide. Most mistakes arise from using the wrong value of the time parameter t. Question 4 also includes an essay component asking you to reflect on the results of your experiment; you can then compare your conclusions with the guide. Once you have done this by hand at least once, please feel free to construct other, similar experiments that you can implement with your programs.

Final Exam
 Final Exam: The Saylor Foundation's MA213 Final Exam
Link: The Saylor Foundation's MA213 Final Exam
Instructions: You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after clicking the link.