This set of exercises is designed to give you a familiarity with numerical differentiation. Hints and solutions are available.
Do the following problem with pen and paper.
Using the Taylor series expansion of the form
$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \cdots,$$
derive the forward difference, backward difference and central difference approximations to $f'(x)$, and an approximation to $f''(x)$.
Hint: you will need to include more terms in your Taylor expansion.
1. Let $x$ be the point at which the derivative is to be approximated and let $h > 0$ be a small step size. Then the first two terms of the Taylor expansion give:
$$f(x+h) \approx f(x) + h f'(x).$$
Rearranging this gives:
$$f'(x) \approx \frac{f(x+h) - f(x)}{h},$$
the forward difference rule.
Similarly, using
and
gives, on rearranging:
the backward difference rule.
3. Adding the two previous formulae together and dividing by 2 gives:
$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h},$$
the central difference rule.
4. Finally, if we include the third term in the expansions in 1. and 2. then we get:
$$f(x+h) \approx f(x) + h f'(x) + \frac{h^2}{2} f''(x)$$
and
$$f(x-h) \approx f(x) - h f'(x) + \frac{h^2}{2} f''(x).$$
Adding these together and rearranging gives:
$$f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2},$$
the second derivative approximation.
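Before moving on to the computer exercise, the four rules can be checked numerically at a single point. The short MATLAB sketch below is not part of the exercise: the test function sin(x), the point x = 1 and the step size h = 1e-3 are arbitrary choices, and each approximation is compared with the exact derivative.
% Quick numerical check of the four difference rules on f(x) = sin(x).
f = @(x) sin(x);
x = 1;        % point at which to approximate the derivatives
h = 1e-3;     % small step size
forward  = (f(x+h) - f(x))/h;                 % forward difference
backward = (f(x) - f(x-h))/h;                 % backward difference
central  = (f(x+h) - f(x-h))/(2*h);           % central difference
second   = (f(x+h) - 2*f(x) + f(x-h))/(h*h);  % second derivative
fprintf('forward difference error:  %g\n', abs(forward  - cos(x)))
fprintf('backward difference error: %g\n', abs(backward - cos(x)))
fprintf('central difference error:  %g\n', abs(central  - cos(x)))
fprintf('second derivative error:   %g\n', abs(second   + sin(x)))
The forward and backward errors should be of order h, while the central and second-derivative errors should be of order h², which previews the behaviour explored in the next exercise.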
Let $x_0, x_1, \ldots, x_N$ be equally spaced points over the interval $[-3, 3]$.
Let $h = 6/N$, so that $x_i = -3 + ih$ for $i = 0, 1, \ldots, N$.
Let $y = x^4$, and so $y' = 4x^3$ and $y'' = 12x^2$.
Begin by choosing $N = 10$:
1. Approximate the first derivative at the points $x_0, x_1, \ldots, x_{N-1}$ using forward differences.
2. Approximate the first derivative at the points $x_1, x_2, \ldots, x_N$ using backward differences. Modify the code from part 1.
3. Approximate the first derivative at the points $x_1, x_2, \ldots, x_{N-1}$ using central differences.
4. Approximate the second derivative at the points $x_1, x_2, \ldots, x_{N-1}$.
5. Plot your approximations and the true values of $y'$ and $y''$ on the same graphs (one for $y'$ and one for $y''$).
6. What is the maximum absolute error for each approximation?
The code below will calculate all the approximations and plot them on the same graph as the exact solutions.
clear all
close all
N=10; % The number of intervals to use
h=6/N; % the step size
x=linspace(-3,3,N+1);
%
%% Calculate the exact solutions
y=x.*x.*x.*x;
yprime=4*x.*x.*x;
yprimeprime=12*x.*x;
%
%% Calculate the approximations
forwarddiff(1) = (y(2)-y(1))/h;
for i=2:N
forwarddiff(i) = (y(i+1)-y(i))/h;
backwarddiff(i-1) = (y(i)-y(i-1))/(h);
centraldiff(i-1)=(y(i+1)-y(i-1))/(2*h);
secondderiv(i-1)=(y(i-1)-2*y(i)+y(i+1))/(h*h);
end
backwarddiff(N) = (y(N+1)-y(N))/h;
%
%% Evaluate the maximum error for each approximation
MaxErrorInForwardDifference=max(abs(yprime(1:N)-forwarddiff))
MaxErrorInBackwardDifference=max(abs(yprime(2:N+1)-backwarddiff))
MaxErrorInCentralDifference=max(abs(yprime(2:N)-centraldiff))
MaxErrorInSecondDerivative=max(abs(yprimeprime(2:N)-secondderiv))
%
%% Plot the approximations with the actual derivatives
plot(x,yprime,x(1:N),forwarddiff,x(2:N+1),backwarddiff,x(2:N),centraldiff)
title('First Derivative')
xlabel('x')
ylabel('y')
legend('True','Forward','Back','Central')
%
figure
plot(x,yprimeprime,x(2:N),secondderiv)
title('Second Derivative')
xlabel('x')
ylabel('y')
legend('True','Approx')
The maximum error for each approximation (with N = 10) is:
MaxErrorInForwardDifference = 28.2960
MaxErrorInBackwardDifference = 28.2960
MaxErrorInCentralDifference = 3.4560
MaxErrorInSecondDerivative = 0.7200
Repeat for $N = 20$ and $N = 40$.
i) In each case, how big does $N$ need to be so that the maximum absolute error of the derivative at each discrete point is less than 0.01?
ii) Comment on how the maximum absolute error for each scheme decreases as $N$ is doubled.
The maximum errors for different values of $N$ can be calculated by modifying the code from a) above (a sketch of such a loop is given after the table) and are given in the table below.
N | Forward Difference | Backward Difference | Central Difference | Second Derivative |
---|---|---|---|---|
10 | 28.2960 | 28.2960 | 3.4560 | 0.7200 |
20 | 15.1470 | 15.1470 | 0.9720 | 0.1800 |
40 | 7.8334 | 7.8334 | 0.2565 | 0.0450 |
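One possible way to generate the entries of this table (a sketch only; the tabulated values were presumably obtained by re-running the code above for each N) is to wrap vectorised versions of the four approximations in a loop over N:
% Loop over several grid sizes and print the maximum absolute errors.
for N = [10 20 40]
    h = 6/N;
    x = linspace(-3,3,N+1);
    y = x.^4;
    yprime = 4*x.^3;
    yprimeprime = 12*x.^2;
    forwarddiff  = (y(2:N+1) - y(1:N))/h;                   % at x(1:N)
    backwarddiff = (y(2:N+1) - y(1:N))/h;                   % same values, taken at x(2:N+1)
    centraldiff  = (y(3:N+1) - y(1:N-1))/(2*h);             % at x(2:N)
    secondderiv  = (y(1:N-1) - 2*y(2:N) + y(3:N+1))/(h*h);  % at x(2:N)
    fprintf('N = %3d: %9.4f %9.4f %9.4f %9.4f\n', N, ...
        max(abs(yprime(1:N) - forwarddiff)), ...
        max(abs(yprime(2:N+1) - backwarddiff)), ...
        max(abs(yprime(2:N) - centraldiff)), ...
        max(abs(yprimeprime(2:N) - secondderiv)))
end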
To get a maximum absolute error of less than 0.01 we require, extrapolating from the table (the error scales like 1/N for the first two schemes and like 1/N² for the last two), roughly:
$N$ of the order of 30,000 for forward and backward differences;
$N$ of the order of 200 for central differences;
$N$ of the order of 85 for the second derivative.
For forward and backward differences, the error roughly halves as $N$ is doubled. For central differences and the second derivative, the error is divided by roughly 4 as $N$ is doubled. In mathematical terms, the first two schemes are ‘first order’ and the second two are ‘second order’: the error is proportional to $h$ (that is, to $1/N$) for a first-order scheme and to $h^2$ (to $1/N^2$) for a second-order scheme, so the second-order errors shrink much faster as $N$ increases (a short sketch estimating these orders from the table is given at the end).
Don’t worry if you don’t know what these terms mean.
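If you want to make this quantitative, the observed order of each scheme can be estimated directly from the table: if the error behaves like C*h^p, then doubling N divides it by 2^p, so p is roughly log2 of the ratio of successive errors. A short sketch that simply treats the tabulated errors as data:
% Estimate the order of convergence p from the tabulated maximum errors.
% (Backward difference errors are identical to the forward ones here.)
forwardErr = [28.2960 15.1470 7.8334];  % N = 10, 20, 40
centralErr = [3.4560 0.9720 0.2565];
secondErr  = [0.7200 0.1800 0.0450];
pForward = log2(forwardErr(1:end-1)./forwardErr(2:end))  % roughly 1: first order
pCentral = log2(centralErr(1:end-1)./centralErr(2:end))  % roughly 2: second order
pSecond  = log2(secondErr(1:end-1)./secondErr(2:end))    % roughly 2: second order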