Quadrature/Numerical Integration

December 17, 2009

Quadrature is the numerical evaluation of definite integrals of the form:

\displaystyle F = \int_{a}^{b} {f(x) dx}

This can be approximated by a weighted sum of function values:

\displaystyle F \approx \sum_{k} w_k f(x_k)

There are many quadrature rules available, and each of them takes the form above. They differ only in the choice of the x_k, called the quadrature nodes, and the weights w_k.

Interpolatory Quadrature:

The previous posts covered polynomial interpolation. The integral of the polynomial interpolant p_n(x) of f(x) may be used to derive a quadrature formula for F. Let this polynomial integral be denoted \hat{F} from this point on.

\displaystyle p_n(x) \approx f(x) through n+1 points.

\displaystyle \hat{F}=\int_a^b {p_n(x)}dx

Example using Lagrange form:

The polynomial interpolant in Lagrange form is \displaystyle p_n(x) = \sum_{k=0}^n {L_{n,k}(x) f(x_k)}. The integral of a given function f(x) interpolated in Lagrange form can therefore be approximated by:

\displaystyle \hat{F}=\int_a^b \left( \sum_{k=0}^n {L_{n,k}(x) f(x_k)} \right) dx

Interchanging the order of summation and integration, we obtain:

\displaystyle \hat{F}= \sum_{k=0}^n \left( \int_a^b{L_{n,k}(x) dx} \right) f(x_k)

The quadrature formula is then:

\displaystyle \hat{F} = \sum_{k=0}^n{w_k f(x_k)}

where the weights are now \displaystyle w_k = \int_a^b {L_{n,k}(x) dx}. The simplest and most commonly used quadrature formulas are those with equally spaced nodes, known as the (n+1)-point Newton-Cotes formulas. Here is an example of the 2-point closed Newton-Cotes formula using the Lagrange form above:

Set x_0=a, x_1=b, and \Delta{x}=h=b-a. The interpolatory quadrature is then \displaystyle \hat{F}_1 = w_0 f(x_0) + w_1 f(x_1), where

\displaystyle w_0 =\int_{x_0}^{x_1} L_{1,0}(x)dx and \displaystyle w_1 = \int_{x_0}^{x_1} L_{1,1}(x)dx

From my previous post on Lagrange interpolation,

\displaystyle w_0 = \int_{x_0}^{x_1} {\frac{x-x_1}{x_0-x_1}dx} and \displaystyle w_1 = \int_{x_0}^{x_1} {\frac{x-x_0}{x_1-x_0}dx}

Evaluating both integrals gives w_0 = w_1 = \frac{h}{2}, so \displaystyle \hat{F}_1 = \frac{h}{2}\left( f(a) + f(b) \right), which is the trapezoid rule.

Andrew Perry and I wrote an interpolatory quadrature routine along these lines; a sketch follows.
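What follows is a minimal Python sketch of the idea rather than our original code: the 2-point rule applied on each of n subintervals, i.e. the composite trapezoid rule. The test integrand f(x) = x^2 on [0, 1] is an arbitrary choice for illustration.

import numpy as np

def trapezoid(f, a, b, n):
    # Composite 2-point Newton-Cotes (trapezoid) rule with n subintervals.
    x = np.linspace(a, b, n + 1)       # equally spaced nodes x_0, ..., x_n
    w = np.full(n + 1, (b - a) / n)    # interior weights: h
    w[0] = w[-1] = (b - a) / (2 * n)   # endpoint weights: h/2
    return np.sum(w * f(x))

f = lambda x: x**2                     # test integrand; exact integral on [0,1] is 1/3
print(trapezoid(f, 0.0, 1.0, 4))       # 0.34375
print(trapezoid(f, 0.0, 1.0, 100))     # approx 0.333350

With n = 4 this returns 0.34375 against the exact value 1/3, and the error shrinks like O(h^2) as n grows.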

This form extends to n+1 points for any polynomial-basis interpolation:

\displaystyle \hat{F}_n = \sum_{k=0}^n w_k f(x_k) = w_0 f(x_0) + w_1 f(x_1) + \cdots + w_{n-1} f(x_{n-1}) + w_{n} f(x_n)

The familiar Riemann-sum rules (left, right, and midpoint) fit this same template with particular choices of nodes and weights.

Gauss-Chebyshev Quadrature Rule:

The Newton-Cotes formulas are based on interpolating polynomials. However, interpolation at equidistant points suffers from the Runge phenomenon even for well-behaved functions. For an explicitly given f(x), other choices of quadrature should be considered, such as Gaussian quadrature. The Gauss-Chebyshev rule transforms the integral as follows:

\displaystyle \int_{-1}^1 f(x) dx = \int_{-1}^{1} \frac{1}{\sqrt{1-x^2}} g(x) dx \approx \sum_{i=1}^n w_i g(x_i)

where

\displaystyle g(x_i) = f(x_i) \sqrt{1-x_i^2}, the nodes are the Chebyshev points x_i = \cos\left(\frac{(2i-1)\pi}{2n}\right), and the weights are w_i=\frac{\pi}{n} for i=1,2, \cdots n.

Here is a short implementation of Gauss-Chebyshev quadrature.
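This is a minimal Python sketch rather than the Matlab version originally promised; the Runge function f(x) = 1/(1+25x^2) is an arbitrary test integrand, and its exact integral over [-1,1] is (2/5)\arctan(5) \approx 0.549360.

import numpy as np

def gauss_chebyshev(f, n):
    # Approximate the integral of f over [-1, 1] with n Gauss-Chebyshev nodes.
    i = np.arange(1, n + 1)
    x = np.cos((2 * i - 1) * np.pi / (2 * n))  # Chebyshev nodes x_i
    g = f(x) * np.sqrt(1 - x**2)               # g(x_i) = f(x_i) * sqrt(1 - x_i^2)
    return (np.pi / n) * np.sum(g)             # every weight equals pi/n

f = lambda x: 1.0 / (1 + 25 * x**2)            # arbitrary test integrand
print(gauss_chebyshev(f, 50))                  # approaches 0.549360 as n grows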


Interpolation Using Splines

December 10, 2009

This method has advantages over high-degree polynomials, which have a tendency to oscillate between data values. A cubic spline is a series of third-degree polynomials joining the points together, and only four unknown coefficients need to be computed for each piece.

Given n+1 data points, there will be n cubic polynomials s_k(x):

s_0 (x) = a_0 + b_{0}x + c_0 x^2 + d_0 x^3
s_1 (x) = a_1 + b_{1}x + c_1 x^2 + d_1 x^3
s_2 (x) = a_2 + b_{2}x + c_2 x^2 + d_2 x^3
\vdots
s_{n-1} (x) = a_{n-1} + b_{n-1} x + c_{n-1} x^2 + d_{n-1}x^3

Each cubic polynomial must pass through the two points it joins: s_0(x) passes through (x_0,y_0) and (x_1,y_1), s_1(x) passes through (x_1,y_1) and (x_2,y_2), and so on.

Smoothness conditions are what make the spline method work. The slope f'(x) and the second derivative f''(x), which determines the concavity of f(x), must agree where adjacent pieces meet. These conditions prevent the oscillations that usually appear with high-order polynomials from happening in the cubic spline.

Properties of Cubic Spline:

  1. S(x)=s_k(x)=s_{k,0} + s_{k,1}(x-x_k) + s_{k,2}(x-x_k)^2 + s_{k,3}(x-x_k)^3 for k=0,1,2, \cdots n-1
  2. S(x_k)= f(x_k) for k=0,1,2, \cdots n. The spline passes through each data point.
  3. s_k(x_{k+1})=s_{k+1}(x_{k+1}) for k=0,1,2, \cdots n-2. The spline forms a continuous function on the interval.
  4. s'_k(x_{k+1})=s'_{k+1}(x_{k+1}) for k=0,1,2,\cdots n-2. The spline forms a smooth function.
  5. s''_k(x_{k+1})=s''_{k+1}(x_{k+1}) for k=0,1,2,\cdots n-2. The second derivative is continuous.

These properties make sure that the Runge phenomenon does not occur with this interpolation method. Counting conditions: the n cubics have 4n coefficients, while properties 2-5 supply 4n-2 equations, so two boundary conditions (for example, the natural spline choice s''(x_0)=s''(x_n)=0) fix the remaining degrees of freedom.

Matlab Code:

close all; clc; clear all

% x = linspace(-1,1,15);        % equally spaced nodes (alternative)
x = -cos(pi*(0:20)/20);         % Chebyshev nodes on [-1,1]
y = 1./(1+25*x.^2);             % Runge function sampled at the nodes

xx = linspace(-1,1,100);        % fine grid for plotting
yy = 1./(1+25*xx.^2);           % exact function on the fine grid
ys = spline(x,y,xx);            % cubic spline evaluated on the fine grid

% plot(x,y,'o',xx,yy)           % nodes and exact function
plot(xx,yy,'b')                 % exact function in blue
hold on
plot(xx,ys,'r')                 % spline interpolant in red

The result of this code is shown in the following figures: [Figure: spline through 10 equally spaced points], [Figure: spline through 10 Chebyshev points], [Figure: spline through 15 points].

Visually, the Chebyshev point distribution gives a much better result than equally spaced points (with 10 points in each case).

Newton Interpolation Polynomial

December 10, 2009

Polynomial interpolation can be used to construct the polynomial of degree n that passes through n+1 points (x_k,y_k)_{k=0}^n, just like the other interpolation forms. The difference is that Newton's form uses divided differences to calculate its polynomial coefficients.

Assume that y_k = f(x_k) and that the nodes x_k, for k = 0, 1, \cdots, n, are distinct values. Then there is a unique polynomial P_n(x) of degree at most n with P_n(x_k) = y_k for each k.

Divided differences are computed by a recursive division process. They are symmetric in their arguments: the order in which the data points are supplied does not affect the leading coefficient.

Computing Newton’s Polynomial’s leading coefficient:

a_0=f(x_0)

a_1=\frac{f(x_1)-f(x_0)}{x_1-x_0}

a_2=\frac{f(x_2)-f(x_0)-(x_2-x_0) a_1}{(x_2-x_0)(x_2-x_1)}

In essence, this method builds the polynomial term by term, each new coefficient re-centering the correction at a different node; this is a major difference between Newton's form and the other interpolation forms.

Using these coefficients, Newton's polynomial is constructed to approximate f(x):

\displaystyle P_n(x) = a_0 + a_1 (x-x_0) + a_2 (x-x_0)(x-x_1) + \cdots + a_n (x-x_0)(x-x_1) \cdots (x-x_{n-1})

The divided differences for a function f(x) are defined recursively as follows:

f[x_k] = f(x_k)

f[x_{k-1},x_k] = \frac{f[x_k]-f[x_{k-1}]}{x_k-x_{k-1}}

f[x_{k-2},x_{k-1},x_k] = \frac{f[x_{k-1},x_k]-f[x_{k-2},x_{k-1}]}{x_k-x_{k-2}}

and in general f[x_{k-j}, \cdots ,x_k] = \frac{f[x_{k-j+1}, \cdots ,x_k]-f[x_{k-j}, \cdots ,x_{k-1}]}{x_k-x_{k-j}}. The coefficients above are then a_k = f[x_0,x_1, \cdots ,x_k].

The coefficients needed to construct Newton's polynomial are the top diagonal of the divided-difference table.
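A minimal Python sketch of this construction (illustrative; the function and variable names are my own) builds the divided-difference table, reads the coefficients a_k off the top row, and evaluates the Newton polynomial by nested multiplication:

import numpy as np

def divided_differences(x, y):
    # Build the divided-difference table; row 0 holds the coefficients a_k.
    n = len(x)
    table = np.zeros((n, n))
    table[:, 0] = y
    for j in range(1, n):
        for i in range(n - j):
            table[i, j] = (table[i+1, j-1] - table[i, j-1]) / (x[i+j] - x[i])
    return table

def newton_eval(x_nodes, a, t):
    # Evaluate a_0 + a_1(t-x_0) + ... by nested multiplication (Horner form).
    result = a[-1]
    for k in range(len(a) - 2, -1, -1):
        result = result * (t - x_nodes[k]) + a[k]
    return result

x = np.array([1.0, 2.0, 5.0])            # the points (1,3), (2,12), (5,63)
y = np.array([3.0, 12.0, 63.0])
a = divided_differences(x, y)[0, :]      # a_0 = 3, a_1 = 9, a_2 = 2
print(newton_eval(x, a, 2.0))            # 12.0, reproducing the data

On these three points the result expands to P_2(x) = 3 + 9(x-1) + 2(x-1)(x-2) = 2x^2 + 3x - 2, the same polynomial the Vandermonde approach produces.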


Runge’s Phenomenon

December 10, 2009

Runge’s Phenomenon:

It is a problem that occurs when interpolating with a polynomial of high degree: the more data points used, the higher the polynomial degree, and the worse the oscillations near the ends of the interval. To investigate this phenomenon, I will use f(x)=\frac{1}{1+25x^2} on [-1,1].

Equally Spaced Nodes:

[Figure: error plot of f(x) with 10 interpolation points]
[Figure: f(x) with 10 interpolation points]

The error plot looks great; anyone would be happy with an error near machine precision (about 1e-16). However, the reconstruction of the function (in blue) is poor. With more points, we see Runge's phenomenon, the oscillation toward the ends:

The error plot of this interpolation shows the impact of Runge's phenomenon: the error toward the endpoints, where the oscillations occur, is very large.

Chebyshev Points:

x_k = \cos \frac{k\pi}{n}\text{ for }0 \le k \le n.

Chebyshev points are better for polynomial interpolation because they reduce the oscillation as n increases. Below is a plot of f(x) with just 16 points.
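A quick numerical check (a sketch; it uses NumPy's polyfit, where a degree-n fit through n+1 points is exactly the interpolating polynomial) compares the two node choices:

import numpy as np

f = lambda x: 1.0 / (1 + 25 * x**2)

def max_interp_error(nodes):
    # Interpolate f at the nodes, then measure the worst error on a fine grid.
    coeffs = np.polyfit(nodes, f(nodes), len(nodes) - 1)
    xx = np.linspace(-1, 1, 1000)
    return np.max(np.abs(np.polyval(coeffs, xx) - f(xx)))

n = 16
equi = np.linspace(-1, 1, n + 1)             # equally spaced nodes
cheb = np.cos(np.arange(n + 1) * np.pi / n)  # Chebyshev nodes
print(max_interp_error(equi))                # large: Runge oscillations
print(max_interp_error(cheb))                # far smaller with Chebyshev nodes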



Vandermonde Matrix & Lagrange Polynomial

December 10, 2009

Vandermonde Matrix & Lagrange Interpolation:

(I got this idea from Chuck. During our chat, he pointed out that there is really not a big difference between the two methods except for how the coefficients are gathered. Thanks Chuck!)

The quadratic equation example from the Polynomial Interpolation post is in fact a Vandermonde matrix with only 3 points. Now consider n+1 points. The polynomial can be expressed in this form:

P_n(x)=\sum_{k=0}^n{a_k x^k} and

f(x_k)=P_n(x_k), k=0,1,2 \cdots ,n

For a large number of data points, usually the case in real-life problems (compared to the P_2(x) example), we obtain a linear system of equations for the coefficients a_0, a_1, a_2, \cdots, a_n:

\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\a_n \end{bmatrix}=\begin{bmatrix} f(x_0) \\ f(x_1) \\ \vdots \\f(x_n) \end{bmatrix}

The coefficient matrix is known as the Vandermonde Matrix.

Here is the Matlab code written in class:

clear
N = 5;
for j = 1:N
    x(j) = (j-1)*2*pi/(N-1);   % set up grid on [0, 2*pi]
end
% alternatively use x = linspace(0,2*pi,N)
f = cos(x');                   % set up function values
for j = 1:N
    V(:,j) = x.^(j-1);         % build the Vandermonde matrix column by column
end
c = V\f;                       % polynomial coefficients (backslash beats inv(V)*f)
% now make the polynomial
for j = 1:N
    cp(N+1-j) = c(j);          % polyval wants the highest-degree coefficient first
end
y = x';
P = polyval(cp, y)

courtesy of ceedeepee.wordpress.com

You can increase the number of points to achieve a more accurate approximation. However, the numerical solution of the linear system above for large n is very expensive and gives rise to stability problems. In Matlab, you will start getting the following warning:

Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 9.377592e-48.

And the approximation (in blue) vs. the function (f(x)=\cos(x) on [0,2 \pi], in red) would look like this:

The error of this interpolation is on the right, plotted against the number of points. As you increase the number of data points, the error increases tremendously, when you would expect it to decrease. The figures above use just 50 data points; with more points the error becomes even more vivid.

For this reason, a better option for polynomial interpolation is needed. Say you're trying to find an interpolation polynomial P_1(x) that agrees with the function f(x) at two points. The function can then be approximated by taking a linear combination of two polynomials of degree 1:

P_1(x)=L_{1,0}(x)f(x_0)+L_{1,1}(x)f(x_1)

where

L_{1,0}(x)=\frac{x_1 -x}{x_1 -x_0} and L_{1,1}(x)=\frac{x -x_0}{x_1 -x_0}

Hence L_{1,0}(x_0)=1 and L_{1,0}(x_1)=0, while L_{1,1}(x_0)=0 and L_{1,1}(x_1)=1, so P_1(x) reproduces f at both nodes and forms the secant line connecting them.

The above equation can be extended for n+1 points as the following:

P_n(x)= \sum_{k=0}^{n} {L_{n,k}}(x) f(x_k)

The index n denotes the degree of the polynomial, and k indexes the node: each value f(x_k) acts as the coefficient of the basis polynomial L_{n,k}(x) in P_n(x).

L_{n,k}(x)=\prod_{j=0, j \neq k}^n \frac{x-x_j}{x_k-x_j}

The Lagrange polynomials satisfy:

L_{n,k}(x_j) = \begin{cases} 1 & j = k\\ 0 & j \neq k \end{cases}
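A small Python sketch (illustrative only) evaluates P_n(x) directly from this definition, using the three points from the quadratic example:

def lagrange_eval(x_nodes, y_nodes, t):
    # P_n(t) = sum_k L_{n,k}(t) * f(x_k)
    total = 0.0
    n = len(x_nodes)
    for k in range(n):
        L = 1.0                       # build L_{n,k}(t) as a running product
        for j in range(n):
            if j != k:
                L *= (t - x_nodes[j]) / (x_nodes[k] - x_nodes[j])
        total += L * y_nodes[k]
    return total

x = [1.0, 2.0, 5.0]
y = [3.0, 12.0, 63.0]
print(lagrange_eval(x, y, 3.0))       # 25.0, since P_2(x) = -2 + 3x + 2x^2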

At equally spaced nodes, the same misbehavior happens with Lagrange interpolation as with the Vandermonde matrix approach. This problem is known as Runge's phenomenon.


Polynomial Interpolation: Function Approximation

December 10, 2009

http://en.wikipedia.org/wiki/Vandermonde_matrix

The search for polynomial fits to data

The polynomial interpolation most familiar to everyone is linear interpolation, which is also the simplest form: two points (x_0,y_0) and (x_1,y_1) are given, and we find the equation of the line that passes through them. In algebra, we use the slope equation and the slope-intercept form to find such a line:

m=\frac{y_1-y_0}{x_1-x_0}

y=mx+b

This model can be used to approximate f(x^*) for any x^* \in [x_0,x_1]; this is interpolation. When two points are given, we can build a linear model, a polynomial of degree one. For example, through (1,3) and (2,12) the slope is m = 9 and the line is y = 9x - 6. Given a set of n+1 points, we can build an interpolating polynomial of degree n.

An example of quadratic polynomial interpolation, P_2(x), would be the following:

Given the 3 points (1,3), (2,12), (5,63), it can be assumed that the interpolating polynomial takes this form:

P_2(x)=a_0 +a_{1}x+a_{2}x^2,

where the polynomial coefficients are a_0, a_1, a_2. These coefficients can then be solved for using the conditions:

P_{2}(1)=3, P_{2}(2)=12, P_{2}(5)=63

This can be written into:

a_0+a_{1}+a_{2}=3

a_0+2a_{1}+4a_{2}=12

a_0+5a_{1}+25a_{2}=63

We can transform this into a matrix and have Matlab solve it for us.

A= \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 5 & 25 \end{bmatrix}       x=\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}   b=\begin{bmatrix} 3 \\ 12 \\ 63 \end{bmatrix}

Solving for the coefficients (in Matlab, x = A\b) gives x = \begin{bmatrix} -2 \\ 3 \\2 \end{bmatrix}, thereby constructing P_2(x)=-2+3x+2x^2.
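The same solve in Python with NumPy (an equivalent sketch of Matlab's backslash):

import numpy as np

A = np.array([[1, 1, 1],
              [1, 2, 4],
              [1, 5, 25]], dtype=float)  # Vandermonde matrix of the 3 points
b = np.array([3.0, 12.0, 63.0])
coeffs = np.linalg.solve(A, b)           # solves A x = b
print(coeffs)                            # [-2.  3.  2.]  ->  P_2(x) = -2 + 3x + 2x^2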

Weierstrass Approximation Theorem:

If f(x) is continuous on [a,b], then for a given \epsilon >0 there exists a polynomial P(x) defined on [a,b] such that

|f(x)-P(x)| \leq \epsilon for all x\in [a,b]

This theorem says that a continuous function can be approximated arbitrarily well by a polynomial. There are many different ways to construct approximating polynomials: Taylor polynomials, Lagrange polynomials, and cubic spline interpolation are just a few.


Newton’s Method

October 29, 2009

Given a function f(x) whose derivative f'(x) is continuous near a root r, Newton's method can be used to produce a sequence \{p_k\} that converges to r faster than the bisection method. Newton's method is also a one-point method (i.e. it requires only one initial guess), whereas bisection is a two-point method.

Newton's method uses the tangent line at one point, the initial guess x_0, on the given function y=f(x). The equation of the tangent line at the initial guess is written as follows:

y-f(x_0)=f'(x_0)(x-x_0)

Newton’s Method Theorem:

Assume that f(x) \in C^2[a,b] and that there exists a root r \in [a,b] such that f(r)=0. The Newton's method sequence is defined by the following iteration:

\displaystyle x_{k+1}={x_k}-\frac{f(x_k)}{f'(x_k)} for k \geq 0

Here x_{k+1} is the x-intercept of the tangent line above. For the next iteration, the initial guess is replaced by the x-intercept obtained in the previous step. The iteration continues until the root r is found or the user is satisfied with the level of accuracy.

The Newton's method iteration, similar to fixed-point iteration, will converge to the root r for any initial guess sufficiently close to the root, provided f'(x) \neq 0 on [a,b].

Python Code for Newton's Method:

#Sidafa Conde
#Newton Method
#MTH 361
from math import *

def F(x):  # define the function here
    return pow(x, 3) + 2*x - 4

def F1(x):  # derivative of F
    return 3*pow(x, 2) + 2

def newtonmethod():
    p = []
    p.insert(0, 1)  # initial guess x_0
    for i in range(0, 10, 1):
        t = p[i]
        x = p[i] - F(t)/F1(t)  # Newton iteration: x_{k+1} = x_k - F(x_k)/F'(x_k)
        p.insert(i+1, x)
        print(p[i])

newtonmethod()

I tested this code to find \sqrt{2}=1.41421356237 by changing the function to f(x)=x^2-2, the initial guess to x_0=5, and the loop to 20 iterations. The first step gives x_1 = 5 - (5^2-2)/(2 \cdot 5) = 2.7, matching the output. Here is the program output:

5
2.7
1.72037037037
1.44145536818
1.41447098137
1.4142135858
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237
1.41421356237

