A root finding algorithm is a numerical method for finding a value x for a given function f such that f(x) = 0. This particular value of x is the root. There are many different methods of finding the roots of a function. Below are some of them.

**Bisection Method:**

The bisection method is a simple root finding method. It requires prior knowledge of the values of a continuous function f at the endpoints of a given interval [a, b] where the root exists. The function values at the endpoints must have different signs (i.e. f(a)*f(b) < 0). Using the Intermediate Value Theorem, the bisection method iterates and halves the interval, disregarding the half where no sign change occurs. It iterates until it converges to the root or the desired level of accuracy is reached.

Bisection Method Algorithm:

- Input the tolerance ε (i.e. the level of accuracy)
- Set [a, b] (the interval)
- Compute the midpoint c = (a + b)/2
- If f(c) = 0, then c is the root; stop
- If f(a)*f(c) < 0 (i.e. the function changes sign on [a, c]), replace b with c
- If f(c)*f(b) < 0 (i.e. the function changes sign on [c, b]), replace a with c
- If |b - a| > ε (i.e. the root is not found and the desired level of accuracy is not reached), repeat steps 3-6

The bisection method algorithm constructs a sequence {c_n} that converges to a root r of the given function, with the error bound |c_n - r| ≤ (b - a)/2^(n+1).
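The error bound can be used to pick the number of iterations in advance; a minimal sketch, assuming the example function x^2 - 4 on [0, 5] with tolerance 1e-6 (these are my illustrative choices):

```python
from math import log2, ceil

def bisect(f, a, b, n):
    """Run n bisection steps and return the final midpoint."""
    for _ in range(n):
        c = (a + b) / 2
        if f(a) * f(c) < 0:   # sign change in the first half
            b = c
        else:                 # sign change in the second half
            a = c
    return (a + b) / 2

f = lambda x: x**2 - 4        # root at x = 2
a, b, eps = 0.0, 5.0, 1e-6
# from |c_n - r| <= (b - a)/2^(n+1), about log2((b-a)/eps) steps suffice
n = ceil(log2((b - a) / eps))
print(n, abs(bisect(f, a, b, n) - 2) < eps)
```

The point is that the bound lets you decide the iteration count before running the loop, rather than checking the tolerance at every step.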

The following is a Python code to find the root of a given function using the bisection method:

```python
#Bisection Method
#Sidafa Conde
#MTH 361
from math import fabs

def F(x):
    #Define the function here
    f = pow(x, 2) - 4
    return f

def bisection():
    a = float(input("Left Interval: "))
    b = float(input("Right Interval: "))
    N = int(input("How many iterations? "))
    Fleft = F(a)    #computes the left function value
    Fright = F(b)   #computes the right function value
    if Fleft * Fright < 0:          #when a sign change occurs
        for i in range(0, N, 1):    #iteration
            c = (a + b) / 2         #computes the midpoint of the interval
            print(c)
            Fmid = F(c)             #midpoint function value
            Fleft = F(a)            #left endpoint function value
            Fright = F(b)           #right endpoint function value
            if Fleft * Fmid < 0:    #sign change in the first half of the interval
                b = c
            elif Fmid * Fright < 0: #sign change in the second half of the interval
                a = c
            if fabs(a - b) < 10**-10:
                print("root is:")
                print(c)
                break
```

This code takes the interval endpoints and a maximum number of iterations, and stops early once the interval is smaller than the built-in tolerance of 10^-10.

**Multiple Partitioning Method**

Many questions arise. If we can bisect the interval on which f is continuous, is it not better to do multiple partitions? To answer this question, one must define 'better'. Let k be the number of partitions. Is it better to do k partitions in terms of the number of iterations required? Is it better in terms of work?

Let n be the number of iterations and r denote the root.

|b - a| is the length of the original interval. After one iteration, the root lies in one of the k subintervals (i.e. the new interval where the sign change occurs), so the interval length becomes (b - a)/k. Combining this step by step, after n iterations we get:

|x_n - r| ≤ (b - a)/k^n.

From Calculus II, we know that the sequence converges when (b - a)/k^n → 0, which holds as n → ∞.

If we require (b - a)/k^n ≤ ε and solve for n, we get:

n ≥ log((b - a)/ε) / log(k).

By the property of the inequality above, as k increases, the number of iterations required to find the root decreases. Bisection (i.e. k = 2) requires more iterations to find the root of a given function than trisection (i.e. k = 3).

The graph above shows the number of iterations decreasing as the number of partitions increases.

Which method is better with respect to how much work is done (i.e. what is the cost)?

We know that each iteration requires k - 1 function evaluations (one at each interior partition point). If we multiply this by the number of iterations, we get the cost. With some algebra, work = (k - 1) * log((b - a)/ε) / log(k). By the property of this inequality, as the number of partitions increases (i.e. k → ∞), the method requires more work, i.e. more computation, to converge to a root.
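The iteration and work bounds are easy to tabulate; a quick sketch (the interval length 20 and tolerance 1e-15 are arbitrary example values):

```python
from math import log, ceil

b_minus_a, eps = 20.0, 1e-15   # example interval length and tolerance

for k in range(2, 7):
    # iterations needed: n >= log((b-a)/eps) / log(k)
    n = ceil(log(b_minus_a / eps) / log(k))
    # cost: (k-1) function evaluations per iteration
    work = (k - 1) * n
    print(k, n, work)
```

Running this shows the iteration count shrinking while the total number of function evaluations grows with k, which is exactly the trade-off described above.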

The following is the bisection method code modified into a multi-partition method:

```matlab
%To find the root of f(x)=0 in the interval (a,b) using n partitions
clear all, close all, clc
f = @(x) x.^3;  %given function
a = -10;        %left endpoint of the interval
b = 10;         %right endpoint of the interval
n = 4;          %number of partitions (2 for bisection, 3 for trisection, etc.)
ep = 1e-15;     %tolerance level
nmax = abs(log(ep/(b-a))/log(n)); %calculates the number of iterations needed
Nmax = ceil(nmax);                %number of iterations
ya = f(a); yb = f(b);  %function values at the endpoints of the given interval
if ya*yb > 0, error('no root in this interval'), end %if no sign change
for j = 1:Nmax
    x = linspace(a,b,n+1);  %divide the interval into n equal sections
    for i = 1:n
        if f(x(i))*f(x(i+1)) <= 0  %when a sign change occurs
            tempa = x(i);   %lower endpoint of the new interval
            tempb = x(i+1); %upper endpoint of the new interval
        end
    end
    a = tempa; b = tempb;
end
root = a
```

The partition methods (i.e. bisection, trisection, etc.) are guaranteed to find a root of a given function on an interval where the function changes sign.
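For readers following along in Python rather than MATLAB, roughly the same multi-partition idea can be sketched as follows (the function x^3 - 2 and the interval are my example choices, not from the original code):

```python
from math import log, ceil

def multipartition(f, a, b, k, eps=1e-12):
    """Find a root of f in [a, b] by repeatedly splitting into k subintervals."""
    nmax = ceil(log((b - a) / eps) / log(k))  # iterations needed for tolerance eps
    for _ in range(nmax):
        xs = [a + (b - a) * i / k for i in range(k + 1)]  # k+1 equally spaced points
        for i in range(k):
            if f(xs[i]) * f(xs[i + 1]) <= 0:  # sign change in this subinterval
                a, b = xs[i], xs[i + 1]       # shrink to that subinterval
                break
    return a

print(multipartition(lambda x: x**3 - 2, -10, 10, 4))  # root near 2**(1/3)
```

Setting k = 2 reduces this to ordinary bisection, matching the Python code earlier in the post.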

The above methods of root finding are also referred to as two-point methods. The following methods are one-point methods, requiring only one initial guess.

**Fixed-Point Iteration Method:**

A fixed point p of a given function g satisfies g(p) = p. Given a continuous function g on [a, b] with g(x) in [a, b] for all x in [a, b] (i.e. the function maps its domain onto itself), g has a fixed point in [a, b].

*Proof (fixed point)*

- If g(a) = a, the fixed point is a.
- If g(b) = b, the fixed point is b.
- Otherwise, g(a) > a and g(b) < b, so h(x) = g(x) - x satisfies h(a) > 0 and h(b) < 0. By the Intermediate Value Theorem, h has a zero in (a, b), i.e. g has a fixed point in (a, b).

When is the fixed point unique?

Given that the same conditions above are satisfied:

If g′ exists on (a, b) and there is a constant K < 1 such that |g′(x)| ≤ K for all x in (a, b), the fixed point is unique.

Proof: suppose that p and q are both fixed points of g with p ≠ q (i.e. g(p) = p and g(q) = q). By the Mean Value Theorem, |p - q| = |g(p) - g(q)| = |g′(ξ)||p - q| ≤ K|p - q| < |p - q|, where ξ is between p and q. This is a contradiction, so p = q: the fixed point is unique.

How do we find the fixed point?

It needs an initial point p_0; then compute p_{n+1} = g(p_n). This is the fixed point iteration.

Proof: let p be the fixed point (i.e. g(p) = p), and let p_1, p_2, … be the sequence of guesses. By the MVT, |p_n - p| = |g(p_{n-1}) - g(p)| = |g′(ξ)||p_{n-1} - p| ≤ K|p_{n-1} - p| ≤ … ≤ K^n|p_0 - p| → 0 as n → ∞,

where ξ is between p_{n-1} and p.

Figure: f(x) = cos(p*x) on [-pi/2, pi/2]. Courtesy of Andrew Perry's code.

The fixed point is the solution of the equation g(x) = x, i.e. the intersection of y = g(x) with y = x, as shown on the image above. The fixed point iteration algorithm generates the sequence p_{n+1} = g(p_n). If p_n → p, then p is the fixed point of the given function (i.e. g(p) = p).

I am trying to learn Python, so I have written a fixed point iteration code in Python as well.

```python
#Fixed Point Iteration Method
#Sidafa Conde
#MTH 361

def F(x):
    #Define the function here
    f = 8 / (x + 2)
    return f

def fixedpoint():
    p = []
    p.insert(0, 4)  #initial condition
    for i in range(0, 19, 1):
        t = p[i]
        x = F(t)
        p.insert(i + 1, x)
        print(p[i])
```

To find the root of a given function f, the problem can be rewritten as a fixed point iteration x = g(x) for some g, so that the fixed point of g becomes the root of f. A good choice of g is very important.

Example: say you wish to find the root of f(x) = x^3 - 2x - 5. You may use g1(x) = (x^3 - 5)/2 or g2(x) = (2x + 5)^(1/3) (just solving for x in two different ways). Using the code above (with F changed accordingly), you will get the following output with an initial condition of p_0 = 2:

g1(x) = (x^3 - 5)/2 does not converge:

2

1.5

-0.8125

-2.76818847656

-13.1061306749

-1128.12436634

-717861967.275

-1.84966397745e+26

-3.16408775755e+78

Whereas g2(x) = (2x + 5)^(1/3) will converge:

2

2.08008382305

2.0923506778

2.09421699601

2.09450065219

2.09454375753

2.09455030781

2.09455130318

2.09455145444

2.09455147742

2.09455148092

2.09455148145

2.09455148153

2.09455148154

The first choice, g1, is therefore not a good selection of g.
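The split between the two behaviors is exactly the |g′(p)| < 1 condition from the uniqueness result. Writing g1(x) = (x^3 - 5)/2 and g2(x) = (2x + 5)^(1/3) for the two iterations above, a quick check at the fixed point p ≈ 2.0946 (derivatives worked out by hand):

```python
p = 2.09455148154                      # the fixed point found above

g1prime = 3 * p**2 / 2                 # derivative of g1(x) = (x**3 - 5)/2
g2prime = 2 / (3 * (2*p + 5)**(2/3))   # derivative of g2(x) = (2*x + 5)**(1/3)

print(abs(g1prime))  # greater than 1: the g1 iteration pushes guesses away
print(abs(g2prime))  # less than 1: the g2 iteration pulls guesses in
```

So "a good g" concretely means one whose derivative has magnitude below 1 near the fixed point.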

**Newton Method:**

Given a function f whose derivative f′ is continuous near a root r, Newton's method can be used to produce a sequence {x_n} that converges faster than the bisection method. Newton's method is also a one-point method (i.e. it only requires one initial guess), whereas bisection is a two-point method.

Newton's method uses the tangent line at one point, the initial guess x_0, on the given function f. The equation of the tangent line at the initial guess is written as the following:

y = f(x_0) + f′(x_0)(x - x_0)

**Newton’s Method Theorem:**

Assume that f′ is continuous and there is a root r such that f(r) = 0 and f′(r) ≠ 0. The Newton's Method sequence is defined by the following iteration:

x_{n+1} = x_n - f(x_n)/f′(x_n) for n ≥ 0

Here x_{n+1} is the intersection of the tangent line above with the x-axis. For the next iteration, the initial guess is replaced by the x-intercept obtained in the previous calculation. This iteration continues until the root is found or the user is satisfied with the level of accuracy.

The Newton Method iteration, similar to the fixed point iteration, will converge to the root r for any initial guess sufficiently close to the root, provided f′(x) ≠ 0 near r.

Python Code for Newton Method:

```python
#Sidafa Conde
#Newton Method
#MTH 361

def F(x):
    #Define the function here
    f = pow(x, 3) + 2*x - 4
    return f

def F1(x):
    #derivative of F
    f = 3*pow(x, 2) + 2
    return f

def newtonmethod():
    p = []
    p.insert(0, 1)  #initial condition
    for i in range(0, 10, 1):
        t = p[i]
        x = p[i] - (F(t) / F1(t))  #Newton method formula
        p.insert(i + 1, x)
        print(p[i])
```

I will test this code to find sqrt(2) with 20 iterations (changing the loop bound in the code accordingly). I write the function as f(x) = x^2 - 2, so f′(x) = 2x, and use an initial guess of x_0 = 5. Here is the program output:

5

2.7

1.72037037037

1.44145536818

1.41447098137

1.4142135858

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

1.41421356237

There are times when Newton's Method's starting point enters a cycle. Let's take f(x) = x^3 - 2x + 2 and start it at x_0 = 0:

0
1.0
0.0
1.0
0.0
1.0
…

The iterates alternate between 0 and 1 forever and never converge to the root.
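The cycle can be reproduced in a few lines, assuming the classic example f(x) = x^3 - 2x + 2 with x_0 = 0 (which matches the alternating output above):

```python
def newton_step(x):
    f = x**3 - 2*x + 2     # f(x) = x^3 - 2x + 2
    fprime = 3*x**2 - 2    # f'(x) = 3x^2 - 2
    return x - f / fprime  # one Newton iteration

x = 0.0
for _ in range(6):
    print(x)               # alternates 0.0, 1.0, 0.0, 1.0, ...
    x = newton_step(x)
```

At x = 0 the tangent line lands on x = 1, and at x = 1 it lands back on x = 0, so the iteration is trapped; a different starting point breaks the cycle.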
