
Algorithms and Integer Multiplication

Pi19404
July 28, 2013

Contents

0.1 Introduction
0.2 Recursive method for Integer Multiplication
0.3 Karatsuba Multiplication / Gauss Trick
    0.3.1 Implementation

0.1 Introduction
In this article we will look at a few algorithms for integer multiplication.

0.2 Recursive method for Integer Multiplication


The common method for multiplication of two N-digit integers takes about N^2 multiplications and N^2 additions, i.e. roughly N^2 basic operations. Consider the multiplication of two large integers X and Y, each having N digits. Assume we have a function whose inputs are the two N-digit numbers X and Y and whose output is the product X*Y. X and Y can be expressed as

X = 10^(N/2) * a + b    (1)
Y = 10^(N/2) * c + d    (2)

where a, b, c, d have N/2 digits each; for example, if X and Y are 2-digit numbers then a, b, c, d are single digits.
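The split in equations (1) and (2) can be computed directly with integer division. A minimal sketch in Haskell (the language used later for the implementation), assuming non-negative inputs; `splitHalves` is a hypothetical helper name used here only for illustration:

```haskell
-- Split a non-negative integer x into (a, b) such that
-- x = 10^m * a + b, with m = N/2 for an N-digit x.
-- splitHalves is an illustrative helper, not from the linked code.
splitHalves :: Int -> Integer -> (Integer, Integer)
splitHalves m x = x `divMod` (10 ^ m)
```

For example, `splitHalves 2 1234` gives `(12, 34)`.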

We assume an even number of digits for simplicity. We are required to compute the product

X * Y = (10^(N/2) * a + b) * (10^(N/2) * c + d)    (3)

If we compute it using the classical approach we get the result as

X * Y = 10^N * (a*c) + 10^(N/2) * (a*d + b*c) + (b*d)    (4)

All the quantities being multiplied have N/2 digits, hence their products can also be computed using the same approach.

Thus the idea is to recursively compute a*c, a*d, b*c, b*d, each of which is a smaller subproblem: a multiplication of two N/2-digit numbers. We compute the products and add them with suitable shifts to get the result.

We need to evaluate how this algorithm compares with the classical algorithm for multiplication. Let T(N) denote the worst-case running time of the algorithm for N-digit numbers. Since the function is recursive, T(N) is expressed in terms of the running times of the recursive calls. For the base case we assume the simplest bound, T(1) <= constant. Apart from the recursive calls we also perform some addition operations, which take O(N) time:

T(N) <= 4 * T(N/2) + O(N)    (5)

There are 4 multiplications of size N/2; thus a problem of size N is divided into 4 subproblems of size N/2, and the results are combined by adding the suitably shifted partial products. Addition of two N-digit numbers takes approximately linear time. The computation required is therefore T(N) <= 4*T(N/2) + O(N), which still needs to be expressed as a closed form in N. This is done using the master method. In general the computation required can be expressed as
T(N) <= a * T(N/b) + O(N^d)
T(N) = O(1) for small N

where

- a is the number of recursive calls
- b is the factor by which the input size shrinks (b > 1)
- d is the exponent of the work performed apart from the recursive calls, O(N^d)

The input to the master method is a recurrence relation, and the master method tells us an upper bound on the running time of the algorithm. In general it depends on the parameters a, b, d:

T(N) =
  O(N^d * log N)      if a = b^d
  O(N^d)              if a < b^d
  O(N^(log_b a))      if a > b^d

Using this we can compute T(N): here a = 4, b = 2, d = 1, so a > b^d and

T(N) = O(N^(log_2 4)) = O(N^2)

Thus the computational complexity of this method is the same as that of the classical method.
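The four-multiplication recursive scheme above can be sketched in Haskell. This is a minimal illustrative version assuming non-negative Integer inputs; `recMul` and `numDigits` are names chosen here, not taken from the linked code:

```haskell
-- Recursive integer multiplication with four recursive calls,
-- following X*Y = 10^N*(a*c) + 10^(N/2)*(a*d + b*c) + b*d.
recMul :: Integer -> Integer -> Integer
recMul x y
  | x < 10 || y < 10 = x * y                           -- base case: single-digit factor
  | otherwise = ac * 10 ^ (2 * m) + (ad + bc) * 10 ^ m + bd
  where
    m      = max (numDigits x) (numDigits y) `div` 2   -- split point, roughly N/2
    (a, b) = x `divMod` (10 ^ m)                       -- x = 10^m * a + b
    (c, d) = y `divMod` (10 ^ m)                       -- y = 10^m * c + d
    ac = recMul a c
    ad = recMul a d
    bc = recMul b c
    bd = recMul b d

-- Number of base-10 digits of a non-negative integer.
numDigits :: Integer -> Int
numDigits = length . show
```

For example, `recMul 1234 5678` gives `7006652`.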

0.3 Karatsuba Multiplication / Gauss Trick


This provides a small improvement in comparison with the recursive multiplication technique. Another thing to notice about the multiplication

X * Y = 10^N * (a*c) + 10^(N/2) * (a*d + b*c) + (b*d)    (6)

is that it requires only the products (a*c), (a*d + b*c) and (b*d); i.e. it does not require the quantities a*d and b*c explicitly, just their sum. If we consider the multiplication

(a + b) * (c + d) = a*c + b*d + (a*d + b*c)

then

(a*d + b*c) = (a + b) * (c + d) - a*c - b*d

Instead of 4 multiplications we can compute the required components using 3 multiplications (a*c, b*d and (a+b)*(c+d)) and 3 addition/subtraction operations. In this case we have divided the N-digit problem into 3 subproblems of N/2 digits. The computation required can be expressed as

T(N) <= 3 * T(N/2) + O(N)

which again needs to be expressed in terms of N. Using the master method we compute: a = 3, b = 2, d = 1, so a > b^d and

T(N) = O(N^(log_2 3)) ~ O(N^1.58)

This method provides a complexity less than O(N^2). The base algorithm takes time of the form M(N) = a*N^2 + b*N + c, while one level of the new algorithm takes 3*M(N/2) + d*N + e, which comes out to (3/4)*a*N^2 + (3/2)*b*N + d*N + e + 3*c. Thus the quadratic (N^2) term is reduced to 3/4 of its cost at each level of recursion, at the price of a somewhat larger linear (N) term.

0.3.1 Implementation
I will be implementing all the algorithms using Haskell. Since recursion is the basic looping mechanism in Haskell, it fits naturally within the recursive structure of divide-and-conquer algorithms. The code can be found at https://github.com/pi19404/m19404/blob/master/Algorithm/multiplication/mul1.hs
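As a sketch of the idea (an illustrative version, not the code from the linked repository), a minimal Karatsuba multiplication in Haskell for non-negative Integers might look like:

```haskell
-- Sketch of Karatsuba multiplication: three recursive multiplications
-- (a*c, b*d and (a+b)*(c+d)) instead of four.
karatsuba :: Integer -> Integer -> Integer
karatsuba x y
  | x < 10 || y < 10 = x * y                           -- base case: single-digit factor
  | otherwise = ac * 10 ^ (2 * m) + mid * 10 ^ m + bd
  where
    m      = max (numDigits x) (numDigits y) `div` 2   -- split point, roughly N/2
    (a, b) = x `divMod` (10 ^ m)                       -- x = 10^m * a + b
    (c, d) = y `divMod` (10 ^ m)                       -- y = 10^m * c + d
    ac     = karatsuba a c
    bd     = karatsuba b d
    mid    = karatsuba (a + b) (c + d) - ac - bd       -- equals a*d + b*c

-- Number of base-10 digits of a non-negative integer.
numDigits :: Integer -> Int
numDigits = length . show
```

The Gauss trick appears in the `mid` term: one recursive call on (a+b) and (c+d) plus two subtractions replaces the two separate calls for a*d and b*c.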

