
# Computational complexity of mathematical operations

Article Id: WHEBN0006497220

Title: Computational complexity of mathematical operations · Author: World Heritage Encyclopedia · Language: English · Publisher: World Heritage Encyclopedia


The following tables list the running time of various algorithms for common mathematical operations.

Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used.

Note: Due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm.

## Arithmetic functions

| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Addition | Two n-digit numbers | One (n+1)-digit number | Schoolbook addition with carry | Θ(n) |
| Subtraction | Two n-digit numbers | One (n+1)-digit number | Schoolbook subtraction with borrow | Θ(n) |
| Multiplication | Two n-digit numbers | One 2n-digit number | Schoolbook long multiplication | O(n^2) |
| | | | Karatsuba algorithm | O(n^1.585) |
| | | | 3-way Toom–Cook multiplication | O(n^1.465) |
| | | | k-way Toom–Cook multiplication | O(n^(log(2k − 1)/log k)) |
| | | | Mixed-level Toom–Cook (Knuth 4.3.3-T) | O(n · 2^(√(2 log n)) · log n) |
| | | | Schönhage–Strassen algorithm | O(n log n log log n) |
| | | | Fürer's algorithm | O(n log n · 2^(O(log* n))) |
| Division | Two n-digit numbers | One n-digit number | Schoolbook long division | O(n^2) |
| | | | Newton–Raphson division | O(M(n)) |
| Square root | One n-digit number | One n-digit number | Newton's method | O(M(n)) |
| Modular exponentiation | Two n-digit numbers and a k-bit exponent | One n-digit number | Repeated multiplication and reduction | O(M(n) · 2^k) |
| | | | Exponentiation by squaring | O(M(n) · k) |
| | | | Exponentiation with Montgomery reduction | O(M(n) · k) |
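As an illustration of the divide-and-conquer idea behind the sub-quadratic multiplication bounds in the table, here is a minimal Karatsuba sketch in Python. Splitting on decimal digits is chosen here for readability; real implementations split on machine words.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's method.

    Three recursive half-size products replace the four of schoolbook
    multiplication, giving the O(n^log2(3)) ~ O(n^1.585) running time.
    """
    if x < 10 or y < 10:                          # base case: single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2        # split point, in digits
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)                  # low halves
    z2 = karatsuba(high_x, high_y)                # high halves
    # cross terms recovered from one product of sums
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))  # 7006652
```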

## Algebraic functions

| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Polynomial evaluation | One polynomial of degree n with fixed-size coefficients | One fixed-size number | Direct evaluation | Θ(n) |
| | | | Horner's method | Θ(n) |
| Polynomial gcd (over Z[x] or F[x]) | Two polynomials of degree n with fixed-size coefficients | One polynomial of degree at most n | Euclidean algorithm | O(n^2) |
| | | | Fast Euclidean algorithm | O(M(n) log n) |
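Horner's method attains the Θ(n) bound with n multiplications and n additions; a short Python sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x by Horner's rule.

    `coeffs` lists coefficients from highest degree to the constant term,
    so [2, -6, 2, -1] means 2x^3 - 6x^2 + 2x - 1. One multiplication and
    one addition per coefficient: Theta(n) total.
    """
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # 5
```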

## Special functions

Many of the methods in this section are given in Borwein & Borwein.

### Elementary functions

The elementary functions are constructed by composing arithmetic operations, the exponential function (exp), the natural logarithm (log), trigonometric functions (sin, cos), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either exp or log in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions.
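A minimal sketch of this Newton-inversion idea at ordinary machine precision, computing log as the inverse of exp. The starting-guess heuristic below is an illustrative assumption, not part of any standard algorithm.

```python
import math

def log_via_newton(y: float, iterations: int = 6) -> float:
    """Invert exp with Newton's method to obtain log(y), for y > 0.

    Solving exp(x) = y, the Newton step is
        x <- x - (exp(x) - y) / exp(x) = x - 1 + y * exp(-x).
    Quadratic convergence doubles the number of correct digits per step,
    which is why an elementary function and its inverse have the same
    asymptotic complexity.
    """
    # crude starting guess (illustrative): halve small y, else count digits
    x = y / 2.0 if y < 4 else float(len(str(int(y))))
    for _ in range(iterations):
        x = x - 1.0 + y * math.exp(-x)
    return x

print(abs(log_via_newton(10.0) - math.log(10.0)) < 1e-10)  # True
```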

Below, the size n refers to the number of digits of precision at which the function is to be evaluated.

| Algorithm | Applicability | Complexity |
|---|---|---|
| Taylor series; repeated argument reduction (e.g. exp(2x) = [exp(x)]^2) and direct summation | exp, log, sin, cos, arctan | O(M(n) n^(1/2)) |
| Taylor series; FFT-based acceleration | exp, log, sin, cos, arctan | O(M(n) n^(1/3) (log n)^2) |
| Taylor series; binary splitting + bit-burst algorithm | exp, log, sin, cos, arctan | O(M(n) (log n)^2) |
| Arithmetic–geometric mean iteration | exp, log, sin, cos, arctan | O(M(n) log n) |

It is not known whether O(M(n) log n) is the optimal complexity for elementary functions. The best known lower bound is the trivial bound Ω(M(n)).

### Non-elementary functions

| Function | Input | Algorithm | Complexity |
|---|---|---|---|
| Gamma function | n-digit number | Series approximation of the incomplete gamma function | O(M(n) n^(1/2) (log n)^2) |
| | Fixed rational number | Hypergeometric series | O(M(n) (log n)^2) |
| | m/24, m an integer | Arithmetic–geometric mean iteration | O(M(n) log n) |
| Hypergeometric function pFq | n-digit number | (As described in Borwein & Borwein) | O(M(n) n^(1/2) (log n)^2) |
| | Fixed rational number | Hypergeometric series | O(M(n) (log n)^2) |

### Mathematical constants

This table gives the complexity of computing approximations to the given constants to n correct digits.
| Constant | Algorithm | Complexity |
|---|---|---|
| Golden ratio, φ | Newton's method | O(M(n)) |
| Square root of 2, √2 | Newton's method | O(M(n)) |
| Euler's number, e | Binary splitting of the Taylor series for the exponential function | O(M(n) log n) |
| | Newton inversion of the natural logarithm | O(M(n) log n) |
| Pi, π | Binary splitting of the arctan series in Machin's formula | O(M(n) (log n)^2) |
| | Salamin–Brent algorithm | O(M(n) log n) |
| Euler's constant, γ | Sweeney's method (approximation in terms of the exponential integral) | O(M(n) (log n)^2) |
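The Salamin–Brent (Gauss–Legendre) entry above can be sketched with Python's decimal module. The iteration count and the ten guard digits are illustrative choices, not tuned values.

```python
from decimal import Decimal, getcontext

def pi_gauss_legendre(digits: int) -> Decimal:
    """Approximate pi with the Salamin-Brent (Gauss-Legendre) AGM iteration.

    Each pass roughly doubles the number of correct digits, so about
    log2(digits) iterations suffice -- hence the O(M(n) log n) bound.
    """
    getcontext().prec = digits + 10              # ten guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / Decimal(4)
    p = Decimal(1)
    for _ in range(digits.bit_length()):         # ~log2(digits) passes
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2               # correction term
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(str(pi_gauss_legendre(30))[:17])  # 3.141592653589793
```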

## Number theory

Algorithms for number theoretical calculations are studied in computational number theory.

| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Greatest common divisor | Two n-digit numbers | One number with at most n digits | Euclidean algorithm | O(n^2) |
| | | | Binary GCD algorithm | O(n^2) |
| | | | Left/right k-ary binary GCD algorithm | O(n^2 / log n) |
| | | | Stehlé–Zimmermann algorithm | O(M(n) log n) |
| | | | Schönhage controlled Euclidean descent algorithm | O(M(n) log n) |
| Jacobi symbol | Two n-digit numbers | 0, −1, or 1 | Schönhage controlled Euclidean descent algorithm | O(M(n) log n) |
| | | | Stehlé–Zimmermann algorithm | O(M(n) log n) |
| Factorial | A fixed-size number m | One O(m log m)-digit number | Bottom-up multiplication | O(m^2 log m) |
| | | | Binary splitting | O(M(m log m) log m) |
| | | | Exponentiation of the prime factors of m | O(M(m log m) log log m), O(M(m log m)) |
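The binary GCD entry (Stein's algorithm) can be sketched in Python, replacing division with shifts and subtraction:

```python
def binary_gcd(u: int, v: int) -> int:
    """Stein's binary GCD for non-negative integers.

    Still O(n^2) on n-digit inputs, like the Euclidean algorithm, but
    each step uses only shifts, comparison, and subtraction.
    """
    if u == 0:
        return v
    if v == 0:
        return u
    # count the common factors of 2 shared by u and v
    shift = ((u | v) & -(u | v)).bit_length() - 1
    u >>= (u & -u).bit_length() - 1              # make u odd
    while v:
        v >>= (v & -v).bit_length() - 1          # make v odd
        if u > v:
            u, v = v, u                          # keep u <= v
        v -= u                                   # odd - odd is even
    return u << shift

print(binary_gcd(48, 180))  # 12
```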

## Matrix algebra

The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic.

| Operation | Input | Output | Algorithm | Complexity |
|---|---|---|---|---|
| Matrix multiplication | Two n×n matrices | One n×n matrix | Schoolbook matrix multiplication | O(n^3) |
| | | | Strassen algorithm | O(n^2.807) |
| | | | Optimized CW-like algorithms | O(n^2.373) |
| Matrix multiplication | One n×m matrix & one m×p matrix | One n×p matrix | Schoolbook matrix multiplication | O(nmp) |
| Matrix inversion | One n×n matrix | One n×n matrix | Gauss–Jordan elimination | O(n^3) |
| | | | Strassen algorithm^* | O(n^2.807) |
| | | | Optimized CW-like algorithms^* | O(n^2.373) |
| Determinant | One n×n matrix | One number | Laplace expansion | O(n!) |
| | | | LU decomposition | O(n^3) |
| | | | Bareiss algorithm | O(n^3) |
| | | | Fast matrix multiplication | O(n^2.373) |
| Back substitution | Triangular matrix | n solutions | Back substitution | O(n^2) |
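The back-substitution entry can be sketched directly, solving from the last row upward:

```python
def back_substitute(U, y):
    """Solve U x = y for an upper-triangular matrix U (nonzero diagonal).

    Row i needs about (n - i) multiplications, so the total work is
    n + (n-1) + ... + 1 = O(n^2) operations, matching the table.
    """
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):               # last row first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
print(back_substitute(U, [9.0, 13.0, 8.0]))  # [2.0, 3.0, 2.0]
```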

In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2.

^* Blockwise inversion reduces inverting an n×n matrix to inverting two half-sized matrices plus six multiplications of half-sized matrices. Since matrix multiplication has a lower bound of Ω(n^2 log n) operations, it can be shown that a divide-and-conquer algorithm using blockwise inversion runs with the same time complexity as the matrix multiplication algorithm used internally.