
Methods of computing square roots

Methods of computing square roots are algorithms for approximating the non-negative square root of a positive real number S. Since all square roots of natural numbers, other than of perfect squares, are irrational,[1] square roots can usually only be computed to some finite precision: these methods typically construct a sequence of increasingly accurate approximations.

Most square root computation methods are iterative: after choosing a suitable initial estimate of the square root, an iterative refinement is performed until some termination criterion is met. One refinement scheme is Heron's method, a special case of Newton's method. If division is much more costly than multiplication, it may be preferable to compute the inverse square root instead.
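As an illustration, Heron's refinement step (replace the estimate x by the average of x and S/x) can be sketched as follows; the function name, initial estimate, and stopping tolerance are illustrative choices, not part of the source:

```python
def heron_sqrt(S, tolerance=1e-12):
    """Approximate sqrt(S) by Heron's method (the Babylonian method).

    Each step replaces the estimate x with the average of x and S/x;
    once the estimate is close, the number of correct digits roughly
    doubles per iteration (quadratic convergence).
    """
    if S < 0:
        raise ValueError("S must be non-negative")
    if S == 0:
        return 0.0
    x = S if S >= 1 else 1.0          # crude but sufficient initial estimate
    while abs(x * x - S) > tolerance * S:
        x = 0.5 * (x + S / x)
    return x
```

For example, starting from x = 2 for S = 2, the iterates are 1.5, 1.4167, 1.41422, ..., approaching √2.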


Other methods are available to compute the square root digit by digit, or using Taylor series. Rational approximations of square roots may be calculated using continued fraction expansions.


The method employed depends on the needed accuracy, and the available tools and computational power. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), computational complexity of individual operations (e.g. division) or iterations, and error propagation (the accuracy of the final result).


A few methods, like paper-and-pencil synthetic division and series expansion, do not require a starting value. In some applications, an integer square root is required, which is the square root rounded or truncated to the nearest integer (a modified procedure may be employed in this case).
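One common modified procedure of the kind mentioned is Newton's method carried out entirely in integer arithmetic; a sketch (the function name and stopping rule are illustrative, not from the source):

```python
def integer_sqrt(n):
    """Integer square root: the largest x with x*x <= n.

    Newton's method on integers: once the iterate overshoots the true
    root, it decreases monotonically, so we stop when it stops shrinking.
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    if n < 2:
        return n
    x = n
    y = (x + n // x) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x
```

This is the truncating variant: for example it maps 99 to 9 and 100 to 10.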

History

Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Babylonian mathematicians calculated the square root of 2 to three sexagesimal "digits" after the 1, but it is not known exactly how. They knew how to approximate a hypotenuse using √(a² + b²) ≈ a + b²/(2a) (giving, for example, an approximate diagonal of a rectangular gate from its height and width in rods) and they may have used a similar approach for finding the approximation of √2.[2]


Heron's method, from first-century Egypt, was the first ascertainable algorithm for computing the square root.[3]


Modern analytic methods began to be developed after the introduction of the Arabic numeral system to western Europe in the early Renaissance.


Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures.

Digit-by-digit calculation

The digit-by-digit method has several advantages:

- It can be easier for manual calculations.
- Every digit of the root found is known to be correct, i.e., it does not have to be changed later.
- If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number.
- The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen.
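In base 2 the digit-by-digit idea reduces to deciding one bit of the root at a time, using only shifts, additions, and comparisons. A sketch for integers (the variable names are illustrative):

```python
def digit_by_digit_sqrt(n):
    """Binary digit-by-digit integer square root.

    One bit of the root is determined per step, from the highest bit
    down; each bit, once set, is final. Uses only shifts, adds and
    comparisons -- no multiplication or division.
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    bit = 1
    while bit * 4 <= n:
        bit <<= 2                     # highest power of 4 not exceeding n
    root = 0
    while bit:
        if n >= root + bit:
            n -= root + bit           # this bit belongs in the root
            root = (root >> 1) + bit
        else:
            root >>= 1                # this bit is 0
        bit >>= 2
    return root
```

For example, the routine finds the root of 144 as 12, one binary digit per loop iteration.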

Exponential identity

Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity √S = e^((ln S)/2), found using the properties of logarithms (ln xⁿ = n ln x) and exponentials (e^(ln x) = x). More generally, S^(1/n) = e^((ln S)/n): the denominator in the fraction corresponds to the nth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots with logarithm tables or slide rules.
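The identity √S = e^((ln S)/2) translates directly into code; a minimal sketch using the standard library's exponential and logarithm:

```python
import math

def sqrt_via_exp_log(S):
    """Compute sqrt(S) as exp((1/2) * ln(S)) -- the identity used by
    pocket calculators, logarithm tables and slide rules."""
    if S < 0:
        raise ValueError("S must be non-negative")
    if S == 0:
        return 0.0                     # ln(0) is undefined; handle separately
    return math.exp(0.5 * math.log(S))
```

Replacing the factor 1/2 by 1/n gives the corresponding nth-root computation.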

A two-variable iterative method

This method is applicable for finding the square root of 0 < S < 3 and converges best for S ≈ 1. This, however, is no real limitation for a computer-based calculation, as in base 2 floating-point and fixed-point representations, it is trivial to multiply S by an integer power of 4, and therefore √S by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, S can be moved to the range 1/2 ≤ S < 2. Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single variable iterative methods such as the Babylonian one.


The initialization step of this method is a_0 = S, c_0 = S − 1, while the iterative steps read a_(n+1) = a_n − a_n c_n / 2, c_(n+1) = c_n² (c_n − 3) / 4. Then a_n → √S (while c_n → 0).
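Assuming the initialization a_0 = S, c_0 = S − 1 and the update rules a ← a − a·c/2, c ← c²(c − 3)/4, the scheme can be sketched as follows (function name and iteration count are illustrative):

```python
def two_variable_sqrt(S, iterations=6):
    """Two-variable square-root iteration for 0 < S < 3.

    Maintains the invariant a**2 == S * (1 + c), so a tends to sqrt(S)
    as c tends to 0. Only multiplications, additions and divisions by
    powers of two are needed.
    """
    a = S
    c = S - 1.0
    for _ in range(iterations):
        a = a - a * c / 2.0
        c = c * c * (c - 3.0) / 4.0
    return a
```

For S = 2 the iterates of a are 1, 1.25, 1.3867, 1.4134, ..., homing in on √2 ≈ 1.41421.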


The convergence of c_n to 0, and therefore also of a_n to √S, is quadratic.


The proof of the method is rather easy. First, rewrite the iterative definition of c_n as 1 + c_(n+1) = (1 + c_n)(1 − c_n/2)². Then it is straightforward to prove by induction that a_n² = S(1 + c_n), and therefore the convergence of a_n to the desired result √S is ensured by the convergence of c_n to 0, which in turn follows from −1 < c_0 < 2.


This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill[8] for use on EDSAC, one of the first electronic computers.[9] The method was later generalized, allowing the computation of non-square roots.[10]

Applying Newton's method to the equation 1/x² − S = 0 produces a method that converges quadratically using three multiplications per step: x_(n+1) = x_n (3 − S x_n²) / 2.
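The iteration x ← x(3 − Sx²)/2 converges to 1/√S, and a final multiplication by S recovers √S itself, so no division is needed at all. A sketch (the initial estimate and iteration count are illustrative):

```python
def reciprocal_sqrt(S, x0, iterations=5):
    """Newton's method applied to f(x) = 1/x**2 - S.

    Each step x <- x * (3 - S*x*x) / 2 uses three multiplications
    (the halving can be a shift or exponent adjustment) and no
    division; it converges quadratically to 1/sqrt(S) when x0 is
    close enough to the root.
    """
    x = x0
    for _ in range(iterations):
        x = x * (3.0 - S * x * x) / 2.0
    return x

def sqrt_via_reciprocal(S, x0, iterations=5):
    """sqrt(S) = S * (1/sqrt(S)): one extra multiplication at the end."""
    return S * reciprocal_sqrt(S, x0, iterations)
```

The catch is the initial estimate: the iteration diverges if x0 is too far from 1/√S, which is why practical implementations seed it from the floating-point exponent or a small table.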

Another iteration is obtained by Halley's method, which is Householder's method of order two. This converges cubically, but involves five multiplications per iteration: y_n = S x_n², and x_(n+1) = x_n (15 − y_n (10 − 3 y_n)) / 8.

If doing fixed-point arithmetic, the multiplication by 3 and division by 8 can be implemented using shifts and adds. If using floating-point, Halley's method can be reduced to four multiplications per iteration by folding a precomputed scaling factor into the iterate and adjusting all the other constants to compensate.

Taylor series

If N is an approximation to √S, a better approximation can be found by using the Taylor series of the square root function: √S = √(N² + d) = N (1 + d/(2N²) − d²/(8N⁴) + ...), where d = S − N².


As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly. Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose N so that |d| = |S − N²| is as small as possible.
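A sketch of the truncated series, written generically so the number of terms can vary (the coefficient recurrence follows the binomial series for (1 + t)^(1/2); names are illustrative):

```python
def taylor_sqrt(S, N, terms=3):
    """Approximate sqrt(S) = sqrt(N**2 + d), with d = S - N**2, by the
    binomial/Taylor series N * (1 + d/(2N^2) - d^2/(8N^4) + ...).

    Accuracy improves as |d| shrinks, i.e. as N approaches sqrt(S).
    """
    d = S - N * N
    t = d / (N * N)        # the series is N * (1 + t)**0.5 expanded in t
    total, coeff, power = 0.0, 1.0, 1.0
    for k in range(terms):
        total += coeff * power
        power *= t
        coeff *= (0.5 - k) / (k + 1)   # next binomial coefficient C(1/2, k+1)
    return N * total
```

With terms=2 this reduces to N + d/(2N), i.e. one Babylonian step from N, matching the remark above.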

Negative or complex square

If S < 0, then its principal square root is √S = i √|S|.


If S = a + bi where a and b are real and b ≠ 0, then its principal square root is √S = √((|S| + a)/2) + i sgn(b) √((|S| − a)/2).


This can be verified by squaring the root.[20][21] Here |S| = √(a² + b²) is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part.
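The formula for the principal root of a + bi reduces the complex problem to two real square roots; a direct transcription (function name illustrative):

```python
def principal_sqrt(a, b):
    """Principal square root of S = a + b*i (b != 0), via
    sqrt(S) = sqrt((|S|+a)/2) + sign(b) * i * sqrt((|S|-a)/2),
    where |S| = sqrt(a*a + b*b) is the modulus of S."""
    modulus = (a * a + b * b) ** 0.5
    real = ((modulus + a) / 2.0) ** 0.5   # non-negative: principal branch
    imag = ((modulus - a) / 2.0) ** 0.5
    if b < 0:
        imag = -imag                       # sign of b picks the root's half-plane
    return complex(real, imag)
```

For example 3 + 4i has modulus 5, giving √((5+3)/2) + i√((5−3)/2) = 2 + i, and indeed (2 + i)² = 3 + 4i.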

See also

- Alpha max plus beta min algorithm
- nth root algorithm

External links

- Weisstein, Eric W. "Square root algorithms". MathWorld.
- Square roots by subtraction
- Integer Square Root Algorithm by Andrija Radović
- "Personal Calculator Algorithms I: Square Roots" (William E. Egbert), Hewlett-Packard Journal (May 1977), page 22
- Calculator to learn the square root