
Signed number representations

Not to be confused with Signed-digit representation.

In computing, signed number representations are required to encode negative numbers in binary number systems.

In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). In RAM or CPU registers, however, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are sign–magnitude, ones' complement, two's complement, and offset binary. Some alternative methods use an implicit rather than an explicit sign, such as negative binary, which uses base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes.
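
The contrast can be made concrete. The following Python sketch (an illustration of the definitions above; the decode helper is hypothetical, and offset binary is taken with an excess-8 bias) interprets a single 4-bit pattern under each of the four systems:

    # Interpret one 4-bit pattern under all four signed representations.
    def decode(bits, width=4):
        sign = bits >> (width - 1)                # high-order (sign) bit
        mag = bits & ((1 << (width - 1)) - 1)     # remaining magnitude bits
        return {
            "sign-magnitude": -mag if sign else mag,
            "ones' complement": bits - ((1 << width) - 1) if sign else bits,
            "two's complement": bits - (1 << width) if sign else bits,
            "offset binary": bits - (1 << (width - 1)),   # excess-8 bias
        }

    print(decode(0b1011))
    # {'sign-magnitude': -3, "ones' complement": -4, "two's complement": -5, 'offset binary': 3}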


There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement.

History

The early days of digital computing were marked by competing ideas about both hardware technology and mathematics (numbering systems). One of the great debates concerned the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, in which a negative value is formed by inverting all of the bits of its positive equivalent. A third group supported sign–magnitude, in which a value is changed from positive to negative simply by toggling the word's highest-order bit.


There were arguments for and against each of the systems. Sign–magnitude allowed easier tracing of memory dumps (a common process in the 1960s), as small numeric values use fewer 1 bits. Internally, these sign–magnitude machines did ones' complement math, so numbers had to be converted to ones' complement values when they were transmitted from a register to the math unit, and then converted back to sign–magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems – a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign–magnitude, with its 704, 709 and 709x series computers being perhaps the best-known systems to use it.


Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passing them to and from the math unit. But it shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result is the same whether the operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons when checking for equality with zero. Ones' complement subtraction can also result in an end-around borrow (described below). It can be argued that this makes the addition and subtraction logic more complicated, or that it makes it simpler, as a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The PDP-1, CDC 160 series, CDC 3000 series, CDC 6000 series, UNIVAC 1100 series, and LINC computer used ones' complement representation.
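
The following Python sketch (helper names are illustrative, not taken from any particular machine) shows 4-bit ones' complement arithmetic, including the second representation of zero and the end-around carry that wraps the adder's carry-out back into the low bit:

    # 4-bit ones' complement arithmetic.
    WIDTH = 4
    MASK = (1 << WIDTH) - 1            # 0b1111

    def ones_neg(x):
        return ~x & MASK               # negation is plain bit inversion

    def ones_add(a, b):
        s = a + b
        if s > MASK:                   # carry out of the high bit:
            s = (s & MASK) + 1         # wrap it around into the low bit
        return s & MASK

    def ones_sub(a, b):
        return ones_add(a, ones_neg(b))   # invert the second operand, then add

    print(bin(ones_neg(0b0000)))          # 0b1111: negative zero
    print(bin(ones_sub(0b0101, 0b0011)))  # 5 - 3 = 2 -> 0b10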


Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity.[1] Processors on the early mainframes often consisted of thousands of transistors, so eliminating a significant number of transistors was a substantial cost saving. Mainframes such as the IBM System/360, the GE-600 series,[2] and the PDP-6 and PDP-10 used two's complement, as did minicomputers such as the PDP-5 and PDP-8 and the PDP-11 and VAX machines. The architects of the early integrated-circuit-based CPUs (Intel 8080, etc.) also chose to use two's complement math. As IC technology advanced, two's complement was adopted in virtually all processors, including x86,[3] m68k, Power ISA,[4] MIPS, SPARC, ARM, Itanium, PA-RISC, and DEC Alpha.
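
A minimal Python sketch of the hardware argument (an illustration assuming an 8-bit width; the function names are ours): subtraction reduces to the same adder used for addition, with the second operand inverted and the carry-in set, and zero has a single representation:

    # Two's complement: one adder serves both addition and subtraction.
    WIDTH = 8
    MASK = (1 << WIDTH) - 1

    def sub(a, b):
        # a - b computed as a + NOT(b) + 1; no separate subtractor needed
        return (a + (~b & MASK) + 1) & MASK

    def as_signed(x):
        # interpret an 8-bit pattern as a signed value
        return x - (1 << WIDTH) if x >> (WIDTH - 1) else x

    print(as_signed(sub(5, 9)))   # -4
    print(as_signed(sub(7, 7)))   # 0: the single representation of zero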

Other systems

Google's Protocol Buffers "zig-zag encoding" is a system similar to sign–magnitude, but uses the least significant bit to represent the sign and has a single representation of zero. This allows a variable-length quantity encoding intended for nonnegative (unsigned) integers to be used efficiently for signed integers.[11]
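
The mapping can be sketched in a few lines of Python (following the encode formula given in the Protocol Buffers documentation for 32-bit values; Python integers are unbounded, so this matches sint32 only for in-range inputs):

    # Zig-zag: the sign moves into the least significant bit, so values
    # of small magnitude encode to small unsigned numbers.
    def zigzag_encode(n):
        return (n << 1) ^ (n >> 31)    # n >> 31 is an arithmetic shift

    def zigzag_decode(z):
        return (z >> 1) ^ -(z & 1)

    for n in (0, -1, 1, -2, 2):
        print(n, "->", zigzag_encode(n))   # 0->0, -1->1, 1->2, -2->3, 2->4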


A similar method is used in the Advanced Video Coding/H.264 and High Efficiency Video Coding/H.265 video compression standards to extend exponential-Golomb coding to negative numbers. In that extension, the least significant bit is almost a sign bit; zero has the same least significant bit (0) as all the negative numbers. This choice results in the largest magnitude representable positive number being one higher than the largest magnitude negative number, unlike in two's complement or the Protocol Buffers zig-zag encoding.
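
The value-to-code-number mapping (the se(v) mapping of the H.264 specification; the helper names here are illustrative) can be sketched as:

    # Signed Exp-Golomb: positive values take the odd code numbers;
    # zero and the negative values take the even ones (LSB 0).
    def se_map(v):
        return 2 * v - 1 if v > 0 else -2 * v

    def se_unmap(k):
        return (k + 1) // 2 if k % 2 else -(k // 2)

    for v in (0, 1, -1, 2, -2):
        print(v, "->", se_map(v))   # 0->0, 1->1, -1->2, 2->3, -2->4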


Another approach is to give each digit a sign, yielding the signed-digit representation. For instance, in 1726, John Colson advocated reducing expressions to "small numbers", numerals 1, 2, 3, 4, and 5. In 1840, Augustin Cauchy also expressed preference for such modified decimal numbers to reduce errors in computation.
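
A small Python sketch in this spirit (our own construction, not Colson's or Cauchy's exact procedure) rewrites a decimal integer using digits drawn from −5 to 5:

    # Balanced decimal: digits 6..9 become d-10 with a carry of 1 upward.
    def balanced_decimal(n):
        digits = []
        while n:
            d = n % 10
            n //= 10
            if d > 5:          # e.g. ...6 -> ...(carry 1)(-4)
                d -= 10
                n += 1
            digits.append(d)
        return digits[::-1] or [0]

    print(balanced_decimal(1726))   # [2, -3, 3, -4]: 2000 - 300 + 30 - 4 = 1726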

See also

Balanced ternary

Binary-coded decimal

Computer number format

Method of complements

Signedness

References

Ivan Flores, The Logic of Computer Arithmetic, Prentice-Hall (1963)

Israel Koren, Computer Arithmetic Algorithms, A.K. Peters (2002), ISBN 1-56881-160-8