
von Neumann architecture

The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on a 1945 description by John von Neumann, and by others, in the First Draft of a Report on the EDVAC.[1] The document describes a design architecture for an electronic digital computer with these components:

- A processing unit with both an arithmetic logic unit and processor registers
- A control unit that includes an instruction register and a program counter
- Memory that stores data and instructions
- External mass storage
- Input and output mechanisms[2]

See also: Stored-program computer and Universal Turing machine § Stored-program computer

The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system.[3]


The von Neumann architecture is simpler than the Harvard architecture (which has one dedicated set of address and data buses for reading and writing to memory and another set of address and data buses to fetch instructions).


A stored-program computer uses the same underlying mechanism to encode both program instructions and data, as opposed to designs that use a mechanism such as discrete plugboard wiring or fixed control circuitry for instruction implementation. Stored-program computers were an advancement over the manually reconfigured or fixed-function computers of the 1940s, such as the Colossus and the ENIAC. These were programmed by setting switches and inserting patch cables to route data and control signals between various functional units.


The vast majority of modern computers use the same hardware mechanism to encode and store both data and program instructions, but have caches between the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instruction and data fetches use separate buses (split-cache architecture).

History

The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot run a word processor or games. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as "designed" for a particular task. "Reprogramming"—when possible at all—was a laborious process that started with flowcharts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up and debug a program on ENIAC.[4]


With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, an instruction set, and can store in memory a set of instructions (a program) that details the computation.
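The stored-program idea is small enough to demonstrate directly. The following Python sketch simulates a toy von Neumann machine: a single memory array holds both the program and its data, and a fetch-decode-execute loop drives it. The instruction encoding (opcode × 100 + address) and the four opcodes are invented for illustration, not taken from any historical machine.

    # A toy von Neumann machine: one memory array holds both the program
    # and its data, using the same encoding for each.  An instruction
    # word is opcode * 100 + address; the opcodes are invented.
    LOAD, ADD, STORE, HALT = 1, 2, 3, 4

    def run(memory):
        acc = 0                               # accumulator register
        pc = 0                                # program counter
        while True:
            word = memory[pc]                 # fetch: an instruction is just a stored number
            opcode, addr = divmod(word, 100)  # decode
            pc += 1
            if opcode == LOAD:                # execute
                acc = memory[addr]
            elif opcode == ADD:
                acc += memory[addr]
            elif opcode == STORE:
                memory[addr] = acc
            elif opcode == HALT:
                return memory

    # Cells 0-3 hold the program; cells 10-12 hold the data.
    memory = [0] * 13
    memory[0] = LOAD * 100 + 10    # acc = mem[10]
    memory[1] = ADD * 100 + 11     # acc += mem[11]
    memory[2] = STORE * 100 + 12   # mem[12] = acc
    memory[3] = HALT * 100         # stop
    memory[10], memory[11] = 2, 3
    print(run(memory)[12])         # prints 5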


A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which operators had to do manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing.
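Reusing the toy machine sketched above (an illustration, not any historical program), the following program performs exactly that kind of address modification: it adds 1 to the word encoding one of its own LOAD instructions, so that by the time the patched instruction executes it reads the neighboring cell instead.

    # Self-modifying code on the toy machine above: the program patches
    # the address field of the LOAD instruction stored in cell 3 before
    # that instruction is reached.
    memory = [0] * 22
    memory[0] = LOAD * 100 + 3     # acc = the word encoding the instruction in cell 3
    memory[1] = ADD * 100 + 19     # acc += 1 (cell 19 holds the constant 1)
    memory[2] = STORE * 100 + 3    # write back: "LOAD 20" has become "LOAD 21"
    memory[3] = LOAD * 100 + 20    # rewritten by the instructions above before it runs
    memory[4] = STORE * 100 + 10   # save what was actually loaded
    memory[5] = HALT * 100
    memory[19] = 1                 # the increment, stored as ordinary data
    memory[20], memory[21] = 7, 99
    print(run(memory)[10])         # prints 99, not 7: the patched LOAD read cell 21

Because instructions and data share one memory and one encoding, patching an instruction is just an ordinary store.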

Capabilities

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. It makes "programs that write programs" possible.[5] This has made a sophisticated self-hosting computing ecosystem flourish around von Neumann architecture machines.
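On the same toy machine, a few lines of Python can serve as a miniature assembler: one program whose output is another program. The mnemonics and encoding below are the invented ones from the earlier sketch, and the example assumes the run() function defined there.

    # A miniature "program that writes programs": mnemonic source text is
    # ordinary data to this assembler, and its output is instruction
    # words for the simulated machine above.
    OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 4}

    def assemble(source):
        words = []
        for line in source.splitlines():
            parts = line.split()            # tolerate indentation and blank lines
            if not parts:
                continue
            opcode = OPCODES[parts[0]]
            addr = int(parts[1]) if len(parts) > 1 else 0
            words.append(opcode * 100 + addr)
        return words

    program = assemble("""
    LOAD 10
    ADD 11
    STORE 12
    HALT
    """)
    memory = program + [0] * 6 + [2, 3, 0]   # code, padding, data in cells 10-12
    print(run(memory)[12])                   # prints 5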


Some high-level languages leverage the von Neumann architecture by providing an abstract, machine-independent way to manipulate executable code at runtime (e.g., LISP), or by using runtime information to tune just-in-time compilation (e.g., languages hosted on the Java virtual machine, or languages embedded in web browsers).
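As a rough illustration (with Python standing in for LISP), source code can be assembled as an ordinary string at run time, compiled, and then called like any other function:

    # Code as data in a high-level language: build source text at run
    # time, compile it, and execute the result.
    source = "def square(x):\n    return x * x\n"
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    print(namespace["square"](7))   # prints 49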


On a smaller scale, some repetitive operations such as BITBLT or pixel and vertex shaders can be accelerated on general-purpose processors with just-in-time compilation techniques. This is one use of self-modifying code that has remained popular.
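A sketch of the technique in Python follows; the function names and the pixel operation are invented for illustration, not drawn from any real graphics library. Rather than an inner loop that re-reads its parameters for every pixel, a specialized function is generated with the constants baked in, compiled once, and reused:

    # Runtime code generation in the BITBLT spirit: specialize the inner
    # loop for fixed gain/offset values, then reuse the compiled result.
    def make_scaler(gain, offset):
        src = (
            "def kernel(pixels):\n"
            f"    return [p * {gain} + {offset} for p in pixels]\n"
        )
        namespace = {}
        exec(src, namespace)          # compile the specialized loop once
        return namespace["kernel"]

    brighten = make_scaler(2, 10)     # constants folded into the generated code
    print(brighten([1, 2, 3]))        # prints [12, 14, 16]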

Early von Neumann-architecture computers

The First Draft described a design that was used by many universities and corporations to construct their computers.[16] Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets.

- ARC2 (Birkbeck, University of London) officially came online on May 12, 1948.[17]
- Manchester Baby (Victoria University of Manchester, England) made its first successful run of a stored program on June 21, 1948.
- EDSAC (University of Cambridge, England) was the first practical stored-program electronic computer (May 1949).
- Manchester Mark 1 (University of Manchester, England), developed from the Baby (June 1949).
- CSIRAC (Council for Scientific and Industrial Research), Australia (November 1949).
- MESM at the Kiev Institute of Electrotechnology in Kiev, Ukrainian SSR (November 1950).
- EDVAC (Ballistic Research Laboratory, Computing Laboratory at Aberdeen Proving Ground, 1951).
- ORDVAC (University of Illinois) at Aberdeen Proving Ground, Maryland (completed November 1951).[18]
- MANIAC I at Los Alamos Scientific Laboratory (March 1952).
- IAS machine at the Institute for Advanced Study (June 1952).
- ILLIAC at the University of Illinois (September 1952).
- BESM-1 in Moscow (1952).
- AVIDAC at Argonne National Laboratory (1953).
- ORACLE at Oak Ridge National Laboratory (June 1953).
- BESK in Stockholm (1953).
- JOHNNIAC at RAND Corporation (January 1954).
- DASK in Denmark (1955).
- WEIZAC at the Weizmann Institute of Science in Rehovot, Israel (1955).
- PERM in Munich (1956).
- SILLIAC in Sydney (1956).

Early stored-program computers

The date information in the following chronology is difficult to put into proper order. Some dates are for first running a test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation.

- The IBM SSEC had the ability to treat instructions as data, and was publicly demonstrated on January 27, 1948. This ability was claimed in a US patent.[19][20] However, it was partially electromechanical, not fully electronic. In practice, instructions were read from paper tape due to its limited memory.[21]
- The ARC2, developed by Andrew Booth and Kathleen Booth at Birkbeck, University of London, officially came online on May 12, 1948.[17] It featured the first rotating drum storage device.[22][23]
- The Manchester Baby was the first fully electronic computer to run a stored program. It ran a factoring program for 52 minutes on June 21, 1948, after running a simple division program and a program to show that two numbers were relatively prime.
- The ENIAC was modified to run as a primitive read-only stored-program computer (using the Function Tables for program ROM) and was demonstrated as such on September 16, 1948, running a program by Adele Goldstine for von Neumann.
- The BINAC ran some test programs in February, March, and April 1949, although it was not completed until September 1949.
- The Manchester Mark 1 developed from the Baby project. An intermediate version of the Mark 1 was available to run programs in April 1949, but it was not completed until October 1949.
- The EDSAC ran its first program on May 6, 1949.
- The EDVAC was delivered in August 1949, but it had problems that kept it from being put into regular operation until 1951.
- The CSIR Mk I ran its first program in November 1949.
- The SEAC was demonstrated in April 1950.
- The Pilot ACE ran its first program on May 10, 1950, and was demonstrated in December 1950.
- The SWAC was completed in July 1950.
- The Whirlwind was completed in December 1950 and was in actual use in April 1951.
- The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was installed in December 1950.

Design limitations

Von Neumann bottleneck

The shared bus between the program memory and data memory leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between the central processing unit (CPU) and memory compared to the amount of memory. Because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continually forced to wait for needed data to move to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every new generation of CPU.
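A back-of-envelope calculation shows the shape of the problem; every number below is an assumed, illustrative value, not a measurement of any real system:

    # If each instruction needs an instruction fetch plus a data access
    # over one shared bus, the bus bandwidth, not the CPU, sets the pace.
    instructions_per_sec = 1e9     # assumed CPU issue rate
    bytes_per_instruction = 4 + 4  # assumed: 4-byte fetch + one 4-byte data word
    demand = instructions_per_sec * bytes_per_instruction

    bus_bandwidth = 3e9            # assumed shared-bus throughput, bytes/s
    effective = min(instructions_per_sec, bus_bandwidth / bytes_per_instruction)
    print(f"demand {demand:.0e} B/s; effective rate {effective:.2e} instr/s")

With these assumed numbers, the CPU could issue 10^9 instructions per second but the bus can sustain only about 3.75 × 10^8, so the processor spends most of its time waiting on memory.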


The von Neumann bottleneck was described by John Backus in his 1977 ACM Turing Award lecture. According to Backus:

    Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.

See also

- CARDboard Illustrative Aid to Computation
- Interconnect bottleneck
- Little man computer
- Random-access machine
- Harvard architecture
- Turing machine

External links

- Harvard vs von Neumann
- A tool that emulates the behavior of a von Neumann machine
- JOHNNY: A simple Open Source simulator of a von Neumann machine for educational purposes