End-to-end principle
The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes that exist to establish the network, such as gateways and routers, may implement these features to improve efficiency but cannot guarantee end-to-end correctness.
The essence of what would later be called the end-to-end principle was contained in the work of Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s.[1] The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark.[2][a] The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation, and noteworthy formulations of it can be found before the seminal 1981 Saltzer, Reed, and Clark paper.[3]
A basic premise of the principle is that the payoffs from building into the communication subsystem features required by the end application diminish quickly: the end hosts have to implement these functions themselves to guarantee correctness.[b] Implementing a specific function incurs resource penalties whether or not the function is used, and implementing it in the network imposes those penalties on all clients, whether they need the function or not.
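A minimal sketch of this premise in Python follows (illustrative only; the function names and the in-process unreliable_channel stand-in are hypothetical, not from the cited papers). Even if every intermediate hop applied its own error checking, only the receiving endpoint can confirm that the data the application actually uses is intact, so the integrity check and the retry loop live at the ends.

```python
import hashlib
import random

def send_with_checksum(payload: bytes) -> tuple[bytes, str]:
    """Sender endpoint: pair the payload with an end-to-end digest."""
    return payload, hashlib.sha256(payload).hexdigest()

def unreliable_channel(payload: bytes) -> bytes:
    """Stand-in for the network: intermediaries may corrupt data even if
    each hop performs its own error checking."""
    if random.random() < 0.3:  # simulated corruption on some transfers
        return payload[:-1] + b"?"
    return payload

def receive_and_verify(payload: bytes, digest: str) -> bool:
    """Receiver endpoint: only the end node sees the data the application
    will actually use, so only it can confirm correctness."""
    return hashlib.sha256(payload).hexdigest() == digest

# The hosts, not the network, guarantee correct delivery by retrying
# until the end-to-end check passes.
data = b"example application data"
payload, digest = send_with_checksum(data)
while True:
    received = unreliable_channel(payload)
    if receive_and_verify(received, digest):
        break  # end-to-end check passed
```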
History
In the 1960s, Paul Baran and Donald Davies, in their pre-ARPANET elaborations of networking, made comments about reliability. Baran's 1964 paper states: "Reliability and raw error rates are secondary. The network must be built with the expectation of heavy damage anyway. Powerful error removal methods exist."[9]: 5 Going further, Davies captured the essence of the end-to-end principle; in his 1967 paper he stated that users of the network will provide themselves with error control: "It is thought that all users of the network will provide themselves with some kind of error control and that without difficulty this could be made to show up a missing packet. Because of this, loss of packets, if it is sufficiently rare, can be tolerated."[10]: 2.3
The ARPANET was the first large-scale general-purpose packet switching network – implementing several of the concepts previously articulated by Baran and Davies.[11][12]
Davies built a local-area network with a single packet switch and worked on the simulation of wide-area datagram networks.[13][14][15] Building on these ideas, and seeking to improve on the implementation in the ARPANET,[15] Louis Pouzin designed the CYCLADES network, the first to implement datagrams in a wide-area network and to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.[1] Concepts implemented in this network feature in the TCP/IP architecture.[16]
Limitations
The most important limitation of the end-to-end principle is that its basic premise, placing functions in the application endpoints rather than in the intermediary nodes, is not trivial to implement.
The limitations of the end-to-end principle are evident in mobile devices, for instance with mobile IPv6.[27] Pushing service-specific complexity to the endpoints can cause issues if a device has unreliable access to network channels.[28]
Network transparency is further reduced by network address translation (NAT), which IPv4 relies on to combat address exhaustion.[29] With the introduction of IPv6, users once again have unique identifiers, allowing for true end-to-end connectivity. Unique identifiers may be based on a physical (MAC) address or generated randomly by the host.[30]
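The sketch below illustrates the two kinds of interface identifier mentioned above, assuming the modified EUI-64 construction for MAC-derived identifiers and a random 64-bit value of the kind used by privacy extensions; the function names are hypothetical.

```python
import secrets

def eui64_interface_id(mac: str) -> bytes:
    """Derive a 64-bit interface identifier from a 48-bit MAC address
    (modified EUI-64): insert ff:fe in the middle and flip the
    universal/local bit of the first octet."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02  # toggle the universal/local bit
    return bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])

def random_interface_id() -> bytes:
    """Randomly generated 64-bit identifier, decoupled from the hardware address."""
    return secrets.token_bytes(8)

print(eui64_interface_id("00:1a:2b:3c:4d:5e").hex(":"))  # 02:1a:2b:ff:fe:3c:4d:5e
print(random_interface_id().hex(":"))
```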
The end-to-end principle advocates pushing coordination-related functionality ever higher, ultimately into the application layer. The premise is that application-level information enables flexible coordination between the application endpoints and yields better performance, because the coordination is exactly what the application needs. This leads to the idea of modeling each application via its own application-specific protocol that supports the desired coordination between its endpoints while assuming only a simple lower-layer communication service. Broadly, this idea is known as application semantics (meaning).
Research on multiagent systems offers approaches based on application semantics that make it possible to implement distributed applications conveniently, without requiring message ordering or delivery guarantees from the underlying communication services. A basic idea in these approaches is to model the coordination between application endpoints via an information protocol[31] and then implement the endpoints (agents) based on the protocol. Information protocols can be enacted over lossy, unordered communication services. A middleware based on information protocols and the associated programming model abstracts away message reception from the underlying network and enables endpoint programmers to focus on the business logic for sending messages.
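The sketch below is a simplified illustration of this programming model, not the cited middleware: each message carries the identifiers and parameters that give it meaning, so a hypothetical SellerAgent can integrate messages in whatever order (and with whatever duplication) a lossy, unordered transport delivers them, and its business logic fires once the required information is present.

```python
from dataclasses import dataclass, field

@dataclass
class SellerAgent:
    """Endpoint (agent) that acts on accumulated information rather than
    on the order in which messages happen to arrive."""
    known: dict = field(default_factory=dict)  # order id -> accumulated facts

    def receive(self, message: dict) -> None:
        """Integrate a message by its content; duplicates and reordering are harmless."""
        facts = self.known.setdefault(message["order_id"], {})
        facts.update(message)
        self.act(message["order_id"], facts)

    def act(self, order_id: str, facts: dict) -> None:
        """Fire the business logic once the needed facts are present."""
        if "item" in facts and "payment" in facts and "shipped" not in facts:
            facts["shipped"] = True
            print(f"ship {facts['item']} for order {order_id}")

agent = SellerAgent()
# The same outcome is reached even if these messages arrive reordered or duplicated.
agent.receive({"order_id": "o1", "payment": 25.0})
agent.receive({"order_id": "o1", "item": "book"})
```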