Binary protocol

A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. The protocol defines the rules, syntax, semantics and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.

Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: ''protocols are to communication what programming languages are to computations''. (Comer 2000, Sect. 11.2 - The Need For Multiple Protocols, p. 177, "They (protocols) are to communication what programming languages are to computation.") An alternate formulation states that ''protocols are to communication what algorithms are to computation''. (Comer 2000, Sect. 1.3 - Internet Services, p. 3, "Protocols are to communication what algorithms are to computation.")

Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack.

Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE (Institute of Electrical and Electronics Engineers) handles wired and wireless networking and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, the standards are also being driven towards convergence.


Communicating systems


History

One of the first uses of the term ''protocol'' in a data-communication context occurs in a memorandum entitled ''A Protocol for Use in the NPL Data Communications Network'' written by Roger Scantlebury and Keith Bartlett in April 1967.

On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The Network Control Protocol (NCP) for the ARPANET was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the ''protocol layering'' concept.

Networking research in the early 1970s by Robert E. Kahn and Vint Cerf led to the formulation of the Transmission Control Program (TCP). Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time. The International Networking Working Group agreed on a connectionless datagram standard which was presented to the CCITT in 1975 but was not adopted by the ITU or by the ARPANET. International research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits, by the ITU-T in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet and Xerox Network Systems (XNS).

TCP software was redesigned as a modular protocol stack. Originally referred to as ''IP/TCP'', it was installed on SATNET in 1982 and on the ARPANET in January 1983. The development of a complete protocol suite by 1989 laid the foundation for the growth of TCP/IP as a comprehensive protocol suite, the core component of the emerging Internet.

International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.


Concept

The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of communication, the actual data exchanged and any state-dependent behaviors are defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations.

Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself. (Ben-Ari 1982, chapter 2 - The concurrent programming abstraction, p. 18-19, states the same; Section 2.7 - Summary, p. 27, summarizes the concurrent programming abstraction.) In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems.

To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model.

At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design. (Sect. 11.10 - The Disadvantage Of Layering, p. 192, states: layering forms the basis for protocol design.)

Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite. (Comer 2000, Sect. 11.2 - The Need For Multiple Protocols, p. 177, states the same.) Some of the best-known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk.

The protocols can be arranged based on functionality in groups, for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions. (Comer 2000, Sect. 11.3 - The Conceptual Layers Of Protocol Software, p. 178, "Each layer takes responsibility for handling one part of the problem.") To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer.
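The selector mechanism can be illustrated with a short, hypothetical sketch (in Python): each header carries a selector value, and the receiver uses that value to pick the protocol module that should handle the rest of the message, much as an EtherType, IP protocol number or TCP port does in real stacks. The header layout and selector values below are invented for illustration and are not taken from any standard.

    # Demultiplexing by protocol selector: the first header byte names the
    # next protocol; the remainder of the packet is handed to that handler.
    UPPER_PROTOCOLS = {
        6: "tcp-like transport",
        17: "udp-like transport",
    }

    def handle_packet(packet: bytes) -> str:
        selector, _hop_limit = packet[0], packet[1]   # hypothetical 2-byte header
        payload = packet[2:]
        upper = UPPER_PROTOCOLS.get(selector, "unknown protocol")
        return f"deliver {len(payload)} bytes to {upper}"

    print(handle_packet(bytes([6, 64]) + b"hello"))   # deliver 5 bytes to tcp-like transport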


Types

There are two types of communication protocols, based on their representation of the content being carried: text-based and binary.


Text-based

A text-based protocol or plain text protocol represents its content in human-readable format, often in plain text. The immediate human readability stands in contrast to binary protocols which have inherent benefits for use in a computer environment (such as ease of mechanical parsing and improved bandwidth utilization).

Network applications have various methods of encapsulating data. One method very common with Internet protocols is a text-oriented representation that transmits requests and responses as lines of ASCII text, terminated by a newline character (and usually a carriage return character). Examples of protocols that use plain, human-readable text for their commands are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), and the finger protocol.

Text-based protocols are typically optimized for human parsing and interpretation and are therefore suitable whenever human inspection of protocol contents is required, such as during debugging and during early protocol development design phases. To be clear, all digital communication is fundamentally binary; the text-based protocols mentioned here are also carried as binary data, which is simply rendered human-readable by a text editor (or other such software).


Binary

A binary protocol utilizes all values of a byte, as opposed to a text-based protocol which only uses values corresponding to human-readable characters in ASCII encoding. Binary protocols are intended to be read by a machine rather than a human being. Binary protocols have the advantage of terseness, which translates into speed of transmission and interpretation. Binary formats have been used in the normative documents describing modern standards such as ebXML, HTTP/2, HTTP/3 and EDOC. An interface in UML may also be considered a binary protocol.
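A minimal sketch of a binary encoding, assuming an invented message layout (one byte each for version and message type, a two-byte big-endian length, then the payload), might look as follows in Python. Real binary protocols differ in their field layouts, but the pack/unpack pattern is typical.

    import struct

    HEADER = struct.Struct("!BBH")   # version, message type, payload length (network byte order)

    def encode(version: int, msg_type: int, payload: bytes) -> bytes:
        return HEADER.pack(version, msg_type, len(payload)) + payload

    def decode(message: bytes) -> tuple[int, int, bytes]:
        version, msg_type, length = HEADER.unpack_from(message)
        return version, msg_type, message[HEADER.size:HEADER.size + length]

    wire = encode(1, 2, b"\x00\xff\x10")
    print(wire.hex())    # '0102000300ff10' - compact, machine-oriented representation
    print(decode(wire))  # (1, 2, b'\x00\xff\x10')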


Basic requirements

Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. These kinds of rules are said to express the ''syntax'' of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the ''semantics'' of the communication. Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed (a small framing sketch after this list illustrates several of these points):

* ''Data formats for data exchange'': Digital message bitstrings are exchanged. The bitstrings are divided into fields and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts called the ''header'' and the ''payload''. The actual message is carried in the payload. The header area contains the fields with relevance to the operation of the protocol. Bitstrings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size.
* ''Address formats for data exchange'': Addresses are used to identify both the sender and the intended receiver(s). The addresses are carried in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are of interest and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair ''(sender address, receiver address)''. Usually, some address values have special meanings. An all-''1''s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address value are collectively called an ''addressing scheme''.
* ''Address mapping'': Sometimes protocols need to map addresses of one scheme onto addresses of another scheme. For instance, to translate a logical IP address specified by the application to an Ethernet MAC address. This is referred to as ''address mapping''.
* ''Routing'': When systems are not directly connected, intermediary systems along the ''route'' to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. The interconnection of networks through routers is called ''internetworking''.
* ''Detection of transmission errors'': Error detection is necessary on networks where data corruption is possible. In a common approach, a CRC of the data area is added to the end of packets, making it possible for the receiver to detect differences caused by corruption. The receiver rejects the packets on CRC differences and arranges somehow for retransmission.
* ''Acknowledgements'': Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgments are sent from receivers back to their respective senders.
* ''Loss of information - timeouts and retries'': Packets may be lost on the network or be delayed in transit. To cope with this, under some protocols, a sender may expect an acknowledgment of correct reception from the receiver within a certain amount of time. Thus, on timeouts, the sender may need to retransmit the information. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error.
* ''Direction of information flow'': Direction needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links, or from one sender at a time, as on a shared medium. This is known as media access control. Arrangements have to be made to accommodate the case of collision or contention where two parties simultaneously transmit or wish to transmit.
* ''Sequence control'': If long bitstrings are divided into pieces and then sent on the network individually, the pieces may get lost or delayed or, on some types of networks, take different routes to their destination. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for necessary retransmissions and reassemble the original message.
* ''Flow control'': Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender.
* ''Queueing'': Communicating processes or state machines employ queues (or "buffers"), usually FIFO queues, to deal with the messages in the order sent, and may sometimes have multiple queues with different prioritization.
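The framing sketch referred to above follows, in Python. It combines several of the listed concerns in one deliberately simplified, non-standard packet format: a header carrying sender and receiver addresses and a sequence number, a payload, and a CRC-32 appended so the receiver can detect corruption and reject the packet (the timeouts, acknowledgements and retransmissions that would follow are not shown).

    import struct
    import zlib

    HEADER = struct.Struct("!HHIH")   # sender, receiver, sequence number, payload length

    def build_packet(sender: int, receiver: int, seq: int, payload: bytes) -> bytes:
        body = HEADER.pack(sender, receiver, seq, len(payload)) + payload
        return body + struct.pack("!I", zlib.crc32(body))   # CRC over header + payload

    def check_packet(packet: bytes):
        body, received_crc = packet[:-4], struct.unpack("!I", packet[-4:])[0]
        if zlib.crc32(body) != received_crc:
            return None                                      # corrupted: reject the packet
        sender, receiver, seq, length = HEADER.unpack_from(body)
        return sender, receiver, seq, body[HEADER.size:HEADER.size + length]

    packet = build_packet(sender=1, receiver=0xFFFF, seq=7, payload=b"data")   # 0xFFFF as an all-1s broadcast address
    print(check_packet(packet))                        # (1, 65535, 7, b'data')
    corrupted = packet[:12] + b"X" + packet[13:]       # flip one payload byte in transit
    print(check_packet(corrupted))                     # None: CRC mismatch detected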


Protocol design

Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework.

Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages of communication in proper sequencing. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP). Concurrency can also be modeled using finite-state machines, such as Mealy and Moore machines. Mealy and Moore machines are in use as design tools in digital electronics systems, which are encountered in the form of hardware used in telecommunication or electronic devices in general.

The literature presents numerous analogies between computer communication and programming. In analogy, a transfer mechanism of a protocol is comparable to a central processing unit (CPU). The framework introduces rules that allow the programmer to design cooperating protocols independently of one another.


Layering

In modern protocol design, protocols are layered to form a protocol stack. Layering is a design principle that divides the protocol design task into smaller steps, each of which accomplishes a specific part, interacting with the other parts of the protocol only in a small number of well-defined ways. Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple.

The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite.

The OSI model was developed internationally, based on experience with networks that predated the Internet, as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering.

Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. Layering provides opportunities to exchange technologies when needed, for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks. For example, IP may be tunneled across an Asynchronous Transfer Mode (ATM) network.
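Tunneling can be sketched in the same spirit: a packet that is already complete in one protocol is carried, unchanged, as the payload of another protocol and unwrapped at the far end of the tunnel. The two-byte carrier header below is invented for illustration; it merely stands in for whatever encapsulation the carrier network (ATM, for example) actually uses.

    CARRIER_HEADER = b"\x00\x2a"   # hypothetical 2-byte header of the carrier protocol

    def enter_tunnel(inner_packet: bytes) -> bytes:
        # The inner packet (e.g. a complete IP datagram) becomes opaque payload.
        return CARRIER_HEADER + inner_packet

    def exit_tunnel(carrier_frame: bytes) -> bytes:
        # At the tunnel endpoint the carrier header is stripped and the inner
        # packet continues on its way, untouched.
        return carrier_frame[len(CARRIER_HEADER):]

    ip_like_packet = b"\x45\x00\x00\x14" + b"rest-of-datagram"
    frame = enter_tunnel(ip_like_packet)
    assert exit_tunnel(frame) == ip_like_packet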


Protocol layering

Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols. (Comer 2000, Sect. 11.2 - The Need For Multiple Protocols, p. 177, introduces the decomposition in layers.) The protocol layers each solve a distinct class of communication problems. Together, the layers make up a layering scheme or model.

Computations deal with algorithms and data; communication involves protocols and messages; so the analog of a data flow diagram is some kind of message flow diagram. To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems, A and B, both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules, and data formats specified by protocols. The blue lines mark the boundaries of the (horizontal) protocol layers.


Software layering

The software supporting protocols has a layered organization and its relationship with protocol layering is shown in figure 5. To send a message on system A, the top-layer software module interacts with the module directly below it and hands over the message to be encapsulated. The lower module fills in the header data in accordance with the protocol it implements and interacts with the bottom module which sends the message over the communications channel to the bottom module of system B. On the receiving system B the reverse happens, so ultimately the message gets delivered in its original form to the top module of system B.

Program translation is divided into subproblems. As a result, the translation software is layered as well, allowing the software layers to be designed independently. The same approach can be seen in the TCP/IP layering. (Comer 2000, Sect. 11.2 - The need for multiple protocols, p. 178, explains the similarities between protocol software and a compiler, assembler, linker, and loader.)

The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is called the operating system boundary.
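The encapsulation and decapsulation just described can be sketched in a few lines of Python. The layer names and bracketed headers are purely illustrative stand-ins for the real header formats each module would add; the point is only that headers are added top-down on the sending system and removed bottom-up on the receiving system, so the original message is delivered intact.

    LAYERS = ["application", "transport", "network", "link"]

    def send_down(message: bytes) -> bytes:
        # Each module prepends its own header before handing the data down.
        for layer in LAYERS:
            message = f"[{layer}]".encode("ascii") + message
        return message                        # what goes onto the communications channel

    def receive_up(frame: bytes) -> bytes:
        # Each module strips the header added by its peer and hands the rest up.
        for layer in reversed(LAYERS):
            header = f"[{layer}]".encode("ascii")
            if not frame.startswith(header):
                raise ValueError(f"missing {layer} header")
            frame = frame[len(header):]
        return frame

    wire = send_down(b"hello")
    print(wire)               # b'[link][network][transport][application]hello'
    print(receive_up(wire))   # b'hello' - delivered in its original form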


Strict layering

Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking. Strict layering can have a negative impact on the performance of an implementation. While the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers because abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis.


Design patterns

Commonly recurring problems in the design and implementation of communication protocols can be addressed by software design patterns.


Formal specification

Popular formal methods of describing communication syntax are Abstract Syntax Notation One (an ISO standard) and augmented Backus–Naur form (an IETF standard).

Finite-state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol.
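As a toy illustration of a finite-state machine description, the table-driven Python sketch below models a small connection-oriented endpoint. The states, events and transitions are invented (loosely reminiscent of, but far simpler than, the TCP state diagram); a real specification would enumerate every state/event pair, including error handling and timeouts.

    TRANSITIONS = {
        ("CLOSED",      "open"):    "SYN_SENT",
        ("SYN_SENT",    "syn_ack"): "ESTABLISHED",
        ("ESTABLISHED", "close"):   "FIN_WAIT",
        ("FIN_WAIT",    "ack"):     "CLOSED",
    }

    def step(state: str, event: str) -> str:
        # Any event not listed for the current state is a protocol violation.
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        return TRANSITIONS[(state, event)]

    state = "CLOSED"
    for event in ("open", "syn_ack", "close", "ack"):
        state = step(state, event)
        print(event, "->", state)   # open -> SYN_SENT, syn_ack -> ESTABLISHED, ...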


Protocol development

For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability.

Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members are in control of large market shares relevant to the protocol and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol.


The need for protocol standards

The need for protocol standards can be shown by looking at what happened to the Binary Synchronous Communications (BSC) protocol invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to enhance the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume that a standard would have prevented at least some of this from happening. (Marsden 1986, Section 6.1 - Why are standards necessary?, p. 64-65, uses BSC as an example to show the need for both standard protocols and a standard framework.)

In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as ''de facto standards''. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolized). They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition.


Standards organizations

Some of the standards organizations of relevance for communication protocols are the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the public switched telephone network (PSTN), as well as many radio communication systems. For marine electronics the NMEA standards are used. The World Wide Web Consortium (W3C) produces protocols and standards for Web technologies.

International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned cooperate closely with each other. Multiple standards bodies may be involved in the development of a protocol. If they are uncoordinated, then the result may be multiple, incompatible definitions of a protocol, or multiple, incompatible interpretations of messages; important invariants in one definition (e.g., that time-to-live values are monotone decreasing to prevent stable routing loops) may not be respected in another.


The standardization process

In the ISO, the standardization process starts off with the commissioning of a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement. These comments are taken into account and a ''draft proposal'' is produced by the working group. After feedback, modification, and compromise the proposal reaches the status of a ''draft international standard'', and ultimately an ''international standard''. International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject.


OSI standardization

A lesson learned from ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for ''structured protocols'' (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels (layers). This gave rise to the Open Systems Interconnection model (OSI model), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications.

In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a ''service access point''. Corresponding layers at each system are called ''peer entities''. To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it.

In the OSI model, the layers and their functionality are (from highest to lowest layer):

* The ''application layer'' may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures.
* The ''presentation layer'' may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g., data compression and data encryption).
* The ''session layer'' may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service which allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, reporting of unrecoverable exceptions to the presentation entity.
* The ''transport layer'' provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. It may support the multiplexing of several transport connections on to one network connection or split one transport connection into several network connections.
* The ''network layer'' does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for network congestion control.
* The ''data link layer'' does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer.
* The ''physical layer'' describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections.

In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Connection-oriented communication requires some form of session and (virtual) circuits, hence the session layer, which is absent from the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks, and connectionless networks were first mentioned in an addendum to RM/OSI and later incorporated into an update to RM/OSI.

At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code". Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and because of this, both TCP and IP could be developed into international standards.


Ossification

Protocol ossification is the loss of flexibility, extensibility and evolvability of network protocols. This is largely due to middleboxes that are sensitive to the wire image of the protocol, and which can interrupt or interfere with messages that are valid but which the middlebox does not correctly recognize. This is a violation of the end-to-end principle. Secondary causes include inflexibility in endpoint implementations of protocols.

Ossification is a major issue in Internet protocol design and deployment, as it can prevent new protocols or extensions from being deployed on the Internet, or place strictures on the design of new protocols; new protocols may have to be encapsulated in an already-deployed protocol or mimic the wire image of another protocol. Because of ossification, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the only practical choices for transport protocols on the Internet, and TCP itself has significantly ossified, making extension or modification of the protocol difficult.

Recommended methods of preventing ossification include encrypting protocol metadata, and ensuring that extension points are exercised and wire image variability is exhibited as fully as possible; remedying existing ossification requires coordination across protocol participants. QUIC is the first IETF transport protocol to have been designed with deliberate anti-ossification properties.


Taxonomies

Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol. A ''layering scheme'' combines both function and domain of use. The dominant layering schemes are the ones developed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes. (Comer 2000, Sect. 11.5.1 - The TCP/IP 5-Layer Reference Model, p. 183, states the same.) The layering scheme from the IETF is called ''Internet layering'' or ''TCP/IP layering''. The layering scheme from ISO is called ''the OSI model'' or ''ISO layering''.

In networking equipment configuration, a term-of-art distinction is often drawn: the term ''protocol'' strictly refers to the transport layer, and the term ''service'' refers to protocols utilizing a ''protocol'' for transport. In the common case of TCP and UDP, services are distinguished by port numbers. Conformance to these port numbers is voluntary, so in content inspection systems the term ''service'' strictly refers to port numbers, and the term ''application'' is often used to refer to protocols identified through inspection signatures.


See also

*Lists of network protocols




Bibliography

* In particular Ch. 18 on "network design folklore", which is also available online.
* In particular Ch. 11 Protocol layering. Also has a RFC guide and a Glossary of Internetworking Terms and Abbreviations.
* Describes TCP/IP to the implementors of protocol software. In particular the introduction gives an overview of the design goals of the suite.

