Hypertext Transfer Protocol

The Hypertext Transfer Protocol (HTTP) is an application layer protocol in the Internet protocol suite model for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989 and summarized in a simple document describing the behavior of a client and a server using the first HTTP protocol version, named 0.9. That first version of the HTTP protocol soon evolved into a more elaborate version that was the first draft toward a far-future version 1.0. Development of early HTTP Requests for Comments (RFCs) started a few years later as a coordinated effort by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF.

HTTP/1 was finalized and fully documented (as version 1.0) in 1996. It evolved (as version 1.1) in 1997, and its specifications were updated in 1999, 2014, and 2022. Its secure variant, HTTPS, is used by more than 80% of websites.

HTTP/2, published in 2015, provides a more efficient expression of HTTP's semantics "on the wire". It is now used by 41% of websites and supported by almost all web browsers (over 97% of users). It is also supported by major web servers over Transport Layer Security (TLS) using an Application-Layer Protocol Negotiation (ALPN) extension, where TLS 1.2 or newer is required.

HTTP/3, the successor to HTTP/2, was published in 2022. It is now used by over 25% of websites and is supported by many web browsers (over 75% of users). HTTP/3 uses QUIC instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. Support for HTTP/3 was added to Cloudflare and Google Chrome first, and is also enabled in Firefox. HTTP/3 has lower latency for real-world web pages: if enabled on the server, pages load faster than with HTTP/2 and even faster than with HTTP/1.1, in some cases over 3× faster than HTTP/1.1 (which on many servers is still the only version enabled).


Technical overview

HTTP functions as a request–response protocol in the client–server model. A web browser, for example, may be the ''client'', whereas a process named a web server, running on a computer hosting one or more websites, may be the ''server''. The client submits an HTTP ''request'' message to the server. The server, which provides ''resources'' such as HTML files and other content or performs other functions on behalf of the client, returns a ''response'' message to the client. The response contains completion status information about the request and may also contain requested content in its message body.

A web browser is an example of a ''user agent'' (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.

HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them, whenever possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. To allow intermediate HTTP nodes (proxy servers, web caches, etc.) to accomplish their functions, some of the HTTP headers (found in HTTP requests/responses) are managed hop-by-hop whereas other HTTP headers are managed end-to-end (managed only by the source client and by the target web server).

HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol, so the Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and the Simple Service Discovery Protocol (SSDP).

HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes ''http'' and ''https''. URIs are encoded as hyperlinks in HTML documents, so as to form interlinked hypertext documents.

In HTTP/1.0 a separate connection to the same server is made for every resource request. In HTTP/1.1, by contrast, a TCP connection can be reused to make multiple resource requests (i.e. of HTML pages, frames, images, scripts, stylesheets, etc.). HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead, especially under high-traffic conditions.
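
Before opening a connection, a client decomposes the URL into the pieces it needs: the scheme, the host, the port, and the request target. The following minimal sketch uses Python's standard urllib.parse module; the URL itself is a made-up example, not one referenced by this article.

 # Sketch: decomposing an HTTP URL into the parts an HTTP client needs.
 # The URL below is a made-up example.
 from urllib.parse import urlsplit

 url = "https://www.example.com:8443/images/logo.png?size=large"
 parts = urlsplit(url)

 print(parts.scheme)    # 'https' -> use TLS; default port would be 443
 print(parts.hostname)  # 'www.example.com' -> DNS lookup and the Host header
 print(parts.port)      # 8443 -> an explicit port overrides the scheme default
 print(parts.path)      # '/images/logo.png' -> request target in the request line
 print(parts.query)     # 'size=large'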
HTTP/2 is a revision of HTTP/1.1 that keeps the same client–server model and the same protocol methods but differs in the following ways:
* it uses a compressed binary representation of metadata (HTTP headers) instead of a textual one, so that headers require much less space;
* it uses a single TCP/IP (usually encrypted) connection per accessed server domain instead of 2 to 8 TCP/IP connections;
* it uses one or more bidirectional streams per TCP/IP connection, in which HTTP requests and responses are broken down and transmitted in small packets, largely solving the problem of head-of-line blocking (HOLB);
* it adds a push capability that allows a server application to send data to clients whenever new data is available (without forcing clients to periodically poll the server for new data).
HTTP/2 communications therefore experience much less latency and, in most cases, even higher speed than HTTP/1.1 communications.

HTTP/3 is a revision of HTTP/2 that uses QUIC over UDP as the underlying transport instead of TCP/IP connections. This slightly improves the average speed of communications and avoids the occasional (very rare) problem of TCP connection congestion that can temporarily block or slow down the data flow of all its streams (another form of ''head-of-line blocking'').


History

The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a client user interface called a web browser. Berners-Lee designed HTTP in order to help with the adoption of his other idea: the "WorldWideWeb" project, which was first proposed in 1989 and is now known as the World Wide Web.

The first web server went live in 1990. The protocol used had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page.


Summary of HTTP milestone versions

;HTTP/0.9
:In 1991, the first documented official version of HTTP was written as a plain document, less than 700 words long, and this version was named HTTP/0.9. HTTP/0.9 supported only the GET method, allowing clients to retrieve only HTML documents from the server, and supporting no other file formats or information upload.

;HTTP/1.0-draft
:Starting in 1992, a new document was written to specify the evolution of the basic protocol towards its next full version. It supported both the simple request method of the 0.9 version and the full GET request that included the client HTTP version. This was the first of the many unofficial HTTP/1.0 drafts that preceded the final work on HTTP/1.0.

;W3C HTTP Working Group
:After having decided that new features of the HTTP protocol were required and that they had to be fully documented as official RFCs, in early 1995 the HTTP Working Group (HTTP WG, led by Dave Raggett) was constituted with the aim to standardize and expand the protocol with extended operations, extended negotiation, richer meta-information, tied with a security protocol which became more efficient by adding additional methods and header fields.
:The HTTP WG planned to revise and publish new versions of the protocol as HTTP/1.0 and HTTP/1.1 within 1995, but, because of the many revisions, that timeline lasted much more than one year.
:The HTTP WG also planned to specify a far-future version of HTTP called HTTP-NG (HTTP Next Generation) that would have solved all remaining problems of previous versions related to performance, low-latency responses, etc., but this work started only a few years later and was never completed.

;HTTP/1.0
:In May 1996, RFC 1945 was published as the final HTTP/1.0 revision of what had been used in the previous four years as a pre-standard HTTP/1.0 draft, already used by many web browsers and web servers.
:In early 1996 developers started to include unofficial extensions of the HTTP/1.0 protocol (i.e. keep-alive connections, etc.) into their products by using drafts of the upcoming HTTP/1.1 specifications.

;HTTP/1.1
:Starting in early 1996, major web browser and web server developers also began to implement new features specified by pre-standard HTTP/1.1 draft specifications. End-user adoption of the new versions of browsers and servers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet used the new HTTP/1.1 header "Host" to enable virtual hosting. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were pre-standard HTTP/1.1 compliant.
:In January 1997, RFC 2068 was officially released as the HTTP/1.1 specification.
:In June 1999, RFC 2616 was released to include all improvements and updates based on the previous (obsolete) HTTP/1.1 specification.

;W3C HTTP-NG Working Group
:Resuming the old 1995 plan of the previous HTTP Working Group, in 1997 an ''HTTP-NG Working Group'' was formed to develop a new HTTP protocol named HTTP-NG (HTTP New Generation). A few proposals and drafts were produced for the new protocol to use multiplexing of HTTP transactions inside a single TCP/IP connection, but in 1999 the group stopped its activity, passing the technical problems to the IETF.

;IETF HTTP Working Group restarted
:In 2007, the IETF HTTP Working Group (HTTP WG bis or HTTPbis) was restarted, firstly to revise and clarify the previous HTTP/1.1 specifications and secondly to write and refine future HTTP/2 specifications (named httpbis).

;SPDY: an unofficial HTTP protocol developed by Google
:In 2009, Google, a private company, announced that it had developed and tested a new HTTP binary protocol named SPDY. The implicit aim was to greatly speed up web traffic (especially between future web browsers and its servers).
:SPDY was indeed much faster than HTTP/1.1 in many tests and so it was quickly adopted by Chromium and then by other major web browsers.
:Some of the ideas about multiplexing HTTP streams over a single TCP/IP connection were taken from various sources, including the work of the W3C HTTP-NG Working Group.

;HTTP/2
:In January–March 2012, the HTTP Working Group (HTTPbis) announced the need to start focusing on a new HTTP/2 protocol (while finishing the revision of the HTTP/1.1 specifications), possibly taking into consideration ideas and work done for SPDY.
:After a few months of discussion about how to develop a new version of HTTP, it was decided to derive it from SPDY.
:In May 2015, HTTP/2 was published as RFC 7540 and quickly adopted by all web browsers already supporting SPDY, and more slowly by web servers.

;2014 updates to HTTP/1.1
:In June 2014, the HTTP Working Group released an updated six-part HTTP/1.1 specification obsoleting RFC 2616:
:* RFC 7230, ''HTTP/1.1: Message Syntax and Routing''
:* RFC 7231, ''HTTP/1.1: Semantics and Content''
:* RFC 7232, ''HTTP/1.1: Conditional Requests''
:* RFC 7233, ''HTTP/1.1: Range Requests''
:* RFC 7234, ''HTTP/1.1: Caching''
:* RFC 7235, ''HTTP/1.1: Authentication''

;HTTP/0.9 Deprecation
:In Appendix A of the updated specification, HTTP/0.9 was deprecated for servers supporting HTTP/1.1 (and higher).
:Since 2016 many product managers and developers of user agents (browsers, etc.) and web servers have begun planning to gradually deprecate and drop support for the HTTP/0.9 protocol, mainly for the following reasons:
:* it is so simple that an RFC document was never written (there is only the original document);
:* it has no HTTP headers and lacks many other features that nowadays are required for minimal security reasons;
:* it has not been widespread since 1999–2000 (because of HTTP/1.0 and HTTP/1.1) and is commonly used only by some very old network hardware, i.e. routers, etc.

;HTTP/3
:In 2020, the first drafts of HTTP/3 were published and major web browsers and web servers started to adopt it.
:On 6 June 2022, the IETF standardized HTTP/3 as RFC 9114.

;Updates and refactoring in 2022
:In June 2022, a batch of RFCs was published, deprecating many of the previous documents and introducing a few minor changes and a refactoring of the HTTP semantics description into a separate document:
:* RFC 9110, ''HTTP Semantics''
:* RFC 9111, ''HTTP Caching''
:* RFC 9112, ''HTTP/1.1''
:* RFC 9113, ''HTTP/2''
:* RFC 9114, ''HTTP/3'' (see also the section above)
:* RFC 9204, ''QPACK: Field Compression for HTTP/3''
:* RFC 9218, ''Extensible Prioritization Scheme for HTTP''


HTTP data exchange

HTTP is a stateless application-level protocol and it requires a reliable network transport connection to exchange data between client and server. In HTTP implementations, TCP/IP connections are used on well-known ports (typically port 80 if the connection is unencrypted or port 443 if the connection is encrypted; see also List of TCP and UDP port numbers). In HTTP/2, a TCP/IP connection plus multiple protocol channels are used. In HTTP/3, the application transport protocol QUIC over UDP is used.
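
As a rough sketch of these well-known ports in practice, Python's standard http.client module defaults to port 80 for plain HTTP connections and 443 for HTTPS connections; the host name below is a placeholder.

 # Sketch: default ports for HTTP (80) and HTTPS (443) connections.
 # 'www.example.com' is a placeholder host.
 import http.client

 plain = http.client.HTTPConnection("www.example.com")    # defaults to port 80
 secure = http.client.HTTPSConnection("www.example.com")  # defaults to port 443, over TLS

 print(plain.port)   # 80
 print(secure.port)  # 443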


Request and response messages through connections

Data is exchanged through a sequence of request–response messages which are exchanged by a session layer transport connection. An HTTP client initially tries to connect to a server, establishing a connection (real or virtual). An HTTP(S) server listening on that port accepts the connection and then waits for a client's request message. The client sends its request to the server. Upon receiving the request, the server sends back an HTTP response message (a header plus a body, if one is required). The body of this message is typically the requested resource, although an error message or other information may also be returned. At any time (for many reasons) the client or the server can close the connection. Closing a connection is usually advertised in advance by using one or more HTTP headers in the last request/response message sent to the server or client.
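
This exchange can be observed with a minimal sketch that opens a TCP connection, writes one HTTP/1.1 request by hand, and reads back the raw response. The host is a placeholder and error handling is omitted.

 # Sketch: one HTTP/1.1 request/response exchange over a raw TCP connection.
 # 'example.com' is a placeholder host; no error handling is shown.
 import socket

 request = (
     "GET / HTTP/1.1\r\n"
     "Host: example.com\r\n"
     "Connection: close\r\n"   # ask the server to close after this response
     "\r\n"                    # empty line ends the header section
 )

 with socket.create_connection(("example.com", 80)) as sock:
     sock.sendall(request.encode("ascii"))
     response = b""
     while chunk := sock.recv(4096):   # read until the server closes the connection
         response += chunk

 header_part, _, body = response.partition(b"\r\n\r\n")
 print(header_part.decode("iso-8859-1"))   # status line and response headers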


Persistent connections

In HTTP/0.9, the TCP/IP connection is always closed after the server response has been sent, so it is never persistent. In HTTP/1.0, as stated in RFC 1945, the TCP/IP connection should always be closed by the server after a response has been sent. In HTTP/1.1 a keep-alive mechanism was officially introduced so that a connection could be reused for more than one request/response. Such persistent connections reduce request latency perceptibly because the client does not need to re-negotiate the TCP three-way handshake after the first request has been sent. Another positive side effect is that, in general, the connection becomes faster with time due to TCP's slow-start mechanism.

HTTP/1.1 also added HTTP pipelining, to further reduce lag time when using persistent connections by allowing clients to send multiple requests before waiting for each response. This optimization was never considered really safe because a few web servers and many proxy servers, especially transparent proxy servers placed between clients and servers on the Internet or in intranets, did not handle pipelined requests properly (they served only the first request and discarded the others, they closed the connection because they saw more data after the first request, or some proxies even returned responses out of order, etc.). Besides this, only HEAD and some GET requests (i.e. limited to real file requests, and so with URLs without a query string used as a command, etc.) could be pipelined in a safe and idempotent mode. After many years of struggling with the problems introduced by enabling pipelining, this feature was first disabled and then removed from most browsers, also because of the announced adoption of HTTP/2.

HTTP/2 extended the usage of persistent connections by multiplexing many concurrent requests/responses through a single TCP/IP connection. HTTP/3 does not use TCP/IP connections but QUIC + UDP (see also: technical overview).
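
A persistent connection can be sketched with Python's standard http.client module, which keeps the underlying TCP connection open between requests when the server allows it; the host and paths are placeholders.

 # Sketch: two requests reusing one HTTP/1.1 keep-alive connection.
 # 'www.example.com' and the paths are placeholders.
 import http.client

 conn = http.client.HTTPSConnection("www.example.com")

 conn.request("GET", "/")               # first request on the connection
 first = conn.getresponse()
 first.read()                           # the body must be consumed before reuse

 conn.request("GET", "/index.html")     # second request on the same TCP connection
 second = conn.getresponse()
 print(first.status, second.status)

 conn.close()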


Content retrieval optimizations

;HTTP/0.9
:A requested resource was always sent entirely.

;HTTP/1.0
:HTTP/1.0 added headers to manage resources cached by the client in order to allow conditional GET requests; in practice a server has to return the entire content of the requested resource only if its last-modified time is not known by the client or if it has changed since the last full response to a GET request. One of these headers, "Content-Encoding", was added to specify whether the returned content of a resource was or was not compressed.
:If the total length of the content of a resource was not known in advance (i.e. because it was dynamically generated, etc.), then the header "Content-Length: number" was not present in the HTTP headers and the client assumed that when the server closed the connection, the content had been sent entirely. This mechanism could not distinguish between a resource transfer that completed successfully and an interrupted one (because of a server / network error or something else).

;HTTP/1.1
:HTTP/1.1 introduced:
:* new headers to better manage the conditional retrieval of cached resources;
:* chunked transfer encoding, to allow content to be streamed in chunks in order to reliably send it even when the server does not know its length in advance (i.e. because it is dynamically generated, etc.);
:* byte range serving, where a client can request only one or more portions (ranges of bytes) of a resource (i.e. the first part, a part in the middle or at the end of the entire content, etc.) and the server usually sends only the requested part(s). This is useful to resume an interrupted download (when a file is really big), or when only a part of the content has to be shown or dynamically added to the already visible part by a browser (i.e. only the first or the following n comments of a web page), in order to spare time, bandwidth and system resources, etc.

;HTTP/2, HTTP/3
:Both HTTP/2 and HTTP/3 have kept the above-mentioned features of HTTP/1.1 (a conditional GET and a range request are sketched below).
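
The conditional and partial retrievals described above can be sketched with hand-written request headers; the host, path, date, and byte range are placeholders, and a real server may or may not honour them.

 # Sketch: a conditional GET and a byte-range GET using HTTP/1.1 headers.
 # 'www.example.com' and the header values are placeholders.
 import http.client

 conn = http.client.HTTPSConnection("www.example.com")

 # Conditional GET: the server may answer "304 Not Modified" with no body.
 conn.request("GET", "/logo.png",
              headers={"If-Modified-Since": "Sat, 01 Jan 2022 00:00:00 GMT"})
 resp = conn.getresponse()
 resp.read()
 print(resp.status)             # 200 with a full body, or 304 with an empty body

 # Range request: ask only for the first 1024 bytes of the resource.
 conn.request("GET", "/logo.png", headers={"Range": "bytes=0-1023"})
 resp = conn.getresponse()
 part = resp.read()
 print(resp.status, len(part))  # 206 Partial Content if ranges are supported

 conn.close()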


HTTP authentication

HTTP provides multiple authentication schemes, such as basic access authentication and digest access authentication, which operate via a challenge–response mechanism whereby the server identifies and issues a challenge before serving the requested content.

HTTP provides a general framework for access control and authentication, via an extensible set of challenge–response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information. The authentication mechanisms described above belong to the HTTP protocol and are managed by client and server HTTP software (if configured to require authentication before allowing client access to one or more web resources), and not by the web applications using a web application session.
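
The challenge–response flow of basic access authentication can be sketched as follows; the host, path, realm, and credentials are placeholders. The client base64-encodes "user:password" and sends it in the Authorization header after receiving a 401 challenge.

 # Sketch: answering an HTTP Basic authentication challenge.
 # Host, path, realm, and credentials are placeholders.
 import base64
 import http.client

 conn = http.client.HTTPSConnection("www.example.com")

 # First request without credentials: the server replies 401 and names a realm,
 # e.g.  WWW-Authenticate: Basic realm="Staging site"
 conn.request("GET", "/private/")
 challenge = conn.getresponse()
 challenge.read()
 print(challenge.status, challenge.getheader("WWW-Authenticate"))

 # Retry with the credentials encoded as base64("user:password").
 token = base64.b64encode(b"alice:secret").decode("ascii")
 conn.request("GET", "/private/", headers={"Authorization": "Basic " + token})
 reply = conn.getresponse()
 print(reply.status)            # 200 if the credentials were accepted

 conn.close()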


Authentication realms

The HTTP Authentication specification also provides an arbitrary, implementation-specific construct for further dividing resources common to a given root URI. The realm value string, if present, is combined with the canonical root URI to form the protection space component of the challenge. This in effect allows the server to define separate authentication scopes under one root URI.


HTTP application session

HTTP is a stateless protocol. A stateless protocol does not require the web server to retain information or status about each user for the duration of multiple requests. Some web applications need to manage user sessions, so they implement states, or server-side sessions, using for instance HTTP cookies or hidden variables within web forms. To start an application user session, an interactive authentication via web application login must be performed. To stop a user session, a logout operation must be requested by the user. These kinds of operations do not use HTTP authentication but custom-managed web application authentication.
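
A server-side session carried by an HTTP cookie can be sketched as follows: the application logs in once, stores the value from the Set-Cookie response header, and sends it back in a Cookie header on later requests. The host, paths, form fields, and cookie are placeholders.

 # Sketch: a web-application session kept in an HTTP cookie.
 # Host, paths, form fields, and the cookie itself are placeholders.
 import http.client
 import urllib.parse

 conn = http.client.HTTPSConnection("www.example.com")

 # Log in with a POST; the application answers with a Set-Cookie header.
 form = urllib.parse.urlencode({"user": "alice", "password": "secret"})
 conn.request("POST", "/login", body=form,
              headers={"Content-Type": "application/x-www-form-urlencoded"})
 login = conn.getresponse()
 login.read()
 session_cookie = login.getheader("Set-Cookie")   # e.g. "sessionid=abc123; Path=/"

 # Later requests present the cookie so the server can find the session state.
 conn.request("GET", "/account",
              headers={"Cookie": session_cookie.split(";")[0]})
 print(conn.getresponse().status)

 conn.close()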


HTTP/1.1 request messages

Request messages are sent by a client to a target server.


Request syntax

A client sends ''request messages'' to the server, which consist of:
* a request line, consisting of the case-sensitive request method, a space, the requested URL, another space, the protocol version, a carriage return, and a line feed, e.g.:
 GET /images/logo.png HTTP/1.1
* zero or more request header fields (at least one, the Host field, in the case of HTTP/1.1), each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, optional trailing whitespace, and ending with a carriage return and a line feed, e.g.:
 Host: www.example.com
 Accept-Language: en
* an empty line, consisting of a carriage return and a line feed;
* an optional message body.

In the HTTP/1.1 protocol, all header fields except Host: hostname are optional. A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients from before the HTTP/1.0 specification.
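
This syntax can be illustrated by a minimal parser that splits a raw HTTP/1.1 request into its request line, header fields, and body; this is a simplified sketch for illustration, not a complete or robust HTTP parser.

 # Sketch: splitting a raw HTTP/1.1 request into request line, headers, and body.
 # Simplified for illustration; a real parser must handle many more cases.
 raw = (
     "GET /images/logo.png HTTP/1.1\r\n"
     "Host: www.example.com\r\n"
     "Accept-Language: en\r\n"
     "\r\n"
 )

 head, _, body = raw.partition("\r\n\r\n")
 request_line, *header_lines = head.split("\r\n")

 method, target, version = request_line.split(" ")
 headers = {}
 for line in header_lines:
     name, _, value = line.partition(":")
     headers[name.strip().lower()] = value.strip()   # field names are case-insensitive

 print(method, target, version)   # GET /images/logo.png HTTP/1.1
 print(headers["host"])           # www.example.com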


Request methods

HTTP defines methods (sometimes referred to as ''verbs'', though nowhere in the specification does it mention ''verb'') to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server.

The HTTP/1.0 specification defined the GET, HEAD, and POST methods, and the HTTP/1.1 specification added five new methods: PUT, DELETE, CONNECT, OPTIONS, and TRACE. Any client can use any method and the server can be configured to support any combination of methods. If a method is unknown to an intermediate, it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined, which allows for future methods to be specified without breaking existing infrastructure. For example, WebDAV defined seven new methods and RFC 5789 specified the PATCH method.

Method names are case sensitive. This is in contrast to HTTP header field names, which are case-insensitive.

;GET
:The GET method requests that the target resource transfer a representation of its state. GET requests should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) For retrieving resources without making changes, GET is preferred over POST, as they can be addressed through a URL. This enables bookmarking and sharing and makes GET responses eligible for caching, which can save bandwidth. The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations." See safe methods below.

;HEAD
:The HEAD method requests that the target resource transfer a representation of its state, as for a GET request, but without the representation data enclosed in the response body. This is useful for retrieving the representation metadata in the response header, without having to transfer the entire representation. Uses include checking whether a page is available through the status code and quickly finding the size of a file (Content-Length).

;POST
:The POST method requests that the target resource process the representation enclosed in the request according to the semantics of the target resource. For example, it is used for posting a message to an Internet forum, subscribing to a mailing list, or completing an online shopping transaction.

;PUT
:The PUT method requests that the target resource create or update its state with the state defined by the representation enclosed in the request. A distinction from POST is that the client specifies the target location on the server.

;DELETE
:The DELETE method requests that the target resource delete its state.

;CONNECT
:The CONNECT method requests that the intermediary establish a TCP/IP tunnel to the origin server identified by the request target. It is often used to secure connections through one or more HTTP proxies with TLS. See HTTP CONNECT method.

;OPTIONS
:The OPTIONS method requests that the target resource transfer the HTTP methods that it supports. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

;TRACE
:The TRACE method requests that the target resource transfer the received request in the response body. That way a client can see what (if any) changes or additions have been made by intermediaries.

;PATCH
:The PATCH method requests that the target resource modify its state according to the partial update defined in the representation enclosed in the request. This can save bandwidth by updating a part of a file or document without having to transfer it entirely.

All general-purpose web servers are required to implement at least the GET and HEAD methods; all other methods are considered optional by the specification.
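
The difference between GET and HEAD can be sketched with two requests for the same resource: HEAD returns the same status and headers (such as Content-Length) but no body. The host and path are placeholders.

 # Sketch: HEAD returns the metadata of a resource without its body.
 # 'www.example.com' and the path are placeholders.
 import http.client

 conn = http.client.HTTPSConnection("www.example.com")

 conn.request("HEAD", "/index.html")
 head = conn.getresponse()
 head.read()                                   # the body is always empty for HEAD
 print(head.status, head.getheader("Content-Length"))

 conn.request("GET", "/index.html")
 get = conn.getresponse()
 body = get.read()                             # the full representation is transferred
 print(get.status, len(body))

 conn.close()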


Safe methods

A request method is ''safe'' if a request with that method has no intended effect on the server. The methods GET, HEAD, OPTIONS, and TRACE are defined as safe. In other words, safe methods are intended to be read-only. They do not exclude side effects, though, such as appending request information to a log file or charging an advertising account, since such effects are, by definition, not requested by the client.

In contrast, the methods POST, PUT, DELETE, CONNECT, and PATCH are not safe. They may modify the state of the server or have other effects, such as sending an email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Careless or deliberately irregular programming can allow GET requests to cause non-trivial changes on the server. This is discouraged because of the problems which can occur when web caching, search engines, and other automated agents make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as ''https://example.com/article/1234/delete'', which, if arbitrarily fetched, even using GET, would simply delete the article. A properly coded website would require a DELETE or POST method for this action, which non-malicious bots would not make. One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted ''en masse''. The beta was suspended only weeks after its first release, following widespread criticism.
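
A minimal sketch of the "properly coded website" mentioned above, using Python's standard http.server: the article can be read with GET but removed only with DELETE, so a crawler fetching URLs with GET cannot destroy data. The in-memory article store and URL layout are made up for illustration.

 # Sketch: destructive actions only on DELETE, never on a crawlable GET URL.
 # The article store and URL layout are made up for illustration.
 from http.server import BaseHTTPRequestHandler, HTTPServer

 articles = {"1234": "An example article."}

 class ArticleHandler(BaseHTTPRequestHandler):
     def do_GET(self):
         # GET stays safe: it only reads, it never deletes.
         article = articles.get(self.path.strip("/").split("/")[-1])
         if article is None:
             self.send_error(404)
             return
         body = article.encode("utf-8")
         self.send_response(200)
         self.send_header("Content-Length", str(len(body)))
         self.end_headers()
         self.wfile.write(body)

     def do_DELETE(self):
         articles.pop(self.path.strip("/").split("/")[-1], None)
         self.send_response(204)    # 204 No Content: deleted (or already gone)
         self.end_headers()

 if __name__ == "__main__":
     HTTPServer(("localhost", 8080), ArticleHandler).serve_forever()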


Idempotent methods

A request method is ''idempotent'' if multiple identical requests with that method have the same effect as a single such request. The methods PUT and DELETE, and safe methods are defined as idempotent. Safe methods are trivially idempotent, since they are intended to have no effect on the server whatsoever; the PUT and DELETE methods, meanwhile, are idempotent because repeating an identical request leaves the target resource in the same state as the first request did. A website might, for instance, set up a PUT endpoint to modify a user's recorded email address. If this endpoint is configured correctly, any requests which ask to change a user's email address to the same email address which is already recorded (e.g. duplicate requests following a successful request) will have no effect. Similarly, a request to DELETE a certain user will have no effect if that user has already been deleted.

In contrast, the methods POST, CONNECT, and PATCH are not necessarily idempotent, and therefore sending an identical POST request multiple times may further modify the state of the server or have further effects, such as sending multiple emails. In some cases this is the desired effect, but in other cases it may occur accidentally. A user might, for example, inadvertently send multiple POST requests by clicking a button again if they were not given clear feedback that the first click was being processed. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether or not a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Doing so against these recommendations, however, may result in undesirable consequences if a user agent assumes that repeating the same request is safe when it is not.
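The difference can be made concrete with a small sketch; the in-memory user store, outbox list, and function names below are hypothetical and merely stand in for the PUT and POST handlers described above.

users = {"alice": {"email": "old@example.com"}}   # hypothetical user store
outbox = []                                       # hypothetical mail queue

def put_email(user, new_email):
    # Idempotent: repeating the call with the same value changes nothing further.
    users[user]["email"] = new_email
    return users[user]

def post_send_welcome(user):
    # Not idempotent: every call queues one more outgoing message.
    outbox.append("Welcome mail to " + users[user]["email"])
    return len(outbox)

put_email("alice", "new@example.com")
put_email("alice", "new@example.com")   # duplicate request, no additional effect
post_send_welcome("alice")
post_send_welcome("alice")              # duplicate request, a second email is queued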


Cacheable methods

A request method is ''cacheable'' if responses to requests with that method may be stored for future reuse. The methods GET, HEAD, and POST are defined as cacheable. In contrast, the methods PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH are not cacheable.
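As a minimal sketch of this rule, responses could be stored and reused only for the cacheable methods. This is a toy client-side cache, not a complete HTTP cache: real caches also consult freshness information such as Cache-Control and Expires, and POST responses are reusable only when such information is present.

CACHEABLE_METHODS = {"GET", "HEAD", "POST"}
cache = {}

def fetch(method, url, do_request):
    # do_request stands for any function that actually performs the HTTP request.
    key = (method, url)
    if method in CACHEABLE_METHODS and key in cache:
        return cache[key]               # reuse the stored response
    response = do_request(method, url)
    if method in CACHEABLE_METHODS:
        cache[key] = response           # eligible for later reuse
    return response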


Request header fields

Request header fields allow the client to pass additional information beyond the request line, acting as request modifiers (similarly to the parameters of a procedure). They give information about the client, about the target resource, or about the expected handling of the request.
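A brief sketch of passing such request modifiers, using Python's standard http.client; the host name and header values are illustrative only.

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/", headers={
    "Host": "www.example.com",            # the target host
    "Accept-Language": "en-GB,en;q=0.5",  # expected handling of the request
    "User-Agent": "ExampleClient/1.0",    # information about the client
})
response = conn.getresponse()
print(response.status, response.reason)
conn.close()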


HTTP/1.1 response messages

A response message is sent by a server to a client as a reply to its former request message.


Response syntax

A server sends ''response messages'' to the client, which consist of:
* a status line, consisting of the protocol version, a space, the response status code, another space, a possibly empty reason phrase, a carriage return and a line feed, e.g.:
HTTP/1.1 200 OK
* zero or more response header fields, each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, optional trailing whitespace, and ending with a carriage return and a line feed, e.g.:
Content-Type: text/html
* an empty line, consisting of a carriage return and a line feed;
* an optional message body.
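The structure above can be seen in a minimal parsing sketch; the raw response bytes are illustrative only.

raw = (b"HTTP/1.1 200 OK\r\n"
       b"Content-Type: text/html\r\n"
       b"\r\n"
       b"<html>...</html>")

head, _, body = raw.partition(b"\r\n\r\n")         # the empty line ends the header section
status_line, *header_lines = head.split(b"\r\n")
version, status_code, reason = status_line.split(b" ", 2)
headers = {}
for line in header_lines:
    name, _, value = line.partition(b":")
    headers[name.strip().lower()] = value.strip()  # field names are case-insensitive

print(version, status_code, reason)                # b'HTTP/1.1' b'200' b'OK'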


Response status codes

In HTTP/1.0 and since, the first line of the HTTP response is called the ''status line'' and includes a numeric ''status code'' (such as "404") and a textual ''reason phrase'' (such as "Not Found"). The response status code is a three-digit integer code representing the result of the server's attempt to understand and satisfy the client's corresponding request. The way the client handles the response depends primarily on the status code, and secondarily on the other response header fields. Clients may not understand all registered status codes but they must understand their class (given by the first digit of the status code) and treat an unrecognized status code as being equivalent to the x00 status code of that class. The standard ''reason phrases'' are only recommendations, and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicates a problem, the user agent might display the ''reason phrase'' to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the ''reason phrase'', though this might be unwise since the standard explicitly specifies that status codes are machine-readable and ''reason phrases'' are human-readable.

The first digit of the status code defines its class:
* 1XX (informational): The request was received, continuing process.
* 2XX (successful): The request was successfully received, understood, and accepted.
* 3XX (redirection): Further action needs to be taken in order to complete the request.
* 4XX (client error): The request contains bad syntax or cannot be fulfilled.
* 5XX (server error): The server failed to fulfill an apparently valid request.
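The class rule can be sketched as follows; the set of recognised codes is an illustrative subset, not a complete registry.

KNOWN_CODES = {200, 204, 301, 302, 304, 400, 403, 404, 500, 503}  # illustrative subset

def effective_code(status_code):
    # Treat an unrecognised status code as the x00 code of its class.
    if status_code in KNOWN_CODES:
        return status_code
    return (status_code // 100) * 100

assert effective_code(404) == 404
assert effective_code(418) == 400   # unrecognised 4XX handled like 400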


Response header fields

The response header fields allow the server to pass additional information beyond the status line, acting as response modifiers. They give information about the server or about further access to the target resource or related resources. Each response header field has a defined meaning which can be further refined by the semantics of the request method or response status code.


HTTP/1.1 example of request / response transaction

Below is a sample HTTP transaction between an HTTP/1.1 client and an HTTP/1.1 server running on www.example.com, port 80.


Client request

GET / HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive

A client request (consisting in this case of the request line and a few headers that can be reduced to only the "Host: hostname" header) is followed by a blank line, so that the request ends with a double end of line, each in the form of a carriage return followed by a line feed. The "Host: hostname" header value distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. (A "/" (slash) will usually fetch a /index.html file if there is one.)
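The role of the Host header in name-based virtual hosting can be sketched from the client side: two requests sent to the same IP address differ only in their Host header, so the server can select the right site. The host names and the shared IP address below are assumptions for illustration only.

import http.client

SHARED_IP = "192.0.2.1"   # assumed IP address serving several host names (documentation range)

for hostname in ("www.example.com", "www.example.org"):
    conn = http.client.HTTPConnection(SHARED_IP, 80)
    conn.request("GET", "/", headers={"Host": hostname})   # selects the virtual host
    print(hostname, conn.getresponse().status)
    conn.close()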


Server response

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 155
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

<html>
  <head>
    <title>An Example Page</title>
  </head>
  <body>
    <p>Hello World, this is a very simple HTML document.</p>
  </body>
</html>

The ETag (entity tag) header field is used to determine if a cached version of the requested resource is identical to the current version of the resource on the server. "Content-Type" specifies the Internet media type of the data conveyed by the HTTP message, while "Content-Length" indicates its length in bytes. The HTTP/1.1 web server publishes its ability to respond to requests for certain byte ranges of the document by setting the field "Accept-Ranges: bytes". This is useful if the client needs to have only certain portions of a resource sent by the server, which is called byte serving. When "Connection: close" is sent, it means that the web server will close the TCP connection immediately after the end of the transfer of this response.

Most of the header lines are optional, but some are mandatory. When the header "Content-Length: number" is missing in a response with an entity body, this should be considered an error in HTTP/1.0, but it may not be an error in HTTP/1.1 if the header "Transfer-Encoding: chunked" is present. Chunked transfer encoding uses a chunk size of 0 to mark the end of the content. Some old implementations of HTTP/1.0 omitted the header "Content-Length" when the length of the body entity was not known at the beginning of the response, and so the transfer of data to the client continued until the server closed the socket. A "Content-Encoding: gzip" header can be used to inform the client that the body of the transmitted data is compressed with the gzip algorithm.
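The transaction above can be reproduced with a short sketch over a raw TCP socket: the request head is terminated by a blank line, and because "Connection: close" is requested, the client simply reads until the server closes the connection.

import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"                       # blank line ends the request head
)

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk        # server closes the socket after the full response

print(response.split(b"\r\n", 1)[0])   # e.g. b'HTTP/1.1 200 OK'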


Encrypted connections

The most popular way of establishing an encrypted HTTP connection is HTTPS. Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS. Browser support for these two is, however, nearly non-existent.
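At the HTTP level an HTTPS request looks the same; only the underlying connection is wrapped in TLS, as in this brief sketch using Python's standard http.client (host name illustrative only).

import http.client

conn = http.client.HTTPSConnection("www.example.com", 443)  # TLS-wrapped connection
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
conn.close()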


Similar protocols

* The Gopher protocol is a content delivery protocol that was displaced by HTTP in the early 1990s.
* The SPDY protocol is an alternative to HTTP developed at Google, superseded by HTTP/2.
* The Gemini protocol is a Gopher-inspired protocol which mandates privacy-related features.


See also

* InterPlanetary File System – can replace HTTP
* Comparison of file transfer protocols
* Constrained Application Protocol – a protocol semantically similar to HTTP but using UDP or UDP-like messages, targeted at devices with limited processing capability; it re-uses HTTP and other Internet concepts like Internet media types and web linking (RFC 5988)
* Content negotiation
* Digest access authentication
* HTTP compression
* HTTP/2 – developed by the IETF's Hypertext Transfer Protocol (httpbis) working group
* List of HTTP header fields
* List of HTTP status codes
* Representational state transfer (REST)
* Variant object
* Web cache
* WebSocket



External links

* A detailed technical history of HTTP.
* Design Issues by Berners-Lee when he was designing the protocol.