Video

Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which were in turn replaced by flat-panel displays of several types. Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcast, magnetic tape, optical discs, computer files, and network streaming.


History


Analog video

Video technology was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team developing one of the first practical video tape recorders (VTR). In 1951, the first VTR captured live images from television cameras by writing the camera's electrical signal onto magnetic videotape. Video recorders were sold for US$50,000 in 1956, and videotapes cost US$300 per one-hour reel. However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market.


Digital video

Digital video is capable of higher quality and, eventually, much lower cost than earlier analog technology. After the invention of the DVD in 1997, and later the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit and transmit digital video, further reducing the cost of video production and allowing program-makers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition is in the process of relegating analog video to the status of a legacy technology in most parts of the world. The development of high-resolution video cameras with improved dynamic range and color gamuts, along with the introduction of high-dynamic-range digital intermediate data formats with improved color depth, has caused digital video technology to converge with film technology. The use of digital cameras in Hollywood has surpassed the use of film cameras.


Characteristics of video streams


Number of frames per second

''Frame rate'', the number of still pictures per unit of time of video, ranges from six or eight frames per second (''frame/s'') for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) specify 25 frame/s, while NTSC standards (USA, Canada, Japan, etc.) specify 29.97 frame/s. Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second.
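The NTSC rate of 29.97 frame/s is exactly 30000/1001 frames per second, which is why video tooling usually keeps frame durations as rational numbers rather than rounded decimals. A minimal sketch of the resulting per-frame durations (the function name is illustrative, not from any standard library):

```python
# Frame durations for common video frame rates.
# NTSC's nominal 29.97 frame/s is exactly 30000/1001 frames per second,
# so durations are kept as exact fractions to avoid drift over long clips.
from fractions import Fraction

def frame_duration_ms(num, den=1):
    """Duration of one frame in milliseconds for a num/den frame rate."""
    return Fraction(den * 1000, num)

pal = frame_duration_ms(25)            # 40 ms exactly
film = frame_duration_ms(24)           # 125/3 ms, about 41.67 ms
ntsc = frame_duration_ms(30000, 1001)  # 1001/30 ms, about 33.37 ms

print(float(pal), float(film), float(ntsc))
```

Keeping the duration as a `Fraction` means a one-hour NTSC program timed by summing frame durations comes out exact, whereas accumulating a rounded 33.37 ms per frame would drift by seconds.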


Interlaced vs progressive

Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.

In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two ''fields'': an ''odd field'' (upper field) consisting of the odd-numbered lines and an ''even field'' (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.

NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an ''i'' to indicate interlacing. For example, the PAL video format is often described as ''576i50'', where ''576'' indicates the total number of horizontal scan lines, ''i'' indicates interlacing, and ''50'' indicates 50 fields (half-frames) per second.
When displaying a natively interlaced signal on a progressive scan device, overall spatial resolution is degraded by simple line doubling, and artifacts such as flickering or "comb" effects in moving parts of the image appear unless special signal processing eliminates them. A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive scan device such as an LCD television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.
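The field/frame relationship described above can be sketched in a few lines. This is a toy model (a frame as a list of scan lines), not a real deinterlacer; "weave" deinterlacing simply re-interleaves the two fields, which is exact for static images but produces the comb artifacts mentioned above when the fields were captured at different instants:

```python
# A minimal sketch of interlacing: a frame's scan lines are split into
# two fields, and a "weave" deinterlacer re-interleaves them. The frame
# is modeled as a list of scan lines. Lines are numbered from 1, so the
# odd field (upper field) holds lines 1, 3, 5, ... (list indices 0, 2, 4).

def split_fields(frame):
    odd_field = frame[0::2]   # lines 1, 3, 5, ... (upper field)
    even_field = frame[1::2]  # lines 2, 4, 6, ... (lower field)
    return odd_field, even_field

def weave(odd_field, even_field):
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.extend([odd_line, even_line])
    return frame

frame = ["line1", "line2", "line3", "line4"]
odd, even = split_fields(frame)
assert weave(odd, even) == frame  # exact for a static image
```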


Aspect ratio

Aspect ratio describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are rectangular, and so can be described by a ratio between width and height. The ratio of width to height for a traditional television screen is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1. Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding anamorphic widescreen formats. The 720-by-480-pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display. The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report: it grew from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like Snapchat's are watched in their entirety nine times more frequently than landscape video ads.
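The "thin pixels on 4:3, fat pixels on 16:9" observation follows from simple geometry: the pixel aspect ratio (PAR) is the display aspect ratio divided by the raster's storage aspect ratio. A sketch of that relationship, using idealized values that ignore the small active-area corrections CCIR 601 applies in practice:

```python
# Pixel aspect ratio from raster size and intended display shape.
# PAR = DAR / SAR, where DAR is the display aspect ratio and SAR is the
# storage (raster) aspect ratio width/height. This is the idealized
# geometry; real CCIR 601 values differ slightly due to active-area margins.
from fractions import Fraction

def pixel_aspect_ratio(width, height, display_ratio):
    storage_ratio = Fraction(width, height)
    return display_ratio / storage_ratio

par_43 = pixel_aspect_ratio(720, 480, Fraction(4, 3))    # 8/9: thin pixels
par_169 = pixel_aspect_ratio(720, 480, Fraction(16, 9))  # 32/27: fat pixels
print(par_43, par_169)
```

A PAR below 1 means each stored pixel is displayed narrower than tall (thin); above 1, wider than tall (fat), which is exactly why the same 720x480 raster serves both display shapes.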


Color model and depth

The color model describes the video color representation and maps encoded color values to visible colors reproduced by the system. There are several such representations in common use: typically YIQ is used in NTSC television, YUV is used in PAL television, YDbDr is used by SECAM television and YCbCr is used for digital video. The number of distinct colors a pixel can represent depends on the color depth, expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, etc.). Because the human eye is less sensitive to details in color than in brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block and that same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes.
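The savings quoted above can be verified directly from the J:a:b subsampling notation, which describes a region J pixels wide and 2 lines tall: a is the number of chroma samples in the first row and b in the second. A short sketch of average samples per pixel under that scheme:

```python
# Average samples per pixel for common chroma subsampling schemes.
# In J:a:b notation over a J-pixel-wide, 2-line region: a = chroma
# samples in the first row, b = chroma samples in the second row.
def samples_per_pixel(j, a, b):
    luma = 1.0                      # every pixel keeps its luminance sample
    chroma = 2 * (a + b) / (2 * j)  # two chroma planes (Cb and Cr), shared
    return luma + chroma

print(samples_per_pixel(4, 4, 4))  # 3.0: no subsampling
print(samples_per_pixel(4, 2, 2))  # 2.0: chrominance data halved
print(samples_per_pixel(4, 2, 0))  # 1.5: chrominance data quartered
```

Relative to 4:4:4, the chroma portion drops from 2 samples per pixel to 1 (a 50% chroma reduction for 4:2:2) or to 0.5 (75% for 4:2:0), matching the figures in the text.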


Video quality

Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR) or through subjective video quality assessment using expert observation. Many subjective video quality methods are described in ITU-R Recommendation BT.500. One of the standardized methods is the ''Double Stimulus Impairment Scale'' (DSIS). In DSIS, each expert views an ''unimpaired'' reference video followed by an ''impaired'' version of the same video. The expert then rates the ''impaired'' video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".
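PSNR is computed from the mean squared error between the reference and impaired signals, relative to the maximum possible pixel value. A minimal sketch over flat lists of 8-bit pixel values (a real implementation would operate on whole frames):

```python
# Peak signal-to-noise ratio between a reference and an impaired signal.
# PSNR = 10 * log10(MAX^2 / MSE), in decibels; higher is better.
import math

def psnr(reference, impaired, max_value=255):
    mse = sum((r - i) ** 2 for r, i in zip(reference, impaired)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals: no noise at all
    return 10 * math.log10(max_value ** 2 / mse)

ref = [52, 55, 61, 59, 79, 61, 76, 61]  # toy 8-pixel reference signal
bad = [54, 55, 60, 59, 78, 62, 76, 63]  # lightly impaired version
print(round(psnr(ref, bad), 2))  # about 46.75 dB
```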


Video compression method (digital only)

Uncompressed video delivers maximum quality, but at a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as ''intraframe compression'' and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as ''interframe compression'', including motion compensation and other techniques. The most common modern compression standards are MPEG-2, used for DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, mobile phones (3GP) and the Internet.
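The "very high data rate" of uncompressed video follows directly from multiplying raster size, frame rate, and bits per pixel. A quick sketch (24 bits per pixel assumes 8-bit 4:4:4 color with no subsampling):

```python
# Raw (uncompressed) video bit rate in megabits per second.
# Assumes every pixel of every frame is stored at full color depth.
def uncompressed_mbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e6

sd = uncompressed_mbps(720, 576, 25)    # 576-line SD raster at 25 frame/s
hd = uncompressed_mbps(1920, 1080, 30)  # 1080p at 30 frame/s
print(round(sd), round(hd))  # roughly 249 and 1493 Mb/s
```

Compare these figures with the single-digit Mb/s budgets of broadcast MPEG-2 channels, and the roughly 100:1 compression ratios that GOP-based intraframe plus interframe coding must deliver become apparent.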


Stereoscopic

Stereoscopic video for 3D film and other applications can be displayed using several different methods:
* Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
* Anaglyph 3D, where one channel is overlaid with two color-coded layers. This left-and-right layer technique is occasionally used for network broadcast or recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content.
* One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that synchronize to the video to alternately block the image to each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications such as in a Cave Automatic Virtual Environment, but reduces effective video frame rate by a factor of two.
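The red/cyan anaglyph technique amounts to a simple per-pixel channel merge: the red channel comes from the left-eye image and the green/blue (cyan) channels from the right-eye image, so the colored filters in the glasses route each view to the matching eye. A toy sketch over lists of (R, G, B) tuples, standing in for full image arrays:

```python
# Toy red/cyan anaglyph composition: red from the left-eye view,
# green and blue from the right-eye view. Pixels are (R, G, B) tuples;
# a real implementation would operate on full image arrays.
def anaglyph(left_pixels, right_pixels):
    return [(l[0], r[1], r[2]) for l, r in zip(left_pixels, right_pixels)]

left = [(200, 10, 10), (100, 50, 50)]
right = [(20, 180, 180), (60, 120, 120)]
print(anaglyph(left, right))  # [(200, 180, 180), (100, 120, 120)]
```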


Formats

Different layers of video transmission and storage each provide their own set of formats to choose from. For transmission, there is a physical connector and signal protocol (see List of video connectors). A given physical link can carry certain display standards that specify a particular refresh rate, display resolution, and color space. Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video coding format, of which a number are available.


Analog video

Analog video is a video signal represented by one or more analog signals. Analog color video signals include luminance (Y) and chrominance (C). When combined into one channel, as is the case with NTSC, PAL and SECAM among others, it is called composite video. Analog video may also be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats. Analog video is used in both consumer and professional television production applications. Common analog connection formats include:
* Composite video (single-channel RCA)
* S-Video (2-channel YC)
* Component video (3-channel YPbPr)
* SCART
* VGA
* TRRS phone connector
* D-Terminal


Digital video

Digital video signal formats have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort.


Transport medium

Video can be transmitted or transported in a variety of ways, including wireless terrestrial television as an analog or digital signal, and coaxial cable in a closed-circuit system as an analog signal. Broadcast or studio cameras use a single- or dual-coaxial-cable system using the serial digital interface (SDI). See List of video connectors for information about physical connectors and related signal standards. Video may be transported over networks and other shared digital communications links using, for instance, MPEG transport stream, SMPTE 2022 and SMPTE 2110.


Display standards


Digital television

Digital television broadcasts use the MPEG-2 and other video coding formats and include:
* ATSC – United States, Canada, Mexico, Korea
* Digital Video Broadcasting (DVB) – Europe
* ISDB – Japan
** ISDB-Tb – uses the MPEG-4 video coding format – Brazil, Argentina
* Digital Multimedia Broadcasting (DMB) – Korea


Analog television

Analog television broadcast standards include:
* Field-sequential color system (FCS) – US, Russia; obsolete
* Multiplexed Analogue Components (MAC) – Europe; obsolete
* Multiple sub-Nyquist sampling encoding (MUSE) – Japan
* NTSC – United States, Canada, Japan
** EDTV-II "Clear-Vision" – NTSC extension, Japan
* PAL – Europe, Asia, Oceania
** PAL-M – PAL variation, Brazil
** PAL-N – PAL variation, Argentina, Paraguay and Uruguay
** PALplus – PAL extension, Europe
* RS-343 (military)
* SECAM – France, former Soviet Union, Central Africa
* CCIR Systems A, B, G, H, I and M

An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing metadata and synchronization information. This surrounding margin is known as a ''blanking interval'' or ''blanking region''; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.


Computer displays

Computer display standards specify a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. A list of common resolutions is available.


Recording

Early television was almost exclusively a live medium, with some programs recorded to film for distribution or historical purposes using Kinescope. The analog video tape recorder was commercially introduced in 1951. The following list is in rough chronological order. All formats listed were sold to and used by broadcasters, video producers or consumers, or were important historically.
* 2" Quadruplex videotape (Ampex, 1956)
* VERA (BBC experimental format, ca. 1958)
* 1" Type A videotape (Ampex)
* 1/2" EIAJ (1969)
* 3/4" U-matic (Sony)
* 1/2" Cartrivision (Avco)
* VCR, VCR-LP, SVR
* 1" Type B videotape (Robert Bosch GmbH)
* 1" Type C videotape (Ampex, Marconi and Sony)
* Betamax (Sony)
* VHS (JVC)
* Video 2000 (Philips)
* 2" Helical Scan Videotape (IVC)
* 1/4" CVC (Funai)
* Betacam (Sony)
* HDVS (Sony)
* Betacam SP (Sony)
* Video8 (Sony) (1986)
* S-VHS (JVC) (1987)
* VHS-C (JVC)
* Pixelvision (Fisher-Price)
* UniHi 1/2" HD (Sony)
* Hi8 (Sony) (mid-1990s)
* W-VHS (JVC) (1994)

Digital video tape recorders offered improved quality compared to analog recorders.
* Betacam IMX (Sony)
* D-VHS (JVC)
* D-Theater
* D1 (Sony)
* D2 (Sony)
* D3
* D5 HD
* D6 (Philips)
* Digital-S D9 (JVC)
* Digital Betacam (Sony)
* Digital8 (Sony)
* DV (including DVCPRO)
* HDCAM (Sony)
* HDV
* ProHD (JVC)
* MicroMV
* MiniDV

Optical storage media offered an alternative, especially in consumer applications, to bulky tape formats.
* Blu-ray Disc (Sony)
* China Blue High-definition Disc (CBHD)
* DVD (was Super Density Disc, DVD Forum)
* Professional Disc
* Universal Media Disc (UMD) (Sony)
* Enhanced Versatile Disc (EVD, Chinese government-sponsored)
* HD DVD (NEC and Toshiba)
* HD-VMD
* Capacitance Electronic Disc
* Laserdisc (MCA and Philips)
* Television Electronic Disc (Teldec and Telefunken)
* VHD (JVC)


Digital encoding formats

A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, ''codec'' is a portmanteau of ''encoder'' and ''decoder'', while a device that only compresses is typically called an ''encoder'', and one that only decompresses is a ''decoder''. The compressed data format usually conforms to a standard video coding format. The compression is typically lossy, meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video.
* CCIR 601 (ITU-T)
* H.261 (ITU-T)
* H.263 (ITU-T)
* H.264/MPEG-4 AVC (ITU-T + ISO)
* H.265
* M-JPEG (ISO)
* MPEG-1 (ISO)
* MPEG-2 (ITU-T + ISO)
* MPEG-4 (ISO)
* Ogg Theora
* VP8 (WebM)
* VC-1 (SMPTE)


See also

;General
* Index of video-related articles
* Sound recording and reproduction
* Video editing
* Videography
;Video format
* 360-degree video
* Cable television
* Color television
* Telecine
* Timecode
* Volumetric video
;Video usage
* Closed-circuit television
* Fulldome video
* Interactive video
* Video art
* Video feedback
* Video sender
* Video synthesizer
* Videotelephony
;Video screen recording software
* Bandicam
* CamStudio
* Camtasia
* CloudApp
* Fraps


References


External links

* Programmer's Guide to Video Systems: in-depth technical info on 480i, 576i, 1080i, 720p, etc.
* Format Descriptions for Moving Images