Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but it is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics. Different rendering techniques now exist, such as ray tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience.


Principles of real-time 3D computer graphics

The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering; this expensive operation can take hours or days to render a single frame. Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources.
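
As a minimal sketch of the z-buffer rasterization loop described above, the following C++ program walks every pixel of a framebuffer, uses barycentric edge functions to decide coverage, interpolates a depth value and performs the depth test before writing a color. The 64x64 framebuffer size, the Vertex and Tri structures and the hard-coded screen-space triangle are illustrative assumptions; a real-time system performs this work on rasterizer hardware rather than in a scalar CPU loop, but the principle is the same.

    #include <cstdio>
    #include <vector>

    // Screen-space vertex: pixel position plus a depth value in [0, 1].
    struct Vertex { float x, y, z; };
    struct Tri    { Vertex a, b, c; };

    // Twice the signed area of triangle (p, q, r); the sign tells which side (rx, ry) lies on.
    static float edge(const Vertex& p, const Vertex& q, float rx, float ry) {
        return (q.x - p.x) * (ry - p.y) - (q.y - p.y) * (rx - p.x);
    }

    int main() {
        const int W = 64, H = 64;                      // hypothetical framebuffer size
        std::vector<float>    depth(W * H, 1.0f);      // z-buffer, cleared to "far"
        std::vector<unsigned> color(W * H, 0u);        // color buffer, cleared to black

        // One example triangle, already transformed into screen space by earlier stages.
        Tri t{{10, 10, 0.25f}, {50, 20, 0.50f}, {25, 55, 0.75f}};
        float area = edge(t.a, t.b, t.c.x, t.c.y);

        int fragments = 0;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                float px = x + 0.5f, py = y + 0.5f;    // sample at the pixel centre
                float w0 = edge(t.b, t.c, px, py);     // un-normalized barycentric weight of a
                float w1 = edge(t.c, t.a, px, py);     // ... of b
                float w2 = edge(t.a, t.b, px, py);     // ... of c
                bool covered = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                               (w0 <= 0 && w1 <= 0 && w2 <= 0);
                if (!covered) continue;                // pixel centre lies outside the triangle
                float z = (w0 * t.a.z + w1 * t.b.z + w2 * t.c.z) / area;
                int idx = y * W + x;
                if (z < depth[idx]) {                  // the depth test against the z-buffer
                    depth[idx] = z;
                    color[idx] = 0xFFFFFFu;            // a real shader would texture and shade here
                    ++fragments;
                }
            }
        std::printf("fragments written for this triangle: %d\n", fragments);
    }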


Video game graphics

Real-time graphics optimizes image quality subject to time and hardware constraints. Advances in GPUs and other hardware have increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and current DirectX 11/OpenGL 4.x class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games. Cutscenes are now typically rendered in real time and may be interactive. Although the gap in quality between real-time graphics and traditional offline graphics is narrowing, offline rendering remains much more accurate.


Advantages

Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. In contrast, when graphics are used in films, the director has complete control of what has to be drawn on each frame, which can involve lengthy decision-making; teams of people are typically involved in making these decisions. In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response time is far slower than the input device; this is acceptable because of the immense difference between the (fast) response time of human motion and the (slow) perception speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the Wii Remote) typically take much longer to achieve than comparable advancements in display devices.

Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen, especially ''where'' to draw objects in the scene. These techniques help realistically imitate real-world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism.

Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed. Some parameter adjustments in fractal-generating software may be made while viewing changes to the image in real time.
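
The interaction between input, physics/animation and drawing can be pictured as a per-frame loop. The sketch below is only an illustration in C++: the Character type, the pollInput and render stand-ins and the 60 Hz timestep are hypothetical, chosen to show that the character's position is updated from input and simple physics before each frame is drawn.

    #include <cstdio>

    // A player-controlled character; its position is updated from input and simple
    // physics before every frame is drawn.
    struct Character { float x = 0.0f, vx = 0.0f; };

    // Stand-ins for real subsystems: pollInput would read a controller or keyboard,
    // and render would hand the scene to the graphics pipeline.
    float pollInput(int frame)                  { return (frame < 30) ? 1.0f : 0.0f; }
    void  render(const Character& c, int frame) { std::printf("frame %2d  x = %.2f\n", frame, c.x); }

    int main() {
        Character player;
        const float dt = 1.0f / 60.0f;                // fixed 60 Hz simulation timestep
        for (int frame = 0; frame < 60; ++frame) {    // one second of simulated play
            float input = pollInput(frame);           // 1. read the input device
            player.vx = input * 5.0f;                 // 2. physics/animation update...
            player.x += player.vx * dt;               //    ...moves the character
            render(player, frame);                    // 3. draw the next frame
        }
    }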


Rendering pipeline

The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures, and more.


Architecture

The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization.
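
One way to picture this division is as three functions chained once per frame. The skeleton below is only a conceptual sketch in C++; the type names and stage functions are hypothetical and deliberately empty, since the following sections describe what each stage actually does.

    #include <vector>

    // Conceptual shape of the three stages; the type names and stage functions are
    // hypothetical and empty, and exist only to show how data flows between stages.
    struct Triangle {};          // primitive produced by the application stage
    struct ScreenTriangle {};    // primitive after transformation, projection and clipping
    struct Pixel {};             // output of the rasterizer stage

    std::vector<Triangle>       applicationStage()                                  { return {}; }
    std::vector<ScreenTriangle> geometryStage(const std::vector<Triangle>&)         { return {}; }
    std::vector<Pixel>          rasterizerStage(const std::vector<ScreenTriangle>&) { return {}; }

    int main() {
        for (int frame = 0; frame < 3; ++frame) {        // one pass per rendered frame
            auto primitives = applicationStage();        // scene logic, animation, user input
            auto projected  = geometryStage(primitives); // transform, light, project, clip
            auto pixels     = rasterizerStage(projected);// fill the triangles with colored pixels
            (void)pixels;                                // a real system would display these
        }
    }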


Application stage

The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. In addition to handling user input, this stage may perform processing such as collision detection, speed-up techniques, animation and force feedback. Collision detection, for example, uses algorithms to detect and respond to collisions between virtual objects; the application may calculate new positions for the colliding objects and provide feedback via a force-feedback device such as a vibrating game controller. The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline.
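
As an illustration of application-stage work, the sketch below tests two spheres for collision and pushes one of them apart, the kind of check that might precede a force-feedback response. The Sphere type, the example radii and the separation response are assumptions made for this sketch, not a prescribed implementation.

    #include <cmath>
    #include <cstdio>

    // A sphere-sphere collision test of the kind an application stage might run each
    // frame; the Sphere type and the separation response are assumptions of this sketch.
    struct Vec3   { float x, y, z; };
    struct Sphere { Vec3 center; float radius; };

    bool collide(const Sphere& a, const Sphere& b, float& penetration) {
        float dx = b.center.x - a.center.x;
        float dy = b.center.y - a.center.y;
        float dz = b.center.z - a.center.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        penetration = (a.radius + b.radius) - dist;      // > 0 means the spheres overlap
        return penetration > 0.0f;
    }

    int main() {
        Sphere player{{0.0f, 0.0f, 0.0f}, 1.0f};
        Sphere crate {{1.5f, 0.0f, 0.0f}, 1.0f};
        float pen = 0.0f;
        if (collide(player, crate, pen)) {
            player.center.x -= pen;                      // push the player back out of the crate
            std::printf("collision: player pushed back to x = %.2f\n", player.center.x);
            // here the application could also trigger controller vibration (force feedback)
        }
    }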


Geometry stage

The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs. Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages.


Model and view transformation

Before the final model is shown on the output device, it is transformed into several spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. ''Transformation'' is the general term for the four specific ways of manipulating the shape or position of a point, line or shape.
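
A common example of such a transformation is moving a vertex from model space into world space with a 4x4 homogeneous matrix. The sketch below, with an arbitrary scale-and-translate matrix chosen for illustration, shows the matrix-vector product involved.

    #include <cstdio>

    // Moving a vertex from model space to world space with a 4x4 homogeneous matrix;
    // the scale and translation values are arbitrary examples.
    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[4][4]; };

    Vec4 transform(const Mat4& M, const Vec4& v) {
        float in[4] = {v.x, v.y, v.z, v.w}, out[4] = {0, 0, 0, 0};
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out[r] += M.m[r][c] * in[c];
        return {out[0], out[1], out[2], out[3]};
    }

    int main() {
        // Scale by 2 and translate by (5, 0, -3): a typical model-to-world transform.
        Mat4 model = {{{2, 0, 0,  5},
                       {0, 2, 0,  0},
                       {0, 0, 2, -3},
                       {0, 0, 0,  1}}};
        Vec4 vertex{1.0f, 1.0f, 1.0f, 1.0f};             // w = 1 marks a position
        Vec4 world = transform(model, vertex);
        std::printf("world-space vertex: (%.1f, %.1f, %.1f)\n", world.x, world.y, world.z);
    }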


Lighting

In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis with the y-axis pointing upwards and the x-axis pointing to the right.
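
A simple example of a lighting computation that could be evaluated in view space is Lambertian (diffuse) shading, where the light reaching a surface point falls off with the cosine of the angle between the surface normal and the direction to the light. The surface point, normal and light position in the sketch below are made-up example values.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Lambertian (diffuse) shading of one point, evaluated in view space where the
    // camera sits at the origin looking down -z; all values are made-up examples.
    struct Vec3 { float x, y, z; };

    Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

    int main() {
        Vec3 point  {0.0f, 0.0f, -5.0f};          // surface point in front of the camera
        Vec3 normal {0.0f, 0.0f,  1.0f};          // facing back toward the camera
        Vec3 light  {2.0f, 2.0f, -3.0f};          // point light, also given in view space

        Vec3 toLight  = normalize(sub(light, point));
        float diffuse = std::max(0.0f, dot(normal, toLight));   // Lambert's cosine law
        std::printf("diffuse intensity = %.3f\n", diffuse);     // scales the surface color
    }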


Projection

Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic (also called parallel) projection and perspective projection. The main characteristic of orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection exploits the fact that as the distance between the observer and the model increases, the model appears smaller. Essentially, perspective projection mimics human sight.
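
The shrinking of distant objects comes from the division by depth that perspective projection introduces. The sketch below builds an OpenGL-style perspective matrix by hand (the field of view, aspect ratio and clip planes are arbitrary example values) and projects two points with the same lateral offset but different depths.

    #include <cmath>
    #include <cstdio>

    struct Vec4 { float x, y, z, w; };

    // An OpenGL-style, right-handed perspective transform applied by hand to (x, y, z, 1).
    Vec4 perspective(float fovY, float aspect, float zNear, float zFar, Vec4 v) {
        float f = 1.0f / std::tan(fovY / 2.0f);
        Vec4 clip;
        clip.x = (f / aspect) * v.x;
        clip.y = f * v.y;
        clip.z = ((zFar + zNear) / (zNear - zFar)) * v.z
               + (2.0f * zFar * zNear / (zNear - zFar)) * v.w;
        clip.w = -v.z;                                   // the camera looks down -z
        return clip;
    }

    int main() {
        float fovY = 1.0472f, aspect = 16.0f / 9.0f, zNear = 0.1f, zFar = 100.0f; // ~60 degree fov
        Vec4 nearPoint{1.0f, 0.0f, -2.0f, 1.0f};
        Vec4 farPoint {1.0f, 0.0f, -8.0f, 1.0f};         // same offset, four times farther away
        Vec4 a = perspective(fovY, aspect, zNear, zFar, nearPoint);
        Vec4 b = perspective(fovY, aspect, zNear, zFar, farPoint);
        // The perspective divide (x / w) yields normalized device coordinates.
        std::printf("near point NDC x = %.3f\n", a.x / a.w);   // larger offset on screen
        std::printf("far  point NDC x = %.3f\n", b.x / b.w);   // smaller offset on screen
    }

With these example values the nearer point maps to a normalized device coordinate of about 0.49 and the farther one to about 0.12, which is the "farther looks smaller" behavior described above.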


Clipping

Clipping is the process of removing primitives that lie outside the view volume (the view box) so that the rasterizer stage does not have to process them. Primitives that lie partly inside the view volume are clipped against its boundaries, producing new triangles that are passed on to the next stage.
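
The decision clipping has to make for each triangle can be sketched as follows, working in clip space where a vertex is inside the view volume when -w <= x, y, z <= w. The actual splitting of partially visible triangles into new ones is omitted, and the example triangle is an arbitrary illustration.

    #include <cstdio>

    struct Vec4 { float x, y, z, w; };

    // A vertex is inside the view volume in clip space when -w <= x, y, z <= w.
    bool inside(const Vec4& v) {
        return -v.w <= v.x && v.x <= v.w &&
               -v.w <= v.y && v.y <= v.w &&
               -v.w <= v.z && v.z <= v.w;
    }

    // Trivial reject: all three vertices lie beyond the same clip plane.
    bool allOutsideOnePlane(const Vec4 t[3]) {
        return (t[0].x >  t[0].w && t[1].x >  t[1].w && t[2].x >  t[2].w) ||
               (t[0].x < -t[0].w && t[1].x < -t[1].w && t[2].x < -t[2].w) ||
               (t[0].y >  t[0].w && t[1].y >  t[1].w && t[2].y >  t[2].w) ||
               (t[0].y < -t[0].w && t[1].y < -t[1].w && t[2].y < -t[2].w) ||
               (t[0].z >  t[0].w && t[1].z >  t[1].w && t[2].z >  t[2].w) ||
               (t[0].z < -t[0].w && t[1].z < -t[1].w && t[2].z < -t[2].w);
    }

    int main() {
        Vec4 tri[3] = {{0.2f,  0.1f, 0.5f, 1.0f},
                       {0.8f, -0.3f, 0.5f, 1.0f},
                       {1.4f,  0.0f, 0.5f, 1.0f}};       // third vertex is outside on +x
        if (allOutsideOnePlane(tri))
            std::printf("discard: the triangle cannot be visible\n");
        else if (inside(tri[0]) && inside(tri[1]) && inside(tri[2]))
            std::printf("keep: the triangle is entirely inside the view volume\n");
        else
            std::printf("clip: split against the view volume, producing new triangles\n");
    }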


Screen mapping

The purpose of screen mapping is to convert the coordinates of the primitives that survived the clipping stage into screen (window) coordinates.
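
Concretely, screen mapping scales and offsets normalized device coordinates, where x and y run from -1 to 1, into pixel coordinates of the window. The sketch below assumes a hypothetical 1280x720 window and a y-axis that grows downward, a common but not universal convention.

    #include <cstdio>

    // Mapping a point from normalized device coordinates, where x and y run over
    // [-1, 1], to pixel coordinates; the 1280x720 window is an arbitrary example.
    struct NdcPoint    { float x, y; };
    struct ScreenPoint { float x, y; };

    ScreenPoint screenMap(NdcPoint p, int width, int height) {
        ScreenPoint s;
        s.x = (p.x + 1.0f) * 0.5f * width;         // -1 maps to the left edge, +1 to the right
        s.y = (1.0f - p.y) * 0.5f * height;        // +1 maps to the top edge (y grows downward)
        return s;
    }

    int main() {
        ScreenPoint a = screenMap({ 0.0f, 0.0f}, 1280, 720);
        ScreenPoint b = screenMap({-1.0f, 1.0f}, 1280, 720);
        std::printf("centre of NDC   -> pixel (%.0f, %.0f)\n", a.x, a.y);   // (640, 360)
        std::printf("top-left of NDC -> pixel (%.0f, %.0f)\n", b.x, b.y);   // (0, 0)
    }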


Rasterizer stage

The rasterizer stage converts the remaining graphic elements into pixels, or picture elements, and applies color to them.


See also

* Bounding interval hierarchy
* Demoscene
* Geometry instancing
* Optical feedback
* Quartz Composer
* Real time (media)
* Real-time raytracing
* Video art
* Video display controller


External links


RTR Portal
– a trimmed-down "best of" set of links to resources