Name origin
The "GeForce" name originated from a contest held by Nvidia in early 1999 called "Name That Chip". The company called out to the public to name the successor to the RIVA TNT2 line of graphics boards. There were over 12,000 entries received and 7 winners received a RIVA TNT2 Ultra graphics card as a reward. Brian Burke, senior PR manager at Nvidia, told '' Maximum PC'' in 2002 that "GeForce" originally stood for "Geometry Force" since GeForce 256 was the first GPU for personal computers to calculate the transform-and-lighting geometry, offloading that function from theGraphics processor generations
Graphics processor generations

GeForce 256
Launched in late 1999, the GeForce 256 (NV10) was the first consumer-level PC graphics chip shipped with hardware transform and lighting, offloading that geometry work from the CPU.
GeForce 2 series
Launched in April 2000, the first GeForce2 (NV15) was another high-performance graphics chip. Nvidia moved to a twin-texture-processor-per-pipeline (4x2) design, doubling texture fillrate per clock compared to the GeForce 256. Later, Nvidia released the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a fraction of the cost. The MX was a compelling value in the low/mid-range market segments and was popular with OEM PC manufacturers and users alike. The GeForce 2 Ultra was the high-end model in this series.

GeForce 3 series
Launched in February 2001, the GeForce3 (NV20) introduced programmable vertex and pixel shaders to the GeForce family and to consumer-level graphics accelerators. It had good overall performance and shader support, making it popular with enthusiasts, although it never hit the midrange price point. The ''NV2A'' developed for the Microsoft Xbox game console is a derivative of the GeForce 3.

GeForce 4 series
Launched in February 2002, the then-high-end GeForce4 Ti (NV25) was mostly a refinement of the GeForce3. The biggest advancements included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process-size reduction to increase clock speeds. Another member of the GeForce 4 family, the budget GeForce4 MX, was based on the GeForce2, with the addition of some features from the GeForce4 Ti. It targeted the value segment of the market and lacked pixel shaders. Most of these models used the AGP 4× interface.

GeForce FX series
Launched in 2003, the GeForce FX (NV30) was a huge change in architecture compared to its predecessors. The GPU was designed not only to support the new Shader Model 2 specification but also to perform well on older titles. However, initial models like the GeForce FX 5800 Ultra suffered from weak floating-point shader performance and excessive heat, which required infamously noisy two-slot cooling solutions. Products in this series carry the 5000 model number, as it is the fifth generation of the GeForce, though Nvidia marketed the cards as GeForce FX instead of GeForce 5 to show off "the dawn of cinematic rendering".

GeForce 6 series
Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 support to the GeForce family, while correcting the weak floating-point shader performance of its predecessor. It also implemented high-dynamic-range imaging and introduced SLI (Scalable Link Interface) and PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing).

GeForce 7 series
The seventh-generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series that could support the AGP bus.

GeForce 8 series
Released on November 8, 2006, the eighth-generation GeForce (originally called G80) was the first ever GPU to fully support Direct3D 10.

GeForce 9 series and 100 series
The first product was released on February 21, 2008, less than four months after the initial G92 release; all 9-series designs are simply revisions of existing late 8-series products. The 9800GX2 uses two G92 GPUs, as used in later 8800 cards, in a dual-PCB configuration while still requiring only a single PCI Express x16 slot. The 9800GX2 utilizes two separate 256-bit memory buses, one for each GPU and its respective 512 MB of memory, which equates to a total of 1 GB of memory on the card (although the SLI configuration of the chips necessitates mirroring the frame buffer between the two chips, leaving the card, in effective memory terms, at that of a single 256-bit/512 MB configuration). The later 9800GTX features a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory. Prior to the release, no concrete information was known except that officials claimed the next-generation products had close to 1 TFLOPS of processing power, with the GPU cores still being manufactured on the 65 nm process, and reports about Nvidia downplaying the significance of Direct3D 10.1.

GeForce 200 series and 300 series
Based on the GT200 graphics processor, which consists of 1.4 billion transistors and is codenamed Tesla, the 200 series was launched on June 16, 2008. This generation took the card-naming scheme in a new direction, replacing the series number (such as 8800 for 8-series cards) with the GTX or GTS suffix (which used to go at the end of card names, denoting their 'rank' among other similar models) and then adding model numbers such as 260 and 280 after that. The series features the new GT200 core on a 65 nm die. The first products were the GeForce GTX 260 and the more expensive GeForce GTX 280. The GeForce 310 was released on November 27, 2009, as a rebrand of the GeForce 210. The 300 series cards are rebranded DirectX 10.1-compatible GPUs from the 200 series and were not available for individual purchase.

GeForce 400 series and 500 series
On April 7, 2010, Nvidia released the GeForce GTX 470 and GTX 480, the first cards based on the new Fermi architecture, codenamed GF100; they were the first Nvidia GPUs to utilize 1 GB or more of GDDR5 memory. The GTX 470 and GTX 480 were heavily criticized due to high power use, high temperatures, and very loud noise that were not balanced by the performance offered, even though the GTX 480 was the fastest DirectX 11 card as of its introduction. In November 2010, Nvidia released a new flagship GPU based on an enhanced GF100 architecture (GF110) called the GTX 580. It featured higher performance and lower power utilization, heat, and noise than the preceding GTX 480, and received much better reviews. Nvidia later also released the GTX 590, which packs two GF110 GPUs on a single card.

GeForce 600 series, 700 series and 800M series
In September 2010, Nvidia announced that the successor to the Fermi microarchitecture would be the Kepler microarchitecture, manufactured with the TSMC 28 nm fabrication process. Earlier, Nvidia had been contracted to supply their top-end GK110 cores for use in Oak Ridge National Laboratory's "Titan" supercomputer, leading to a shortage of GK110 cores. After AMD launched its own annual refresh in early 2012, the Radeon HD 7000 series, Nvidia began the release of the GeForce 600 series in March 2012. The GK104 core, originally intended for the mid-range segment of their lineup, became the flagship GTX 680. It introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970. It was quickly followed by the dual-GK104 GTX 690 and the GTX 670, which featured only a slightly cut-down GK104 core and was very close in performance to the GTX 680.

With the GTX Titan, a GK110-based flagship released in February 2013, Nvidia also released GPU Boost 2.0, which allowed the GPU clock speed to increase until a user-set temperature limit was reached, without exceeding a user-specified maximum fan speed (a control-loop sketch appears at the end of this section). The final GeForce 600 series release was the GTX 650 Ti BOOST, based on the GK106 core, in response to AMD's Radeon HD 7790 release.

At the end of May 2013, Nvidia announced the 700 series, which was still based on the Kepler architecture; however, it featured a GK110-based card at the top of the lineup. The GTX 780 was a slightly cut-down Titan that achieved nearly the same performance for two-thirds of the price. It featured the same advanced reference cooler design, but did not have the unlocked double-precision cores and was equipped with 3 GB of memory. At the same time, Nvidia announced ShadowPlay, a screen-capture solution that used an integrated H.264 encoder built into the Kepler architecture that Nvidia had not revealed previously. It could be used to record gameplay without a capture card, with negligible performance decrease compared to software recording solutions, and was available even on the previous-generation GeForce 600 series cards. The software beta for ShadowPlay, however, experienced multiple delays and was not released until the end of October 2013.

A week after the release of the GTX 780, Nvidia announced the GTX 770, a rebrand of the GTX 680. It was followed shortly after by the GTX 760, which was also based on the GK104 core and similar to the GTX 660 Ti. No more 700 series cards were set for release in 2013, although Nvidia announced G-Sync, another previously unmentioned feature of the Kepler architecture, which allowed the GPU to dynamically control the refresh rate of G-Sync-compatible monitors (released in 2014) to combat tearing and judder. In October, however, AMD released the R9 290X, which came in at $100 less than the GTX 780. In response, Nvidia cut the price of the GTX 780 by $150 and released the GTX 780 Ti, which featured a full 2880-core GK110 even more powerful than the GTX Titan, along with enhancements to the power-delivery system that improved overclocking, and managed to pull ahead of AMD's new release.

The GeForce 800M series consists of rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.
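The GPU Boost 2.0 behavior described above amounts to a temperature-targeted control loop. The sketch below is a hypothetical illustration of such a policy, not Nvidia's actual firmware; every name and step size in it is invented:

// Hypothetical sketch of a GPU Boost 2.0-style policy: raise the core
// clock while the user-set temperature target is not exceeded, without
// ever pushing the fan past the user's maximum speed.
struct BoostState {
    int coreClockMHz;   // current core clock
    int tempC;          // last measured GPU temperature
    int fanPercent;     // current fan duty cycle
};

// One iteration of the (assumed) control loop.
void boostStep(BoostState& s, int userTempLimitC, int userMaxFanPercent) {
    if (s.tempC < userTempLimitC) {
        s.coreClockMHz += 13;      // step up one boost bin (size is illustrative)
        if (s.fanPercent < userMaxFanPercent)
            s.fanPercent += 1;     // spend fan headroom to hold temperature
    } else {
        s.coreClockMHz -= 13;      // back off once the temperature target is hit
    }
}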
GeForce 900 series
In March 2013, Nvidia announced that the successor to Kepler would be the Maxwell microarchitecture. It debuted with the GM10x series chips, which emphasized the architecture's new power-efficiency improvements in OEM and low-TDP products: the desktop GTX 750/750 Ti and the mobile GTX 850M/860M. Later that year, Nvidia pushed the TDP envelope with the GM20x chips aimed at power users, skipping the 800 series for desktop entirely and releasing the 900 series of GPUs in September 2014. This was the last GeForce series to support analog video output through DVI-I.

GeForce 10 series
In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture. The resulting GeForce 10 series was announced on 6 May 2016 and released on 27 May 2016. Architectural improvements include the following:
* In Pascal, an SM (streaming multiprocessor) consists of 128 CUDA cores. Kepler packed 192, Fermi 32, and Tesla only 8 CUDA cores into an SM; the GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA cores, an instruction buffer, a warp scheduler, 2 texture mapping units, and 2 dispatch units.
* GDDR5X: a new memory standard supporting 10 Gbit/s data rates, together with an updated memory controller. Only the Nvidia Titan X (and Titan Xp), GTX 1080, GTX 1080 Ti, and GTX 1060 (6 GB version) support GDDR5X. The GTX 1070 Ti, GTX 1070, GTX 1060 (3 GB version), GTX 1050 Ti, and GTX 1050 use GDDR5.
* Unified memory: a memory architecture in which the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine" (see the sketch after this list).
* NVLink: a high-bandwidth bus between the CPU and GPU, and between multiple GPUs, allowing much higher transfer speeds than those achievable with PCI Express; estimated to provide between 80 and 200 GB/s.
* 16-bit (FP16) floating-point operations can be executed at twice the rate of 32-bit ("single precision") floating-point operations, and 64-bit ("double precision") floating-point operations at half the rate of 32-bit operations (versus Maxwell's 1/32 rate).
* A more advanced process node, TSMC's 16 nm FinFET.
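As an illustration of the unified-memory bullet above, the short CUDA sketch below allocates a single buffer with cudaMallocManaged (a real CUDA runtime call) and touches it from both the CPU and the GPU through the same pointer; the kernel, sizes, and values are purely illustrative:

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: the GPU scales data that the CPU wrote and that
// the CPU will read back afterwards, all through one pointer.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;

    // One allocation visible to both processors; on Pascal the Page
    // Migration Engine migrates pages on demand as each side touches them.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU reads and writes
    cudaDeviceSynchronize();                        // wait before the CPU reads

    printf("data[0] = %f\n", data[0]);              // prints 2.000000
    cudaFree(data);
    return 0;
}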
GeForce 20 series and 16 series
In August 2018, Nvidia announced the GeForce successor to Pascal. The new microarchitecture's name was revealed as "Turing" at the SIGGRAPH 2018 conference. This new GPU microarchitecture is aimed at accelerating real-time ray tracing and AI inferencing. It features a new ray-tracing unit (RT Core), which dedicates processors to ray tracing in hardware, and supports the DXR extension in Microsoft DirectX 12. Nvidia claims the new architecture is up to 6 times faster than the older Pascal architecture. A whole new Tensor core design, introduced with the Volta architecture, provides AI deep-learning acceleration and enables Deep Learning Super Sampling (DLSS).

GeForce 30 series
Nvidia officially announced at the GeForce Special Event that the successor to the GeForce 20 series would be the 30 series; it is built on the Ampere microarchitecture and fabricated on Samsung's 8 nm process node.

GeForce 40 series
On September 20, 2022, Nvidia announced its GeForce 40 series graphics cards, based on the Ada Lovelace microarchitecture.

Variants
Mobile GPUs
Since the GeForce 2 series, Nvidia has produced a number of graphics chipsets for notebook computers under the ''GeForce Go'' branding. Most of the features present in the desktop counterparts are present in the mobile ones. These GPUs are generally optimized for lower power consumption and less heat output in order to be used in notebook PCs and small desktops. Beginning with the GeForce 8 series, the ''GeForce Go'' brand was discontinued and the mobile GPUs were integrated with the main line of GeForce GPUs, with their names suffixed with an ''M''. This ended in 2016 with the launch of the laptop GeForce 10 series: Nvidia dropped the ''M'' suffix, opting to unify the branding between their desktop and laptop GPU offerings, as notebook Pascal GPUs are almost as powerful as their desktop counterparts (something Nvidia tested with their "desktop-class" notebook GTX 980 GPU back in 2015). The ''GeForce MX'' brand, previously used by Nvidia for their entry-level desktop GPUs, was revived in 2017 with the release of the GeForce MX150 for notebooks. The MX150 is based on the same GP108 (Pascal) GPU used in the desktop GeForce GT 1030.

Small form factor GPUs
Similar to the mobile GPUs, Nvidia also released a few GPUs in "small form factor" format, for use in all-in-one desktops. These GPUs are suffixed with an ''S'', similar to the ''M'' used for mobile products.

Integrated desktop motherboard GPUs
Beginning with the nForce 4, Nvidia started including onboard graphics solutions in their motherboard chipsets. These onboard graphics solutions were called mGPUs (motherboard GPUs). Nvidia discontinued the nForce range, including these mGPUs, in 2009. After the nForce range was discontinued, Nvidia released their Ion line in 2009, which consisted of an Intel Atom CPU paired with a low-end GeForce 9 series GPU fixed on the motherboard. Nvidia released an upgraded ''Ion 2'' in 2010, this time containing a low-end GeForce 300 series GPU.

Nomenclature
From the GeForce 4 series until the GeForce 9 series, Nvidia used one naming scheme; since the release of the GeForce 100 series of GPUs, the product naming scheme has changed. Earlier cards such as the GeForce4 follow a similar pattern.
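To make the modern scheme concrete, the hypothetical helper below splits a model number such as 1080 into a generation prefix and a two-digit performance tier; the split rule reflects the commonly understood convention, and the function name is invented for illustration:

#include <cstdio>
#include <string>

// Hypothetical helper: read the last two digits of a GeForce model number
// as the performance tier and the leading digits as the generation.
void parseModelNumber(const std::string& model) {
    if (model.size() < 3) return;  // the scheme needs at least three digits
    std::string generation = model.substr(0, model.size() - 2);
    std::string tier = model.substr(model.size() - 2);
    std::printf("GeForce %s: generation %s, performance tier %s\n",
                model.c_str(), generation.c_str(), tier.c_str());
}

int main() {
    parseModelNumber("980");   // generation 9, tier 80 (high end)
    parseModelNumber("1080");  // generation 10, tier 80
    parseModelNumber("1050");  // generation 10, tier 50 (mainstream)
    return 0;
}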
Graphics device drivers

Official proprietary
Nvidia develops and publishes GeForce drivers for Windows 10 x86/x86-64 and later, Linux x86/x86-64/ARMv7-A, and other supported platforms. A kernel module called nvidia-modeset.ko has been available since version 358.09 beta. Support for Nvidia's display controller on the supported GPUs is centralized in nvidia-modeset.ko. Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, G-Sync, etc.) initiate from the various user-mode driver components and flow to nvidia-modeset.ko.
In May 2022, Nvidia announced that it would release a partially open-source driver for the Turing architecture and newer, in order to enhance its ability to be packaged as part of Linux distributions. At launch, Nvidia considered the driver to be alpha quality for consumer GPUs and production-ready for datacenter GPUs. Currently the userspace components of the driver (including OpenGL, Vulkan, and CUDA) remain proprietary. In addition, the open-source components of the driver are a wrapper for the GPU System Processor (GSP) firmware, a proprietary binary blob.

Third-party free and open-source
Community-created, free and open-source drivers exist as an alternative to the drivers released by Nvidia. Open-source drivers are developed primarily for Linux, though there may be ports to other operating systems. The most prominent alternative driver is the reverse-engineered free and open-source ''nouveau'' graphics device driver. Nvidia has publicly announced that it will not provide any support for such additional device drivers for their products, although Nvidia has contributed code to the nouveau driver. Free and open-source drivers support a large portion (but not all) of the features available in GeForce-branded cards. For example, the nouveau driver lacks support for GPU and memory clock frequency adjustments, and for the associated dynamic power management. Also, Nvidia's proprietary drivers consistently perform better than nouveau in various benchmarks. However, as of version 3.16 of the Linux kernel mainline, contributions by Nvidia allowed partial support for GPU and memory clock frequency adjustments to be implemented.

Licensing and privacy issues
The license has common terms against reverse engineering and copying, and it disclaims warranties and liability. Starting in 2016, the GeForce license says Nvidia "SOFTWARE may access, collect non-personally identifiable information about, update, and configure Customer's system in order to properly optimize such system for use with the SOFTWARE." The privacy notice goes on to say, "We are not able to respond to 'Do Not Track' signals set by a browser at this time. We also permit third party online advertising networks and social media companies to collect information... We may combine personal information that we collect about you with the browsing and tracking information collected by these [cookies and beacons] technologies." The software configures the user's system to optimize its use, and the license says, "NVIDIA will have no responsibility for any damage or loss to such system (including loss of data or access) arising from or relating to (a) any changes to the configuration, application settings, environment variables, registry, drivers..."

GeForce Experience
GeForce Experience is a program containing several tools, including Nvidia ShadowPlay. Due to a serious security vulnerability, users were advised to update the application.