GeForce is a brand of graphics processing units (GPUs) designed by Nvidia and marketed for the performance market. As of the GeForce 50 series, there have been nineteen iterations of the design. In August 2017, Nvidia stated that "there are over 200 million GeForce gamers".
The first GeForce products were discrete GPUs designed for add-on graphics boards, intended for the high-margin
PC gaming market, and later diversification of the product line covered all tiers of the PC graphics market, ranging from cost-sensitive
GPUs integrated on motherboards to mainstream add-in retail boards. Most recently, GeForce technology has been introduced into Nvidia's line of embedded application processors, designed for electronic handhelds and mobile handsets.
With respect to discrete GPUs, found in add-in graphics boards, Nvidia's GeForce and AMD's Radeon GPUs are the only remaining competitors in the high-end market. GeForce GPUs are dominant in the general-purpose GPU (GPGPU) market thanks to their proprietary Compute Unified Device Architecture (CUDA). GPGPU expands GPU functionality beyond the traditional rasterization of 3D graphics, turning the GPU into a high-performance computing device able to execute arbitrary program code in the way a CPU does, but with different strengths (highly parallel execution of straightforward calculations) and weaknesses (worse performance for complex branching code).
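The data-parallel model that CUDA exposes can be illustrated with a short sketch. This is a hypothetical, CPU-side Python emulation of how a GPU "kernel" maps one lightweight thread to each array element; actual CUDA code would express the same idea with a `__global__` kernel launched over a grid of thread blocks.

```python
# Illustrative sketch only: emulates the data-parallel "kernel" model that
# CUDA exposes on GeForce GPUs. Each "thread" handles exactly one element,
# a straightforward calculation with no cross-element branching -- the kind
# of workload at which GPUs excel.

def vector_add_kernel(thread_id, a, b, out):
    # One GPU thread's work: a single element-wise operation.
    out[thread_id] = a[thread_id] + b[thread_id]

def launch(kernel, n_threads, *args):
    # A real GPU runs these "threads" in parallel across thousands of cores;
    # here we loop sequentially on the CPU just to show the mapping.
    for tid in range(n_threads):
        kernel(tid, *args)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(a)
launch(vector_add_kernel, len(a), a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

The per-element function is trivially parallel, which is why thousands of simple GPU cores outperform a few complex CPU cores on such workloads.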
Name origin
The "GeForce" name originated from a contest held by Nvidia in early 1999 called "Name That Chip". The company called out to the public to name the successor to the
RIVA TNT2 line of graphics boards. Over 12,000 entries were received, and seven winners received a RIVA TNT2 Ultra graphics card as a reward. Brian Burke, senior PR manager at Nvidia, told ''Maximum PC'' in 2002 that "GeForce" originally stood for "Geometry Force", since the GeForce 256 was the first GPU for personal computers to calculate transform-and-lighting geometry, offloading that function from the CPU.
Graphics processor generations
GeForce 256
GeForce 2 series
Launched in March 2000, the first GeForce2 (NV15) was another high-performance graphics chip. Nvidia moved to a twin texture processor per pipeline (4x2) design, doubling texture fillrate per clock compared to GeForce 256. Later, Nvidia released the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a fraction of the cost. The MX was a compelling value in the low/mid-range market segments and was popular with OEM PC manufacturers and users alike. The GeForce 2 Ultra was the high-end model in this series.
GeForce 3 series
Launched in February 2001, the GeForce3 (NV20) introduced programmable
vertex and pixel shaders to the GeForce family and to consumer-level graphics accelerators. It had good overall performance and shader support, making it popular with enthusiasts, although it never hit the midrange price point. The ''NV2A'' developed for the Microsoft Xbox game console is a derivative of the GeForce 3.
GeForce 4 series
Launched in February 2002, the then-high-end GeForce4 Ti (NV25) was mostly a refinement to the GeForce3. The biggest advancements included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds. Another member of the GeForce 4 family, the budget GeForce4 MX was based on the GeForce2, with the addition of some features from the GeForce4 Ti. It targeted the value segment of the market and lacked pixel shaders. Most of these models used the
AGP 4× interface, but a few began the transition to AGP 8×.
GeForce FX series
Launched in 2003, the GeForce FX (NV30) was a huge change in architecture compared to its predecessors. The GPU was designed not only to support the new Shader Model 2 specification but also to perform well on older titles. However, initial models like the GeForce FX 5800 Ultra suffered from weak
floating point shader performance and excessive heat, which required infamously noisy two-slot cooling solutions. Products in this series carry the 5000 model number, as it is the fifth generation of GeForce, though Nvidia marketed the cards as GeForce FX instead of GeForce 5 to show off "the dawn of cinematic rendering".
GeForce 6 series
Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 support to the GeForce family, while correcting the weak floating point shader performance of its predecessor. It also implemented
high-dynamic-range imaging
and introduced
SLI (Scalable Link Interface) and
PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing).
GeForce 7 series
The seventh generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series that could support the
AGP bus. The design was a refined version of GeForce 6, with the major improvements being a widened pipeline and an increase in clock speed. The GeForce 7 also offers new transparency
supersampling and transparency multisampling anti-aliasing modes (TSAA and TMAA). These new anti-aliasing modes were later enabled for the GeForce 6 series as well. The GeForce 7950 GT featured the highest-performance GPU with an AGP interface in the Nvidia line. This era began the transition to the PCI Express interface.
A 128-bit, eight
render output unit (ROP) variant of the 7800 GTX, called the
RSX Reality Synthesizer, is used as the main GPU in the Sony PlayStation 3.
GeForce 8 series
Released on November 8, 2006, the eighth-generation GeForce (originally called G80) was the first ever GPU to fully support
Direct3D
10. Manufactured using a 90 nm process and built around the new
Tesla microarchitecture, it implemented the
unified shader model
. Initially just the 8800GTX model was launched, while the GTS variant was released months into the product line's life, and it took nearly six months for mid-range and OEM/mainstream cards to be integrated into the 8 series. The die shrink down to
65 nm and a revision to the G80 design, codenamed G92, were implemented into the 8 series with the 8800GS, 8800GT and 8800GTS-512, first released on October 29, 2007, almost one whole year after the initial G80 release.
GeForce 9 series and 100 series
The first product was released on February 21, 2008. Arriving less than four months after the initial G92 release, all 9-series designs are simply revisions of existing late 8-series products. The 9800GX2 uses two G92 GPUs, as used in later 8800 cards, in a dual-PCB configuration while still requiring only a single PCI-Express 16x slot. The 9800GX2 utilizes two separate 256-bit memory buses, one for each GPU and its respective 512 MB of memory, for a total of 1 GB of memory on the card (although the SLI configuration of the chips necessitates mirroring the frame buffer between them, effectively giving the card the memory performance of a 256-bit/512 MB configuration). The later 9800GTX features a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory.
Prior to the release, no concrete information was known, except that officials claimed the next-generation products had close to 1 TFLOPS of processing power, with the GPU cores still manufactured on the 65 nm process, and reports of Nvidia downplaying the significance of Direct3D 10.1. In March 2009, several sources reported that Nvidia had quietly launched a new series of GeForce products, the GeForce 100 series, consisting of rebadged 9-series parts. GeForce 100 series products were not available for individual purchase.
GeForce 200 series and 300 series
Based on the GT200 graphics processor consisting of 1.4 billion transistors, codenamed Tesla, the 200 series was launched on June 16, 2008. The next generation of the GeForce series takes the card-naming scheme in a new direction, by replacing the series number (such as 8800 for 8-series cards) with the GTX or GTS suffix (which used to go at the end of card names, denoting their 'rank' among other similar models), and then adding model-numbers such as 260 and 280 after that. The series features the new GT200 core on a
65 nm die. The first products were the GeForce GTX 260 and the more expensive GeForce GTX 280. The GeForce 310, released on November 27, 2009, is a rebrand of the GeForce 210. The 300 series cards are rebranded DirectX 10.1-compatible GPUs from the 200 series and were not available for individual purchase.
GeForce 400 series and 500 series
On April 7, 2010, Nvidia released the GeForce GTX 470 and GTX 480, the first cards based on the new
Fermi architecture, codenamed GF100; they were the first Nvidia GPUs to utilize 1 GB or more of
GDDR5
memory. The GTX 470 and GTX 480 were heavily criticized due to high power use, high temperatures, and very loud noise that were not balanced by the performance offered, even though the GTX 480 was the fastest DirectX 11 card as of its introduction.
In November 2010, Nvidia released a new flagship GPU based on an enhanced GF100 architecture (GF110), called the GTX 580. It featured higher performance and lower power consumption, heat, and noise than the preceding GTX 480, and received much better reviews. Nvidia later also released the GTX 590, which packs two GF110 GPUs on a single card.
GeForce 600 series, 700 series and 800M series

In September 2010, Nvidia announced that the successor to
Fermi microarchitecture would be the
Kepler microarchitecture, manufactured with the TSMC 28 nm fabrication process. Earlier, Nvidia had been contracted to supply their top-end GK110 cores for use in
Oak Ridge National Laboratory's "Titan" supercomputer, leading to a shortage of GK110 cores. After AMD launched its own annual refresh in early 2012, the Radeon HD 7000 series, Nvidia began the release of the GeForce 600 series in March 2012. The GK104 core, originally intended for the mid-range segment of the lineup, became the flagship GTX 680. It introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970. It was quickly followed by the dual-GK104 GTX 690 and the GTX 670, which featured an only slightly cut-down GK104 core and was very close in performance to the GTX 680.
With the GTX Titan, Nvidia also released GPU Boost 2.0, which allowed the GPU clock speed to increase until a user-set temperature limit was reached, without exceeding a user-specified maximum fan speed. The final GeForce 600 series release was the GTX 650 Ti BOOST, based on the GK106 core, in response to AMD's Radeon HD 7790 release. At the end of May 2013, Nvidia announced the 700 series, which was still based on the Kepler architecture but featured a GK110-based card at the top of the lineup. The GTX 780 was a slightly cut-down Titan that achieved nearly the same performance for two-thirds of the price. It featured the same advanced reference cooler design, but did not have the unlocked double-precision cores and was equipped with 3 GB of memory.
At the same time, Nvidia announced ShadowPlay, a screen-capture solution that used an integrated H.264 encoder built into the Kepler architecture that Nvidia had not revealed previously. It could be used to record gameplay without a capture card, with negligible performance decrease compared to software recording solutions, and was available even on the previous-generation GeForce 600 series cards. The software beta for ShadowPlay, however, experienced multiple delays and would not be released until the end of October 2013. A week after the release of the GTX 780, Nvidia announced the GTX 770 as a rebrand of the GTX 680. It was followed shortly after by the GTX 760, which was also based on the GK104 core and similar to the GTX 660 Ti. No more 700 series cards were set for release in 2013, although Nvidia announced G-Sync, another previously unmentioned feature of the Kepler architecture, which allowed the GPU to dynamically control the refresh rate of G-Sync-compatible monitors (released in 2014) to combat tearing and judder. In October, however, AMD released the R9 290X, which came in at $100 less than the GTX 780. In response, Nvidia cut the price of the GTX 780 by $150 and released the GTX 780 Ti, which featured a full 2,880-core GK110 even more powerful than the GTX Titan, along with enhancements to the power delivery system that improved overclocking, and managed to pull ahead of AMD's new release.
The GeForce 800M series consists of rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.
GeForce 900 series
In March 2013, Nvidia announced that the successor to Kepler would be the
Maxwell microarchitecture. It was released in September 2014 with the GM10x series chips, emphasizing the architecture's new power-efficiency improvements in OEM and low-TDP products: the desktop GTX 750/750 Ti and the mobile GTX 850M/860M. Later that year, Nvidia pushed the TDP with the GM20x chips for power users, skipping the 800 series for desktop entirely, with the 900 series of GPUs.
This was the last GeForce series to support analog video output through DVI-I, although analog display adapters exist that can convert a digital DisplayPort, HDMI, or DVI-D signal to analog.
GeForce 10 series
In March 2014, Nvidia announced that the successor to Maxwell would be the
Pascal microarchitecture. The first cards in the series, the GTX 1080 and GTX 1070, were announced on May 6, 2016, and released several weeks later on May 27 and June 10, respectively. Architectural improvements include the following:
* In Pascal, an SM (streaming multiprocessor) consists of 128 CUDA cores. Kepler packed 192, Fermi 32 and Tesla only 8 CUDA cores into an SM; the GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA Cores, an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units.
* GDDR5X: A new memory standard supporting 10 Gbit/s data rates and an updated memory controller. Only the Nvidia Titan X (and Titan Xp), GTX 1080, GTX 1080 Ti, and GTX 1060 (6 GB version) support GDDR5X. The GTX 1070 Ti, GTX 1070, GTX 1060 (3 GB version), GTX 1050 Ti, and GTX 1050 use GDDR5.
* Unified memory: A memory architecture where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine".
* NVLink: A high-bandwidth bus between the CPU and GPU, and between multiple GPUs. Allows much higher transfer speeds than those achievable with PCI Express; estimated to provide between 80 and 200 GB/s.
* 16-bit (FP16) floating-point operations can be executed at twice the rate of 32-bit ("single precision") floating-point operations, and 64-bit ("double precision") floating-point operations execute at half the rate of 32-bit operations (up from Maxwell's 1/32 rate).
* A more advanced process node, TSMC 16 nm, instead of the older TSMC 28 nm.
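The per-SM figures quoted in the list above can be collected into a small sketch. All numbers here come from the text itself, not from measurement; note that the 128-core figure applies to consumer Pascal SMs, while the GP100 SM is built from two 32-core blocks.

```python
# Per-SM CUDA core counts as stated in the text (figures from the article,
# not independently measured).
cuda_cores_per_sm = {
    "Tesla": 8,
    "Fermi": 32,
    "Kepler": 192,
    "Pascal": 128,  # consumer Pascal SM
}

# GP100 SM: two processing blocks, each with 32 single-precision CUDA cores.
gp100_blocks = 2
gp100_cores_per_block = 32
gp100_sm_cores = gp100_blocks * gp100_cores_per_block

# Throughput relative to FP32 on Pascal GP100, per the list above
# (Maxwell's FP64 rate was only 1/32 of FP32).
fp_rate_vs_fp32 = {"FP16": 2.0, "FP32": 1.0, "FP64": 0.5}

print(gp100_sm_cores)           # 64
print(fp_rate_vs_fp32["FP16"])  # 2.0
```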
GeForce 20 series and 16 series
In August 2018, Nvidia announced the GeForce successor to Pascal. The new microarchitecture name was revealed as "
Turing" at the SIGGRAPH 2018 conference. The new GPU microarchitecture is designed to accelerate real-time ray tracing and AI inferencing. It features a new ray-tracing unit (RT Core) which dedicates processors to ray tracing in hardware. It supports the DXR extension in Microsoft DirectX 12. Nvidia claims the new architecture is up to six times faster than the older Pascal architecture.
The new Tensor core design introduced since Volta brings AI deep-learning acceleration, which allows the use of DLSS (Deep Learning Super Sampling), a new form of anti-aliasing that uses AI to provide crisper imagery with less impact on performance. Turing also changes the integer execution unit, which can execute in parallel with the floating-point data path. A new unified cache architecture that doubles bandwidth compared with previous generations was also announced.
The new GPUs were revealed as the Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000. The high end Quadro RTX 8000 features 4,608 CUDA cores and 576 Tensor cores with 48 GB of VRAM.
Later, during the Gamescom press conference, Nvidia's CEO Jensen Huang unveiled the new GeForce RTX series with the RTX 2080 Ti, 2080, and 2070, which use the Turing architecture. The first Turing cards were slated to ship to consumers on September 20, 2018. Nvidia announced the RTX 2060 on January 6, 2019, at CES 2019.
On July 2, 2019, Nvidia announced the GeForce RTX Super line of cards, a 20 series refresh which comprises higher-spec versions of the RTX 2060, 2070 and 2080. The RTX 2070 and 2080 were discontinued.
In February 2019, Nvidia announced the
GeForce 16 series. It is based on the same Turing architecture used in the GeForce 20 series, but disables the Tensor (AI) and RT (ray tracing) cores to provide more affordable graphics cards for gamers while still attaining higher performance than the respective cards of previous GeForce generations.
Mirroring the RTX Super refresh, Nvidia announced the GTX 1650 Super and 1660 Super cards on October 29, 2019, which replaced their non-Super counterparts.
On June 28, 2022, Nvidia quietly released their GTX 1630 card, which was meant for low-end gamers.
GeForce 30 series
Nvidia officially announced at its GeForce Special Event that the successor to the GeForce 20 series would be the 30 series, built on the Ampere microarchitecture. The event took place on September 1, 2020, and set September 17 as the official release date for the RTX 3080, September 24 for the RTX 3090, and October 29 for the RTX 3070. The final GPU launch of the series was the RTX 3090 Ti, the highest-end Nvidia GPU on the Ampere microarchitecture; it features a fully unlocked GA102 die built on Samsung's 8 nm node due to supply shortages with TSMC. The RTX 3090 Ti has 10,752 CUDA cores, 336 Tensor cores and texture mapping units, 112 ROPs, 84 RT cores, and 24 gigabytes of GDDR6X memory with a 384-bit bus. Compared to the RTX 2080 Ti, the 3090 Ti has 6,400 more CUDA cores. Due to the global chip shortage, the 30 series was controversial, as scalping and high demand caused GPU prices to skyrocket for both the 30 series and AMD's RX 6000 series.
GeForce 40 series
On September 20, 2022, Nvidia announced its GeForce 40 series graphics cards. These launched as the RTX 4090 on October 12, 2022, the RTX 4080 on November 16, 2022, the RTX 4070 Ti on January 3, 2023, the RTX 4070 on April 13, 2023, the RTX 4060 Ti on May 24, 2023, and the RTX 4060 on June 29, 2023. These were built on the
Ada Lovelace
architecture, with current part numbers being "AD102", "AD103", "AD104", "AD106" and "AD107". These parts are manufactured using the TSMC N4 process node, a custom-designed process for Nvidia. At the time, the RTX 4090 was the fastest chip for the mainstream market released by a major company, with around 16,384
CUDA
cores, boost clocks of 2.2 / 2.5 GHz, 24 GB of
GDDR6X
, a 384-bit memory bus, 128 3rd gen
RT cores, 512 4th gen
Tensor
cores,
DLSS 3.0, and a TDP of 450 W. From October to December 2024, the RTX 4090, 4080, 4070, and related variants were officially discontinued, marking the end of a two-year production run, in order to free up production capacity for the coming RTX 50 series.
Notably, a China-only edition of the RTX 4090 was released, named the RTX 4090D (Dragon). The RTX 4090D features a cut-down AD102 die with 14,592 CUDA cores, down from the 16,384 cores of the original 4090. This was primarily because the United States Department of Commerce began enacting export restrictions on the Nvidia RTX 4090 to certain countries in 2023, targeted mainly at China as an attempt to halt its AI development.
The 40 series saw Nvidia re-releasing the 'Super' variant of graphics cards, not seen since the 20 series, as well as being the first generation in Nvidia's lineup to combine both 'Super' and 'Ti' brandings together. This began with the release of the RTX 4070 Super on January 17, 2024, following with the RTX 4070 Ti Super on January 24, 2024, and the RTX 4080 Super on January 31, 2024.
GeForce 50 series (Current)
The GeForce 50 series, based on the
Blackwell microarchitecture, was announced at
CES 2025, with availability starting in January. Nvidia CEO
Jensen Huang
presented prices for the RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090.
Variants
Mobile GPUs
Since the GeForce 2 series, Nvidia has produced a number of graphics chipsets for notebook computers under the ''GeForce Go'' branding. Most of the features present in the desktop counterparts are present in the mobile ones. These GPUs are generally optimized for lower power consumption and less heat output in order to be used in notebook PCs and small desktops.
Beginning with the GeForce 8 series, the ''GeForce Go'' brand was discontinued and the mobile GPUs were integrated with the main line of GeForce GPUs, but their name suffixed with an ''M''. This ended in 2016 with the launch of the laptop GeForce 10 series – Nvidia dropped the ''M'' suffix, opting to unify the branding between their desktop and laptop GPU offerings, as notebook Pascal GPUs are almost as powerful as their desktop counterparts (something Nvidia tested with their "desktop-class" notebook GTX 980 GPU back in 2015).
The ''GeForce MX'' brand, previously used by Nvidia for their entry-level desktop GPUs, was revived in 2017 with the release of the GeForce MX150 for notebooks. The MX150 is based on the same Pascal GP108 GPU as used in the desktop GT 1030, and was quietly released in June 2017.
Small form factor GPUs
Similar to the mobile GPUs, Nvidia also released a few GPUs in "small form factor" format, for use in all-in-one desktops. These GPUs are suffixed with an ''S'', similar to the ''M'' used for mobile products.
Integrated desktop motherboard GPUs
Beginning with the
nForce 4, Nvidia started including onboard graphics solutions in their motherboard chipsets. These were called mGPUs (motherboard GPUs). Nvidia discontinued the nForce range, including these mGPUs, in 2009.
After the nForce range was discontinued, Nvidia released their
Ion
line in 2009, which consisted of an
Intel Atom
CPU partnered with a low-end GeForce 9 series GPU, fixed on the motherboard. Nvidia released an upgraded ''Ion 2'' in 2010, this time containing a low-end GeForce 300 series GPU.
Nomenclature
From the GeForce 4 series until the GeForce 9 series, the naming scheme below was used. Since the release of the GeForce 100 series of GPUs, Nvidia has changed its product naming scheme to the one below.
* Earlier cards such as the GeForce4 follow a similar pattern.
* cf. Nvidia's Performance Graph here.
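As a minimal sketch of the post-GeForce-100 naming convention described above, the helper below (a hypothetical function, not an Nvidia API) splits a model number into its performance-segment prefix, series, and within-series tier; it deliberately ignores suffixes such as Ti or Super:

```python
import re

# Hypothetical helper illustrating the modern GeForce naming scheme:
# an optional prefix (GT/GTX/RTX) marks the performance segment, the
# leading digits give the series, and the last two digits rank the
# card within that series (higher generally means faster).
def parse_geforce_name(name: str) -> dict:
    m = re.fullmatch(r"(?:GeForce\s+)?(GT|GTX|RTX)?\s*(\d{3,4})", name.strip())
    if not m:
        raise ValueError(f"unrecognized GeForce model name: {name!r}")
    prefix, digits = m.group(1), m.group(2)
    return {
        "prefix": prefix,            # e.g. "RTX"
        "series": int(digits[:-2]),  # e.g. 30 for a 3080
        "tier": int(digits[-2:]),    # e.g. 80 for a 3080
    }

print(parse_geforce_name("GeForce RTX 3080"))
# -> {'prefix': 'RTX', 'series': 30, 'tier': 80}
```

Suffix handling (Ti, Super, the mobile ''M'' discussed earlier) would need extra rules, so this is only a sketch of the base scheme.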
Graphics device drivers
Official proprietary
Nvidia develops and publishes GeForce drivers for Windows 10 x86/x86-64 and later, Linux x86/x86-64/ARMv7-A, OS X 10.5 and later, Solaris x86/x86-64, and FreeBSD x86/x86-64. A current version can be downloaded from Nvidia, and most Linux distributions contain it in their own repositories. Nvidia GeForce driver 340.24 from 8 July 2014 supports the EGL interface, enabling support for Wayland in conjunction with this driver. This may be different for the Nvidia Quadro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers. On the same day the Vulkan graphics API was publicly released, Nvidia released drivers that fully supported it. Since 2014, Nvidia has released drivers with optimizations for specific video games concurrent with their release; as of April 2022 it had released 150 such drivers supporting 400 games.
Basic support for the DRM mode-setting interface, in the form of a new kernel module named nvidia-modeset.ko, has been available since version 358.09 beta. Support for Nvidia's display controller on the supported GPUs is centralized in nvidia-modeset.ko. Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, G-Sync, etc.) initiate from the various user-mode driver components and flow to nvidia-modeset.ko.
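On Linux, the DRM layer is exposed through the nvidia-drm module, which sits on top of nvidia-modeset.ko and advertises whether kernel mode setting was enabled via the standard sysfs module-parameter layout. The sketch below (a hypothetical helper, assuming that layout) checks the parameter from Python:

```python
from pathlib import Path

# Minimal sketch: report whether the nvidia-drm kernel module was loaded
# with kernel mode setting enabled. Module parameters appear under
# /sys/module/<name>/parameters/, and boolean parameters read as "Y"/"N";
# nvidia-drm exposes one named "modeset".
def nvidia_kms_enabled(sysfs_root: str = "/sys") -> bool:
    param = Path(sysfs_root) / "module/nvidia_drm/parameters/modeset"
    if not param.exists():  # module not loaded, or parameter not exposed
        return False
    return param.read_text().strip() == "Y"
```

The same pattern works for any boolean module parameter, which is why the sysfs root is left as an argument (it also makes the helper testable without real hardware).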
In May 2022, Nvidia announced that it would release a partially open-source driver for the (GSP-enabled) Turing architecture and newer, in order to make the driver easier to package as part of Linux distributions. At launch, Nvidia considered the driver to be alpha quality for consumer GPUs and production ready for datacenter GPUs. The userspace components of the driver (including OpenGL, Vulkan, and CUDA) remain proprietary. In addition, the open-source components of the driver are only a wrapper (CPU-RM) for the GPU System Processor (GSP) firmware, a RISC-V binary blob that is required for running the open-source driver. The GPU System Processor is a RISC-V coprocessor codenamed "Falcon" that is used to offload GPU initialization and management tasks. The driver itself is split into a host CPU portion (CPU-RM) and a GSP portion (GSP-RM). The Windows 11 and Linux proprietary drivers also support enabling GSP, which can improve performance even in gaming. CUDA supports GSP since version 11.6. Linux kernel 6.7 added GSP support to Nouveau.
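Whether GSP firmware is in use can be inspected from the diagnostic output of `nvidia-smi -q`, which on recent drivers includes a firmware-version line (the exact "GSP Firmware Version" label and the sample value below are assumptions based on current releases, not guaranteed output). A hypothetical parser:

```python
import re

# Hedged sketch: extract the GSP firmware version from `nvidia-smi -q`
# text. When GSP is disabled or unsupported, the field is typically
# absent or reported as "N/A", so the function returns None in that case.
def gsp_firmware_version(nvidia_smi_q_output: str):
    m = re.search(r"GSP Firmware Version\s*:\s*(\S+)", nvidia_smi_q_output)
    if m is None or m.group(1) == "N/A":
        return None
    return m.group(1)

# Illustrative sample line, not real captured output:
sample = "    GSP Firmware Version              : 535.154.05\n"
print(gsp_firmware_version(sample))  # -> 535.154.05
```

Parsing the text output is a convenience only; a production tool would prefer a structured interface such as NVML where available.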
Third-party free and open-source
Community-created, free and open-source drivers exist as an alternative to the drivers released by Nvidia. Open-source drivers are developed primarily for Linux, though there may be ports to other operating systems. The most prominent alternative driver is the reverse-engineered free and open-source ''nouveau'' graphics device driver. Nvidia has publicly announced that it will not provide any support for such additional device drivers for their products, although Nvidia has contributed code to the nouveau driver.
Free and open-source drivers support a large portion (but not all) of the features available in GeForce-branded cards. For example, the nouveau driver lacks support for GPU and memory clock frequency adjustments and the associated dynamic power management. Also, Nvidia's proprietary drivers consistently perform better than nouveau in various benchmarks. However, as of version 3.16 of the mainline Linux kernel, contributions by Nvidia allowed partial support for GPU and memory clock frequency adjustments to be implemented.
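On a Linux system, which of the two driver stacks is active can be read off the loaded kernel modules (the proprietary stack loads a module named `nvidia`, nouveau loads `nouveau`). A small hypothetical helper that classifies `lsmod`-style text:

```python
# Sketch: decide which GeForce kernel driver is loaded by scanning
# lsmod-style output, where the first whitespace-separated column of
# each line is a module name. Feed it the captured text of `lsmod`.
def loaded_geforce_driver(lsmod_output: str):
    modules = {line.split()[0]
               for line in lsmod_output.splitlines()
               if line.split()}
    if "nvidia" in modules:
        return "proprietary"  # Nvidia's official driver stack
    if "nouveau" in modules:
        return "nouveau"      # community reverse-engineered driver
    return None               # neither loaded (or headless system)
```

Keeping the function pure (it takes text rather than running `lsmod` itself) makes it usable on saved logs and easy to test without GPU hardware.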
Licensing and privacy issues
The license has common terms against reverse engineering and copying, and it disclaims warranties and liability.
Starting in 2016 the GeForce license says Nvidia "SOFTWARE may access, collect non-personally identifiable information about, update, and configure Customer's system in order to properly optimize such system for use with the SOFTWARE."
The privacy notice goes on to say, "We are not able to respond to 'Do Not Track' signals set by a browser at this time. We also permit third party online advertising networks and social media companies to collect information... We may combine personal information that we collect about you with the browsing and tracking information collected by these [cookies and beacons] technologies."
The software configures the user's system to optimize its use, and the license says, "NVIDIA will have no responsibility for any damage or loss to such system (including loss of data or access) arising from or relating to (a) any changes to the configuration, application settings, environment variables, registry, drivers, BIOS, or other attributes of the system (or any part of such system) initiated through the SOFTWARE".
GeForce Experience
GeForce Experience is a software suite developed by Nvidia that served as a companion application for PCs equipped with Nvidia graphics cards. Initially released in 2013, it was designed to enhance the gaming experience by providing performance optimization tools, driver management, and various capture and streaming features.
One of its core functions was the ability to optimize game settings automatically based on the user's hardware configuration, helping to strike a balance between visual quality and performance. It also allowed users to manage driver updates seamlessly, particularly through the distribution of "Game Ready Drivers," which were released in sync with major game launches to ensure optimal performance from day one.
GeForce Experience included Nvidia ShadowPlay, a popular feature that enabled gameplay recording and live streaming with minimal performance impact. It also featured Nvidia Ansel, a tool for capturing high-resolution, 360-degree, and HDR in-game screenshots, as well as Nvidia Freestyle, which allowed gamers to apply real-time visual filters. Laptop users benefited from features like Battery Boost, which helped conserve battery life while gaming by intelligently adjusting system performance.
By August 2017, the software had been installed on over 90 million PCs,
making it one of the most widely used applications among gamers. Despite its broad adoption, GeForce Experience faced ongoing criticism for its resource usage, mandatory login requirement, and occasional user experience issues. One major controversy stemmed from a critical security vulnerability discovered before a patch released on March 26, 2019. The vulnerability exposed users to remote code execution, denial of service, and privilege escalation attacks. Additionally, the software was known to force a system restart after installing new drivers, initiating a 60-second countdown that offered no option to cancel or postpone.
On November 12, 2024, Nvidia officially retired GeForce Experience and launched its successor, the Nvidia App, with version 1.0. The new application was designed to modernize the user interface and streamline the experience, offering faster performance, better integration of features, and a more intuitive layout. It consolidated key tools like game optimization, driver updates, and hardware monitoring into a single platform, while also enhancing support for content creators through deeper integration with Nvidia Studio technologies.
The transition consolidated Nvidia's consumer software into a single application aimed at both gamers and creators.
Nvidia App
The Nvidia App is a program intended to replace both GeForce Experience and the Nvidia Control Panel, and can be downloaded from Nvidia's website. In August 2024, it was in a beta version. On November 12, 2024, version 1.0 was released, marking its stable release.
New features include an overhauled user interface, a new in-game overlay, support for ShadowPlay at 120 fps, as well as RTX HDR and RTX Dynamic Vibrance, AI-based in-game filters that respectively enable HDR and increase color saturation in any DirectX 9 (and newer) or Vulkan game.
The Nvidia App also features Auto Tuning, which adjusts the GPU's clock rate based on regular hardware scans to ensure optimal performance.
According to Nvidia, this feature will not damage the GPU and does not void its warranty.
However, it may cause instability issues. The feature is similar to GeForce Experience's "Enable automatic tuning" option, released in 2021, with the difference that the older option was a one-off overclock that did not adjust the GPU's clock speed on a regular basis.
In January 2025, Nvidia added Smooth Motion to the Nvidia App, a feature similar to Frame Generation that generates an extra frame between two natively rendered frames.
Because the feature is driver-based, it also works in games that do not support DLSS's Frame Generation option.
As of its release, the feature is only available on GeForce 50 series GPUs, though Nvidia stated that support for GeForce 40 series GPUs will be added in the future.
External links
GeForce product page on Nvidia's website
GeForce powered games on Nvidia's website
TechPowerUp GPU Specs Database