Launched | May 27, 2016 |
---|---|
Designed by | Nvidia |
Manufactured by | |
Fabrication process | |
Codename(s) | GP10x |
Product Series | |
Desktop | |
Professional/workstation | |
Server/datacenter | |
Specifications | |
L1 cache | 24 KB (per SM) |
L2 cache | 256 KB – 4 MB |
Memory support | |
PCIe support | PCIe 3.0 |
Supported Graphics APIs | |
DirectX | DirectX 12 (12.1) |
Direct3D | Direct3D 12.0 |
Shader Model | Shader Model 6.7 |
OpenCL | OpenCL 3.0 |
OpenGL | OpenGL 4.6 |
CUDA | Compute Capability 6.0 |
Vulkan | Vulkan 1.3 |
Media Engine | |
Encode codecs | |
Decode codecs | |
Color bit-depth | |
Encoder(s) supported | NVENC |
Display outputs | |
History | |
Predecessor | Maxwell |
Successor | Volta (datacenter), Turing (consumer) |
Pascal is the codename for a GPU microarchitecture developed by Nvidia as the successor to the Maxwell architecture. It was first introduced in April 2016 with the announcement of the Tesla P100 (GP100) on April 5, 2016, and is primarily used in the GeForce 10 series, starting with the GeForce GTX 1080 and GTX 1070 (both using the GP104 GPU), released on May 27, 2016, and June 10, 2016, respectively. Pascal was manufactured using TSMC's 16 nm FinFET process [1] and later Samsung's 14 nm FinFET process. [2]
The architecture is named after the 17th-century French mathematician and physicist Blaise Pascal.
In April 2019, Nvidia enabled a software implementation of DirectX Raytracing on Pascal-based cards, starting with the GTX 1060 6 GB, as well as on the GTX 16 series cards, a feature reserved for the Turing-based RTX series up to that point. [3] [4]
In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture. The first Pascal-based GeForce cards were announced on May 6, 2016, and released on May 27 of the same year. The Tesla P100 (GP100 chip) uses a different version of the Pascal architecture than the GeForce GPUs (GP104 chip); the shader units in GP104 have a Maxwell-like design. [5]
Architectural improvements of the GP100 architecture include the following: [6] [7] [8]
Architectural improvements of the GP104 architecture include the following: [5]
A chip is partitioned into Graphics Processor Clusters (GPCs). For the GP104 chips, a GPC encompasses 5 SMs.
A "Streaming Multiprocessor" (SM) is analogous to AMD's Compute Unit (CU). An SM encompasses 128 single-precision ALUs ("CUDA cores") on GP104 chips and 64 single-precision ALUs on GP100 chips. While all CU versions consist of 64 shader processors (i.e., 4 SIMD vector units, each 16 lanes wide), Nvidia experimented with very different numbers of CUDA cores per SM across generations:
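The GPC/SM hierarchy described above can be sanity-checked with a quick back-of-the-envelope calculation. This sketch uses figures for a fully enabled GP104 die (4 GPCs is assumed here for illustration; the GPC and per-SM core counts come from the text):

```python
# Back-of-the-envelope check of the GP104 hierarchy: GPCs → SMs → CUDA cores.
GPCS = 4           # full GP104 die (assumed figure for illustration)
SMS_PER_GPC = 5    # per the text: a GP104 GPC encompasses 5 SMs
CORES_PER_SM = 128 # single-precision ALUs per SM on GP104

total_sms = GPCS * SMS_PER_GPC          # 20 SMs
total_cores = total_sms * CORES_PER_SM  # 2560 CUDA cores, as on the GTX 1080
print(total_sms, total_cores)
```

The same arithmetic with GP100's 64-core SMs explains why that chip needs far more SMs to reach a comparable core count.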
The Polymorph Engine version 4.0 is the unit responsible for tessellation. It corresponds functionally to AMD's geometry processor. It has been moved from the SM to the TPC so that one Polymorph Engine can feed multiple SMs within the TPC. [19]
 | GK104 | GK110 | GM204 (GTX 970) | GM204 (GTX 980) | GM200 | GP104 | GP100 |
---|---|---|---|---|---|---|---|
Dedicated texture cache per SM | 48 KiB | — | — | — | — | — | — |
Texture (graphics or compute) or read-only data (compute only) cache per SM | — | 48 KiB [29] | — | — | — | — | — |
Programmer-selectable shared memory/L1 partitions per SM | 48 KiB shared + 16 KiB L1 (default), 32 KiB shared + 32 KiB L1, or 16 KiB shared + 48 KiB L1 [30] | 48 KiB shared + 16 KiB L1 (default), 32 KiB shared + 32 KiB L1, or 16 KiB shared + 48 KiB L1 [30] | — | — | — | — | — |
Unified L1 cache/texture cache per SM | — | — | 48 KiB [31] | 48 KiB [31] | 48 KiB [31] | 48 KiB [31] | 24 KiB [31] |
Dedicated shared memory per SM | — | — | 96 KiB [31] | 96 KiB [31] | 96 KiB [31] | 96 KiB [31] | 64 KiB [31] |
L2 cache per chip | 512 KiB [31] | 1536 KiB [31] | 1792 KiB [32] | 2048 KiB [32] | 3072 KiB [31] | 2048 KiB [31] | 4096 KiB [31] |
The theoretical single-precision processing power of a Pascal GPU in GFLOPS is computed as 2 (operations per FMA instruction per CUDA core per cycle) × number of CUDA cores × core clock speed (in GHz).
The theoretical double-precision processing power of a Pascal GPU is 1/2 of the single-precision performance on GP100, and 1/32 on GP102, GP104, GP106, GP107, and GP108.
The theoretical half-precision processing power of a Pascal GPU is 2× the single-precision performance on GP100 [12] and 1/64 on GP104, GP106, GP107, and GP108. [18]
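As a worked example of the formulas above, consider the GeForce GTX 1080 (GP104) with 2560 CUDA cores at a 1.607 GHz base clock (figures assumed here for illustration):

```python
# Theoretical throughput of a Pascal GPU, per the formulas above.
def theoretical_gflops(cuda_cores, clock_ghz, ops_per_cycle=2):
    # 2 operations (multiply + add) per FMA instruction, per core, per cycle
    return ops_per_cycle * cuda_cores * clock_ghz

fp32 = theoretical_gflops(2560, 1.607)  # ≈ 8228 GFLOPS single precision
fp64 = fp32 / 32                        # GP104 runs FP64 at 1/32 rate
fp16 = fp32 / 64                        # GP104 runs FP16 at 1/64 rate
print(fp32, fp64, fp16)
```

Substituting GP100's 1/2 FP64 rate and 2× FP16 rate into the same formula reproduces that chip's very different precision profile.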
The Pascal architecture was succeeded in 2017 by Volta in the HPC, cloud computing, and self-driving car markets, and in 2018 by Turing in the consumer and business markets. [33]
Each of those SMs also contains 32 FP64 CUDA cores (giving the 1/2 rate for FP64), and new to the Pascal architecture is the ability to pack two FP16 operations inside a single FP32 CUDA core under the right circumstances.
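The FP64 and FP16 rates described above can be illustrated with Tesla P100 (GP100) figures, 3584 enabled FP32 cores at a 1.48 GHz boost clock (assumed here for illustration):

```python
# GP100 precision rates: half-rate FP64, double-rate packed FP16.
fp32 = 2 * 3584 * 1.48   # ≈ 10609 GFLOPS single precision
fp64 = fp32 / 2          # 32 FP64 cores per 64-core SM → half rate
fp16 = fp32 * 2          # two FP16 ops packed into each FP32 core → double rate
print(fp32, fp64, fp16)
```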