Each of the NVIDIA GPUs is packed with 5,120 CUDA cores and another 640 Tensor cores and can deliver up to 125 TFLOPS of mixed-precision floating point, 15.7 TFLOPS of single-precision floating point, and 7.8 TFLOPS of double-precision floating point. On the two larger sizes, the GPUs are connected together via NVIDIA NVLink 2.0 running at a total data rate of up to 300 GBps. This allows the GPUs to exchange intermediate results and other data at high speed, without having to move it through the CPU or the PCI-Express fabric.
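The quoted TFLOPS figures follow directly from the core counts above, given a few per-clock throughput assumptions that are not in the article: a boost clock of roughly 1530 MHz, 64 fused multiply-adds (128 FLOPs) per Tensor Core per clock, one FP32 FMA per CUDA core per clock, and FP64 running at half the FP32 rate. A quick sanity check in Python:

```python
# Back-of-envelope check of the V100 throughput figures.
# Assumptions (not from the article): ~1530 MHz boost clock; each Tensor Core
# performs a 4x4x4 matrix FMA per clock (64 multiply-adds = 128 FLOPs); each
# CUDA core performs one FP32 FMA (2 FLOPs) per clock; FP64 is half-rate.

BOOST_CLOCK_HZ = 1.53e9   # assumed boost clock
TENSOR_CORES = 640
CUDA_CORES = 5120

# Tensor Cores: 128 FLOPs per core per clock
tensor_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_HZ / 1e12

# FP32: one FMA (2 FLOPs) per CUDA core per clock
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12

# FP64 units run at half the FP32 rate
fp64_tflops = fp32_tflops / 2

print(f"mixed precision: {tensor_tflops:.1f} TFLOPS")  # ~125
print(f"FP32:            {fp32_tflops:.1f} TFLOPS")    # ~15.7
print(f"FP64:            {fp64_tflops:.1f} TFLOPS")    # ~7.8
```

Running the arithmetic reproduces all three headline numbers, which is a useful way to see that the mixed-precision figure comes almost entirely from the Tensor Cores.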