# NVIDIA Apex: Tools for Easy Mixed-Precision Training in PyTorch

Most deep learning frameworks, including PyTorch, train with 32-bit floating point (FP32) arithmetic by default. However, using FP32 for every operation is not essential to achieve full accuracy for many state-of-the-art deep neural networks (DNNs). In 2017, NVIDIA researchers developed a methodology for mixed-precision training, in which most of the network runs in 16-bit floating point (FP16) arithmetic while a small set of numerically sensitive operations stays in FP32, capturing the speed and memory benefits of FP16 without losing accuracy.
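To make this concrete, here is a minimal sketch of a training loop using Apex's `amp` API, assuming Apex is installed and a CUDA device is available; the linear model and synthetic batches are placeholders standing in for a real workload:

```python
import torch
import torch.nn.functional as F
from apex import amp

# Toy model and optimizer standing in for a real training setup.
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# amp.initialize patches the model and optimizer for mixed precision.
# opt_level="O1" runs whitelisted ops (e.g. GEMMs, convolutions) in FP16
# while keeping numerically sensitive ops in FP32.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for step in range(100):
    inputs = torch.randn(32, 128, device="cuda")      # synthetic batch
    targets = torch.randint(0, 10, (32,), device="cuda")

    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)

    # Loss scaling protects small FP16 gradients from underflowing to
    # zero: the loss is multiplied by a scale factor before backward,
    # and gradients are unscaled before the optimizer update.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```

Only two changes to an ordinary FP32 training loop are needed: the `amp.initialize` call and the `amp.scale_loss` context manager around `backward()`.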