# Squeeze the Memory Consumption of Deep Learning

One important theme in deep learning is training deeper and larger nets. While hardware has been upgraded rapidly in recent years, these huge deep-net monsters are always hungry for GPU RAM. Being able to use less memory for the same net also means we can use a larger batch size, which usually gives a higher GPU utilization rate. This article discusses how to squeeze the memory consumption of deep nets.
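To make the batch-size argument concrete, here is a minimal back-of-the-envelope sketch. All numbers (GPU RAM, weight size, per-sample activation size) are hypothetical illustrations, not measurements from any particular net; the point is only that activation memory grows linearly with batch size, so halving per-sample activation memory roughly doubles the batch size that fits.

```python
def max_batch_size(gpu_ram_bytes, weight_bytes, act_bytes_per_sample):
    """Largest batch size whose activations still fit in GPU RAM
    after the (batch-size-independent) weights are allocated."""
    return (gpu_ram_bytes - weight_bytes) // act_bytes_per_sample

GB = 1024 ** 3
MB = 1024 ** 2

# Hypothetical setup: 12 GB GPU, 1 GB of weights,
# 50 MB of activation memory per sample.
baseline = max_batch_size(12 * GB, 1 * GB, 50 * MB)

# If a better allocation strategy halves activation memory per sample:
optimized = max_batch_size(12 * GB, 1 * GB, 25 * MB)

print(baseline, optimized)  # the optimized batch size is twice the baseline
```

Since weights are a fixed cost while activations scale with the batch, any saving on per-sample activation memory translates almost directly into a proportionally larger feasible batch.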
![Memory allocation steps](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/memory/alloc_step.png)