Not every smartphone owner carries a high-end GPU and a power generator in their pocket. For most practical situations, we need compact models with small memory footprints and fast inference times. That said, you might have noticed that many recent advancements in Deep Learning are all about scaling models up to gargantuan proportions. How can we take advantage of these new models while still meeting these constraints?
![Distilling knowledge from Neural Networks to build smaller and faster models](https://blog.floydhub.com/content/images/2019/11/teacherclass.jpeg)