May 21, 2015

There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training, my first baby model (with rather arbitrarily chosen hyperparameters) started to generate very nice-looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times.
![The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/assets/rnn/under2.jpeg)