iamtrask.github.io
Tutorial: DeepMind's Synthetic Gradients. Posted by iamtrask on March 21, 2017. TLDR: In this blogpost, we're going to prototype (from scratch) and learn the intuitions behind DeepMind's recently proposed Decoupled Neural Interfaces Using Synthetic Gradients paper. I typically tweet out new blogposts when they're complete at @iamtrask. Feel free to follow if you'd be interested in reading more in the future.
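To preview the core idea, here is a minimal numpy sketch (my own, not the post's code): each layer owns a small model that predicts the gradient of the loss with respect to its output, so the layer can update immediately instead of waiting for the backward pass, and the gradient model itself is trained against the true gradient once it arrives. The XOR-style toy data, sizes, and learning rate are assumptions.

```python
import numpy as np

np.random.seed(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]], dtype=float)
y = np.array([[0,1,1,0]], dtype=float).T

W1 = np.random.randn(3, 8) * 0.5   # layer 1 weights
W2 = np.random.randn(8, 1) * 0.5   # layer 2 weights
M  = np.zeros((8, 8))              # synthetic-gradient model for layer 1's output
lr = 0.1

for _ in range(20000):
    h = sigmoid(X @ W1)
    # layer 1 updates from its *predicted* gradient right away, without
    # waiting for the rest of the network: the "decoupling" in the paper
    sg = h @ M                              # synthetic dL/dh
    W1 -= lr * X.T @ (sg * h * (1 - h))
    # the rest of the network later produces the *true* gradient for h
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)
    true_dh = d_out @ W2.T
    W2 -= lr * h.T @ d_out
    # the gradient model is trained to match the true gradient
    M -= lr * h.T @ (sg - true_dh)

print(out.round(2))   # should drift toward [[0],[1],[1],[0]] as training succeeds
```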
A Tutorial for Encrypted Deep Learning. Posted by iamtrask on March 17, 2017. TLDR: In this blogpost, we're going to train a neural network that is fully encrypted during training (trained on unencrypted data). The result will be a neural network with two beneficial properties. First, the neural network's intelligence is protected from those who might want to steal it, allowing valuable AIs to be trained…
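The post relies on homomorphic encryption, i.e. arithmetic that works directly on ciphertexts. As a stand-in (the post uses a different scheme), here is a toy Paillier implementation showing the property that makes encrypted training possible: multiplying two ciphertexts adds the underlying plaintexts, and raising a ciphertext to a power scales its plaintext. The tiny primes are for illustration only.

```python
import math, random

# toy Paillier keypair; tiny primes for illustration, utterly insecure
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)                  # Carmichael's lambda for n = p*q
g = n + 1                                     # standard generator choice
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # decryption constant

def encrypt(m):
    r = random.randrange(1, n)    # should be coprime to n; near-certain here
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = encrypt(20), encrypt(22)
print(decrypt(a * b % n2))        # 42: ciphertext product = plaintext sum
print(decrypt(pow(a, 3, n2)))     # 60: ciphertext power = plaintext scaling
```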
An Unofficial Startup Guide. Posted by iamtrask on January 15, 2017. EDIT: A complete revamp of PyTorch was released today (Jan 18, 2017), making this blogpost a bit obsolete. I will update this post with a new Quickstart Guide soon, but for now you should check out their documentation. This Blogpost Will Cover: Part 1: PyTorch Installation, Part 2: Matrices and Linear Algebra in PyTorch, Part 3: …
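Since the post's Part 2 covers matrices and linear algebra in PyTorch, a few basics in the current API (the post predates the 2017 revamp it mentions, so its own snippets may differ):

```python
import torch

a = torch.randn(3, 4)              # random 3x4 matrix
b = torch.randn(4, 2)
c = a @ b                          # matrix multiply, same as torch.mm(a, b)
print(c.shape)                     # torch.Size([3, 2])

x = torch.ones(4, requires_grad=True)
y = (x * x).sum()                  # a simple scalar function of x
y.backward()                       # autograd computes dy/dx
print(x.grad)                      # 2*x, i.e. a tensor of 2s
```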
Learning to Transduce with Unbounded Memory. Posted by iamtrask on February 25, 2016. Summary: I learn best with toy code that I can play with. This tutorial teaches DeepMind's Neural Stack machine via a very simple toy example, a short python implementation. I will also explain my thought process along the way for reading and implementing research papers from scratch, which I hope you will find useful.
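The central data structure is a continuous stack whose push and pop take real-valued "strengths" rather than discrete operations. A minimal numpy sketch of one timestep, following the strength-update and read equations in the Grefenstette et al. paper (variable names are mine):

```python
import numpy as np

def stack_step(V, s, v_t, d_t, u_t):
    # V: stored vectors, s: their strengths in [0,1]
    # v_t: vector to push, d_t: push strength, u_t: pop strength
    # pop: consume u_t of strength from the top down (using the old strengths)
    s = [max(0.0, s_i - max(0.0, u_t - sum(s[i+1:]))) for i, s_i in enumerate(s)]
    # push: the new vector lands on top with strength d_t
    V, s = V + [v_t], s + [d_t]
    # read: blend downward from the top until total strength 1 is used up
    r = sum(min(s[i], max(0.0, 1.0 - sum(s[i+1:]))) * V[i] for i in range(len(V)))
    return V, s, r

a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
V, s, r = stack_step([], [], a, 1.0, 0.0)   # push a
V, s, r = stack_step(V, s, b, 1.0, 0.0)     # push b; read returns b
print(r)
V, s, r = stack_step(V, s, b, 0.0, 1.0)     # pop; read now returns a again
print(r)
```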
Baby steps to your neural network's first memories. Posted by iamtrask on November 15, 2015. Summary: I learn best with toy code that I can play with. This tutorial teaches Recurrent Neural Networks via a very simple toy example, a short python implementation. Chinese Translation, Korean Translation. I'll tweet out (Part 2: LSTM) when it's complete at @iamtrask. Feel free to follow if you'd be interested in reading more in the future.
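The "memory" here is a hidden state that each timestep mixes with the new input. A minimal forward-only numpy sketch of that recurrence (the post trains the full network with backpropagation through time; the sizes below are arbitrary):

```python
import numpy as np

np.random.seed(0)
W_xh = np.random.randn(3, 5) * 0.1   # input -> hidden (sizes are arbitrary)
W_hh = np.random.randn(5, 5) * 0.1   # hidden -> hidden: the recurrence
h = np.zeros(5)                      # the hidden state, i.e. the memory

sequence = np.eye(3)[[0, 2, 1, 0]]   # four one-hot inputs
for x in sequence:
    h = np.tanh(x @ W_xh + h @ W_hh) # new memory blends input with old memory
print(h)                             # depends on every input seen so far
```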
A Neural Network in 13 lines of Python (Part 2 - Gradient Descent). Improving our neural network by optimizing Gradient Descent. Posted by iamtrask on July 27, 2015. Summary: I learn best with toy code that I can play with. This tutorial teaches gradient descent via a very simple toy example, a short python implementation. Followup Post: I intend to write a followup post to this one adding popular features…
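Gradient descent repeatedly nudges each weight downhill along the error surface, scaled by a step size alpha; tuning alpha is most of the game. A two-layer numpy sketch in the spirit of the post (the toy data, sizes, and alpha value are assumptions, not the post's exact 13 lines):

```python
import numpy as np

np.random.seed(1)
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])   # toy inputs
y = np.array([[0,1,1,0]]).T                       # XOR-style targets

alpha = 10                          # the step size the post is about tuning
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1
for _ in range(60000):
    l1 = 1 / (1 + np.exp(-X.dot(syn0)))           # hidden layer (sigmoid)
    l2 = 1 / (1 + np.exp(-l1.dot(syn1)))          # output layer
    l2_delta = (l2 - y) * l2 * (1 - l2)           # error times sigmoid slope
    l1_delta = l2_delta.dot(syn1.T) * l1 * (1 - l1)
    syn1 -= alpha * l1.T.dot(l2_delta)            # descend along the gradient
    syn0 -= alpha * X.T.dot(l1_delta)
print(l2.round(2))                  # should be close to [[0],[1],[1],[0]]
```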
A bare bones neural network implementation to describe the inner workings of backpropagation. Posted by iamtrask on July 12, 2015. Summary: I learn best with toy code that I can play with. This tutorial teaches backpropagation via a very simple toy example, a short python implementation. Edit: Some folks have asked about a followup article, and I'm planning to write one. I'll tweet it out when it's complete at @iamtrask.
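The whole mechanism fits in a few lines: run a forward pass, measure the error, scale it by the sigmoid's slope, and move the weights. A single-layer numpy sketch in the spirit of the post's toy example (data and sizes assumed):

```python
import numpy as np

np.random.seed(1)
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])   # inputs
y = np.array([[0,0,1,1]]).T                       # output = first input column
syn0 = 2 * np.random.random((3, 1)) - 1           # one sigmoid neuron's weights
for _ in range(10000):
    l1 = 1 / (1 + np.exp(-X.dot(syn0)))           # forward pass
    l1_delta = (y - l1) * l1 * (1 - l1)           # error scaled by sigmoid slope
    syn0 += X.T.dot(l1_delta)                     # the backpropagation update
print(l1.round(3))                                # approaches [0, 0, 1, 1]
```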