colah.github.io
Recurrent Neural Networks

Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence. Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to classify what kind of event is happening at every point in a movie…
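As a concrete illustration of that persistence (my sketch, not code from the post), a vanilla recurrent network threads a hidden state h through the sequence, so each input is read in the context of everything seen before it. The names W_hh and W_xh and all the sizes are invented for the example:

import numpy as np

# Vanilla RNN step: the hidden state h persists across time steps,
# so each input is interpreted in the context of everything seen so far.
rng = np.random.default_rng(0)
hidden, inputs = 8, 4
W_hh = rng.normal(0, 0.1, (hidden, hidden))  # recurrent weights
W_xh = rng.normal(0, 0.1, (hidden, inputs))  # input weights
b = np.zeros(hidden)

h = np.zeros(hidden)                      # "memory" carried between steps
for x in rng.normal(size=(5, inputs)):    # a toy sequence of 5 inputs
    h = np.tanh(W_hh @ h + W_xh @ x + b)  # new state depends on old state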
I do not plan to write more of my deep learning articles on this site. Instead, I will be co-editor of Distill, a visual, interactive journal for machine learning research emphasizing human understanding. I believe this will allow me to better serve the community. If you’ve enjoyed my blog, you should check out the first few articles on Distill. I think they’re substantially better than the content…
I love the feeling of having a new way to think about the world. I especially love when there’s some vague idea that gets formalized into a concrete concept. Information theory is a prime example of this. Information theory gives us precise language for describing a lot of things. How uncertain am I? How much does knowing the answer to question A tell me about the answer to question B? How similar…
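To make those two questions concrete (my example, not the post’s): Shannon entropy measures “how uncertain am I?” and mutual information measures “how much does knowing A tell me about B?”. The joint distribution below is an invented toy:

import numpy as np

def entropy(p):
    """Shannon entropy in bits: H(p) = -sum p_i log2 p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution of two binary questions A and B (toy numbers).
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
pA = joint.sum(axis=1)  # marginal distribution of A
pB = joint.sum(axis=0)  # marginal distribution of B

# Mutual information I(A;B) = H(A) + H(B) - H(A,B):
# how much knowing the answer to A reduces uncertainty about B.
mi = entropy(pA) + entropy(pB) - entropy(joint.ravel())
print(f"H(A) = {entropy(pA):.3f} bits, I(A;B) = {mi:.3f} bits")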
An Ad-Hoc Field

Deep learning, despite its remarkable successes, is a young field. While models called artificial neural networks have been studied for decades, much of that work seems only tenuously connected to modern results. It’s often the case that young fields start in a very ad-hoc manner. Later, the mature field is understood very differently than it was understood by its early practitioners…
Introduction

Backpropagation is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years. Beyond its use in deep learning, backpropagation is a powerful…
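The source of that speedup is reverse-mode differentiation: one backward sweep through the computational graph yields the derivative of the output with respect to every input at once, instead of one pass per input as naive differentiation would need. A hand-worked sketch on a tiny graph (my illustration, not the post’s code):

# Reverse-mode differentiation on a tiny computational graph:
# f(a, b) = (a + b) * b.  One backward pass gives df/da and df/db together.
a, b = 2.0, 3.0

# Forward pass: compute and remember intermediate values.
c = a + b        # c = 5
f = c * b        # f = 15

# Backward pass: walk the graph from output to inputs,
# multiplying local derivatives along each path (the chain rule).
df_df = 1.0
df_dc = df_df * b      # d(c*b)/dc = b
df_db = df_df * c      # d(c*b)/db = c  (direct path)
df_da = df_dc * 1.0    # dc/da = 1
df_db += df_dc * 1.0   # dc/db = 1  (path through c)

print(df_da, df_db)    # 3.0, 8.0  (check: f = ab + b^2, so df/da = b, df/db = a + 2b)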
Posted on April 6, 2014 · topology, neural networks, deep learning, manifold hypothesis

Recently, there’s been a great deal of excitement and interest in deep neural networks because they’ve achieved breakthrough results in areas such as computer vision.[1] However, there remain a number of concerns about them. One is that it can be quite challenging to understand what a neural network is really doing…
Visualizing MNIST: An Exploration of Dimensionality Reduction

At some fundamental level, no one understands machine learning. It isn’t a matter of things being too complicated. Almost everything we do is fundamentally very simple. Unfortunately, an innate human handicap interferes with us understanding these simple things. Humans evolved to reason fluidly about two and three dimensions. With some effort…
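One standard way around that handicap, among the techniques the post explores, is dimensionality reduction: project the data down to the two dimensions we can actually see. A minimal sketch using PCA, with scikit-learn’s small digits dataset standing in for MNIST (both choices are mine, not the post’s):

# Project 64-dimensional digit images down to 2D with PCA so a human
# can look at the data; assumes scikit-learn and matplotlib are installed.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

digits = load_digits()                       # 8x8 images, 64 dimensions each
xy = PCA(n_components=2).fit_transform(digits.data)

plt.scatter(xy[:, 0], xy[:, 1], c=digits.target, cmap="tab10", s=8)
plt.colorbar(label="digit class")
plt.title("Digits projected to 2D by PCA")
plt.show()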
Here are all my previous posts:

Calculus on Computational Graphs: Backpropagation - August 31, 2015
Understanding LSTM Networks - August 27, 2015
Visualizing Representations: Deep Learning and Human Beings - January 16, 2015
Groups & Group Convolutions - December 8, 2014
Visualizing MNIST: An Exploration of Dimensionality Reduction - October 9, 2014
Understanding Convolutions - July 13, 2014
Conv Nets: A Modular Perspective…
In a previous post, we explored techniques for visualizing high-dimensional data. Trying to visualize high-dimensional data is, by itself, very interesting, but my real goal is something else. I think these techniques form a set of basic building blocks to try and understand machine learning, and specifically to understand the internal operations of deep neural networks. Deep neural networks are a…
Inceptionism: Going Deeper into Neural Networks (on the Google Research Blog)
Introduction

In the last few years, deep neural networks have led to breakthrough results on a variety of pattern recognition problems, such as computer vision and voice recognition. One of the essential components leading to these results has been a special kind of neural network called a convolutional neural network. At their most basic, convolutional neural networks can be thought of as a kind of…
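A minimal sketch of the “many identical copies of the same neuron” idea behind conv nets (mine, not the post’s): a 1D convolutional layer slides one small neuron across every window of the input, so all positions share the same few weights:

import numpy as np

def conv1d_layer(x, w, b):
    """Apply one neuron (weights w, bias b) at every position of x.
    Each output is the *same* neuron looking at a different window."""
    k = len(w)
    return np.array([np.tanh(x[i:i + k] @ w + b)
                     for i in range(len(x) - k + 1)])

x = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0])  # toy input signal
w = np.array([0.5, -0.5])                      # one shared 2-input neuron
print(conv1d_layer(x, w, b=0.1))               # 5 outputs from 3 parameters

The layer produces five outputs from only three parameters (two weights and a bias); that weight sharing is what makes convolutional networks so efficient to train.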
In a previous post, we built up an understanding of convolutional neural networks, without referring to any significant mathematics. To go further, however, we need to understand convolutions. If we just wanted to understand convolutional neural networks, it might suffice to roughly understand convolutions. But the aim of this series is to bring us to the frontier of convolutional neural networks…
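For concreteness, here is the discrete convolution itself, (f ∗ g)(n) = Σ_m f(m) g(n − m), applied to a probability example in the spirit of the post (the code is my sketch, not the post’s):

import numpy as np

# Discrete convolution: (f*g)[n] = sum over m of f[m] * g[n-m].
# Convolving a die's distribution with itself gives the distribution
# of the sum of two independent dice.
die = np.full(6, 1 / 6)            # P(roll = 1..6), uniform
two_dice = np.convolve(die, die)   # P(sum = 2..12), 11 values
print(two_dice)                    # peaks at 7 with probability 6/36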
Introduction

In the last few years, deep neural networks have dominated pattern recognition. They blew the previous state of the art out of the water for many computer vision tasks. Voice recognition is also moving that way. But despite the results, we have to wonder… why do they work so well? This post reviews some extremely remarkable results in applying deep neural networks to natural language processing…