cs231n.github.io
Table of Contents: Setting up the data and the model · Data Preprocessing · Weight Initialization · Batch Normalization · Regularization (L2/L1/Maxnorm/Dropout) · Loss functions · Summary

Setting up the data and the model: In the previous section we introduced a model of a Neuron, which computes a dot product followed by a non-linearity, and Neural Networks that arrange neurons into layers. Together, these choices define the new form of the score function.
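As a quick illustration of the preprocessing and weight-initialization steps listed above, here is a minimal numpy sketch; the data shapes are hypothetical, and the sqrt(2/n) scaling follows the ReLU-oriented initialization the notes recommend.

```python
import numpy as np

# Toy data: 100 examples, 50 features (hypothetical shapes for illustration).
X = np.random.randn(100, 50)

# Data preprocessing: zero-center each feature, then normalize its scale.
X -= np.mean(X, axis=0)
X /= np.std(X, axis=0)

# Weight initialization: calibrate the variance by fan-in;
# sqrt(2.0/n) is the recommendation for ReLU units.
n = X.shape[1]
W = np.random.randn(n, 10) * np.sqrt(2.0 / n)
b = np.zeros(10)
```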
Table of Contents: Generating some data · Training a Softmax Linear Classifier · Initialize the parameters · Compute the class scores · Compute the loss · Computing the analytic gradient with backpropagation · Performing a parameter update · Putting it all together: Training a Softmax Classifier · Training a Neural Network · Summary

In this section we'll walk through a complete implementation of a toy Neural Network in 2 dimensions.
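To make the walkthrough concrete, below is a condensed numpy sketch of the softmax linear classifier training loop that the section builds up; the toy data, step size, and regularization strength are assumptions for illustration, not the notes' exact code.

```python
import numpy as np

N, D, K = 300, 2, 3                     # toy setup: 300 points, 2-D, 3 classes
X = np.random.randn(N, D)               # stand-in for the section's spiral data
y = np.random.randint(K, size=N)        # stand-in labels

W = 0.01 * np.random.randn(D, K)        # initialize the parameters
b = np.zeros((1, K))
step_size, reg = 1e-0, 1e-3

for i in range(200):
    scores = np.dot(X, W) + b                         # compute the class scores
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)      # softmax probabilities
    loss = -np.log(probs[range(N), y]).mean() + 0.5 * reg * np.sum(W * W)

    dscores = probs                                   # analytic gradient on scores
    dscores[range(N), y] -= 1
    dscores /= N
    dW = np.dot(X.T, dscores) + reg * W               # backpropagate into W, b
    db = dscores.sum(axis=0, keepdims=True)

    W += -step_size * dW                              # perform a parameter update
    b += -step_size * db
```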
Table of Contents: Introduction · Visualizing the loss function · Optimization · Strategy #1: Random Search · Strategy #2: Random Local Search · Strategy #3: Following the gradient · Computing the gradient · Numerically with finite differences · Analytically with calculus · Gradient descent · Summary

Introduction: In the previous section we introduced two key components in the context of the image classification task: a (parameterized) score function mapping the raw image pixels to class scores, and a loss function measuring the quality of a particular set of parameters.
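For the "numerically with finite differences" step, here is a minimal sketch of a centered-difference gradient evaluator in the style the notes use; the function name and the test objective are hypothetical.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Centered differences: df/dx_i ~ (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h; fxph = f(x)      # evaluate f(x + h)
        x[ix] = old - h; fxmh = f(x)      # evaluate f(x - h)
        x[ix] = old                       # restore the original value
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad

# Sanity check against an objective with a known gradient: f(x) = sum(x^2).
x = np.random.randn(5)
print(numerical_gradient(lambda v: np.sum(v ** 2), x))  # should be ~ 2*x
print(2 * x)
```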
(This page is currently in draft form.) Visualizing what ConvNets learn: Several approaches for understanding and visualizing Convolutional Networks have been developed in the literature, partly as a response to the common criticism that the learned features in a Neural Network are not interpretable. In this section we briefly survey some of these approaches and related work, beginning with visualizing the activations and first-layer weights.
This tutorial was originally contributed by Justin Johnson. We will use the Python programming language for all assignments in this course. Python is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing. We expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course on both the Python language and its use for scientific computing.
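As a taste of what the tutorial covers, a few core numpy idioms (arrays, broadcasting, matrix products, boolean indexing) in a minimal sketch:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])   # rank-2 array (matrix)
v = np.array([10.0, 20.0])               # rank-1 array (vector)

print(a.shape)        # (2, 2)
print(a + v)          # broadcasting: v is added to every row of a
print(a.dot(v))       # matrix-vector product -> [ 50. 110.]
print(a[a > 2])       # boolean indexing -> [3. 4.]
```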
Table of Contents: Architecture Overview · ConvNet Layers · Convolutional Layer · Pooling Layer · Normalization Layer · Fully-Connected Layer · Converting Fully-Connected Layers to Convolutional Layers · ConvNet Architectures · Layer Patterns · Layer Sizing Patterns · Case Studies (LeNet / AlexNet / ZFNet / GoogLeNet / VGGNet) · Computational Considerations · Additional References

Convolutional Neural Networks (CNNs / ConvNets).
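One recurring computation in the layer-sizing discussion is the spatial output size of a convolutional layer, (W − F + 2P)/S + 1. A small sketch of that formula, with the AlexNet first-layer numbers from the case studies as a worked example:

```python
def conv_output_size(W, F, S, P):
    """Spatial output size of a conv layer: input width W, receptive
    field (filter) size F, stride S, zero-padding P."""
    assert (W - F + 2 * P) % S == 0, "hyperparameters do not tile the input"
    return (W - F + 2 * P) // S + 1

# AlexNet's first layer: 227x227 input, 11x11 filters, stride 4, no padding.
print(conv_output_size(227, 11, 4, 0))  # -> 55, i.e. a 55x55 output volume
```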
(These notes are currently in draft form and under development.) Table of Contents: Transfer Learning · Additional References

Transfer Learning: In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet), and then use the ConvNet either as an initialization or as a fixed feature extractor for the task of interest.
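The notes describe this recipe at a conceptual level. As one concrete, framework-specific (and therefore purely illustrative) sketch, the "fixed feature extractor" variant might look like this in PyTorch with torchvision, assuming those libraries and a pretrained ResNet-18 are available; the target class count is hypothetical.

```python
import torch.nn as nn
import torchvision.models as models

num_classes = 10                          # hypothetical target task

# Weights pretrained on ImageNet; freeze everything so the ConvNet
# acts as a fixed feature extractor.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer with a fresh classifier for
# the new task; only its parameters will receive gradients.
model.fc = nn.Linear(model.fc.in_features, num_classes)
```

Fine-tuning (the "initialization" variant) would instead leave requires_grad set to True for some or all layers and train the whole network with a small learning rate.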
Table of Contents: Quick intro without brain analogies · Modeling one neuron · Biological motivation and connections · Single neuron as a linear classifier · Commonly used activation functions · Neural Network architectures · Layer-wise organization · Example feed-forward computation · Representational power · Setting number of layers and their sizes · Summary · Additional references

Quick intro: It is possible to introduce neural networks without appealing to brain analogies.
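A minimal numpy sketch of the "single neuron as a linear classifier" idea and two of the activation functions named above; the parameters and input are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)     # tanh is available directly as np.tanh

# One neuron: dot product of weights and inputs plus a bias, squashed by
# a sigmoid -- exactly a binary logistic (linear) classifier.
w, b = np.array([0.5, -1.2, 0.3]), 0.1    # hypothetical parameters
x = np.array([1.0, 2.0, -1.0])            # one input example
p = sigmoid(np.dot(w, x) + b)             # probability of the positive class
print(p)
```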
Table of Contents: Gradient checks · Sanity checks · Babysitting the learning process · Loss function · Train/val accuracy · Weights:Updates ratio · Activation/Gradient distributions per layer · Visualization · Parameter updates · First-order (SGD), momentum, Nesterov momentum · Annealing the learning rate · Second-order methods · Per-parameter adaptive learning rates (Adagrad, RMSProp) · Hyperparameter Optimization · Evaluation
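The update rules named in this table of contents each amount to a couple of lines of numpy. A sketch on a hypothetical 1-D objective f(x) = x², comparing the vanilla SGD update with the momentum update:

```python
def grad(x):                 # gradient of f(x) = x^2
    return 2.0 * x

lr, mu = 0.1, 0.9            # learning rate and momentum coefficient

# Vanilla SGD: step directly down the gradient, x += -lr * dx.
x = 5.0
for _ in range(50):
    x += -lr * grad(x)

# Momentum: a velocity v integrates the gradient, then updates the position.
xm, v = 5.0, 0.0
for _ in range(50):
    v = mu * v - lr * grad(xm)
    xm += v

print(x, xm)   # both approach the minimum at 0
```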
These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition. For questions/concerns/bug reports, please submit a pull request directly to our git repo.