stanford.edu/~shervine
By Afshine Amidi and Shervine Amidi; Japanese translation by Tran Tuan Anh and Yoshiyuki Nakai. Overview Architecture of a traditional CNN Convolutional neural networks, also known as CNNs, are a specific type of neural networks that are generally composed of the following layers: The convolution layer and the pooling layer can be fine-tuned with respect to hyperparameters that are described in the next sections. Types of layer Convolution layer (CONV) The convolution layer (CONV) uses filters that perform convolution operations as it is scanning the input $I$ with respect to its dimensions. Its hyperparameters include the filter size $F$ and the stride $S$. The resulting output $O$ is called a feature map or activation map. Remark: the convolution step can be generalized to the 1D and 3D cases as well. Pooling (POOL) The pooling layer (POOL) is a downsampling operation with some spatial invariance.
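Since the excerpt defines the feature map $O$ in terms of the filter size $F$ and stride $S$, a minimal NumPy sketch may help make the scanning step concrete (the zero-padding parameter and the square input/filter shapes are assumptions added for illustration; the excerpt only names $F$ and $S$):

```python
import numpy as np

def conv_output_size(i, f, s, p=0):
    # Size O of the feature map from input size i, filter size F=f,
    # stride S=s and zero padding p (padding is an assumption here):
    # O = (i - f + 2p) / s + 1.
    return (i - f + 2 * p) // s + 1

def conv2d(image, kernel, stride=1):
    # Naive CONV operation: slide the filter over the input and sum
    # the elementwise products over each patch (no padding).
    f = kernel.shape[0]
    o = conv_output_size(image.shape[0], f, stride)
    out = np.empty((o, o))
    for r in range(o):
        for c in range(o):
            patch = image[r*stride:r*stride+f, c*stride:c*stride+f]
            out[r, c] = np.sum(patch * kernel)
    return out

feature_map = conv2d(np.random.rand(32, 32), np.random.rand(5, 5))
print(feature_map.shape)  # (28, 28), since (32 - 5)/1 + 1 = 28
```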
My twin brother Afshine and I created this set of illustrated Deep Learning cheatsheets covering the content of the CS 230 class, which I TA-ed in Winter 2019 at Stanford. They can (hopefully!) be useful to all future students of this course as well as to anyone else interested in Deep Learning. • Types of layer, filter hyperparameters, activation functions • Object detection, face verification and recognition
Data Science track of the Computational and Mathematical Engineering department; research at the Stanford Vision Lab; TA at Stanford's Computer Science and ICME departments. Centrale Paris engineering curriculum (ECP 17); research at the Center for Visual Computing (CVC) with Professors Evangelia I. Zacharaki and Nikos Paragios. Teaching at MIT. My twin brother Afshine and I built easy-to-digest study guides.
By Afshine Amidi and Shervine Amidi Classification metrics In the context of binary classification, here are the main metrics that are important to track in order to assess the performance of the model. Confusion matrix The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:
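As a sketch of how these quantities can be computed, the following builds a binary confusion matrix and derives precision, recall and F1 from it (the toy labels and the actual-by-predicted layout are assumptions for illustration):

```python
import numpy as np

def binary_confusion_matrix(y_true, y_pred):
    # Rows: actual class (1 then 0); columns: predicted class (1 then 0).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return np.array([[tp, fn], [fp, tn]])

y_true = [1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical predictions
(tp, fn), (fp, tn) = binary_confusion_matrix(y_true, y_pred)
precision = tp / (tp + fp)                  # fraction of predicted positives that are correct
recall    = tp / (tp + fn)                  # fraction of actual positives that are recovered
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)                # 0.75 0.75 0.75
```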
By Afshine Amidi and Shervine Amidi Introduction to Unsupervised Learning Motivation The goal of unsupervised learning is to find hidden patterns in unlabeled data $\{x^{(1)},...,x^{(m)}\}$. Jensen's inequality Let $f$ be a convex function and $X$ a random variable. We have the following inequality: $E[f(X)] \geq f(E[X])$
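A quick Monte Carlo check of the inequality, using the convex function $f(x) = x^2$ (the choice of $f$ and of Gaussian samples is an assumption made only for this demo):

```python
import numpy as np

# Check E[f(X)] >= f(E[X]) for the convex function f(x) = x**2.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)  # samples of X
lhs = np.mean(x**2)      # Monte Carlo estimate of E[f(X)]
rhs = np.mean(x)**2      # f(E[X])
# The gap lhs - rhs estimates Var(X) = 4, so lhs is about 5 vs rhs about 1.
print(lhs >= rhs, lhs, rhs)
```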
By Afshine Amidi and Shervine Amidi Introduction to Supervised Learning Given a set of data points $\{x^{(1)}, ..., x^{(m)}\}$ associated with a set of outcomes $\{y^{(1)}, ..., y^{(m)}\}$, we want to build a classifier that learns how to predict $y$ from $x$. Type of prediction The different types of predictive models can be summed up as follows: regression models predict a continuous outcome (e.g. linear regression), while classifiers predict a class (e.g. logistic regression, SVM).
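As a minimal sketch of this setup, here is a toy classifier that learns to predict $y$ from $x$ (the 1-nearest-neighbour rule and the data are illustrative assumptions, not the cheatsheet's method):

```python
import numpy as np

def fit_1nn(X, y):
    # "Training" for 1-nearest-neighbour is just memorizing the data;
    # prediction returns the outcome of the closest training point.
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    def predict(x_new):
        distances = np.linalg.norm(X - np.asarray(x_new, dtype=float), axis=1)
        return y[np.argmin(distances)]
    return predict

X_train = [[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]]  # x^(i), hypothetical
y_train = [0, 0, 1, 1]                                      # y^(i), hypothetical
predict = fit_1nn(X_train, y_train)
print(predict([0.95, 0.9]))  # -> 1, the label of the nearest neighbours
```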
By Afshine Amidi and Shervine Amidi Neural Networks Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks. Architecture The vocabulary around neural network architectures is described in the figure below: By noting $i$ the $i^{th}$ layer of the network and $j$ the $j^{th}$ hidden unit of the layer, we have $z_j^{[i]} = {w_j^{[i]}}^T x + b_j^{[i]}$, where $w$, $b$ and $z$ denote the weight, bias and output respectively.
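A short sketch of this notation in code, computing $z_j^{[i]}$ for every hidden unit of one layer at once (the sigmoid activation and the layer sizes are assumptions for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input vector
W = rng.normal(size=(3, 4))   # one weight row w_j per hidden unit j
b = rng.normal(size=3)        # one bias b_j per hidden unit j
z = W @ x + b                 # pre-activations: z_j = w_j^T x + b_j
a = sigmoid(z)                # layer output after the activation function
print(a.shape)                # (3,): one activation per hidden unit
```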
My twin brother Afshine and I created this set of illustrated Machine Learning cheatsheets covering the content of the CS 229 class, which I TA-ed in Fall 2018 at Stanford. They can (hopefully!) be useful to all future students of this course as well as to anyone else interested in Machine Learning. Cheatsheet • Loss function, gradient descent, likelihood • Linear models, Support Vector Machines
Tags: python, keras 2, fit_generator, large dataset, multiprocessing. By Afshine Amidi and Shervine Amidi Motivation Have you ever had to load a dataset that was so memory consuming that you wished a magic trick could seamlessly take care of that? Large datasets are increasingly becoming part of our lives, as we are able to harness an ever-growing quantity of data.
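A minimal sketch of the idea behind the post: stream batches from disk so the full dataset never sits in memory (the keras.utils.Sequence API is real, but the file layout and loading logic below are hypothetical, and the post itself targets Keras 2's fit_generator):

```python
import numpy as np
from tensorflow import keras  # assumes a TensorFlow/Keras installation

class LargeDataset(keras.utils.Sequence):
    def __init__(self, ids, labels, batch_size=32):
        self.ids = ids            # list of sample identifiers
        self.labels = labels      # dict: id -> label
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.ids) / self.batch_size))

    def __getitem__(self, index):
        batch_ids = self.ids[index * self.batch_size:(index + 1) * self.batch_size]
        # Hypothetical loader: each id maps to one .npy file on disk.
        X = np.stack([np.load(f"data/{i}.npy") for i in batch_ids])
        y = np.array([self.labels[i] for i in batch_ids])
        return X, y

# model.fit(LargeDataset(train_ids, train_labels), epochs=10,
#           workers=4, use_multiprocessing=True)  # batches loaded in parallel
```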