
『Home - colah's blog』 (colah.github.io): bookmarked entries on Hatena Bookmark

• Understanding LSTM Networks -- colah's blog
  17 users · colah.github.io
  Excerpt: Recurrent Neural Networks. Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence. Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to …
  Technology · 2017/09/27 21:22 · Tags: Deep Learning, LSTM, blog
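
The excerpt above motivates LSTMs: recurrent networks that carry state from one step to the next. As a rough illustration only (not code from the bookmarked post), a single step of a standard LSTM cell can be sketched in NumPy; the weight shapes and the toy input below are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step using the standard gate equations.

    x is the current input; h_prev and c_prev are the previous hidden and
    cell states. W and b cover all four gates stacked together and are
    applied to the concatenation [x; h_prev].
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:4 * H])    # candidate cell update
    c = f * c_prev + i * g         # cell state: old memory kept + new memory added
    h = o * np.tanh(c)             # hidden state exposed to the next step
    return h, c

# Toy usage with random, untrained weights (illustration only).
rng = np.random.default_rng(0)
X_DIM, H_DIM = 3, 4
W = rng.normal(size=(4 * H_DIM, X_DIM + H_DIM))
b = np.zeros(4 * H_DIM)
h, c = np.zeros(H_DIM), np.zeros(H_DIM)
for x in rng.normal(size=(5, X_DIM)):    # a short input sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)
```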
• Distill -- colah's blog
  4 users · colah.github.io
  Excerpt: I do not plan to write more of my deep learning articles on this site. Instead, I will be co-editor of Distill, a visual, interactive journal for machine learning research emphasizing human understanding. I believe this will allow me to better serve the community. If you’ve enjoyed my blog, you should check out the first few articles on Distill. I think they’re substantially better than the content …
  Technology · 2017/03/21 07:30
• Visual Information Theory -- colah's blog
  27 users · colah.github.io
  Excerpt: I love the feeling of having a new way to think about the world. I especially love when there’s some vague idea that gets formalized into a concrete concept. Information theory is a prime example of this. Information theory gives us precise language for describing a lot of things. How uncertain am I? How much does knowing the answer to question A tell me about the answer to question B? How similar …
  Technology · 2015/10/15 09:26 · Tags: machine learning, 機械学習 (machine learning), math, あとで読む (read later), InformationTheory
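
The excerpt frames information theory as a precise language for uncertainty. As a small self-contained illustration (my own sketch, not material from the post), entropy and KL divergence for two toy distributions can be computed directly from their definitions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits: the average surprise of samples from p
    (assumes every probability is strictly positive)."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p)))

def kl_divergence(p, q):
    """KL(p || q) in bits: the extra coding cost paid when data from p is
    encoded with a code optimized for q (assumes positive probabilities)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log2(p / q)))

p = [0.5, 0.25, 0.125, 0.125]    # a toy source distribution
q = [0.25, 0.25, 0.25, 0.25]     # a mismatched uniform model of it
print(entropy(p))                # 1.75 bits
print(kl_divergence(p, q))       # 0.25 bits of overhead
```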
• Neural Networks, Types, and Functional Programming -- colah's blog
  23 users · colah.github.io
  Excerpt: An Ad-Hoc Field. Deep learning, despite its remarkable successes, is a young field. While models called artificial neural networks have been studied for decades, much of that work seems only tenuously connected to modern results. It’s often the case that young fields start in a very ad-hoc manner. Later, the mature field is understood very differently than it was understood by its early practitioners …
  Technology · 2015/09/04 10:12 · Tags: Haskell, DeepLearning, Deep Learning, 設計 (design), AI
• Calculus on Computational Graphs: Backpropagation -- colah's blog
  10 users · colah.github.io
  Excerpt: Introduction. Backpropagation is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years. Beyond its use in deep learning, backpropagation is a powerful …
  Technology · 2015/09/01 21:38 · Tags: 機械学習 (machine learning)
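
The excerpt describes backpropagation as reverse-mode differentiation on a computational graph: one backward sweep from the output yields the derivative with respect to every input at once. A minimal worked sketch (my own illustration, using a tiny graph such as e = (a + b) * (b + 1), not code from the post):

```python
# Reverse-mode differentiation by hand on the graph e = (a + b) * (b + 1).
# The forward pass evaluates each node; the backward pass walks the graph
# once, multiplying local derivatives along each edge (the chain rule) and
# summing over the paths that share an input.

def forward_backward(a, b):
    # Forward pass: intermediate node values.
    c = a + b
    d = b + 1
    e = c * d

    # Backward pass: de/d(node), starting from the output.
    de_dc = d                           # d(c*d)/dc = d
    de_dd = c                           # d(c*d)/dd = c
    de_da = de_dc * 1.0                 # dc/da = 1
    de_db = de_dc * 1.0 + de_dd * 1.0   # b feeds both c and d, so its paths add
    return e, de_da, de_db

e, grad_a, grad_b = forward_backward(a=2.0, b=1.0)
print(e, grad_a, grad_b)                # 6.0 2.0 5.0
```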
• Understanding LSTM Networks -- colah's blog
  5 users · colah.github.io
  Excerpt: Recurrent Neural Networks. Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence. Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to …
  Technology · 2015/08/29 05:42 · Tags: 機械学習 (machine learning), あとで読む (read later)
• Understanding LSTM Networks -- colah's blog
  95 users · colah.github.io
  Excerpt: Recurrent Neural Networks. Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence. Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to …
  Technology · 2015/08/28 11:59 · Tags: lstm, RNN, DeepLearning, 機械学習 (machine learning), deep learning, tensorflow, ml
• Neural Networks, Manifolds, and Topology -- colah's blog
  5 users · colah.github.io
  Excerpt: Posted on April 6, 2014 (topology, neural networks, deep learning, manifold hypothesis). Recently, there’s been a great deal of excitement and interest in deep neural networks because they’ve achieved breakthrough results in areas such as computer vision. However, there remain a number of concerns about them. One is that it can be quite challenging to understand what a neural network is really doing …
  Technology · 2015/06/13 11:20
• Visualizing MNIST: An Exploration of Dimensionality Reduction - colah's blog
  5 users · colah.github.io
  Excerpt: At some fundamental level, no one understands machine learning. It isn’t a matter of things being too complicated. Almost everything we do is fundamentally very simple. Unfortunately, an innate human handicap interferes with us understanding these simple things. Humans evolved to reason fluidly about two and three dimensions. With some …
  Technology · 2015/02/19 03:04 · Tags: 可視化 (visualization), 機械学習 (machine learning), 資料 (reference)
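
The bookmarked post is about projecting high-dimensional digit images down to two dimensions so they can be looked at. As an illustrative sketch only (assuming scikit-learn is available, and using its small built-in 8x8 digits set as a stand-in for MNIST), a PCA projection to 2-D might look like this:

```python
# Project handwritten digit images into 2-D with PCA so they can be plotted.
# scikit-learn's built-in digits dataset (8x8 images) stands in for MNIST here;
# the idea is the same, only the image resolution differs.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()                        # data: (1797, 64), target: labels 0-9
points_2d = PCA(n_components=2).fit_transform(digits.data)

print(points_2d.shape)                        # (1797, 2): one 2-D point per image
# To visualize, color each point by its label, e.g. with matplotlib:
# plt.scatter(points_2d[:, 0], points_2d[:, 1], c=digits.target)
```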
• Archives - colah's blog
  4 users · colah.github.io
  Excerpt: Here are all my previous posts: Calculus on Computational Graphs: Backpropagation - August 31, 2015; Understanding LSTM Networks - August 27, 2015; Visualizing Representations: Deep Learning and Human Beings - January 16, 2015; Groups & Group Convolutions - December 8, 2014; Visualizing MNIST: An Exploration of Dimensionality Reduction - October 9, 2014; Understanding Convolutions - July 13, 2014; Conv …
  Technology · 2015/01/17 20:29
• Visualizing Representations: Deep Learning and Human Beings - colah's blog
  15 users · colah.github.io
  Excerpt: In a previous post, we explored techniques for visualizing high-dimensional data. Trying to visualize high dimensional data is, by itself, very interesting, but my real goal is something else. I think these techniques form a set of basic building blocks to try and understand machine learning, and specifically to understand the internal operations of deep neural networks. Deep neural networks are a …
  Technology · 2015/01/17 19:58 · Tags: 可視化 (visualization), 機械学習 (machine learning), 勉強 (study)
• Home - colah's blog
  31 users · colah.github.io
  Excerpt: Inceptionism: Going Deeper into Neural Networks, on the Google Research Blog
  Technology · 2014/12/11 08:41 · Tags: 機械学習 (machine learning), visualization
• Visualizing MNIST: An Exploration of Dimensionality Reduction - colah's blog
  18 users · colah.github.io
  Excerpt: At some fundamental level, no one understands machine learning. It isn’t a matter of things being too complicated. Almost everything we do is fundamentally very simple. Unfortunately, an innate human handicap interferes with us understanding these simple things. Humans evolved to reason fluidly about two and three dimensions. With some …
  Technology · 2014/11/03 22:07 · Tags: 機械学習 (machine learning)
• Conv Nets: A Modular Perspective - colah's blog
  6 users · colah.github.io
  Excerpt: Introduction. In the last few years, deep neural networks have led to breakthrough results on a variety of pattern recognition problems, such as computer vision and voice recognition. One of the essential components leading to these results has been a special kind of neural network called a convolutional neural network. At its most basic, convolutional neural networks can be thought of as a kind of …
  Technology · 2014/09/28 20:29
• Understanding Convolutions - colah's blog
  12 users · colah.github.io
  Excerpt: In a previous post, we built up an understanding of convolutional neural networks, without referring to any significant mathematics. To go further, however, we need to understand convolutions. If we just wanted to understand convolutional neural networks, it might suffice to roughly understand convolutions. But the aim of this series is to bring us to the frontier of convolutional neural networks …
  Technology · 2014/07/18 09:28 · Tags: 文章 (writing)
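
This post is about the convolution operation itself. As a tiny numerical illustration (my own sketch, not taken from the post), a 1-D discrete convolution can be computed directly from its definition and checked against NumPy's built-in np.convolve:

```python
import numpy as np

def conv1d(a, w):
    """Full 1-D discrete convolution: (a * w)[n] = sum over k of a[k] * w[n - k]."""
    out = np.zeros(len(a) + len(w) - 1)
    for k, a_k in enumerate(a):
        for j, w_j in enumerate(w):
            out[k + j] += a_k * w_j    # each pair of input positions feeds one output slot
    return out

a = np.array([1.0, 2.0, 3.0])
w = np.array([0.25, 0.5, 0.25])        # a small smoothing kernel
print(conv1d(a, w))                    # [0.25 1.   2.   2.   0.75]
print(np.convolve(a, w))               # same result from NumPy's built-in routine
```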
• Deep Learning, NLP, and Representations - colah's blog
  26 users · colah.github.io
  Excerpt: Introduction. In the last few years, deep neural networks have dominated pattern recognition. They blew the previous state of the art out of the water for many computer vision tasks. Voice recognition is also moving that way. But despite the results, we have to wonder… why do they work so well? This post reviews some extremely remarkable results in applying deep neural networks to natural language …
  Technology · 2014/07/09 08:16 · Tags: 機械学習 (machine learning), ロボット (robots), あとで読む (read later)
• Neural Networks, Manifolds, and Topology -- colah's blog
  33 users · colah.github.io
  Excerpt: Posted on April 6, 2014 (topology, neural networks, deep learning, manifold hypothesis). Recently, there’s been a great deal of excitement and interest in deep neural networks because they’ve achieved breakthrough results in areas such as computer vision. However, there remain a number of concerns about them. One is that it can be quite challenging to understand what a neural network is really doing …
  Technology · 2014/04/09 11:54 · Tags: Topology, Deep Learning, DeepLearning, 機械学習 (machine learning), 学習 (learning), 統計 (statistics), 資料 (reference)
