"deep learning"の検索結果121 - 160 件 / 180件

  • DeepMind Researchers Introduce Epistemic Neural Networks (ENNs) For Uncertainty Modeling In Deep Learning

    Deep learning algorithms are widely used in numerous AI applications because of their flexibility and computational scalability, making them suitable for complex applications. However, most deep learning methods today neglect epistemic uncertainty related to knowledge, which is crucial for safe and fair AI. A new DeepMind study has provided a way for quantifying epistemic uncertainty, along with ne

  • A few favorite recipes in computer vision & deep learning

    A few days before writing this blog post, I tweeted: Some recent favorite recipes (#CV & #DL): 👉Have loads of labeled data? Try improving your image classifier with Supervised Contrastive Learning. 👉Don't have loads of labeled data but loads of unlabeled data? Try SimCLRv2. 👉Just want to fine-tune? Try BigTransfer. 1/3 — Sayak Paul (@RisingSayak) July 22, 2020 In this blog post, I will expand on

  • Amazon SageMaker Simplifies Training Deep Learning Models With Billions of Parameters | Amazon Web Services

    Today, I’m extremely happy to announce that Amazon SageMaker simplifies the training of very large deep learning models that were previously difficult to train due to hardware limitations. In the last 10 years, a subset of machine learning named deep learning (DL) has taken the world by storm. Based

  • "Deep Learning from Scratch 2" self-study notes (part 20): volume 2, natural language processing - Qiita

      import numpy as np

      class MatMul:
          def __init__(self, W):
              print('W in MatMul.__init__:', type(W), '\n', W)
              self.params = [W]
              print('params in MatMul.__init__:', type(self.params), '\n', self.params)
              self.grads = [np.zeros_like(W)]
              print('grads in MatMul.__init__:', type(self.grads), '\n', self.grads)
              self.x = None

          def forward(self, x):
              W, = self.params
              print('W in MatMul.forward:', type(W), '\n', W)
              out = np.dot(x, W)
              self.x = x
              return out

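    A minimal usage sketch for the reconstructed layer above (the shapes are my own toy choice, not from the article):

      W = np.random.randn(4, 3)                    # weight matrix
      layer = MatMul(W)
      out = layer.forward(np.random.randn(2, 4))   # out.shape == (2, 3)
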
  • [Rust] Converting an ONNX model created with PyTorch and using it with Burn [Deep Learning] | DevelopersIO

    Introduction: burn is a deep learning framework for Rust. It appears to be under active development, and it is a promising project. The published MNIST demo is here. This time we use burn to convert an existing ONNX-format model into a burn model and run it. Burn? burn is a fairly new deep learning framework released in 2021. From a quick trial, it feels close to PyTorch. burn's features are as follows. Tensor: a tensor is the basic data structure when working with a deep learning framework, used to represent multidimensional numeric data. As usual, burn uses a Tensor struct, so anyone familiar with existing frameworks should find it easy to pick up. Backend: bu

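    The article starts from an ONNX file exported from PyTorch. A minimal sketch of that export step (the toy model is hypothetical; the article converts its own network):

      import torch
      import torch.nn as nn

      # Hypothetical stand-in model; the article uses its own network.
      model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
      model.eval()

      dummy = torch.randn(1, 1, 28, 28)              # example input that fixes graph shapes
      torch.onnx.export(model, dummy, "model.onnx")  # ONNX file that burn would then import
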
  • GitHub - webdataset/webdataset: A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch.

    WebDataset format files are tar files, with two conventions: within each tar file, files that belong together and make up a training sample share the same basename when stripped of all filename extensions; the shards of a tar file are numbered like something-000000.tar to something-012345.tar, usually specified using brace notation something-{000000..012345}.tar. You can find a longer, more detailed

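    A minimal reading sketch following the README's conventions (the shard URL and the jpg/cls key names are illustrative):

      import webdataset as wds

      # Brace notation expands to shards something-000000.tar ... something-000009.tar.
      url = "something-{000000..000009}.tar"

      dataset = (
          wds.WebDataset(url)
          .decode("rgb")           # decode image files into float RGB arrays
          .to_tuple("jpg", "cls")  # files sharing a basename become one (image, label) sample
      )

      for image, label in dataset:
          break  # first training sample
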
  • Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges

    The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simpl

  • MIT 6.S191: Introduction to Deep Learning

    Course lectures for MIT Introduction to Deep Learning. http://introtodeeplearning.com

  • Recent Visual Odometry with Deep Learning

    Slides used in an in-house computer vision reading group, summarizing Visual Odometry methods using Deep Learning published since 2017.

  • GitHub - NVIDIA-Merlin/NVTabular: NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems.

    NVTabular is a feature engineering and preprocessing library for tabular data that is designed to easily manipulate terabyte scale datasets and train deep learning (DL) based recommender systems. It provides high-level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS Dask-cuDF library. NVTabular is a component of NVIDIA Merlin, an open source framework for build

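    A minimal sketch of the high-level abstraction the README describes (column names and the input file are hypothetical, not taken from the repo):

      import nvtabular as nvt

      # The >> operator chains preprocessing ops onto lists of column names.
      cat_features = ["user_id", "item_id"] >> nvt.ops.Categorify()
      cont_features = ["price"] >> nvt.ops.Normalize()

      workflow = nvt.Workflow(cat_features + cont_features)
      dataset = nvt.Dataset("interactions.parquet")   # hypothetical input file
      workflow.fit(dataset)                           # compute statistics on the GPU
      workflow.transform(dataset).to_parquet("processed/")
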
  • The Modern Mathematics of Deep Learning

    We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern: the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surpr

  • Deep learning to translate between programming languages

    Migrating a codebase from an archaic programming language such as COBOL to a modern alternative like Java or C++ is a difficult, resource-intensive task that requires expertise in both the source and target languages. COBOL, for example, is still widely used today in mainframe systems around the world, so companies, governments, and others often must choose whether to manually translate their code

  • Baselines for Uncertainty and Robustness in Deep Learning

  • Overview — deep learning for molecules & materials

    Contributors: Thank you to contributors for offering suggestions, identifying errors, and helping improve this book! Twitter handles, if available. Contributed chapter: Mehrad Ansari (@MehradAnsari), Sam Cox (@SamCox822), Heta Gandhi (@gandhi_heta), Josh Rackers (@JoshRackers). Contributed content to chapter: Geemi Wellawatte (@GWellawatte). Substantial feedback on content: Lily Wang (@lilyminium), Marc

  • Speeding up Deep Learning on the Raspberry Pi 4 CPU - Qiita

    We are a volunteer group aiming to build a startup whose core competence is embedded software optimization that draws out the hardware performance of multicore CPUs and SIMD architectures. We are exploring how much Deep Learning can be accelerated using only the CPU of a Raspberry Pi 3/4. In the past we targeted frameworks such as Chainer and darknet; currently we are working on speeding up ONNX Runtime. The results so far are as follows. @onnxruntime on RPi4 (CPU only): MobileNetV3 (image classification), MobileNetV2-SSDLite (image detection), original vs. accelerated. #RaspberryPi #Python #DeepLearning https://t.co/wvBLn9Tf

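    A minimal sketch of CPU inference with ONNX Runtime, the library being accelerated here (the model file and input shape are illustrative):

      import numpy as np
      import onnxruntime as ort

      # Hypothetical classifier exported to ONNX; shapes follow a MobileNet-style input.
      sess = ort.InferenceSession("mobilenetv3.onnx", providers=["CPUExecutionProvider"])
      input_name = sess.get_inputs()[0].name

      x = np.random.rand(1, 3, 224, 224).astype(np.float32)
      outputs = sess.run(None, {input_name: x})  # list of output arrays
      print(outputs[0].shape)
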
  • Reengineering Facebook AI’s deep learning platforms for interoperability

    AI is used at Facebook today in scores of different ways, from providing intelligent shopping recommendations to detecting harmful content to translating text to generating automated captions. We’ve built several deep learning platforms so we can iterate quickly on new modeling ideas and then seamlessly deploy them at scale.

  • Anomaly detection demo with Deep Learning, part 1: filming blocks on a rotating lane and detecting block positions and gate marks - OPTiM TECH BLOG

    It's been a while; this is Kato from R&D. I've been steadily playing through "A-Train: All Aboard! Tourism" lately, and it's great fun; it will also be released on Steam, so you can play it even without a Nintendo Switch. Now, this article explains an anomaly detection demo I built with Deep Learning. Contents: demo video; explanation; filming setup and annotation; speed and accuracy of Deep Learning anomaly detection; how to improve speed; Deep Learning and channels; grayscale vs. color images; images beyond color (3ch) with 4 channels also exist; convolution and channels in Deep Learning; conclusion. Demo video: first, watch the demo video. It shows blocks on a rotating lane being filmed, with block positions (Infer image[2]) and gate marks (Infer image[1]) detected. youtu.b

  • Do we really need deep learning models for time series forecasting?

    3 main points ✔️ In the domain of time series prediction, deep learning models have recently shown rapid performance improvements. But are classical machine learning models no longer necessary? That question is why this large-scale survey and comparison experiment was conducted. ✔️ GBRT is used as a representative of classical learning models. The representation of inter-sequence dependencies realized

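    A minimal sketch of a GBRT forecasting baseline of the kind the survey compares, using scikit-learn on windowed lag features (the exact feature setup is my assumption, not the paper's):

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      def make_windows(series, lags):
          # Each row of X holds the previous `lags` values; y is the next value.
          X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
          return X, series[lags:]

      series = np.sin(np.linspace(0, 20, 500))  # toy series
      X, y = make_windows(series, lags=24)

      model = GradientBoostingRegressor().fit(X[:-50], y[:-50])
      pred = model.predict(X[-50:])  # one-step-ahead forecasts on the held-out tail
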
  • GitHub - facebookresearch/d2go: D2Go is a toolkit for efficient deep learning

  • Geometric foundations of Deep Learning

    This blog post was co-authored with Joan Bruna, Taco Cohen, and Petar Veličković and is based on the new “proto-book” M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges (2021), Petar’s talk at Cambridge and Michael’s keynote talk at ICLR 2021. In October 1872, the philosophy faculty of a small university in the Bavarian cit

  • Deep Learning for Anomaly Detection: A Review

    Anomaly detection, a.k.a. outlier detection or novelty detection, has been a lasting yet active research area in various research communities for several decades. There are still some unique problem complexities and challenges that require advanced approaches. In recent years, deep learning enabled anomaly detection, i.e., deep anomaly detection, has emerged as a critical direction. This paper sur

  • Attention in transformers, step-by-step | Deep Learning Chapter 6

    Demystifying attention, the key mechanism inside transformers and LLMs: self-attention, multiple heads, and cross-attention.

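    A minimal NumPy sketch of the mechanism the video explains: single-head scaled dot-product attention (toy shapes of my choosing):

      import numpy as np

      def softmax(z):
          e = np.exp(z - z.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      def attention(Q, K, V):
          # softmax(Q K^T / sqrt(d_k)) V: each output mixes values by query-key similarity.
          d_k = Q.shape[-1]
          scores = Q @ K.T / np.sqrt(d_k)
          return softmax(scores) @ V

      rng = np.random.default_rng(0)
      Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, d_k = 8
      out = attention(Q, K, V)  # shape (4, 8)
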
  • Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models

    Deep learning recommendation models (DLRMs) are used across many business-critical services at Facebook and are the single largest AI application in terms of infrastructure demand in its data-centers. In this paper we discuss the SW/HW co-designed solution for high-performance distributed training of large-scale DLRMs. We introduce a high-performance scalable software stack based on PyTorch and pa

  • Notes on where a beginner stumbled in "Deep Learning from Scratch ❷": Chapter 4 - Qiita

  • https://tilman151.github.io/posts/deep-learning-unit-tests/

  • The Principles of Deep Learning Theory

    The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks. Daniel A. Roberts, Sho Yaida, Boris Hanin. A Cambridge University Press book; a draft can be downloaded from the arXiv. This book develops an effective theory approach to understanding deep neural networks of practica

  • Faster NLP with Deep Learning: Distributed Training

    Training deep learning models for NLP tasks typically requires many hours or days to complete on a single GPU. In this post, we leverage Determined’s distributed training capability to reduce BERT for SQuAD model training from hours to minutes, without sacrificing model accuracy. In this 2-part blog series, we outline tips and tricks to accelerate NLP deep learning model training across multiple G

  • GitHub - Layout-Parser/layout-parser: A Unified Toolkit for Deep Learning Based Document Image Analysis

  • Coursera / Neural Networks and Deep Learning course notes - たにしきんぐダム

    Wanting to study Deep Learning, I started taking Coursera's Deep Learning Specialization. What I like is that it builds intuition for why a method works or fails, and it also teaches tips and tricks for implementation. To keep the door open to people who don't know calculus or linear algebra, it presents only the final answers for things like the derivatives of cost functions and activation functions (which I think is a good choice). The assignments are fill-in-the-blank, but implementing a neural network yourself in a Jupyter Notebook is fun. www.coursera.org This specialization consists of five courses, and Neural Networks and Deep Learning is the first. It covers logistic regression, single-layer neural networks,

  • A New Lens on Understanding Generalization in Deep Learning

  • Deep Learning through the lens of mutual information

    A look at representation learning in Deep Learning from the viewpoint of information. Because of constraints from my employer, this is limited to public information, and I basically won't write down my own analysis. Summary: mutual information enables all sorts of interesting representation learning and also seems related to generalization error, so mutual information matters! As a bonus, I've included sample code that computes mutual information. What is mutual information? It is a measure of how much information two random variables X and Y share, defined as

      I(X;Y) \equiv D_{\rm KL}\left(p(x,y)\,\|\,p(x)\,p(y)\right) = \iint p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy

    where D_{\rm KL

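    A minimal sketch in the spirit of the article's bonus code (my own version, not the author's), computing I(X;Y) for a discrete joint distribution:

      import numpy as np

      def mutual_information(joint):
          # I(X;Y) = sum over x,y of p(x,y) * log[ p(x,y) / (p(x) p(y)) ]
          px = joint.sum(axis=1, keepdims=True)
          py = joint.sum(axis=0, keepdims=True)
          mask = joint > 0  # terms with p(x,y) = 0 contribute nothing
          return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

      joint = np.array([[0.4, 0.1],
                        [0.1, 0.4]])  # toy joint distribution over a 2x2 table
      print(mutual_information(joint))  # positive: X and Y share information
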
  • OCR with Keras, TensorFlow, and Deep Learning - PyImageSearch

    In this tutorial, you will learn how to train an Optical Character Recognition (OCR) model using Keras, TensorFlow, and Deep Learning. This post is the first in a two-part series on OCR with Keras and TensorFlow: Part 1: Training an OCR model with Keras and TensorFlow (today’s post); Part 2: Basic handwriting recognition with Keras and TensorFlow (next week’s post). For now, we’ll primarily be focusi

  • Google Research Team bring Deep Learning to Pfam

    Exciting! This raises a few questions, actually: 1) Why not re-train the hmm suite on a larger corpus than the small set of seed alignments? 2) Why not just take (a representative subset of) the new Pfam-N alignments and build more HMMs out of them (naming each the same thing that ProtENN mapped them to)? 3) For sequences that still don’t have great matches to existing cluster

  • [Deep Learning Training (Advanced)] Machine learning for data generation and transformation

    [Deep Learning Training (Advanced)] is a series of training videos introducing a wide range of advanced topics in deep learning and machine learning. The Neural Network Console channel (https://www.youtube.com/c/NeuralNetworkConsole) also offers more...

  • The Little Book of Deep Learning

    The Little Book of Deep Learning, François Fleuret. François Fleuret is a professor of computer science at the University of Geneva, Switzerland. The cover illustration is a schematic of the Neocognitron by Fukushima [1980], a key ancestor of deep neural networks. This ebook is formatted to fit on a phone screen. Contents: List of figures; Foreword; I Foundations; 1 Machine Learnin

  • 6. Deep Learning research areas: natural language processing and time-series analysis

    A neural network with a recurrent structure (a cycle) that feeds the output at one time step back in as the input at the next. In the narrow sense the term refers to simple architectures of this kind. Because it processes a sequence token by token, its drawback is that training cannot be parallelized.

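    A minimal NumPy sketch of that recurrence (toy sizes of my choosing); each step consumes the previous hidden state, which is exactly what prevents parallel training:

      import numpy as np

      def rnn_step(x_t, h_prev, W_x, W_h, b):
          # One recurrence: the previous hidden state feeds back into the next step.
          return np.tanh(x_t @ W_x + h_prev @ W_h + b)

      rng = np.random.default_rng(0)
      W_x, W_h, b = rng.normal(size=(5, 8)), rng.normal(size=(8, 8)), np.zeros(8)

      h = np.zeros(8)
      for x_t in rng.normal(size=(10, 5)):   # a length-10 sequence of 5-dim inputs
          h = rnn_step(x_t, h, W_x, W_h, b)  # steps depend on each other sequentially
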
  • Shogi engines: in long time-control games, do Deep Learning engines come out on top? | やねうら王 official site

    I measured which gains more playing strength as the time control grows: a Deep Learning shogi engine (ふかうら王) or やねうら王 (evaluation function: 水匠), and am publishing the results. Naively, Deep Learning engines are thought to evaluate positions more accurately, so they should get relatively stronger as time controls lengthen. On the other hand, the search components differ: やねうら王 searches using domain knowledge (shogi-specific knowledge), so there should be a gap in search performance. 水匠 vs ふかうら王: a comparison at 300,000 nodes (positions) for 水匠 versus 3,000 nodes for ふかうら王 (evaluation function: GCT). // 水匠 300kn vs ふかうら王 V6.02 3kn + root df-pn mate 30kn // Both ran on 1 thread with node limits. // Games started from the move-24 positions of the やねうら王 balanced opening book. T1,b1000,

  • [NLP] Building an AI of Unjash's Watabe with Deep Learning [RNN] - Qiita

    Introduction: The press conference by Unjash's Watabe (honorifics omitted below) made quite a stir. People took away many things from it, but what caught my eye was this article: https://sn-jp.com/archives/22422 It points out that Watabe repeated certain words throughout the press conference; the word 本当に ("really") alone was used more than 100 times. Watching a press conference with vocabulary that skewed, I thought: I want to train a deep learning model on Watabe's lines and build a Watabe AI that auto-generates Watabe-like sentences! And I wrote this article on that momentum. It ended up fairly long, so if you just want the results, read the final section, "Actually trying it." I wrote it while still studying NLP, so there may be mistakes or inaccuracies

  • MIT researchers warn that deep learning is approaching computational limits

    We’re approaching the computational limits of deep learning. That’s according to researchers at the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia, who found in a recent study that progress in d

  • A Deep Learning-style Go AI on a personal computer released in 1984 | やねうら王 official site

    Something interesting happened at the 15th UEC Cup Computer Go Tournament, held yesterday and today, so I'm writing it up. 15th UEC Cup Computer Go Tournament: http://entcog.c.ooco.jp/entcog/new_uec/ Rémi Coulom, the inventor of MCTS (Monte Carlo tree search), which we all rely on, brought a Go AI program that runs on the Thomson MO5 (hereafter MO5), a personal computer released in 1984. The MO5 was apparently used for education in French schools, and many French children experienced it as their first computer. To this fondly remembered(?) MO5, he ported Crazy Stone (his Go AI). The MO5 has a 1 MHz Motorola 6809 CPU, 32 KB of RAM