
"Neural Networks"の検索結果41 - 57 件 / 57件

  • US10452978B2 - Attention-based sequence transduction neural networks - Google Patents

    Attention-based sequence transduction neural networks. Download PDF. Publication number US10452978B2; application US16/021,971; authority: US.
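The patent above covers the Transformer's attention mechanism. As an illustrative sketch (generic scaled dot-product attention in NumPy, not the patent's claim language):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_queries, n_keys) similarities
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V                   # weighted average of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```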

  • DeepMind Researchers Introduce Epistemic Neural Networks (ENNs) For Uncertainty Modeling In Deep Learning

    Deep learning algorithms are widely used in numerous AI applications because of their flexibility and computational scalability, making them suitable for complex applications. However, most deep learning methods today neglect epistemic uncertainty, the uncertainty that comes from limited knowledge, which is crucial for safe and fair AI. A new DeepMind study has provided a way of quantifying epistemic uncertainty, along with ne
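The epistemic uncertainty the excerpt refers to is often illustrated with a common baseline, ensemble disagreement: train (or here, randomly draw) several models and treat their spread as an uncertainty estimate. This hypothetical sketch is that baseline, not DeepMind's ENN method:

```python
import numpy as np

def make_model(seed):
    """A tiny random-feature regressor; each seed gives one ensemble member."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(1, 16))
    b = r.normal(size=16)
    w_out = r.normal(size=16) / 4.0
    return lambda x: np.tanh(x[:, None] * W + b) @ w_out

ensemble = [make_model(s) for s in range(10)]
x = np.linspace(-3, 3, 7)
preds = np.stack([m(x) for m in ensemble])  # (n_models, n_points)
epistemic = preds.std(axis=0)  # disagreement across members = epistemic proxy
print(epistemic.shape)  # (7,)
```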
  • Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words?

    Using human-annotated attention maps, the authors compare and analyze them against the attention maps of deep learning models. ■ Event: ACL 2020 online lightning-talk meetup https://nlpaper-challenge.connpass.com/event/185240/ ■ Talk: Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words?
  • GitHub - google-deepmind/ithaca: Restoring and attributing ancient texts using deep neural networks

    Yannis Assael (DeepMind), Thea Sommerschield (Ca’ Foscari University of Venice; University of Oxford), Brendan Shillingford (DeepMind), Mahyar Bordbar (DeepMind), John Pavlopoulos (Athens University of Economics and Business), Marita Chatzipanagiotou (AUEB), Ion Androutsopoulos (AUEB), Jonathan Prag (University of Oxford), Nando de Freitas (DeepMind). Assael and Sommerschield contributed equally to this work. Ancient His
  • Neural Networks and the Chomsky Hierarchy

    Reliable generalization lies at the heart of safe ML and AI. However, understanding when and how neural networks generalize remains one of the most important unsolved problems in the field. In this work, we conduct an extensive empirical study (20'910 models, 15 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice

  • ニューラルネットワークのベイズ推論 / Bayesian inference of neural networks

    Slides for a GNN study group in our lab. Since it starts from the basics of neural networks, this deck is not about GNNs but about the relationship between Bayesian deep learning and dropout.
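The connection the slides cover is usually demonstrated with Monte Carlo dropout: keep dropout active at test time and average many stochastic forward passes, which approximates Bayesian posterior predictive inference (Gal & Ghahramani, 2016). A toy sketch with assumed fixed weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=32) / 8.0

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout left on."""
    h = np.tanh(x[:, None] @ W1)         # hidden layer, (n, 32)
    mask = rng.random(h.shape) > p_drop  # fresh dropout mask each pass
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

x = np.linspace(-2, 2, 5)
samples = np.stack([forward(x) for _ in range(200)])  # (200, 5)
mean, std = samples.mean(axis=0), samples.std(axis=0)
print(mean.shape, std.shape)  # (5,) (5,)
```

The per-point standard deviation across passes serves as the predictive uncertainty estimate.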
  • GitHub - advboxes/AdvBox: Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle、PyTorch、Caffe2、MxNet、Keras、TensorFlow and Advbox can benchmark the robustness of machine learning models. Advbox give a command line t

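Adversarial examples of the kind AdvBox generates can be sketched with the classic Fast Gradient Sign Method (FGSM); this generic illustration uses a hand-written logistic model whose input gradient is available in closed form, not the AdvBox API:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps=0.1):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)
y = 1.0
x_adv = fgsm(x, y)
print(np.abs(x_adv - x).max())  # each coordinate moved by at most eps = 0.1
```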
  • A Guide to Activation Functions in Artificial Neural Networks

    Activation functions are mathematical equations attached to the end of every layer of an artificial (deep) neural network. They help compute the output and determine whether nodes will fire or not. They also help neural networks learn complex nonlinear relationships in data. What Does a Node's Firing Mean? The phrase "node will fire or not" is a metaphorical way of describing how a neuron in an
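The functions the guide describes can be written out directly; a minimal sketch of three common activations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes input to (0, 1)

def relu(x):
    return np.maximum(0.0, x)        # zero for negative inputs, identity otherwise

def tanh(x):
    return np.tanh(x)                # squashes input to (-1, 1)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))       # [0. 0. 2.]
print(sigmoid(0.0))  # 0.5
```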
  • Mysteries of the universe: Training neural networks to estimate parameters of synthetic black hole images | Amazon Web Services

    AWS Public Sector Blog. Before the Event Horizon Telescope project released the first-ever picture of a black hole in 2019, nobody had ever seen one. Black holes are a region of space with a gravitational pull so strong that nothing—not even light—can escape them. A prediction from Einstein’s t
  • Is the future of Neural Networks Sparse? An Introduction (1/N)

    TL;DR: Yes. Hi, I am François Lagunas. I am doing Machine Learning research, and I have been working for the last few months on using sparse matrices, especially in Transformers. The recent announcement that OpenAI is porting its block sparse toolbox to PyTorch is really big news: “We are in the process of writing PyTorch bindings for our highly-optimized blocksparse kernels, and will open-source those bi
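The appeal of sparse matrices mentioned above is that only nonzero entries are stored and multiplied. A hand-rolled CSR (compressed sparse row) matrix-vector product, purely for illustration:

```python
import numpy as np

def to_csr(A):
    """Convert a dense matrix to CSR arrays (indptr, indices, data)."""
    indptr, indices, data = [0], [], []
    for row in A:
        nz = np.flatnonzero(row)
        indices.extend(nz)
        data.extend(row[nz])
        indptr.append(len(indices))
    return np.array(indptr), np.array(indices), np.array(data)

def csr_matvec(indptr, indices, data, x):
    """Multiply touching only the stored nonzeros of each row."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
A[rng.random((50, 50)) > 0.05] = 0.0  # keep roughly 5% of entries
x = rng.normal(size=50)
ip, idx, d = to_csr(A)
print(np.allclose(csr_matvec(ip, idx, d, x), A @ x))  # True
```

At high sparsity the stored data and the work per product shrink proportionally, which is the same motivation behind block-sparse Transformer kernels.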
  • Coursera / Neural Networks and Deep Learning course notes - たにしきんぐダム

    Wanting to study Deep Learning, I started taking Coursera's Deep Learning Specialization. It is good at explaining the intuition for why a given technique works or doesn't, and it also teaches tips and tricks for implementation. To keep the door open to people who don't know calculus or linear algebra, it presents only the answers for computations such as the derivatives of cost functions and activation functions (which I think is a good choice). The programming assignments are fill-in-the-blank, but implementing a neural network yourself in a Jupyter Notebook is fun. www.coursera.org The specialization consists of five courses, and Neural Networks and Deep Learning is the first. Its content covers logistic regression, single-layer neural networks,
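The course's first model, logistic regression trained by gradient descent, can be sketched as follows; this is an assumed minimal example, not the course's assignment code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.5, steps=500):
    """Gradient descent on the cross-entropy loss of logistic regression."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        dw = X.T @ (p - y) / n  # d(cross-entropy)/dw
        db = (p - y).mean()     # d(cross-entropy)/db
        w -= lr * dw
        b -= lr * db
    return w, b

# Toy linearly separable data: label is 1 when x0 + x1 > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)
w, b = train(X, y)
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(acc)  # high accuracy on this separable toy set
```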
  • [DL輪読会]EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
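EfficientNet's compound scaling, the subject of these slides, grows depth, width, and resolution together as d = α^φ, w = β^φ, r = γ^φ, with the paper's α = 1.2, β = 1.1, γ = 1.15 chosen so that α·β²·γ² ≈ 2 (doubling FLOPs per unit of φ). The base depth/width/resolution below are placeholder values, not EfficientNet-B0's actual configuration:

```python
# Compound-scaling coefficients from the EfficientNet paper.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_depth=18, base_width=64, base_res=224):
    """Scale depth, width, and resolution jointly with one coefficient phi."""
    return (round(base_depth * ALPHA ** phi),
            round(base_width * BETA ** phi),
            round(base_res * GAMMA ** phi))

for phi in range(4):
    print(phi, compound_scale(phi))
# FLOPs grow roughly by (ALPHA * BETA**2 * GAMMA**2) ** phi, i.e. about 2**phi.
```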
  • GitHub - google/neural-tangents: Fast and Easy Infinite Neural Networks in Python

    Neural Tangents is a high-level neural network API for specifying complex, hierarchical, neural networks of both finite and infinite width. Neural Tangents allows researchers to define, train, and evaluate infinite networks as easily as finite ones. The library has been used in >100 papers. Infinite (in width or channel count) neural networks are Gaussian Processes (GPs) with a kernel function det
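The claim that infinite-width networks are Gaussian processes can be checked empirically: over random initializations, the outputs of a wide one-hidden-layer ReLU network at fixed inputs have a covariance (the GP kernel) that stabilizes as width grows. A sketch of the idea only, not the neural-tangents API:

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = np.array([1.0, 0.0]), np.array([0.5, 0.5])

def random_net_outputs(width, n_draws=2000):
    """Outputs of n_draws randomly initialized ReLU nets at two inputs."""
    W1 = rng.normal(size=(n_draws, width, 2)) / np.sqrt(2)
    W2 = rng.normal(size=(n_draws, width)) / np.sqrt(width)
    h1 = np.maximum(0.0, W1 @ x1)  # ReLU features for x1, per draw
    h2 = np.maximum(0.0, W1 @ x2)
    f1 = np.einsum('dw,dw->d', W2, h1)
    f2 = np.einsum('dw,dw->d', W2, h2)
    return f1, f2

for width in (8, 64, 512):
    f1, f2 = random_net_outputs(width)
    print(width, np.cov(f1, f2)[0, 1])  # kernel entry stabilizes with width
```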
  • mtes Neural Networks develops the "Yamorin" fall-detection system, which uses AI cameras to detect human falls | IoT NEWS

    2021-08-25 / 2020-06-11. In nursing-care and welfare facilities housing many elderly residents, staffing shortages among caregivers are worsening amid a declining birthrate and aging population, reducing the ability to respond to residents' falls and similar accidents. Medical and care facilities must also carry out daily operations while restricting person-to-person contact after COVID-19 cluster outbreaks. mtes Neural Networks Co., Ltd., a developer of AI and IoT technology, has developed the "Yamorin" fall-detection system (「ヤモリン転倒検知システム」), which uses AI cameras to detect when a person falls. The company has also begun a field trial with four AI cameras installed at Kanto Sanga Co., Ltd.'s senior home "Akiruno Shoyukan" (「あきる野翔裕館」). In the system, an AI camera with image-analysis capability screens people's movements (video) using difference analysis and center-of-gravity vectors
  • Graph: A Survey of Graph Neural Networks, Embedding, Tasks and Applications

    A broad survey of graph-related topics, summarizing 30 key papers and 70 related papers. * This is a condensed version prepared for a talk. * The talk was given here, and a video is available ( https://nlpaper-challenge.connpass.com/event/136090/ ). It covers GNN, GCN, RelationalGCN and other GNNs, Link Prediction, Graph Classification, Graph Completion, Graph Representation/Embedding, Graph Kernel, Combinatorial/Logical, other recent topics, and applications such as CV, NLP, Molecular Graph Generation, and Recommendation, 201
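The GNN/GCN family surveyed above shares one core operation, neighborhood aggregation. A single GCN-style propagation step, H' = ReLU(Â H W) with Â the symmetrically normalized adjacency with self-loops, as a hypothetical minimal sketch:

```python
import numpy as np

# Path graph on 4 nodes: 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_tilde = A + np.eye(4)                    # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))  # D^{-1/2} (A + I) D^{-1/2}

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                # node feature matrix
W = rng.normal(size=(8, 8)) / np.sqrt(8)   # learnable weights (random here)
H_next = np.maximum(0.0, A_hat @ H @ W)    # aggregate neighbors, then transform
print(H_next.shape)  # (4, 8)
```

Stacking such steps lets information propagate over longer paths in the graph.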
  • [Paper reading] Hamiltonian Neural Networks

    - Standard machine learning models do not guarantee satisfying physical conservation laws for motion prediction. - The paper proposes learning the "equations of motion" in the form of a Hamiltonian function using neural networks to predict trajectories that obey conservation laws. - The learned Hamiltonian function is integrated on the fly to generate predictions, ensuring the predictions satisfy
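The mechanism in the excerpt can be illustrated with a known (not learned) Hamiltonian: integrating Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q with a symplectic integrator keeps the energy nearly constant, which is exactly what HNN-style rollouts inherit. A sketch using a harmonic oscillator:

```python
import numpy as np

def H(q, p):
    """Harmonic-oscillator Hamiltonian (total energy)."""
    return 0.5 * p**2 + 0.5 * q**2

def grads(q, p, eps=1e-6):
    """Finite-difference dH/dq and dH/dp (an HNN would use autograd)."""
    dH_dq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dH_dp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dH_dq, dH_dp

def leapfrog(q, p, dt=0.01, steps=1000):
    """Symplectic integrator: nearly conserves H over long rollouts."""
    for _ in range(steps):
        p -= 0.5 * dt * grads(q, p)[0]
        q += dt * grads(q, p)[1]
        p -= 0.5 * dt * grads(q, p)[0]
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0)
print(abs(H(q1, p1) - H(q0, p0)))  # energy drift stays small
```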
  • Advancing AI theory with a first-principles understanding of deep neural networks

    The steam engine powered the Industrial Revolution and changed manufacturing forever — and yet it wasn’t until the laws of thermodynamics and the principles of statistical mechanics were developed over the following century that scientists could fully explain at a theoretical level why and how it worked. Lacking theo