 
  
Hello, this is Lee from the Data Solutions Group. In the Data Solutions Group we use machine learning and statistical modeling to refine our services, for example implementing algorithms for web recommendations and targeted banner placement, predicting rents, and optimizing the distribution areas of mailbox flyers. This time we performed image analysis using the exterior and interior photographs of buildings held by SUUMO. This post introduces the Deep Learning based methods we used and the results we obtained. What is Deep Learning? It is a set of machine learning algorithms that tackles difficult learning tasks, such as Computer Vision (CV) and AI, by processing them with deep structures*. …
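Although the post's actual model is not shown in this excerpt, a minimal sketch of the kind of binary CNN classifier one might use to separate exterior from interior photos could look like the following (PyTorch; the layer sizes, input resolution, and two-class output are all assumptions, not SUUMO's model):

```python
# Minimal sketch of a binary CNN classifier (exterior vs. interior photos).
# Layer sizes and the 128x128 input resolution are illustrative assumptions,
# not the model described in the post.
import torch
import torch.nn as nn

class FacadeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits: exterior vs. interior
        )

    def forward(self, x):                       # x: (batch, 3, 128, 128)
        return self.head(self.features(x))

logits = FacadeClassifier()(torch.randn(4, 3, 128, 128))
print(logits.shape)                             # torch.Size([4, 2])
```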
Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. …
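As a concrete illustration of the idea, here is a minimal sketch of one adversarial-training step in the spirit of the abstract: perturb the input along the loss gradient, then train on the perturbed input. The L2-normalized perturbation, the epsilon value, and the cross-entropy loss are illustrative choices, not the paper's exact formulation:

```python
# One adversarial-training step: find a small perturbation r_adv that
# (locally) increases the loss, then minimize the loss on the perturbed
# input. Epsilon and the loss are placeholders for illustration.
import torch
import torch.nn.functional as F

def adversarial_step(model, x, y, optimizer, epsilon=0.02):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # L2-normalized perturbation in the loss-increasing direction.
    r_adv = epsilon * grad / (grad.norm() + 1e-12)
    adv_loss = F.cross_entropy(model(x.detach() + r_adv), y)
    optimizer.zero_grad()
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```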
Speech Recognition with Deep Learning. GTC Japan 2015, 2015-09-18, Fairy Devices Inc. Overview: the introduction of Deep Learning has greatly improved the accuracy of speech recognition, and research across a wide range of speech-processing problems remains active, exploring various network architectures and training methods. Products equipped with speech interfaces are also spreading, and further applications of speech-processing technology are expected. This talk outlines Deep Learning based speech recognition and points to keep in mind when applying it. Outline: 1. Introduction; 2. Speech recognition with Deep Learning (the structure of DNN-based speech recognition; the challenges of DNN-based speech recognition); 3. Applications of speech recognition…
Large-Scale Item Categorization in e-Commerce Using Multiple Recurrent Neural Networks. Precise item categorization is a key issue in e-commerce domains. However, it still remains a challenging problem due to data size, category skewness, and noisy metadata. Here, we demonstrate a successful report on a deep learning-based item categorization method, i.e., deep categorization network (DeepCN)…
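The excerpt is cut off before the architecture details, but the basic recipe of RNN-based item categorization can be sketched as follows (the vocabulary, hidden, and category sizes are made-up placeholders, not DeepCN's configuration):

```python
# Sketch of RNN-based item categorization: embed the words of an item's
# metadata, run an LSTM over them, and classify the final hidden state
# into a category. All sizes are invented placeholders.
import torch
import torch.nn as nn

class ItemCategorizer(nn.Module):
    def __init__(self, vocab=50_000, emb=128, hidden=256, categories=5_000):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, categories)

    def forward(self, token_ids):               # (batch, seq_len)
        _, (h, _) = self.rnn(self.embed(token_ids))
        return self.out(h[-1])                  # category logits

logits = ItemCategorizer()(torch.randint(0, 50_000, (8, 20)))
```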
What is DSSTNE? DSSTNE is an acronym for Deep Scalable Sparse Tensor Network Engine, apparently pronounced "Destiny". Why has Amazon released an open-source Deep Learning (DL) library now? DSSTNE delivers higher performance than existing open-source Deep Learning libraries when the data is sparse, which makes it a strong choice for a company like Amazon that holds large amounts of product and user data, together with sparse behavioral data (purchases, ratings, and so on) connecting the two. What is a sparse matrix? A sparse matrix is a matrix in which most of the entries are zero. Finite difference methods, finite volume methods…
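To make the storage argument concrete, here is a small SciPy sketch of a user-item interaction matrix stored in compressed sparse row (CSR) form; the matrix dimensions and values are invented for illustration:

```python
# A user-item interaction matrix is a typical sparse matrix: almost every
# (user, item) pair has no recorded purchase or rating. CSR storage keeps
# only the nonzero entries. All numbers below are made up.
import numpy as np
from scipy.sparse import csr_matrix

rows = np.array([0, 0, 2])         # user indices with interactions
cols = np.array([1, 4, 3])         # item indices they touched
vals = np.array([5.0, 1.0, 3.0])   # e.g. ratings
m = csr_matrix((vals, (rows, cols)), shape=(1_000_000, 500_000))

dense_bytes = m.shape[0] * m.shape[1] * 8    # float64 if stored densely
sparse_bytes = m.data.nbytes + m.indices.nbytes + m.indptr.nbytes
print(dense_bytes / 1e12, "TB dense vs.", sparse_bytes / 1e6, "MB sparse")
```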
 
      
Deep Learning Local Optima. Alireza Shafaei - Dec. 2015. Saddle points: a critical point is a point where the gradient vanishes, and the Hessian there determines its type: a positive-definite Hessian gives a local minimum, a negative-definite Hessian a local maximum, and a Hessian with eigenvalues of both signs a saddle point with min-max structure. …
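A worked example of that classification, using the textbook test function f(x, y) = x^2 - y^2 (an assumption for illustration, not one of the slides' own examples):

```python
# Worked example of the Hessian classification: f(x, y) = x^2 - y^2 has a
# critical point at the origin whose Hessian has eigenvalues of both
# signs, i.e. a saddle point (minimum along x, maximum along y).
import numpy as np

H = np.array([[2.0, 0.0],     # Hessian of f(x, y) = x^2 - y^2 at (0, 0)
              [0.0, -2.0]])
print(np.linalg.eigvalsh(H))  # [-2.  2.] -> indefinite -> saddle point
```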
Deep neural networks are state-of-the-art models for understanding the content of images, video and raw input data. However, implementing a deep neural network in embedded systems is a challenging task, because a typical deep neural network, such as a Deep Belief Network using 128x128 images as input, could exhaust Giga bytes of memory and result in bandwidth and computing bottleneck. To address this…
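A back-of-the-envelope check of that memory claim, under the assumption of a single fully connected layer whose hidden width matches the 128x128 input (the hidden size is an illustrative choice, not taken from the paper):

```python
# One fully connected layer from a 128x128 input to an equally wide
# hidden layer already needs about a gigabyte of 32-bit weights.
inputs = 128 * 128                 # 16,384 visible units
hidden = 16_384                    # assumed hidden layer width
weights = inputs * hidden          # ~268 million parameters
print(weights * 4 / 2**30, "GiB")  # 1.0 GiB at 4 bytes per float32
```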
Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, in most previous works, the models are learned based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we use the multi-task learning framework to jointly learn across multiple related tasks. Based on recurrent neural networks, …
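A minimal sketch of that pattern, with one shared LSTM encoder and per-task output heads trained jointly so that tasks with little data benefit from the shared representation; the task names and sizes are placeholders rather than the paper's setup:

```python
# Multi-task learning sketch: a shared recurrent encoder plus one small
# classification head per task. Task names and all sizes are invented.
import torch
import torch.nn as nn

class MultiTaskRNN(nn.Module):
    def __init__(self, vocab=30_000, emb=128, hidden=256,
                 tasks=(("sentiment", 2), ("topic", 20))):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.shared = nn.LSTM(emb, hidden, batch_first=True)  # shared encoder
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n_cls) for name, n_cls in tasks})

    def forward(self, token_ids, task):         # token_ids: (batch, seq_len)
        _, (h, _) = self.shared(self.embed(token_ids))
        return self.heads[task](h[-1])          # logits for the chosen task

model = MultiTaskRNN()
logits = model(torch.randint(0, 30_000, (4, 12)), task="topic")
```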
In this paper we propose a new approach for learning local descriptors for matching image patches. It has recently been demonstrated that descriptors based on convolutional neural networks (CNN) can significantly improve the matching performance. Unfortunately their computational complexity is prohibitive for any practical application. We address this problem and propose a CNN-based descriptor…
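As an illustration of the general approach (not the paper's efficient architecture), a shared-weight descriptor network plus an L2 comparison might look like this:

```python
# Patch matching with a learned CNN descriptor: embed both patches with
# the same shared-weight network and compare with L2 distance. The tiny
# network here is illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(                  # shared descriptor network
    nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(128),
)

def match_distance(patch_a, patch_b):   # patches: (batch, 1, 32, 32)
    da = F.normalize(embed(patch_a), dim=1)
    db = F.normalize(embed(patch_b), dim=1)
    return (da - db).pow(2).sum(dim=1)  # small -> likely a match

d = match_distance(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32))
```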
We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR-100. Swapout samples from a rich set of architectures including dropout, stochastic depth and residual architectures as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer…
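From the description, a swapout unit combines the identity and residual branches through independent Bernoulli masks, y = theta1 * x + theta2 * F(x), which reduces to dropout, stochastic depth, or a plain residual block for particular mask choices. A minimal sketch under that reading, with placeholder keep-probabilities:

```python
# Swapout unit sketch: independent per-unit Bernoulli masks over the
# identity branch x and the residual branch F(x). The keep-probabilities
# and residual_branch are placeholders.
import torch

def swapout(x, residual_branch, p1=0.8, p2=0.8, training=True):
    fx = residual_branch(x)
    if not training:                   # simple deterministic approximation
        return p1 * x + p2 * fx       # (test time: take the expectation)
    theta1 = torch.bernoulli(torch.full_like(x, p1))
    theta2 = torch.bernoulli(torch.full_like(fx, p2))
    return theta1 * x + theta2 * fx
```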
In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. With no unrealistic assumption, we first prove the following statements for the squared loss function of deep linear neural networks with any depth and any widths: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, …
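A tiny concrete instance of the statement, assuming the simplest deep linear network of two scalar weights with squared loss L(w1, w2) = (w2*w1 - 1)^2 (an illustration, not the paper's general proof): the loss is non-convex, the origin is a saddle point rather than a bad local minimum, and every local minimum (w2*w1 = 1) attains the global value 0.

```python
# L(w1, w2) = (w2*w1 - 1)^2: the origin is a critical point, and its
# Hessian [[0, -2], [-2, 0]] is indefinite, so it is a saddle, not a
# spurious local minimum.
import numpy as np

H = np.array([[0.0, -2.0], [-2.0, 0.0]])
print(np.linalg.eigvalsh(H))   # [-2.  2.] -> saddle point at the origin
```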
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks…
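For linear residual blocks the unrolling can be checked numerically: the ordinary forward pass equals the sum over all 2^n paths, where each path chooses either the identity or the block at every layer. A small NumPy check (illustrative, not the paper's code):

```python
# Check the "collection of paths" view for linear residual blocks:
# (I + A3)(I + A2)(I + A1) x expands into one term per subset of blocks,
# i.e. 2^3 paths of differing length.
import itertools
import numpy as np

rng = np.random.default_rng(0)
A = [rng.normal(size=(4, 4)) * 0.1 for _ in range(3)]
x = rng.normal(size=4)

out = x.copy()
for Ai in A:                       # ordinary residual forward pass
    out = out + Ai @ out

paths = np.zeros(4)
for subset in itertools.product([False, True], repeat=3):
    term = x.copy()
    for Ai, used in zip(A, subset):
        if used:
            term = Ai @ term
    paths += term                  # sum over all 2^3 paths

print(np.allclose(out, paths))     # True
```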
Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning." Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information…
Drone movement and coordination are learned through five independently trained neural networks in four categories of operation. Specifically: avoid: the first two neural networks enable the drone to avoid obstacles. The turn RNN trains a drone moving at constant speed to avoid stationary and moving obstacles. Inputs are a set of five sonar sensor readings that emanate from the front of the drone. Outputs…
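A minimal sketch of such a turn network, assuming a GRU over the five sonar readings and a three-way steering output (both assumptions; the excerpt cuts off before the output description):

```python
# Turn-network sketch: a recurrent net mapping a sequence of five
# front-facing sonar readings to a steering command. Sizes and the
# three-way command output are assumptions.
import torch
import torch.nn as nn

class TurnRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=5, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 3)        # e.g. left / straight / right

    def forward(self, sonar_seq):              # (batch, time, 5 sonar values)
        h, _ = self.rnn(sonar_seq)
        return self.out(h[:, -1])              # command logits at last step

cmd = TurnRNN()(torch.randn(1, 10, 5))         # one 10-step sonar trace
```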
The creation of practical deep learning data-products often requires parallelization across processors and computers to make deep learning feasible on large data sets, but bottlenecks in communication bandwidth make it difficult to attain good speedups through parallelism. Here we develop and test 8-bit approximation algorithms which make better use of the available bandwidth by compressing 32-bit…
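The excerpt ends mid-sentence, but the general idea of trading precision for bandwidth can be sketched with a simple linear 8-bit quantizer (the paper develops a more elaborate approximation scheme; this is only a generic illustration):

```python
# Lossy 8-bit compression of 32-bit values for cheaper communication:
# linear quantization against the tensor's max magnitude, giving a 4x
# smaller payload than float32 at the cost of a small error.
import numpy as np

def quantize_8bit(x):
    scale = np.abs(x).max() / 127.0 + 1e-12    # epsilon guards all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

g = np.random.randn(1024).astype(np.float32)   # stand-in for a gradient
q, s = quantize_8bit(g)
print(np.abs(g - dequantize(q, s)).max())      # small approximation error
```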
