dennybritz.com
Deep Learning is such a fast-moving field that the huge number of research papers and ideas can be overwhelming. The goal of this post is to review ideas that have stood the test of time. These ideas, or improvements of them, have been used over and over again. They’re known to work. If you were to start in Deep Learning today, understanding and implementing each of these techniques would probably…
Two years ago I wrote a post about applying Reinforcement Learning to financial markets. A few people asked me what became of it. This post covers some high-level things I’ve learned. It’s more of a rant than an organized post. Over the past few years I’ve built four and a half trading systems. The first one failed to be profitable. The second one I never finished because I realized early on that…
Thanks to @aerinykim, @suzatweet, and @hardmaru for the useful feedback! The academic Deep Learning research community has largely stayed away from the financial markets. Maybe that’s because the finance industry has a not-so-great reputation, because the problem doesn’t seem interesting from a research perspective, or because data is difficult and expensive to obtain. In this post, I’m going to argue that…
The year is coming to an end. I did not write nearly as much as I had planned to. But I’m hoping to change that next year, with more tutorials around Reinforcement Learning, Evolution, and Bayesian Methods coming to WildML! And what better way to start than with a summary of all the amazing things that happened in 2017? Looking back through my Twitter history and the WildML newsletter, the following…
Github Repo with code and exercises. Why study Reinforcement Learning? Reinforcement Learning is one of the fields I’m most excited about. Over the past few years, amazing results like learning to play Atari games from raw pixels and mastering the game of Go have gotten a lot of attention, but RL is also used in Robotics, Image Processing, and Natural Language Processing. Combining Reinforcement Learning…
Chatbots, also known as Conversational Agents or Dialog Systems, are a hot topic right now. Microsoft is making big bets on chatbots, and so are companies like Facebook (M), Apple (Siri), Google, WeChat, and Slack. There is a new wave of startups trying to change how consumers interact with services by building consumer apps like Operator or x.ai, bot platforms like Chatfuel, and bot libraries like Howdy…
A recent trend in Deep Learning is Attention Mechanisms. In an interview, Ilya Sutskever, now the research director of OpenAI, mentioned that Attention Mechanisms are one of the most exciting advancements, and that they are here to stay. That sounds exciting. But what are Attention Mechanisms? Attention Mechanisms in Neural Networks are (very) loosely based on the visual attention mechanism found in humans…
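To make the core computation concrete (a minimal sketch of my own, not code from the post, and using the scaled dot-product formulation rather than the seq2seq-style attention the post covers): attention scores each value by how well its key matches a query, then returns a weighted average.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value by how well
    its key matches the query, then blend the values accordingly."""
    scores = query @ keys.T / np.sqrt(keys.shape[-1])  # similarity of query to each key
    weights = softmax(scores)                          # normalize scores to a distribution
    return weights @ values                            # weighted average of the values

# Toy example: one query attending over four key/value pairs.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(4, 4)), rng.normal(size=(4, 8))
print(attention(q, K, V).shape)  # (8,)
```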
The full code is available on Github. In this post we will implement a model similar to Kim Yoon’s Convolutional Neural Networks for Sentence Classification. The model presented in the paper achieves good classification performance across a range of text classification tasks (like Sentiment Analysis) and has since become a standard baseline for new text classification architectures. I’m assuming that…
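The post builds the model in TensorFlow; as a rough sketch of the same architecture (written here in PyTorch for brevity, with all hyperparameter values illustrative rather than taken from the post), a Kim-style text CNN embeds tokens, applies parallel convolutions of several widths, max-pools each over time, and classifies the concatenated features:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Kim (2014)-style CNN for sentence classification."""
    def __init__(self, vocab_size, embed_dim=128, num_filters=100,
                 filter_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per filter width, applied in parallel.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in filter_sizes)
        self.fc = nn.Linear(num_filters * len(filter_sizes), num_classes)

    def forward(self, x):                  # x: (batch, seq_len) token ids
        h = self.embed(x).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Max-pool each feature map over the time dimension.
        pooled = [F.relu(conv(h)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

model = TextCNN(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (8, 50)))  # 8 sentences of length 50
print(logits.shape)  # torch.Size([8, 2])
```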
My name is Denny. I started my career in databases and distributed systems at UC Berkeley, where I was part of the AMPLab and an early research engineer on Apache Spark. I went to graduate school at Stanford, where I applied database techniques to fast inference in probabilistic graphical models before moving into Deep Learning research in ~2015. I published several papers as part of the Google AI…
When we hear about Convolutional Neural Networks (CNNs), we typically think of Computer Vision. CNNs were responsible for major breakthroughs in Image Classification and are the core of most Computer Vision systems today, from Facebook’s automated photo tagging to self-driving cars. More recently we’ve also started to apply CNNs to problems in Natural Language Processing and gotten some interesting results…
All the code is also available as a Jupyter notebook on Github. In this post we will implement a simple 3-layer neural network from scratch. We won’t derive all the math that’s required, but I will try to give an intuitive explanation of what we are doing. I will also point to resources for you to read up on the details. Here I’m assuming that you are familiar with basic Calculus and Machine Learning…
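To give a feel for what “from scratch” means here (a minimal NumPy sketch of my own, not the post’s actual code; the toy data, layer sizes, and learning rate are all assumptions), the whole pipeline is a forward pass, backpropagated gradients, and plain gradient descent:

```python
import numpy as np

# A minimal 3-layer network (input -> hidden -> output) trained with
# full-batch gradient descent on a toy binary classification problem.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))                        # 200 points, 2 features
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # XOR-like labels

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(2000):
    h = np.tanh(X @ W1 + b1)           # hidden layer activations
    p = sigmoid(h @ W2 + b2)           # predicted probabilities
    # Backpropagation: gradients of the mean cross-entropy loss.
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1 - h**2)    # tanh derivative
    dW1, db1 = X.T @ dz1, dz1.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad            # gradient descent step, learning rate 1.0

print("train accuracy:", ((p > 0.5) == y).mean())
```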