blog.otoro.net
Going for a ride. In the previous article, I described a few evolution strategies (ES) algorithms that can optimise the parameters of a function without the need to explicitly calculate gradients. These algorithms can be applied to reinforcement learning (RL) problems to help find a suitable set of model parameters for a neural network agent. In this article, I will explore applying ES…
Survival of the fittest. In this post I explain how evolution strategies (ES) work with the aid of a few visual examples. I try to keep the equations light, and I provide links to the original articles if the reader wishes to understand more details. This is the first post in a series of articles in which I plan to show how to apply these algorithms to a range of tasks, from MNIST and OpenAI Gym to Roboschool…
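The two excerpts above describe the core ES idea: estimate a good parameter update from the fitness of randomly perturbed candidates, with no explicit gradient computation. A minimal sketch of that loop, on a toy fitness function of my own choosing (the function `f`, the population size, and the step sizes are illustrative assumptions, not values from the posts):

```python
import numpy as np

def f(w):
    # Toy fitness to maximise: peaks at w = (3, 4).
    return -np.sum((w - np.array([3.0, 4.0])) ** 2)

def simple_es(f, dim=2, popsize=50, sigma=0.1, alpha=0.03, iters=300, seed=0):
    """Simple Gaussian evolution strategy: no gradients of f are ever computed."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)  # current parameter estimate
    for _ in range(iters):
        # Sample a population of perturbations around the current estimate.
        noise = rng.standard_normal((popsize, dim))
        rewards = np.array([f(w + sigma * n) for n in noise])
        # Subtract the mean reward as a baseline to reduce variance.
        rewards = rewards - rewards.mean()
        # Move w toward perturbations that scored above average.
        w = w + alpha / (popsize * sigma) * noise.T @ rewards
    return w

best = simple_es(f)  # ends up close to (3, 4)
```

The reward-weighted sum of noise vectors acts as a finite-difference estimate of the fitness gradient, which is why the same recipe scales up to RL tasks where the "function" is an entire episode's return.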
This post is not meant to be a comprehensive overview of recurrent neural networks. It is intended for readers without any machine learning background. The goal is to show artists and designers how to use a pre-trained neural network to produce interactive digital works using simple JavaScript and the p5.js library. Introduction: Handwriting Generation with JavaScript. Machine learning has become a popular…
In this post, I will talk about our recent paper called Hypernetworks. I worked on this paper as a Google Brain Resident, a great research program where we can work on machine learning research for a whole year, with a salary and benefits! The Brain team is now accepting applications for the 2017 program: see g.co/brainresidency. This article has also been translated to Simplified Chinese…
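The hypernetwork idea named above is architectural: one small network generates the weights of another. A minimal sketch of that wiring, where the shapes, the single linear hypernetwork, and the layer embedding `z` are all illustrative assumptions rather than the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, in_dim, out_dim = 4, 8, 3

# Hypernetwork: a single linear map from a small layer embedding z
# to the flattened weight matrix of the main layer.
H = rng.standard_normal((embed_dim, in_dim * out_dim)) * 0.1
z = rng.standard_normal(embed_dim)       # learned per-layer embedding

W = (z @ H).reshape(in_dim, out_dim)     # weights generated by the hypernetwork
x = rng.standard_normal(in_dim)
y = x @ W                                # forward pass of the main layer
```

During training, gradients flow through `W` back into `H` and `z`, so the hypernetwork's (much smaller) parameters are what actually get learned.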
For the JavaScript demo of Mixture Density Networks, here is the link. Update: a more comprehensive write-up about MDNs implemented with TensorFlow is here. While I was going through Graves's paper on artificial handwriting generation, I noticed that his model is not set up to predict the next location of the pen, but is trained to generate a probability distribution of what happens next to the pen, including…
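The key point in that excerpt is that an MDN outputs the parameters of a Gaussian mixture, and the next pen offset is sampled from it rather than predicted as a single point. A minimal sketch of the sampling step in one dimension; the mixture parameters below are hand-made placeholders, not outputs of Graves's actual model:

```python
import numpy as np

def sample_mdn(pi, mu, sigma, rng):
    """Draw one sample from a 1-D Gaussian mixture.

    pi    -- mixture weights, shape (K,), sums to 1
    mu    -- component means, shape (K,)
    sigma -- component standard deviations, shape (K,)
    """
    k = rng.choice(len(pi), p=pi)        # pick a mixture component
    return rng.normal(mu[k], sigma[k])   # sample from that Gaussian

rng = np.random.default_rng(0)
pi = np.array([0.3, 0.7])
mu = np.array([-1.0, 2.0])
sigma = np.array([0.2, 0.5])
samples = np.array([sample_mdn(pi, mu, sigma, rng) for _ in range(5000)])
# Sample mean approaches the mixture mean 0.3 * (-1) + 0.7 * 2 = 1.1.
```

Because the model commits to a distribution instead of a point estimate, it can represent multimodal futures, such as the pen either continuing a stroke or lifting off the paper.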
Update: This post has been translated to Japanese and Chinese. Recently I came across a video of a simulation that demonstrates the use of evolutionary techniques to train agents to avoid moving obstacles. The methodology seems to employ a variation of the NEAT algorithm, which evolves the topology of neural networks so that they can perform certain tasks correctly. It was written to be used…
Recurrent neural network playing slime volleyball. Can you beat them? I remember playing a game called slime volleyball, back in the day when Java applets were still popular. Although the game had somewhat dodgy physics, people like me were hooked on its simplicity and spent countless hours at night playing the game in the dorm rather than getting any actual work done. As I can't find any ver…