Daan Wierstra, David Silver, Yuval Tassa, Tom Erez, Nicolas Heess, Alexander Pritzel, Jonathan J. Hunt, Timothy P. Lillicrap - 2015
Publications: arXiv
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
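To make the actor-critic update the abstract describes concrete, here is a minimal DDPG-style sketch on a hypothetical 1-D task, using linear function approximators in place of deep networks. The toy environment, feature map, and hyper-parameters are all illustrative assumptions, not the paper's setup; only the core ingredients (deterministic policy gradient, TD critic, soft target updates, exploration noise) follow the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D task (an illustrative stand-in, not from the paper):
# state s ~ U(-1, 1), reward r = -(a - 2s)^2, so the optimal policy is a = 2s.
def env_step(s, a):
    return -(a - 2.0 * s) ** 2, rng.uniform(-1.0, 1.0)

def feats(s, a):
    # hand-picked quadratic features so a *linear* critic can represent Q
    return np.array([1.0, s, s * s, a, s * a, a * a])

w = np.zeros(6)                        # critic: Q(s, a) = w @ feats(s, a)
theta = 0.0                            # actor: mu(s) = theta * s
w_targ, theta_targ = w.copy(), theta   # slowly-updated target networks
gamma, tau = 0.9, 0.01
lr_q, lr_pi = 0.02, 0.01

s = rng.uniform(-1.0, 1.0)
for _ in range(20000):
    a = theta * s + rng.normal(0.0, 0.3)   # exploration noise on the action
    r, s2 = env_step(s, a)
    # critic update: TD error against a target computed with target networks
    y = r + gamma * (w_targ @ feats(s2, theta_targ * s2))
    td = y - w @ feats(s, a)
    w += lr_q * td * feats(s, a)
    # actor update: deterministic policy gradient, dQ/da * dmu/dtheta
    a_pi = theta * s
    dQ_da = w[3] + w[4] * s + 2.0 * w[5] * a_pi
    theta += lr_pi * dQ_da * s
    # soft (Polyak) updates keep the targets trailing the learned networks
    w_targ += tau * (w - w_targ)
    theta_targ += tau * (theta - theta_targ)
    s = s2

print(round(theta, 2))  # should approach 2.0, the optimal linear gain
```

A faithful implementation would add a replay buffer and deep networks for actor and critic; this sketch keeps only the update rules so the deterministic policy gradient and the soft target updates are visible.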
Languages: HTML
This project is an exercise in reinforcement learning, part of the Machine Learning Engineer Nanodegree from Udacity. The goal is to teach a simulated quadcopter to perform flight tasks.
Deep deterministic policy gradients + supervised learning for car steering control
This tool scrapes Twitter data, processes it, and then either trains an unsupervised network to identify interesting patterns or can be configured to verify a specific concept or hypothesis.
This repository contains unofficial implementations of three papers:
1. "The Cross Entropy Method for Fast Policy Search"
2. "Continuous control with deep reinforcement learning"
3. "Deep Reinforcement Learning with Double Q-learning"
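Of the three, the cross-entropy method is the simplest to illustrate. Below is a minimal pure-Python sketch of CEM as a derivative-free policy search: sample parameter vectors from a Gaussian, keep the elite fraction, and refit the Gaussian to the elites. The function names, hyper-parameters, and toy objective are assumptions for illustration, not code from the repository.

```python
import random
import statistics

def cem(score, dim, iters=50, pop=50, elite_frac=0.2, seed=0):
    """Cross-entropy method: maximize score(theta) over theta in R^dim."""
    rng = random.Random(seed)
    mean, std = [0.0] * dim, [1.0] * dim
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        # sample a population of candidate parameter vectors
        samples = [[rng.gauss(mean[i], std[i]) for i in range(dim)]
                   for _ in range(pop)]
        # keep the top-scoring elite fraction
        samples.sort(key=score, reverse=True)
        elite = samples[:n_elite]
        # refit the sampling distribution to the elites
        mean = [sum(e[i] for e in elite) / n_elite for i in range(dim)]
        std = [max(statistics.pstdev([e[i] for e in elite]), 1e-3)
               for i in range(dim)]
    return mean

# toy "return" with its peak at theta = (1, -2)
best = cem(lambda th: -((th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2), dim=2)
```

In policy search, `score` would roll out a policy parameterized by `theta` and return its episodic reward; the floor on `std` keeps the search from collapsing prematurely.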
My solution to the Collaboration and Competition environment using the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"; the third project of the Udacity Deep RL Nanodegree.
Deep Deterministic Policy Gradient (DDPG) implemented for the Unity Reacher environment.