Karol Zieba, Jake Zhao, Xin Zhang, Jiakai Zhang, Urs Muller, Mathew Monfort, Lawrence D. Jackel, Prasoon Goyal, Beat Flepp, Bernhard Firner, Daniel Dworakowski, Davide Del Testa, Mariusz Bojarski - 2016
Publications: arXiv
We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimal training data from humans, the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance, such as parking lots and unpaved roads. The system automatically learns internal representations of the necessary processing steps, such as detecting useful road features, with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to an explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria are understandably selected for ease of human interpretation, which does not automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with a minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training, and an NVIDIA DRIVE(TM) PX self-driving car computer, also running Torch 7, for determining where to drive. The system operates at 30 frames per second (FPS).
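The paper's network (often called PilotNet) takes a 66x200x3 YUV image through five convolutional layers and three fully connected layers down to a single steering output. As a sanity check on that architecture, the sketch below walks the layer stack in plain Python and counts parameters, assuming "valid" padding and the strides given in the paper; the total lands near the roughly 250 thousand parameters the authors report.

```python
# PilotNet layer stack from the NVIDIA paper: 66x200x3 YUV input,
# five conv layers, then a fully connected head ending in one steering output.

def conv_out(size, k, s):
    # output size of a "valid" (no padding) convolution
    return (size - k) // s + 1

layers = [
    # (name, out_channels, kernel, stride)
    ("conv1", 24, 5, 2),
    ("conv2", 36, 5, 2),
    ("conv3", 48, 5, 2),
    ("conv4", 64, 3, 1),
    ("conv5", 64, 3, 1),
]

h, w, c = 66, 200, 3
params = 0
for name, out_c, k, s in layers:
    params += out_c * (c * k * k + 1)        # kernel weights + biases
    h, w, c = conv_out(h, k, s), conv_out(w, k, s), out_c
    print(f"{name}: {c} x {h} x {w}")        # feature-map shape after this layer

flat = c * h * w                             # 64 * 1 * 18 = 1152 after conv5
for units in (100, 50, 10, 1):               # fully connected head
    params += flat * units + units
    flat = units

print(f"total parameters: {params}")         # 252219, close to the paper's "250 thousand"
```

Running the walk-through reproduces the 64x1x18 feature map before flattening and a total of 252,219 parameters, which matches the paper's stated figure to rounding.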
In this project, we use deep learning to imitate human driving in a simulator. Specifically, we use Keras to build a convolutional neural network that predicts the steering angle in the simulator.
In this project, an end-to-end solution for autonomous lane keeping is developed using imitation learning. The goal is to develop a robust lane-keeping system for a model-scale car (scale 1:8) for indoor scenarios (e.g., driving scenarios of the Carolo-Cup).
This project is my implementation of NVIDIA's PilotNet end-to-end deep CNN (built with Keras) to clone the behavior of a self-driving car.
Udacity Term 1 - Project 3 drives a car around a simulated track. The program uses a convolutional neural network that was taught how to drive: data (images and steering angles) was collected, then used to train the network and to drive a vehicle in a simulator.
In a new automotive application, we have used convolutional neural networks (CNNs) to map the raw pixels from a front-facing camera to the steering commands for a self-driving car. This powerful end-to-end approach means that with minimal training data from humans, the system learns to steer, with or without lane markings, on both local roads and highways. The system can also operate in areas with unclear visual guidance, such as parking lots or unpaved roads. In this project I implement the network architecture from NVIDIA's "End-to-End Deep Learning for Self-Driving Cars" (https://arxiv.org/pdf/1604.07316.pdf) and train it on a dataset of around 45,000 images (https://github.com/SullyChen/driving-datasets).
Keras/TensorFlow implementation of End-to-End Learning for self-driving cars (NVIDIA). Spatial factorization is additionally implemented to reduce network size.
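Spatial factorization here most likely means replacing each k x k convolution with a k x 1 convolution followed by a 1 x k convolution, which shrinks the parameter count; the exact scheme used by that repository is an assumption, but the arithmetic below shows why the technique reduces network size, using one of PilotNet's 5 x 5 layers as the example.

```python
# Compare parameters of a full kxk convolution against a kx1 + 1xk
# factorized pair (biases ignored for clarity).

def full_conv_params(c_in, c_out, k):
    # one kxk kernel per (input channel, output channel) pair
    return c_in * k * k * c_out

def factorized_params(c_in, c_out, k):
    # kx1 conv producing c_out channels, then a 1xk conv on those channels
    return c_in * k * c_out + c_out * k * c_out

# Example: PilotNet's 5x5 layer with 36 input and 48 output channels
full = full_conv_params(36, 48, 5)        # 43200
fact = factorized_params(36, 48, 5)       # 8640 + 11520 = 20160
print(full, fact, f"{fact / full:.0%}")   # the factorized pair uses ~47% of the weights
```

The trade-off is that the factorized pair is only equivalent to a rank-restricted k x k kernel, so some representational capacity is exchanged for the smaller size.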
End-to-End Deep Learning and Behavior Cloning for Self-Driving Cars
Keras implementation of NVIDIA's paper (https://arxiv.org/abs/1604.07316)
The objective of this project is to implement an end-to-end learning system that learns, from manually driven scenarios, how to drive safely around a circuit in a simulator. The CNN input is raw images and the output is the predicted steering angle.