Artificial Intelligence

Self Driving RC Car

The main purpose of this project is to demonstrate the applicability of artificial intelligence in the field of transportation and to illustrate the concepts of machine learning and neural networks. With the completion of this project we have demonstrated the usability of this technology in the near future, and how it can improve user safety.

The model was trained for 3 hours on a well-marked road track, producing around 216,000 training images and resulting in an accuracy of 83%.

 

Hardware Components

  • Raspberry Pi 3
  • Pi Camera
  • 5V Motors (*2)
  • Relay (*4)
  • Transistors and Diodes
  • Power Source (10000 mAh)
  • Wheels (*3)

Software Components

  • Raspbian OS
  • Flask Library
  • TensorFlow library
  • Keras library
  • Socket

 

How It Works

The software runs in two modes: auto and manual. The video stream from the Raspberry Pi can be viewed on the computer. In manual mode the user provides the input controls, while in auto mode the computer program itself sends them. A TCP/IP server runs on the computer, to which the images captured by the Raspberry Pi are streamed; input controls are also sent to the Raspberry Pi through this connection. A Python script running on the Raspberry Pi sends high/low values to the GPIO pins. In manual mode, a program running on the computer asks for input controls. The captured images are saved in a directory, and the images and input controls (as NumPy arrays) are saved in a CSV file as training data.
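The computer-side TCP/IP server described above can be sketched as follows. The port number and the 4-byte length-prefix framing are illustrative assumptions, not the project's exact wire protocol:

```python
# Sketch of the computer-side TCP server that receives JPEG frames
# streamed from the Raspberry Pi. Host, port, and the length-prefix
# framing are assumptions for illustration.
import socket
import struct

def recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

def frames(host="0.0.0.0", port=8000):
    """Accept one Pi connection and yield JPEG frames as bytes."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                # Each frame arrives as a 4-byte big-endian length
                # header followed by the JPEG payload.
                (length,) = struct.unpack(">I", recv_exact(conn, 4))
                yield recv_exact(conn, length)
```

Each yielded frame can then be decoded for display, or saved alongside the current input control as training data.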

During training, the images and input controls are split into train, validation and test sets. The training data is passed to a convolutional neural network, which adjusts its weights and biases based on the data. Hyperparameters are then tuned with the help of the validation data, and accuracy is calculated on the test data. If the accuracy is acceptable, the weights and biases are saved as an h5 file.
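The training step above could be sketched with Keras as below. The layer sizes, the four-way control output, the 70/15/15 split and the acceptance threshold are assumptions for illustration, not the project's exact architecture:

```python
# Sketch of the training pipeline: split the data, fit a small CNN,
# evaluate on held-out test data, and save weights as an .h5 file.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(66, 200, 3), n_controls=4):
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        # One output per control, e.g. forward/left/right/stop (assumed).
        layers.Dense(n_controls, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def train(images, controls):
    # Split into train (70%), validation (15%) and test (15%) sets.
    x_tr, x_tmp, y_tr, y_tmp = train_test_split(images, controls, test_size=0.3)
    x_val, x_te, y_val, y_te = train_test_split(x_tmp, y_tmp, test_size=0.5)
    model = build_model(input_shape=images.shape[1:],
                        n_controls=controls.shape[1])
    model.fit(x_tr, y_tr, validation_data=(x_val, y_val), epochs=10)
    _, acc = model.evaluate(x_te, y_te)
    if acc >= 0.8:               # acceptance threshold (assumption)
        model.save("model.h5")   # weights and biases saved as an h5 file
    return acc
```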

In auto mode, a program running on the computer takes each image frame from the Raspberry Pi camera and passes the NumPy array of the image to the convolutional network, which predicts an output for each image. At the same time, another program running on the computer detects objects and calculates their distance. If the detected object is a stop signal and the minimum distance is reached, a stop signal is sent to the Raspberry Pi.
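The auto-mode loop might look like the sketch below. The control label order, the distance threshold, and the `get_frame`, `send_control` and `detect_stop_sign` callables are placeholders standing in for the project's own streaming, REST and detection code:

```python
# Sketch of the auto-mode driving loop: predict a control for each
# frame with the saved CNN, but override it with "stop" when the
# object detector reports a stop signal within the minimum distance.
import numpy as np

CONTROLS = ["forward", "left", "right", "stop"]  # assumed label order
MIN_STOP_DISTANCE_CM = 25                        # assumed threshold

def drive(model, get_frame, send_control, detect_stop_sign):
    while True:
        frame = get_frame()          # HxWx3 numpy array, None to finish
        if frame is None:
            break
        batch = frame[np.newaxis].astype("float32") / 255.0
        probs = model.predict(batch, verbose=0)[0]
        control = CONTROLS[int(np.argmax(probs))]
        seen, distance_cm = detect_stop_sign(frame)
        if seen and distance_cm <= MIN_STOP_DISTANCE_CM:
            control = "stop"         # override the CNN's prediction
        send_control(control)        # e.g. a POST to the Pi's REST API
```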

 

Algorithm

Power is turned on in both the server (computer) and the client (Raspberry Pi and components)
The Raspberry Pi camera, Raspberry Pi socket client and RPi REST API server are started
The socket server is started on the computer
Server and client are connected

Check whether it is training mode or autonomous mode
If (training mode)
    Stream and capture camera frames from the RPi camera via socket
    Enable controls from the computer
    While (controlling)
        Save the image frame data and control label
        Send control data through the REST API
        RPi responds to the REST API client data and the car is controlled
    Stop control
    Training data is saved into an npy file
Else
    Stream and capture camera frames from the RPi camera via socket
    While (controlling)
        Compare the image frame label with the control label
        Predict the control based on the comparison
        Send control data through the REST API
        RPi responds to the REST API client data and the car is controlled
    Stop control
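The Pi-side step "RPi responds to the REST API client data and the car is controlled" could be sketched as a small Flask endpoint that maps a control command to GPIO high/low values. The route name, pin numbers and command-to-pin mapping are assumptions about the wiring, and the actual `RPi.GPIO.output` call is left as a comment since it only runs on the Pi:

```python
# Sketch of the Pi-side REST API (Flask) that turns an incoming control
# command into high/low states for the two motor relay pins.
from flask import Flask, request, jsonify

# BCM pin numbers for the left/right motor relays (assumed wiring).
LEFT_MOTOR, RIGHT_MOTOR = 17, 27

def pin_states(control):
    """Translate a control command into (left, right) high/low values."""
    return {
        "forward": (1, 1),
        "left":    (0, 1),   # stop the left motor to turn left
        "right":   (1, 0),
        "stop":    (0, 0),
    }[control]

app = Flask(__name__)

@app.route("/control", methods=["POST"])
def control():
    command = request.get_json()["control"]
    left, right = pin_states(command)
    # On the Pi: RPi.GPIO.output(LEFT_MOTOR, left) and
    #            RPi.GPIO.output(RIGHT_MOTOR, right) would go here.
    return jsonify({"left": left, "right": right})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```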

Project Report

Source Code

  • Date

    February 26, 2017

  • Skills

    Raspberry Pi 3, Pi Camera, Raspbian OS, Flask library, TensorFlow library, Keras library, scikit-learn library, Socket, Convolutional Neural Network

  • Client

    University Semi-Final Project
