Tuesday, March 31, 2015

Classifying everything using your RPi Camera: Deep Learning with the Pi

P.S. For those of you wanting to learn more about Deep Learning, check out this book:

For those who don't want to read, the code can be found on my GitHub with a readme:
https://github.com/StevenHickson/RPi_CaffeQuery
You can also read about it on my Hackaday.io page here.

What is object classification?

Object classification has been a very popular topic for the past couple of years. Given an image, we want a computer to be able to tell us what that image shows. The newest trend is to classify images using convolutional neural networks trained on large amounts of data.

One of the bigger frameworks for this is Caffe. For more on it, see the Caffe home page.
You can test out their web demo here. It isn't great at people, but it is very good at cats, dogs, objects, and activities.


Why is this useful?

There are all kinds of autonomous tasks you can do with the RPi camera. Perhaps you want to know if your dog is in your living room, so the Pi can take his/her picture or tell him/her they are a good dog. Perhaps you want your RPi to recognize whether there is fruit in your fruit drawer so it can order you more when it is empty. The possibilities are endless.

How do convolutional neural networks work (a VERY simple overview)?

Convolutional neural networks are loosely based on how the human brain works. They are built of layers of many neurons that are "activated" by certain inputs. The input layer is connected through a series of interconnected neurons in hidden layers, like so:
[Figure: a feed-forward network with an input layer, hidden layers, and an output layer; image from [1]]

Each neuron sends its signal to every neuron it is connected to; each incoming signal is multiplied by the connection weight, summed, and passed through a sigmoid function to produce the neuron's output. The network is trained by adjusting the weights to minimize an error function over a set of inputs with known outputs, using backpropagation.
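To make that concrete, here is a minimal sketch in Python of a single neuron's forward pass, assuming a sigmoid activation; the inputs, weights, and bias are made-up numbers purely for illustration:

    import math

    def sigmoid(x):
        # Squash any real value into the range (0, 1)
        return 1.0 / (1.0 + math.exp(-x))

    def neuron_output(inputs, weights, bias):
        # Multiply each incoming signal by its connection weight,
        # sum them up, and run the total through the sigmoid
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return sigmoid(total)

    # A neuron with three incoming connections (values are arbitrary)
    print(neuron_output([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))

Training then amounts to nudging those weights (and biases) via backpropagation until the network's outputs match the known labels as closely as possible.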

How do we get this on the Pi?

Well, I went ahead and compiled Caffe on the RPi. Unfortunately, since it doesn't have code to optimize the network with the Pi's GPU, classification takes ~20-25 s per image, which is far too slow.
Note: I did find a different optimized CNN network for the RPi by Pete Warden here. It looks great, but it still takes about 3 seconds per image, which still isn't fast enough.

You will also need the Raspberry Pi camera, which you can get from here:
Raspberry PI 5MP Camera Board Module

A better option: Using the web demo with Python

Instead, we can take advantage of the Caffe web demo and use it to cut the processing time even further. With this method, image classification takes ~1.5 s, which is usable for a real system.

How does the code work?

We make a symbolic link from /dev/shm/images/ to our /var/www for Apache and forward router port 5050 to port 80 on the Pi.
Then we use raspistill to take an image and save it to memory as /dev/shm/images/test.jpg. Since this is symlinked in /var/www, we can see it at http://YOUR-EXTERNAL-IP:5050/images/test.jpg.
Then we use Grab to pull up the Caffe web demo with our image and scrape the classification results. This is done in queryCNN.py; a sketch of the whole pipeline follows.
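Here is a minimal sketch of that pipeline in Python, under some stated assumptions: I use requests instead of Grab for brevity, and the demo URL, the classify_url endpoint, and its imageurl parameter are my guesses at the web demo's interface; the real request-and-scrape logic lives in queryCNN.py in the repo.

    import subprocess
    import requests

    # One-time setup (as root), per the steps above:
    #   mkdir -p /dev/shm/images
    #   ln -s /dev/shm/images /var/www/images
    # and forward router port 5050 to port 80 on the Pi.

    # 1. Capture a frame straight to the ramdisk (no SD card wear)
    subprocess.call(['raspistill', '-o', '/dev/shm/images/test.jpg', '-t', '1'])

    # 2. The image is now reachable from outside through the forwarded port
    image_url = 'http://YOUR-EXTERNAL-IP:5050/images/test.jpg'

    # 3. Ask the Caffe web demo to classify it (endpoint name is an assumption)
    demo = 'http://demo.caffe.berkeleyvision.org/classify_url'
    response = requests.get(demo, params={'imageurl': image_url})

    # 4. The demo replies with an HTML page; the class labels still have to
    #    be scraped out of it, which is what queryCNN.py does.
    print(response.status_code, len(response.text))

The win here is that the heavy CNN runs on the demo's server instead of the Pi, so the Pi only pays for the image capture and one HTTP round trip (~1.5 s total).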

What does the output look like?

Given a picture of some of my Pi components, I get this, which is pretty accurate:

Where can I get the code?

The code, with a readme, is on my GitHub: https://github.com/StevenHickson/RPi_CaffeQuery

[1] http://white.stanford.edu/teach/index.php/An_Introduction_to_Convolutional_Neural_Networks

Consider donating to further my tinkering since I do all this and help people out for free.




Tuesday, March 10, 2015

Introducing videolooper 3.0

Introducing videolooper 3.0!!

This image is compatible with the Model A, B, B+, and Pi 2 Model B.

I have a brand-new version of the Raspberry Pi Videolooper that is compatible with the new Pi 2 Model B and has a bunch of new features that streamline it for easy use.
It can now loop one video seamlessly (though without audio) thanks to a solution from the talented individual over at Curioustechnologist.com (link here), and thanks again to Tim Schwartz as well (link here).

You can download the new image here:

https://onedrive.live.com/redir?resid=e0f17bd2b1ffe81!411&authkey=!AGW37ozZuaeyjDw&ithint=file%2czip

MIRROR: https://mega.co.nz/#!JBcDxLhQ!z41lixcpCS0-zvF2X9SkX-T98Gj5I4m3QIFjXKiZ5p4


For help, you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or email me (be forewarned, I respond intermittently and sporadically).

Normally I try to avoid statements like this, but I've had some unforeseen financial setbacks lately, so I'm breaking my rule. If you really like this software and have money to spare, please consider donating by clicking the PayPal button at the bottom of the page. It would really help. Thanks!

How to set up the looper

  1. Copy this image to an SD card following these directions.
  2. If you want to use USB, change usb=0 to usb=1 in looperconfig.txt on the SD card (it is in the boot partition, which can be read by Windows and Mac).
  3. If you want to disable the looping autostart to make copying files easier, change autostart=1 to autostart=0 in looperconfig.txt.
  4. If you want to change the audio source to the 3.5 mm jack, change audio_source=hdmi to audio_source=local in looperconfig.txt.
  5. If you want to play a seamless video (only one is supported for now), convert it according to these directions, put it in the videos folder, and change seamless=0 to seamless=name-of-your-video.h264 in looperconfig.txt. (NOTE: This video won't have audio, so take that into account.)
  6. You may also want to expand the filesystem to fit your SD card by using sudo raspi-config.
  7. If you aren't using a USB drive (NTFS), put your video files in the /home/pi/videos directory with SFTP or by turning autostart off. Otherwise, put your video files in a directory named videos at the root of your USB drive.
  8. Set your config options (see the sample looperconfig.txt below) and plug it in!
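For reference, a complete looperconfig.txt using only the flags described above might look like this (here with USB playback on and audio out of the 3.5 mm jack):

    usb=1
    autostart=1
    audio_source=local
    seamless=0

Leave seamless=0 unless you have converted a single .h264 file for seamless looping, in which case set it to that filename.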

Features

  • NEW: Has an audio_source flag in the config file (audio_source=hdmi,audio_source=local)
  • NEW: Has a seamless flag in the config file (seamless=0,seamless=some-file.h264)
  • NEW: Has a new boot up splash screen
  • NEW: Compatible with the RPi B2 (1 GB RAM version)
  • NEW: Updated all packages (no heartbleed vulnerability, new omxplayer version)
  • Has a config file in the boot directory (looperconfig.txt)
  • Has an autostart flag in the config file (autostart=0,autostart=1)
  • Has a USB flag in the config file (usb=0,usb=1); just set usb=1, then plug in a USB drive (NTFS) with a videos folder on it and boot
  • Only requires a 4GB SD card and has a smaller zipped download file
  • Supports all Raspberry Pi video types (mp4,avi,mkv,mp3,mov,mpg,flv,m4v)
  • Supports subtitles (just put the srt file in the same directory as the videos)
  • Reduces time between videos
  • Allows spaces and special characters in the filename
  • Full screen with a black background and no flicker
  • SSH automatically enabled with user:pi and password:raspberry
  • Allows easy video conversion using ffmpeg (ffmpeg -i INFILE -sameq OUTFILE)
  • Defaults to HDMI audio output, changeable with one quick file edit (replace -o hdmi with -o local in startvideos.sh)
  • Can support external HDDs and other directories easily with one quick file change (Change FILES=/home/pi/videos/ to FILES=/YOUR DIRECTORY/ in startvideos.sh)

Source code

The source code can be found on GitHub here.

This is perfect if you are working on a museum or school exhibit. Don't spend a lot of money and energy on a PC running Windows only to have problems like the one below (courtesy of the Atlanta Aquarium)!

If you are a museum or other education-based program and need help, you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or contact me by e-mail at help@stevenhickson.com.

Consider donating to further my tinkering since I do all this and help people out for free.


