Tuesday, May 12, 2015

Getting your fridge to order food for you with an RPi camera and a hacked-up Instacart API

This is a detailed post on how to get your fridge to autonomously order fruit for you when you are running low. An RPi takes a picture every day and uses my Caffe web query code to detect whether you have fruit. If your fridge is low on fruit, it orders more through Instacart, which is then delivered to your house. You can find the code, with a walkthrough, here:
https://github.com/StevenHickson/AutonomousFridge

Some of my posts are things I end up using every day and some are proof of concepts that I think are interesting. This is one of the latter. When I was younger, I heard an urban legend that Bill Gates had a fridge that ordered food for him and delivered it same-day whenever he was low. That story always intrigued me and I finally decided to implement a proof of concept of it. Below is how I set about doing this.

Hacking up an Instacart API

The first thing we need is a service that picks out food and delivers it to you. There are many of these, but as I live in Atlanta, I chose Instacart. Now we need an API. Unfortunately, Instacart doesn't provide one, so we will need to make our own. 

Head over to instacart.com, set up an account, and log in. Then right-click and view the page source. You are looking for a line in the source like this:
FirebaseUrl="https://instacart.firebaseio.com/carts/SOME_HASH_STRING_HERE"

That string is what you need to access your Instacart account. Open up a terminal and type:
curl https://instacart.firebaseio.com/carts/YOUR_HASH_STRING.json

You should get back a response that looks like this:
{"checkout_state":{"workflow_state":"shopping"},"items":{"1069829":{"created_at":1.409336316211E9,"qty":1,"user_id":YOUR_USER_ID}},"users":{"-JXAzAp6rgtM4u2dV2tI":{"id":YOUR_USER_ID,"name":"StevenH"},"-Jj2_kFsu5hvZRhx4KX1":{"id":YOUR_USER_ID,"name":"Steven H"},"-Jp8VvDusSDOyEiJ0J5D":{"id":YOUR_USER_ID,"name":"Steven H"}}}

Now we just need to figure out what the different items are. Pick a store and start adding items to your cart, then run the same command. If I add some fruit (oranges, bananas, strawberries, pears) to my cart and run the same curl request, I get something like this:
{"checkout_state":{"workflow_state":"shopping"},"items":{"1069829":{"created_at":1.409336316211E9,"qty":1,"user_id":YOUR_USER_ID},"8182033":{"created_at":1.431448385824E9,"qty":2,"user_id":YOUR_USER_ID},"8583398":{"created_at":1.431448413452E9,"qty":3,"user_id":YOUR_USER_ID},"8585519":{"created_at":1.431448355207E9,"qty":3,"user_id":YOUR_USER_ID},"8601780":{"created_at":1.424915467829E9,"qty":3,"user_id":YOUR_USER_ID},"8602830":{"created_at":1.43144840911E9,"qty":1,"user_id":YOUR_USER_ID}},"users":{"-JXAzAp6rgtM4u2dV2tI":{"id":YOUR_USER_ID,"name":"StevenH"},"-Jj2_kFsu5hvZRhx4KX1":{"id":YOUR_USER_ID,"name":"Steven H"},"-Jp8VvDusSDOyEiJ0J5D":{"id":YOUR_USER_ID,"name":"Steven H"}}}

Now empty your cart and we will make sure we can add all those things to your cart with a curl request. Take your response from earlier, and use it in the following line:
curl -X PATCH -d 'YOUR_FULL_CART_RESPONSE' https://instacart.firebaseio.com/carts/YOUR_HASH_STRING.json

Your cart should now be full of fruit again. Next, we need a way to recognize whether your fridge has fruit.
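For scripting, the two curl calls above can be wrapped in a small Python sketch. The helper names here are my own, and YOUR_HASH_STRING stands for the value from your own FirebaseUrl line:

```python
import json
import urllib.request

# YOUR_HASH_STRING is the value scraped from the FirebaseUrl line above.
CART_URL = "https://instacart.firebaseio.com/carts/YOUR_HASH_STRING.json"

def get_cart(url=CART_URL):
    """Fetch the current cart state, equivalent to the curl GET above."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def build_patch_body(saved_cart):
    """Serialize a previously saved cart response for the PATCH request."""
    return json.dumps(saved_cart)

def refill_cart(saved_cart, url=CART_URL):
    """Merge the saved items back into the cart, equivalent to the curl PATCH."""
    req = urllib.request.Request(
        url, data=build_patch_body(saved_cart).encode("utf-8"), method="PATCH")
    req.add_header("Content-Type", "application/json")
    return urllib.request.urlopen(req).read()
```

Save the JSON response from a full cart once, then pass it to refill_cart whenever you want the same fruit again.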

Detecting fruit in your fridge

For this we just need a Raspberry Pi 2 Model B (1 GB RAM, 900 MHz quad-core CPU) and a Raspberry Pi 5MP camera board module.
Set up your camera following these instructions and you will be ready to go. Set up your camera module in your fridge (or wherever you store your fruit).

We are going to use the Caffe framework for recognizing whether fruit is in the refrigerator drawer or not. You can read about how to do that here.
We are going to set this up similarly. Run the following commands to set things up:

git clone https://github.com/StevenHickson/AutonomousFridge.git
sudo apt-get install python python-pycurl python-lxml python-pip
sudo pip install grab
sudo apt-get install apache2
mkdir -p /dev/shm/images
sudo ln -s /dev/shm/images /var/www/images

Then forward port 5005 on your router to port 80 on the Pi.
Now you can edit test.sh with your info and run ./test.sh,
or add the following line to cron with crontab -e:
00 17 * * * /home/pi/AutonomousFridge/test.sh

This script takes a picture with raspistill and saves it to an in-memory directory that is symlinked into the Apache web root, so it is accessible on port 80. It then sends that URL to the Caffe web demo and retrieves the result.
The Caffe demo reports how confidently it detects fruit, as shown below:
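The daily check can be sketched in Python. The function names, image size, and fruit-label list here are illustrative, not the exact contents of test.sh:

```python
import subprocess

# Labels the Caffe demo might return for fruit; extend to taste.
FRUIT_LABELS = {"banana", "orange", "strawberry", "pear", "fig", "pineapple"}

def capture(path="/dev/shm/images/test.jpg"):
    """Snap a photo into the in-memory directory served by Apache."""
    subprocess.check_call(["raspistill", "-o", path, "-w", "640", "-h", "480"])

def image_url(external_ip, port=5005, name="test.jpg"):
    """URL the Caffe web demo fetches through the forwarded router port."""
    return "http://%s:%d/images/%s" % (external_ip, port, name)

def has_fruit(labels):
    """True if any returned classification label looks like fruit."""
    return any(f in label.lower() for label in labels for f in FRUIT_LABELS)
```

If has_fruit comes back False, the script fires the Instacart PATCH request from the first section.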



The end result is a script that runs every day at 5 pm. When your fridge doesn't have fruit, it adds a bunch of fruit to your Instacart cart. You can then place the order at your leisure to make sure you are home when it arrives. You could also use my PiAUISuite to have it text you about your fruit status. It can be a lot of fun to make a proof of concept of an old urban legend.

Consider donating to further my tinkering since I do all this and help people out for free.



Places you can find me

Thursday, April 23, 2015

RPi Videolooper Not booting: blinking cursor bug fix

VideoLooper 4 (bug fix)!!

Wanted to apologize to everyone for the blinking cursor bug with the newest videolooper. I introduced it without realizing it by over-aggressively shrinking the partition to ease the download. If you have that bug you can download the newest version below, which fixes that. 

Alternatively, you can do the following (Thanks to Anthony Calvano for this) :
SSH in or press Windows key + R at the blinking cursor; then you can extend the partition using the directions at "Manually resizing the SD card on Raspberry Pi" located at http://elinux.org/RPi_Resize_Flash_Partitions.



This image is compatible with the A, B, B+, and B2 versions.

I have a brand new version of the Raspberry Pi Videolooper that is compatible with the new B V2 and has a bunch of new features that streamline it for easy use.
It can now loop one video seamlessly (without audio, though) thanks to a solution from the talented individual over at Curioustechnologist.com (link here). Thanks also to Tim Schwartz (link here).

You can download the new image here:

https://onedrive.live.com/redir?resid=e0f17bd2b1ffe81!411&authkey=!AGW37ozZuaeyjDw&ithint=file%2czip

MIRROR: https://mega.co.nz/#!JBcDxLhQ!z41lixcpCS0-zvF2X9SkX-T98Gj5I4m3QIFjXKiZ5p4


For help you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or email me (be forewarned, I respond intermittently and sporadically)

How to set up the looper

  1. Copy this image to an SD card following these directions
  2. If you want to use USB, change usb=0 to usb=1 in looperconfig.txt on the SD card (It is in the boot partition which can be read by Windows and Mac).
  3. If you want to disable the looping autostart to make copying files easier, change autostart=1 to autostart=0 in looperconfig.txt
  4. If you want to change the audio source to 3.5 mm, change audio_source=hdmi to audio_source=local in looperconfig.txt.
  5. If you want to play a seamless video (supports only one for now), convert it according to these directions, put it in the videos folder, and then change seamless=0 to seamless=name-of-your-video.h264 in looperconfig.txt. (NOTE: This video won't have audio so take that into account).
  6. You may also want to expand your filesystem to fit your SD card by using sudo raspi-config as detailed here: http://elinux.org/RPi_Resize_Flash_Partitions.
  7. If you aren't using a USB (NTFS) put your video files in the /home/pi/videos directory with SFTP or by turning autostart off. Otherwise, put your video files in a directory named videos on the root directory of your USB.
  8. Set your config options and plug it in!
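Under the hood, the startup scripts just read those key=value flags out of looperconfig.txt. The real looper does this in shell, but the parsing can be sketched in a few lines of Python:

```python
def parse_looper_config(text):
    """Parse key=value lines from looperconfig.txt into a dict."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config
```

So a file containing usb=1 and autostart=0 yields {"usb": "1", "autostart": "0"}, which the boot scripts then act on.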

Features

  • NEW: Has an audio_source flag in the config file (audio_source=hdmi,audio_source=local)
  • NEW: Has a seamless flag in the config file (seamless=0,seamless=some-file.h264)
  • NEW: Has a new boot up splash screen
  • NEW: Compatible with the RPi B2 (1 GB RAM version)
  • NEW: Updated all packages (no heartbleed vulnerability, new omxplayer version)
  • Has a config file in the boot directory (looperconfig.txt)
  • Has an autostart flag in the config file (autostart=0,autostart=1)
  • Has a USB flag in the config file (usb=0,usb=1), just set usb=1, then plug a USB (NTFS) with a videos folder on it and boot
  • Only requires 4GB SD card and has a smaller zipped download file
  • Supports all Raspberry Pi video types (mp4,avi,mkv,mp3,mov,mpg,flv,m4v)
  • Supports subtitles (just put the srt file in the same directory as the videos)
  • Reduces time between videos
  • Allows spaces and special characters in the filename
  • Full screen with a black background and no flicker
  • SSH automatically enabled with user:pi and password:raspberry
  • Allows easy video conversion using ffmpeg (ffmpeg -i INFILE -sameq OUTFILE)
  • Has a default of HDMI audio output with one quick file change (replace -o hdmi with -o local in startvideos.sh).
  • Can support external HDDs and other directories easily with one quick file change (Change FILES=/home/pi/videos/ to FILES=/YOUR DIRECTORY/ in startvideos.sh)

Source code

The source code can be found on github here

This is perfect if you are working on a museum or school exhibit. Don't spend a lot of money and energy on a PC running Windows, only to have problems like the one below (courtesy of the Atlanta Aquarium)!

If you are a museum or other educationally based program and need help, you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or contact me by e-mail at help@stevenhickson.com


Tuesday, March 31, 2015

Classifying everything using your RPi Camera: Deep Learning with the Pi


For those who don't want to read, the code can be found on my github with a readme:
https://github.com/StevenHickson/RPi_CaffeQuery
You can also read about it on my Hackaday io page here.

What is object classification?

Object classification has been a very popular topic the past couple of years. Given an image, we want a computer to be able to tell us what that image is showing. The newest trend has been using convolutional neural networks, trained on large amounts of data, to classify images.

One of the bigger frameworks for this is the Caffe framework. For more on this see the Caffe home page.
You can test out their web demo here. It isn't great at recognizing people, but it is very good at cats, dogs, objects, and activities.


Why is this useful?

There are all kinds of autonomous tasks you can do with the RPi camera. Perhaps you want to know if your dog is in your living room, so the Pi can take his/her picture or tell him/her they are a good dog. Perhaps you want your RPi to recognize whether there is fruit in your fruit drawer so it can order you more when it is empty. The possibilities are endless.

How do convolutional neural networks work (a VERY simple overview)?

Convolutional neural networks are based loosely off how the human brain works. They are built of layers of many neurons that are "activated" by certain inputs. The input layer is connected in a network through a series of interconnected neurons in hidden layers like so:
[1]

Each neuron sends its signal to every neuron it is connected to; each signal is multiplied by the connection weight, summed with the others, and passed through a sigmoid function. The network is trained by adjusting the weights with backpropagation to minimize an error function over a set of inputs with known outputs.
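A single neuron from that description fits in a few lines of Python:

```python
import math

def sigmoid(x):
    """Squash any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    """One neuron: the weighted sum of its inputs passed through the sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(activation)
```

A full network chains layers of these together, and backpropagation nudges each weight in the direction that reduces the output error.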

How do we get this on the Pi?

Well, I went ahead and compiled Caffe on the RPi. Unfortunately, since there is no code to optimize the network with the Pi's GPU, classification takes ~20-25 s per image, which is far too slow.
Note: I did find a different optimized CNN network for the RPi by Pete Warden here. It looks great, but it still takes about 3 seconds per image, which still isn't fast enough.

You will also need the Raspberry Pi camera which you can get from here:
Raspberry PI 5MP Camera Board Module

A better option: Using the web demo with python

Instead, we can take advantage of the Caffe web demo and offload the processing. With this method, image classification takes ~1.5 s, which is usable for a system.

How does the code work?

We make a symbolic link from /dev/shm/images/ to /var/www for Apache and forward router port 5005 to port 80 on the Pi.
Then we use raspistill to take an image and save it to memory as /dev/shm/images/test.jpg. Since this is symlinked in /var/www, we can see it at http://YOUR-EXTERNAL-IP:5005/images/test.jpg.
Then we use grab to pull up the Caffe demo with our image and get the classification results. This is done in queryCNN.py, which parses out the results.
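That query step can be sketched with only the standard library instead of grab. The demo endpoint URL below is an assumption based on the public demo page and may change:

```python
import urllib.parse
import urllib.request

# The public Caffe demo endpoint as of this writing; it may move or change.
DEMO_URL = "http://demo.caffe.berkeleyvision.org/classify_url"

def build_query(image_url, demo=DEMO_URL):
    """Build the classification request URL for a publicly reachable image."""
    return demo + "?" + urllib.parse.urlencode({"imageurl": image_url})

def classify(image_url):
    """Fetch the demo's result page; the real queryCNN.py then scrapes the
    labels out of this HTML with grab/lxml."""
    with urllib.request.urlopen(build_query(image_url), timeout=30) as resp:
        return resp.read().decode("utf-8", "replace")
```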

What does the output look like?

Given a picture of some of my Pi components, I get this, which is pretty accurate:

Where can I get the code?

The code, with a readme, can be found on my github: https://github.com/StevenHickson/RPi_CaffeQuery

[1] http://white.stanford.edu/teach/index.php/An_Introduction_to_Convolutional_Neural_Networks


Tuesday, March 10, 2015

Introducing videolooper 3.0

Introducing videolooper 3.0!!

This image is compatible with the A, B, B+, and B2 versions.

I have a brand new version of the Raspberry Pi Videolooper that is compatible with the new B V2 and has a bunch of new features that streamline it for easy use.
It can now loop one video seamlessly (without audio, though) thanks to a solution from the talented individual over at Curioustechnologist.com (link here). Thanks also to Tim Schwartz (link here).

You can download the new image here:

https://onedrive.live.com/redir?resid=e0f17bd2b1ffe81!411&authkey=!AGW37ozZuaeyjDw&ithint=file%2czip

MIRROR: https://mega.co.nz/#!JBcDxLhQ!z41lixcpCS0-zvF2X9SkX-T98Gj5I4m3QIFjXKiZ5p4


For help you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or email me (be forewarned, I respond intermittently and sporadically)

Normally I try to avoid statements like this but I'm having some unforeseen financial setbacks lately so I'm breaking my rule. If any of you really like this software and have money to spare, please consider donating some money by clicking my paypal button at the bottom of the page. It would really help. Thanks

How to set up the looper

  1. Copy this image to an SD card following these directions
  2. If you want to use USB, change usb=0 to usb=1 in looperconfig.txt on the SD card (It is in the boot partition which can be read by Windows and Mac).
  3. If you want to disable the looping autostart to make copying files easier, change autostart=1 to autostart=0 in looperconfig.txt
  4. If you want to change the audio source to 3.5 mm, change audio_source=hdmi to audio_source=local in looperconfig.txt.
  5. If you want to play a seamless video (supports only one for now), convert it according to these directions, put it in the videos folder, and then change seamless=0 to seamless=name-of-your-video.h264 in looperconfig.txt. (NOTE: This video won't have audio so take that into account).
  6. You may also want to expand your filesystem to fit your SD card by using sudo raspi-config.
  7. If you aren't using a USB (NTFS) put your video files in the /home/pi/videos directory with SFTP or by turning autostart off. Otherwise, put your video files in a directory named videos on the root directory of your USB.
  8. Set your config options and plug it in!

Features

  • NEW: Has an audio_source flag in the config file (audio_source=hdmi,audio_source=local)
  • NEW: Has a seamless flag in the config file (seamless=0,seamless=some-file.h264)
  • NEW: Has a new boot up splash screen
  • NEW: Compatible with the RPi B2 (1 GB RAM version)
  • NEW: Updated all packages (no heartbleed vulnerability, new omxplayer version)
  • Has a config file in the boot directory (looperconfig.txt)
  • Has an autostart flag in the config file (autostart=0,autostart=1)
  • Has a USB flag in the config file (usb=0,usb=1), just set usb=1, then plug a USB (NTFS) with a videos folder on it and boot
  • Only requires 4GB SD card and has a smaller zipped download file
  • Supports all Raspberry Pi video types (mp4,avi,mkv,mp3,mov,mpg,flv,m4v)
  • Supports subtitles (just put the srt file in the same directory as the videos)
  • Reduces time between videos
  • Allows spaces and special characters in the filename
  • Full screen with a black background and no flicker
  • SSH automatically enabled with user:pi and password:raspberry
  • Allows easy video conversion using ffmpeg (ffmpeg -i INFILE -sameq OUTFILE)
  • Has a default of HDMI audio output with one quick file change (replace -o hdmi with -o local in startvideos.sh).
  • Can support external HDDs and other directories easily with one quick file change (Change FILES=/home/pi/videos/ to FILES=/YOUR DIRECTORY/ in startvideos.sh)

Source code

The source code can be found on github here

This is perfect if you are working on a museum or school exhibit. Don't spend a lot of money and energy on a PC running Windows, only to have problems like the one below (courtesy of the Atlanta Aquarium)!

If you are a museum or other educationally based program and need help, you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or contact me by e-mail at help@stevenhickson.com


Friday, February 13, 2015

Using your RPi2 for Valentine's Day

Thought I would share something cool I did with my Raspberry Pi that others might like for Valentine's Day.

I had a lot of devices sitting around that I realized I could combine into a good Valentine's Day surprise for my girlfriend.

First off I had my robotic bartender (bottender), which you can see in my Hackaday projects page.
I modified it so that it would pour wine on command.

Next I had a set of WeMo light switches that you can get here:

Belkin WeMo Light Switch, Wi-Fi Enabled

These are really nicely made. They are easy to install, WiFi-enabled, and easy to interface with using a custom API.
I found a nice API for the WeMo light switches here.
In the end, though, I created a simple shell-script API that uses curl. You can see mine on github here.
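For the curious, toggling a WeMo switch boils down to a single UPnP SOAP request. Here is a Python sketch of it; the port 49153, endpoint path, and SOAP action are the commonly reported WeMo defaults rather than something from my script, so verify them against your own device:

```python
import urllib.request

# SOAP envelope for the Belkin basicevent SetBinaryState action.
SOAP_BODY = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
    's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
    '<s:Body><u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">'
    '<BinaryState>%d</BinaryState>'
    '</u:SetBinaryState></s:Body></s:Envelope>'
)

def build_request(switch_ip, on, port=49153):
    """Build the UPnP SOAP request that toggles a WeMo switch on or off."""
    req = urllib.request.Request(
        "http://%s:%d/upnp/control/basicevent1" % (switch_ip, port),
        data=(SOAP_BODY % (1 if on else 0)).encode("utf-8"))
    req.add_header("Content-Type", 'text/xml; charset="utf-8"')
    req.add_header("SOAPACTION",
                   '"urn:Belkin:service:basicevent:1#SetBinaryState"')
    return req

def switch(switch_ip, on):
    """Send the request; switch_ip is your WeMo's LAN address."""
    return urllib.request.urlopen(build_request(switch_ip, on), timeout=5).read()
```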

I set up my WeMo light switch to control my bedroom fan. Then I sprinkled the top with rose petals.
Connecting this all together, I have a button on my phone that turns the fan on, sprinkling rose petals down, and turns bottender on, pouring two glasses of wine. The result:


Happy Valentine's Day everyone!




Wednesday, February 11, 2015

Control anything electrical with the Raspberry Pi using 433 MHz RF

I've gotten a lot of e-mails in the past about controlling lights, plugged-in utilities, etc. with my voicecommand software on the Raspberry Pi and I decided to make a quick easy guide.

Why would you want to do this?

This is an easy question to answer. I walk into a room and say, "Pi . . . let there be light," and behold, the lights turn on. Or, for those with less dramatic flair, just saying "lights" and having the lights turn on.
Having voice controlled lights in your house just seems convenient.

How can I do this?

It's easy actually.
I've created an image for the RPi that already has everything you need on it. This means it's easier than ever to control electronics with your voice.

You can download it at:
https://mega.co.nz/#!MM8W1JxR!4PlZ_1-dumasDUCYRI4LuiBwEJgtqhfoin0R8ls90NQ

Once you have the image on your Raspberry Pi, buy some RF 433 MHz light switches or sockets like these: 
Etekcity ZAP 5LX Auto-Programmable Function Wireless Remote Control Outlet Light Switch with 2 Remotes, 5-Pack Outlet

You can plug those into an outlet and plug anything (including a light) into them.

Next you need a 433 MHz transmitter and receiver for the RPi. You can get those here:
433Mhz RF transmitter and receiver kit for your Experiment

Then you can wire them up to the pi using the GPIO and use pilight to control them. You wire them in as below:
The smaller unit is plugged into voltage, ground, and pin 17 while the larger board is plugged into voltage, ground, and pin 18.

Once your transmitter and receiver are wired up, simply point the remote at the receiver, run pilight-debug, and press the button on the remote you want to learn. Now you can copy that string and use pilight-send to send it.
Example:
sudo killall pilight-daemon or sudo service pilight stop
sudo pilight-debug
Then press CTRL+C when you see an RF string. Mine looks something like this:
172 688 172 688 172 516 172 688 172 688 516 172 172 688 516 172 172 688 172 688 172 688 516 172 172 688 516 172 516 172 516 172 172 688 172 688 172 688 172 688 516 172 516 172 172 688 172 688 172 5814
**Note** Sometimes I have trouble escaping out of pilight-debug. When that happens, I kill it (kill -9) from a different terminal.
Now I can turn that light on.
First restart the pilight daemon:
sudo pilight-daemon or sudo service pilight start
Now send the command:
sudo pilight-send -p raw -c "172 688 172 688 172 516 172 688 172 688 516 172 172 688 516 172 172 688 172 688 172 688 516 172 172 688 516 172 516 172 516 172 172 688 172 688 172 688 172 688 516 172 516 172 172 688 172 688 172 5814"
but with your own string. Do this for the on and off settings for as many lights as you want.
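If you'd rather drive this from Python than from a shell script, the pilight-send call can be wrapped like so. The LAMP_ON string is just the example code captured above; substitute the strings from your own remote:

```python
import subprocess

def build_cmd(raw_code):
    """Assemble the pilight-send invocation for a captured raw code."""
    return ["sudo", "pilight-send", "-p", "raw", "-c", raw_code]

def send_raw(raw_code):
    """Replay a code captured with pilight-debug (needs the daemon running)."""
    subprocess.check_call(build_cmd(raw_code))

# The example "on" code from above; capture your own with pilight-debug.
LAMP_ON = ("172 688 172 688 172 516 172 688 172 688 516 172 172 688 516 172 "
           "172 688 172 688 172 688 516 172 172 688 516 172 516 172 516 172 "
           "172 688 172 688 172 688 172 688 516 172 516 172 172 688 172 688 "
           "172 5814")
```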

Now I can add that command to a script and use voicecommand to run it when it hears the right phrase!
You can do this with voicecommand -e

You can read more about it at the hackaday projects page here.

And here is a quick video demo:

Also, a contact of mine recently did a computer vision kickstarter. You should join that if you want to learn more about computer vision.
https://www.kickstarter.com/projects/1186001332/pyimagesearch-gurus-become-a-computer-vision-openc
