Monday, January 27, 2020

Powering Google Nest Cameras with Power over Ethernet





Recently I embarked on replacing all of a friend's old cameras with Nest cameras. Unfortunately, the old cameras were wired up with Ethernet cable carrying custom 12V power, so that was the only wiring available. I love the Nest Outdoor Cameras, but I don't know why they don't have a PoE option. I didn't want to rewire parts of the house or have the unsightly Nest power cord.

I decided to make my own solution and figured I would share the details for others who want to avoid the power cord (note: this will probably void your Nest cam warranty).




Above is what the original camera wiring looked like. Two wires for power (12V) and two for data.

You might think: okay, just change the power supply on the other side to 5V and call it a day. Unfortunately, Ethernet wire doesn't carry 5V very well. The thin conductors have significant resistance, so when a load is applied (1.5A for the Nest) over any distance, the voltage drops and the Nest cam doesn't work.
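To put rough numbers on it, here is a back-of-the-envelope sketch; the cable length and wire gauge below are assumptions for illustration, not measurements from this install:

# Rough voltage drop over one Cat5 pair at the Nest's load.
# Assumed values: ~20 m run of 24 AWG copper (~0.084 ohms/m per conductor).
OHMS_PER_M = 0.084
run_m = 20.0
current_a = 1.5
loop_ohms = 2 * run_m * OHMS_PER_M  # current flows out on one wire and back on the other
drop_v = current_a * loop_ohms
print(round(drop_v, 1))  # ~5.0 V lost in the cable

Under those assumptions, a 5V supply would arrive at the camera nearly dead, while a 12V supply still delivers around 7V, enough headroom to regulate down to 5V at the camera end.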

So, at first, I figured I'd just throw a voltage regulator on there and it would be fine (I used an LM7805). This is shown below. However, it was getting pretty hot, and I was worried it wouldn't last for several years.

I ended up finding the perfect circuit for this on Amazon here.
It's an LM317 adjustable linear regulator power supply module with a heat sink and everything you need. To prep each one, I just turned the adjustment knob until the output read 5V.
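For reference, the LM317 sets its output with a two-resistor feedback divider: Vout ≈ 1.25 × (1 + R2/R1). Turning the knob on the module just changes R2. A quick sanity check with illustrative resistor values (not ones read off this particular module):

# LM317 output voltage from its feedback divider (the small Iadj term is negligible here).
R1, R2 = 240.0, 720.0  # ohms; illustrative values, not the module's actual parts
print(1.25 * (1 + R2 / R1))  # -> 5.0 V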
Finally, I got some female USB ports here and soldered them to the output of each power supply module as shown below.



Then all I had to do was plug the Ethernet wires with the 12V power into the input of each supply, put it in a small enclosure, plug the Nest cam in, and push the Nest cord up into the camera hole.



Here is the final result: a nice, clean Ethernet-based Nest solution. Hopefully Nest comes out with some sort of PoE option. However, if you are looking to do this before then, the links above should be all you need to replicate it. This should work for any supply from 9V up to the voltage regulator's input limit, so you aren't locked into a 12V-only setup like mine.



Feel free to contact me with any questions or if you have products or ideas you think I should try.

Consider donating here to support my tinkering habits.



Monday, March 26, 2018

Computer Vision Could Have Avoided Fatal Uber Crash

Uber's autonomous car hit and killed a pedestrian this week. You can see the video footage from just before the crash here.

There have been a lot of articles talking about how the driver could have avoided the accident. I'm instead going to take a look at how state-of-the-art computer vision methods could have avoided it.

With autonomous car systems, we are going to need to have backups of backups in order to prevent these kinds of accidents. Uber has not yet discussed why the LiDAR or depth sensors failed to prevent this. Velodyne has released a statement saying it wasn't a failure of their hardware (which is most likely true from my experience with their technology).

But even setting aside specialized sensors, just looking at the video footage, it's clear something went wrong here.

Using just the supplied video footage, I ran some frames through a state-of-the-art neural network called Mask R-CNN, trained on the COCO dataset. Note that COCO isn't an autonomous driving dataset, but it does contain people, cars, and bicycles, so it is relevant. Below is the algorithm's output on some frames preceding the accident (more than a second before).



These images raise some interesting questions for Uber. If the Velodyne LiDAR should have caught this and the computer vision system should have caught this, then why did it happen?
This implies there was most likely an avoidable bug or failure in Uber's software stack, and that it caused someone's death.
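For those who want to poke at this kind of experiment themselves, here is a minimal sketch using torchvision's pretrained Mask R-CNN on COCO. This is not the implementation that produced the figures above, and the frame path is a placeholder:

import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Mask R-CNN pretrained on COCO, which includes person (1), bicycle (2), and car (3)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

frame = convert_image_dtype(read_image("frame.jpg"), torch.float)  # placeholder frame from the video
with torch.no_grad():
    out = model([frame])[0]

keep = out["scores"] > 0.5  # keep only reasonably confident detections
for label, score in zip(out["labels"][keep], out["scores"][keep]):
    print(int(label), round(float(score), 2))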

This is not a post to say autonomous driving is bad or that we shouldn't pursue it; we most certainly should. This is just the first case of an easily avoidable death. We as researchers and programmers need to be careful about testing these methods and building in many backups to handle easy edge cases. Companies pushing this technology should be even more rigorous, testing many edge cases and being confident in their software.



Tuesday, May 12, 2015

Getting your fridge to order food for you with an RPi camera and a hacked-up Instacart API

This is a detailed post on how to get your fridge to autonomously order fruit for you when you are running low. An RPi takes a picture every day and detects whether you have fruit using my Caffe web query code. If your fridge is low on fruit, it orders more using Instacart, which is then delivered to your house. You can find the code with a walkthrough here:
https://github.com/StevenHickson/AutonomousFridge

Some of my posts are things I end up using every day, and some are proofs of concept that I think are interesting. This is one of the latter. When I was younger, I heard an urban legend that Bill Gates had a fridge that ordered food for him and had it delivered same-day whenever he was low. That story always intrigued me, and I finally decided to implement a proof of concept of it. Below is how I set about doing it.

Hacking up an Instacart API

The first thing we need is a service that picks out food and delivers it to you. There are many of these, but as I live in Atlanta, I chose Instacart. Now we need an API. Unfortunately, Instacart doesn't provide one, so we will need to make our own. 

Head over to instacart.com, set up an account, and log in. Then right-click and view the page source. You are looking for a line in the source like this:
FirebaseUrl="https://instacart.firebaseio.com/carts/SOME_HASH_STRING_HERE

That hash string is what you need to access your Instacart cart. Open up a terminal and type:
curl https://instacart.firebaseio.com/carts/YOUR_HASH_STRING.json

You should get back a response that looks like this:
{"checkout_state":{"workflow_state":"shopping"},"items":{"1069829":{"created_at":1.409336316211E9,"qty":1,"user_id":YOUR_USER_ID}},"users":{"-JXAzAp6rgtM4u2dV2tI":{"id":YOUR_USER_ID"name":"StevenH"},"-Jj2_kFsu5hvZRhx4KX1":{"id":YOUR_USER_ID,"name":"Steven H"},"-Jp8VvDusSDOyEiJ0J5D":{"id":YOUR_USER_ID,"name":"Steven H"}}}

Now we just need to figure out the IDs of different items. Pick a store, start adding items to your cart, and run the same command. If I add some fruit (oranges, bananas, strawberries, pears) to my cart and then run the same curl request, I get something like this:
{"checkout_state":{"workflow_state":"shopping"},"items":{"1069829":{"created_at":1.409336316211E9,"qty":1,"user_id":YOUR_USER_ID},"8182033":{"created_at":1.431448385824E9,"qty":2,"user_id":YOUR_USER_ID},"8583398":{"created_at":1.431448413452E9,"qty":3,"user_id":YOUR_USER_ID},"8585519":{"created_at":1.431448355207E9,"qty":3,"user_id":YOUR_USER_ID},"8601780":{"created_at":1.424915467829E9,"qty":3,"user_id":YOUR_USER_ID},"8602830":{"created_at":1.43144840911E9,"qty":1,"user_id":YOUR_USER_ID}},"users":{"-JXAzAp6rgtM4u2dV2tI":{"id":22232545,"name":"StevenH"},"-Jj2_kFsu5hvZRhx4KX1":{"id":YOUR_USER_ID,"name":"Steven H"},"-Jp8VvDusSDOyEiJ0J5D":{"id":YOUR_USER_ID,"name":"Steven H"}}}

Now empty your cart, and we will make sure we can add all those things back with a curl request. Take your response from earlier and use it in the following line:
curl -X PATCH -d 'YOUR_FULL_CART_RESPONSE' https://instacart.firebaseio.com/carts/YOUR_HASH_STRING.json
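
If you'd rather script this than shell out to curl, the same Firebase calls wrap up neatly in a few lines of Python. This is just a sketch; the item ID and user ID below are placeholders you discover with the steps above:

import requests

CART_URL = "https://instacart.firebaseio.com/carts/YOUR_HASH_STRING.json"

def get_cart():
    # Same as the curl GET above: returns the cart as a dict
    return requests.get(CART_URL).json()

def add_item(item_id, qty, user_id):
    # Firebase's REST PATCH merges into the existing cart rather than replacing it
    payload = {"items": {item_id: {"qty": qty, "user_id": user_id}}}
    requests.patch(CART_URL, json=payload)

add_item("8182033", 2, "YOUR_USER_ID")  # placeholder IDs taken from the responses above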

Your cart should now be full of fruit again. Next, we just need a way to recognize whether your fridge has fruit or not.

Detecting fruit in your fridge

For this, we just need a Raspberry Pi 2 Model B (1GB RAM, 900 MHz quad-core CPU) and a Raspberry Pi 5MP camera board module.
Set up your camera following these instructions and you will be ready to go. Mount the camera module in your fridge (or wherever you store your fruit).

We are going to use the Caffe framework to recognize whether or not fruit is in the refrigerator drawer. You can read about how to do that here.
We will set this up similarly. Run the following commands:

git clone https://github.com/StevenHickson/AutonomousFridge.git
sudo apt-get install python python-pycurl python-lxml python-pip
sudo pip install grab
sudo apt-get install apache2
mkdir -p /dev/shm/images
sudo ln -s /dev/shm/images /var/www/images

Then you must forward port 5005 on your router to port 80 on the Pi.
Now you can edit test.sh with your info and run ./test.sh.
Or add the following line to cron with crontab -e:
00 17 * * * /home/pi/AutonomousFridge/test.sh

This script takes a picture with raspistill and puts it in a symlinked in-memory directory accessible from port 80. Then it sends that image's URL to the Caffe web demo and gets the result.
The Caffe demo classifies the existence of fruit quite well, as shown below:



The end result of this is a script that runs every day at 5 pm. When your fridge doesn't have fruit, it adds a bunch of fruit to your Instacart cart. You can order it at your leisure to make sure you are home when it arrives. You could also use my PiAUISuite to get it to text you about your fruit status. It can be a lot of fun to make a proof of concept of an old urban legend.

Consider donating to further my tinkering since I do all this and help people out for free.




Thursday, April 23, 2015

RPi Videolooper not booting: blinking cursor bug fix

VideoLooper 4 (bug fix)!!

I wanted to apologize to everyone for the blinking cursor bug in the newest videolooper. I introduced it without realizing it by over-aggressively shrinking the partition to ease the download. If you have that bug, you can download the newest version below, which fixes it.

Alternatively, you can do the following (thanks to Anthony Calvano for this):
SSH in, or press the Windows key + R at the blinking cursor; then you can extend the partition using the directions under "Manually resizing the SD card on Raspberry Pi" at http://elinux.org/RPi_Resize_Flash_Partitions.



This image is compatible with the A, B, B+, and B2 versions.

I have a brand new version of the Raspberry Pi Videolooper that is compatible with the new B V2 and has a bunch of new features that streamline it for easy use.
It can now loop one video seamlessly (though without audio) thanks to a solution from the talented individual over at Curioustechnologist.com (link here). Thanks again to Tim Schwartz as well (link here).

You can download the new image here:

https://onedrive.live.com/redir?resid=e0f17bd2b1ffe81!411&authkey=!AGW37ozZuaeyjDw&ithint=file%2czip

MIRROR: https://mega.co.nz/#!JBcDxLhQ!z41lixcpCS0-zvF2X9SkX-T98Gj5I4m3QIFjXKiZ5p4


For help you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or email me (be forewarned, I respond intermittently and sporadically).

How to set up the looper

  1. Copy this image to an SD card following these directions
  2. If you want to use USB, change usb=0 to usb=1 in looperconfig.txt on the SD card (It is in the boot partition which can be read by Windows and Mac).
  3. If you want to disable the looping autostart to make copying files easier, change autostart=1 to autostart=0 in looperconfig.txt
  4. If you want to change the audio source to 3.5 mm, change audio_source=hdmi to audio_source=local in looperconfig.txt.
  5. If you want to play a seamless video (supports only one for now), convert it according to these directions, put it in the videos folder, and then change seamless=0 to seamless=name-of-your-video.h264 in looperconfig.txt. (NOTE: This video won't have audio so take that into account).
  6. You may also want to expand your filesystem to fit your SD card by using sudo raspi-config as detailed here: http://elinux.org/RPi_Resize_Flash_Partitions.
  7. If you aren't using a USB drive (NTFS), put your video files in the /home/pi/videos directory via SFTP or by turning autostart off. Otherwise, put your video files in a directory named videos at the root of your USB drive.
  8. Set your config options (see the sample looperconfig.txt below) and plug it in!
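
Putting steps 2-5 together, a complete looperconfig.txt might look like this (the values here are just one illustrative combination):

usb=0
autostart=1
audio_source=hdmi
seamless=0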

Features

  • NEW: Has an audio_source flag in the config file (audio_source=hdmi,audio_source=local)
  • NEW: Has a seamless flag in the config file (seamless=0,seamless=some-file.h264)
  • NEW: Has a new boot up splash screen
  • NEW: Compatible with the RPi B2 (1 GB RAM version)
  • NEW: Updated all packages (no heartbleed vulnerability, new omxplayer version)
  • Has a config file in the boot directory (looperconfig.txt)
  • Has an autostart flag in the config file (autostart=0,autostart=1)
  • Has a USB flag in the config file (usb=0,usb=1); just set usb=1, then plug in a USB drive (NTFS) with a videos folder on it and boot
  • Only requires 4GB SD card and has a smaller zipped download file
  • Supports all Raspberry Pi video types (mp4,avi,mkv,mp3,mov,mpg,flv,m4v)
  • Supports subtitles (just put the srt file in the same directory as the videos)
  • Reduces time between videos
  • Allows spaces and special characters in the filename
  • Full screen with a black background and no flicker
  • SSH automatically enabled with user:pi and password:raspberry
  • Allows easy video conversion using ffmpeg (ffmpeg -i INFILE -sameq OUTFILE)
  • Has a default of HDMI audio output with one quick file change (replace -o hdmi with -o local in startvideos.sh).
  • Can support external HDDs and other directories easily with one quick file change (Change FILES=/home/pi/videos/ to FILES=/YOUR DIRECTORY/ in startvideos.sh)

Source code

The source code can be found on GitHub here.

This is perfect if you are working on a museum or school exhibit. Don't spend a lot of money and energy on a PC running Windows, only to have problems like the one below (courtesy of the Atlanta Aquarium)!

If you are a museum or other educational program and need help, you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or contact me by e-mail at help@stevenhickson.com

Consider donating to further my tinkering since I do all this and help people out for free.




Tuesday, March 31, 2015

Classifying everything using your RPi Camera: Deep Learning with the Pi


For those who don't want to read, the code can be found on my github with a readme:
https://github.com/StevenHickson/RPi_CaffeQuery
You can also read about it on my Hackaday.io page here.

What is object classification?

Object classification has been a very popular topic the past couple of years. Given an image, we want a computer to be able to tell us what that image is showing. The newest trend has been using convolutional neural networks, trained with large amounts of data, to classify images.

One of the bigger frameworks for this is Caffe. For more on it, see the Caffe home page.
You can test out their web demo here. It isn't great at people, but it is very good at cats, dogs, objects, and activities.


Why is this useful?

There are all kinds of autonomous tasks you can do with the RPi camera. Perhaps you want to know if your dog is in your living room, so the Pi can take his/her picture or tell him/her they are a good dog. Perhaps you want your RPi to recognize whether there is fruit in your fruit drawer so it can order you more when it is empty. The possibilities are endless.

How do convolutional neural networks work (a VERY simple overview)?

Convolutional neural networks are loosely based on how the human brain works. They are built from layers of many neurons that are "activated" by certain inputs. The input layer is connected through a series of interconnected neurons in hidden layers, like so:
[1]

Each neuron sends its signal to the neurons it is connected to; each signal is multiplied by the connection weight, and the sum is run through a sigmoid function. The network is trained by changing the weights to minimize an error function over a set of inputs with known outputs, using backpropagation.
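
In code, a single neuron's forward pass is just that weighted sum pushed through the sigmoid (a toy sketch with made-up weights):

import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, squashed into (0, 1) by the sigmoid
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))  # one neuron's activation

Training adjusts the weights and bias of every such neuron to reduce the error, which is what backpropagation does at scale.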

How do we get this on the Pi?

Well, I went ahead and compiled Caffe on the RPi. Unfortunately, since there is no code to optimize the network with the Pi's GPU, classification takes ~20-25s per image, which is far too slow.
Note: I did find a different optimized CNN for the RPi by Pete Warden here. It looks great, but it still takes about 3 seconds per image, which still doesn't seem fast enough.

You will also need the Raspberry Pi camera, which you can get here:
Raspberry Pi 5MP Camera Board Module

A better option: Using the web demo with python

Instead, we can take advantage of the Caffe web demo to reduce the processing time even further. With this method, image classification takes ~1.5s, which is usable for a real system.

How does the code work?

We make a symbolic link from /dev/shm/images/ to /var/www for apache and forward router port 5005 to port 80 on the Pi.
Then we use raspistill to take an image and save it to memory as /dev/shm/images/test.jpg. Since this is symlinked into /var/www, we should be able to see it at http://YOUR-EXTERNAL-IP:5005/images/test.jpg.
Then we use grab to pull up the Caffe web demo with our image's URL and scrape the classification results. This is done in queryCNN.py.
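
The flow boils down to something like this sketch (using requests here instead of grab for brevity; the demo's endpoint and parameter name are my assumptions based on Caffe's bundled web_demo example, so check the actual script on github):

import subprocess
import requests

IMG = "/dev/shm/images/test.jpg"
# Capture a frame straight into the RAM-backed directory served by apache
subprocess.run(["raspistill", "-o", IMG, "-w", "640", "-h", "480"], check=True)

# Hand the Caffe web demo a URL it can fetch our image from
params = {"imageurl": "http://YOUR-EXTERNAL-IP:5005/images/test.jpg"}
resp = requests.get("http://demo.caffe.berkeleyvision.org/classify_url", params=params)
print(resp.text)  # scrape the returned page for the top class labels, as queryCNN.py does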

What does the output look like?

Given a picture of some of my Pi components, I get this, which is pretty accurate:

Where can I get the code?

The code is on my github with a readme: https://github.com/StevenHickson/RPi_CaffeQuery

[1] http://white.stanford.edu/teach/index.php/An_Introduction_to_Convolutional_Neural_Networks

Consider donating to further my tinkering since I do all this and help people out for free.




Tuesday, March 10, 2015

Introducing videolooper 3.0

Introducing videolooper 3.0!!

This image is compatible with the A, B, B+, and B2 versions.

I have a brand new version of the Raspberry Pi Videolooper that is compatible with the new B V2 and has a bunch of new features that streamline it for easy use.
It can now loop one video seamlessly (though without audio) thanks to a solution from the talented individual over at Curioustechnologist.com (link here). Thanks again to Tim Schwartz as well (link here).

You can download the new image here:

https://onedrive.live.com/redir?resid=e0f17bd2b1ffe81!411&authkey=!AGW37ozZuaeyjDw&ithint=file%2czip

MIRROR: https://mega.co.nz/#!JBcDxLhQ!z41lixcpCS0-zvF2X9SkX-T98Gj5I4m3QIFjXKiZ5p4


For help you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or email me (be forewarned, I respond intermittently and sporadically).

Normally I try to avoid statements like this, but I've had some unforeseen financial setbacks lately, so I'm breaking my rule. If any of you really like this software and have money to spare, please consider donating by clicking my PayPal button at the bottom of the page. It would really help. Thanks!

How to set up the looper

  1. Copy this image to an SD card following these directions
  2. If you want to use USB, change usb=0 to usb=1 in looperconfig.txt on the SD card (It is in the boot partition which can be read by Windows and Mac).
  3. If you want to disable the looping autostart to make copying files easier, change autostart=1 to autostart=0 in looperconfig.txt
  4. If you want to change the audio source to 3.5 mm, change audio_source=hdmi to audio_source=local in looperconfig.txt.
  5. If you want to play a seamless video (supports only one for now), convert it according to these directions, put it in the videos folder, and then change seamless=0 to seamless=name-of-your-video.h264 in looperconfig.txt. (NOTE: This video won't have audio so take that into account).
  6. You may also want to expand your filesystem to fit your SD card by using sudo raspi-config.
  7. If you aren't using a USB drive (NTFS), put your video files in the /home/pi/videos directory via SFTP or by turning autostart off. Otherwise, put your video files in a directory named videos at the root of your USB drive.
  8. Set your config options and plug it in!

Features

  • NEW: Has an audio_source flag in the config file (audio_source=hdmi,audio_source=local)
  • NEW: Has a seamless flag in the config file (seamless=0,seamless=some-file.h264)
  • NEW: Has a new boot up splash screen
  • NEW: Compatible with the RPi B2 (1 GB RAM version)
  • NEW: Updated all packages (no heartbleed vulnerability, new omxplayer version)
  • Has a config file in the boot directory (looperconfig.txt)
  • Has an autostart flag in the config file (autostart=0,autostart=1)
  • Has a USB flag in the config file (usb=0,usb=1); just set usb=1, then plug in a USB drive (NTFS) with a videos folder on it and boot
  • Only requires 4GB SD card and has a smaller zipped download file
  • Supports all Raspberry Pi video types (mp4,avi,mkv,mp3,mov,mpg,flv,m4v)
  • Supports subtitles (just put the srt file in the same directory as the videos)
  • Reduces time between videos
  • Allows spaces and special characters in the filename
  • Full screen with a black background and no flicker
  • SSH automatically enabled with user:pi and password:raspberry
  • Allows easy video conversion using ffmpeg (ffmpeg -i INFILE -sameq OUTFILE)
  • Has a default of HDMI audio output with one quick file change (replace -o hdmi with -o local in startvideos.sh).
  • Can support external HDDs and other directories easily with one quick file change (Change FILES=/home/pi/videos/ to FILES=/YOUR DIRECTORY/ in startvideos.sh)

Source code

The source code can be found on GitHub here.

This is perfect if you are working on a museum or school exhibit. Don't spend a lot of money and energy on a PC running Windows, only to have problems like the one below (courtesy of the Atlanta Aquarium)!

If you are a museum or other educational program and need help, you can post on the Raspberry Pi subreddit (probably the best way to get fast help) or contact me by e-mail at help@stevenhickson.com

Consider donating to further my tinkering since I do all this and help people out for free.




Friday, February 13, 2015

Using your RPi2 for Valentine's Day

I thought I would share something cool I did with my Raspberry Pi that others might like for Valentine's Day.

I basically had a lot of devices sitting around that I realized I could combine into a good Valentine's Day surprise for my girlfriend.

First off, I had my robotic bartender (bottender), which you can see on my Hackaday projects page.
I modified it to pour wine on command.

Next I had a set of WeMo light switches that you can get here:

Belkin WeMo Light Switch, Wi-Fi Enabled

These are really nicely made: WiFi-enabled, easy to install, and easy to interface with using a custom API.
I found a nice API for the WeMo light switches here.
In the end, though, I created a simple shell script API that uses curl. You can see mine on github here.
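
For the curious, flipping a WeMo switch comes down to one SOAP request to its UPnP endpoint. Here's the idea sketched in Python rather than shell; this is the community-documented protocol, not an official Belkin API, the port varies by firmware (usually 49152-49154), and the IP below is a placeholder:

import requests

def wemo_set(ip, on, port=49153):
    # SetBinaryState flips the relay: 1 = on, 0 = off
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        '<s:Body><u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">'
        '<BinaryState>' + ('1' if on else '0') + '</BinaryState>'
        '</u:SetBinaryState></s:Body></s:Envelope>'
    )
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
    }
    requests.post("http://%s:%d/upnp/control/basicevent1" % (ip, port),
                  data=body, headers=headers, timeout=5)

wemo_set("192.168.1.50", True)  # placeholder address: fan on, petals incoming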

I set up my WeMo light switch to control my bedroom fan, then sprinkled the top of the fan with rose petals.
Connecting it all together, I have a button on my phone that turns the fan on (sprinkling rose petals down) and turns bottender on (pouring two glasses of wine), resulting in this:


Happy Valentine's Day, everyone!



Consider donating to further my tinkering since I do all this and help people out for free.

