Thursday, February 20, 2014

Throwing fireballs with the Kinect and Oculus Rift in Unity 3D

I decided I wanted to make a small game in Unity 3D where you are a viking throwing fireballs at enemy vikings. The catch is, I wanted to actually throw the fireballs, so I wanted to use the Kinect, and I wanted to actually feel like I was the character, so I wanted to use the Oculus Rift.

Here is a quick video of the results before we get on to the discussion:



Basically this started because a colleague at Georgia Tech, Alex Trevor (soon to be Dr.), had an Oculus Rift and mounted a camera on top of it so he could view the Kinect output through the Oculus Rift. I thought that was really cool, and I had previously worked on a game that used the Kinect skeleton to throw fireballs (though I lost all the source code for that first version). I really wanted to combine those ideas, so I had to start over.

Luckily, Dr. Brian Peasley (now at Microsoft) came to my rescue as always and gifted me an Oculus Rift. 


To explain a bit, the Oculus Rift is an immersive virtual reality headset. Two images are displayed on the screen above because one is projected to the left eye and one to the right eye. As you rotate your head, the images change, and it feels like you are looking at a real environment (it is pretty amazing). This is why there are two images in the video (and in pretty much all Oculus Rift demos). To appreciate it fully, I recommend wearing an Oculus Rift while watching the video.



Everyone should know the Kinect by now. It can produce a really good skeletal estimate of a person's joints from the depth data, which can then be used to recognize gestures.

I had some time this week, so I grabbed the Unity third-person MMO example, added the Kinect scripts provided by CMU here, created my own fireball and fireball-related prefabs and scripts, added the Oculus package, changed all the camera code to make it first person, and tweaked a lot of things. Daniel Castro (also at Georgia Tech) was nice enough to help film me making a fool of myself.
It was obviously a bit more complicated than that; lots of tinkering and scripting was involved to get everything working, but that is the gist of it. Here at RIM, we are working on lots of other cool things, and if you are interested, check out some of my other projects.

I've uploaded the source for public use and you can find it here (just please make sure to cite me):

The fireball script takes two GameObjects (the Viking's left hand and hip; the hand position is normalized by the hip position), uses a velocity estimate to decide when you want to throw, and then creates a fireball and sets its velocity to your hand's velocity whenever you throw. This can be done with a small amount of code, as below:
void Update () {
    // Hand position relative to the hip, so the throw doesn't depend on where you stand.
    Vector3 norm_hand = HandPosition.transform.position - HipPosition.transform.position;

    // Finite-difference velocity of the hand since the last frame.
    Vector3 velocity = (norm_hand - lastPos) / Time.deltaTime;
    float dist = velocity.magnitude;

    // Throw on a button press, or when the hand speed falls inside the throw window.
    if (Input.GetButtonDown("Fire1") || (dist > THRESH && dist < MAX_THRESH)) {
        // Spawn the fireball just in front of the hand and give it the hand's velocity.
        Vector3 pos = HandPosition.transform.position;
        pos.z += 1;
        Rigidbody clone = (Rigidbody)Instantiate(Projectile, pos, transform.rotation);
        clone.velocity = velocity * SPEED;
    }

    lastPos = norm_hand;
}

And that's it for the fireball. Then there are a few scripts that destroy the fireballs and the vikings when they collide.
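If you're curious, the collision handling boils down to something like this minimal sketch (the class name and the "Enemy" tag here are illustrative, not necessarily what's in the repo):

using UnityEngine;

// Attached to the fireball prefab: remove the viking we hit and the fireball itself.
public class FireballCollision : MonoBehaviour {
    void OnCollisionEnter(Collision collision) {
        if (collision.gameObject.CompareTag("Enemy")) {
            Destroy(collision.gameObject);
        }
        Destroy(gameObject);
    }
}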
For the mapping of the joints onto the main Viking, each joint of the Viking model is tied to the corresponding Kinect skeleton joint through GameObjects in the KinectControllerScript.
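Conceptually the mapping just copies each tracked joint onto the model every frame, roughly like the sketch below (hypothetical field names, not the actual KinectControllerScript):

using UnityEngine;

public class VikingJointMapper : MonoBehaviour {
    public Transform[] kinectJoints;  // GameObjects driven by the Kinect skeleton
    public Transform[] vikingJoints;  // the corresponding joints on the Viking model

    void Update() {
        // Copy each Kinect joint position onto the matching Viking joint.
        for (int i = 0; i < kinectJoints.Length && i < vikingJoints.Length; i++) {
            vikingJoints[i].position = kinectJoints[i].position;
        }
    }
}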
Then the Oculus SDK is used to create the camera and the player controls, which are mapped to the main Viking.
For all of the code, see the GitHub project.

I'm using a friend's version of Unity Pro because I'm a poor graduate student, so if you liked this work, please donate so I can continue doing it. All of these gadgets are expensive, and I do all of this and post it for free.

So please consider donating to further my tinkering!!




Thursday, February 6, 2014

Controlling music with your mind

Last year I bought an EEG headset (the Mindwave Mobile) to play with my Raspberry Pi and then ended up putting it down for a while. Luckily, this semester I started doing some more machine learning and decided to pick it back up. I thought it might be possible to have it recognize when you dislike music and then switch the song on Pandora for you. This would be great for when you are working on something or moving around away from your computer.

So using the EEG headset, a Raspberry Pi, and a Bluetooth module, I set to work on recording some data. I listened to a couple of songs I liked and then a couple of songs I didn't like, labeling the data accordingly. The Mindwave gives you the delta, theta, high alpha, low alpha, high beta, low beta, high gamma, and mid gamma brainwave bands. It also approximates your attention and meditation levels using an FFT (Fast Fourier Transform) and gives you a skin contact signal level (with 0 being the best and 200 being the worst).
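Concretely, each reading can be treated as a simple 10-dimensional feature vector; a sketch (the field names below are mine, not the library's):

# The ten features used for classification: the eight band powers plus the
# attention and meditation estimates. The skin contact value (0 best, 200 worst)
# is only used to warn when the headset isn't making good contact.
FEATURE_NAMES = ["delta", "theta", "high_alpha", "low_alpha",
                 "high_beta", "low_beta", "high_gamma", "mid_gamma",
                 "attention", "meditation"]

def to_feature_vector(reading):
    # reading: dict of the latest values from the headset
    return [reading[name] for name in FEATURE_NAMES]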

Since I know very little about brainwaves, I can't make an educated decision about which changes to look for to detect this; that's where machine learning comes in. I can use Bayesian estimation to construct two multivariate Gaussian models, one representing good music and one representing bad music.
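Fitting those two models just means estimating a mean vector and covariance matrix for each class. In Python (with numpy) it might look roughly like this, not the exact code in the repository:

import numpy as np

def fit_gaussian(samples):
    # samples: list of 10-dimensional feature vectors recorded for one class
    X = np.asarray(samples, dtype=float)
    mu = X.mean(axis=0)               # mean vector
    sigma = np.cov(X, rowvar=False)   # 10x10 covariance matrix
    return mu, sigma

# good_model = fit_gaussian(good_samples)  # songs I liked
# bad_model  = fit_gaussian(bad_samples)   # songs I didn't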



----TECHNICAL DETAILS BELOW----
We construct the model for each class using the parameters below (where μ is the mean of the data and Σ is the covariance matrix of the data):

$$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \Sigma = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T$$

which gives us a multivariate Gaussian density for each class:

$$p(x \mid \omega_j) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_j|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x - \mu_j)^T \Sigma_j^{-1} (x - \mu_j)\right)$$

Now that we have the model above for both good music and bad music, we can use a decision boundary to detect what kind of music you are listening to at each data point. A sample x is labeled bad music when

$$g_{bad}(x) > g_{good}(x)$$

(the boundary itself is where the two sides are equal), where:

$$g_j(x) = -\tfrac{1}{2}(x - \mu_j)^T \Sigma_j^{-1} (x - \mu_j) - \tfrac{d}{2}\ln 2\pi - \tfrac{1}{2}\ln|\Sigma_j| + \ln P(\omega_j)$$

The boundary will be some sort of quadratic surface (a hyperellipsoid, hyperparaboloid, etc.), much like the classic two-class examples, though ours lives in a 10-dimensional feature space.

----END TECHNICAL DETAILS----

The result is an algorithm that is accurate about 70% of the time, which isn't reliable enough on its own. However, since we have temporal data, we can use that to our advantage: we wait until we get 4 "bad music" estimates in a row, and only then skip the song.
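As a rough sketch (again illustrative, not the exact code in control_music.py), the per-sample decision plus the 4-in-a-row filter might look like this, assuming equal priors for the two classes and Gaussian models fit as above:

import numpy as np

def log_gaussian(x, mu, sigma):
    # Log of the multivariate Gaussian density, dropping the constant shared by both classes.
    diff = np.asarray(x, dtype=float) - mu
    return -0.5 * diff @ np.linalg.solve(sigma, diff) - 0.5 * np.log(np.linalg.det(sigma))

def make_music_filter(good_model, bad_model, streak_needed=4):
    mu_g, sigma_g = good_model
    mu_b, sigma_b = bad_model
    streak = 0
    def process(sample):
        # Returns True when the current song should be skipped.
        nonlocal streak
        is_bad = log_gaussian(sample, mu_b, sigma_b) > log_gaussian(sample, mu_g, sigma_g)
        streak = streak + 1 if is_bad else 0
        if streak >= streak_needed:
            streak = 0
            return True   # e.g. tell control-pianobar.sh to skip the song
        return False
    return process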

I've created a short video (don't worry, I skip around so you don't have to watch me listen to music forever) as a proof of concept. The end result is a way to control which song is playing using only your brainwaves.


This is an extremely experimental system; it only works because there are just two classes to choose between, and the accuracy is not even close to good. I just thought it was cool. I'm curious whether a model trained on my brainwaves will work for other people as well, but I haven't tested that yet. There is a lot still to refine, but it's fun to have a proof of concept. You can't buy one of these off the shelf and expect it to change your life: it's uncomfortable and not as accurate as an expensive EEG, but it is fun to play with. Now I need to attach one to Google Glass.

NOTE: This was done as a toy example for fun. You probably aren't going to see EEG-controlled headphones in the next couple of years. Eventually, maybe, but not because of work like this.

How to get it working


HERE IS THE SOURCE CODE

I use pianobar to stream Pandora, along with a modified version of the control-pianobar.sh control scripts, which I have put in the GitHub repository below.
I have put the code on GitHub here, but first you need to make sure you have Python >= 3.0, bluez, pybluez, and pianobar installed. You will also need to change the home directory information, copy the control-pianobar.sh script to /usr/bin, change the MAC address (mindwaveMobileAddress) in mindwavemobile/MindwaveMobileRawReader.py to the MAC address of your Mindwave Mobile device (I got the Python code for this from here), and run sudo python setup.py install.

I start pianobar with control-pianobar.sh p, then I start the EEG program with python control_music.py. It will tell you in real time what it thinks of the current song and will skip it if it detects 4 bad estimates in a row. It will also warn you with a low signal message if the headset isn't making good enough contact.

Thanks to Dr. Aaron Bobick (whose pictures and equations I used), robintibor (whose python code I used), and Daniel Castro (who showed me his code for Bayesian Estimation in python since my implementation was in Matlab).


Consider donating to further my tinkering.

