Category Archives: Programming

How OpenNI Nearly Spoiled The Show

So, for the last few months I’ve taken a break from the PhD to do some work on a theatre show for The Border Project, Half Real.

There’s a lot of technology in the show. In particular, most of the set is projected, and we are using a Microsoft Kinect to track the actors on stage, and modifying the projections based on their location.

I’m working on Linux, and using OpenNI for interfacing with the Kinect. Things almost worked perfectly. In this post I will document the trials and tribulations of getting the Kinect to work for Half Real.

Continue reading


Kinect on Ubuntu with OpenNI

UPDATED March 1 2014 for the latest versions of everything!

I’ve spent all this morning trying to talk to the Microsoft Kinect using OpenNI. As it turns out, the process is not exceptionally difficult; it’s just that there doesn’t seem to be any up-to-date documentation on getting it all working. So, this post should fill the void. I describe how to get the Kinect working on Ubuntu 12.04 LTS with OpenNI 1.5.4 and NITE 1.5.2. Continue reading
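The full walkthrough is in the post, but as a taste of where it ends up, here is a minimal sketch (not taken from the post itself) of reading a single depth frame with the OpenNI 1.5.x C++ wrapper once the drivers are installed. Error handling is mostly trimmed, and the include/link paths assume the standard Ubuntu install locations.

```cpp
// Minimal OpenNI 1.5.x depth read - a sketch only, assuming the Kinect
// driver (SensorKinect) is installed and working.
// Build roughly as: g++ depth_test.cpp -I/usr/include/ni -lOpenNI
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    if (context.Init() != XN_STATUS_OK)
    {
        printf("Failed to initialise OpenNI context\n");
        return 1;
    }

    // Create a depth generator backed by the Kinect.
    xn::DepthGenerator depth;
    if (depth.Create(context) != XN_STATUS_OK)
    {
        printf("No depth generator available - is the Kinect plugged in?\n");
        return 1;
    }

    context.StartGeneratingAll();

    // Grab a single frame and report the depth at the centre pixel.
    context.WaitOneUpdateAll(depth);
    xn::DepthMetaData md;
    depth.GetMetaData(md);
    printf("Resolution: %ux%u, centre depth: %umm\n",
           md.XRes(), md.YRes(),
           (unsigned)md(md.XRes() / 2, md.YRes() / 2));

    context.Release();
    return 0;
}
```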


New Publication: Adaptive Color Marker for SAR Environments

Hey Everyone

So right now I am at the IEEE Symposium on 3D User Interfaces in Singapore. We have a couple of publications which I’ll be posting over the next few days. First up is Adaptive Color Marker for SAR Environments. In a previous study we created interactive virtual control panels by projecting onto otherwise blank designs. We used a simple orange marker to track the position of the user’s finger. However, in a SAR environment, this approach suffers from several problems:

  • The tracking system can’t track the marker if we project the same colour as the marker.
  • Projecting onto the marker changes its appearance, causing tracking to fail.
  • Users could not tell when they were pressing virtual controls, because their finger occluded the projection.

We address these problems with an active colour marker. We use a colour sensor to detect what is being projected onto the marker, and change the colour of the marker to an opposite colour, so that tracking continues to work. In addition, we can use the active marker as a form of visual feedback. For example, we can change the colour to indicate a virtual button press.
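The paper has the details of the marker hardware and the colour-selection strategy. Purely as an illustration of the idea (and not the paper’s exact method), one naive way to pick an “opposite” colour is to complement each RGB channel of whatever the colour sensor reports:

```cpp
// Illustrative only: a naive way to choose a colour that contrasts with
// the colour being projected onto the marker. The actual method used in
// the paper may well differ.
struct RGB
{
    unsigned char r, g, b;
};

// Complement each channel so the marker colour sits "opposite" the
// projected colour in RGB space (e.g. an orange projection gives a
// blue-ish marker).
RGB oppositeColour(const RGB& projected)
{
    RGB out;
    out.r = 255 - projected.r;
    out.g = 255 - projected.g;
    out.b = 255 - projected.b;
    return out;
}
```

In the actual system the chosen colour is then shown by the active marker, and, as described above, the colour can also be overridden to signal feedback such as a virtual button press.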

I’ve added the publication to my publications page, and here’s the video of the marker in action.


Behaviours Demo – Android Programming

Hey Everyone

So this week I became a member sponsor on www.3dbuzz.com. The first thing I had a look at was their XNA Behaviour Programming videos, which are the first in their set on AI programming. However, not being particularly interested in XNA, I implemented the algorithms presented in the videos for Android.

Here’s a video of the demo running on my Nexus One:

Since I was on Android and only using the Android and OpenGL ES libraries, I had to write a lot of low-level code to replace the XNA functionality that 3DBuzz’s videos rely on. I also had to implement an on-screen joystick. I might write up a couple of posts on the more interesting parts of the code (the parts not covered in the videos) soon.
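The post doesn’t list the individual behaviours, but the 3DBuzz series deals with steering-style AI behaviours, so as a stand-in illustration here is a small C++ sketch of a classic “seek” behaviour together with the kind of minimal 2D vector type you end up writing when XNA’s Vector2 isn’t available. The Android demo itself isn’t written in C++; this is just to show the shape of the algorithm.

```cpp
// Illustrative sketch of a "seek" steering behaviour, in the general style
// of steering-behaviour tutorials. Not the demo's actual code.
#include <cmath>

struct Vec2
{
    float x, y;
    Vec2 operator-(const Vec2& o) const { return Vec2{x - o.x, y - o.y}; }
    Vec2 operator*(float s) const { return Vec2{x * s, y * s}; }
    float length() const { return std::sqrt(x * x + y * y); }
    Vec2 normalised() const
    {
        float len = length();
        return len > 0.0f ? Vec2{x / len, y / len} : Vec2{0.0f, 0.0f};
    }
};

struct Agent
{
    Vec2 position;
    Vec2 velocity;
    float maxSpeed;
};

// Steer the agent towards a target: the steering force is the desired
// velocity (straight at the target, at full speed) minus the current one.
Vec2 seek(const Agent& agent, const Vec2& target)
{
    Vec2 desired = (target - agent.position).normalised() * agent.maxSpeed;
    return desired - agent.velocity;
}
```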

Thanks
Michael

OpenSceneGraph, Dual Screens & TwinView

So some of my work at uni involves programming using OpenSceneGraph. Now, anybody who has used OSG before will know that as powerful as it may be, it is seriously lacking in the documentation department. So, this article describes how to do dual screen graphics on Linux using OpenSceneGraph. First we’ll look at the X Screens approach, which is easier but probably not the best solution. Then we’ll look at a method that works with a single X screen. Continue reading