From 2007 until 2014 I worked at PrimeSense as a computer vision researcher. PrimeSense was acquired by Apple at the end of 2013.
PrimeSense developed depth sensors and computer vision middleware to enable natural interaction with computers and other devices.
Our technology was used in the original Xbox Kinect, as well as in many other projects requiring 3D
capabilities (such as 3D scanning) and gesture interaction.
During my time at PrimeSense/Apple, I developed a number of depth-based tracking and
gesture-detection algorithms, including 3D full-body tracking. See below for examples of some of this work.
From 2006 to 2007 I was a distinguished postdoctoral fellow in the Simbios Center
at Stanford University. There I contributed to the development of OpenSim, software for simulation and analysis of musculoskeletal models.
I received a Bachelor of Mathematics in Computer Science and Pure Math from the University
of Waterloo in 2001. Through Waterloo's co-op program, I got the chance to work at some interesting places as part of my degree.
Computer Vision at PrimeSense
See these videos
for examples of some of the work I was involved with while at PrimeSense. Some of these algorithms were available as
part of the PrimeSense NiTE middleware (no longer online officially, but seems to be archived here).