Building Pinch-to-Zoom with Motion Capture Data

Project 08 from 60-212, Fall 2017




The Backstory

When I was 12 years old, I visited the North American Veterinary Conference (NAVC) with my mother in Orlando, Florida. I was walking around the show floor with my mom when we decided to stop at the Bayer booth. In the middle of the booth was an original Microsoft Surface table, and many people had congregated around it to see what it was all about. My mom and I played with it for a while, and then she left to enjoy the rest of the conference, but I stayed in the Bayer booth for easily 3 or 4 more hours, becoming good friends with the booth attendants. I think it was the first highly responsive touch interface I'd ever used, and it played on in my dreams for weeks. When I returned home, I tried to get my dad to buy one for our house, but at the time it cost roughly $10–15K to install and you had to be a commercial partner…



Reflection

Direct manipulation, pinch-to-zoom, two-finger rotation, and other related interactions are fundamental to modern touch and gesture-based computing. Yet it is difficult to appreciate all of the underlying intricacies of these human interfaces, even when analyzing them directly. Having now implemented parts of them myself, I've gained a new appreciation for the nuance of design needed to make these interactions feel as magical as they do. For example, I had never consciously considered how multiple anchor points (fingers or hands) affect scaling, movement, and selection.
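To make the anchor-point idea concrete, here is a minimal illustrative sketch (not the original project code) of the math behind pinch-to-zoom and two-finger rotation: scale comes from the ratio of the current distance between two anchors to the distance when the gesture began, and rotation from the change in the angle of the line joining them. For simplicity this version is driven by the mouse and a fixed second anchor, so it runs without a Kinect.

// Illustrative Processing sketch: two-anchor scale and rotation,
// with the mouse standing in for one tracked hand.
PVector anchorA;                 // fixed anchor (stands in for one hand)
PVector startB;                  // moving anchor at the start of the gesture
float startDist, startAngle;

void setup() {
  size(600, 600);
  rectMode(CENTER);
  anchorA = new PVector(width/2, height/2);
}

void mousePressed() {            // begin a "pinch" gesture
  startB = new PVector(mouseX, mouseY);
  startDist  = PVector.dist(anchorA, startB);
  startAngle = atan2(startB.y - anchorA.y, startB.x - anchorA.x);
}

void draw() {
  background(240);
  if (startB == null) return;
  PVector currB = new PVector(mouseX, mouseY);

  // Scale: ratio of the current anchor distance to the starting distance.
  float s = PVector.dist(anchorA, currB) / startDist;
  // Rotation: change in the angle of the line joining the two anchors.
  float r = atan2(currB.y - anchorA.y, currB.x - anchorA.x) - startAngle;

  translate(anchorA.x, anchorA.y);
  rotate(r);
  scale(s);
  rect(0, 0, 150, 150);          // the object being "directly manipulated"
}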



Code & Technologies

Kinect SDK 2.0
KinectV2-OSC
p5osc
Processing
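As a rough illustration of how these pieces fit together, the sketch below shows how Kinect v2 joint data broadcast by KinectV2-OSC might be received in Processing with an OSC library. This is an assumption-laden sketch rather than the project's actual code: the port number and the "/bodies/0/joints/HandLeft" address pattern are placeholders, and the exact message format should be checked against the KinectV2-OSC documentation.

// Hedged sketch of receiving Kinect hand-joint positions over OSC in Processing.
import oscP5.*;
import netP5.*;

OscP5 osc;
PVector handLeft  = new PVector();
PVector handRight = new PVector();

void setup() {
  size(640, 480);
  osc = new OscP5(this, 12345);  // listening port (assumed; must match the sender)
}

void oscEvent(OscMessage msg) {
  // Each joint message is assumed to carry x, y, z floats.
  if (msg.addrPattern().endsWith("/joints/HandLeft")) {
    handLeft.set(msg.get(0).floatValue(), msg.get(1).floatValue());
  } else if (msg.addrPattern().endsWith("/joints/HandRight")) {
    handRight.set(msg.get(0).floatValue(), msg.get(1).floatValue());
  }
}

void draw() {
  background(0);
  // In practice the incoming coordinates would be mapped onto the canvas;
  // they are drawn raw here for brevity.
  ellipse(handLeft.x, handLeft.y, 20, 20);
  ellipse(handRight.x, handRight.y, 20, 20);
}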



Note: This project, completed for 60-212: Interactivity & Computation for Creative Practice with Prof. Golan Levin, is primarily an arts-engineering exercise, not a design project. See the original documentation for Project 08 here: cambu-mocap

November 2016