With the advent of commercially available consumer depth sensors, and continued efforts in computer vision research to improve multi-modal image and point cloud processing, robust person tracking with the stability and responsiveness necessary to drive interactive applications is now possible at low cost. These research results, however, remain difficult for application developers to use. Founded on the premise that a disruptive project is needed to enable artists and creators to work with real-time person tracking, OpenPTrack aims to support "creative coders" in the arts, culture, and education sectors who wish to experiment with real-time person tracking as an input for their applications. The project contains numerous state-of-the-art algorithms for RGB and/or depth tracking, and is built on a modular node-based architecture that supports adding and removing sensor streams online.
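To make the node-based idea concrete, the sketch below shows one way such an architecture can be organized: per-sensor detector nodes publish detections to a central tracker node, so sensors can join or leave the graph while tracking continues. This is an illustrative Python sketch only, not the actual OpenPTrack API; all class and field names here (`SensorNode`, `TrackerNode`, `Detection`) are hypothetical.

```python
# Hypothetical sketch of a node-based tracking pipeline (not OpenPTrack's
# real API): each sensor is wrapped in its own node that publishes
# detections to a central tracker, so streams can be added or removed
# online without restarting the tracker.

from dataclasses import dataclass


@dataclass
class Detection:
    sensor_id: str
    position: tuple  # (x, y) ground-plane coordinates, in meters


class TrackerNode:
    """Central node that fuses detections from all currently active sensors."""

    def __init__(self):
        self.detections = []

    def on_detection(self, det: Detection):
        # A real tracker would associate detections with tracks here;
        # this sketch just records them.
        self.detections.append(det)


class SensorNode:
    """Per-sensor node; publishes its detections to the tracker it is wired to."""

    def __init__(self, sensor_id: str, tracker: TrackerNode):
        self.sensor_id = sensor_id
        self.tracker = tracker

    def publish(self, position):
        self.tracker.on_detection(Detection(self.sensor_id, position))


# Sensors can be added while the tracker keeps running.
tracker = TrackerNode()
kinect = SensorNode("kinect_01", tracker)
kinect.publish((1.0, 2.0))
stereo = SensorNode("stereo_02", tracker)  # a second stream added online
stereo.publish((3.5, 0.8))
print([d.sensor_id for d in tracker.detections])  # → ['kinect_01', 'stereo_02']
```

Removing a stream in this scheme is equally simple: drop the sensor node, and the tracker continues consuming whatever detections the remaining nodes publish.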
OpenPTrack is led by UCLA REMAP and Open Perception, and involves the University of Padova, Electroland and Indiana University Bloomington as key collaborators. Early adopters of the ongoing work-in-progress include the STEP project, the Interpretive Media Laboratory, the California Department of Parks and Recreation (California State Parks), and UCLA Lab School. Code is available under a BSD license.
Portions of the work are supported by the National Science Foundation (IIS-1323767).