Gesture Recognition

Update - 2010 January: this project wasn't implemented in 2009 and it appears that Linden Lab has no more short-term plans for it.

Gesture Tracking

Goal: add the ability to use a PC camera to detect head and upper-body movements and relay them to your own avatar. This is a design for a simple first project, intended as a summer intern project that will affect only the viewer/Snowglobe codebase.

Minimal Functional Goal

  • Nodding or shaking your head 'yes' or 'no' triggers the equivalent animation on your SL avatar.
  • Relative motion of your head is captured in real time and used to animate where your avatar is looking. In other words, if you look up, down, left, or right, your avatar does the same.

Design Details

Add the ability for the Snowglobe viewer to listen on a port for local UDP messages containing either head-position updates or animation triggers. This gives developers an easy way to control the viewer simply by sending small network packets, and it keeps the latency and CPU cost of the interface to a minimum.
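
As a concrete illustration of the listener half, here is a minimal sketch in Python. The real change would live in the viewer's C++ code; this only shows the flow of the interface, and the plain-text "LOOKAT"/"ANIM" message format and port 14500 are assumptions made for the example, not an agreed protocol.

  # Minimal sketch of the viewer-side listener. Assumed plain-text protocol:
  #   "LOOKAT <dx> <dy>"  - normalized head offset, drives the avatar lookAt
  #   "ANIM <name>"       - triggers a named animation, e.g. nod_yes / shake_no
  import socket

  PORT = 14500  # hypothetical local port, not an official viewer setting

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("127.0.0.1", PORT))  # local-only, matching "local UDP messages"

  while True:
      data, _addr = sock.recvfrom(256)  # packets are tiny; 256 bytes is plenty
      fields = data.decode("ascii", "ignore").split()
      if not fields:
          continue
      if fields[0] == "LOOKAT" and len(fields) == 3:
          dx, dy = float(fields[1]), float(fields[2])
          print("update avatar lookAt offset:", dx, dy)  # the viewer would move its lookAt target here
      elif fields[0] == "ANIM" and len(fields) == 2:
          print("trigger animation:", fields[1])  # the viewer would start the named animation here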

Write an application which accesses the PC camera, analyzes the image to detect head movement and gestures, and sends UDP packets to the viewer port.
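
A rough sketch of what that application could look like, using OpenCV's stock frontal-face Haar cascade (along the lines of the OpenCV facedetect tutorial linked under Useful Links) and the same assumed message format and port as above; Python is used only to keep the example short.

  # Sketch of the camera app: grab frames, find the face, send its offset over UDP.
  import socket

  import cv2  # OpenCV, e.g. installed with "pip install opencv-python"

  VIEWER_ADDR = ("127.0.0.1", 14500)  # hypothetical viewer listener address
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

  # Frontal-face Haar cascade shipped with OpenCV.
  cascade = cv2.CascadeClassifier(
      cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
  cap = cv2.VideoCapture(0)  # default PC camera

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
      if len(faces) > 0:
          x, y, w, h = faces[0]
          # Offset of the face centre from the frame centre, normalized to [-1, 1].
          fh, fw = gray.shape
          dx = ((x + w / 2) - fw / 2) / (fw / 2)
          dy = ((y + h / 2) - fh / 2) / (fh / 2)
          sock.sendto(("LOOKAT %.3f %.3f" % (dx, dy)).encode("ascii"), VIEWER_ADDR)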

Project Milestones (weekly or better)

  • VWR-13713: Write the basic 'camera' app that can send UDP messages to the SL viewer, initially triggered just by keystrokes for testing. Modify the SL viewer to receive these packets and trigger an animation.
  • VWR-13713: Add the ability for the SL viewer to receive head-position information and pass it on to the lookAt point for the avatar. Again, use keystrokes for trivial testing. Verify that latency and performance are adequate (effectively instantaneous).
  • Add the ability to get frames of video from the camera. Demonstrate the simplest analysis by looking for a bright light or color in the frame and animating lookAt according to its location in the camera frame.
  • Add calls to OpenCV (or an alternative library) to detect head orientation. Pass the head orientation to the viewer lookAt.
  • Add the ability to trigger an animation based on data reported from the analysis library. Demonstrate nodding and shaking the head (a sketch of one way to classify these follows this list).
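
One possible way to separate a nod from a head shake using only the per-frame face offsets produced by the tracker sketched above; the window length, thresholds, and animation names are guesses that would need tuning, not values from this project.

  # Sketch: classify "nod" vs "shake" from a short history of face-centre offsets.
  from collections import deque

  WINDOW = 20   # roughly two-thirds of a second of history at 30 fps
  SWING = 0.15  # minimum normalized travel to count as a deliberate movement

  history = deque(maxlen=WINDOW)  # most recent (dx, dy) offsets from the tracker

  def classify(history):
      """Return 'nod_yes', 'shake_no', or None for the current window."""
      if len(history) < WINDOW:
          return None
      xs = [p[0] for p in history]
      ys = [p[1] for p in history]
      x_range = max(xs) - min(xs)
      y_range = max(ys) - min(ys)
      if y_range > SWING and y_range > 2 * x_range:
          return "nod_yes"   # mostly vertical movement
      if x_range > SWING and x_range > 2 * y_range:
          return "shake_no"  # mostly horizontal movement
      return None

In the camera app this would run once per frame after appending the latest (dx, dy) offset; a result other than None would be sent to the viewer as an "ANIM" packet alongside the "LOOKAT" stream.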

Extra Credit

  • Attempt to detect other upper-body gestures: hand raised, hand wave, shoulder shrug.
  • Extend the application to run on both Mac and Windows.
  • Include the application in the Snowglobe installer package.

Useful Links

  • VRPN: Virtual Reality Peripheral Network (http://www.cs.unc.edu/Research/vrpn/)
  • D-Bus (http://en.wikipedia.org/wiki/D-Bus)
  • Keith Price Bibliography on Facial Expressions and Emotion Analysis and Description (http://www.visionbib.com/bibliography/people919.html)
  • Tutorial on OpenCV facedetect (http://note.sonots.com/SciSoftware/haartraining.html)

Thanks to Jan Ciger for providing most of those links.

Project Log

The LL intern will start the first week of June and will begin updating this page then. In the meantime, it would be great if someone interested could implement the viewer-side changes: listening for UDP packets and triggering animations and lookAt based on the data in those packets.

See Also