Camera-based Input

Revision as of 10:16, 22 May 2009

On the SLDev mailing list, Philip Linden asked: "Has anyone here worked with camera-based gesture recognition before? How about OpenCV? Is OpenCV the best package for extracting basic head position/gesture information from a camera image stream? Merov and I are pondering a Snowglobe project to detect head motion from simple cameras and connect it to the SL Viewer."

Philip added a specific project proposal for a first-phase Gesture Recognition effort.

A very long thread followed: the sldev thread about camera-based gesture recognition.

Quick Summary: Several replies immediately suggested hardware options beyond a normal webcam, mainly 3D cameras and remote sensors. The discussion then moved into the details of what to expect from different device drivers, ranging from the ability to track eye gaze to comments on each device's user-friendliness. Later in the discussion, several people described interface requirements, and the conversation quickly broadened into generic and universal interface design. As the thread quieted down, at least a few people seemed to agree that a simple generic interface is a good starting point: one that can take input from a normal webcam, recognize a few common gestures, and play the related avatar animation, much as one can now type /yes or /no and have the avatar nod or shake its head. It was also noted that this simple interface would not require server changes to implement.
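The simple interface described above could be sketched as a small gesture classifier sitting between a head tracker and the viewer's existing chat-triggered animations. The sketch below is a hypothetical illustration, not code from the thread: it assumes some upstream tracker (OpenCV or otherwise) supplies per-frame head pitch and yaw angles, and all names, thresholds, and the frame-window size are invented for this example.

```python
from collections import deque

NOD_THRESHOLD = 10.0    # degrees of pitch swing counted as a nod (assumed)
SHAKE_THRESHOLD = 10.0  # degrees of yaw swing counted as a shake (assumed)
WINDOW = 15             # frames per decision, roughly 0.5 s at 30 fps (assumed)

class HeadGestureRecognizer:
    """Classify a short window of head angles as 'nod', 'shake', or None."""

    def __init__(self):
        self.pitch = deque(maxlen=WINDOW)
        self.yaw = deque(maxlen=WINDOW)

    def update(self, pitch_deg, yaw_deg):
        """Feed one frame of tracker output; return a gesture or None."""
        self.pitch.append(pitch_deg)
        self.yaw.append(yaw_deg)
        if len(self.pitch) < WINDOW:
            return None
        pitch_swing = max(self.pitch) - min(self.pitch)
        yaw_swing = max(self.yaw) - min(self.yaw)
        # Whichever axis moved more (and past its threshold) decides.
        if pitch_swing >= NOD_THRESHOLD and pitch_swing > yaw_swing:
            self.pitch.clear(); self.yaw.clear()
            return "nod"
        if yaw_swing >= SHAKE_THRESHOLD and yaw_swing > pitch_swing:
            self.pitch.clear(); self.yaw.clear()
            return "shake"
        return None

# Map recognized gestures onto the chat commands the viewer already
# understands, so no server changes are needed -- as the thread suggested.
GESTURE_TO_COMMAND = {"nod": "/yes", "shake": "/no"}

def gesture_to_chat(gesture):
    """Translate a gesture label into the equivalent chat command, if any."""
    return GESTURE_TO_COMMAND.get(gesture)
```

For example, feeding the recognizer a window of frames whose pitch oscillates between 0 and 15 degrees (a simulated nod) yields "nod", which `gesture_to_chat` maps to "/yes". A real integration would replace the simulated angles with values from a face-tracking library.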