[[Category:Puppetry]]
<h2>Puppetry Mocap</h2>
This is a high-level look at how the webcam_puppetry module works in the Second Life Puppetry viewer.
It follows the program logic: the module starts up and then enters a main loop where it fetches an image from the camera, tries to find a face and apply position and pose detection, and then attempts to locate the hands and fingers as well. After enough time has elapsed, it sends the information to the Second Life viewer.
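Stripped to its essentials, that loop might look something like the Python sketch below. It assumes OpenCV for camera capture and MediaPipe Holistic for face, pose, and hand landmark detection, and uses a hypothetical <code>send_to_viewer()</code> helper in place of the module's actual code for forwarding data to the viewer; the real webcam_puppetry module is more involved.
<syntaxhighlight lang="python">
# Minimal sketch of the capture loop, assuming OpenCV + MediaPipe Holistic.
# send_to_viewer() and SEND_INTERVAL are hypothetical stand-ins.
import time
import cv2
import mediapipe as mp

SEND_INTERVAL = 0.1  # seconds between updates sent to the viewer (assumed value)

def send_to_viewer(results):
    """Hypothetical placeholder: package landmarks and hand them to the viewer."""
    pass

def run_capture_loop():
    cap = cv2.VideoCapture(0)                    # open the default webcam
    holistic = mp.solutions.holistic.Holistic()  # face + pose + hand detector
    last_send = 0.0
    while cap.isOpened():
        ok, frame = cap.read()                   # fetch one image from the camera
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = holistic.process(rgb)          # face, pose, and hand landmarks
        now = time.monotonic()
        if results.pose_landmarks and now - last_send >= SEND_INTERVAL:
            send_to_viewer(results)              # throttle updates to the viewer
            last_send = now
    cap.release()
    holistic.close()

if __name__ == "__main__":
    run_capture_loop()
</syntaxhighlight>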
The source code contains comments and more detail on how this works, including some of the calculations needed to correct the data and improve the results. Camera position is sensitive here: it affects the quality of the data and how your avatar will look, so it is worth experimenting with camera placement to see how it interacts with the detected positions.
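As one illustration of the kind of correction those comments describe, the sketch below re-expresses each detected pose landmark relative to a reference point (the nose) so that shifting the camera up, down, or sideways translates the whole pose less. This is an assumption for illustration, not the module's actual math.
<syntaxhighlight lang="python">
# Illustrative only: recenter MediaPipe pose landmarks on the nose so that
# small changes in camera placement do not shift the entire pose.
def recenter_landmarks(pose_landmarks):
    """Return (x, y, z) offsets of each landmark relative to the nose landmark."""
    nose = pose_landmarks.landmark[0]  # MediaPipe PoseLandmark.NOSE == 0
    return [(lm.x - nose.x, lm.y - nose.y, lm.z - nose.z)
            for lm in pose_landmarks.landmark]
</syntaxhighlight>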
In this initial release of the Puppetry feature, the data from webcam motion capture is restricted in the Z direction (out of the screen). This means it works best for moving the hands up and to the side, not for reaching toward the camera. The pipeline from the raw landmark data out of the camera to your Second Life avatar in-world is still rough, and depth motion does not yet map correctly.
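One simple way to express that restriction, sketched below, is to damp or discard the Z component of the landmark offsets before they are mapped onto the avatar. This is a hedged illustration of the limitation, not the module's exact behavior.
<syntaxhighlight lang="python">
# Illustrative only: depth (Z) from a single webcam is noisy and not yet mapped
# reliably, so scale it down or zero it out before mapping to the avatar.
def flatten_depth(offsets, z_scale=0.0):
    """Scale the Z component of each (x, y, z) offset; z_scale=0 discards depth."""
    return [(x, y, z * z_scale) for (x, y, z) in offsets]
</syntaxhighlight>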
There are many complexities to solve to get this right, including:
* the camera setup and the person's size, position, and speed
* the capture rate and landmark detection
* translation and mapping to the Second Life skeleton and its dimensions
* swapping coordinate systems (sketched below)
* broadcast speed and server transmission
* inverse kinematics and animation priority
* and perhaps lag
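To make one of those items concrete, the sketch below swaps an image-space offset (x right, y down, z roughly toward the camera, as MediaPipe reports it) into a Second Life-style frame with X forward, Y left, and Z up. The sign conventions shown are assumptions and depend on whether the image is mirrored; the module's real mapping also handles scaling to the avatar's skeleton and dimensions.
<syntaxhighlight lang="python">
# Illustrative only: convert an image-space offset into an SL-style frame.
# Signs are assumptions; actual conventions depend on mirroring and calibration.
def camera_to_sl_frame(x, y, z):
    """Map an image-space (x, y, z) offset into an (forward, left, up) frame."""
    forward = -z   # toward the camera becomes toward the avatar's front
    left = -x      # image-right becomes avatar-left (assumes a mirrored view)
    up = -y        # image-down becomes avatar-up, so invert the sign
    return (forward, left, up)
</syntaxhighlight>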