Movie System GStreamer

From Second Life Wiki

Revision as of 11:43, 9 December 2007

The GStreamer media back-end for the Second Life Viewer

--Tofu Linden 12:41, 29 October 2007 (PDT)

INTRODUCTION

Goals

  • In-world video playback - alternative non-QuickTime implementation.

Why another video playback back-end?

  • QuickTime API exists only for MSWindows and Macs
  • We need to support Linux clients and other platforms
  • Would prefer an Open-Source video playback solution to the proprietary QuickTime API

Considerations

  • Lots of existing in-world media stream URLs assuming a QuickTime media engine
  • Thus a solution needs to make it possible to play the majority of these legacy streams
  • We don't want to get into the business of bundling codecs, for size and licensing reasons
  • We don't want *hard* runtime dependencies on uncommon or hard-to-bundle libraries

Why GStreamer?

  • Format-agnostic, streaming-agnostic, codec-agnostic runtime plugin system makes specific format support (e.g. playback of QT MOVs) someone else's problem.
  • Designed to be embedded in applications
  • Potentially supports streaming of existing QT content and a lot more.
  • De facto standard on reasonably modern Linux distributions; probably already installed (GStreamer 0.10.x).
  • Open-Source core and utility classes, with provision for a spectrum of licenses for specific plugins
  • Can be extended by the embedding application without modifying GStreamer itself
  • Can handle a lot of the output format conversion quite transparently
  • Very few cross-platform alternatives with comparable functionality!

What's not so great about GStreamer

  • Not very API or ABI stable
  • C-oriented API with its own bizarro object system
  • Spotty documentation
  • Mac and Windows support fairly young
  • No guarantee that a user has the right plugins installed for a useful level of compatibility with existing Second Life movie content

IMPLEMENTATION

  • Implement a new subclass of LLMediaImpl, wire it up to the SL app
  • Use GStreamer's playbin plugin to automatically set up the majority of the decode pipeline based on a provided media source URI
  • Implement custom audio and video sinks (as 'static', app-specific plugins) and plug those into the decode pipeline for this media source
  • We use APR to dlopen() the gstreamer libs and resolve their symbols dynamically instead of mandating GStreamer 0.10 as a hard runtime dependency, improving our runtime compatibility.
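The dynamic-loading idea above can be sketched roughly as follows. This is a hypothetical, minimal analogue and not the viewer's actual code: the real implementation goes through APR's DSO interface to load the GStreamer libraries, while here plain dlopen()/dlsym() is used, resolving cos() from libm purely as a stand-in for a gst_* symbol.

```cpp
#include <dlfcn.h>

// Resolve a single symbol from a shared library at runtime; returns nullptr
// if either the library or the symbol is missing, so the caller can simply
// disable the media back-end instead of refusing to start at all.
static void *load_symbol(const char *libname, const char *symname) {
    void *handle = dlopen(libname, RTLD_LAZY | RTLD_GLOBAL);
    if (!handle)
        return nullptr;  // library not installed on this system
    return dlsym(handle, symname);
}

// Function-pointer type for the stand-in symbol resolved in the demo.
typedef double (*unary_fn)(double);
```

In the viewer the same pattern would presumably be repeated for each required GStreamer entry point, with playback quietly disabled (rather than the whole client failing) if any symbol cannot be resolved.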

Why a custom video sink?

  • We need to be able to put decoded movie frames into OpenGL textures for display inside our world. These textures are subject to particular format and size constraints.
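As a concrete illustration of the size constraint (a hypothetical helper, not viewer code): OpenGL without the NPOT-texture extension only accepts power-of-two texture dimensions, so each decoded frame dimension must be mapped to a power of two. Rounding up preserves detail, whereas rounding down undersizes the frame.

```cpp
// Round a frame dimension up to the next power of two, as required for
// OpenGL textures when the non-power-of-two extension is unavailable.
static unsigned next_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```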

Why a custom audio sink?

  • Letting GStreamer open the audio device directly is simply not very reliable on Linux (and possibly other unixoids); the SL application will already be using the audio device, and multiplexing (simultaneous app audio) support on Linux has proven to be very spotty. Thus the idea is to consume GStreamer's audio data ourselves and feed it into FMOD(/whatever) for mixing into the existing SL audio.
    • Note: This aspect has been punted to the future while we figure out whether this is a significant issue in 'real' usage.
      • I can't speak for other distributions but in Fedora we fixed this a year ago by using dmix by default in FC5. We fixed it even more with pulseaudio in F8. Seg Baphomet 11:43, 9 December 2007 (PST)

VIDEO

  • GStreamer runs plugins in their own threads, so we need to protect access to data shared with the main app. Decode is quite independent of the SL application, so the frame rate of the decoded video may be higher or lower than the frame rate at which SL consumes and displays it.
    • Implement slvideo plugin to consume decoded video frames
    • slvideo's capability-negotiation with the GStreamer core ensures that we only receive video frames in RGBA or BGRA format, ready-scaled to power-of-two frame dimensions as required by OpenGL.
    • When GStreamer pushes a new video frame to us, we lock the slvideo instance object and copy that frame (its pixel data, dimensions and format) into fields within it, atomically as far as other lock-observing threads are concerned. We also raise a flag as an advisory to a consuming thread (the application) that a new frame was copied into here since the flag was last lowered.
  • The SL application polls the video system once per SL frame, by the nature of LLMediaImpl. This is the point at which it expects its own view of the video data to be updated if necessary, and to be told whether there is work to do this frame so it can resize/update its own private GL structures for the new frame.
    • We lock the slvideo object for this stream and examine its retained frame.
    • If that retained frame is not flagged as being new, we unlock the object and tell SL that there's nothing to be done, otherwise we lower the flag and continue:
    • If the retained frame is a different size from the previous one, then we unlock the slvideo object and tell SL that we've resized.
    • We copy the frame data over to SL's internal buffer, unlock the slvideo object and tell SL that an update to the GL texture is needed.
  • Possible improvements TBD:
    • When the slvideo thread comes to copy a new GStreamer frame into its internal retained frame, it could avoid the CPU time it takes for a big frame-copy if the 'new frame' flag is still raised, i.e. the previous frame wasn't consumed by the app yet. However, this would put the displayed movie frame up to a whole SL frame duration behind where it would otherwise be (worsening perceived video lag when SL's frame rate is low).
    • The retained 'new frame' flag could be a real semaphore/mutex which implicitly puts the slvideo decoder-thread to sleep as long as the main SL thread hasn't consumed the current frame. I've briefly experimented along these lines without success so far.
    • GStreamer's own choice of power-of-two destination size based on source size often errs on the side of being undersized. A more explicit caps-negotiator would help with that.
    • Perhaps there's another cleaner way to have GStreamer 'frame-skip' to save CPU as long as the SL frame rate is somewhat lower than the video frame rate.
    • Finish the custom audio sink (and associated LLAudio plumbing) if the current assumption of decent system audio multiplexing proves to be a significant problem for our users.
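The locked retained-frame handoff described above can be sketched roughly like this. This is a simplified, hypothetical model rather than the actual slvideo plugin (all GStreamer and LLMediaImpl plumbing is omitted): the decode thread copies each pushed frame into a locked holding area and raises the advisory flag; the app thread polls once per SL frame, and consumes the frame only when the flag is up.

```cpp
#include <mutex>
#include <vector>
#include <cstdint>
#include <cstddef>

// Shared holding area for the most recently decoded frame.
struct RetainedFrame {
    std::mutex lock;
    std::vector<uint8_t> pixels;
    int width = 0, height = 0;
    bool new_frame = false;  // advisory "unconsumed frame since last poll" flag
};

// Decode-thread side: called whenever the decoder pushes a frame to the sink.
void push_frame(RetainedFrame &rf, const uint8_t *data, size_t len,
                int w, int h) {
    std::lock_guard<std::mutex> g(rf.lock);
    rf.pixels.assign(data, data + len);  // atomic w.r.t. lock-observing threads
    rf.width = w;
    rf.height = h;
    rf.new_frame = true;                 // raise the flag for the consumer
}

// App-thread side: per-SL-frame poll. Returns true only if a new frame
// arrived since the last poll, lowering the flag and copying the data out;
// `resized` tells the caller its GL structures need rebuilding.
bool poll_frame(RetainedFrame &rf, std::vector<uint8_t> &out,
                int &w, int &h, bool &resized) {
    std::lock_guard<std::mutex> g(rf.lock);
    if (!rf.new_frame)
        return false;                    // nothing to do this frame
    rf.new_frame = false;                // lower the flag
    resized = (w != rf.width || h != rf.height);
    w = rf.width;
    h = rf.height;
    out = rf.pixels;                     // copy into the app's own buffer
    return true;
}
```

Note that if the decode thread pushes twice between polls, the first frame is silently overwritten; that matches the polling design above, where SL only ever displays the newest retained frame.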