WebRTC Voice

Summary

This project covers moving Second Life voice to WebRTC (Web Real-Time Communications).

Background

Second Life currently provides four types of voice communication:

  • Spatialized voice, for in-world communication.
  • Group voice.
  • Peer-to-peer voice.
  • Ad-Hoc voice.

We currently use a third-party provider (Vivox) for voice-as-a-service. Vivox has been the primary voice provider since voice was added in 2007 and has been a terrific partner. However, we would like to push the envelope of what is possible with Second Life, and, in that light, it's time for an update.

WebRTC (Web Real-Time Communications) is the predominant telephony protocol used by web-based applications such as Google Meet. It's built into Chrome, Safari, Firefox, and many other web browsers, and supports audio communication as well as data and video. All of those browsers expose an open API that allows access to WebRTC functionality via JavaScript. The core implementation of WebRTC is an open-source C++ package, actively maintained predominantly by Google along with many other contributors. (https://webrtc.org/)
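For a sense of what working with the native library looks like, here is a minimal sketch that creates an audio-only peer connection factory with the webrtc.org C++ API. The header paths, thread setup, and the particular CreatePeerConnectionFactory() overload shown are taken from a recent libwebrtc checkout and vary between releases; treat this as an illustration rather than the viewer's actual wrapper code.

  // Minimal, illustrative use of the native webrtc.org C++ API.
  // Headers and the factory overload below track a recent libwebrtc release.
  #include <memory>
  #include "api/create_peerconnection_factory.h"
  #include "api/audio_codecs/builtin_audio_decoder_factory.h"
  #include "api/audio_codecs/builtin_audio_encoder_factory.h"
  #include "rtc_base/thread.h"

  int main() {
    // libwebrtc expects dedicated network, worker, and signaling threads.
    std::unique_ptr<rtc::Thread> network = rtc::Thread::CreateWithSocketServer();
    std::unique_ptr<rtc::Thread> worker = rtc::Thread::Create();
    std::unique_ptr<rtc::Thread> signaling = rtc::Thread::Create();
    network->Start();
    worker->Start();
    signaling->Start();

    // Audio-only factory: built-in audio codecs (Opus, etc.), no video factories.
    rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> factory =
        webrtc::CreatePeerConnectionFactory(
            network.get(), worker.get(), signaling.get(),
            /*default_adm=*/nullptr,
            webrtc::CreateBuiltinAudioEncoderFactory(),
            webrtc::CreateBuiltinAudioDecoderFactory(),
            /*video_encoder_factory=*/nullptr,
            /*video_decoder_factory=*/nullptr,
            /*audio_mixer=*/nullptr,
            /*audio_processing=*/nullptr);
    return factory ? 0 : 1;
  }

Peer connections created from such a factory are what carry the audio streams between a client and a voice server.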

Features provided by the webrtc.org implementation include:

  • NAT hole punching via STUN.
  • Data relaying via TURN.
  • Audio, Video, and Data transmission.
  • Audio/video device selection.
  • Stereo audio.
  • Configurable audio/video bandwidth allowing control over audio and video quality.
  • Audio noise reduction.
  • Audio automatic gain control.
  • Audio echo cancellation (see the configuration sketch after this list).
  • Multiple audio tracks per stream.
  • Multiple streams per connection.
  • Simultaneous connections to multiple WebRTC sources (servers, clients, etc.)
  • Peer-to-peer connections (direct or via relay).
  • Communication with peers via servers (SFU, MCU).
  • Improved privacy and security.
  • And many more.
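
Several of the audio features above (noise reduction, automatic gain control, echo cancellation) are toggled through the library's AudioProcessing configuration. The sketch below assumes a recent libwebrtc release; field names have shifted across versions, so take it as illustrative rather than as the exact settings Second Life will use.

  // Illustrative: enabling noise suppression, AGC, and echo cancellation
  // via libwebrtc's AudioProcessing module (field names from a recent release).
  #include "modules/audio_processing/include/audio_processing.h"

  rtc::scoped_refptr<webrtc::AudioProcessing> CreateVoiceAudioProcessing() {
    rtc::scoped_refptr<webrtc::AudioProcessing> apm =
        webrtc::AudioProcessingBuilder().Create();

    webrtc::AudioProcessing::Config config;
    config.echo_canceller.enabled = true;     // acoustic echo cancellation
    config.noise_suppression.enabled = true;  // background-noise reduction
    config.noise_suppression.level =
        webrtc::AudioProcessing::Config::NoiseSuppression::kHigh;
    config.gain_controller1.enabled = true;   // automatic gain control
    apm->ApplyConfig(config);
    return apm;
  }

An AudioProcessing object configured this way can be passed to CreatePeerConnectionFactory() so outgoing audio is processed with those settings.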

We're moving over to self-hosted WebRTC-based voice to gain those features, and to open the door for future features.

As the voice subsystem in the viewer is modular, the major changes are limited to a new WebRTC voice module and a supporting dynamic library which wraps the native webrtc library.
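
As an illustration of that modularity, both the existing Vivox module and a WebRTC module can sit behind one interface, so the rest of the viewer does not care which backend is active. The class and method names below are invented for the example and are not the viewer's real ones.

  // Hypothetical sketch of the modular seam; names are invented for the example.
  #include <string>

  class VoiceModule {
   public:
    virtual ~VoiceModule() = default;
    virtual void connectToChannel(const std::string& channelId) = 0;
    virtual void leaveChannel() = 0;
    virtual void setMicMuted(bool muted) = 0;
    virtual void setSpeakerVolume(float volume) = 0;  // 0.0 .. 1.0
  };

  // The WebRTC-backed module would delegate to a thin dynamic library that
  // wraps the native webrtc.org package.
  class WebRTCVoiceModule : public VoiceModule {
   public:
    void connectToChannel(const std::string& channelId) override {
      // Negotiate a WebRTC peer connection with the appropriate voice server.
    }
    void leaveChannel() override {}
    void setMicMuted(bool muted) override { mic_muted_ = muted; }
    void setSpeakerVolume(float volume) override { speaker_volume_ = volume; }

   private:
    bool mic_muted_ = false;
    float speaker_volume_ = 1.0f;
  };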

Changes from Vivox include:

  • Spatial voice for a given region is handled by a Second Life Voice Server running in conjunction with that region.
  • Ad-Hoc and Group voice are handled by a pool of Second Life Voice Servers dedicated to those types of voice.
  • Unlike Vivox's direct-call mechanism, WebRTC P2P is routed through the Ad-Hoc voice server pool as if it were a two-person Ad-Hoc voice conference (see the routing sketch after this list).
  • There is no separate SLVoice executable for WebRTC Voice. (In the short term, we'll allow WebRTC Voice and Vivox voice to coexist, so SLVoice will still be running to provide Vivox support.)
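
The routing described above boils down to a simple decision: spatial audio is negotiated with the region's own voice server, while Group, Ad-Hoc, and P2P calls are negotiated with the shared Ad-Hoc/Group pool, P2P being just a two-person Ad-Hoc conference. The sketch below is hypothetical; the enum, struct, and parameter names are invented for illustration.

  // Hypothetical routing sketch; types and hosts are invented for the example.
  #include <string>

  enum class VoiceChannelType { Spatial, Group, AdHoc, PeerToPeer };

  struct VoiceServerInfo {
    std::string host;     // which Second Life Voice Server to negotiate with
    std::string channel;  // channel identifier on that server
  };

  VoiceServerInfo RouteVoiceChannel(VoiceChannelType type,
                                    const std::string& regionVoiceHost,
                                    const std::string& adhocPoolHost,
                                    const std::string& channelId) {
    switch (type) {
      case VoiceChannelType::Spatial:
        // Spatial voice is handled by the server paired with the region.
        return {regionVoiceHost, channelId};
      case VoiceChannelType::PeerToPeer:
        // Unlike a Vivox direct call, P2P rides the Ad-Hoc pool as a
        // two-person conference.
        [[fallthrough]];
      case VoiceChannelType::Group:
      case VoiceChannelType::AdHoc:
        return {adhocPoolHost, channelId};
    }
    return {};
  }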

Roadmap

This schedule is tentative and subject to change.

Technical Documentation

WebRTC on Second Life, a Developer's View: https://webrtc-public-voice-beta.s3.us-west-2.amazonaws.com/WebRTC+on+Second+Life%2C+a+Developer's+View.pdf