User talk:Lee Ponzu/Pseudo-Voice

Barriers to Implementation

SignpostMarv Martin (15:46, 12 January 2007 (PST)):

Distributed Processing

  • Processing the audio on the local client and streaming it in-world would require Peer-to-Peer streaming audio.

Local Processing

  • Distributing text-to-speech preferences would require modifications to the server-side code, the use of third-party servers, or Peer-to-Peer protocols (see the sketch after this list).
  • Processing the data for up to 400 avatars at once (100 avatars per sim, within shout distance of a region corner) would add significant processing overhead.
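
A minimal sketch of the kind of per-avatar preference record a locally processing client might keep. The field names and value ranges here are assumptions for illustration, not an existing Second Life data format.

  from dataclasses import dataclass

  @dataclass
  class VoicePrefs:
      avatar_key: str         # UUID of the avatar these preferences belong to
      voice: str = "default"  # name of a synthesiser voice
      pitch: float = 1.0      # 1.0 = unmodified pitch
      rate: float = 1.0       # 1.0 = unmodified speaking rate
      volume: float = 1.0     # 0.0 (muted) to 1.0 (full)

  # Even in the 400-avatar worst case above, storing these records is cheap;
  # the real cost is running synthesis for every incoming chat line.
  nearby_prefs: dict[str, VoicePrefs] = {}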

While distributed processing would probably mean less CPU-intensive code, local processing would have the distinct advantage of allowing users' preferences to be overridden locally, letting people do such things as cap voice pitch to prevent annoying chipmunk-speak from occurring.
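
A minimal sketch of such a listener-side override, assuming hypothetical pitch values where 1.0 means an unmodified voice; the cap values would come from the listener's own settings.

  def effective_pitch(requested_pitch: float,
                      min_pitch: float = 0.5,
                      max_pitch: float = 1.5) -> float:
      """Clamp a speaker's requested pitch to the listener's configured range."""
      return max(min_pitch, min(max_pitch, requested_pitch))

  # A speaker asking for triple pitch still comes out at the listener's cap:
  # effective_pitch(3.0) -> 1.5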

Addendum

Alderic LeShelle (00:55, 25 January 2007 (PST)):

  • It is possible to restrict changes to the client side only: existing chat/IM protocols could carry the speech preference data, or a client-to-client protocol could be used instead (a possible encoding is sketched below).
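
A minimal sketch of one way preference data could piggyback on an existing text channel, assuming a hypothetical "key=value" encoding behind a recognisable prefix so other clients can strip or ignore it. This is not an official Second Life protocol.

  PREFIX = "#ttsprefs:"  # hypothetical marker, chosen only for illustration

  def encode_prefs(pitch: float, rate: float, voice: str) -> str:
      """Pack the preference fields into a single short chat/IM message."""
      return f"{PREFIX}pitch={pitch};rate={rate};voice={voice}"

  def decode_prefs(message: str) -> dict | None:
      """Return the preference fields if the message carries them, else None."""
      if not message.startswith(PREFIX):
          return None
      fields = {}
      for pair in message[len(PREFIX):].split(";"):
          key, _, value = pair.partition("=")
          fields[key] = value
      return fields

  # decode_prefs(encode_prefs(1.2, 1.0, "en-GB"))
  # -> {'pitch': '1.2', 'rate': '1.0', 'voice': 'en-GB'}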

Distributed processing would mean a considerable network bandwidth impact, whether the audio is streamed from each client or from a dedicated chat server. Local processing would only add the speech preference data to chat traffic, but would put the speech synthesis CPU load on each client. That CPU load is easier to cap, though: for example, by skipping speech synthesis for avatars outside a certain configurable hearing range, or by disabling it for the nth-nearest avatar and everything further away.
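
A minimal sketch of both capping strategies, assuming hypothetical avatar positions in region coordinates and limit values that a real client would read from its own settings.

  import math

  def speakers_to_synthesise(listener_pos, speaker_positions,
                             hearing_range=20.0, max_speakers=10):
      """Return the avatars whose chat should get speech synthesis:
      those within hearing range, nearest first, capped at max_speakers."""
      in_range = [(key, math.dist(listener_pos, pos))
                  for key, pos in speaker_positions.items()
                  if math.dist(listener_pos, pos) <= hearing_range]
      in_range.sort(key=lambda item: item[1])  # nearest first
      return [key for key, _ in in_range[:max_speakers]]

  # speakers_to_synthesise((128, 128, 25), {"ava1": (130, 128, 25),
  #                                         "ava2": (200, 20, 25)})
  # -> ["ava1"]   (ava2 is outside the 20 m hearing range)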