Sounds

Feature Design Document

The Sounds folder contains audio clips: 16-bit, 44.1 kHz PCM .WAV files under thirty seconds in length (see the Functional Spec below for details).

Functional Spec

Sounds must be .WAV files in standard PCM format, 16-bit/44.1 kHz, mono or stereo (stereo is converted to mono on upload), and less than thirty seconds in length. Thus 29.99 seconds is fine, but 30.0 seconds will fail to upload. (Source: LL Knowledge Base.)
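
As a quick sanity check, the sketch below (Python, standard-library wave module only; the file name is hypothetical and this is not an official Linden Lab tool) tests a clip against the requirements above before attempting an upload.

import wave

def check_sl_sound(path):
    """Report whether a .WAV file meets the stated SL upload requirements."""
    with wave.open(path, "rb") as wav:
        problems = []
        if wav.getcomptype() != "NONE":                # must be uncompressed PCM
            problems.append("not standard PCM")
        if wav.getsampwidth() != 2:                    # 2 bytes per sample = 16-bit
            problems.append("not 16-bit")
        if wav.getframerate() != 44100:                # must be 44.1 kHz
            problems.append("sample rate is not 44.1 kHz")
        if wav.getnchannels() not in (1, 2):           # mono, or stereo (converted to mono)
            problems.append("not mono or stereo")
        duration = wav.getnframes() / wav.getframerate()
        if duration >= 30.0:                           # 29.99 s is fine, 30.0 s fails
            problems.append("%.2f s is not under 30 seconds" % duration)
    return problems

# Example: print any problems for a hypothetical clip
for issue in check_sl_sound("rumble.wav"):
    print(issue)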

Test scripts

Audio test

Discussion for future improvements

Sound Synthesis [pardon the ramble... if it is inappropriate here please contact me]

IMHO a much-ignored aspect of game sound design is Sound Synthesis. Presently the pervasive model is 'event-driven file playback', which, although often workable (e.g. gunshots, explosions, physical interactions), also limits the sound designer's ability to craft more dynamic, deeply interactive sonic events. A simple example: imagine a 'machine' emitting a low rumble, with a switch on the side which, when touched, changes the sound (say, to a higher pitch). In contrast, imagine the same machine with a lever on it, which allows the user to change the sound (again, say its pitch) continuously as they interact with it.

Whereas the former can be created with well-crafted looping file playback and an interactive trigger to another looping file, the latter (arbitrary interaction on the user's part) would be almost impossible to create using only 'file playback'. Instead, this kind of dynamic interaction can only be created using standard sound-design synthesis tools (oscillators, filters, etc.). And a quick look at the API of SL's sound engine (FMOD) reveals that it is fully capable of this.
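
As a rough sketch of the difference, the snippet below (Python, standard library only; it is not SL or FMOD code, and every name in it is made up for illustration) renders a short tone whose pitch is driven by a 'lever position' parameter, the kind of continuous control an oscillator gives for free but fixed file playback cannot.

import math, struct, wave

RATE = 44100  # samples per second, matching the upload spec above

def lever_tone(lever, seconds=1.0, path="lever_tone.wav"):
    """Write a sine tone whose pitch follows a lever position in [0.0, 1.0].

    With an oscillator the pitch is just a parameter, so it can track the
    user's input continuously; with file playback, every pitch would need
    its own pre-recorded clip.
    """
    freq = 110.0 + lever * 770.0          # map lever position to 110-880 Hz
    frames = bytearray()
    for n in range(int(RATE * seconds)):
        sample = math.sin(2.0 * math.pi * freq * n / RATE)
        frames += struct.pack("<h", int(sample * 32767))   # 16-bit signed PCM
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)               # mono
        wav.setsampwidth(2)               # 16-bit
        wav.setframerate(RATE)
        wav.writeframes(bytes(frames))

lever_tone(0.25)   # quarter-way up the lever -> a low tone

With playback-only tools, covering the lever's full travel would mean pre-rendering and swapping many such clips; with a live oscillator, the same parameter could simply follow the lever in real time.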

So it is my suggestion that these features be exposed to the sound designer.

Whether this is feasible in practice I do not know, but if so, it would allow sound designers to craft much more dynamic and interactive sonic content. - Robb

Relationship to other features

List of features that need to be tested when this feature changes, and why.

(none)

User Guides

(none)