AS3 SoundChannel Feature Request

Regarding this posting, I had an idea today for handling SoundChannels in a very effective way. One of my favourite features in AS3 is the display list, where DisplayObjects can easily be added to and removed from a DisplayObjectContainer while remaining intact objects.
So, imagine a SoundChannelList, where SoundChannels can be added as children of other SoundChannels. Assuming we get new manipulators such as effects and filters alongside the well-known volume and pan, nested SoundChannels would be affected by their own settings as well as by those of their parent SoundChannels.

A SoundChannel with an effect and a volume of 50%.
A nested SoundChannel with a volume of 50%.
The nested SoundChannel would then play at 25% (relative to its waveform volume), with the parent SoundChannel’s effect applied.
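
The math is just a product down the parent chain. A minimal TypeScript sketch of the idea (AS3 syntax is similar; `SoundChannelNode` and its members are hypothetical names, not part of the Flash API):

```typescript
// Hypothetical nested sound channel: the effective volume is the product
// of this node's own volume and every ancestor's volume.
class SoundChannelNode {
  parent: SoundChannelNode | null = null;

  constructor(public volume: number) {} // 0.0 .. 1.0

  addChild(child: SoundChannelNode): void {
    child.parent = this;
  }

  // Walk up the tree, multiplying volumes together.
  effectiveVolume(): number {
    const parentVol = this.parent ? this.parent.effectiveVolume() : 1.0;
    return this.volume * parentVol;
  }
}

const parentCh = new SoundChannelNode(0.5); // parent at 50%
const childCh = new SoundChannelNode(0.5);  // nested channel at 50%
parentCh.addChild(childCh);
console.log(childCh.effectiveVolume());     // 0.25 → plays at 25%
```

The same recursion would apply to pan and to any per-channel effect chain.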

Get the point? Awesome possibilities!

9 thoughts on “AS3 SoundChannel Feature Request”

  1. Hm, no, I don’t get it.

    Why is this a more effective way to handle SoundChannels?

    Having SoundChannels as a kind of nested clip is a fine idea, but what is the higher purpose of that?
    You could have set the sound to 25% in the first place…
    And what about panning? If the nested SoundChannel is 100% left and the parent SoundChannel is 100% right, will the nested one end up at 0% (centered)?

    Sorry, I don’t understand it; can you give another example?

  2. Basically you only need to be able to manage each playing Sound’s volume. In theory you could, on top of the lowest-level Sound object (Sample?), keep track of your own ‘channel management’ and ‘performance management’ by knowing the max polyphony of channels (currently 8) and cycling new sound events into those channels. You could even develop a priority queue for sounds to decide which new events should override which old ones. All this can be done with no new additions to AS.
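
    As a hedged sketch of that channel-cycling and priority idea (TypeScript for illustration; `VoiceManager` and all names here are invented, not Flash APIs):

```typescript
// Fixed-polyphony voice allocator: a new sound takes a free slot if one
// exists; otherwise it evicts the lowest-priority playing voice, and
// only if the new sound's priority is at least as high.
interface Voice {
  id: string;
  priority: number; // higher wins
}

class VoiceManager {
  private voices: Voice[] = [];

  constructor(private maxPolyphony: number = 8) {}

  play(id: string, priority: number): boolean {
    if (this.voices.length < this.maxPolyphony) {
      this.voices.push({ id, priority });
      return true;
    }
    // Find the weakest currently playing voice.
    let weakest = 0;
    for (let i = 1; i < this.voices.length; i++) {
      if (this.voices[i].priority < this.voices[weakest].priority) weakest = i;
    }
    if (this.voices[weakest].priority <= priority) {
      this.voices[weakest] = { id, priority }; // steal the slot
      return true;
    }
    return false; // new sound is too unimportant to play
  }

  playing(): string[] {
    return this.voices.map(v => v.id);
  }
}
```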

    That said, several higher-level classes/objects should be made (could be made third party) to develop an effective ‘interactive sound/music’ environment. Typically the architecture would be something like this (forgive me, this is linear):

    //Sample Data//
    The raw sound sample data.


    //WaveTable Data//
    A convenient structure for grouping samples or setting them up to play back as a macro sequence.


    //Instrument Data//
    Can trigger several samples or sequence of samples from WaveTable, can define offsets (base values), envelopes and oscillators for Frequency/Amplitude(Pan), can define ‘performance vectors’ or interactive events and how they affect the playing Sample(s), can define a sound’s base ‘priority’ or the rules of its dynamic priority (etc.).

    This is typically the level where interactivity can be passed in to affect the ‘tone’ of a given sound being played…


    //Pattern Data//
    This is where events are scheduled which trigger Sounds (Instruments) to play, and pass them control parameters which can affect the way they sound. This can be manipulated by realtime changes in Model/Controller. This is probably the level where you would implement some sort of ‘voice groups’ or ‘tracks’.

    This is typically the level where interaction could dictate the overall ‘music’ or changes in musical or sequenced events.


    //Song Data//
    This is where overall pattern order is defined or generated.

    Typically, in a playback situation, your pattern data would have ‘voice groupings’ which define the equivalent of similar sounds passing through a mixing board. In this sense it would (obviously) be nice to be able to control voice-group volumes, which in turn pass the scalar values all the way down. Ultimately, what you are writing as a playback engine is a low-level event manager: on every ‘step’, the event manager checks for new events and updates all the playing ‘voices’ accordingly. Your applied pitch and volume changes are passed down level by level:

    Song = master volume scalar

    Pattern = track / voice group level volume scalar

    Instrument = instrument volume and volume envelope scalar

    Wavetable = maybe has a per entry volume scalar

    Sample = has a base volume scalar

    The same works with pitch/frequency (transpose), and pan is really just discrete L/R volume (amplitude) values.
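
    The level-by-level scaling above is just a product of scalars, with pan folded in as separate L/R gains. A TypeScript sketch (all names are illustrative, and the simple linear pan law is an assumption):

```typescript
// Each level contributes a volume scalar in 0.0..1.0; the final
// amplitude is their product.
function finalVolume(
  song: number,       // master volume scalar
  pattern: number,    // track / voice-group scalar
  instrument: number, // instrument volume (incl. envelope)
  wavetable: number,  // optional per-entry scalar
  sample: number      // base sample volume
): number {
  return song * pattern * instrument * wavetable * sample;
}

// Pan as two independent amplitudes, per the comment's model:
// pan -1 (full left) .. +1 (full right).
function panGains(pan: number): { left: number; right: number } {
  return { left: (1 - pan) / 2, right: (1 + pan) / 2 };
}

const v = finalVolume(1.0, 0.8, 0.5, 1.0, 1.0); // 0.4
const g = panGains(0); // centered: { left: 0.5, right: 0.5 }
```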

    I don’t know if this helps, but I guess what I’m getting at is this: if one were to write an ‘audio manager’ or ‘sequencer’ in AS, they could do so on top of the basic playback functionality. Given that they can control the volume of a playing sound, they can have that volume be inherited in any number of ways, as needed…

  3. Thinking aloud, your idea would actually be pretty nice to have implemented, as a time saver. The main difficulty is polyphony: you only really have 8 total sound channels (voices) to begin with… so it would almost be moot for concurrently playing voices (parallel) vs. a way of assigning a whole range of sounds (samples) to a group which can then have its main properties scaled (serially, during playback over time).

    Processing actual SAMPLE data in realtime may prove to be pretty tricky. Frequency offset and amplitude offset (vol/pan) should be quite possible, but actually manipulating the sample data across sample groups (wavetables?) would probably be too intensive to do much of in realtime.

    My assumption about getting proper access to SampleData in a future version of Flash is that you’ll be able to pre-calc or ‘offscreen’-calc synthesis and processing, or implement unique data compression, in AS. Always a trade-off with memory footprint…

    If you think about it, sound requires a much tighter window between processing time and perception time than visual events do… for example, sound plays back at thousands of samples per second, while video (the picture) can update as few as 10 times a second and still be passable, if clunky. When audio fails to keep up, the result is dropped or choppy sound, which from a perception standpoint is much more disruptive than losing video FPS.
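
    Putting rough numbers on that gap (44.1 kHz is a standard sample rate; the 10 fps figure is from the example above):

```typescript
// At a 44,100 Hz sample rate, a single video-frame's worth of time
// corresponds to thousands of audio samples.
const sampleRate = 44100; // audio samples per second (CD quality)
const videoFps = 10;      // "passable but clunky" video rate

const samplesPerVideoFrame = sampleRate / videoFps;
// Missing one video frame is a visible stutter; missing this many audio
// samples in a row is an audible click or dropout, which is far more jarring.
console.log(samplesPerVideoFrame); // 4410
```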

    So unless/until hardware rendering of video/audio is supported, I think it’s fairly unlikely we’ll get past the 8-channel limitation or gain the ability to do much heavy realtime effects processing (beyond basic stuff like changes in base frequency/volume at a lower overall rate than actual sample time).


  4. Argh, I didn’t see the anti-spam question and it blew away my response; now I have to re-type…

    I might be missing the point, but you can achieve this with the current Sound object and nested MovieClips…

    var parentMc = createEmptyMovieClip(...);
    var childMc = parentMc.createEmptyMovieClip(...);

    var parentSound = new Sound( parentMc );
    var childSound = new Sound( childMc );

    parentSound.setVolume( 50 );
    parentSound.setPan( 50 );
    childSound.setVolume( 50 );

    Should do the trick, no?

  5. I think that would inherit. If you actually attached a sound, presumably to the child, it would use the child’s volume; unless the child’s volume wasn’t defined, in which case it would use the parent’s value (override inheritance, like CSS).

    I don’t think it actually ‘scales’ the volume sequentially, level by level, from lowest to uppermost.

  6. Funny, I just wrote a SoundBus class to do this exact thing in AS3 before stumbling onto this post. Basically, the sound bus object can have other sound bus objects parented to it, and you can call a playSound function on any sound bus passing in the sound and various parameters (volume, etc). It then recurses up the tree multiplying the volumes together to get the real sound volume and stores the soundChannel in an array of currently playing sounds. If the channel volume gets set, then it adjusts the volume of all the sounds currently on the bus and tells any sub-buses to do the same thing, much like a real mixing console.
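
    That recursion can be sketched as follows (TypeScript for illustration; this `Bus` is a guess at the shape of the class described, not the commenter’s actual code):

```typescript
// A bus has its own volume and an optional parent bus. The effective
// volume of any sound played on a bus is the product of every bus
// volume from that bus up to the root, like gain staging on a mixer.
class Bus {
  constructor(public volume: number, private parent: Bus | null = null) {}

  // Recurse up the tree, multiplying volumes together.
  effectiveVolume(): number {
    const above = this.parent ? this.parent.effectiveVolume() : 1.0;
    return this.volume * above;
  }

  // Setting a bus volume conceptually re-scales everything below it; a
  // real implementation would also update currently playing channels.
  setVolume(v: number): void {
    this.volume = v;
  }
}

// A small mix tree in the spirit of the comment:
const master = new Bus(1.0);
const music = new Bus(0.5, master);
const track1 = new Bus(0.5, music);

console.log(track1.effectiveVolume()); // 0.25
master.setVolume(0.5);
console.log(track1.effectiveVolume()); // 0.125
```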

    For our engine, we’re basically going to have a mix set up something like so:

    Master Fader (user controlled)
    – music volume (user controlled)
    – music volume track 1 (game controlled)
    – music volume track 2 (game controlled)
    – sound volume (user controlled)
    – sound volume (game controlled)

    By forcing everything through this structure we can ensure that all sounds get properly volume-scaled when the user adjusts one of their sliders, while still dynamically adjusting the volume of channels when the game wants to ramp them up and down. Best of all, the structure is arbitrary, making it easy to rearrange or extend for later use.

    I actually broke this into two classes (Bus and SoundBus which inherits from Bus) in case I ever want to set up this type of relationship in another system, such as a video mixer or something.

  7. I wish I had this now, as managing them as individual entities in games is kind of hell.

    I am using a dictionary at the moment to manage them, which is OK, but your suggestion offers far greater possibilities.

    Do you know if it was included?
