
Unity Audio SDK (aka Unity Mixer plug-in) support, or existing Csound mixing options?

okay last one for a while i promise…sort of a pitch for using the AudioSDK in Unity.

it occurred to me that when i suggested using the Unity Audio SDK instead of your current AudioSource-based system, i didn't think about how useful a Mixer plugin would be for, well, mixing. assuming CsoundUnity can load multiple CSD files, routing between effects and instruments could be set up visually, like a bus send for each synth, and all of it could be captured in Snapshots or exposed in the Editor. currently, individual synths can be routed via their respective Audio Sources to Output Groups, but any effect one wanted to apply globally would be tricky to manage, with the send volumes set in each instrument's CSD and the effect's output coming from its own Audio Source. i can see the synths using AudioSources, but effects would greatly benefit from being a Unity mixer plug-in, or located there.

continuing on that front, is there a good example of mixing in Csound, especially something like a bus send? in terms of the 3D sequencer i was definitely thinking of being able to, say, turn on an effect for one particular event on one layer, but make a send available to all layers that could be turned on if desired. i didn't see any mixing examples per se, though of course there's all that additive stuff where you're obviously mixing many waveforms or partials into one output. but aux sends are what i'm thinking of especially.

It’s really not an issue. I’m very interested in this area, so it’s always good to hear people’s thoughts. [quote=“metaphysician, post:1, topic:490”]
i can see the synths using AudioSources, but effects would greatly benefit from being a Unity mixer plug-in, or located there.
[/quote]

This is true. I’ll see what I can do. I did create a very basic Csound-based plugin for Unity using the older API. It may not be much work to update it so you can test it out.

What makes Csound’s use in games a little different from its traditional use is the option to load multiple instances of Csound. Most composers in the past would run everything through a single orchestra. In that case mixing was simple, using global variables or a channel bus. But when Csound is loaded across different instances it is not possible to send audio back and forth without some serious forethought, which adds more weight to your arguments above about having Csound as part of the mixer plugin setup. Ok, I’ll try it out and see how I get on. I can’t promise it will be anytime in the next few weeks, but leave it with me.
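To make the single-orchestra approach concrete, here is a minimal sketch of an aux send using a global audio variable, in the style mentioned above. Instrument numbers, the `"send1"` channel name, and the reverb settings are all arbitrary choices for illustration:

```csound
<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

gaSend init 0          ; global aux-send bus, shared by all instruments

instr 1                ; a synth with a send control
  kSend chnget "send1" ; send level, exposed to the host (e.g. CsoundUnity)
  aSig  vco2  0.3, p4
  gaSend = gaSend + aSig * kSend  ; tap into the bus
  outs  aSig, aSig     ; dry signal straight to the output
endin

instr 100              ; effect return, kept running for the whole performance
  aL, aR reverbsc gaSend, gaSend, 0.85, 12000
  outs  aL, aR
  clear gaSend         ; zero the bus so it doesn't accumulate each k-cycle
endin
</CsInstruments>
<CsScore>
i 100 0 -1             ; effect instrument held indefinitely
i 1   0  2 220
</CsScore>
</CsoundSynthesizer>
```

Any number of instruments can add into `gaSend` with their own send levels; the one effect instrument reads the sum and clears it.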

thanks a lot! yeah, it looks like you’re busy with finals - good luck with them!

as far as approximating a bus send without using plugins on the Mixer, the way i would imagine it for my purposes is that there’s an output on an instrument that isn’t seen by Unity but is recognized by Csound. that output then goes to another CSD serving as an effect input. the output of that CSD would be routed to a Unity AudioSource, while the send control would be a parameter exposed in Unity on the instrument. it’s not a flexible method though, and it requires inter-script messaging, which i haven’t researched in Csound. anyway, still trying to wrap my head around Csound in general. definitely a challenge to my thinking.

This won’t work without writing your own inter-process mechanism. You’d need all your instruments to live in the same orchestra. Alternatively you could send audio data over OSC or another network protocol, but I wouldn’t advise going down this route for reasons that are most likely already quite obvious to you. It sounds to me like the only real way to do this would be with a Csound mixer plugin. Am I correct in thinking that you can route audio to and from mixer plugins in an almost modular fashion? I have never really delved into this side of Unity.
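For completeness, when everything does live in one orchestra, the same send/return routing can also be done with Csound's named software bus (`chnmix`/`chnget`/`chnclear`) instead of a global variable, which avoids the instruments having to know each other's variable names. A sketch, with a hypothetical `"fxbus"` channel name and arbitrary levels:

```csound
instr 1                        ; any synth can mix into the named bus
  aSig  vco2 0.3, 330
  chnmix aSig * 0.4, "fxbus"   ; 0.4 = send level for this instrument
  outs  aSig, aSig             ; dry output
endin

instr 100                      ; effect instrument reads and clears the bus
  aIn   chnget "fxbus"
  aL, aR reverbsc aIn, aIn, 0.8, 10000
  outs  aL, aR
  chnclear "fxbus"             ; reset so signals don't accumulate across k-cycles
endin
```

The named channels are also visible through the Csound API, which is convenient when a host like CsoundUnity wants to inspect or drive them.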

as far as i know it works like a normal mixer. plugins at the top of an Output Group (Mixer channel) flow downwards to the bottom of that group. there’s an Attenuator plugin that is always present and acts as a fader, so you can put effects before or after the fader. there are also sends and receives, and these can be positioned before or after the Attenuator as desired. all of this can be under the control of Snapshots, plugin parameters can be exposed to the editor, and you can create as many sub-mixers as you might need.

for me though, the most interesting thing about the Mixer is that it is simultaneously available to 2D and 3D output at the Listener, depending on where the Spatial Blend control of whichever AudioSource feeds it is set. it does not provide surround or stereo placement per se, though; that is determined by the object’s position relative to the Listener, or by the panning control on the Audio Source. so i can still see a need to have plugins available at the Audio Source as well as on the Mixer as a result.