
CsoundUnity Mixer Plugin

After some discussion here about the potential need for a Csound mixer plugin for Unity I went ahead and built one. It’s Windows only for now as I’m away from my Mac for the next week or so. This plugin will work with existing Cabbage effects, so long as they only use sliders to control their parameters. I’m afraid it will remain this way until I can spend some time on creating a custom GUI.

To use it, you will need to have either Csound or Cabbage already installed. The CsoundUnity Mixer Plugin package comes with a very simple scene. An AudioSource is attached to a cube. Its output is sent to a PingPongDelay effect (thanks Iain) in an AudioMixer. To create a new effect, simply make a copy of AudioPluginPingPongDelay.dll and rename it AudioPluginEffectName.dll, where EffectName should be replaced with the name of the .csd file containing the instrument code. This .csd file should reside in the same folder as the plugin dll. For example, if you create a csd file called CrazyShit.csd, you should create a copy of the plugin .dll and name it AudioPluginCrazyShit.dll. Unity insists on audio plugins being named with AudioPlugin at the start; if you call it by another name Unity won’t load it. If this all sounds a little confusing, just take a look in the Plugins/x86_64 folder. It should be clear from looking there what’s going on.
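
For illustration, the Plugins folder for the hypothetical CrazyShit.csd example would end up looking roughly like this (the exact contents of the shipped package may differ):

```
Assets/Plugins/x86_64/
    AudioPluginPingPongDelay.dll    <- plugin shipped with the package
    PingPongDelay.csd               <- its instrument code
    AudioPluginCrazyShit.dll        <- renamed copy of the dll above
    CrazyShit.csd                   <- your new effect's instrument code
```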

With this copying and renaming complete you can go ahead and launch Unity, and add a mixer to your project. When you click on ‘Add Effect’ you will see listed any effects you’ve created. The name of the effect as it appears in Unity is the caption() identifier used with the form widget. Each parameter’s name will be the slider’s text() identifier if there is one; otherwise Unity will display the slider’s channel name.
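
As a minimal sketch (the widget names, channels, and ranges here are made up), a Cabbage section like the following would show up in Unity as an effect called “Ping Pong Delay” with parameters “Delay Time” and “feedback”:

```
<Cabbage>
form caption("Ping Pong Delay"), size(400, 150)
; text() becomes the parameter name shown in Unity
hslider bounds(10, 10, 380, 40), channel("delayTime"), text("Delay Time"), range(0, 2, 0.5)
; no text() identifier, so Unity falls back to the channel name
hslider bounds(10, 60, 380, 40), channel("feedback"), range(0, 1, 0.7)
</Cabbage>
```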

Finally. This is completely untested!! I’m sharing it in the hope that @metaphysician and others will give it a test ride and let me know how it works. Please be patient. It could be weeks or months (or days!) before we have a fully stable version.

p.s. no reason why this won’t work for synths either. One just needs a parameter to enable or disable playback.
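
For example (just a sketch, with a made-up channel name), an always-on synth instrument could treat one slider as that on/off switch:

```
instr 1
    kPlay chnget "playback"          ; slider on the mixer strip, 0 = off, 1 = on
    aOut  vco2   0.3, 220            ; some synth voice
          outs   aOut * kPlay, aOut * kPlay  ; silent while the parameter sits at 0
endin
```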

thanks a bunch Rory! i will try giving this a whirl on the Windows machine. hopefully should have some feedback on it by the end of the weekend (aka Easter). anyway also wanted to register my interest in helping devise a custom Editor UI approach that could parse Cabbage UI info into some kind of equivalent editor interface. rotary controls especially are pretty rare in the Unity Editor, though there appears to be an older leftover object in the Inspector that’s still present. there’s a UnityExtensions UI toolset that’s freely available, and it has a rotary control, but i have no idea if those extensions can work for creating custom UI interfaces - i suspect not, but i’ll investigate on that front.

I agree that a custom GUI would be nice. For now I can just use the one I wrote for CsoundUnity. At least it provides buttons. We can build on that one. It would also be nice for both interfaces to have the same GUI. The GUI examples that ship with the SDK have some nice graphical displays but still feature only basic horizontal sliders. I think we can do better than that :wink:

I just took a look at this and I already see some issues. One is the way Unity sets up the custom GUIs. Each audio plugin can have a corresponding GUI, as long as it has the same name. From the Unity docs:

Once Unity has loaded the native plugin DLLs and registered the contained audio plugins, it will start looking for corresponding GUIs that match the names of the registered plugins.

This means we need a custom editor for every effect, which will involve needless copying and pasting. Argghh. But if we don’t compile the editor scripts and instead leave them as plain old .cs files, it should be easier.
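
For anyone following along, the skeleton Unity looks for is roughly this (a sketch modelled on the native audio plugin SDK examples; the parameter name and range are made up, and the Name string has to match whatever the dll registers):

```csharp
using UnityEditor;

// One of these classes is needed per effect; Unity matches it to the
// native plugin by the Name property.
public class PingPongDelayGUI : IAudioEffectPluginGUI
{
    public override string Name        { get { return "Ping Pong Delay"; } }
    public override string Description { get { return "Csound ping pong delay"; } }
    public override string Vendor      { get { return "Cabbage"; } }

    public override bool OnGUI(IAudioEffectPlugin plugin)
    {
        float delay;
        plugin.GetFloatParameter("Delay Time", out delay);
        delay = EditorGUILayout.Slider("Delay Time", delay, 0.0f, 2.0f);
        plugin.SetFloatParameter("Delay Time", delay);
        return false; // per the SDK examples, false suppresses Unity's default sliders
    }
}
```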

Another issue is that we can’t simply drag and drop Csound files to create new effects as we do with the CsoundUnity AudioSource interface. This is because Unity audio plugins have a fixed number of parameters that can’t change once the plugin is loaded. In the interface I created yesterday, the first thing the plugin does is read the corresponding .csd file to get the number of parameters it needs. This can’t change once the plugin has been loaded. This is very similar to how FMOD plugins work. In fact, I see now how much the Unity audio system borrows from FMOD. The audio APIs for Unity and FMOD are almost identical.

[edit] An idea just came to me. I can write a simple script that will automatically generate a corresponding GUI .cs file for each csd file in the Plugins folder.
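
Something along these lines might do it (purely an untested sketch; the folder paths are assumptions, and a real version would parse each csd for its caption and channels):

```csharp
using System.IO;
using UnityEditor;

public static class CsoundGuiGenerator
{
    [MenuItem("Cabbage/Generate Mixer GUIs")]
    public static void Generate()
    {
        const string pluginDir = "Assets/Plugins/x86_64"; // assumed location of the csd files
        foreach (string csd in Directory.GetFiles(pluginDir, "*.csd"))
        {
            string effect = Path.GetFileNameWithoutExtension(csd);
            string code = guiTemplate.Replace("{EFFECT}", effect);
            // assumes an Assets/Editor folder already exists
            File.WriteAllText(Path.Combine("Assets/Editor", effect + "GUI.cs"), code);
        }
        AssetDatabase.Refresh(); // let Unity compile the generated editor scripts
    }

    // Minimal stub; returning true from OnGUI keeps Unity's default sliders
    const string guiTemplate = @"
using UnityEditor;
public class {EFFECT}GUI : IAudioEffectPluginGUI
{
    public override string Name { get { return ""{EFFECT}""; } }
    public override string Description { get { return ""Generated GUI for {EFFECT}""; } }
    public override string Vendor { get { return ""Cabbage""; } }
    public override bool OnGUI(IAudioEffectPlugin plugin) { return true; }
}";
}
```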

And finally, one last but pretty major roadblock. If we create our own sliders we have no way of ‘exposing’ them, which means we can’t control them from our scripts. There are threads on the Unity forums about this. I think this pretty much kills any ideas of funky custom GUIs. I’ll keep digging. Maybe something will turn up.

Latest update…

I can’t seem to prevent Unity from creating its own GUI for each audio plugin parameter. This is balls. Right now I’m creating a custom GUI, but Unity then adds sliders for each parameter itself even though I’m creating my own. I’ve asked on the Unity forum how to prevent this, but I’m not sure if I’ll get an answer. Watch this space.

Thanks to some help on the Unity forum I’m now able to create a custom GUI OK. The problem remains, however, that I can’t expose the parameters as all the built-in Unity mixer plugins do. I could expose them through a script, but this is a lot of work and means these effects will need to be handled differently from Unity’s own audio effects. I’m starting to wonder whether all the work of creating a custom GUI is actually worth it.

Not being able to expose parameters seems like a serious drawback.

I’m wondering, is it really that bad to use Unity’s GUI for custom plug-ins? If you were to use the default GUI, would the parameters then be exposed to Unity programs?

Custom or native, the problem is we can’t expose them via scripts. I think this might change in the future. I hope it does.

hi Rory very late reply here - but, have you thought about using a method like the Kalimba plug-in does in Unity and PD? essentially it’s merely an OSC bridge between Unity and another program (ostensibly Cabbage), that would allow communication of parameters. it might be impractical seeing as how Csound is already working natively inside Unity, but i guess the question is how one would see themselves altering synth parameters.

if you’re building an executable, then that seems like a standard GUI issue and you don’t need to involve the Mixer - you create onscreen sliders or other controls, and then have those changes modify Csound channels in the script in a custom manner determined by whatever your app needs.

otherwise it seems that the only other method needed is something to modify the sound while you’re developing inside Unity. and at that point if you could somehow run the script in Cabbage and have Cabbage send OSC data to Unity to control parameters in Csound Unity for the synth used, maybe that would work? i may be fairly off base here. it may be that to create heavily modifiable synths and samplers rather than effects, you don’t use the Mixer plugin method.

sort of on a side front, and possibly off topic, i would ask for a feature to have the Csound script instance appear before the pan position of the Audio Source, so that a Csound script’s output can be manipulated via the normal Spatial Blend control in Unity or by positional VR spatial audio middleware ala Oculus Spatializer, Steam Audio, or GoogleVR Audio. as far as i’ve tested this doesn’t yet seem possible. i realize that Csound has a lot of positioning options but it would be helpful to be able to route and position sound in the same manner as existing sounds in Unity, rather than coordinating between two different panning systems.

I know, right. Each time I think of OSC as a possible solution I keep coming back to this exact same conclusion. Why bother with OSC when Csound can exist as a library directly within Unity?

This is for sure an issue. To be honest I’ve been expecting the Unity devs to update the SDK to make this possible for some time, but there doesn’t seem to be any movement on this. Indeed it seems that you and I are the last people to make any noise about this issue on the Unity threads :confounded:

True. I think there are a few here who are working in this way.

What is stopping you from doing that now? Maybe I’m not following, but usually I have both Unity and Cabbage open at the same time. I modify the Csound instruments as I wish in Cabbage, and when I next return to Unity it will use the most up-to-date .csd file. Each slider you add in Cabbage will appear as a native slider in Unity. I find the workflow to be pretty good, but certainly not as good as FMOD or Wwise. We could certainly explore OSC. I think it would be pretty trivial to implement, but doesn’t it become redundant when one builds a standalone of their game? Or doesn’t one have to then rewrite things without the OSC bridge for standalone versions?

This is already the case, is it not? I’ve certainly used the Unity spatialiser to move the output of my Csound instruments. I override the OnAudioFilterRead method, which manipulates the samples contained within an AudioSource clip object. You can then use the 3D spatialiser provided by Unity to move the sounds around. Or are you talking about other 3rd party plugins?
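
For reference, the mechanism is roughly the following (a sketch only; GetOutputSample is a placeholder name rather than the actual CsoundUnity method):

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class CsoundAudioFilter : MonoBehaviour
{
    CsoundUnity csound; // the CsoundUnity component on the same GameObject

    void Awake() { csound = GetComponent<CsoundUnity>(); }

    // Called on the audio thread with the AudioSource's interleaved sample buffer.
    // Samples written here still pass through the AudioSource's pan / spatial
    // blend stage, so Unity's 3D positioning applies to the Csound output.
    void OnAudioFilterRead(float[] data, int channels)
    {
        int frames = data.Length / channels;
        for (int frame = 0; frame < frames; frame++)
            for (int ch = 0; ch < channels; ch++)
                // placeholder for however the wrapper exposes Csound's output buffer
                data[frame * channels + ch] = (float)csound.GetOutputSample(frame, ch);
    }
}
```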

hi Rory! really quick observation - but it does appear that the 2D panner isn’t working for me. i had Spatial Blend set to 2D the last time i tested. i’ll check again when i have a moment. i did not try the Spatialize in 3D mode since my camera is basically static.

i’ll have more thoughts on the Mixer plugin/OSC later today.

That may have been because of the panning bug that was resolved (I hope!) in the last release. I will check again myself during the week.

Cool! very good to hear. let me know how that goes.

As far as my observations on the situation with OSC and Cabbage go, i don’t have immediate plans to edit synth parameters myself. but i was trying to think about how anybody including myself would consider using a synth with Csound Unity, and i can see two applications. the first is modification of parameters at runtime, which would mean onscreen controls you set up in the Unity GUI. since i want the user to have more control i’d opt for this method.

the other method is realtime control and triggering of a synth while in the editor, with changes saveable as a preset. in that case you’re communicating via some type of messaging and then saving your changes. since you can pretty much already do that in Cabbage by itself, it does seem to obviate the need for a more integrated and tight connection. about the only other issue i can see is processing of Unity’s audio inside a Csound patch, but this seems pretty possible to do as well. so maybe there’s really no big deal happening here.

I agree with Rory about avoiding the unnecessary complication of OSC. I find the integration of Csound inside of Unity to be optimal, and have found that my csound file changes are picked up immediately in Unity. I have used LibPD and OSC before and found them to be kind of a pain, so I like the tight integration of channel messages, not to mention the fact that I don’t believe there will be any bottleneck issues if large amounts of control data are sent to Csound (something I ran into in controlling SuperCollider from Unity in the past). Also, although I haven’t gotten into Cabbage much, I plan to use it for the realtime tweaking I need from the Unity editor.
In terms of efficiency and overhead I think using Unity’s spatializing will be more optimized, especially since you can combine audio streams via the Unity mixer and then spatialize them. But on that note, I have not been able to get either the 2D or 3D Unity spatializer to work with Csound audio, and am looking forward to the new UnityPackage release for OSX (hopefully this week, right?)

Also, it would be good to be able to snapshot settings; I’m not sure how that would work. The other thing on my wish list, since control is so dynamic, would be a way to specify min and max ranges for certain synth parameters, and maybe even the stepping resolution that should be sent to the Csound instrument. Not easy to do. I guess it means that not everything can be automated, and that pencil and paper is still useful (or typing notes, I suppose).

hi Thomas! it’s Scott Looney. just saw your name there and couldn’t resist. in terms of snapshotting settings are you talking about the user being able to do this at runtime in the app? because you can do it with Unity UI controls at runtime, and then have the UI update the CSD instruments via channel sends when a control is dirty. i do think you’d need to discover optimal ranges of effectiveness for filters and such and then perform the mapping either in Unity before the channel send or afterwards in the Cabbage/CSD script. but i’ve just started working with different mapping ranges in my sequencer project. i’ll let you know how it goes.
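
In CsoundUnity terms that might look something like this (a sketch; the channel name and cutoff range are made up, and setChannel stands in for whatever channel-send method the wrapper exposes):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class CutoffControl : MonoBehaviour
{
    public CsoundUnity csound; // the CsoundUnity component driving the synth
    public Slider slider;      // a standard 0..1 Unity UI slider

    // hypothetical useful range for a filter cutoff, mapped before the send
    const float minCutoff = 100f, maxCutoff = 8000f;

    void Start()
    {
        // fires only when the control is dirtied, not every frame
        slider.onValueChanged.AddListener(v =>
            csound.setChannel("cutoff", Mathf.Lerp(minCutoff, maxCutoff, v)));
    }
}
```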

Saving snapshots within Unity seems like the easiest way to do this. With regards to ranges, when you develop your instruments in Cabbage you set ranges for the various widgets. Check out the examples: all the synths and effects there have ranges mapped to each parameter, which should provide as much flexibility as the user needs.
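
For instance, Cabbage’s range() identifier takes optional skew and increment values, so stepping resolution can be baked into the widget itself (the values here are just an example):

```
; range(min, max, default, skew, increment)
rslider bounds(10, 10, 80, 80), channel("cutoff"), text("Cutoff"), range(100, 8000, 1000, 0.5, 1)
```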

Hi Scott. I’m thinking more about adjustments a sound artist might make while tweaking a level, someone who doesn’t know scripting. I think adding a snapshot button in Cabbage that sets parameters up in the Unity project, which then compiles into the app, could be useful. I know that Unity editor settings override default values in the Unity script, but are they then compiled into the app? That’s something I haven’t checked yet.

Sounds great, I’ll check it out.