
CsoundUnity Mixer Plugin

Cool! Very good to hear. Let me know how that goes.

As far as my observations on the situation with OSC and Cabbage go, I don't currently have immediate plans to edit synth parameters myself, but I was trying to think about how anybody, including myself, would consider using a synth with CsoundUnity. The two applications I can see are, first, modification of parameters at runtime, which means on-screen controls you set up in the Unity GUI. Since I want the user to have more control, I'd opt for this method.

The other method is real-time control and triggering of a synth while in the editor, with changes saveable as a preset. In that case you're communicating via some type of messaging and then saving your changes. Since you can pretty much already do that in Cabbage by itself, it does seem to obviate the need for a more integrated, tighter connection. About the only other issue I can see is processing Unity's audio inside a Csound patch, but that seems quite possible to do as well. So maybe there's really no big deal happening here.

I agree with Rory about avoiding the unnecessary complication of OSC. I find the integration of Csound inside Unity to be optimal, and have found that my Csound file changes are picked up immediately in Unity. I have used libpd and OSC before and found them to be a bit of a pain, so I like the tight integration of channel messages, not to mention that I don't believe there will be any bottleneck issues if large amounts of control data are sent to Csound (something I ran into when controlling SuperCollider from Unity in the past). Also, although I haven't gotten into Cabbage much, I plan to use it for the real-time tweaking I need from the Unity editor.
In terms of efficiency and overhead, I think using Unity's spatializer will be more efficient, especially since you can combine audio streams via the Unity mixer and then spatialize them. But on that note, I have not been able to get either the 2D or 3D Unity spatializer to work with Csound audio, and I am looking forward to the new UnityPackage release for OSX (hopefully this week, right?).

Also, it would be good to be able to snapshot settings; I'm not sure how that would work. The other thing on my wish list, since control is so dynamic, would be a way to specify min and max ranges for certain synth parameters, and maybe even the stepping resolution that should be sent to the Csound instrument. Not easy to do. I guess it means that not everything can be automated, and that pencil and paper is still useful (or typing notes, I suppose).

Hi Thomas! It's Scott Looney. I just saw your name there and couldn't resist. In terms of snapshotting settings, are you talking about the user being able to do this at runtime in the app? Because you can do it with Unity UI controls at runtime, and then have the UI update the CSD instruments via channel sends whenever a control is dirty. I do think you'd need to discover the optimal ranges of effectiveness for filters and such, and then perform the mapping either in Unity before the channel send or afterwards in the Cabbage/CSD script. I've just started working with different mapping ranges in my sequencer project, so I'll let you know how it goes.
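
For example, here is a minimal sketch of that Unity-side mapping, assuming the CsoundUnity component and its setChannel method as used elsewhere in this thread; the "cutoff" channel name and the 20 Hz to 20 kHz range are just placeholders:

using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: map a normalised (0-1) Unity UI slider onto a Csound channel,
// doing the range mapping on the Unity side before the channel send.
// Assumes a CsoundUnity component is assigned in the inspector and that the
// .csd reads a channel named "cutoff" with chnget -- both are placeholders.
public class ChannelSlider : MonoBehaviour
{
    public CsoundUnity csoundUnity;   // drag the CsoundUnity component in via the inspector
    public Slider slider;             // a standard Unity UI slider with a 0-1 range
    public string channel = "cutoff"; // placeholder channel name
    public float min = 20f;           // placeholder mapping range (filter cutoff in Hz)
    public float max = 20000f;

    void Start()
    {
        // only send when the control actually changes, rather than every frame
        slider.onValueChanged.AddListener(OnSliderChanged);
    }

    void OnSliderChanged(float normalised)
    {
        if (csoundUnity == null) return;
        float mapped = Mathf.Lerp(min, max, normalised);
        csoundUnity.setChannel(channel, mapped);
    }
}

For something like a filter cutoff you'd probably want an exponential rather than a linear mapping, but the channel-send pattern stays the same.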

Saving snapshots within Unity seems like the easiest way to do this. With regard to ranges, when you develop your instruments in Cabbage you set ranges for the various widgets. Check out the examples: all the synths and effects there have ranges mapped to each parameter, which should provide as much flexibility as the user needs.

Hi Scott. I'm thinking more about adjustments a sound artist might make while tweaking a level, someone who doesn't know scripting. I think adding a snapshot button in Cabbage that sets the parameters up in the Unity project, which then compiles into the app, could be useful. I know that Unity editor settings override the default values in the Unity script, but are they then compiled into the app? Something I haven't checked yet.

Sounds great, I’ll check it out.

I think the Csound score can be of use here. If you set your instruments up to work with p-fields, you get a quick way of triggering instruments in particular states. Take the following instrument:

instr 1
  ; the trigger and rate come straight from channels exposed to Unity
  if chnget:k("trigger") == 1 then
    if metro(chnget:k("speed")) == 1 then
      ; every p-field of the BellSound event is fed by its own channel
      event "i", "BellSound", 0, chnget:k("duration"), chnget:k("modIndex"), chnget:k("crossFade")
    endif
  endif
endin

It triggers the sound of a bell using score statements, and all the p-fields can be updated in real time. The p-fields in this case are all controlled by sliders that the artist can tweak. If the sound designer finds a sound they like, they could then save that sound, as a score statement, to a simple text file. It could then be recalled at the push of a button.
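
On the Unity side, the save/recall could be as simple as the sketch below: write the chosen score statement to a text file, then fire it back at Csound from a button. I'm assuming a sendScoreEvent(string) method on the CsoundUnity component here (check the exact method name in your CsoundUnity version), along with the "BellSound" instrument and p-field layout from the example above:

using System.IO;
using UnityEngine;

// Sketch only: save the current settings as a single score statement,
// then recall that statement at the push of a button.
// Assumes the CsoundUnity component exposes sendScoreEvent(string)
// (verify the method name in your CsoundUnity version) and the
// "BellSound" instrument from the example above.
public class ScorePresetButton : MonoBehaviour
{
    public CsoundUnity csoundUnity;
    string presetPath;

    void Awake()
    {
        // persistentDataPath is writable both in the editor and in builds
        presetPath = Path.Combine(Application.persistentDataPath, "bellPreset.txt");
    }

    // hook this to a "Save" UI button, passing in the current slider values
    public void SavePreset(float duration, float modIndex, float crossFade)
    {
        string scoreLine = string.Format("i \"BellSound\" 0 {0} {1} {2}", duration, modIndex, crossFade);
        File.WriteAllText(presetPath, scoreLine);
    }

    // hook this to a "Recall" UI button
    public void RecallPreset()
    {
        if (csoundUnity == null || !File.Exists(presetPath)) return;
        csoundUnity.sendScoreEvent(File.ReadAllText(presetPath));
    }
}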

Saving real-time slider data could also be implemented. I remember Matt Ingalls showing me an instrument that wrote k values to a file; they could then be recalled later. I actually wrote my own version a while after. I must see if I can find it. The nice thing is that these presets could be triggered from a button in the CsoundUnity inspector, and it could all be done on the Csound side without having to write any C# code.

I just found this Csound UDO I wrote many years ago that will write k values to a file and read them back later. This could easily be adapted for use here. The code below is a standard Csound file, but I could easily modify it for use with Cabbage.

<CsoundSynthesizer>
<CsOptions>
-odevaudio -b10 -idevaudio
</CsOptions>
<CsInstruments>
sr = 44100  
ksmps = 64
nchnls = 1

; dataRW: write a k-rate value to a text file, or read one back.
; Sname  - name of the file
; kvalue - value to write (ignored when reading)
; iRW    - 0 = write, 1 = read
opcode dataRW, k, Ski
  Sname, kvalue, iRW xin
  kout init 0
  if (iRW == 0) then
    dumpk kvalue, Sname, 8, 0   ; format 8 = ASCII floats, written every k-period
    kout = kvalue
  else
    kres readk Sname, 8, 0
    kout = kres
  endif
  xout kout
endop

instr 1
  ; kline ramps from 0 to p3 over the note duration, acting as a clock
  kline init 0
  kline line 0, p3, p3
  k1 init 0
  if (kline < 5) then
    ; first 5 seconds: write the ramp values to test2.txt
    k1 dataRW "test2.txt", kline, 0
    printks "writing: %f\n", 0, k1
  elseif (kline > 7 && kline < 13) then
    ; between seconds 7 and 13: read the stored values back
    k1 dataRW "test2.txt", 1, 1
    printks "reading: %f\n", 0, k1
  endif
  printk 1, 9999999   ; once-a-second heartbeat so you can see Csound is alive
endin

</CsInstruments> 
<CsScore>
f1 0 1024 10 1
i1 0 25   
</CsScore>
</CsoundSynthesizer>

Thanks! I’ll dig into this.

I think a text file with the info is the way to go, whether you send it to Csound directly or to Unity and then on to Csound. I prefer the latter approach if you have on-screen controls that should inform the user that something has changed, even if it's just a preset dot, but especially if it's an array of configurable controls. Then make any changes to your on-screen controls automatically update the Csound script. That way, reading the settings from the text file can update the controls, say when first loading, but afterwards they can be tweaked on-screen by the same method. Any state update would update Csound, and the state of those controls can then be serialized and stored via an on-screen Save button.
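
As a rough sketch of what I mean, assuming standard Unity UI sliders whose onValueChanged handlers already push their values on to Csound (the field and file names here are placeholders):

using System.IO;
using UnityEngine;
using UnityEngine.UI;

// Rough sketch of the "Unity first, then Csound" preset flow described above:
// the saved file only stores the UI state; applying it moves the on-screen
// sliders, and their onValueChanged handlers push the new values on to Csound.
// The field and file names are placeholders.
[System.Serializable]
public class SynthPreset
{
    public float cutoff;
    public float feedback;
    public float mix;
}

public class PresetManager : MonoBehaviour
{
    public Slider cutoffSlider;
    public Slider feedbackSlider;
    public Slider mixSlider;

    string presetPath;

    void Awake()
    {
        presetPath = Path.Combine(Application.persistentDataPath, "preset.json");
    }

    // hook to an on-screen Save button
    public void Save()
    {
        var preset = new SynthPreset
        {
            cutoff = cutoffSlider.value,
            feedback = feedbackSlider.value,
            mix = mixSlider.value
        };
        File.WriteAllText(presetPath, JsonUtility.ToJson(preset, true));
    }

    // call this on load (or from a Load button): setting slider.value fires
    // onValueChanged, so the same channel-send path updates Csound automatically
    public void Load()
    {
        if (!File.Exists(presetPath)) return;
        var preset = JsonUtility.FromJson<SynthPreset>(File.ReadAllText(presetPath));
        cutoffSlider.value = preset.cutoff;
        feedbackSlider.value = preset.feedback;
        mixSlider.value = preset.mix;
    }
}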

I'll try to prepare a Csound-driven way of doing it and we can see if it is worth pursuing.

I feel very late to the party here. I have been working on a VR Unity project with Csound and have been running into issues with the GUI sliders in Unity not communicating with the sliders in Cabbage. Just wondering if anyone here could lend a hand with the scripting issues.

Below are the channels set in Cabbage; however, I have not been able to script the native UI sliders in Unity to adjust the values in the game scene.

hslider channel("CTFF"), range(0,10000,5000), text("Cutoff_Frequency_Slider")
hslider channel("FBSL"), range(0,1,0.5), text("FeedBack_Level_Slider")
hslider channel("REVMIX"), range(0,1,0.5), text("Dry_Wet_Mix_Slider")
hslider channel("AMP1"), range(0.00001,1,0.5), text("Max_Amplitude")
hslider channel("AMP2"), range(0.00001,1,0.0001), text("Min_Amplitude")
hslider channel("DUR1"), range(1,1000,2), text("Max_Duration")
hslider channel("TRIGGER"), range(0,3,0), text("Modulation_Algorithm")

I tried using chnset to send the signals back to the slider controls in Unity, but this just causes the audio to stop completely. I'm pretty sure it's a C# issue rather than a Csound issue; however, I'm not familiar enough with C# to fix it.

Any help would be greatly appreciated

Hi @Adzo94. You're never too late; it's 24-hour party central here!

Are you just using the CsoundUnity package or the CsoundUnityMixer linked above? (Something I had completely forgotten about writing until right now!)

Thanks Rory,
I'm using the CsoundUnity package; I have the channels all declared and they appear in the inspector. Initially I was looking at using object xyz transform positions to change the values in the channels, but I've been having issues getting that to work, so I decided to try getting a slider interface working first before moving on to that.

Just calling

csoundUnity.setChannel("mySliderChannel", 4.0f);

should be enough to update the corresponding channel. If you are getting crashes it might be because the csoundUnity object is null. It’s awkward to offer support on these ones as your project is probably already quite large? Is there any way you could create a simplified project illustrating the problems? Alternatively, send me a link to your project, if it’s not too large in size.

I'm currently using CsoundUnity for a VR game myself, and I'm not having any issues, apart from the odd crash when I have an error in my Csound code. Note that I don't have any VR gear at the moment; all the students have them signed out :angry: So I'm stuck with emulators, which aren't a whole lot of fun!

You wouldn't have an example script that uses that, by any chance? My project is 2 GB at the moment, so it's a bit much to be sending on. Here are my Csound file and my current C# script with the score events. I've tried implementing setChannel, but it causes a couple of errors, so I'm worried I'm just not integrating it correctly.

Oscilator_Room_MkV.csd (6.9 KB)
Midi_Control_4.cs.zip (1.5 KB)

Without trying this, I'd say avoid the use of alwayson; it's a C++ plugin opcode, and I'm not sure how well the .NET wrapper for Csound works with it. A plain old score event will do, i.e.,

</CsInstruments>
<CsScore>
i"revrb" 0 z

And while I’m looking in the score, I also see that you have no score events keeping Csound open. You should add

f0 z

This keeps Csound awake and listening for events. Without it, Csound will just parse the instruments, but it won't actually start. Methinks this is likely the issue you're having. I looked through the C# file and I can't see anything that would cause a problem, apart from the fact that Csound may not have been running to begin with :laughing:

Csound was running fine in Unity; I am able to get score events sent from triggers with no problem. It's just editing parameters with the UI that has been causing issues. I've made the adjustments to the Csound script, but I'm still trying to figure out the sliders.

All of your instrument instances use the same channel name, so if you move the 'REVMIX' slider it will affect every instance the same way. Also note that only k-rate variables can be updated in real time, so changing the 'AMP1' slider after a note has started will have no effect.