
Returning to CsoundUnity and updating old project

but for sure, letting Csound handle the audio timing as well as the note generation seems the better approach - that's what i meant with the "two complete systems" comment. the visuals can tolerate a fair bit of timing slop without appreciably affecting the operation of the app, but loose audio timing is definitely a problem - i don't think it needs to be Squarepusher levels of precision, but it does need to be better than what i've tested.

when i have some time/opportunity i'll investigate the timing when just sending events rather than triggering sounds in Unity - that was just a test, and it's good to know from Giovanni that it's not optimized. but my thinking is that if i pick up the clock event, move an object, have it detect active cells in a layer at a specific position, and then send the message to Csound to play notes, that's three events and three chances for varied timing latency on the receiving end. i'm pretty sure the results won't be great, though they may possibly be better than what i've tried.

anyway, i probably won't be working on this as much in the coming weeks, since i'm trying to polish up my VR/XR development skills for work as a software engineer/developer, and unfortunately procedural audio/music generation is not really considered a marketable skill, so i'll have to focus on non-audio projects for my portfolio. i'll post updates when i have them. thanks a lot to you both for your help and advice!


Good luck with everything. Let us know if you return to it :+1:

okay folks! - i'm back and on the warpath with this project after a long delay, having given up on app development as a career since the layoffs in the game industry have created a glut of qualified applicants.

i’ve actually pitched this VR/MR sequencer idea to Meta because they had an opening for an accelerator grant for lifestyle apps. you might be interested in a brief demo of what i got going with the older script (4 instances of CsoundUnity running on macOS): https://www.dropbox.com/scl/fi/yyoel0ft8dwzn3xn80e5r/Constellation-4layers_rotating.mp4?rlkey=42vacqzrtk1cek6mkm4effwp4&dl=0

anyway, i'm ready to resume work on this, since i was never able to get a build running on the Quest 3 with more than 2 instances of CsoundUnity. i barely managed to get 4 instances running in that demo vid in Unity on a Mac M1 without dropouts every minute or so (what you're hearing has been edited to remove the glitches).

more importantly, i have received some outside funding from a freelance development gig, so i'm willing to compensate anyone who can help me get the audio portion of this going well enough for a decent vertical slice. i'm going to abandon my previous naive and impractical ideas and just go with the large sequence list that Rory proposed in his Feb 8 post. i'm thinking something between a lesson and assistance in wrangling Csound scripting.

my goal would be to run 16 layers of 16 x 16 sequences, but as for instrument instances i'm thinking 8 to 12 total, and polyphony can be limited if required. i do need to squeeze every bit of performance i can from this, so being able to shut off DSP for unused instruments would be required.
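
roughly what i have in mind - a minimal sketch, assuming a hypothetical "layer1_active" channel that Unity sets to 0 or 1:

instr Layer1
  kActive chnget "layer1_active"  ; hypothetical channel, set from Unity
  if kActive == 1 then            ; everything below is skipped while the layer is off
    aSig oscili 0.2, 220
    outs aSig, aSig
  endif
endin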

longer term plans include a limited number of control channels for each instrument, effects routing, and spatial positioning, probably with some external toolkit like Steam Audio. but for now the ability to run, say, 8 instruments would be enough for a demo.

anyway, let me know your thoughts - meanwhile i’ll take a closer look at Rory’s script and see what i can figure out for myself.

Good to hear from you @metaphysician! That demo looks great. I'm surprised you're hitting such serious performance issues. At the Csound conference this year the guys at Berklee showed some pretty complex CsoundUnity stuff that was running quite a lot of instances. And by quite a lot, I'm talking about dozens of instances all running side by side without any problems.

My advice for now would be to build a basic POC sequencer outside of Unity using Cabbage or CsoundQT. The workflow will be much quicker. It needn't be complex, but scalability should be one of the biggest design factors. Also, when it comes to posting stuff here, the simpler it is, the better the chance we'll be able to offer help. I'm looking forward to seeing where this goes :slight_smile:

thanks Rory! glad you liked it. and i appreciate the info on the performance situation. it seems like my existing script must be fairly inefficient, or maybe the macOS version is - i haven't updated it since 3.4.3 in April. i can say that starting up scenes and stopping them in the editor results in a pronounced audio spike, but i was planning to address that in a separate post.

anyway yeah - i can't get more than 4 instances of the script to run on the Mac - when i put in 8, it practically staggered to a complete halt. i did load 8 instruments in the script even though only one instrument is selected via a chnget, so maybe that uses up cycles - there are two sine instruments, two UDO subtractive synths from Iain, and four copies of the fluidsynth opcode, loading one of two sf2 files. possibly the polyphony affects it as well, although it mostly uses single notes and rarely exceeds 4 notes at a time.

if i could get better performance out of the existing script using multiple instances while working on a more robust single-instance setup in Cabbage, that would be a start. it would also be easier to use in the current setup, since the links are already configured for it. i'll post the current script i'm using and get your feedback on good ways to optimize it.

The demo looks great! It’s a very interesting concept. Keep working on it and keep us posted!
I have a feeling that the fluidsynth opcodes are CPU intensive, you could try comparing your instruments' CPU load to understand where the bottleneck is.
My attempts at a sequencer in Unity had big performance issues too. You can find the repo here:


There are 4 instruments plus a lead synth you can control with your hands.
The instruments I am using should not be that heavy, but the rate at which I read the steps is slowing things down, so I guess there must be a better way to do it.
For example I'm thinking of some sort of pooling: instead of triggering a new instrument instance every time it has to play, it could be better to create some always-playing instances of the needed instruments (i.e. define your maximum polyphony in advance) and enable their playback with envelopes on the volume.
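
Something like this maybe (just an untested sketch with made-up channel names - each pooled voice idles at zero gain until Unity raises its gate channel):

; start a fixed pool of always-on voices when the orchestra loads
schedule "PoolVoice", 0, -1, 1
schedule "PoolVoice", 0, -1, 2

instr PoolVoice
  iVoice =       p4
  SGate  sprintf "voice%d_gate", iVoice  ; hypothetical per-voice channels set from Unity
  SFreq  sprintf "voice%d_freq", iVoice
  kGate  chnget  SGate
  kFreq  chnget  SFreq
  kEnv   portk   kGate, 0.01             ; smooth the gate into a click-free volume envelope
  aSig   oscili  kEnv * 0.3, kFreq
  outs   aSig, aSig
endin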

Looking at your code Giovanni, I see some potential issues, the first being ksmps=1, but I guess there are reasons for this. The other issue is that passing arrays in and out of UDOs is not at all efficient in Csound6, as the array data is copied in and out. Csound7 now handles UDO arrays as references, making them much more efficient. Then there is all that string parsing stuff. That can be pretty expensive in Csound.

The instruments themselves don't look overly complex. If I had to go about debugging this, I would update the sequencers to call a dummy empty instrument. This would tell you how much of your processing resources are being used up in the sound generation. I might also try using the clock opcodes to measure time spent in various parts of the code. They are a little basic, but they might still give you some indication of where the bottlenecks are.
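
Something along these lines (an untested sketch - bracket the section you want to time with two score events):

instr StartClock
  clockon 1          ; start internal clock 1
endin

instr StopClock
  clockoff 1         ; stop internal clock 1
  itim readclock 1   ; accumulated time for clock 1 (check the manual for units)
  print itim
endin

; e.g. time two seconds of the sequencer running:
; i "StartClock" 0 0.1
; i "StopClock"  2 0.1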

ksmps=1 was the only option at that time, as CsoundUnity was still forcing it to 1 (so sr and kr were always the device sample rate).
The arrays are created once when the app starts and when you change the sequence (reading structures created on the Unity editor), and yes, that is super expensive as you noted.
I couldn’t think of a better way of passing this kind of info from Unity to Csound, but I guess that with Csound7 we will have better options.
What would be the right way of building a playback like a normal DAW is doing?
I mean, a collection of tracks with regions where each region has a resolution of say 1/128th and can have n notes playing.
Maybe creating MIDI sequences in advance? Is reading MIDI better than creating and reading arrays of steps? I had issues with MIDI sequences on Unity when I tried.
Btw I still haven't tried Csound7 on Unity as I'm overloaded with work, any help is appreciated for CsoundUnity v4! :pray:
(and I have lots of uncommitted and incomplete work on Unity Timelines)

MIDI sequences sound like an option, but reading them in Csound is a pain. In Csound6, function tables are probably more efficient because we can just pass the table number. In Csound7 we have custom structs. I'm not sure they will give any improved performance, but they're going to make the code easier to manage:

struct Region start:k, end:k
struct Track number:i, name:S, regions:Region[]

Then create an array of tracks:

instr Init
tracks:Track[] init 100 // max tracks
iCnt init 0
while iCnt < lenarray(tracks) do
    tracks[iCnt].number = -99     // set all unused tracks to -99
    tracks[iCnt].regions init 100 // set max regions
    iCnt += 1
od

Populate data for track 1 when need be:

//insert track 1
tracks[0].number = 1
tracks[0].name = "Track1"

Update region data for track 1 when needed:

tracks[0].regions[0].start = 0;
tracks[0].regions[0].end = 1024;

I haven't really tested these custom data types yet, so my code might contain some errors. But I'm excited by the possibilities of writing more readable code.


Yes it looks amazing, I can’t wait to try it :star_struck:

@giovannibedetti thanks! it’s come a fair distance visually since i last showed it.

okay - a bit of an update. i found out that i'm NOT using fluidsynth but the sfinstr opcode instead. so i got curious and commented out all of the sfinstr definitions to see what would happen and…

nothing. no saving of DSP in Unity whatsoever. then i deleted all of the instr definitions except the first two using oscili and again, no change at all - each layer takes up about 20% of DSP just running the sine instruments. it is technically still loading the sf2 files, and the UDO code and ftgen from Iain's synth are still there. so i'll go and delete those, and…

nothing changed - still at 20-ish% per layer.

i am obviously doing something wrong. disabling all layers gets me the expected near-zero DSP, but i should be seeing some kind of savings from deleting the instrument defs and i'm not. i have trouble believing that the sequencer array portion is using all the DSP while the instruments use none of it.

i’m uploading the script in hopes that someone can spot whatever is using up that much DSP. thanks!

Sequencer16-sf2player_osciliOnly.csd (3.7 KB)

tbh this is what is happening to me too, the reading of the arrays is very CPU hungry.
Maybe by using tables instead of arrays, with the tab opcode (and maybe a phasor), there could be a performance gain :thinking:
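
Something like this maybe (an untested sketch with a made-up 16-step note table, where 0 means a rest):

giNotes ftgen 0, 0, 16, -2, 60, 0, 63, 0, 67, 0, 0, 70, 60, 0, 0, 0, 72, 0, 67, 0

instr StepSeq
  kPhs  phasor 0.5                    ; one pass through all 16 steps every 2 seconds
  kStep = int(kPhs * 16)
  kNote tab kStep, giNotes
  if (changed(kStep) == 1) && (kNote > 0) then
    schedulek "Voice", 0, 0.2, kNote  ; fire a short note on each non-zero step
  endif
endin

instr Voice
  aEnv expseg 1, p3, 0.001
  aSig oscili 0.3 * aEnv, cpsmidinn(p4)
  outs aSig, aSig
endin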

yeah i noticed yours is using about 15% or so, which isn't much better than mine. i'm still curious why the arrays use up so much DSP while the instruments use hardly any - it's seriously just numbers with MIDI note values. honestly it makes you wonder if you could host the arrays somewhere else and just have Csound generate pulses, reading the values from outside on the fly. i'm especially perplexed given Rory's statement that he saw students at Berklee running a lot of instances of CsoundUnity. maybe they didn't use sequences?

also BTW i'd like to steal your line renderer audio-reactive spectrum code and use it in the app. looks great! let me know if that's okay.

sure! it’s already in some of the CsoundUnity samples, I just improved it a bit.
You will notice that it doesn't work if you set AudioSource as the ListenerSettings, still not sure why (in the past it was a Unity issue, but that was claimed as fixed).

Yes they weren’t using sequences as far as I know.
But I also saw (I was at the conference too) lots of CsoundUnity instances running together on the Quest 3, it’s true! :smiley:

yeah i really need to improve performance - 4 Layers currently means using up 80% DSP, and if the synths themselves are hardly using resources then that is a major issue. i really need to find a way to save DSP on the sequencing front. it seems like having Csound play the instruments live via MIDI isn't anywhere near as taxing as having it read values from arrays.

i’ll look into the table option and see if that will work better.

and thanks for the info on the line renderer thing. my intent is to show it vertically with greatly reduced height and use it as a playbar showing the signal for that Layer's output, so maybe it won't work, since it only wants to show the Listener mix. i'll scout around for other options.


Let us know how it goes with the table option.
Or if you're able to use MIDI, as I never tried it (well, I tried once using the Csound option -midifile, but it was crashing).

about the line renderer:

_source.GetSpectrumData(_samples, 0, FFTWindow.BlackmanHarris);

This just executes once, then all the spectrum data is 0. Maybe I should try to mess with the execution order?

okay - let me muse some more. i think one explanation for the high DSP usage is that i'm treating each of the rows in my 16x16 grid as a potential voice, so 16 voices can play at once. i could be satisfied with far fewer (like 4), but that brings up voice management issues that i have no idea how to handle here. in Max you'd make a single voice, use the [poly] object, and set the instance limit.
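
in Csound, i'm guessing something like this could work - an untested sketch using the active opcode to cap polyphony at 4, with a hypothetical "Voice" instrument:

instr Trigger
  kNote = 60                     ; wherever the note value comes from
  iVoice nstrnum "Voice"         ; resolve the named instrument to its number
  if active:k(iVoice) < 4 then   ; only start a new note below the voice cap
    schedulek "Voice", 0, 0.5, kNote
  endif                          ; (in practice this would be gated by the step trigger)
endin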

the simplest start would be a single-voice step sequencer, and i ran into one from Victor Lazzarini in a web example, playing from a table: https://waaw.csound.com/tabex.html

this seems like a good start, and i have an idea on how to handle multiple notes on a step by using row numbers for the notes and appending digits for each note, which then means parsing the number - but i did read that parsing is also taxing on DSP. the short gist: if you have a chord starting on the lowest row and you skip rows to form a chord, then the value for that step would look like 00020406, which would be broken up to form a chord on rows 0, 2, 4, and 6. but if parsing large values takes a hit on efficiency, it's probably not a good approach.

the other method is to try using a timed event from Csound going out to Unity, picking up one or more note values from an array for that step, and then triggering the instruments in Csound. it would basically be like a live performance via MIDI, but using an event from Csound to trigger the NoteOn value read from an array in Unity. this is a bit similar to what i was proposing earlier in the thread, except that i'm sending the note value back sooner rather than waiting for the physics to trigger a note from the playhead on the row and sending it back.
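
on the Csound side, something like this rough sketch - a metro publishing the current step index on a made-up channel that Unity would poll:

instr ClockOut
  kStep init  0
  kTick metro 8                 ; 8 steps per second
  if kTick == 1 then
    kStep = (kStep + 1) % 16
    chnset kStep, "step_index"  ; Unity reads this channel, looks up the active
  endif                         ; notes for the step, and sends back score events
endin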

it probably will not work, but if the majority of the overhead comes from reading arrays then something needs to give. creating a massive multi-array as Rory posted way back in the thread would very likely not save cycles either.

anyway lots to wrestle with!

Is this the csd that is causing the bottleneck? This doesn't look at all problematic. I'd remove the printing, as that's never CPU friendly. But apart from that I don't see any issues. There are no issues with arrays here. Arrays are perfectly fine, unless you plan to send them to UDOs. When I run this instrument in Cabbage it hardly makes a dent in my CPU meter. Is it really such a hog in Unity?

@giovannibedetti, I'm having some issues getting going with the latest release. First of all, I can't import it using the git tag? Is this a known issue? If I download the release zip and import it from disk, it looks like everything is fine, but I guess it's been so long since I used it that I'm a little lost. When I add a CsoundUnity gameObject to my scene it looks like this in the inspector. I don't see any of the usual fields?

image

It looks like you have Debug mode active for the inspector, try setting it back to Normal by clicking the 3 dots at the top right:
image

About importing, this should work:

https://github.com/rorywalsh/CsoundUnity.git

It will grab the latest release (3.5)

If you want to test a specific branch like 4.0.0, use this:

https://github.com/rorywalsh/CsoundUnity.git#release_4_0_0