
Unity 3D music sequencer project - advice on options

greetings everyone! this is a somewhat involved topic about using Csound in a Unity-based, VR-oriented 3D music sequencer of sorts. briefly, imagine eight 8 x 8 matrix sequencers stacked up into a cube. triggering always moves from left to right: time is the horizontal axis and pitch is the vertical axis. the cube's data can then be rotated while the triggering and note assignments stay stable.

what i'm mainly interested in is two things. first, the ability to have a bank of instruments that i can assign to these layers. technically it uses MIDI notes, but it's not officially MIDI. i've been getting by with PD up until now, but it's been difficult because my plugin can only load a single PD patch - no abstractions, no samples, nothing. frustrating. so looking at Cabbage, i see the opportunity to create very high quality instruments that can be played by the sequencer. i would assume this part is fairly trivial to set up in CsoundUnity in terms of sending control and note data to the synth.

the second is more challenging from my perspective. i need a really tight timing grid that interacts with the environment and controls the sequence event triggering - preferably Squarepusher/Aphex Twin tight. C# is not great for this due to GC pauses, so i'd be curious what could work better. i discovered a thread on the JUCE forums where someone wanted to create a JUCE audio back end for a Unity app. it was interesting to me because there was talk of creating a Mixer plugin using the C++ plug-in SDK. as an aside, the newer version of the Mixer plug-in SDK does contain a MIDI-triggered subtractive synthesizer demo as well as some sequencer demos, so it's not all just audio effects at this point, and integrating Cabbage into that SDK might well be something to look at in the near future.

so there we go - virtual synths and samplers, check (it's mostly a messaging and format issue, i'm guessing). sample-accurate DSP-based timing, not so sure about. i did get decent-ish timing in standalone mode using OnAudioFilterRead, but building on Android the results were laughably chaotic (a demo of this ran on Gear VR). i am upgrading to a Vive quite soon, so i may run it on that device, though i do like the portability of the Gear VR. also, someone needs to amend the description of the Android latency problem to something that reads like 'horrifically inconsistent latency', to more accurately reflect my personal experience. anyway, i'm also not the most efficient programmer, so there could be other issues at play. i have heard of an Android low-latency audio library called Superpowered that might help as well, but i want to see what options Cabbage or the Unity Mixer SDK can offer in this regard. any advice appreciated!

Yeah, this should be quite trivial.

Can you provide more info on this? My first thought would be to use Csound to trigger the grid sequencer. Hang on, I have a few minutes. I'll see if I can put together a rough example.

That's great. But which Mixer API? The only API I can find is this one. I will certainly look at using a native API if I can.

I don't think there is much you can do about this until more Android devices support low latency. Any Android 'M' device does, but there are not yet many of them.

Almost there, but I need to take a class now. I’ll send something on in about an hour or so…

Here you go. Latency is good if you set your audio settings to ‘Best Latency’. Import the CsoundUnity package first. Then remove everything but the StreamingAssets. After that import this package. Sorry I don’t have time to go into any details. Please feel free to ask me any questions! Gotta dash.

CsoundSequencer.unitypackage (61.1 KB)

I didn’t have time to write comments, so here is a quick video outlining how I put it together.

[edit] once you have imported everything, open the .csd file, make some edits and save it. This will cause CsoundUnity to put it in the correct place when you return to Unity. Otherwise you’ll get an error about a Csound file not being found…

wow - excellent - this is a very good start, and thanks tremendously for the video as well. i'm very grateful that you took the time. the most important issue for me is relying on Csound for the musical timing and not Unity's own timing (it's OK if the graphics lag a bit behind the audio, but the audio triggering needs to be spot on). i'll try looking into this in a bit. my VR rig just arrived, so i'll be offline for a while setting it up and playing with it, but i'll try to get back to this before too long. thanks very much again!

No worries. It didn’t take long. If you have any links to the mixer API you referred to please let me know. I can’t seem to find anything like that in my searches.

no, you were correct - that link is the right one, but the native audio SDK runs as a plug-in to the Mixer asset. at least all the demos i saw did, anyway. check it out.

Ok, now I see there is a plugin labelled as a synthesiser. I'll take a look. To be honest, I'm not sure one would run into too many performance issues with CsoundUnity. There is so little being done in OnAudioFilterRead that I doubt it will cause a bottleneck. But I agree that it is worth exploring the use of the native API.

I just took a look over the new audio SDK, and I have to say that I'm not convinced I need to use it in lieu of the current approach. It still offloads everything to mixers, which I find a little counter-intuitive. If users report problems with the current implementation I'm happy to look at this again. But for now, if it's not broken there is probably no need to fix it.

the only thing i might suggest is implementing CsoundUnity in such a way as to make it agnostic about the spatializer system used. Unity's built-in spatial audio is not very good, so there are a lot of alternatives offering basic HRTF-based spatialization, like the Oculus Spatializer and the Google VR audio framework. in february of this year Valve bought the audio tool startup Impulsonic and has since released their Phonon framework as Steam Audio. beyond HRTF spatialization it also offers occlusion and baked-in DSP and sound propagation settings for static geometry. the spatialization thing wasn't what i was thinking of in terms of the music sequencer, but it definitely would be nice to know it's there and available for Csound should the need arise. many of these plugins have Unity Mixer hookups for setting their versions of Reverb Zones, etc., and the output engine is chosen in the Audio section of the Project Settings. so, i'm not sure how to route the output of CsoundUnity such that it inserts itself before the spatializer, but that's what i'd wish for.

I would say that it is already spatially agnostic, because Csound's output is delivered through an AudioSource in Unity. So you should already be able to avail of all these other tools you mentioned. Btw, Csound ships with a set of HRTF opcodes, and I know that there is research being done at the moment to provide newer and better HRTF algorithms that you won't find in any of the existing frameworks. If I'm not mistaken, some game developers have already been using Csound's HRTF opcodes in Unity. I recall reading about it some time back.
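For what it's worth, here's a rough sketch of using one of those opcodes. The instrument and the azimuth sweep are just for illustration, and the hrtf-44100-*.dat data files that ship with Csound need to match your sample rate:

instr HRTF_DEMO
aSrc vco2 0.3, 220 ; simple sawtooth test tone
kAz line -90, p3, 90 ; sweep the azimuth from hard left to hard right
aL, aR hrtfmove2 aSrc, kAz, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
outs aL, aR
endin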

hmm - i'd be quite interested in that. a knowledgeable friend of mine says that most of the folks doing new things for HRTF are basically creating interpolations between existing data sets. but if those folks are doing different stuff, i'd be interested to know more.

They are. As far as I know they're looking at cost-effective ways to scan the ear, and then generate HRTFs from a 3D model of the scanned ear. This would give each user a unique data set that matches his/her own ear.

interesting. sounds like the same thing Ossic is trying to do with their headphones, which cost a fair mint.

BTW, i managed to fire up the sequencer you sent me and, of course, i get no audio. here's what i did:

  • blank project
  • imported the CsoundUnity package - only imported the StreamingAssets folder contents
  • imported the Sequencer
  • went back and forth on getting it to recognize the .csd file. i think it's loading it, but i get no sound and no errors.

here’s the Dropbox link:

That zip extracted and ran for me without any issues. Click the main camera to access the CsoundUnity object. Then click 'Log Csound Output'. Then hit play and take a look at what the console tells you. If there is an issue with Csound it should show up there.

well, here's a screenshot of the Console, but i doubt it will help. looks like there are some initial files not found, but it thinks the audio is working fine:

any ideas?

well, i opened the previous working Csound demo i had downloaded, and not only does it work, it prints out the exact same messages as i had with the sequencer. so i was right in thinking that it certainly thinks everything is fine. i must be missing something. could i maybe take the sequencer .csd file and scene and export them as a package into the other working project?

I guess you can try. It does look like it's all set up correctly. When you click on any of the squares, do they turn green? Also, when you hit play, does the cube resize on each beat? Oh, also: if you had any audio software open while Unity was open, you may need to restart Unity. It doesn't seem to like sharing audio with other applications. I've noticed this quite a lot on Windows, not so much on OSX…

well - i'm an idiot. i never clicked on a block, so of course it was silent. works now. it took me a while to find the synth 'note' data, but now i get it: you're doing additive synthesis and each block is a partial.

but i was curious about the .csd file you made up. this section appears to be the instrument, correct?

instr SYNTH
a1 expon .1, p3, 0.001 ; exponential decay from 0.1 to 0.001 over the note duration (p3)
aOut oscili a1, p4*100 ; sine oscillator; p4 is scaled by 100 to give the frequency in Hz
outs aOut, aOut ; same signal to both stereo outputs
endin
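if i'm reading that right, p4 is just the frequency divided by 100, so a score line like this (my own test, not from your file) should play a 500 Hz partial for 0.3 seconds, with the expon line fading it out over that duration:

i "SYNTH" 0 0.3 5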

so say i created a variety of instruments (like a sampler, an FM synth, or a subtractive synth), and these were switchable at the sequencer level? i'm curious about the basic configuration. maybe let me share a screenshot:

that's the GUI for a layer. a layer is an NxN grid - currently 8 for the prototype. i'm thinking this script would effectively be a layer definition that sets a tempo relationship (Division) based on a master clock value (for the master clock, i think your MainController script does that nicely). then we have Instrument and Gate Time. Note Offset shouldn't be needed, as there's a new key assignment interface for picking which row makes which note (though that information would need to be passed as MIDI-ish note data). the Sequence Length parameter here is kinda interesting: the actual grid size is set to 8, but a layer's sequence can be set longer, effectively creating a pause in between. this can create some interesting phasing between simple melodic ideas (see my rough sketch below).
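in hand-wavy pseudo-Csound (so my syntax is likely off, and the Division/tempo values would really have to come from Unity somehow), i imagine a layer looking something like this:

instr LAYER
iSeqLen = 12 ; Sequence Length longer than the 8-step grid, so steps 8-11 become a silent pause
iDiv = 2 ; Division - hard-coded here, would come from the layer GUI
iTempo = 120 ; master clock value, also hard-coded for the sketch
kTrig metro (iTempo / 60) * iDiv ; this layer's clock, derived from the master clock
kStep init 0
if kTrig == 1 then
 if kStep < 8 then ; only the first 8 steps can hold notes
  event "i", "SYNTH", 0, 0.2, 4 + kStep ; gate time and pitch hard-coded for the sketch
 endif
 kStep = (kStep + 1) % iSeqLen ; wrapping past 8 gives the phasing pause
endif
endin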

so, assuming this is the beginning of a layer and i'll need 8 of these, or 8 independent instances of them - can Csound instruments easily be instanced? if so, all i have to do is define one layer with this behavior, and one or more instruments in a CSD instrument or orchestra file. that's a pretty far cry from how Pure Data works (or at least from what vanilla can accomplish with the current Unity libpd plugin options).

so how is the instr reference accomplished if the instruments live in a separate area or file and we want the option to switch them via a variable from Unity? just curious how you get parameters in and out. i can probably look at the main Csound project for clues there.

at any rate, i'm figuring i can set up one file as the layer and one file as all the instruments available for a layer - maybe one more to serve as the master clock to sync everything.

this is just step one, but i’m curious how it can work.

thanks for all your help!

scott

That GUI looks nice. I really have to spend some time on that area of Unity. It would indeed be trivial to swap between instruments. There are many ways to do what you want. One would be to avoid the need for layers completely and just set up a straight 8x8 sequencer in a single instrument; this is how I normally work with pattern matrices. Then, if I need to, I create a new instance of that grid controller. It's as simple as adding another

i"CONTROLLER" 0 3600

to your score section. I also tend to keep all my Csound wizardry in a single Csound file if elements need to be synced. For standard sound design I spread as many Csound files around the place as I wish, but for a project like this it is probably best to use a single orchestra.
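To sketch the instrument-switching side of it (this is not the code from the package; the channel name and the second instrument are hypothetical), the controller can read a channel set from Unity and fire whichever synth it names:

instr CONTROLLER
kInstr chnget "instrument" ; hypothetical channel: 1 = SYNTH, 2 = some FMSYNTH defined elsewhere
kTrig metro 4 ; fixed 4 Hz clock, just for the sketch
if kTrig == 1 then
 if kInstr == 2 then
  event "i", "FMSYNTH", 0, 0.2, 5
 else
  event "i", "SYNTH", 0, 0.2, 5
 endif
endif
endin

Every i"CONTROLLER" 0 3600 line in the score then runs its own independent instance of that logic.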

We use channels to send data in and out of Csound, to and from Unity - you can see them in the chnget lines above. There are some examples of this in that project too. Feel free to quiz me on any of the code.
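In sketch form, the two directions look like this. The channel names here are invented, and on the Unity side CsoundUnity exposes matching set/get channel methods (check the CsoundUnity source for the exact signatures):

instr CHANNEL_DEMO
kTempo chnget "tempo" ; read a value set from Unity, e.g. something like csound.setChannel("tempo", 120)
kTick metro kTempo / 60 ; one tick per beat
kStep init 0
kStep = kStep + kTick ; count beats as they pass
chnset kStep, "currentStep" ; write the count back for Unity to poll, e.g. via getChannel("currentStep")
endin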