
Unity 3D music sequencer project - advice on options

well actually we are talking about eight 8x8 sequencers - each 8x8 grid is a layer. these all run in tandem, synced to the main clock, and are arranged along the z axis like a cube. triggering will always go left to right within a grid, and instrumentation is basically set per layer. those controls are for one 8x8 layer.

then there's the rotation aspect, which i haven't discussed. imagine being able to rotate all the layers, arranged like a cube, as a cube: keep the layer assignment and sequence triggering exactly the same, but rotate the notes around, like in this movie:

(the URL is a 200MB+ movie - here's a download link that won't play inline: https://www.dropbox.com/s/5d3mbh3segi0kvv/constellation-prototype.mov?dl=0)

note this demo doesn't feature note assignment or note editing in a layer, but that's in progress. the takeaway is that i think the instruments would have to be separate from the note data and timing, unless you put everything together in one huge file that would be callable by layer. my brain is sort of shorting out on how you get eight 8-voice instrument definitions configurable from Unity with separate parameters. the biggest issue is that in PD i had a hierarchical message scheme and used [route] objects to filter messages down to their destinations, but i'm kind of lost on how the equivalent would work in Csound. i suspect the examples in the project will mostly show a couple of simple parameters being passed, whereas technically i'd need a system or hierarchy of parameters.

all my note and instrument assignment data goes into a json file. here's the layout for one 8x8 layer. this is just instrument and row assignment, not the grid of selected notes. keep in mind the notes keep shifting (rotating), so i wanted those kept separate.

"LayerObject0":
{"InstNumber":0, "SeqLength":21, "BeatDivision":2, "NoteList":[12,17,19,24,26,31,34,36], "PosIntensity":[12,17,19,24,26,31,34,36]}

NoteList is the pitch assigned to a horizontal row, from bottom to top. PosIntensity is velocity at any given position in the row. It’s also sort of meant to double as an option to send a value to control a parameter like a filter or ring mod effect.

so if you've got any ideas on how one could create a template CSD file to convey this info, that would be a great start. the sticking points for me are:

  • the instancing of voices (rows) for a single instrument
  • if you were to dump the whole thing (all 8 layers, i mean) into one file, can the instrument itself and the layer settings be instanced, or would they all need separate definitions?

lots of questions. i’m just beginning this, but i think you have a clearer idea of what i’m doing anyway.

That looks nice. I see now what you are trying to do and I think that it should be straightforward enough with Csound.

Agreed. This is how I normally approach this. This also gives you the chance of swapping out instruments without having to mess with the master controller.

I can't say I'm having the same problem imagining this, but I may be missing something? That's very possible, as there is quite a bit of complexity to what you are doing!

This is clear and easy to read. You could send this data to Csound from Unity, but I think it would be better to hold the data in Csound - it would just remove a level of abstraction. Is Unity reading the json file and then triggering things in Pd? Btw, in your demo do you use Unity to time all the events, or Pd?
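
Just to illustrate what I mean by holding it in Csound: the layer settings from your json could sit directly in the orchestra header as globals, something along these lines (a quick sketch using the values you posted, not tested project code):

; LayerObject0 expressed as Csound globals
giNoteList[]     fillarray 12, 17, 19, 24, 26, 31, 34, 36    ; NoteList
giPosIntensity[] fillarray 12, 17, 19, 24, 26, 31, 34, 36    ; PosIntensity
giInstNumber     init      0                                 ; InstNumber
giSeqLength      init      21                                ; SeqLength
giBeatDivision   init      2                                 ; BeatDivision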

Having several instrument instances playing at the same time is not an issue. Each can have unique parameters.

They can be instanced without any issues, and each instance can be given a unique number which can be retrieved from within the instrument, so you can tell which instance you are currently in.
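
To give you an idea (just a sketch, not code from any real project), the same named instrument can be started eight times from the score, each instance reading its own ID from p4:

instr Row
  iRow = p4                        ; unique ID for this instance, 1-8
  prints "instance for row %d is running\n", iRow
endin

; score: eight instances of the one definition
i "Row" 0 3600 1
i "Row" 0 3600 2
i "Row" 0 3600 3
; ...and so on up to p4 = 8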

I think my advice would be to strip everything back and start from the ground up. Create a simple 8x8 grid that is controllable via Unity. Once that’s working, create 8 instances of it and see how it goes. To be honest, if you can get one 8x8 layer working, you can create as many instances of them as you wish.


ok well - first i have to start with a row. in the sequencer example you used a bunch of harmonics as your notes - how do i replace them with, say, MIDI notes? is there a Cabbage patch (heh), maybe in Studio, that i could study for that?

It's very simple. I don't have that project here right now; I'll get something to you tomorrow. Check out this opcode. Instead of passing 1-8 to the synth through p4, just use a midi note instead. You could easily set up an array to hold a list of midi note numbers. I'll send something on tomorrow. If I have time I might even try to create the 8x8 matrix for you. It sounds like fun :wink:
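
In the meantime, off the top of my head it would be something like this (a rough sketch, so treat the details loosely): an array of midi note numbers indexed by the row number in p4, converted to a frequency with cpsmidinn.

giNotes[] fillarray 60, 62, 64, 65, 67, 69, 71, 72   ; one midi note per row (C major)
giSine    ftgen     0, 0, 4096, 10, 1                ; sine table

instr SYNTH
  iNote = giNotes[p4-1]            ; p4 carries the row number, 1-8
  iFreq cpsmidinn iNote            ; midi note number -> frequency in Hz
  aEnv  linen  0.3, 0.01, p3, 0.1
  aSig  poscil aEnv, iFreq, giSine
        outs   aSig, aSig
endin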

well i would certainly be most grateful! meantime i guess i should spend a bit of time studying Csound in more depth. i’m a complete noob.

best,
scott

Here's an 8x8 version. First, I made some changes to the project hierarchy. Each row of cubes resides in a RowN GameObject. Each of these objects has a script attached that, when loaded, assigns each of its child cubes to an array - so I don't have to bother dragging 64 gameObjects to my scripts! Each child cube uses more or less the same Cube controller script as in the original.

The main controller, on the camera, is still in charge of receiving and sending messages to Csound. When a cube is clicked, it sends its index within the row, and the row number to the main controller script. The main controller then sends the data to Csound.

Rather than create one instrument that handles the whole 8x8 grid, I simply modified the original instrument and created 8 instances of it. Each instance can retrieve its row ID (1-8) from p4, which is set in the score section. I also use p4 to set the correct channels in Csound.
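
By "using p4 to set the channels" I mean something along these lines (the channel names below are only illustrative; the ones in the project differ a little):

instr SequencerRow
  iRow    = p4                             ; row ID, 1-8, from the score
  SUpdate sprintf "row%d_update", iRow     ; per-row channel name, e.g. "row3_update"
  kIndex  chnget  SUpdate                  ; index of the last cube clicked in this row
          printk2 kIndex                   ; print whenever a new index arrives
endin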

Finally, I added an array of midi notes, a simple C major scale. Each row can play one note from this scale. But these could easily be swapped out for drum samples, or anything at all. Note that if I was to continue with this, I might rethink the design as this is more or less a live improvisation! Here’s a short demo of it in action.

Here are the project files, apart from the StreamingAssets folder… the same instructions apply as posted above.
UnitySequencer.unitypackage (68.5 KB)

excellent - a good start for sure, though like you said, a bit rough around the edges. for example, i want the note to turn off when it's unselected, but so far there's no logic for that. in the CSD file i found this:

;if user has enabled a note, update note amp array
if changed(kUpdateIndex) == 1 then
    kNotesAmps[kUpdateIndex] = kNotesAmps[kUpdateIndex]==1 ? 0 : 1
    printks "Updating row %d - index: %d - value %d", 0, p4, kUpdateIndex, kNotesAmps[kUpdateIndex]
endif

okay i’m betting that this line does the changing:
kNotesAmps[kUpdateIndex] = kNotesAmps[kUpdateIndex]==1 ? 0 : 1

is the final :1 the amp value, or is it ==1?

also how does Csound handle if and else and else if? is that if, elif, else, and endif?

there’s a slight bit of wonkiness to the playback bar you have - it triggers notes when scaled back down. that’s not much of an issue.

however, if i wanted to change a number in the kNoteValues array from Unity, would you say 'kNoteValues[index number] = (whatever number i want)'? that would be very important. i'm okay with Unity temporarily holding the data from the json file and shuttling it to Csound via GUI interaction, but i'm not sure if a generic k-rate parameter change is currently available. it's not obvious to me that it is.

scott

Ha, I never even tested that. I've attached an updated version. In some ways it's easier - I simply send the current state to Csound, along with the index.

"also how does Csound handle if and else and else if? is that if, elif, else, and endif?"

if statements use the then keyword along with an endif; elseif and else are also supported. Here's a good rundown of control structures in Csound.
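
The general shape looks like this (nothing project-specific, just the syntax):

instr BranchDemo
  kVal chnget "someValue"          ; any k-rate input will do
  if kVal > 1 then
    kState = 2
  elseif kVal == 1 then
    kState = 1
  else
    kState = 0
  endif
  printk2 kState                   ; print kState each time it changes
endin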

You would need to send the data to Csound over a channel. Generic control of k-rate parameters is possible, but again it needs to be done over channels. So

kNoteValues[index number] = (whatever number i want)

Would translate to (in Csound code):

kNoteValues[index number] = chnget:k("NewValue")

Then in your Unity script you would do something like this:

csoundUnity.setChannel("NewValue", value);
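
Putting the two together on the Csound side, inside the instrument where kNoteValues[] is declared, you'd end up with something like this (the channel names are just placeholders):

kNewIndex chnget "NewIndex"        ; which slot of the array to change
kNewValue chnget "NewValue"        ; the value to write into it
if changed(kNewIndex) == 1 || changed(kNewValue) == 1 then
  kNoteValues[kNewIndex] = kNewValue
endif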

UnitySequencer.unitypackage (68.6 KB)

okay, quick one here. there were still some bugs, mostly caused by off-by-one errors, which have been fixed (one in the Unity script, one in the CSD file), but i cannot for the life of me figure out how you're getting the rows to read kNoteValues in reverse order from the way it's listed in the file. all the unity scripts seem to set the row number from the number in the object name, which makes sense, but row 0 plays the highest note and row 7 the lowest. are array values read from right to left? i'm looking at the CSD script to see if there's an inversion operator there, but i don't see it. it has to be in the CSD file, as i've checked everywhere else. i could just reverse the array order manually in kNoteValues[], but i'd prefer not to.

scott

Update: never mind, found it. closer examination of line 27 of the CSD file was the key: [8-p4-0] changed to just [p4-1] (the -1 being for the off-by-one issue).

:+1:

hey, another question relevant to the sequencer as well as your replies in other topics: i'm just trying to determine whether instancing is a good idea here. for example, is it possible and desirable to keep each 8x8 layer sequencer as an instance of one CSD orchestra file? then you'd have eight instances of one file which should basically stay in sync. alternately i could put everything into one large file with eight individually addressable sequence players. this is somewhat similar to the PureData approach i took, except that PureData did not hold the sequence data and merely acted as the instrument.

i've also been looking at the existing synth instruments you have, to determine which ones would be good candidates for inclusion. i quite like Iain's GEN02 example and its streamlined controls - also because it comes with a sequencer and can change musical timing - but it's been tricky to modify it so that it works as a sort of dumb player. i've been trying to modify the code, but version 1's editor on the Mac makes the process a bit difficult. i've actually gotten the CsoundQt app running, which is a bit easier in certain ways and also has some synth examples, but they're not as interesting as yours and Iain's (the guy is kind of a genius, imo). anyway, mostly it's down to me to understand the nomenclature and syntax structure. i've been looking at tutorials but progress is slow, and of course i have other things to deal with in the meantime.

I am not sure putting each layer into its own .csd file is the best approach. I think you may end up with some timing issues depending on when each new block of samples is processed. This would leave you relying on Unity’s timing, which I wouldn’t feel all that comfortable with. Personally I usually work with one large .csd file. Everything is self contained and relatively easy to maintain. Plus, Unity only has to use one AudioSource to host it. But it may be worth trying it the other way. Who knows, it might be fine.
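
Roughly the kind of single-file layout I have in mind (instrument and channel names here are placeholders, not anything from your project):

; one .csd: a master clock plus the layer instruments, hosted by a single AudioSource
instr MasterClock                  ; kept running for the whole performance
  kTempo chnget "tempo"            ; clock rate sent from Unity
  kTick  metro  kTempo
  if kTick == 1 then
    kLayer = 0
    while kLayer < 8 do
      event "i", "Layer", 0, 0.2, kLayer   ; p4 identifies the layer
      kLayer += 1
    od
  endif
endin

instr Layer
  ; each layer would look up its own note/amp data based on p4
  prints "layer %d stepped\n", p4
endin

; score: i "MasterClock" 0 z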

It’s not easy to pull apart other people’s Csound code, especially when you are new to Csound, but stick at it. If you have a fairly good grasp of synthesis techniques then you are already half way there. @iainmccurdy is a genius alright. If you have any questions I’m sure he’s more than happy to answer them.

wow - i’ve been away for far far too long, but i’m finally ready to get back into this project. so i’m going to tag @iainmccurdy here for the next step in this evolution. i’m happy to break this out into another thread if needed but it’s all still relevant to the sequencer so i thought i would put it here.

so Iain, i've been looking at your wonderful examples and there's so much to learn from them. in particular for this project i've been looking at your GEN02 synth/sequencer as a potential synth voice for the sequencer script that Rory was so kind to whip up for me. back in April i did analyze it somewhat and made an attempt to tease the synth voice out from the UI and other unneeded things, and i'm finding it a bit challenging. the first obstacle is just getting the synth voice itself. the second is to potentially replace the notes in your ftables with note assignments that i would push from Unity based on the interface assignment (if you read through the thread, i'm making a 3D sequencer of sorts, and there's a short video clip of an earlier prototype using PD posted above). at any rate, Rory got me a music sequencer patch using a simple sine wave synth - i'd like to replace that with your GEN02 synth voice. a dropbox link to the Unity test project - running with two 8x8 layers - is here:

any advice you can give is greatly appreciated! i'm also including Rory's CSD file here: Sequencer.csd (1.6 KB)

best,
scott

Hi Scott, I’ll try and take a proper look at this later, but apologies if there’s a delay as I am currently holidaying in the Alps. I will also post a csd with just the synth from this example, without all the tables and sequencer gubbins.
bye, Iain

no worries, thanks a bunch Iain! enjoy your holiday!

just a bump here for @iainmccurdy. any updates? wasn’t sure if you were back yet.

Hi Scott,

Here you go:
GEN02ExampleSynth.csd (5.9 KB)

Iain

okay i’ve been messing around and i’m starting to get the hang of things. got some issues and a couple of clarification questions.

  1. i'd like to be able to start and stop the sequencer, as well as play very slowly. it seems this may be possible by changing tempo in the game engine to a float instead of an int? stopping can seemingly be done by setting tempo to 0.
  2. i'd like to reset the sequencer to the beginning with a message or routine call. not sure how to do that at the moment.
  3. i started working on making the sequencer multitimbral. i created another layer with a unique name and altered the script so that on mouse down it knows which layer the cell belongs to, and therefore which instrument it should play. but right now the layers aren't working independently, and i suspect it's because the cube literally only has the enabled state to refer to. imagine two layers side by side: click a cell on the left layer and both layers play with instrument 0; click a cell on the right layer and the timbre switches to instrument 1, again for both layers. i need cells on the left layer to play one timbre and cells on the right layer to play the other, but right now it's an either/or situation. i believe the switching in Unity itself is being handled correctly, though.
  4. i'm definitely going to want panning for this. probably plain stereo to begin with, but ideally i'd want separate panning per instrument instance assigned to a layer. these sound fonts don't seem to have panning of their own, so i'm assuming that would mean separate sf2 instruments loaded and connected to a pan opcode, which i would use in stereo rather than quad.

that covers the immediate needs, it seems. #3 is the most important for me at the moment. help appreciated! i'm including the edited CSD file here: Sequencer-sf2player.csd (2.2 KB)

and just in case you need to check Unity files here’s the updated scripts:
UnityScripts.zip (3.1 KB)

As an int it can only ever be as slow as 1Hz. Using a float will give you lower frequencies. And yes, setting tempo to 0 is a simple way of stopping playback.
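
In Csound terms it comes down to whatever arrives on the tempo channel - a two-line sketch, assuming the clock is a metro fed from a "tempo" channel (which may not match the .csd exactly):

kTempo chnget "tempo"              ; a float, so 0.5 gives one step every two seconds
kTick  metro  kTempo               ; with kTempo at 0 no further ticks are produced, so playback stops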

Something like this will set the beat back to 0, thus resetting the sequencer?

kBeatReset chnget "resetBeat"	
if changed(kBeatReset)==1 then
	kBeat = 0
endif

you just need to send a random number to the “resetBeat” channel from your Unity script, ala

csound.setChannel("resetBeat", Random.Range(-10.0f, 10.0f));

I’m not sure I follow. You want two layers to be playing simultaneously at certain times? Or? If so then wouldn’t something like this work:

if kNotesAmps[kBeat] == 1 && kInst == 0 then
    event "i", "SYNTH", 0, 3, kNoteValues[p4-1]
    event "i", "sf2inst", 0, 3, kNoteValues[p4-1], 50
elseif kNotesAmps[kBeat] == 1 && kInst == 1 then
    event "i", "sf2inst", 0, 3, kNoteValues[p4-1], 50
endif

Panning is simple - just modify your outs line, e.g.

outs a1, a2*0 ;pans hard left

You could pass the pan position as a pfield when you call the event opcode. Hope this helps. In the meantime I will try to read over point #3 again and see if I can make more sense of it :pray:
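
For reference, here's a minimal sketch of the pfield approach - the instrument name and the choice of p5 for the pan position are assumptions, not taken from your .csd:

giSine ftgen 0, 0, 4096, 10, 1

instr PanDemo
  iNote = p4                       ; midi note number
  iPan  = p5                       ; pan position, 0 = hard left, 1 = hard right
  iFreq cpsmidinn iNote
  aSig  poscil 0.2, iFreq, giSine
  aL, aR pan2 aSig, iPan           ; equal-power pan across the stereo pair
         outs aL, aR
endin

; triggered from the sequencer, pan as the final pfield:
; event "i", "PanDemo", 0, 3, 60, 0.25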