
CsoundUnity examples

Hi @giovannibedetti, @Alex_Emile. I was thinking about the examples and I propose we start a list? Here are a few things I can work on as I have most of the code written already.

  1. A grid sequencer - I have one of these ready to go, but I thought it might be fun to try to shoot the grid to enable notes?
  2. A CsoundUnityChild demo - A simple roller-ball-type scene where one can move towards different sound sources that are all being fed a signal from the master CsoundUnity component. Or we could do something as an FPS? We would have to keep it simple, but as the character moves through the scene they could encounter various sounds?
  3. I would really like to add the ImageSliders example that already exists? The one where the player jumps onto harmonics. I would just rework it to follow the same visual style as all the other demos.

These would be pretty quick to implement. But if anyone would like to propose other or better demos, let’s chat about it.

p.s. I would like to create a simple VR drumming scene too. I don’t have any code for it, but I have been working with VR for the past month, implementing various sound toys for a game. The responsiveness to gesture is quite good. It kept making me want to write a set of drums for the user to play.

:+1: but with different interaction modes by platform (desktop, mobile, VR)

:+1: same as above, but it’s more suitable for mobile, using gyro/accelerometer

I don’t remember him jumping, wasn’t he shooting at the bars? One example was creating sine tones, the other was showing the spectrum, but yes, something like this is needed! :+1:

:+1: Me too! My aim with CsoundUnity is to create a VR instrument (maybe soon I’ll have a Quest), and anything towards this would be super cool!

I’d like to add an example with some visual effects that use Csound opcodes to analyse sound (since audio input has some latency we could showcase the Process AudioClip option, with a playlist that changes the clips at runtime); it would be useful for VJs!
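
Roughly what I have in mind on the Csound side, just as a minimal sketch (the "rms" and "centroid" channel names are placeholders for whatever the visuals end up reading):

instr AnalyseClip
  ; the clip audio arrives from Unity when Process Clip Audio is enabled
  aL, aR ins
  amix = (aL + aR) * 0.5

  ; a couple of simple features to drive the visuals
  krms  rms      amix             ; overall level
  kcent centroid amix, 1, 1024    ; spectral centroid in Hz

  ; publish them on named channels for the Unity side to read
  chnset krms,  "rms"
  chnset kcent, "centroid"

  ; pass the audio through unchanged
  outs aL, aR
endin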

Another thing I think we should do is create a UI for a playable synth, something with XY pads, maybe something like a Kaoss Pad, to show that CsoundUnity could be used to create multiplatform synths! (then we’ll see what we can do with input audio latency)
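
On the Csound side it could start out really simple, something like this sketch (assuming the XY pad just writes normalised 0-1 values to two channels, here called "x" and "y"):

instr XYPad
  ; read the normalised pad position from Unity
  kx chnget "x"
  ky chnget "y"

  ; X controls pitch, Y controls filter brightness
  kfreq = 100 + kx * 900            ; 100 Hz - 1 kHz
  kcut  = 200 + ky * 8000           ; 200 Hz - 8.2 kHz
  kfreq port kfreq, 0.05            ; smooth the control changes
  kcut  port kcut, 0.05

  asig vco2 0.3, kfreq              ; simple saw oscillator
  asig moogladder asig, kcut, 0.4   ; resonant low-pass
  outs asig, asig
endin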

Sounds good. I wonder if we should create submodules in git for the examples? That way people who write a new demo can make a PR and we can add it as a submodule. I haven’t used them too often in the past, but they seem like a good solution here, and will provide an easy way for users to contribute demos?

Never used submodules, but yes, it’s a good idea! We need to build a user base, and this seems the way to go to get people involved!

The idea is that you clone a sub-repo into your main repo. We would continue to work from our dev branch, but end users can check out and push to the samples folder contained within it, without messing up our development.
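
For reference, the basic commands would be something like this (the samples repo URL and the Samples folder name are just placeholders):

# inside the main CsoundUnity repo: add the samples repo as a submodule in a Samples folder
git submodule add <url-of-the-samples-repo> Samples

# after cloning the main repo, pull in the submodule content
git submodule update --init --recursive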

Ah, then I might not have published this one? You can jump but you don’t need to, you simply walk around on platforms turning harmonics up and down. It’s Iain McCurdy’s Synths->Misc->ImageSliders.csd that ships with Cabbage. It’s a lovely instrument. Check it out if you haven’t already.

Apropos, there are quite a few of Iain’s Fun and Games instruments that would make nice demos. The electricity one would be nice with Unity’s particle system.

Yes, in the examples I had you were shooting at the bars, but this is very good!

Also the Electricity one is good!!

And of course some example with wind, rain and thunder like what you showcased in a YouTube video!!
These could be very useful in a game!
I could create a wind generator for VR (with azimuth and elevation, with a spatialized feeling) using this UDO: http://csound.1045644.n5.nabble.com/Csnd-New-UDO-an-aeroacoustic-physical-model-td5756093.html
Also, showcasing UDOs could be a good thing?

Hmm, I just tried that and I get an error about the fractalnoise opcode. I’m on Windows…

[edit] I had Unity open in the background. It was robbing my OPCODES6DIR64 from me. Yeah, these sound good!

Here’s the code of the UDO I have; it works on macOS and on Windows (I developed my csd there):

/*
m_aerosound - Aeroacoustic semi-physical model

DESCRIPTION
Model the effect of wind blowing past a cylinder.

SYNTAX
aleft, aright m_aerosound kairspeed, kdiametre, klength, kelevation, \
		kazimuth, kdistance

INITIALIZATION

PERFORMANCE
kairspeed - speed of the wind in m/s (may not be 0!)
kdiametre - diametre of the cylinder in m (may not be 0!)
klength - length of the cylinder in metres
kelevation - elevation angle between listener and sound source (in Radians)
kazimuth - Azimuth angle between listener and sound source (in Radians)
kdistance - Distance between listener and sound source (in Metres)

CREDITS
Written in Csound by Jeanette C., original proposition, research and PureData
code by Rod Selfridge as part of his thesis at Queen Mary University of
London, many thanks to him for his untiring support and generosity!
*/

opcode m_aerosound, aa, kkkkkk
	kairspeed, kdiametre, klength, kelevation, kazimuth, kdistance xin

	; Protect against division by 0, airspeed must be greater than 0
	if kairspeed == 0 then
		kairspeed = 1
	endif

	kbasefreq = 0.2 * kairspeed / kdiametre ; base frequency of the Aeolian tone
	kM = kairspeed / 343 ; the Mach number
	; Calculate the Reynolds number necessary as parameter to the q-value
	kreynolds = 1.225 * kdiametre * kairspeed / 0.000148
	
	; Calculate the inverse of the Q-value percentage, based on value of Reynolds
	if kreynolds < 193260 then
		kqinv = 0.00004624 * kreynolds + 0.9797
	else
		kqinv = 0.000000000127 * (kreynolds^2) - 0.00008522 * kreynolds + 16.5
	endif
	kq = 1 / (kqinv * 0.01) ; invert the percentage
	kq port kq, 0.5 ; smooth the q-value, in case of passing the Reynolds
			; threshold in the if-statement above
	kbaseamp = 0.000000003 * (klength * (kairspeed^6) * (sin(kelevation)^2) \
		* (cos(kazimuth)^2)) / ((kdistance^2) * ((1 - kM * cos(kelevation)^4)))
			; amplitude of the base frequency, the first factor is a static
			; correction, so we don't overflow and distort
	kampfactor = log10(kbaseamp / 2 * 0.00001) ; linear/log conversion factor
			; present in all amplitude calculation for the harmonics
	; Amplitudes of the harmonics for drag and lift dipoles
	k2ndamp = 10^(0.1 * kampfactor)
	k3rdamp = 10^(0.6 * kampfactor)
	k4thamp = 10^(0.0125 * kampfactor)
	k5thamp = 10^(0.1 * kampfactor)
	anoise fractalnoise 1, 1 ; Brown noise as basic sound source

	; All noise bands for basic tone and harmonics
	abase butterbp anoise*kbaseamp, kbasefreq, kq
	a2nd butterbp anoise*k2ndamp, kbasefreq*2, kq
	a3rd butterbp anoise*k3rdamp, kbasefreq*3, kq
	a4th butterbp anoise*k4thamp, kbasefreq*4, kq
	a5th butterbp anoise*k5thamp, kbasefreq*5, kq

	aout = abase + a2nd + a3rd + a4th + a5th
	aout butterhp aout, kbasefreq
	
	; Convert polar coordinates (azimuth, elevation, distance) to Cartesian
	kx = sin(kazimuth) * kdistance
	ky = cos(kazimuth) * kdistance
	kz = sin(kelevation) * kdistance
	; Place the sound in 3d space with b-format output
	aw, ax, ay, az spat3d aout, kx, ky, kz, 1, 0, 3, 1, 2
	; Encode b-format to stereo
	aout_l, aout_r bformdec1 1, aw, ax, ay, az

	xout aout_l, aout_r
endop
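
And to give an idea of how it could be used from a CsoundUnity scene, a minimal instrument wrapping the UDO might look like this (the channel names and cylinder dimensions are just example values, driven from Unity however we like):

instr WindSource
  ; wind parameters driven from Unity via named channels
  kspeed chnget "windSpeed"      ; m/s, must stay above 0
  kelev  chnget "elevation"      ; radians
  kazim  chnget "azimuth"        ; radians
  kdist  chnget "distance"       ; metres

  kspeed limit kspeed, 0.5, 40   ; guard against 0 and extreme values
  kdist  limit kdist, 0.1, 100   ; keep the distance sensible (it divides the amplitude)

  ; a 2 cm cylinder, 1 m long, swept by the wind
  aL, aR m_aerosound kspeed, 0.02, 1, kelev, kazim, kdist
  outs aL, aR
endin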

Sure, I’d be happy to try my hand at a simple instrument that triggers notes/chords and has basic modulation. Maybe something with 4-5 stacked horizontal bars that go through a scale as you touch (or click) them, sort of like that Plink game but with more focus on parameter modulation.

What do you think?

Sounds great. I will try to set up some sub-modules in git when I get a chance and we can get the ball rolling :wink:

@giovannibedetti. I’ve a quick question regarding the samples. I’ve set up a CsoundUnityExample repo, but I’m not sure this approach will work. Am I correct in thinking we must modify the package.json each time we add a new sample? If that’s the case, a submodule might not be all that useful, as the parent repo will need updating each time a new submodule scene is added?

Yes, it’s true, but this way it works like a pull request: a demo can of course be added to the submodule, but it only becomes part of the package once it’s accepted!

I know, so I’m wondering if we should just let people make PRs to the main repo. We will review changes at any rate, and if they have done anything with the main code we’ll know.

I just pushed an example using some child sources now. You can use the arrow keys to move from left to right to hear the different tones. Do you hear clicks? I’m getting them here, and I’m using a pretty serious gaming PC to test this. I’m curious to know if you get them too…

Pulling now to test it

Yes, in fact it’s the same here!

I think we are making redundant calls to:

namedAudioChannelDataDict[chanName][i / numChannels]

It should only get called twice in this example, but it’s being called 4 times because we run through the loop for every channel. We don’t need to. I’m going to modify it now and see if that helps…


It shouldn’t be a problem, it’s just a reference, so there’s no computation involved other than finding the address of the first element; if it’s not recreated it should be ok!

I didn’t realise that? Anyhow, fwiw, I’ve rearranged it to make it a little clearer:

public void ProcessBlock(float[] samples, int numChannels)
{
    if (compiledOk)
    {
        // samples is an interleaved buffer: one frame per loop iteration
        for (int i = 0; i < samples.Length; i += numChannels, ksmpsIndex++)
        {
            for (uint channel = 0; channel < numChannels; channel++)
            {
                if (mute == true)
                    samples[i + channel] = 0.0f;
                else
                {
                    // run the next k-cycle once the current ksmps block has been consumed
                    if ((ksmpsIndex >= ksmps) && (ksmps > 0))
                    {
                        PerformKsmps();
                        ksmpsIndex = 0;

                        // cache the named audio channel buffers for this k-cycle
                        foreach (var chanName in availableAudioChannels)
                        {
                            if (!namedAudioChannelTempBufferDict.ContainsKey(chanName)) continue;
                            namedAudioChannelTempBufferDict[chanName] = GetAudioChannel(chanName);
                        }
                    }

                    if (processClipAudio)
                    {
                        SetInputSample((int)ksmpsIndex, (int)channel, samples[i + channel] * zerdbfs);
                    }

                    samples[i + channel] = (float)GetOutputSample((int)ksmpsIndex, (int)channel) / zerdbfs;
                }
            }

            // copy the cached channel values once per frame, not once per channel
            foreach (var chanName in availableAudioChannels)
            {
                if (!namedAudioChannelDataDict.ContainsKey(chanName) || !namedAudioChannelTempBufferDict.ContainsKey(chanName)) continue;
                namedAudioChannelDataDict[chanName][i / numChannels] = namedAudioChannelTempBufferDict[chanName][ksmpsIndex];
            }
        }
    }
}

Yes, this was wrong, it was unnecessarily updating the values twice.
Good catch!

Good to have a second pair of eyes on this. But I’m still getting some dropouts… at least that’s what they sound like…

I have no dropouts, tested in a macOS build! (It was using the system-installed Csound, but once this is released it should be the same.)
Could it be some syncing problem between the master and the child?
Have a look here:

https://docs.unity3d.com/ScriptReference/AudioSettings-dspTime.html