
Panning from Csound, relationship to AudioSource in Unity

I’ll just create a new project for now. Should be simpler than dipping into this one.

[edit] how do I enable the file watcher again? Shouldn’t this be enabled by default? When I make a change to a .csd on disk, that change should be reflected the next time we build?

FILEWATCHER_ON in the PlayerSettings Scripting Define Symbols :wink:
Yes, I think it could be time to make it enabled by default; no issues so far.
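For reference, a scripting define symbol gates code at compile time: the watcher code is only compiled when the symbol appears in Player Settings > Other Settings > Scripting Define Symbols. A hypothetical sketch of how such a flag might be consumed (the class and method names here are illustrative, not CsoundUnity’s actual implementation):

```csharp
// Only compiled when FILEWATCHER_ON is listed in the
// Scripting Define Symbols of the active build target.
#if FILEWATCHER_ON
using System.IO;

public static class CsdFileWatcherSketch
{
    // Watch a .csd file on disk and invoke a callback when it changes.
    public static FileSystemWatcher Watch(string csdPath, System.Action onChanged)
    {
        var watcher = new FileSystemWatcher(
            Path.GetDirectoryName(csdPath), Path.GetFileName(csdPath));
        watcher.Changed += (sender, args) => onChanged();
        watcher.EnableRaisingEvents = true;
        return watcher;
    }
}
#endif
```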

Oh I think I have not pushed a change on how we initialise the children… let me push it
Ok now the above example should work, and also the filewatcher is enabled by default.
You will also see some commented attempts I made to find a workaround for spatialization not happening.

Right. I’m up and running, but I’ll need to leave this till tomorrow as it’s getting late here :+1:

Sure there’s no hurry! But I always appreciate your quick replies :wink:

@giovannibedetti I’ve just looked at this now. I’ve set up the simple roll-a-ball tutorial from Unity and placed a CsoundUnity component on one of the rotating squares. The main listener is on the ball. I don’t hear any panning when I move the ball around, but I do hear distance attenuation. Is this the same problem you are having?

Something else I noticed is that the pan slider doesn’t work either.

This seems to be the approach taken by others. I just tried it here and, like you, I can’t seem to get it to work. What I find strange is that distance attenuation works out of the box, but not panning.

Yes, exactly: the volume is updated correctly, but panning depending on the angle between the source and the listener is not working.
Apparently the OnAudioFilterRead callback happens at a later stage than the spatialization calculation, so when we write the output samples there we also overwrite those calculations.
So in the example I created, what you hear is the volume staying constant and the resulting sound panned to the middle, while the sound moves circularly around the user, one AudioSource after the other, as you can see in the meters.
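The overwrite-vs-multiply distinction can be sketched without the engine. This is a minimal illustration of the idea, not Unity’s actual pipeline; the gain values and buffer sizes are made up:

```csharp
using System;

static class SpatializationSketch
{
    // Unity's spatializer scales the clip's samples by per-channel pan gains
    // before OnAudioFilterRead runs. Overwriting the buffer discards those
    // gains; multiplying preserves them.
    public static float[] Overwrite(float[] spatialized, float[] synth)
    {
        var outBuf = (float[])spatialized.Clone();
        for (int i = 0; i < outBuf.Length; i++) outBuf[i] = synth[i]; // gains lost
        return outBuf;
    }

    public static float[] Multiply(float[] spatialized, float[] synth)
    {
        var outBuf = (float[])spatialized.Clone();
        for (int i = 0; i < outBuf.Length; i++) outBuf[i] *= synth[i]; // gains kept
        return outBuf;
    }

    static void Main()
    {
        // Interleaved stereo, dummy clip of 1s, source hard-panned left:
        // left gain 1, right gain 0 applied by the (simulated) spatializer.
        float[] spatialized = { 1f, 0f, 1f, 0f };
        float[] synth = { 0.5f, 0.5f, 0.5f, 0.5f }; // pretend Csound output

        var a = Overwrite(spatialized, synth);
        var b = Multiply(spatialized, synth);
        Console.WriteLine($"overwrite L={a[0]} R={a[1]}"); // pan lost: both 0.5
        Console.WriteLine($"multiply  L={b[0]} R={b[1]}"); // pan kept: 0.5 and 0
    }
}
```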

To be sure, I tried creating an example with plain AudioSources with looping clips, and the spatialization is working as expected.

Yes, I did the same :stuck_out_tongue_winking_eye: Have you tried to monitor the amplitude of the samples in the dummy clip? Also, I wasn’t clear: should the dummy clip be stereo or mono? Do mono sources get expanded to stereo for spatialisation?

Yes, I tried to get the samples of the dummy clip in Update (this should be avoided since it’s pretty slow) and it always returns 1, so the calculation is not applied to the clip itself but presumably somewhere on the AudioSource.
I tried both mono and stereo clips, but it didn’t make any difference.
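For anyone wanting to reproduce that check, the clip’s raw samples can be read with AudioClip.GetData. A sketch (Unity-only, component name is illustrative); note it reads the clip’s *stored* data, which is why it always returns 1 — the pan gain is applied downstream on the AudioSource/mixer path, not written back into the clip:

```csharp
using UnityEngine;

public class DummyClipProbe : MonoBehaviour
{
    public AudioSource audioSource;
    private readonly float[] probe = new float[32];

    void Update()
    {
        // Copies the clip's stored samples into 'probe'; for the dummy
        // clip these stay at 1 even while the panning changes.
        // (Doing this every frame is slow; avoid it in production code.)
        if (audioSource != null && audioSource.clip != null)
        {
            audioSource.clip.GetData(probe, 0);
            Debug.Log($"first stored sample: {probe[0]}");
        }
    }
}
```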
I think I will try again with the AudioMixer approach (though ideally this shouldn’t be required), but the issue I have is that I get duplicated output as a result.

Is there something we might be doing to remove this info? I wonder, if you create a basic AudioSource component with its own OnAudioFilterRead() method and try the same dummy clip approach, do you see the sample values being altered in real time? If not, then we may not be doing this correctly.

I made this simple test with an AudioSource with a looping AudioClip, and panning is working! :face_with_raised_eyebrow:

// Update is called once per frame
void Update()
{
    this.transform.position = new Vector3(Mathf.Sin(Time.time) * 100, 0, 0);
}

private void OnAudioFilterRead(float[] data, int channels)
{
    for (int i = 0; i < data.Length; i += channels)
    {
        for (int channel = 0; channel < channels; channel++)
        {
            // Intentional pass-through: leave the spatialized samples untouched.
            data[i + channel] = data[i + channel];
        }
    }
}

So like this the spatialization data is maintained (of course, it is a copy! :upside_down_face:)
So we need to take this into account: the samples we receive in the OAFR callback have to be multiplied into the output, but without increasing the overall volume… :unamused:

Ok it works!!! Pushing so that you can have a look :wink:

For those interested:

  1. Create a MONO dummy AudioClip for each AudioSource you want to spatialize, with all samples set to 1, like this:

     if (audioSource.clip == null)
     {
         var ac = AudioClip.Create("DummyClip", 32, 1, AudioSettings.outputSampleRate, false);
         var data = new float[32];
         for (var i = 0; i < data.Length; i++)
         {
             data[i] = 1;
         }
         ac.SetData(data, 0);
    
         audioSource.clip = ac;
         audioSource.loop = true;
         audioSource.Play();
     }
    
  2. Inside OnAudioFilterRead, multiply the samples received from the callback with your synthesized output, like this:

     void OnAudioFilterRead(float[] samples, int channels)
     {
         for (int i = 0; i < samples.Length; i += channels)
         {
             for (int channel = 0; channel < channels; channel++)
             {
                 samples[i + channel] = samples[i + channel] * yourSynthesizedOutput[i + channel];
             }
         }
     }
    

Finally! :partying_face:

For each CsoundUnity AudioSource you have?

Is this done in a new AudioSource component script that you add to your CsoundUnity game object?

I just want to be clear here as I probably won’t get a chance to check this for a while. I’m not using Unity at all these days :frowning:

Yes, those are general instructions for others with the same issue!
I added the change in CsoundUnity and CsoundUnityChild; it’s already pushed to master!
So there’s nothing to do, it will work out of the box :gift:
Still, I need to test what happens when there is a valid AudioClip in the AudioSource (we send its data to Csound as an input); I’ll try this tomorrow!
