I’m not sure this is doable. All the suggestions I’ve tried have been real hacks, and I haven’t been able to get any of them to work. I guess for now we need to do all panning in Csound. It’s not ideal.
Just got back from a trip and saw the thread. At the moment I don’t have a problem panning in Csound, as long as I remember to use my own panning calls. Perhaps if we had a Csound panning wrapper function we could intercept panning calls to Unity and just send them to Csound? The problem is that every Csound file would need a chnget for the pan position. It seems to me that there would still be issues for developers who want to use Unity’s spatialization controls. Could Ambisonics be set up to replace Unity’s 3D spatialization, again using a wrapper function?
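To make the wrapper idea concrete, here’s a minimal sketch. The delegate stands in for whatever CsoundUnity call actually writes a control channel (the channel name "pan" and the [0, 1] mapping are my assumptions, not an agreed convention); the Csound side would read it back with a chnget on the same channel name.

```csharp
using System;

// Hypothetical wrapper: intercept pan values meant for Unity's AudioSource
// and forward them to a Csound control channel instead. The sendToCsound
// delegate stands in for the real CsoundUnity channel call; the Csound
// instrument would read the value back with a chnget on "pan".
public class CsoundPanner
{
    private readonly Action<string, double> sendToCsound;

    public CsoundPanner(Action<string, double> sendToCsound)
    {
        this.sendToCsound = sendToCsound;
    }

    // pan in [-1, 1], like Unity's stereo pan; clamped, then mapped to
    // [0, 1] which is a convenient range for a Csound panning opcode.
    public void SetPan(double pan)
    {
        pan = Math.Max(-1.0, Math.Min(1.0, pan));
        sendToCsound("pan", (pan + 1.0) * 0.5);
    }
}
```

Every Csound file would still need the matching chnget, which is the annoying part.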
Should I assume that the Audio Mixer and bussing system in Unity work with Csound-sourced sounds? And aside from panning and 3D positioning, do the other Unity effects work OK with Csound sources?
I know someone at Unity. He said the audio team is fairly small; maybe he could put some pressure on them to expose the info needed for Csound?
The Unity issue tracker marks this topic as a duplicate:
But I can’t find the duplicate, nor the solution. The issue is that Unity seems to do its panning before the call to OnAudioFilterRead(), so all panning info is overwritten by Csound. I did find some possible hacks online but couldn’t get them to work. The pitch and volume mechanisms work fine.
For sure, but the issue here is with panning. Unity’s 3D Sound Settings still work fine: attenuation/rolloff and min/max distance all work great for me.
Yes, they all work fine as far as I can tell. I just did some simple experiments there and didn’t have any issues.
i can also confirm on the Mixer front: routing to Mixer paths works fine. too bad about the routing from OnAudioFilterRead. the audio team at Unity is literally two people, Wade and Jan, in Copenhagen, though this might have changed. i know one or two folks not at Unity that might be able to shed some light on this as well, as they’ve both designed audio engines of sorts.
this thread seems related, and both posters apparently found a solution, though it’s not explained how:
and here’s the original that i think the other was a duplicate of. it was closed as working as designed, thus not necessary to fix: https://issuetracker.unity3d.com/issues/stereo-pan-on-audiosource-has-no-effect-on-audio-generated-with-onaudiofilterread
so that seems to indicate that 2D panning in Unity is not possible using a custom OnAudioFilterRead - yippee…
but with the 3D position sound settings - have you tested, or can you test, that a sound in 3D space is actually placed to the left or right of the listener? because if not, that means a pretty crucial part of the 3D sound settings is NOT working. volume and attenuation over distance are definitely not the whole picture for 3D audio - you’re gonna need azimuth at least. and since both of the Unity posts referenced panning, it was likely assumed this was 2D-related and didn’t cover 3D position.
oh and further down in that Enzien Audio thread are a couple of posts on how someone got things going with Unity’s vanilla spatialization. i haven’t read these but they may be worth checking:
Ok I’ve figured this out with vanilla Unity spatialization. Basically I followed the instructions in this thread:
or alternatively in this thread:
that second thread is pretty old - from around Unity 5.0 - so it may not be current info, but that member Gregzo (Gregorio Zanon) really knew his stuff on Unity audio. he had a middleware tool he sold on the Asset Store for a bit, until he got a full-time gig somewhere, stuck it up on GitHub, and basically abandoned it. that was at least two years ago.
Thanks Scott, I’ll take a look at these when I get a chance and see what I can implement. It will probably be Monday before I get it done though. I’ll keep you posted.
I’d actually read most of those threads before. The idea of creating a dummy AudioClip and using the panning info from it to pan the generated audio sounds like a good one, but I can’t get it to work. I must be overlooking something. I’ll give it another go and see if I can figure it out. In the meantime feel free to take a look yourself - we can surely come up with some kind of solution!
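For reference, here’s the core of the dummy-clip idea as I understand it - a sketch only. The Unity calls named in the comments (AudioClip.Create and SetData) are from memory and untested here; only the buffer-building is shown as code.

```csharp
// Sketch of the dummy-clip setup: build a buffer of constant 1.0 samples.
// In Unity you'd push this into a clip made with something like
//   AudioClip.Create("ones", sampleRate, channels, sampleRate, false)
// via clip.SetData(buffer, 0), loop it on the AudioSource and call Play().
// The point: by the time OnAudioFilterRead runs, each sample of `data`
// then holds exactly the gain (pan/volume/attenuation) Unity applied.
public static class DummyClip
{
    public static float[] CreateUnityGainBuffer(int lengthSamples, int channels)
    {
        var buffer = new float[lengthSamples * channels];
        for (int i = 0; i < buffer.Length; i++)
            buffer[i] = 1.0f;
        return buffer;
    }
}
```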
i’ve pointed Anastasios Brakis to this thread. he’s one of the guys i mentioned that might be able to help, as he’s made audio middleware for Unity and ran into this issue integrating Heavy/PD with his toolset Fabric. his view is that you don’t need a blank AudioClip, just a blank AudioSource; he used the mixer SDK to handle the routing. i was thinking it should be relatively easy to make a plugin with stereo channel control to get panning, if you route the AudioSource to the channel. i’m unfortunately of little use, as my coding skills are definitely not up to par and i have no idea how to handle spatializing. in the meantime i’ll work something out using a dedicated spatial plug-in. it’s not a great solution for games since you can’t mix the audio with any existing audio - which would mean all sound would have to come from Csound - but since i’m making an app i don’t have to worry about it.
I’ll try that, but I was hoping to avoid routing signals through a mixer. Perhaps I can chain one AudioSource directly to another? I’ve never tried that. I’ll take a look tomorrow.
one interesting thing he mentioned was that you could use the FMOD Designer application - not FMOD Studio - to view the mixing graph. the reason for FMOD Designer is that Unity’s audio system is built on the FMOD Ex engine, and Designer works with that version. so he said you could connect to the game via FMOD Designer - and i’m assuming without any integration step, since otherwise it would be connecting to a separate engine - but i don’t know the particulars of exactly how it’s done.
and i’m thinking that yes, it probably means chaining AudioSources together. i’ll ask my friend if that’s the method. he’s also going to ask the Unity audio team about these behaviors, since he tends to have their ear reasonably often. i mentioned the 3D attenuation working but not position, and he also thinks it’s strange behavior - as if Spread was turned up all the way.
But this works fine - it always has? I believe it works because attenuation is applied after the OAFR method, whereas panning is done before it.
but it makes no sense for azimuth to happen before OAFR while attenuation doesn’t. 2D panning beforehand i can sort of understand, but 3D azimuth/direction and attenuation should be closely associated with each other. i’m wondering if this worked at an earlier time and doesn’t now? as it is, the audio gets split between two paths via the spatial blend. what Unity should really do is move OAFR to before panning occurs. where it sits at the moment it’s useless to have only attenuation, because you could fake that yourself with Vector3.Distance(a, b) controlling volume. panning and surround are much harder to work around. and if you really wanted custom control of panning, you could do it yourself before OAFR and leave Unity at center pan.
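to make the “fake it yourself” point concrete, here’s a rough sketch of both halves - the distance rolloff you could already fake, and the equal-power pan you’d apply to your own samples if Unity is left at center pan. plain math only; in Unity, Vector3.Distance(source, listener) would supply `distance`, and the rolloff shape here is linear, which is one of Unity’s modes but an assumption about what you’d want.

```csharp
using System;

public static class DiyPanning
{
    // linear rolloff from listener distance alone - this is the part
    // that's trivial to fake without Unity's help: full volume inside
    // minDistance, silence beyond maxDistance, linear in between.
    public static double LinearRolloff(double distance, double minDistance, double maxDistance)
    {
        if (distance <= minDistance) return 1.0;
        if (distance >= maxDistance) return 0.0;
        return 1.0 - (distance - minDistance) / (maxDistance - minDistance);
    }

    // equal-power stereo pan gains, pan in [-1, 1]: multiply these onto
    // the left/right samples of each frame yourself, and leave Unity's
    // own pan at center so it doesn't fight you.
    public static (double left, double right) EqualPowerGains(double pan)
    {
        double angle = (pan + 1.0) * Math.PI / 4.0; // 0 .. pi/2
        return (Math.Cos(angle), Math.Sin(angle));
    }
}
```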
i’ll keep working on this with my friend. hopefully i can get a Unity code example where he’s doing that workaround.
I agree. I’m sure there’s reasons why it’s done like this, but with more and more people using the OAFR method to generate audio something needs to change.
well, unfortunately even if it is a bug, it will take months if not years to get fixed - if ever - so the workaround is more desirable. my friend Taz (Anastasios) did the same setup and got the same result as already reported, so it doesn’t seem to be user error. i asked Joe White of Enzien Audio (the Heavy toolset for Pure Data) and his response was pretty terse:
i think the issue is that spatialization occurs pre OAFR. so you need to multiply the input buffer in the first filter in the chain against your DSP code
have no idea if that’s even remotely helpful. what i’d love is a code example to show. i’ll keep hammering on this front.
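while waiting for a proper example, here’s my best guess at what joe means, as a hedged sketch: inside OnAudioFilterRead(float[] data, int channels), `data` arrives already panned/attenuated by Unity (from a unit-gain dummy source), so each sample is effectively the gain Unity wants, and you multiply your own generated samples into it. the helper name and the interleaved-buffer assumption are mine, not joe’s.

```csharp
using System;

public static class FilterChain
{
    // Multiply generated (e.g. Csound) samples into the buffer Unity hands
    // the first filter in the chain. Because `data` came from a constant
    // unit-gain source, data[i] is Unity's applied gain for that sample, so
    // the product re-applies Unity's spatialization to the generated audio.
    // Both buffers are assumed interleaved with the same channel count.
    public static void MultiplyIntoBuffer(float[] data, float[] generated)
    {
        int n = Math.Min(data.Length, generated.Length);
        for (int i = 0; i < n; i++)
            data[i] *= generated[i];
    }
}
```

if that’s really all it takes, the dummy-source setup is the only fiddly part.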
okay, one more Unity thread on this. probably no new information, but maybe a bit more detail on the workaround: