hi Rory, very late reply here - but have you thought about using an approach like the Kalimba plug-in uses between Unity and PD? essentially it's just an OSC bridge between Unity and another program (in this case Cabbage) that allows communication of parameters. it might be impractical seeing as how Csound is already working natively inside Unity, but i guess the question is how one would see themselves altering synth parameters.
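just to make the bridge idea concrete: an OSC message is a tiny binary packet (null-padded address string, a type-tag string like ",f", then big-endian arguments) usually sent over UDP. here's a minimal stdlib-only Python sketch of building and sending one - the address "/synth/cutoff" and port 9000 are made-up examples, not anything Cabbage or Kalimba actually mandates:

```python
import socket
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode("ascii"))   # address pattern
            + osc_pad(b",f")                   # type tag: one float
            + struct.pack(">f", value))        # big-endian float32

# sending it is just a UDP datagram (address/port are illustrative):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/synth/cutoff", 0.75), ("127.0.0.1", 9000))
```

any two programs that agree on the address and port can talk this way, which is all the Kalimba-style bridge really is.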
if you're building an executable, then that seems like a standard GUI issue and you don't need to involve the Mixer - you create onscreen sliders or other controls, and have their changes modify Csound channels in the script in whatever custom manner your app needs.
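the pattern i mean is just "control callback writes a named channel" - something like this Python sketch (the names `set_channel`, `ChannelBus`, and the "cutoff" channel are purely illustrative stand-ins for Csound's software bus, not the actual CsoundUnity API):

```python
class ChannelBus:
    """Illustrative stand-in for Csound's named software bus (chnset/chnget)."""
    def __init__(self):
        self.channels = {}

    def set_channel(self, name: str, value: float):
        self.channels[name] = value

bus = ChannelBus()

def on_slider_changed(value: float):
    # called from the GUI toolkit's slider event; map the slider's
    # 0..1 range onto whatever parameter range the synth expects
    bus.set_channel("cutoff", 200.0 + value * 8000.0)

on_slider_changed(0.5)   # bus.channels["cutoff"] is now 4200.0
```

the instrument then reads the channel each k-cycle (chnget in Csound terms), so the GUI and the DSP stay decoupled.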
otherwise it seems that the only other thing needed is a way to modify the sound while you're developing inside Unity. and at that point, if you could somehow run the script in Cabbage and have Cabbage send OSC data to Unity to control the synth's parameters in Csound Unity, maybe that would work? i may be fairly off base here. it may be that to create heavily modifiable synths and samplers, rather than effects, you don't use the Mixer plugin method at all.
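the Unity side of that would need to decode incoming OSC packets and forward the values to Csound channels. a minimal decode sketch in stdlib Python, assuming single-float messages (a real receiver would also handle bundles, other type tags, and a UDP listen loop):

```python
import struct

def parse_osc_float(packet: bytes):
    """Parse a minimal OSC message with one float32 argument.

    Returns (address, value). Assumes a well-formed, single-float message.
    """
    # address pattern: ASCII, null-terminated, padded to a 4-byte boundary
    end = packet.index(b"\x00")
    address = packet[:end].decode("ascii")
    offset = (end + 4) & ~3            # skip the null + pad bytes

    if packet[offset:offset + 2] != b",f":
        raise ValueError("expected a single-float OSC message")
    offset = (offset + 6) & ~3         # skip type-tag string + its padding

    value = struct.unpack(">f", packet[offset:offset + 4])[0]
    return address, value
```

once decoded, the (address, value) pair maps straightforwardly onto a Csound channel name and a chnset-style write.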
sort of on a side note, and possibly off topic, i would ask for a feature to have the Csound script's output inserted before the pan stage of the Audio Source, so that it can be manipulated via the normal Spatial Blend control in Unity or by positional VR spatial audio middleware à la Oculus Spatializer, Steam Audio, or GoogleVR Audio. as far as i've tested, this doesn't yet seem possible. i realize that Csound has a lot of positioning options, but it would be helpful to be able to route and position sound in the same manner as existing sounds in Unity, rather than coordinating between two different panning systems.