This is probably not possible from what I can gather, but I just wanted to make sure before moving on. Basically, I am trying to use FMOD as middleware for audio in a project, because I have found that getting input from a microphone in Unity through an audio clip comes with significant latency (too much for VR interaction, at least). You can get recorded audio into FMOD using the Core API, but it seems it must play as a sound in FMOD a frame after it is recorded. I assume that because the Cabbage FMOD plugins are effectively instruments, it is impossible to route input sound into the plugins. Am I correct in this assumption?
This should be possible, but it would involve some rewriting of the CsoundFMOD plugin. Unfortunately, I don’t have a lot of time right now to look into it. Do you have any C/C++ programming experience? If so, I could try to help you through it.
I just took a look through the Core API guide and it states that realtime playback of input audio incurs a 50ms delay. I think you might have better luck trying to sort out the latency issues in Unity? Have you set the audio settings to use the smallest possible buffer?
Thanks for the quick reply-- my C++ coding is pretty bad, so it is probably best for me to just run a separate application for sound and connect it over an OSC server or something. I have tried getting input from Unity before and it has either been very laggy, or glitchy when I try to lower the buffer size.
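For what it's worth, if you do go the separate-application route, a basic OSC message is simple enough to build by hand with just the standard library-- no OSC dependency needed on the Unity side sender. A minimal sketch (the `/amp` address and port 9000 are made up for illustration):

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message carrying a single float32 argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated, then padded to a multiple of 4 bytes.
        return b + b"\x00" * (4 - len(b) % 4)
    # address pattern + type tag string (",f" = one float) + big-endian float32
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Fire-and-forget UDP send to a hypothetical local receiver.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/amp", 0.5), ("127.0.0.1", 9000))
```

OSC over UDP is connectionless, so the sender keeps working even if the audio app isn't up yet, which is convenient while iterating.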
Out of curiosity, have you ever tried an FMOD plugin that has realtime playback of input audio? Because it seems from the docs that it will have a 50ms latency. Surely this is way too much for VR too?
I have not. I somehow missed the mention of the 50ms figure-- that is better than what I was getting in Unity writing to an audio clip, but would certainly still feel weird with any reactive visuals, especially after any sort of FFT analysis.
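Just to put numbers on the 50ms figure: the delay contributed by one DSP buffer is buffer size divided by sample rate, so even a fairly large buffer is well under 50ms (the figures below assume 48kHz; adjust for your project's rate):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 48000) -> float:
    """Latency contributed by a single DSP buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

for n in (256, 512, 1024):
    print(f"{n:5d} samples -> {buffer_latency_ms(n):.2f} ms")
# 256 -> ~5.33 ms, 512 -> ~10.67 ms, 1024 -> ~21.33 ms per buffer
```

So 50ms end-to-end is roughly the cost of several buffers' worth of queuing, which is why it becomes noticeable once visuals react to the analysis.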
I think Wwise would be the best middleware for something like this, as it has experimental ASIO support. I was going to use FMOD because the Cabbage Csound implementation would allow me to easily create some timbre analysis tools in combination with audio effects.
A Wwise Csound package would be nice. Leave it with me, I currently have a student that could be up to the task. I’ll reach out and see if he has time.