but letting Csound handle the audio timing as well as the note generation definitely seems like the better approach - that’s what i meant by the two complete systems comment. the visuals can tolerate a fair bit of timing slop without appreciably affecting the operation of the app, but loose audio timing is definitely a problem - i don’t think it needs Squarepusher levels of precision, but it does need to be better than what i’ve tested.
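for anyone following along, here’s a minimal sketch of what i mean by that split, assuming the CsoundUnity package with its GetChannel method - the "beatCount" channel name is hypothetical, and the idea is that the .csd bumps it from its own metro-driven instrument while Unity just mirrors it visually:

```csharp
using UnityEngine;

// Sketch: Csound owns the clock and the note generation; Unity only mirrors
// the beat visually. Assumes the CsoundUnity package; "beatCount" is a
// hypothetical channel the .csd would write to on each internal clock tick.
public class CsoundClockMirror : MonoBehaviour
{
    public CsoundUnity csound;   // assigned in the Inspector
    private double lastBeat = -1;

    void Update()
    {
        // Poll the beat counter Csound writes each tick. A frame or two of
        // slop here only delays the visuals - the audio itself is scheduled
        // by Csound on its own thread, independent of Unity's frame rate.
        double beat = csound.GetChannel("beatCount");
        if (beat != lastBeat)
        {
            lastBeat = beat;
            transform.Rotate(0f, 15f, 0f); // placeholder visual response
        }
    }
}
```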
when i have some time/opportunity i’ll investigate what the timing is like when just sending events to Csound rather than triggering sounds in Unity - that was just a test, and it’s good to know from Giovanni that it’s not optimized. my thinking, though, is that if i pick up the clock event, move an object, have it detect active cells in a layer at a specific position, and then send the message to Csound to play notes, that’s three events and three chances for variable latency on the receiving end. i doubt the results will be great, though they may well be better than what i’ve tried. (a sketch of that chain follows below.)
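here’s roughly what that Unity-driven chain looks like, with each of the three latency points marked - this assumes CsoundUnity’s SendScoreEvent, and the OnClockTick handler, the cell layout, and instrument 1’s p-fields are all hypothetical stand-ins rather than my actual project code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the Unity-driven event chain: clock tick -> move/scan -> score
// event. Each hop runs at frame rate on the main thread, so each one adds
// its own timing jitter before the note reaches Csound.
public class SequencerStep : MonoBehaviour
{
    public struct Cell { public int step; public float pitch; public float amp; }

    public CsoundUnity csound;        // assigned in the Inspector
    public Transform playhead;
    public List<Cell> activeCells = new List<Cell>();
    public float stepSpacing = 1f;

    // (1) the clock event arrives - first chance for jitter
    public void OnClockTick(int step)
    {
        // (2) move the playhead and scan the layer for active cells at this
        // position - second chance, since this waits on the next frame
        playhead.position = new Vector3(step * stepSpacing, 0f, 0f);
        foreach (var cell in activeCells)
        {
            if (cell.step != step) continue;
            // (3) hand the note to Csound - third chance, as the score event
            // is only consumed at the next audio buffer boundary
            csound.SendScoreEvent($"i1 0 0.25 {cell.pitch} {cell.amp}");
        }
    }
}
```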
anyway, i probably won’t be working on this as much in the coming weeks - i’m trying to polish up my VR/XR development skills for work as a software engineer/developer, and unfortunately procedural audio/music generation isn’t really considered a marketable skill, so i’ll have to focus my portfolio on projects that don’t involve audio. i’ll post updates when i have them. thanks a lot to you both for your help and advice!