The farther I dig into Csound, the faster my confusion about it grows - at a-rate, or wait, am I still in the init pass? By now I'm almost certain that I will make a fool of myself by asking questions, but hey, I'm still convinced that learning Csound is worth it!
Can I take an opcode that works on an a-rate input, like "pitch" for example, and make it process faster than a-rate? As far as I understand it, most of Csound's opcodes are designed to work in "real time", that is, at a-rate.
What if I needed to scan the pitch of a 3-minute audio sample at intervals of 50 ms "at init time" (and store the pitch information in a table/array for later use), i.e. before any sound gets generated? It looks to me as if this requires building an instrument that "runs the audio sample data at a-rate" so that the pitch opcode can operate, meaning that the pitch scanning alone would take 3 minutes. I need this scanning phase to be as short as possible, faster than a-rate; in theory, I assume, it could be done in a couple of seconds. I also had a look at the pvs opcodes, and the principle seems to be the same.
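Just to put rough numbers on the scenario (the 44.1 kHz sample rate is my assumption, not given above):

```python
# Back-of-envelope numbers for scanning a 3-minute sample every 50 ms.
# The 44.1 kHz sample rate is an assumed placeholder.
SR = 44100
duration = 3 * 60               # 3-minute sample, in seconds
hop = 0.05                      # 50 ms analysis interval

n_frames = int(duration / hop)  # pitch values the table would need to hold
n_samples = duration * SR       # audio samples the analyzer has to chew through

print(n_frames)   # 3600 analysis points
print(n_samples)  # 7938000 samples
```

So the table only needs 3600 slots, but getting them means pushing roughly 8 million samples through the analyzer, which is why doing it at real-time a-rate costs the full 3 minutes.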
Just to be clear: I understand that the init pass is generally the place where one pre-calculates stuff, sort of "independently" of the k- and a-cycles. My question is really about using opcodes there (I believe).
Can someone give me a hint? Am I missing something, or am I trying to use Csound for something it's not meant for? In that case I assume it's time for Python (to handle the "pre-processing" and then call Csound via the API for playback).
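For what it's worth, the Python pre-processing route could look roughly like this: scan a signal in 50 ms hops and store one crude autocorrelation pitch estimate per frame. Everything here is a placeholder sketch (the sample rate, frequency bounds, and the synthetic 220 Hz test tone standing in for the real sample are all my assumptions), not a recommendation of a particular pitch tracker:

```python
import math

SR = 8000             # assumed sample rate for the sketch
HOP = int(0.05 * SR)  # 50 ms analysis interval, as in the question

def estimate_pitch(frame, sr, fmin=80.0, fmax=1000.0):
    """Crude single-frame pitch estimate: pick the autocorrelation peak
    over lags corresponding to the [fmin, fmax] range."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag, best_corr = 0, 0.0
    for lag in range(lo, min(hi, len(frame) - 1)):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sr / best_lag if best_lag else 0.0

# synthetic 1-second 220 Hz tone standing in for the audio sample
samples = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SR)]

# scan in 50 ms hops and keep the results, like the table described above
pitches = [estimate_pitch(samples[p:p + HOP], SR)
           for p in range(0, len(samples) - HOP, HOP)]
```

The whole scan runs as fast as the CPU allows rather than in real time, and the resulting list could then be handed to Csound (e.g. written into a table) before playback starts.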
Any thoughts, warnings, suggestions, hints will be highly appreciated!