I haven’t looked at that example with samphold
yet, but I just wanted to clear up some confusion.
My DSP knowledge isn't that strong either, but I wouldn't want to predict how the output of plain downsampling will show up on a spectroscope-type analyzer. FFT? It may be an artifact of that analysis process. If it's just high-frequency noise and not real content, then maybe it makes sense. I wouldn't necessarily assume this means the fold opcode is doing any more than exactly what you said, downsampling followed by upsampling. That's just based on what little I know about how FFT works…
Second, let's talk about how you would implement this in a user-defined opcode with your own DIY downsampling algorithm. You don't need to get involved with setting kr or ksmps at all to do it. Actually, you SHOULDN'T use a local ksmps at all; that is not the right tool for this.
Instead, you want to put every single incoming sample (at the higher sample rate) into some kind of buffer and then read the buffer back in a "downsampled" fashion, essentially skipping table values as you go, like playing hopscotch through the table. I say table, but you could use an array instead.
You're trying to handle the mapping of sample values from one sample rate to the lower one via a change in variable type from a to k. But you should be treating a and k as essentially the same thing: an a-rate variable is really just a block of samples that gets processed ksmps at a time. So if you stay at a-rate, you can do the downsampling in a way that gives the same result no matter how ksmps is set.
If I were going to do it, I would use the non-interpolating table read/write opcodes (table and tablew). You set a write index first (a-rate) and then create a read index that is essentially a stepped version of the write index (but still a-rate). None of this needs any k-rate operations at this point: your "read index" will be low-resolution (stepped) but still an a-rate signal. Exactly how you build it determines exactly how your remapping of samples proceeds. I am lazy and would probably use some scaling, followed by int(), followed by rescaling to quickly create the "stepped" index.
Then you just write the current a-sig (the input) into an empty pre-allocated table using the "fine resolution" write index, followed by a second step where you read back from the buffer using the "low resolution" stepped read index.
Essentially you are writing to the table perfectly and then reading it back "wrong", so that (X-1) out of every X samples are "lost" while 1 out of every X samples gets repeated X times, where X is whatever downsampling ratio you pick.
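Just to make that concrete, here is a rough, untested sketch of the table/tablew version. The UDO name DownSamp, the buffer size, and the assumption that int() can be applied to an a-rate expression (as in Csound 6) are all mine, so treat it as an outline rather than finished code:

  opcode DownSamp, a, ak
    asig, kratio xin
    ; pre-allocated empty buffer (GEN02, negative to skip normalization)
    iLen  =      16384
    iBuf  ftgen  0, 0, iLen, -2, 0
    ; "fine resolution" write index: an a-rate ramp that wraps around the table
    aPhs  phasor sr / iLen
    aWndx =      aPhs * iLen
    ; "low resolution" read index: scale down, truncate, rescale back up
    ; (assumes int() works on a-rate expressions, as in Csound 6)
    aRndx =      int(aWndx / kratio) * kratio
    ; write every incoming sample perfectly...
    tablew asig, aWndx, iBuf
    ; ...then read it back "wrong", repeating 1 out of every kratio samples
    aout  table  aRndx, iBuf
    xout  aout
  endop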
Obviously there are other ways to do the actual implementation, including some that would involve translating from a-rate into a k-rate buffer array and back to a-rate (but even though that makes use of k-rate operations, it can still be written so that the downsampling ratio is controlled separately from ksmps, which stays constant like normal).
My table/tablew suggestion is just one example of an implementation, but the point is that you can write your own DIY downsampling algorithm out of primitive opcodes, and none of it has anything to do with setting a local ksmps. It's much more like what you would do if you wrote a downsampling algorithm in C or C++, except with Csound syntax: put values into a buffer, then take them off. If you want to add realtime control of the downsampling ratio, like you say, you would do that by adding some calculations in between the "write index" and the "read index".
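For example (again just a sketch, using the hypothetical DownSamp UDO from above), the realtime control can simply be a k-rate ratio passed into the opcode:

  instr 1
    ; sweep the downsampling ratio from 1 to 64 over the note
    kratio  line 1, p3, 64
    asrc    vco2 0.3, 220
    adown   DownSamp asrc, int(kratio)
            outs adown, adown
  endin

Anything fancier (smoothing the ratio, quantizing it to musical values, etc.) just goes into that bit of index arithmetic between the write index and the read index.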
It’s not the first time I’ve observed people misunderstanding what a local ksmps is for (it’s really only useful in very specific cases) and making things seem more complex than they are.
Let me know if any of that makes sense.