I’m a beginner to Csound and audio programming. I’m building a synthesizer and audio DSP simulator with Unity, and I saw the post on this page about using the “Processing Audio” toggle to send the audio buffer to the ‘inch’ opcode in CsoundUnity. I tried it myself but still can’t get it to work.
Can you explain in more detail how to process and send an audio clip from an AudioSource to Csound using the “ProcessAudio” toggle and ‘inch’? More specifically: which C# function should I use to send the audio buffers (or do I even need a C# script)? How do I set up the channels with ‘inch’ so Csound receives audio from Unity? What flags should I use in Csound, and what does the Csound code look like? Thanks!
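(For context: a typical Csound instrument that reads stereo input with inch and passes it straight through might look like the sketch below. This is just a generic example, not the exact setup the thread is discussing; the header values, especially nchnls = 2, are assumptions.)

```csound
instr 1
  ; read stereo input from input channels 1 and 2
  aL, aR inch 1, 2
  ; pass the signal straight back out (insert processing here)
  outs aL, aR
endin
```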
There seems to be an issue with this at the moment, I’ll take a look.
I might have mentioned this on another thread, but I rarely ever found the need to process audio clips in this way. I usually just load the audio directly into Csound using a table, or with the diskin2 opcode. For example, the following instrument will load a file called pianoSample.wav from the Assets/Audio folder and start playing it back.
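(The original snippet didn’t survive in this copy of the thread; a minimal diskin2 instrument along the lines described, assuming a stereo file, might be:)

```csound
instr 1
  ; stream pianoSample.wav from disk at normal speed (pitch ratio 1)
  aL, aR diskin2 "pianoSample.wav", 1
  outs aL, aR
endin
```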
I have a fix for this, but I won’t be able to prepare an OSX version till tomorrow. I’ll also try to work out how we can process audio from the microphone.
It doesn’t contain any sample scenes, but it does ship with a native plugin that should enable processing of AudioClips. I hope to remove the need for that compiled plugin in the future, but for now it’s the easiest approach. Processing microphone input works now too, but you first have to fill the AudioSource associated with the CsoundUnity component with input from your microphone. Here’s a little code that should do the trick.
using UnityEngine;

public class MicrophoneInput : MonoBehaviour
{
    private CsoundUnity csound;
    private AudioSource audioSource;

    // Use this for initialization
    void Start()
    {
        // Replace with your own device name, or pass null to use the default microphone
        string device = "USB Audio CODEC";
        csound = GetComponent<CsoundUnity>();
        audioSource = csound.GetComponent<AudioSource>();
        audioSource.clip = Microphone.Start(device, true, 10, 44100);
        audioSource.loop = true; // Set the AudioClip to loop
        while (Microphone.GetPosition(device) <= 0) { } // Wait until the recording has started
        audioSource.Play(); // Play the audio source!
    }
}
You could simply save this as a script and attach it to the GameObject that has the CsoundUnity component. Note that you will need to find the name of your own microphone; mine happens to be USB Audio CODEC, but yours will most likely be different. Let me know how you get on.
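(On the Csound side, the routed microphone audio can then be read with inch and treated like any other signal. A sketch applying a reverb, purely as an illustration — the reverbsc feedback and cutoff values here are arbitrary — might be:)

```csound
instr 1
  ; audio routed from the AudioSource on the CsoundUnity GameObject
  aL, aR inch 1, 2
  ; add some reverb: feedback 0.7, cutoff 12000 Hz (illustrative values)
  awetL, awetR reverbsc aL, aR, 0.7, 12000
  outs awetL, awetR
endin
```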
It’s not really about processing live vocals, but the audio clip, haha. I’m trying to build a VR project and add some cool sound FX. Since VR isn’t supported very well on a Mac, I’m working on the project on a PC, haha. I’ll show you something once I finish the prototype :)
Sounds interesting. Are you planning to use the hrtf opcodes? Hector Centeno created this Android app with them afaik: https://play.google.com/store/apps/details?id=net.hcenteno.ambiexplorer
There is also a game that uses them but I can’t think of it right now. I’ll ask the person who mentioned it to me. I’ve never used them myself but from what I hear, they are very well implemented.
btw, did you also modify and recompile some of the shared library code? After I installed the new package on my PC (copying the dll files over), I got some errors saying CsoundUnityNativePlugin.dll is missing.
I’ve just updated the releases. You can find the latest packages here. From now on I’m putting the demo scenes into a separate package, which you can also find on that page. Let me know if it works OK.