
16 Channel Audio

Hi there,

I am trying to design an in-ear monitor system for musicians. In other words, Cabbage/Csound will be used to stream multichannel audio in real-time.

I will be using a Raspberry Pi 4 (8 GB). I will also add 16 analogue audio inputs from Texas Instruments. I will make use of a real-time kernel with Linux. The incoming audio will be processed by the Raspberry Pi using Cabbage/Csound and then sent over the air via the Spark wireless streaming device (see https://www.sparkmicro.com/wp-content/uploads/2020/03/SPARK-Audio-apps-note-v2.5.pdf).

I need a minimum of 8 stereo channels or 16 mono channels. That will allow me to use 8 stereo wireless headsets.

I understand that Cabbage is mainly used for building instruments, synths, etc., but the website says that it can be used as standalone multichannel software…

So I want to use Cabbage to build an app that will let you:

  1. Select individual input channels from the analogue inputs
  2. Mix the incoming audio with a basic mixer
  3. Add basic effects, e.g. reverb, to a specific channel on the mixer via the audio patcher
  4. Output 8 individual stereo mixes to the Spark wireless system or similar (a rough sketch of one channel strip follows this list)
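
To make that more concrete, something like the following is roughly what I have in mind for a single channel strip. It is completely untested, and all the widget names, channel counts and reverb settings are placeholders:

```
<Cabbage>
form caption("Monitor Mixer - one channel strip"), size(280, 160), pluginId("mix1")
combobox bounds(10, 10, 120, 25), channel("inSel"), items("Input 1", "Input 2", "Input 3", "Input 4"), value(1)
rslider  bounds(10, 45, 70, 80), channel("gain"), range(0, 1, 0.8), text("Level")
rslider  bounds(90, 45, 70, 80), channel("rvbSend"), range(0, 1, 0), text("Reverb")
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2        ; one stereo mix out; the full app would need 16 outputs
nchnls_i = 16     ; the 16 analogue inputs
0dbfs = 1

instr ChannelStrip
  kSel  chnget "inSel"              ; which analogue input this strip listens to
  kSel  limit  kSel, 1, 16          ; guard against an uninitialised widget value
  kGain chnget "gain"
  kSend chnget "rvbSend"
  aIn   inch   kSel                 ; pull the selected input channel
  aWetL, aWetR reverbsc aIn*kSend, aIn*kSend, 0.75, 9000
  outs  aIn*kGain + aWetL, aIn*kGain + aWetR
endin
</CsInstruments>
<CsScore>
i "ChannelStrip" 0 z
</CsScore>
</CsoundSynthesizer>
```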

Can someone answer the following questions:

1. Can Cabbage support up to 16-channel multichannel audio input? That would be 8 stereo channels.
2. Can Cabbage let you build either a) one instance that lets you select multiple audio inputs and outputs, or b) 8 instances that run at the same time, with each one able to select a specific stereo input channel?
3. Can it send that input out again, either via analogue or something else? (I will confirm this against how the Spark chips work.)
4. Can Cabbage be used for real-time audio processing for live musicians, that is, as a live monitoring system with basic effects? If I sing into a mic, the signal travels into the Pi via analogue, is processed by Cabbage, and is output by Cabbage via analogue or directly to the Spark chip (TBC).
5. I will have to run Cabbage on Linux with real-time audio.
6. I will have to get the overall processing latency down to 5 ms (that is, with 16 mono channels).

Please can someone respond to the above six questions. If Cabbage/Csound is not the right software, could you point me to the right software?

Thanks

See pic below (purple markup)

1. Yes.

2. Hmm, I'm not sure about this. You can create a standalone that gives users access to the audio settings, in which they can select the config they want? Maybe? If you run Cabbage within a VST host, then you could use the host to manage the IO configs. Are you familiar with Carla?

3. This depends on how you configure signals on your PC. If you are using Jack, for example, then you can patch the output from Cabbage wherever you like.

4. Yes, this is one of the most common uses for Csound. And this is how all Cabbage effects plugins work, i.e., sound comes in from a host, Csound processes it, and Cabbage sends the audio back to the host.
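
For instance, a bare-bones effect looks something like this. The reverb and widget values are just placeholders, and it assumes a stereo plugin; Cabbage supplies the host IO around it:

```
<Cabbage>
form caption("Simple Reverb"), size(280, 130), pluginId("srv1")
rslider bounds(10, 15, 80, 80), channel("mix"), range(0, 1, 0.3), text("Mix")
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
  kMix chnget "mix"                       ; wet/dry amount from the Cabbage slider
  aL, aR ins                              ; audio arriving from the host
  aWetL, aWetR reverbsc aL, aR, 0.75, 10000
  outs aL*(1-kMix) + aWetL*kMix, aR*(1-kMix) + aWetR*kMix   ; back to the host
endin
</CsInstruments>
<CsScore>
i1 0 z
</CsScore>
</CsoundSynthesizer>
```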

5. It's hard to say, but you certainly will get better results that way.

6. On an RPi with a non-realtime kernel, this might be asking a bit much. Note that you can potentially use the Elk RPi OS, which has super-low latency and ships with Cabbage. It doesn't provide any out-of-the-box GUI, however, so everything is headless. But you can develop from your own machine without any problems.

Of course, you may not need to use Cabbage at all. You could do all of this with vanilla Csound, and use it to manage all the IO stuff. If you don’t need to run your instruments in a host, and you don’t absolutely require a GUI, then vanilla Csound might be the best bet?
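
To give a rough idea, a vanilla Csound sketch of the 16-in/16-out routing could look like the following. It hard-codes two inputs and two stereo mixes; the gains, reverb settings and device flags are all placeholders you would adapt to your interface:

```
<CsoundSynthesizer>
<CsOptions>
; default ALSA devices as a placeholder; the 16-channel card can be chosen
; with device-specific -iadc/-odac flags
-+rtaudio=alsa -iadc -odac -d
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 16      ; 16 outputs = 8 stereo wireless mixes
nchnls_i = 16    ; 16 analogue inputs
0dbfs = 1

instr Monitor
  ; grab two of the sixteen analogue inputs (channel numbers are 1-based)
  aVox inch 1
  aGtr inch 2

  ; a touch of reverb on the vocal only (settings are placeholders)
  aWetL, aWetR reverbsc aVox*0.3, aVox*0.3, 0.7, 8000

  ; stereo mix for headset 1 on outputs 1-2
  outch 1, aVox + aGtr*0.8 + aWetL, 2, aVox + aGtr*0.8 + aWetR

  ; a different balance for headset 2 on outputs 3-4
  outch 3, aVox*0.6 + aGtr, 4, aVox*0.6 + aGtr
endin
</CsInstruments>
<CsScore>
i "Monitor" 0 3600
</CsScore>
</CsoundSynthesizer>
```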

Hi, thanks for coming back to me.

I have been talking to various programmers [https://forum.elk.audio/t/elk-os-on-raspberry-pi-4-8gig-without-elk-hat/423/4 / https://forum.bela.io/d/1307-allen-heath-qu32-into-the-bela-via-usb/9 / https://forum.bela.io/d/1309-bela-cape-raspberry-pi-4b-8gig/4].

Bela also designed their own OS, but they work with BeagleBone boards. It is also a very low-latency OS, and they do have hardware that supports 16 audio inputs, but you end up paying up to R10 000 (I reside in South Africa). Unlike Elk OS, you can't use the Bela OS on a Raspberry Pi.

However, I have a simple USB microphone and a simple USB sound card. I run Windows (not Linux). I installed Cabbage and ASIO4ALL, which allowed me to hear myself when I speak into the microphone. The sound goes from the mic into the PC via USB to CabbageStudio, and out to my USB sound card. With all of that I managed to hear myself with very low latency (48 kHz and a very small buffer), but not low enough for real-time audio monitoring (which is what the device I am trying to design is for). So… if I can build my own hardware and software, I can lower the latency.

For example, Audiofusion is a system where you install Soundcaster on a MacBook, plug your digital mixer/sound card into the MacBook, and send multitrack audio over the air with a 5G router. You then use your iPhone to receive and listen to the audio. This is all used for real-time audio monitoring. [https://audiofusionsystems.com/]. There is similar software for Windows called StageWave [https://stagewave.io/]. All of this software gives you low latency but works with USB devices, a PC and a router. So this shows that everything is possible!!!

Would you mind giving input on the following:

I like the idea of using vanilla Csound from scratch and building my own virtual live mixer with effects. It will run on ALSA, and I still need to find a Linux distribution with a real-time kernel. Just as a personal opinion, will a real-time kernel, a light Linux (e.g. DietPi), Csound and ALSA be able to do all of this? As far as I am aware, both Csound and ALSA support multichannel audio. Then all that is left is hooking up the analogue inputs or USB sound card, and the analogue outputs and/or wireless transmitters or 5G Wi-Fi.
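
For what it's worth, this is the kind of minimal ALSA pass-through I was planning to use to test the round-trip latency. The device name and buffer sizes are guesses that I still need to verify on the Pi:

```
<CsoundSynthesizer>
<CsOptions>
; ALSA backend with small buffers; at 48 kHz, -b 64 is ~1.3 ms per period
; and -B 256 is ~5.3 ms of hardware buffering, so the 5 ms target is tight
; "hw:0,0" is a placeholder; use the card reported by `aplay -l`
-+rtaudio=alsa -iadc:hw:0,0 -odac:hw:0,0 -b 64 -B 256 -d
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr PassThrough       ; mic in -> straight back out, for measuring latency
  aL, aR ins
  outs aL, aR
endin
</CsInstruments>
<CsScore>
i "PassThrough" 0 3600  ; run for an hour
</CsScore>
</CsoundSynthesizer>
```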

The end goal of this is not to build a device that only I can maintain, but a device that uses a universal, native OS, drivers and hardware together with Csound, so that the community can develop their own ideas on the framework I will provide. I want musicians, sound engineers, etc. to be able to develop it further to suit their needs, and also to make it affordable for people who can't afford high-end audio gear.

Bela is just waaaaay too expensive, so it is better to build my own 'system'. If I use Elk OS, I have to pay them for driver support and I am also limited to their expertise. If I use general tools as mentioned above, the average person can build and afford their own system.

Let me know what your thoughts are!!! I just like the Csound idea.

In my experience, a realtime kernel has served me fine. I've run KXStudio and Ubuntu Studio, which both have realtime kernel support, without any noticeable problems. Then again, I've never really pushed them hard with multichannel audio.

I think vanilla Csound will always be faster than Cabbage to some extent, because you have no extra overhead. So I would try that first and see how it goes. It sounds like an interesting project. If you have further Csound questions, feel free to post here or on the Csound mailing list. Lots of helpful users on both forums!

@rorywalsh Thank you