
Useful learning materials?

Hey all,

Just wondering if any procedural audio veterans would be willing to share any rich sources of information that were particularly enlightening on their journey of learning procedural audio? Books, websites, documentaries, or insights that arose from experience, anything at all that informed your perspective and approach.

Furthermore, are there many people online creating content for learning procedural audio in the context of Cabbage? I’ve used @rorywalsh’s tutorials, which are excellent and very useful indeed; I was just wondering if anyone else is out here talking about Cabbage or producing additional learning materials?

I do learn many things from examining the structures/processes included in the Cabbage examples, breaking them apart and changing their behaviours etc. But I’d love to hear anyone talk broadly about their process: why they make the decisions they do when designing sounds, particularly sounds that mimic the real world, organic timbres such as earth, wind, fire, metal, glass etc., and other organic sounds such as the human voice.

This is my current reading list:

  • Andy Farnell - Designing Sound (Andy Farnell’s website is indeed amazing and contains lots of the audio processing examples I’m looking for, so I may translate some of these processes to Cabbage as an exercise: http://aspress.co.uk/sd/)
  • Perry R. Cook - Real Sound Synthesis
  • Udo Zölzer - DAFX (Digital Audio Effects)
  • David Creasey - Audio Processes

Is there some intuition that arises from ear training, coupled with bedrock audio processing knowledge, that you have developed over time which has made it easier to figure out what components you need to create that sound as you hear it, analyse it and decompose it in your mind?

I’m sure every case is different, I’m just curious about the processes of procedural audio enthusiasts when it comes to having a blank Cabbage patch and a sound you want to create. Audio processing can feel like a dark-arts alchemy with all its abstraction, detail and wondrous mystery to a humble noob.

Would love to hear any and all thoughts, just looking for insight.

I think the Andy Farnell book is very good. There are also quite a few GDC videos out there on procedural audio which I found good. I think Nicolas Fournel has a good one, but it’s been a few years since I watched it. The Audio Programmer Discord has a section on sound design, which might be a nice place to ask more general questions about the approaches people take with this stuff. Personally, I tend to crack open the spectrogram when I’m trying to emulate a sound, and go from there, seeing which synthesis technique might work best. It’s not as scientific as Farnell’s models, but the results can be pretty good. FWIW, Iain and I shared some instruments here a few years ago in the Csound for Game sub-forum. Feel free to modify, improve, etc.
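For anyone new to the spectrogram-first approach, the analysis step can be illustrated with a tiny pure-Python DFT. This is a sketch for illustration only (real tools use an FFT, and the 440 Hz test tone is my own example, not from any of the instruments mentioned above):

```python
import math

def spectrum(frame, sr):
    """Naive DFT magnitude spectrum of one Hann-windowed frame.
    Pure Python for illustration only; real analysers use an FFT."""
    n = len(frame)
    # Hann window reduces spectral leakage before analysis
    windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
                for i, x in enumerate(frame)]
    bins = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(windowed))
        im = sum(-x * math.sin(2 * math.pi * k * i / n)
                 for i, x in enumerate(windowed))
        bins.append((k * sr / n, math.hypot(re, im)))  # (frequency Hz, magnitude)
    return bins

# A 440 Hz test tone: the strongest bin should land near 440 Hz
sr = 8000
frame = [math.sin(2 * math.pi * 440 * i / sr) for i in range(512)]
peak_freq = max(spectrum(frame, sr), key=lambda b: b[1])[0]
```

Finding the peak bins of a recorded sound like this is essentially what "seeing which synthesis technique might work best" starts from: strong isolated peaks suggest additive/sinusoidal sources, broadband energy suggests filtered noise.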


A brief note, I got pretty into procedural audio when I first started with Csound. Here’s an early piece I did:

No samples or recordings, all procedural done in Csound. If you have any questions about it or want to discuss the topic more let me know.


Wow, incredible work! Really enjoyed listening to that piece. The birds, frogs and owls are extremely true to life. I’m certainly interested in how you achieved that. Did Andy Farnell’s Csound resources inform your design, or perhaps other resources?

Also, if anyone is interested, the following Farnell talk is one of the best presentations I’ve seen so far for a nice panoramic look at procedural audio: a great balance of technical and philosophical ideas, the history and state of procedural audio in the games industry (though it’s 10 years out of date, I’m sure it still holds much relevance), amazing demonstrations, and really insightful. It’s in 5 parts for extended listening: https://www.youtube.com/watch?v=Dc04hDcy3lo&list=PLLHtPBwbWUW5EG-4ajfz5BQBOIm31ClhC


Thanks for the link! I enjoyed that. Here’s another one, perhaps of better overall quality (it’s easier to hear and see him and the examples).

I think it complements the other well.

For my purposes his approach is hit or miss, although obviously brilliant. Some models, like the engines, are really good. Others, such as the footsteps (and birds), are not great IMHO. I wouldn’t fault his process though; I have no idea how much effort he’s put into each “model”, for lack of a better term.

The birds were tweaked from Jeanette C.'s wonderful Csound code which she has now shared:
http://juliencoder.de/sound/m_aves-1.0.zip

I think she based these on research by Bill Schottstaedt which she had found:
https://ccrma.stanford.edu/courses/220b-winter-2002/lectures/1/examples/bird.clm

Her main sound page is here:
http://juliencoder.de/sound/index.html

I have adaptations that may make them a little easier to set up; I can post them somewhere if you like.

The frogs, crickets & owls I designed myself by analysing sound files: listening carefully and measuring the frequencies by ear and with an FFT-based spectrum analyser, noting subtle rises and drops in pitch/amplitude and so on. Carefully viewing the waveform shapes (for example, how sinusoidal they are, which surprisingly many were). Exponential vs linear enveloping of pitch and amp are minor but still important details; I used some relatively elaborate transeg envelopes for that. Well, elaborate considering my abilities 😆.
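The linear-vs-exponential distinction is easy to hear and easy to sketch. Here is a minimal pure-Python comparison of the two segment shapes (my own toy functions, loosely mirroring what Csound’s linseg/expseg produce; the endpoint values are arbitrary):

```python
def linseg(start, end, n):
    """Linear envelope segment over n samples."""
    return [start + (end - start) * i / (n - 1) for i in range(n)]

def expseg(start, end, n):
    """Exponential envelope segment (endpoints must be non-zero and have
    the same sign), roughly the shape Csound's expseg produces."""
    ratio = end / start
    return [start * ratio ** (i / (n - 1)) for i in range(n)]

# Same endpoints, very different perceived decay:
# the exponential curve drops far faster early on
lin = linseg(1.0, 0.001, 100)
expo = expseg(1.0, 0.001, 100)
```

Halfway through, the linear segment is still around 0.5 while the exponential one has already fallen well below it, which is why exponential amp envelopes tend to sound more natural for decays.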

One aspect I again found important was placing them in an atmosphere; I think Andy touches briefly on this in his book under Propagation Effects (Reflection, Scattering etc.). First a series of slightly low-passed delays to mimic early reflections bouncing off objects closer to the listener, with longer delays for objects further away. Then through some reverb to create a sense of spaciousness.
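That early-reflection idea can be sketched in a few lines of pure Python. The tap times, gains, and filter coefficients below are made-up illustrative values, not the ones from the actual piece:

```python
def one_pole_lp(signal, coeff):
    """One-pole low-pass: y[n] = (1 - coeff) * x[n] + coeff * y[n-1]."""
    y, out = 0.0, []
    for x in signal:
        y = (1 - coeff) * x + coeff * y
        out.append(y)
    return out

def early_reflections(dry, taps):
    """Sum the dry signal with delayed, attenuated, low-passed copies.
    taps: list of (delay_samples, gain, lp_coeff); later, quieter, darker
    taps read as reflections off more distant surfaces."""
    max_delay = max(d for d, _, _ in taps)
    out = [0.0] * (len(dry) + max_delay)
    for i, x in enumerate(dry):
        out[i] += x
    for delay, gain, coeff in taps:
        for i, x in enumerate(one_pole_lp(dry, coeff)):
            out[i + delay] += gain * x
    return out

# An impulse through three reflections: increasing delay, lower gain, darker tone
wet = early_reflections([1.0] + [0.0] * 9,
                        [(3, 0.6, 0.2), (5, 0.4, 0.4), (8, 0.25, 0.6)])
```

In a real patch the output of this stage would then feed the reverb for the overall sense of space, as described above.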

Also, adapting the space, such as linking the bird amplitude to the reverb send level: the lower the amp (bird farther away), the more reverb, and vice versa.
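That amplitude-to-reverb link could be sketched like this. The linear mapping, parameter names, and amplitude range are my own illustration, not the actual patch:

```python
def distance_mix(amp, min_amp=0.05, max_amp=1.0):
    """Map a source's amplitude (standing in for distance) to (dry, wet)
    send levels: quieter sources get proportionally more reverb send."""
    amp = max(min_amp, min(max_amp, amp))
    closeness = (amp - min_amp) / (max_amp - min_amp)  # 0 = far, 1 = close
    return closeness * amp, (1.0 - closeness) * amp    # (dry send, reverb send)

near = distance_mix(1.0)   # close bird: all dry signal
far = distance_mix(0.05)   # distant bird: all reverb send
```

A crossfade with an equal-power curve would arguably sound smoother than this linear one, but the principle, distance controlling the dry/wet balance, is the same.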

For these timbres Andy’s book was more of an inspiration in terms of trying to think through the chain from initial source to end result as a process, like a block diagram, and imagining what factors might influence it along the way. I completely ignored his cricket example, as it was just too convoluted for my poor little brain at the time. And unfortunately I didn’t find my attempts at recreating the PD examples in Csound particularly convincing. User error!

I even downloaded the PD manual (I unfortunately can’t run PureData) so I could understand what each module did.

The frogs, crickets & owls primarily used simple sinusoids as an initial source.

The firepit was loosely adapted from his block diagram, primarily conceiving it as 3 elements (hissing, crackling, lapping [roar]), and from looking at the PD examples, where I saw he cleverly used low-pass filtering to create small envelopes that encapsulate noise to create the crackle.
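The low-passed-impulses trick reads roughly like this in pure Python (the density and filter values are my own guesses, not Farnell’s, and a real patch would run this per-sample at audio rate):

```python
import random

def crackle(n, density=0.01, lp=0.9, seed=1):
    """Sparse random impulses are low-pass filtered into short decaying
    envelopes, which then gate white noise into isolated crackles."""
    rng = random.Random(seed)
    env, out = 0.0, []
    for _ in range(n):
        impulse = 1.0 if rng.random() < density else 0.0
        env = (1 - lp) * impulse + lp * env  # one-pole LP: click -> tiny decay envelope
        out.append(env * rng.uniform(-1.0, 1.0))  # envelope gates the noise
    return out

sig = crackle(1000)
```

Raising `density` gives a busier fire; raising `lp` toward 1.0 stretches each crackle into a longer sizzle.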

I’ve since used that for several things, like running water. For that I found it helpful to learn about the different opcodes for randomization. For example, for a babbling brook I wanted the bulk of the primary bubbles to fall within a certain bandwidth, with some, but fewer, drifting into higher frequencies (smaller bubbles). For that I found a beta distribution worked better; I think I used the betarand opcode.
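A sketch of those beta-distributed bubble frequencies, using Python’s stdlib equivalent of Csound’s betarand. The band edges and shape parameters here are illustrative guesses, not the values from the brook patch:

```python
import random

def bubble_freqs(n, lo=400.0, hi=3000.0, a=2.0, b=5.0, seed=1):
    """Draw bubble centre frequencies from a beta distribution mapped onto
    [lo, hi] Hz: with shape a < b, most bubbles sit low in the band, with a
    thinner tail of smaller, higher-pitched ones."""
    rng = random.Random(seed)
    return [lo + (hi - lo) * rng.betavariate(a, b) for _ in range(n)]

freqs = bubble_freqs(2000)
```

With a uniform distribution the bubbles would spread evenly across the band; the beta shape is what keeps the brook’s body low while still allowing the occasional bright, small bubble.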

I haven’t seen any of Andy’s references to Csound resources though, do you recall where they might be mentioned?

Have you checked out the example pages?

http://aspress.co.uk/sd/

There are links to all the PD codes as well as audio examples at the bottom of each page.

Anyways, fun & interesting discussion. If you have any other questions, or want to chat about or share other ideas, by all means do.


Amazes me what can be done with a sine wave. And yes the ambience really does add to the realism! Thank you for taking the time to discuss and provide the methods used and all those resources!

I have indeed checked out Andy’s example page, though I do suspect that many examples are no longer compatible with new versions of PD, as I’ve tested a few, and the Footsteps patch in particular doesn’t seem to sound anything like it does in his presentations.

I’m developing plugins specifically for a 3D game implementation (made in Unity, using CsoundUnity), so I think I’ll be handling ambient settings a little differently, due to the necessity of dynamic spatialisation, but the principles are just as relevant. This is the type of thing I’ll be implementing for various objects and events in the world, where they are nicely coupled with visual changes: https://www.youtube.com/watch?v=AzKjFwTLtPs

I imagine one master plugin attached to an empty GameObject, into which all Cabbage plugin instances in game-world space are routed (they’ll need to be routed dynamically, as it’s all procedurally generated, and I’m hoping that’s possible), so that it acts as a bus for post-processing effects, with automation changing relative to the listener (the Player) for dynamically calculating position, occlusion etc. That way, it could be controlled conveniently from one point, which also opens the door for more custom world-bending post-process effects, hopefully.

Not sure if this is possible yet, or even the necessary approach; I’m currently in the early stages of building the plugins and learning about Unity’s audio system and CsoundUnity. I’ve been using FMOD for the first year of development and decided to switch recently. I know Unity has its own built-in distance and occlusion system, so I’m just wondering how I’m going to use that for more tailored dynamic world reverb. @rorywalsh, do my speculations sound reasonable/feasible?


The nice thing about the way CsoundUnity is set up is that it’s an AudioSource, so you can treat it the same way you would any source. One thing to watch out for is having too many instances of Csound running. Each CsoundUnity object creates an instance of Csound, which will allocate a certain amount of resources, so you should probably use them sparingly (although I’ve never tested the limits of this). You can, however, route audio from a single instance of CsoundUnity to other game objects using a CsoundUnityChild. This makes it easier to manage the number of instances, and still have sounds localised to a particular game object.

I think there could be some confusion around the definition of “plugins” and how CsoundUnity works.
Technically we are updating a Unity AudioSource’s content using the OnAudioFilterRead callback, not using a Unity Native Audio Plugin.

@rorywalsh did some work to enable exporting Unity Native Audio Plugins directly from Cabbage, but I don’t think this export is working at the moment (I tried it on the latest Cabbage artefact for Windows on Azure, v2.9.105).
Those would allow Csound based plugins to be used directly in the Unity Audio Mixer, or added directly on an AudioSource.

@WillH0ward-1, to achieve routing and applying effects to a group of AudioSources, you can rely completely on the Unity Audio Mixer, setting an output group on each AudioSource.

Then in the Audio Mixer you’re free to add the existing audio effects (note that these are not Csound effects).

Hope this helps!

Whoops, that menu item should be gone. While this did work, performance was absolutely horrendous; it was basically unusable.