
Microtonal VST

Hi there, I'm working on a microtonal synthesizer that I'd eventually like to turn into a physical instrument, but for now I'm building it virtually.

This is my first coding project, and while I have a lot of little questions (the .csd file I've attached is more of a skeleton for the end project and has a lot of nonfunctional parts at the moment), my biggest question is about how to organize the instruments.

Currently I have individual instruments with specific Hz values mapped per button for a few octaves of 17-EDO. Ideally I'd like these organized so that I could easily swap octaves, or as a generic template that could swap in Hz values for other EDOs. I'm sure there's a way to do this with a table or some other method, but I'm not sure what to look for in the manual or how to implement it in practice. I'm more than willing to start over from scratch with a much simpler approach than the one I came up with, but I figured building a semi-functional version of the synthesizer would be easier than trying to explain what I'm attempting through text alone.

Test.csd (15.7 KB)

Why not use a keyboard? That way you can play your synth with an actual keyboard or sequence it from a DAW. Your current approach will really limit the accessibility of the synth, but perhaps there is a reason you chose this approach? If not, let me know and we can discuss much simpler ways of implementing this. :slight_smile:

The reason is mostly the limitations of existing MIDI keyboard layouts. Piano layouts designed around 12-EDO make it difficult to visualize other EDOs geometrically. Isomorphic layouts do have existing implementations and a user base (even if, as with the Lumatone, they are cost-prohibitive), while layouts like the Archiphone and other variations on the Wicki/Huygens-Fokker designs exist only in extremely niche contexts.

I'm a multi-instrumentalist, and learning and developing muscle memory in different live-performance contexts (saxophone, trombone, guitar, piano, etc.) has led me to approach music in completely different ways. My ultimate goal is to create open-source build docs, such as CAD files, PCB layouts, and easily sourceable mechanical-keyboard parts, for an affordable microtonal instrument template with a symmetrical layout for EDOs up to 35-EDO (limited by double-sharp/flat notation, which is its own can of worms).

All this to say, the layout is the point of this particular project. Having a variety of "playable" instruments lowers the point of entry for performing microtonal music with others. That's far down the line, though; the first step is figuring out how not to type madsr for each individual instrument. :wink:

I believe there is some support in Csound for Scala files, which might be much better suited to your use case. I could look into it more, as I have some resources on that if you'd like.
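If I remember right, Csound's cpstun and cpstuni opcodes read a tuning from a function table (number of grades, repeat interval, base frequency, base key, then the step ratios), which is essentially the same information a Scala file carries. A rough 17-EDO sketch, with ratios rounded to four decimals and the TunedNote instrument name just a placeholder; worth double-checking the table layout against the cpstun manual page:

```csound
; cpstun-format table: numgrades, interval, basefreq, basekeymidi, ratios
gi17 ftgen 0, 0, 64, -2, 17, 2, 440, 69, 1, 1.0416, 1.0850, 1.1301, 1.1771, 1.2261, 1.2772, 1.3303, 1.3857, 1.4433, 1.5034, 1.5660, 1.6311, 1.6990, 1.7697, 1.8434, 1.9201

instr TunedNote            ; p4 = key number; key 69 sounds A4 = 440
  iFreq cpstuni p4, gi17
  aEnv  madsr .1, .15, .1, .1
  aSig  vco aEnv, iFreq, 2, 0.5
  outs aSig, aSig
endin
```

Key numbers behave like MIDI notes here, so a score line such as `i "TunedNote" 0 .5 69` should sound A4.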

Potentially! I was initially hard-coding the Hz values from a spreadsheet that calculated each note value starting from A4 = 440 (for example, in 17-EDO: [previous higher chromatic note] * 2^-((1200/17)/1200), and the same without the negative sign for ascending notes), so I wasn't sure whether integrating Scala would proactively add flexibility down the line or just add another point of failure to get caught up on.
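That expression reduces to 2^(-1/17), i.e. one step down divides the frequency by the 17th root of 2. A quick i-time sanity check in Csound (a throwaway instrument, not from the attached file):

```csound
instr Check
  iA4   = 440
  iDown = iA4 * (2 ^ (-1/17))   ; one 17-EDO step below A4, ~422.4 Hz
  iUp   = iA4 * (2 ^ (1/17))    ; one step above, ~458.3 Hz
  prints "down: %.2f Hz, up: %.2f Hz\n", iDown, iUp
endin
```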

My current train of thought is either to apply that formula to each keyboard button's value, modified by an octave button, or to list the Hz values in a table that can be looked up. A4 = 440 was an arbitrary choice, but it does result in C0 through C9 fitting neatly into the audible range, and even at a full ten octaves of 31-EDO that's only 310 values to calculate or recall. But if a Scala implementation makes this trivial, I'm all for giving it a go.
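To sketch the table idea concretely: fill a table once at i-time from the same formula, so octave switching becomes an index offset and changing EDO means changing one global. The instrument names and the choice to pin A4 four octaves above index 0 are placeholder assumptions:

```csound
giEDO     init 17                 ; steps per octave; swap for other EDOs
giOctaves init 10                 ; ten octaves, per the C0..C9 range
giA4Index init 4 * giEDO          ; put A4 = 440 four octaves above index 0
giFreqs   ftgen 0, 0, -(giEDO * giOctaves), -2, 0   ; exact-size table, no rescale

instr FillTable                   ; run once before playing anything
  iIndex = 0
  while iIndex < giEDO * giOctaves do
    tabw_i 440 * (2 ^ ((iIndex - giA4Index) / giEDO)), iIndex, giFreqs
    iIndex += 1
  od
endin

instr PlayNote                    ; p4 = step within octave, p5 = octave number
  iFreq tab_i p4 + p5 * giEDO, giFreqs
  aEnv  madsr .1, .15, .1, .1
  aSig  vco aEnv, iFreq, 2, 0.5
  outs aSig, aSig
endin
```

Score lines like `i "FillTable" 0 0` followed by `i "PlayNote" 1 .5 0 4` should then sound A4, since step 0 in octave 4 lands on the A4 index.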

Sorry I couldn’t get back to you sooner. I’ll take some time to reply in the next day or two, I’ve been wiped out with a flu for the past week. I’m only starting to feel a little bit better now.


I took another look at your implementation, and it seems it could be simplified quite a bit. For example, one synth instrument should be enough. To simplify things and make it easier to swap in other temperaments in the future, why not pass the frequency directly to the synth instrument? You can use the 17-EDO formula directly. I'm thinking something like this:

instr 001
    iBaseFreq = 129.486
    
    kBut chnget "but101"
    if changed:k(kBut)==1 then
        event "i", 101, 0, .5, iBaseFreq 
    endif
    
    kBut chnget "but102"
    if changed:k(kBut)==1 then
        kFreq = iBaseFreq * pow(2, 1/17)
        event "i", 101, 0, .5, kFreq
    endif
    
    kBut chnget "but103"
    if changed:k(kBut)==1 then
        kFreq = iBaseFreq * pow(2, 2/17)
        event "i", 101, 0, .5, kFreq
    endif
(...)

Your one synth instrument would then look like this:

instr 101
  aenv madsr .1, .15, .1, 0.1
  ares vco aenv, p4, 2, 0.5
  outs ares, ares
endin 

Instrument 001 could be reduced to about 10 lines if you use a combination of the cabbageGetWidgetChannels and cabbageChanged opcodes. Have a go on your own first, and if you get stuck I'm happy to help out. I'd hate to rob you of that rare feeling a developer gets when they manage to reduce their entire code base to a fraction of its size :slight_smile:
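In case it helps once you've had a go: here is one possible shape for that reduction using only stock Csound opcodes instead of the Cabbage-specific ones. The `Listen` instrument name is a placeholder, and I'm assuming the buttons keep the but101 to but117 channel naming from your snippet:

```csound
instr 1                 ; spawn one held listener per scale step at startup
  iStep = 0
  while iStep < 17 do
    schedule "Listen", 0, -1, iStep
    iStep += 1
  od
endin

instr Listen
  iStep = p4
  SChan sprintf "but1%02d", iStep + 1   ; but101 .. but117 (assumed names)
  kBut  chnget SChan
  if changed:k(kBut) == 1 && kBut == 1 then   ; trigger on button press only
    event "i", 101, 0, .5, 129.486 * pow(2, iStep/17)
  endif
endin
```

Each Listen instance watches one channel, so adding an octave switch or moving to another EDO only touches the frequency expression.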

P.S. I clearly wasn’t thinking straight when I asked why you don’t use a regular keyboard—the very title of your post answers that question very well.
