
Searching for an easy way to implement and sonify datasets

Sounds like an interesting project. What format are your ‘datasets’ in? If they are just text files then you can open them in Csound and use the data to populate/manipulate various variables. How you do that is up to you, but I think it would be better to work this way than from a score.

I’ve also experienced some issues when saving bad Csound code. I may have a solution, but I’m a little busy at the minute so I won’t be able to try it out for a few days.

Hi,

this is a very interesting project. I just have a few questions:

  • do you already have in mind which synthesis technique to use?
  • is this project for real time? Let me explain: do the data values change, or are they fixed?
  • is the acquisition in real time? (with a microcontroller like an Arduino or a Raspberry Pi 3, for example…)

For this kind of approach additive synthesis is very interesting: every partial changes its amplitude or frequency value according to the data received (I do not know what you want to measure, but if there are many data values, this is a good solution. This is my personal opinion… :wink: )
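
Just to give the idea, a minimal additive sketch (untested, only an illustration; the fixed kAmp values stand in for your rescaled data values):

giSine ftgen 0, 0, 16384, 10, 1   ; sine table for the partials

instr 1
  ; placeholders - in practice each of these would come from your data
  kAmp1 = 0.5
  kAmp2 = 0.3
  kAmp3 = 0.2
  a1 poscil kAmp1, 220, giSine    ; fundamental
  a2 poscil kAmp2, 440, giSine    ; 2nd partial
  a3 poscil kAmp3, 660, giSine    ; 3rd partial
  aMix = (a1 + a2 + a3) * 0.5
  outs aMix, aMix
endin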

keep us updated :grinning:

TI

hey guys!

thanks for your replies and sorry for being so quiet in the past weeks!

i’m still researching the main topic of my thesis, so i’m not really sure if or how i should best sonify things with csound.

i guess it would be the plain generation of different sound(-qualities) that differ from each other: several ‘instruments’ that may vary in key/freq, amplitude, adsr, quality (e.g. timbre), rate of change and/or location.

not sure what exactly you mean. some (sound) values i would define as fixed, and the (geo)data values would influence some variables (in sequences). my question here is: is it, for example, a good way to just copy/paste data into the project so that it makes things listenable by defining different values (such as freq and so on)? if yes, can this happen in the CsScore section or is it better to code it in the CsInstruments section? what is the approach in this case? in the end i may only use csound to generate and save short pieces that represent the data and integrate these soundfiles into other applications where they’ll be available.

the acquisition is separated from the sonification, for now. so no real-time acquisition like streaming data. i’ll get some data, generalize it if needed, and transform it for use in csound/cabbage. the data should only influence the sound variables, in a few examples.

but i’d also like to try using data that defines the sound directly, amp- or freq-wise (e.g. data from electromagnetic radiation or the magnitude of earthquakes). but that’s another approach.

are there any ‘presets’ where several instruments and their sequences are run by values from external or pasted data?

greetings

Not that I can think of. But take a look at readk. It will let you read values from a file. You can then use those values to control whatever parameters of your instruments you wish.
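
Something along these lines, for example (untested; the file name is only an illustration, and the iformat code has to match how your file is written; 7 here reads ASCII integers):

giSine ftgen 0, 0, 16384, 10, 1

instr 1
  ; read a new control value from the text file once per second
  kVal  readk "data.txt", 7, 1
  ; map it onto any parameter you like, e.g. pitch
  kFreq = 200 + kVal * 10
  aSig  poscil 0.3, kFreq, giSine
  outs  aSig, aSig
endin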

thank you! sounds promising, i will have a look. :eyes:

Hi,

For this purpose I think you could build a tool that contains multiple synthesis techniques, all controlled by your input data (data that will be rescaled appropriately):

  • FM with multiple carriers or modulators whose index, ratio or frequency change, taking the values from your data;
  • a noise generator whose signal is filtered by an array of band-pass filters: the cutoff frequency and bandwidth are controlled by your data (see the sketch below);
  • a series of final effects (bit reduction, chorus, echo and reverb) that can beautify the final sound.

This is a good way to feed your data into your instrument…
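
For example, a very rough sketch of the filtered-noise idea (untested; the data value and the scaling are only placeholders):

instr 1
  ; white noise through a band-pass filter whose centre frequency and
  ; bandwidth would be driven by a rescaled data value (0-1 here)
  kData  = 0.5                    ; placeholder - replace with your data
  kCf    = 200 + kData * 4000     ; centre frequency, 200 to 4200 Hz
  kBw    = kCf * 0.2              ; bandwidth as a fraction of the centre frequency
  aNoise rand 0.3
  aFilt  butterbp aNoise, kCf, kBw
  outs   aFilt, aFilt
endin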

  • Some of these values can be further adjusted at will with knobs, sliders and so on…

The beautiful Iain McCurdy collection contains some Cabbage examples that give you a great place to start.

The final project is one laptop with your Cabbage instrument sending its audio output to a sound system.

A simple project to start with :wink:

keep us updated,

TI

@n3p0muk, I’ve uploaded a new version of Cabbage64 for Windows which I hope addresses some of those crashes you’ve been having. Perhaps you could try it out when you get a chance and let me know how it works for you?

sure, i’ll try it sometime soon! just one question: is it correct that i downloaded it from this page (cabbageaudio) and it’s the same setup.exe with exactly the same size as the previous one? if not, where do i find the new version you mentioned? thanks in advance

edit 2: nevermind! i guess i found it. (beta link / dropbox)

edit: also thanks @Codesound for your reply! good input, i hope to start working on this soon.

short ‘update’ here now:

[quote=“rorywalsh, post:8, topic:258”]new version of Cabbage64 for Windows which I hope addresses some of those crashes you’ve been having[/quote]sadly it still mostly appears to crash when there is something strange/wrong in the *.csd (but also with the *.txt file i was trying to use in combination with the readk opcode). but: i found out csoundqt was also really unstable on my system. i really don’t have a clue why. sometimes both frontends crashed on the same data/action, sometimes just one of them. generally not the best conditions for the trial and error i need to do. so i thought a reinstall could fix it. but it didn’t.

so time is running and i’m pretty frustrated 'cause i can’t really try out and create things. it’s a pity. also that i’m not a programmer, which would make things way easier. i’ll think about alternative ways now to reach at least a small part of the original goal. sorry for not having better news by now! :.(

Can you post a simple version of your Csound file and text file? We’re here to help, but it’s impossible to do so unless we get to see a little of what’s causing the problem. It could be many things…

[edit] here is the simple readk example that I tried. Make sure both the .csd file and the .txt file reside in the same folder.
fibonacci.txt (87 Bytes)
readkTest.csd (898 Bytes)

@rorywalsh well, thanks a lot! your csd (and the txt) works just fine for me.

surely i’m still too much of a newbie to see the main errors in my previous ‘experiments’. also i didn’t know the text file data is separated by line breaks; this might have caused the crashes as well. so, now that some (well formed / coded) data obviously works on my system, i’ll try to carry on based on your construction and adjust it to different needs.

since bad code seems to make csound crash a lot, i need to make even fewer mistakes. :wink:
i’ll keep you informed.

[edit] this example file readk-2.csd (886 Bytes) from the csound manual dumps random values, then reads and plays them, but makes cabbage (win7, 64bit) crash afterwards. maybe you could test it on your system.

If you can send any of the bad code that crashed it I can take a look. The few simple errors I try to insert don’t cause any problems. I guess errors made on purpose aren’t really errors at all!

It sure does! Not sure why. I’m going to take a look at it now. In the meantime, here’s another example that does work and does more or less the same thing, albeit with different opcodes. Perhaps it can help.

readkTest.csd (1.3 KB)

I don’t know why, but reading the data as ASCII floats seems to cause an issue with readk and Csound. I’ve updated the manual example so that it reads a different format of floats and it now works fine. I’ve attached it here.
newTest.csd (1.2 KB)
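
A rough guess at the shape of that change (this is not the attached file, just an illustration; the file name and the random control signal are placeholders): dumpk and readk take matching iformat codes, and 6 (32-bit binary floats) is the other float option besides 8 (ASCII floats).

giSine ftgen 0, 0, 16384, 10, 1          ; sine table

instr 1   ; writer: dumps a control signal to disk
  kVal  randomh 200, 800, 10             ; placeholder control signal
  dumpk kVal, "dump.bin", 6, 0.1         ; 6 = 32-bit binary floats, every 0.1 seconds
endin

instr 2   ; reader: reads the values back with the matching format code
  kIn   readk "dump.bin", 6, 0.1
  aSig  poscil 0.3, kIn, giSine
  outs  aSig, aSig
endin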

p.s. I’m not in a position to do any Cabbage development for the next few weeks, so I can’t dig any further for now. We can always find a workaround though :wink:

[quote=“rorywalsh, post:13, topic:258”]
If you can send any of the bad code that crashed it I can take a look.
[/quote]before i saw your replies i tried to read additional data to define the amplitude parameter, which somehow made it crash again (using the previous code). i guess it will happen again sometime. the only responses i’ve gotten so far were the usual windows ‘appcrash’ messages. would they help, or do i have to look up some other documents to post here?

both csd files you posted are working for me, too. i can tell you, listening to this simply modulated ‘data sonification’ already makes me really happy. progress, great stuff! (though i actually won’t need the dumpk opcode part of it, i just sent you that as an example of a file that’s crashing.)

anyway, i’ll try to set up something step by step using various external data-based parameters. could, for instance, readk4 be helpful there (to read 4 parameters from one file)?

but first i will have to carry on with the theoretical part of my thesis a bit, so i may temporarily be less active in actual ‘sound creation’. meanwhile i’m also looking out for and collecting some fitting (geo)data.
thank you again!

You don’t really need to, as you can use readk to read as many parameters as you like with a simple loop mechanism. If, for example, you wanted to grab 4 numbers at a time from a file you could do something like this (untested code alert!):

kCnt = 0

; untested sketch: each readk instance reads a new value from the file once per second
while kCnt < 4 do
   kVal1 readk "text.txt", 6, 1
   kVal2 readk "text.txt", 6, 1
   kVal3 readk "text.txt", 6, 1
   kVal4 readk "text.txt", 6, 1
   kCnt += 1
od

Or you could use an array:

kVal[] init 4
kCnt = 0
while kCnt < 4 do
   kVal[kCnt] readk "text.txt", 6, 1
   kCnt += 1
od

There are many options! But first get your geo-data together!

don’t want to get off track, but there’s another technique that won’t let me go: the audification of earthquake amplitudes.

to test it i have downloaded and normalized (-1 to 1) some time-series float data from a seismograph at the original resolution of 40 sps.
quake_test.txt (263.1 KB)

one would need to speed up the waveform to make it audible (e.g. x100). at first i assigned the data to the amp of an oscil, but for different reasons this clearly doesn’t seem to be the right way.

is there a known solution for generating a waveform from (this kind of) data? if so, i’d like to make a small map with several ‘audified’ earthquakes. if not, i’ll abandon these thoughts. :wink:

The best thing to do would be to write the data to a function table and then use an oscillator to read it. It’s not going to sound anything like an earthquake but should be interesting all the same! Check out gen23. That might do the trick. If not, let us know.

a bit hard for me to understand this, pls correct me if i’m wrong:
the gen23 example uses spectral amp and freq values that are read all at once from the file spectrum.txt into a table, and for this use it gets looped.

i tried a lot to make gen23 use my waveform data (the oscillation from -1 to 1, at an interval of 40 sps) and play it back as a sequence, so that the max/min values affect the amplitude and the speed of variation affects the frequency. tbh my poor coding knowledge wouldn’t allow me to get the intended result.
Playback03_1_quake.csd (1.1 KB)

then i adapted my data to have separate amp and freq columns like in the spectrum.txt. i unsigned and recalculated the values so that the amplitude runs from 0 at the lowest to 1 at the highest. but this approach seems physically and logically wrong, too.
Playback04_quake.csd (1.1 KB)
01_MAJO.txt (746.9 KB)

there should be a way to read the pure waveform and play it back as a sequence. in addition, the reading sample rate should be determinable (via the metro opcode?) and, if needed, a base frequency added. the speed multiplication should be around a factor of 1000, because the wave’s period is partly > 10 seconds (≘ 0.1 hz).

this scenario seems complex, but i imagine its realisation is quite possible. am i completely on the wrong track, or did i just not consider some minor detail?

The idea with GEN23 is that you can load all your data into a function table directly and then use it any way you wish. The example uses amp/freq data but you can do anything you like with it. I don’t quite follow what you are trying to do, but it’s simple to use the data from your text files as a signal’s waveform:

<Cabbage>
form caption("Untitled") size(400, 300), colour(58, 110, 182), pluginID("def1")
keyboard bounds(8, 158, 381, 95)
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d --midi-key-cps=4 --midi-velocity-amp=5
</CsOptions>
<CsInstruments>
; Initialize the global variables. 
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

;instrument will be triggered by keyboard widget
instr 1
kEnv madsr .1, .2, .6, .4
aOut oscili p5, p4, 1
outs aOut*kEnv, aOut*kEnv
endin

</CsInstruments>
<CsScore>
f1 0 16384 -23 "data.txt"
;causes Csound to run for about 7000 years...
f0 z
</CsScore>
</CsoundSynthesizer>

I’m not sure in what way you wish to use the data to sequence the track. You could, for example, do something as simple as this (note: it doesn’t sound the best!):

<Cabbage>
form caption("Untitled") size(400, 300), colour(58, 110, 182), pluginID("def1")
keyboard bounds(8, 158, 381, 95)
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d --midi-key-cps=4 --midi-velocity-amp=5
</CsOptions>
<CsInstruments>
; Initialize the global variables. 
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

;instrument will be triggered by keyboard widget
instr 1
kCnt init 0
if metro(2)==1 then
	kFreq tab kCnt, 1
	event "i", 2, 0, 1, kFreq*1000, .5
	kCnt = kCnt<ftlen(1) ? kCnt+1 : 0
endif
endin

instr 2
	kEnv madsr .1, .2, .6, .4
	aOut oscili p5, p4, 1
	outs aOut*kEnv, aOut*kEnv
endin
</CsInstruments>
<CsScore>
f1 0 16384 -23 "data.txt"
;causes Csound to run for about 7000 years...
f0 z
</CsScore>
</CsoundSynthesizer>

There is obviously lots of aliasing when you fill a waveform with your data, so you may well have to explore other options. For example, if you set your table size to 128 it sounds much better. You could also map the numbers to frequencies from a tempered tuning system rather than directly mapping to hertz values as I’ve done.
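
For instance, inside instr 1 above you could replace the direct hertz mapping with something like this (untested; it assumes the table values sit roughly in the -1 to 1 range):

kVal  tab kCnt, 1                   ; raw value from the data table
kNote = int(60 + kVal * 24)         ; map it onto MIDI notes 36-84
kFreq = cpsmidinn(kNote)            ; convert to an equal-tempered frequency
event "i", 2, 0, 1, kFreq, .5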