thanks for your replies, and sorry for being so quiet in the past weeks!
i'm still researching the main topic of my thesis, so i'm not really sure yet if or how i should best sonify things with csound.
i guess it would be the plain generation of different sounds (or sound qualities) that differ from each other: several 'instruments' that may vary in pitch/frequency, amplitude, envelope (ADSR), timbre, rate of change and/or location.
not sure what exactly you mean. some sound parameters i would define as fixed values, and the (geo)data values would influence other variables (in sequences). my question here is: is it a good approach to simply copy/paste data into the project and make it listenable by mapping it to different parameters (such as frequency and so on)? if yes, should this happen in the CsScore section, or is it better to code it in the CsInstruments section? what is the usual approach in this case? in the end i may only use csound to generate and save short pieces that represent the data, and then integrate these sound files into other applications where they'll be available.
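to illustrate what i mean with the copy/paste idea, here's a rough sketch (in python, purely as an assumption of how it could work, not a finished mapping): each data row becomes one "i" statement for the CsScore section, with the data value scaling the frequency p-field while the amplitude stays fixed.

```python
# hypothetical example: map (geo)data values to Csound score events.
# each value -> one "i" statement for an assumed instrument 1 with
# p-fields:  i <instr> <start> <dur> <amp> <freq>
def rows_to_score(values, base_freq=220.0, step=0.5, dur=0.4):
    """Turn a list of data values into CsScore 'i' lines (one per value)."""
    lines = []
    for n, value in enumerate(values):
        start = n * step                  # events play in sequence
        freq = base_freq * (1.0 + value)  # the data drives the frequency
        amp = 0.5                         # fixed amplitude for now
        lines.append(f"i 1 {start:.2f} {dur} {amp} {freq:.2f}")
    return "\n".join(lines)

print(rows_to_score([0.0, 0.25, 0.5]))
# → i 1 0.00 0.4 0.5 220.00
#   i 1 0.50 0.4 0.5 275.00
#   i 1 1.00 0.4 0.5 330.00
```

the generated text could then be pasted into the CsScore section, which would keep the instruments generic and put all the data-specific material in the score.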
for now, the acquisition is separated from the sonification, so there's no real-time acquisition like streaming data. i'll get some data, generalize it if needed, and transform it for use in csound/cabbage. in a few examples, the data should only influence the sound variables.
but i'd also like to try using data that defines the sound directly, amplitude- or frequency-wise (e.g. data from electromagnetic radiation or earthquake magnitudes). but that's another approach.
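for that direct approach, a minimal sketch (again python, and the linear rescale is just an assumption; a real mapping might be logarithmic or perceptually scaled) could normalize raw magnitudes into a usable amplitude range before handing them to csound:

```python
# hypothetical example: map earthquake magnitudes directly to amplitudes.
# linear rescale into [lo, hi]; a perceptual/log mapping may fit better.
def magnitudes_to_amps(mags, lo=0.05, hi=0.9):
    """Rescale a list of magnitudes linearly into the range [lo, hi]."""
    m_min, m_max = min(mags), max(mags)
    span = (m_max - m_min) or 1.0  # avoid division by zero for flat data
    return [lo + (m - m_min) / span * (hi - lo) for m in mags]

print(magnitudes_to_amps([2.0, 4.0, 6.0]))
```

the resulting values could then feed the amp p-field directly, so the data itself defines the loudness rather than only modulating a variable.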
are there any 'presets' where several instruments and their sequences are driven by values from external or pasted data?