I just imported the sample and built with IL2CPP on and for ARM64
https://drive.google.com/file/d/1Xtr9KnpX2hA_yuVscGtspE1gEk0S4dxF/view?usp=drive_link
hey there - i need the link to be public, thanks!
sorry try now
okay - opened it up and just trying it in the Editor but getting no sound at all so far. looks like the SF2 files copied into the persistent path you set however. is it supposed to play audio by itself or is it waiting for MIDI triggering?
ahhh - never mind. first run copied the files. second run plays back - all good! now to try the Android build
That’s strange as it should play on the first run
okay the build and run worked! although it looks like a portrait oriented mobile app and not a VR app. so from what i can see it looks like you used Persistent data path and not StreamingAssets folder, so i will try that option and see.
i’m doubtful it will have much effect on my end in VR since it does seem the SF2 files are loading, but perhaps there are different issues explaining why it crashed in the VR builds before and isn’t crashing right now (but is also not playing audio).
Yes, Csound on Android cannot read from the StreamingAssets folder, as you would need to use a WebRequest to access it; the persistentDataPath, instead, is readable directly
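If you do want to ship the SF2 files in StreamingAssets, the usual pattern is to copy them into the persistentDataPath at startup, roughly like this (an untested sketch, assuming a recent Unity with UnityWebRequest.Result; the file name is just a placeholder):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

public class SoundFontCopier : MonoBehaviour
{
    // Placeholder file name - replace with your own SF2
    [SerializeField] private string fileName = "MySounds.sf2";

    private IEnumerator Start()
    {
        string source = Path.Combine(Application.streamingAssetsPath, fileName);
        string target = Path.Combine(Application.persistentDataPath, fileName);

        if (File.Exists(target))
            yield break; // already copied on a previous run

        // On Android, StreamingAssets lives inside the APK, so it must be read via UnityWebRequest
        using (UnityWebRequest request = UnityWebRequest.Get(source))
        {
            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
                File.WriteAllBytes(target, request.downloadHandler.data);
            else
                Debug.LogError($"Could not copy {fileName}: {request.error}");
        }
    }
}
```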
Does your soundfont play ok in Cabbage?
Does it play ok in the editor?
Is it only a Quest issue?
sorry for the delay - i figure you’re in bed or close to it. i meant the Persistent Data Path for macOS. i was already setting the Persistent Data Path for Android so no issues there.
and then i ran into other issues - unfortunately the Environment demo scene had a CubeController script conflicting with one in the project and i deleted the wrong one, which has resulted in no audio playing again, but for a different reason. so i’m trying to fix that - i should be back up and running in a few hours - then i can try testing from my end.
okay @giovannibedetti - HUZZAH!!!
i got it to work with sfinstr and you were absolutely right about disabling the game objects - i didn’t want to do it because there were other components that needed to be there for the initial start up. but i delayed when those hit until after Csound initialized and it plays back just fine now. there are still many many miles to go, but i FINALLY have something i can start demoing around. THANK YOU
for all the help and patience troubleshooting. i will certainly have more questions later on, but i’m on the way! i’ll set up a screen recording session in the next few days to demo the basic interaction.
also, so far, starting and stopping the (currently) 8 instances of CsoundUnity via a UI button seems to be working fine and is running with no dropouts or performance bottlenecks.
YEEEEEEE
You’re welcome!
This project is very interesting and has a lot of potential so I was eager to help as much as I could
Keep us posted!
How are you stopping the instances?
I suggest you use score events to stop instruments instead of disabling the object completely.
It will still work, but you could get clicks, because disabling the object basically stops the OnAudioFilterRead callback, which means that PerformKsmps (the Csound block calculation) stops being called.
So you could have jumps in the audio output sample values, and therefore clicks.
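Something like this from the Unity side (a rough sketch - it assumes your instrument is instr 1 and that your CsoundUnity version exposes SendScoreEvent):

```csharp
using UnityEngine;

public class InstrumentStopper : MonoBehaviour
{
    private CsoundUnity csound;

    private void Awake()
    {
        csound = GetComponent<CsoundUnity>();
    }

    // Hook this to the UI button instead of disabling the GameObject.
    // A negative p1 turns off a held instance of that instrument,
    // so any release envelope can finish and you avoid clicks.
    public void StopInstrument(int instrumentNumber)
    {
        csound.SendScoreEvent($"i -{instrumentNumber} 0 1");
    }
}
```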
Edit:
btw the best way to be sure Csound loads the soundfonts, without having to start with disabled CsoundUnity objects, is to have a separate “loading scene”
There you do the file copies, and when they’re finished you load your main scene
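Roughly like this (a sketch - the soundfont list and scene name are placeholders, and the actual copy is the same UnityWebRequest idea as above):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.SceneManagement;

public class LoadingScene : MonoBehaviour
{
    // Placeholder file names and scene name
    [SerializeField] private string[] soundFonts = { "MySounds.sf2" };
    [SerializeField] private string mainSceneName = "Main";

    private IEnumerator Start()
    {
        foreach (string sf in soundFonts)
        {
            string target = Path.Combine(Application.persistentDataPath, sf);
            if (!File.Exists(target))
                yield return CopyToPersistentDataPath(sf, target); // see the earlier copy sketch
        }

        // Only enter the main scene once every soundfont is in place,
        // so the CsoundUnity components there can load them on Start
        SceneManager.LoadScene(mainSceneName);
    }

    private IEnumerator CopyToPersistentDataPath(string fileName, string target)
    {
        // same UnityWebRequest copy as in the sketch above
        yield break;
    }
}
```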
so far it’s setting tempo to 0.0 for everything. seems to work well!
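on the Unity side it’s basically one SetChannel call per instance when the stop button is hit - rough sketch, where “tempo” is just the control channel name my csd happens to read and SetChannel is the CsoundUnity call i’m assuming for this:

```csharp
using UnityEngine;

public class SequencerTempoControl : MonoBehaviour
{
    [SerializeField] private CsoundUnity[] instances; // the 8 CsoundUnity components

    // Wired to the UI stop button; "tempo" is a placeholder for whatever
    // control channel the csd reads for its sequencer clock.
    public void StopAll()
    {
        foreach (CsoundUnity csound in instances)
            csound.SetChannel("tempo", 0.0);
    }
}
```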
Ah nice trick!
Well done guys, that was epic! But I’m delighted to hear things are looking better now
it was being able to set ksmps to larger values that opened the door. after that it was more like i should have taken GB’s advice almost to the letter. so far i’ve learned:
what i got confused about was everything working fine on the desktop. but doing the same in an Android build on Quest is another matter. if there’s documentation on this i think the wording should be amended so these read as requirements and not suggestions.
anyway looking forward to pushing on - BTW if i have more Csound-ish questions should i use this thread, as it all relates to the project? or is this one getting too epic?
Got it! I will add this to the docs when I have time.
This is up to you, this thread has lots of important info but not a very clear title
Yeah probably best to try to isolate the issues a little. As @giovannibedetti says, this thread is chock full of great info, but you need to dig deep to find it! But it’s still infinitely easier to parse than a Discord discussion!
ok got it.
BTW i started experiencing some crashing again after i spent a fair amount of time troubleshooting other SF2 files - i was trying to find timbres that sounded less cheesy, as i know the music generation is currently contributing a bit to this situation by just randomly turning on notes. anyway after running through that gauntlet and getting some decent sounds i finally started building again - and crashing repeatedly whenever i started playback.
everything i checked looked fine - files loaded and copied. still crashing…
want to know what it was crashing on? it was the letter case of the SF2 file. i saved it with Polyphone and this changed the extension case to ‘.sf2’ not ‘.SF2’, but the sfload statement used ‘SF2’ for the extension.
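in case it helps anyone else, a quick sanity check is just logging exactly what landed in the persistent path before Csound starts, since the Android filesystem is case sensitive (rough sketch):

```csharp
using System.IO;
using UnityEngine;

public class SoundFontSanityCheck : MonoBehaviour
{
    private void Start()
    {
        // Android's filesystem is case sensitive, so 'Bank.SF2' and 'Bank.sf2'
        // are different paths. Logging the exact names copied into
        // persistentDataPath makes a mismatch with the sfload path obvious.
        foreach (string path in Directory.GetFiles(Application.persistentDataPath))
        {
            if (path.EndsWith(".sf2", System.StringComparison.OrdinalIgnoreCase))
                Debug.Log($"Found soundfont: {Path.GetFileName(path)}");
        }
    }
}
```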
anyway, changing that letter case solved the issue. and here’s the resulting playthrough:
lots to keep working on but i’m starting to feel closer to having a build happening.
Thanks for letting us know. As soon as I have some time, I will add this info to the docs. Plenty of useful info in this thread.
Great job, keep on working and feel free to post your updates here! It’s always a pleasure to see that CsoundUnity is useful
okay - @rorywalsh and @giovannibedetti i wanted to update this a bit. things are definitely progressing and i’ll share the results soon, but i wanted to add a sort of realtime ability to play a Layer’s sound. i’ll be using the controllers or a finger to ‘paint’ individual stars / events / nodes, and the interaction is working fine visually BUT it doesn’t play a note. right now painting notes usually takes place when the sequencer is stopped but to do that i set tempo to 0.0. this works fairly well so far but i have a feeling that the note (that is the ‘row’) won’t play at all if tempo’s at 0.0 since playing of notes is tied up with the sequencer. or maybe it’ll just sustain continuously?
so i think what i need is a live input to the active instrument via a channel. i can probably figure out the note number i need to send on the Unity side to the Csound channel but obviously it needs to be independent of the sequencer tempo and just fire off manually. how can i manage this?
Can you just call the event opcode in real time, independent of the sequencer?
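From the Unity side the equivalent would be firing a score event directly when a star is painted, something like this sketch (the instrument number and p-field layout are placeholders, and SendScoreEvent is the CsoundUnity call I’d assume here):

```csharp
using System.Globalization;
using UnityEngine;

public class PaintNoteTrigger : MonoBehaviour
{
    [SerializeField] private CsoundUnity csound;

    // Call this when a star/node is painted. The instrument number and
    // p-field layout are placeholders - match them to what your csd expects.
    public void PlayNote(int noteNumber, float duration = 1f)
    {
        // InvariantCulture so the duration never gets a comma as decimal separator
        string scoreEvent = string.Format(CultureInfo.InvariantCulture,
            "i 1 0 {0} {1}", duration, noteNumber);
        csound.SendScoreEvent(scoreEvent);
    }
}
```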