Hi Alina,
Apologies for the delayed reply.
Yes, it’s gone, thank you!
Great! FYI, I released MWorks 0.12.1 with this bug fix, as it seemed like it could affect or annoy many users.
Indeed, I rewrote my video task (a hack to get audio stimuli…) as an audio task, and now it appears to be working!
Also good news!
Though to make a sound group work, it is my impression that I need to add each individual sound as a resource (not just a folder). Otherwise I get a "path not found" error with an identical path. Not sure what's going on there. This isn't the case for visual stimuli, as far as I remember…
I’m not seeing this. In your file sound_set_definition_puretones_set1.mwel, if I remove all lines of the form
resource("sounds/tone_001_n_150.wav")
and replace them with the single line
resource('sounds')
the experiment still works. Do you have an example where this fails?
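In other words, something like the following should be enough. (I'm using wav_file and a placeholder sound name here just for illustration; substitute whatever sound type and names your set definition actually uses.)

// Declare the folder itself as a resource, rather than listing each file
resource ('sounds')

// Sounds can still reference individual files inside the folder by path
wav_file tone_001 (
    path = 'sounds/tone_001_n_150.wav'
    )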
If I play a video with sound instead of using play_sound as I did before, do you have an idea of how reliable the timing of the sound will be?
The start of the audio is still subject to delays due to audio sample batching, so the latency will be similar to what you’d see when attempting to start a sound at the next frame time.
One difference between video playback and audio-only playback is that, for video, the OS prioritizes audio/visual synchronization. This means that, if the audio does start late, the audio samples that were missed are simply skipped rather than delayed. I believe this is also what happened with the old MWorks 0.11 play_sound: MWorks asked the OS to start the sound immediately (start time = now), and the OS dropped any samples that corresponded to times before the first sample batch request. (I believe that was the root of the issue in this discussion, where the first click in the click train was dropped if it fell within the first 16-20 ms of the audio file.)
To be clear, I didn’t know about any of this until I implemented the MWorks 0.12 audio changes. As of those changes, play_sound does not drop late samples. Instead, it plays them late and issues a warning message, with the assumption that you really want to play the whole sound. However, nothing changes with regard to the audio track of a video file. In that case, the sound playback is handled entirely by the OS and is out of MWorks’ hands (although I think that prioritizing A/V sync is the correct behavior, so I don’t see this as an issue).
Are these delays recorded somehow? Is the trigger taking them into account, so that it's as accurate as can be?
As I said, playback of a video’s audio track is out of MWorks’ hands. If there are delays, MWorks has no knowledge of them. If you need to know precisely when a video’s sound started, you’ll have to measure it externally (e.g. with a microphone or other form of audio capture).
If I call play_sound with next_frame_time() + x ms and then call update_display, is the best estimate that the tone has started x ms after the beginning of the screen refresh?
Yes, but you have to be careful about how you do it. In pureTones.mwel, you do this:
play_sound (
    sound = sound_stimuli[sound_stim_index]
    start_time = next_frame_time()
    )
update_display ()
This isn’t 100% reliable, because the “next” frame may have changed between play_sound and update_display. This would be better:
var sound_start_time = 0
...
update_display (predicted_output_time = sound_start_time)
play_sound (
    sound = sound_stimuli[sound_stim_index]
    start_time = sound_start_time
    )
This way, you know that you’re asking the sound to start playing at the same time that the frame associated with the update_display will begin to be drawn. However, this still isn’t perfect, because you’ve lost some time waiting for update_display to complete. The best approach is to do what I do in my Simon example and invoke play_sound within a Render Actions stimulus:
var current_sound_started = false
...
render_actions start_current_sound {
    if (not current_sound_started) {
        play_sound (
            sound = sound_stimuli[sound_stim_index]
            start_time = next_frame_time()
            )
        current_sound_started = true
    }
}
...
current_sound_started = false
queue_stimulus (start_current_sound)
queue_stimulus (photodiode_image)
update_display ()
With this approach, you’re calling next_frame_time and scheduling the sound as early as possible. However, you still may get warnings saying “Sound x is starting y ms later than requested”, due to audio sample batching.
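To put rough numbers on that (the batch sizes and the 48 kHz sample rate below are purely illustrative assumptions, not necessarily what the OS actually uses), the requested start can slip by up to roughly one batch:

    delay ≈ samples per batch / sample rate
    1024 samples / 48000 Hz ≈ 21 ms
    256 samples / 48000 Hz ≈ 5 ms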
That said, I think I’ve found a solution to that issue. Specifically, after some additional research and experimentation, I found a way to reduce the number of samples in the batches and thereby reduce the start latency for audio playback. With this change, I can run Simon.mwel on both my iMac (60Hz refresh rate) and iPad Pro (120Hz refresh rate) without any warnings. I want to run some more tests, and if everything looks good, I’ll get this change into the nightly build so you can try it. I’ll let you know when it’s available.
Cheers,
Chris