Hi Alina & Yoon,
It’s taken a while, but I’ve finally completed major revisions to MWorks’ audio support. Everything I describe below is in the current nightly build.
The basic audio file stimulus now supports any audio file format readable by the operating system. In other words, if Music or QuickTime Player can play it, MWorks should be able to play it, too. Accordingly, the type of the stimulus is now sound/audio_file, although sound/wav_file is still accepted as an alias.
Additionally, audio files can now be configured to loop indefinitely or repeat a fixed number of times. Your experiment can also be notified when playback has ended.
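As a rough sketch of how this might look in MWEL (the parameter names loop, repeats, and ended are my guesses at the spelling, based on the features described above):

```
// Hypothetical declaration; "repeats" and "ended" are assumed parameter names
sound/audio_file beep (
    path = 'sounds/beep.wav'
    repeats = 3          // play three times, then stop
    ended = beep_done    // variable set when playback finishes
    )
```

Setting loop = true instead of repeats would presumably make the sound play indefinitely until explicitly stopped.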
In addition to the audio file stimulus, MWorks now has a simple tone stimulus. I don’t know how useful it will be in real experiments, but it’s there if you want it.
The tone stimulus is 100% dynamically generated, so it serves as a template for other generated sounds we might add in the future. (An obvious example is a white noise generator.)
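A minimal tone declaration might look like this (the frequency parameter name is an assumption):

```
// Hypothetical sound/tone declaration
sound/tone pure_tone (
    frequency = 440    // Hz
    amplitude = 0.5
    )
```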
Dynamic volume and pan
In addition to amplitude (i.e. volume), you can now control a sound’s stereo pan. Both amplitude and pan can be changed at any time, even when the sound is playing.
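Since both parameters accept variables, changing them mid-playback could look something like this sketch (assuming pan ranges from -1, full left, to +1, full right):

```
var vol = 0.5
var position = 0.0   // assumed range: -1 (left) to +1 (right)

sound/audio_file beep (
    path = 'sounds/beep.wav'
    amplitude = vol
    pan = position
    )

protocol {
    play_sound (beep)
    wait (500ms)
    position = 1.0   // shift to the right channel while playing
    wait (500ms)
}
```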
Like visual stimuli, sounds can now be loaded and unloaded while an experiment is running. Also, you can control whether a sound loads automatically when the experiment loads via the new autoload parameter. If you’re running an experiment that uses hundreds or thousands of distinct sound stimuli, manually loading and unloading individual sounds as needed can significantly reduce memory and/or CPU usage.
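A sketch of manual load management, assuming the action names load_sound and unload_sound parallel their visual-stimulus counterparts:

```
sound/audio_file big_sample (
    path = 'sounds/big_sample.wav'
    autoload = false   // don't load when the experiment loads
    )

protocol {
    load_sound (big_sample)    // assumed action name
    play_sound (big_sample)
    wait (2s)
    unload_sound (big_sample)  // assumed action name
}
```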
When playing a sound, you can now optionally specify a start time. This allows you to synchronize the playback of multiple sounds and/or coordinate sound playback with other experiment events (e.g. stimulus display updates).
This feature gives you very precise control over when playback begins. (For example, see this experiment, which shifts two tones with identical frequencies in and out of phase via small offsets in start time.) However, there’s a limit to how near in the future the start time can be. The reason for this is that the system acquires samples from each sound in batches, and these batches typically encompass 10-20ms of play time. Hence, if you attempt to schedule a sound to play 10ms from the present time, you may already be too late, in that the batch of samples that includes now+10ms has already been acquired. On the other hand, scheduling a sound to play 30ms from the present should be no problem. If a sound does start playing later than requested, MWorks will issue a warning message.
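To make the timing constraint concrete, here's a sketch that starts two sounds at the same instant, 30ms in the future, leaving enough slack to clear the batching window (the start_time parameter name is an assumption):

```
var t = 0

protocol {
    // Schedule both sounds for the same moment, safely beyond
    // the 10-20ms sample-batching horizon
    t = now() + 30ms
    play_sound (sound = sound_a; start_time = t)
    play_sound (sound = sound_b; start_time = t)
}
```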
Importantly, the 10-20ms “batching” interval described above is very close to the refresh period of typical displays. This means that attempting to synchronize a sound with the next display refresh (e.g. by setting the start time to next_frame_time()) may not work reliably. I’m not sure what the right solution is here, but I’m confident we’ll figure something out.
Just as visual stimuli have stimulus groups, sounds now have sound groups. These can be nested (groups can contain other groups).
Sound groups should make it much easier to design experiments that dynamically select from a set of available sounds. They should also work well with replicators.
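For instance, random selection from a group might be sketched like this (the bracket-indexing syntax and the disc_rand selection are assumptions, modeled on how stimulus groups work):

```
sound_group simon_tones {
    sound/tone tone_1 ( frequency = 440 )
    sound/tone tone_2 ( frequency = 554 )
    sound/tone tone_3 ( frequency = 659 )
    sound/tone tone_4 ( frequency = 880 )
}

protocol {
    // Pick one of the four tones at random (assumed indexing syntax)
    play_sound (simon_tones[disc_rand(0, 3)])
}
```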
For a (slightly) fun example that demonstrates some of the new features described above, see this experiment. It mimics the game Simon by playing a random sequence of four pre-defined tones, synchronized with different colors.
Hopefully these new features will provide a solid foundation for audio-focused experiments. I’m sure there’s still more we’ll need to add, so please don’t hesitate to suggest additional changes and improvements. I’ll be happy to hear your feedback about any or all of this.