Hi Chris,
I am in the process of implementing a standardized pipeline for neural recordings. I would like to use the stimulus hashes obtained from parsing #stimDisplayUpdate to prevent novice mistakes in stimulus indexing. This works well for image stimuli, but I ran into a large number of ‘video’ items (relative to the number of stimulus presentations). Are #stimDisplayUpdate events generated differently for video stimuli? Also, what is your hashing policy for video stimuli?
Thanks,
Yoon
Hi Yoon,
Announcements for video stimuli have the same filename field as images. They don’t include file_hash at present, but they should (as should announcements of audio file stimuli in #announceSound).
Cheers,
Chris
Hi Chris,
Thanks for the clarification. I can use the announced file paths in lieu of file_hash values (for now).
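For example, something along these lines (a rough sketch; it assumes the announced path is readable on the analysis machine, and hash_stimulus_file is just an illustrative helper name):

```python
import hashlib

def hash_stimulus_file(path, chunk_size=1 << 20):
    """Compute a SHA-256 digest of the stimulus file at the announced path.

    A stand-in for the missing file_hash field; only valid if the path in
    the announcement is readable from the analysis machine.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```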
Could you help me with one more question?
For an experiment presenting video stimuli, I am seeing redundant video-type events when parsing #stimDisplayUpdate. For any given MPEG (.mp4) stimulus lasting 400 ms, I see the same video-type event 23 times (same filepath, actions, start_time, etc.).
I’d like to match the count of video events from #stimDisplayUpdate against an independent photodiode signal attached to the monitor. This is how I’ve been validating image stimuli, and I was hoping to generalize it to video and audio stimuli. Any advice?
Best,
Yoon
Hi Yoon,
If there are no other dynamic stimuli onscreen, there should be roughly one #stimDisplayUpdate event for each frame in the video. If your 400 ms video runs at 60 frames per second, that’s 0.4 s × 60 frames/s = 24 frames, so 23 events sounds about right.
The one piece of data that varies across these otherwise-similar announcements should be the current_video_time_seconds field. This gives the time, in seconds on the video’s timeline, of the currently-displayed frame. It should start at zero on the first frame and then advance roughly in steps of the inverse of the frame rate. (There’s a bit more info in this discussion.) If you just want to count videos played, then looking for events with current_video_time_seconds equal to zero should work.
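In rough Python, the counting could look something like this (assuming you’ve already parsed each #stimDisplayUpdate event into a timestamp plus the list of per-stimulus dicts it announces; the container structure here is just illustrative):

```python
def count_video_starts(stim_display_updates):
    """Collect video playback onsets from parsed #stimDisplayUpdate events.

    `stim_display_updates` is assumed to be an iterable of
    (timestamp, stimuli) pairs, where `stimuli` is the list of
    per-stimulus dicts announced on that display update.
    """
    starts = []
    for timestamp, stimuli in stim_display_updates:
        for stim in stimuli:
            # Video entries carry current_video_time_seconds; it is zero
            # on the first displayed frame of each playback.
            if stim.get("current_video_time_seconds") == 0:
                starts.append((timestamp, stim.get("filename")))
    return starts
```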
For audio stimuli, you’ll need to look at #announceSound. Since sound playback isn’t tied to the display refresh cycle, each sound stimulus generates an independent #announceSound event whenever it plays, pauses, resumes, or stops. If you want to know when each sound starts playing, look for #announceSound events where the action field has a value of play.
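A minimal filter, again assuming the events have been parsed into (timestamp, payload-dict) pairs, might look like:

```python
def sound_start_times(announce_sound_events):
    """Return timestamps of #announceSound events whose action is 'play'.

    `announce_sound_events` is assumed to be an iterable of
    (timestamp, payload) pairs, where `payload` is the announcement dict.
    """
    return [timestamp
            for timestamp, payload in announce_sound_events
            if payload.get("action") == "play"]
```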
Cheers,
Chris