We’re considering running a modified version of our RSVP experiment, where each trial simulates a saccade-like scan of a “big” image.
Each trial will consist of a sequence of crops (K = 6-8) from an image, presented without masks within a trial (200ms “on” per image, ~2-5sec per trial).
The idea is that the user just hands over these N crops as the RSVP stimuli, 1…N, where N = K * n and n is the total number of “big” images; every subset of crops [(j - 1) * K + 1, j * K], j = 1…n, defines a valid trial/sequence.
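In other words (K, j, and the variable names below are illustrative, not part of any actual file), the crop indices belonging to trial j would be:

```
var K = 6                          // crops per "big" image
var j = 1                          // index of the "big" image / trial, 1...n
var first_crop = (j - 1) * K + 1   // e.g. trial 1 starts at crop 1
var last_crop = j * K              // e.g. trial 1 ends at crop K
```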
A) How should we revise our standard RSVP .mwel file to implement this experiment? (our current .mwel is attached)
B) There is also another version of the above experiment, where the “saccades” appear in different orders. Specifically, we would allow for permutations of the crops within a defined sequence across repeated presentations (but sequences should not mix).
Lastly, for this experiment, an entire trial should be invalidated if it is not fully completed (even if part of the trial was successfully fixated).
- each set of crops of a particular image constitutes a trial,
- the order of the crops within a trial should be variable, and
- the trial is a failure if the monkey breaks fixation at any time.
Is that right? If so, then I have a few questions/comments:
How will the order of the crops in a trial be determined? Is it random, or will you draw from a set of predetermined sequences?
Rather than creating separate image files for each crop, you could use just the “big” image and “crop” at runtime using masks (or even just four rectangles). That would give you the option of varying the content of each crop, if you wanted. Just something to consider.
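One way to sketch the four-rectangles idea in MWEL (all names, sizes, positions, and the filename scheme here are hypothetical, not from the attached example):

```
// Show the full "big" image, then cover everything outside the current
// crop with background-colored rectangles (one of four occluders shown)
image_file big_image (
    path = 'images/big_${image_selection}.png'  // hypothetical filename scheme
    x_size = 20
)

rectangle occluder_left (
    color = 0.5, 0.5, 0.5            // match the screen background
    x_size = 8
    y_size = 20
    x_position = crop_x - 10         // crop_x: hypothetical crop-center variable
)
// ...plus occluder_right, occluder_top, and occluder_bottom, repositioned
// per crop so only the crop region stays visible
```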
We would ideally like to have both configurations available: Random order and fixed (predetermined). But definitely the more important one is the predetermined sequence.
Interesting option but we would ideally have the crops controlled and determined on the user’s end.
I’ve attached an example that demonstrates a possible way to implement this experiment. Here are the key points:
Each image file is named with two indices (e.g. img_3_5.png). The first index identifies the “big” image, and the second index gives the crop number. (An alternate approach would be to have a separate directory for each “big” image.)
The selection variable (image_selection) selects a “big” image.
The set of crops for each “big” image is loaded at the beginning of the trial and unloaded at the end.
If fixation is broken at any point during the trial, the whole “big” image is rejected and will be selected again in a later trial.
At present, the crop order is sequential. If you wanted to switch to random order, you could add another selection variable (e.g. crop_selection) to determine the crop order.
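Pieced together, the core of an example along these lines might look like the following MWEL sketch. This is not the attached example itself: the component names, parameter values, and crop count are placeholders, so check everything against the actual attachment.

```
// Picks the "big" image; values are placeholders
selection image_selection (
    values = 1:10
    selection = random_without_replacement
)

// Picks the crop within the current "big" image; switch 'sequential'
// to 'random_without_replacement' for random crop order
selection crop_selection (
    values = 1:6
    selection = sequential
    advance_on_accept = true
)

// Deferred image whose path is evaluated at load time, following the
// img_<image>_<crop>.png naming scheme
image_file current_crop (
    path = 'images/img_${image_selection}_${crop_selection}.png'
    x_size = 8
    deferred = explicit
)

// Inside the trial, for each crop: load, show for 200 ms, then unload
load_stimulus (current_crop)
queue_stimulus (current_crop)
update_display ()
wait (200ms)
dequeue_stimulus (current_crop)
update_display ()
unload_stimulus (current_crop)
accept_selections (crop_selection)

// On a fixation break, reject_selections (image_selection) would put the
// "big" image back in the pool to be selected again in a later trial
```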
Perfect! Thank you so much, Chris.
To clarify, is this tested and ready to go as-is as a substitute for the RSVP example you sent previously (up to hyperparameters, of course)? What about stimulus_presented and the other variables relied on during preprocessing?
We are going to be tight in monkey time for this experiment so just trying to minimize the room for errors in execution.
You need to declare the “images” directory as a resource, like my example does. You’ll need to do the same for the sounds directory (and any other resource files that your experiment needs).
Somewhere in your MWEL file, add the following lines:
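The lines themselves aren't reproduced above; in MWEL, resource declarations look something like this (directory names assumed to match yours):

```
resource ('images')
resource ('sounds')
```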
Regarding the “There are no more items left to draw” error:
My example set the selection variable’s advance_on_accept parameter to true, so that it didn’t have to invoke next_selection. Since your experiment already uses next_selection, just remove that parameter from the definition of image_selection:
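The revised definition would then look something like this (the values shown are placeholders, not those from your actual file):

```
// advance_on_accept removed, since the experiment calls next_selection itself
selection image_selection (
    values = 1:10                        // placeholder; keep your existing values
    selection = random_without_replacement
)
```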