Hi Chris,
I was wondering if you’d be able to tell me how repetitions are run in MWorks. We have some large datasets that only require two repetitions, and instead of using MWorks to track repetitions, some of the researchers are following old guidelines and doubling the images so the set becomes twice as big (e.g., instead of running a 9k image set twice, the 9k images are duplicated and we run an 18k image set for one repetition). I’m hoping to figure out why this trend of duplicating images began and what MWorks repetitions do differently. All I understand so far is that having all of the images and their duplicates in the image set allows for a smaller time window between the first and second repetition of an image (repetitions of an image have a greater chance of being shown consecutively).
Thank you,
Sarah
Hi Sarah,
It’s hard to know what the original intent was, but I’m guessing that you’re correct. By duplicating each image, you can ensure that the experiment uses each image twice without insisting that every image is used once before any image is repeated. I don’t know why that would be desirable or important, but if it were, I can see why doubling the images would be a straightforward solution.
That said, there’s probably a better way to do it. If you can send me an example experiment (just the MWEL or XML code, not the images) that uses the image-doubling technique, I can take a look and suggest alternate approaches.
Cheers,
Chris
Thanks, Chris! This is one of the recent experiments we ran: one repetition, with 22k images (11k unique images). If there is any way to have MWorks randomize the images in this way while still keeping track of repetitions, that would be fantastic!
Best,
Sarah
robustness_v10.mwel (22.5 KB)
Hi Sarah,
Thanks for sharing the experiment.
In this case, all that’s required is a small change to the definition of the selection variable RSVP_test_stim_index. In short, replace this line:

```mwel
values = 0 : stimulus_set_size - 1
```

with this:

```mwel
values = 0 : stimulus_set_size - 1, 0 : stimulus_set_size - 1
```

In other words, just include the full range of image indices twice. Obviously, you should also change the definition of stimulus_set_size to reflect the number of unique images.
I think that should resolve the issue (at least for experiments that rely on selection variables for image selection). If you run into any problems, please let me know.
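For context, here is a minimal sketch of what the modified declaration might look like. Only the values line comes from your experiment; the variable value and the other declaration details are assumptions about how a typical selection variable is set up:

```mwel
// Sketch only: everything except the "values" line is an assumption.
var stimulus_set_size = 11000  // number of *unique* images

// Each index appears twice in "values", so drawing without
// replacement presents every unique image exactly twice.
selection RSVP_test_stim_index (
    values = 0 : stimulus_set_size - 1, 0 : stimulus_set_size - 1
    selection = random_without_replacement
)
```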
Cheers,
Chris
Jon asked:
In MWEL, it looks like you can use either random_without_replacement or random_with_replacement in defining a random sequence. Users have been giving us image sets with duplicated image files, possibly in order to allow an image to show up more than once before all the images have been presented. We’re wondering if just switching from random_without_replacement to random_with_replacement would give them what they really want.
If the experiment’s goal is to show each image exactly twice, then no, that’s not what you want. If you have 1000 images and select from them using random_with_replacement, MWorks will make 1000 completely random draws from the image set, with no regard for the outcome of previous draws. This means that a given image could be presented any number of times, including zero times.
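To make the contrast concrete, here is a sketch of the two declarations side by side (the variable names are illustrative, and any other selection-variable parameters are omitted):

```mwel
// Without replacement: each value is drawn exactly once per cycle,
// so no value repeats until all 1000 have been used.
selection idx_without_replacement (
    values = 0 : 999
    selection = random_without_replacement
)

// With replacement: every draw is independent, so over 1000 draws
// a given value may come up several times or never.
selection idx_with_replacement (
    values = 0 : 999
    selection = random_with_replacement
)
```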
Please see my comment above for a solution for experiments that use selection variables.
Chris
Thanks Chris, this is such an easy fix! And saves us a lot of space and time. I will pass it on to the researchers currently using duplicate images.
Best,
Sarah
Hi Sarah,
You probably already thought of this, but you’ll also need to change the value of stimulus_set_repetitions from 1 to 2.
Cheers,
Chris
Hi Chris,
Yes, ran into that one. Right now we’re stuck because it seems to be showing some images more than twice, sometimes 3 to 5 times, while showing other images once or not at all.
For example, we ran two repetitions on an image set of size 3 just to be sure, and it presented images 1, 3, 3, 2, 1, 3: image 3 was shown three times and image 2 only once.
Can you send me the experiment file that you used for that test?
Thanks. It looks like things are set up correctly.
Are any of the trials failing? If the experiment enters the state “RSVP stim reject”, it will call reject_selections, which will put the current index back in the pool to be selected again on a subsequent trial.
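For reference, the failure path I’m describing looks roughly like this. The state body and the target state name here are placeholders, not your actual code:

```mwel
// Placeholder sketch of a reject state: reject_selections returns the
// current, unaccepted index to the pool, so it can be drawn again
// on a subsequent trial.
state 'RSVP stim reject' {
    reject_selections (RSVP_test_stim_index)
    goto (target = 'RSVP start trial')  // placeholder target state
}
```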
If no trials are failing, there must be a logic error in the experiment code. Let me take a longer look and see if I can find the problem.
Ah, I don’t think there would be any failed trials. I’m just setting eye_in_window to true and letting it run.
I think I figured it out. Since the repeated indices are now included in the selection variable’s values, just changing stimulus_set_repetitions to 2 isn’t sufficient. We also need to change some code in state “RSVP stim accept”. Specifically, the choose should look like this:
```mwel
choose {
    when (stimuli_shown < stimulus_set_size * stimulus_set_repetitions) {
        next_selection (RSVP_test_stim_index)
    }
    otherwise {
        reset_selection (RSVP_test_stim_index)
        stimulus_set_repeat_count += stimulus_set_repetitions
        stimuli_shown = 0
    }
}
```
In words: If the number of stimuli shown is less than the total number of selectable indices in the selection variable (stimulus_set_size * stimulus_set_repetitions), advance to the next selection. Otherwise, reset the selection variable and increment stimulus_set_repeat_count by stimulus_set_repetitions (i.e. 2). With your three-image test, that’s 3 * 2 = 6 selections per pass through the set.
With the change to the definition of RSVP_test_stim_index, tracking stimulus set repetitions no longer makes much sense, and the experiment could be made simpler by removing that tracking. But changing the choose as described above is the quickest way to get things working correctly.
I’ve attached a new version of your experiment that contains my changes. Can you test it and confirm that it works correctly for you, too?
(One note: When the experiment completes, stimulus_presented_list is going to contain 8 indices, not 6, because stimuli_per_trial is set to 8. However, its first 6 elements should always be 1,1,2,2,3,3 in random order.)
Cheers,
Chris
normTest-fixed.mwel (22.4 KB)
Thanks Chris! Yes this seems to work well. Hopefully we can use this next week!
Best,
Sarah