Incorrect number of accepted selections reported by num_accepted()

Hi Chris,

I’m trying to build an experiment that uses selection variables instead of range replicators, so that the selection isn’t reset when I stop the experiment, as you suggested some time ago. Please see ‘Attention Sampling Protocol’, where this is implemented.

What I did was create a list of all possible conditions in SOA_location. This variable has 16*2 elements, corresponding to 16 SOAs crossed with 2 target locations. To select from it, I made a selection variable, SOA_location_select. A selection from this variable is accepted when the participant makes a correct response (‘distributed_success’) and rejected in all other cases (‘distributed_false_alarm’, ‘distributed_fixation_break’, ‘distributed_failure’).

The experiment should stop when the number of accepted selections (num_accepted()) equals the size of SOA_location, 32. However, the number reported by num_accepted() doesn’t seem to be correct. It should increase by 1 every time a trial is completed correctly, but on many trials it increases by 2 (e.g. 1, 3, 4, 5, 6, 8). As a result, the experiment stops before 32 trials have been run, because num_accepted() has already reached 32. Do you have any idea what might cause this?
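
For reference, here is a stripped-down sketch of the relevant pieces (the SOA values, location labels, and the ‘distributed_end’ state name are placeholders, the list is abbreviated to 2*2 conditions instead of 16*2, and the stop check is shown only as a comment, since its exact location in the protocol doesn’t matter here):

// Abbreviated, placeholder version of the condition list
var SOA_location = [[0.05, 'left'], [0.05, 'right'], [0.10, 'left'], [0.10, 'right']]

selection SOA_location_select (
    values = 0:size(SOA_location) - 1
    selection = random_without_replacement
    )

// The decision to run another trial is gated on the number of accepted
// selections, roughly like this:
//
//     goto (
//         target = 'distributed_end'
//         when = num_accepted(SOA_location_select) >= size(SOA_location)
//         )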

Best
Tenri

Hi Tenri,

I see that the “distributed_pre_acquire” state returns to “distributed_start” when not(fix_flag) is true. When this happens, next_selection is invoked again, without a preceding reject_selections. Maybe this is the source of the discrepancy?
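
Schematically, I mean this kind of pattern (the state bodies and the ‘distributed_acquire’ target below are hypothetical; I’m only illustrating the control flow):

// Inside the trial's task system:
state 'distributed_start' {
    next_selection (SOA_location_select)
    goto (target = 'distributed_pre_acquire')
}

state 'distributed_pre_acquire' {
    // Fixation lost: go back to distributed_start, which invokes
    // next_selection again, so a single trial can draw two (or more)
    // selections without any accept_selections/reject_selections in between
    goto (
        target = 'distributed_start'
        when = not(fix_flag)
        )
    goto (
        target = 'distributed_acquire'
        when = fix_flag
        )
}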

Chris

Hi Chris,

I made a new state to which “distributed_pre_acquire” goes when not(fix_flag) is true. In that state, I don’t invoke next_selection. However, the problem persists.

I think it is caused by a faulty reject_selections() somewhere in the code. To test this, I changed the selection method of SOA_location_select to sequential. When I entered states that should reject selections (e.g. “distributed_fixation_break”, “distributed_ignore”), MWorks moved on to the next selection values instead of redoing the rejected ones. After that, when I made a correct response (“distributed_success”), num_accepted() increased by more than 1.

Do you know how to solve this?

Thank you
Tenri

Because it was missing from the code I sent you, I also added reject_selections() in “distributed_ignore” when testing the above, so that should not be the problem.

Hi Tenri,

I see the problem now. When used on a selection variable, reject_selections implicitly invokes next_selection. Since your “distributed_start” state also invokes next_selection (explicitly), the result is that you make two selections after every reject_selections. Then, at the next accept_selections, you accept both of these selections.
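
In terms of the actions involved, the sequence around a rejected trial looks roughly like this (a simplified schematic, not your actual code):

// Trial N fails:
reject_selections (SOA_location_select)    // rejects the current selection and
                                           // implicitly advances to a new one

// Trial N+1 starts in 'distributed_start':
next_selection (SOA_location_select)       // advances again, so two selections
                                           // are now pending

// Trial N+1 succeeds:
accept_selections (SOA_location_select)    // accepts both pending selections,
                                           // so num_accepted() jumps by 2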

Looking back, I see that my “persistent selection” example led you astray on this point, as it invokes next_selection explicitly at the start of every trial (but gets away with it, as it never invokes reject_selections). Sorry about that!

I think the right approach is to remove the next_selection call entirely and instead set SOA_location_select’s advance_on_accept parameter to true, e.g.

selection SOA_location_select (
	values = 0:size(SOA_location) - 1
	selection = random_without_replacement
	advance_on_accept = true
	)

This will cause accept_selections to invoke next_selection automatically. (Also, you don’t need to call next_selection after reset_selection, as it will be called automatically the first time you use the value of SOA_location_select.)
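
With that change, the trial flow only ever needs to accept or reject; nothing advances the selection explicitly. Roughly (the goto targets and the current_condition variable are placeholders, not your actual names):

state 'distributed_start' {
    // No explicit next_selection here: the first use of SOA_location_select's
    // value makes the initial selection, and advance_on_accept handles the rest
    // (current_condition is a placeholder variable)
    current_condition = SOA_location[SOA_location_select]
    goto (target = 'distributed_pre_acquire')
}

state 'distributed_success' {
    // Accepts the current selection and, via advance_on_accept, moves on
    accept_selections (SOA_location_select)
    goto (target = 'distributed_end_trial')
}

state 'distributed_fixation_break' {
    // Returns the current value to the pool; the implicit advance is fine now,
    // since nothing else advances the selection
    reject_selections (SOA_location_select)
    goto (target = 'distributed_end_trial')
}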

Hopefully, that will fix things. If not, please let me know!

Chris

Hi Chris,

I tried what you suggested and the problem is solved now. Thank you!

Best
Tenri