About staircases in MWorks

Dear Chris,
I hope you’re doing well. This is Jaime from the Michael Schmidt group at the University of Fribourg.

I’m interested in using multiple staircases for an experiment and noticed that MWorks includes a staircase optimizer. I was wondering if there is an existing MWEL script I could review to help implement this in our experiments.

Additionally, I wanted to ask whether MWorks supports a QUEST staircase, or if there is a way to implement one in MWorks.
Thanks a lot in advance.

Cheers,
Jaime

Hi Jaime,

Apologies for the delayed response.

I’ve attached a simple experiment that demonstrates how MWorks’ staircase optimizer works.

MWorks does not currently provide a QUEST staircase, but we could certainly add one. Do you have a description of how the algorithm should work? I found the documentation on PsychoPy’s QUEST staircase, so I could probably extract the algorithm from that code, but it would be easier if I had a clearly-explained procedure to implement.

Cheers,
Chris
staircase.mwel (693 Bytes)

Thanks for checking this one. About the algorithm:

For some of our experiments, we need to estimate the perceptual threshold at which the subject can detect a change in contrast of a visual stimulus—specifically, a square shown on the screen. The idea is to find the contrast level at which the animal can reliably detect a change, but not so high that it’s always obvious.

The basic approach is similar to a classic staircase method: we present a change in contrast (say, +20%), and observe whether the monkey detects it. If it does, we lower the contrast slightly on the next trial (e.g., to 19%); if it doesn’t, we increase it (e.g., to 21%). Over time, this back-and-forth helps us home in on the threshold—this is where we see staircase reversals, meaning the direction of the contrast adjustment switches (up to down or vice versa).

In our current experiment we need multiple staircases; for that purpose I wrote the following:

var staircase_min_max = [.1, 3]
var staircase_step_down = .1
var staircase_step_up = .3

var list_staircase_state = [1,1,1,1,1,1,1,1,1,1,1,1]
var curr_staircase = 0

var eval_staircase = 1 {
    // calculate current staircase based on active params
    curr_staircase = (par_side + par_iscongruent*2 + par_color*4)

    if (eval_staircase == 1) {
        list_staircase_state[curr_staircase] = max(list_staircase_state[curr_staircase] - staircase_step_down, staircase_min_max[0])
    }
    if (eval_staircase == 0) {
        list_staircase_state[curr_staircase] = min(list_staircase_state[curr_staircase] + staircase_step_up, staircase_min_max[1])
    }
}

I assign eval_staircase = 1 or eval_staircase = 0 depending on the response of the monkey.

QUEST follows a similar logic but uses a Bayesian approach, which makes the estimation much more efficient. Instead of using simple rules like “go up/down by 1%”, QUEST maintains a probability distribution (a prior) over the possible values of the threshold. It updates this distribution with each trial based on the subject’s response—whether the stimulus was detected or not.

You start by telling QUEST what kind of psychometric function you’re assuming. This includes parameters like:

  • The slope (how quickly the detection probability increases with contrast)
  • The lapse rate (to account for occasional random errors)
  • The guess rate (chance-level performance, e.g., 50% for a 2AFC task)
  • The target performance level (e.g., 75% correct), which defines what we consider “threshold”

Once that’s set, QUEST will suggest a contrast value to present. You show that to the monkey, record whether the animal detected it (1) or not (0), and feed that response back into the algorithm. Based on the updated distribution, QUEST picks a new value for the next trial. Over time, the distribution gets narrower and centers on the most likely threshold.

This runs continuously, and you can define stopping criteria however you like—number of trials, number of reversals, or when the uncertainty (posterior width) is small enough.

In practice, QUEST is usually implemented as an object that holds the internal state: the prior, the responses, and the logic to suggest the next stimulus.
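
To make that concrete, here is a rough Python sketch of the idea. I wrote this from scratch just for illustration (it is not the PsychoPy or Psychtoolbox code; the simplified Weibull function and the simulated observer are placeholders):

import numpy as np

class MiniQuest:
    """Toy QUEST-style Bayesian staircase, for illustration only."""

    def __init__(self, t_guess, t_guess_sd, beta, delta, gamma, grain=0.01, span=5.0):
        # Grid of candidate threshold values, centered on the initial guess.
        self.x = np.arange(t_guess - span / 2, t_guess + span / 2, grain)
        # Gaussian prior over the threshold.
        self.pdf = np.exp(-0.5 * ((self.x - t_guess) / t_guess_sd) ** 2)
        self.pdf /= self.pdf.sum()
        self.beta, self.delta, self.gamma = beta, delta, gamma

    def _p_correct(self, intensity, threshold):
        # Simplified Weibull psychometric function: probability of "detected" at
        # 'intensity' if the true threshold were 'threshold'. (Real QUEST also
        # shifts this curve so the estimate lands at the target performance level.)
        p = 1.0 - (1.0 - self.gamma) * np.exp(-10.0 ** (self.beta * (intensity - threshold)))
        return self.delta * self.gamma + (1.0 - self.delta) * p

    def update(self, intensity, response):
        # Bayes' rule: multiply the prior by the likelihood of the observed
        # response, then renormalize. The pdf narrows over trials.
        likelihood = self._p_correct(intensity, self.x)
        if not response:
            likelihood = 1.0 - likelihood
        self.pdf *= likelihood
        self.pdf /= self.pdf.sum()

    def mean(self):
        # Current threshold estimate: mean of the posterior.
        return float(np.sum(self.x * self.pdf))

# Trial loop: test at the current estimate, feed the response back in.
q = MiniQuest(t_guess=0.5, t_guess_sd=0.2, beta=3.5, delta=0.01, gamma=0.5)
for _ in range(40):
    contrast = q.mean()
    response = int(contrast >= 0.7)  # stand-in for the real task (made-up threshold)
    q.update(contrast, response)
print(q.mean())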

Let me know if you need more details.

Cheers,

Jaime

Hi Jaime,

For QUEST: I need this one as soon as possible (for the moment, I have a staircase with asymmetric steps in place).

OK, I’ve added it to my to-do list.

However, I found some useful code that can do the trick (https://www.palamedestoolbox.org). The only issue I’m having right now is getting MWorks to work with MATLAB. I followed the tutorial for setting it up, but so far I can’t see the MATLAB option (just the Python script bridge), probably because we are working on Mac Studios. Any suggestions?

MWClient’s MATLAB window isn’t going to help you here, because it can only receive data from MWorks, not send it back.

One way to make this work would be to use MWorks’ Python actions and the MATLAB Engine API for Python to invoke the MATLAB code via Python. However, if you’re going to use Python anyway, it would probably be simpler to use PsychoPy’s implementation. I’ll send you an example of how to do this.
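
Just to sketch what the MATLAB route would look like, in case you go that way (the function name below is a placeholder, not a real Palamedes routine; only the Python-to-MATLAB plumbing matters here):

import matlab.engine

eng = matlab.engine.start_matlab()             # start a MATLAB session
eng.addpath('/path/to/Palamedes', nargout=0)   # wherever the toolbox lives

def next_intensity(last_intensity, response):
    # An MWorks Python action could call this once per trial and write the
    # result back into an MWorks variable.
    return float(eng.run_palamedes_step(float(last_intensity), float(response)))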

Cheers,
Chris

Hi Jaime,

I’ve attached an example that uses a Python QUEST implementation with MWorks.

The actual QUEST code (in quest.py) comes from PsychoPy. The PsychoPy code is a minimally-modified version of Andrew Straw’s QUEST implementation, which itself is a Python port of Psychtoolbox’s QUEST routines. The latter were written by Denis Pelli, one of the authors of the original QUEST paper, I believe.

The MWorks interface is defined in quest_staircase.mwel, which uses quest_staircase.py. The experiment file (demo.mwel) shows how it works by reproducing (mostly) the demo function from quest.py. Hopefully the code will make sense. If not, let me know.

Also, the code in quest.py should provide a straightforward reference for me to use when developing a built-in MWorks QUEST implementation. If it seems to work correctly and do what you want, please let me know!

Thanks,
Chris
quest_staircase.zip (9.1 KB)

Hi Jaime,

Have you had a chance to try the Python-based QUEST implementation? If so, does it work as you’d hoped? I’m hoping to use this code as a reference when developing built-in MWorks QUEST support, so I’d appreciate any feedback you have.

Thanks,
Chris

Hi Chris,

Sorry for the late reply. I was able to test the code but encountered some issues and have a few questions.

Issues:

I wanted to implement the code in a 2AFC task. Our programs/scripts are organized such that we have a folder for appendables (configuration of hardware, sounds, special stimuli) and a folder for tasks (all the tasks that a monkey can perform). Typically, we have these folders plus a master script/protocol per monkey that appends its tasks, hardware to use, and specific preferences of that monkey.

I added the QUEST scripts to our appendables folder. When I append them, I get an error stating that quest_staircase.py or quest.py cannot be found.
I solved this by making a copy of the .py files in the folder where the master script lives.

However, at the beginning of each experiment, I load multiple sound files,
e.g. sound/wav_file reward_sound (path = ‘/Users/lab/Documents/Experiment_Sounds/reward.wav’).
This works fine for all our experiments. But when I add the QUEST staircase, I get the following error:

ERROR: Failed to create object.
Extended information:
reason: Path does not exist: /var/folders/jh/2_67mn016sndl5nfdn2h28sc0000gr/T/MWorks/Experiment Cache/\_Users_lab_Documents_GitHub_SRG_repo_Projects_Optogenetics_Experiment_Test_staircase2AFCT.mwel/tmp/Users/lab/Documents/Experiment_Sounds/reward.wav
location: IO_devices_Variables_human_psychophysics.mwel: line 203, column 1
object_type: sound/audio_file
ref_id: idp105553148889343
component: reward_sound
parser_context: mw_create

Therefore, when I add the QUEST staircase, we are unable to play any sound.

Regarding using QUEST in MWorks:
From what I understood, to initialize the staircase, I need to set the variables that control the Weibull distribution:

quest_t_guess = 0.75
quest_t_guess_sd = 0.2
quest_p_threshold = 0.8
quest_beta = 3.5
quest_delta = 0.1
quest_gamma = 0.5

Then set an ID:

quest_state_id = 0

And then use:

quest_reset()

If I change the staircase for another condition, I need to change the variables that control the Weibull distribution, choose another ID, and then use quest_reset() again.

Is this correct?
If so, I have a couple of questions.

Let’s assume I am in state_id 0 and I do not change state. Then I change the mean of the Weibull distribution and use quest_reset(). Does this reset all values from QUEST, updating the mean of the staircase?
If so, is the QUEST object storing the previously used intensities and responses, or does pressing reset erase everything and revert to the first assigned values?

I ask because it would be useful to switch between different staircases for different conditions (which I assume is what the state_id setting is for). But it is not clear how to assign starting values, purge or overwrite them, and whether doing so erases everything or keeps the previously collected responses and intensities (but just doesn’t use them).

I mention this because, in one test, QUEST suggested values outside the physically possible range. Although my program corrects values that exceed the limits, QUEST continued to diverge. That’s why I think it would be useful to have a command to purge the staircase for a given ID.

Next, would it be possible to define minimum and maximum values to prevent out-of-bounds issues?

About the next stimulus: so far, I have been using quest_mean() to select the next intensity. From what I understand, if I set quest_p_threshold = 0.82, QUEST centers the mean of the distribution around the intensity value that corresponds to this probability of correct detection. This is useful, but I also saw a method called calculateNextIntensity:

def calculateNextIntensity(self):
    """based on current intensity and counter of correct responses"""
    self._intensity()

    # Check we haven't gone out of the legal range
    if self.maxVal is not None and self._nextIntensity > self.maxVal:
        self._nextIntensity = self.maxVal
    elif self.minVal is not None and self._nextIntensity < self.minVal:
        self._nextIntensity = self.minVal
    self._questNextIntensity = self._nextIntensity

From what I understand, this also estimates the mean but corrects it if it is out of bounds. If that is correct, would it make sense to use it instead?

About setting intensities: the description of the Weibull distribution says that the physical units (e.g., contrast/gun value) are in log10 units. However, the example from PsychoPy enters the direct gun values. I did the same, and it seems to be working just fine. Is that OK, or would it be recommendable to work with log10 units? (I have not tested this yet.)

Last but not least, in a 2AFC task, QUEST seems to be quite efficient at maintaining performance around a given value. So far, we have tested only with humans due to the issues mentioned above, but I think it could be a good addition, especially for handling multiple conditions and training monkeys.

Let me know your thoughts.

Cheers,
Jaime

Hi Jaime,

Therefore, when I add the QUEST staircase, we are unable to play any sound.

This happens because the QUEST demo uses a Python file resource. As noted in the manual, if an experiment uses any resources, then all external files must be declared as resources. Assuming your only external files are sounds, you should just need to add the following line to your experiment:

resource ('/Users/lab/Documents/Experiment_Sounds')

If you still see errors after that, let me know, and we’ll figure out what else needs to change.

If I change the staircase for another condition, I need to change the variables that control the Weibull distribution, choose another ID, and then use quest_reset() again.

Is this correct?

Yes, that’s correct.

Let’s assume I am in state_id 0 and I do not change state. Then I change the mean of the Weibull distribution and use quest_reset(). Does this reset all values from QUEST, updating the mean of the staircase?

If so, is the QUEST object storing the previously used intensities and responses, or does pressing reset erase everything and revert to the first assigned values?

quest_reset erases all previous intensities and responses and resets to a completely clean state, using the current values of quest_t_guess, etc., and the current quest_state_id.

I see that the QuestObject class has a recompute method that is described as follows:

Call this immediately after changing a parameter of the psychometric function.
recompute() uses the specified parameters in ‘self’ to recompute the
psychometric function. It then uses the newly computed psychometric function
and the history in self.intensity and self.response to recompute the pdf.

So if you want to change the parameters of the Weibull distribution without losing the history, this is what you should invoke. If that’s what you want, let me know, and I’ll add it to the example code (probably as a quest_recompute macro).
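
At the Python level, I’d expect that to look roughly like this, using the QuestObject class from the attached quest.py (treat this as a sketch; the parameter and method names are as I read them in that file, and the numbers are arbitrary):

from quest import QuestObject

q = QuestObject(tGuess=0.5, tGuessSd=0.2, pThreshold=0.8,
                beta=3.5, delta=0.1, gamma=0.5)
q.update(0.6, 1)   # trial history accumulates in q.intensity / q.response
q.update(0.4, 0)

q.beta = 2.0       # change a psychometric-function parameter...
q.recompute()      # ...then rebuild the function and the pdf from the stored history
print(q.mean())    # estimates now reflect the new beta but keep the old trials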

Next, would it be possible to define minimum and maximum values to prevent out-of-bounds issues?

Sure. QuestObject has a parameter range, described thus:

range is the intensity difference between the largest and smallest intensity
that the internal table can store. E.g. 5. This interval will be centered on
the initial guess tGuess, i.e. [tGuess-range/2, tGuess+range/2]. QUEST
assumes that intensities outside of this interval have zero prior probability,
i.e. they are impossible.

Would it be sufficient to make that configurable by your experiment, or do you need explicit min/max bounds on the intensity?

From what I understand, this also estimates the mean but corrects it if it is out of bounds. If that is correct, would it make sense to use it instead?

If we add support for explicit min/max intensity, then, yes, I would do something like that.

About setting intensities: the description of the Weibull distribution says that the physical units (e.g., contrast/gun value) are in log10 units. However, the example from PsychoPy enters the direct gun values. I did the same, and it seems to be working just fine. Is that OK, or would it be recommendable to work with log10 units? (I have not tested this yet.)

It says that the intensity is usually in log10 units, but I don’t think there’s any assumption/requirement that it is. Your experience seems to back this up. As for whether log10 units would be better, I have no idea. Maybe the original QUEST paper has something to say about that?

QUEST seems to be quite efficient at maintaining performance around a given value. So far, we have tested only with humans due to the issues mentioned above, but I think it could be a good addition, especially for handling multiple conditions and training monkeys.

OK, great! Thanks for all the feedback.

Chris

Hello,

I have a couple of questions about switching between staircases; the process is not entirely clear to me.

For example, let’s assume I have an experiment with three conditions, each using a different staircase. I assign an ID (e.g., ID = 0) and set the parameter values as follows:
• quest_t_guess = 0.75
• quest_t_guess_sd = 0.2
• quest_p_threshold = 0.8
• quest_beta = 3.5
• quest_delta = 0.1
• quest_gamma = 0.5

I then call reset(). Next, I select a new ID (ID = 1), configure the parameters again (in this case I am initialising the staircase with the same values, so I don’t change quest_t_guess, quest_t_guess_sd, etc.), and call reset() once more.

However, when I start the experiment and randomly select a condition (say ID = 3) and call reset(), how does MWorks know to load the parameter values associated with ID = 3 rather than simply using the most recently stored values of quest_t_guess_sd, quest_p_threshold, etc.?

(Or is there something I might have misunderstood?)

Regarding sound handling:
The issue is likely what you mentioned (not declaring this as a resource).

I added the line of code you suggested, but now I receive the following error:

“Can’t find resource github/srg_experiments/Users/lab/Documents/Experiment_Sounds”

My interpretation is that MWorks is treating this as a relative path (my scripts live in github/srg_experiments). Do you have any suggestions? Should I copy all sound files into the same directory? I’ve been avoiding this because our code lives on GitHub, and it’s easier to keep the sound files in a separate directory than to add them to .gitignore. But if there is no alternative, I can move them.

Regarding min and max values:
I would prefer to define these directly within the QUEST object. I have observed that, in a specific scenario, QUEST suggests physically impossible values. Even though I return valid values during the experiment, it continues to diverge toward unrealistic ones. My assumption is that defining min and max within the QUEST object could help ensure more accurate results.

Thanks again for the support!
Have a good week

Jaime

Hi Jaime,

However, when I start the experiment and randomly select a condition (say ID = 3) and call reset(), how does MWorks know to load the parameter values associated with ID = 3 rather than simply using the most recently stored values of quest_t_guess_sd, quest_p_threshold, etc.?

It does use the most recently stored values. Invoking quest_reset erases the QUEST state associated with the current value of quest_state_id and replaces it with a new one, using the current values of the QUEST parameters.

Note that if you’re just selecting a condition, all you need to do is assign to quest_state_id. Once you do, quest_update, quest_mean, etc., will use the previously-created QUEST state associated with that ID. There’s no need to invoke quest_reset unless you want to completely reset the state.
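
Conceptually, the bookkeeping amounts to something like this Python sketch (this is only to illustrate the semantics; quest_staircase.py in the attached example is the authoritative version):

from quest import QuestObject

states = {}

def quest_reset(state_id, **params):
    # Discards any existing state for this ID and builds a fresh one from the
    # current parameter values.
    states[state_id] = QuestObject(**params)

def quest_update(state_id, intensity, response):
    # Switching quest_state_id just selects a different existing entry here;
    # no reset is needed unless you want to start that state over.
    states[state_id].update(intensity, response)

def quest_mean(state_id):
    return states[state_id].mean()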

I added the line of code you suggested, but now I receive the following error:

“Can’t find resource github/srg_experiments/Users/lab/Documents/Experiment_Sounds”

If the sound files are in /Users/lab/Documents/Experiment_Sounds, as the error message you shared previously indicated, then the resource declaration should be

resource ('/Users/lab/Documents/Experiment_Sounds')

Or is the Users directory in question in fact a subdirectory of github/srg_experiments?

Regarding min and max values:
I would prefer to define these directly within the QUEST object.

OK, I’ll make that change.

Cheers,
Chris

Hi Chris,

About the resource folder: adding the sounds folder as a resource worked just fine (probably I had a duplicated file, hence the error).
About the QUEST explanation:
Thanks for the explanation! I get the mechanics now.
Cheers,

Jaime

Hi Jaime,

I’ve attached an updated version of the QUEST demo code with the following changes:

  1. There’s now a quest_recompute macro, which reads the current values of beta, delta, and gamma and recomputes the psychometric function and PDF. It uses and preserves the QUEST state’s history (since the last quest_reset on the given quest_state_id).

  2. The variables quest_intensity_min and quest_intensity_max set the minimum and maximum for values returned by quest_mean, quest_mode, and quest_quantile. If QUEST tries to suggest a value outside of these bounds, the minimum or maximum will be returned instead.

  3. The variables quest_grain and quest_range control the quantization and range, respectively, of the internal table used by QUEST. The range is always centered on quest_t_guess and, by default, will be [t_guess-2.5, t_guess+2.5]. Unless a range of 5 is appropriate for your experiment, you should probably change this.

  4. Warnings from the QUEST code are now output as MWorks warnings.

The new QUEST parameters, like the old ones, are read during quest_reset and associated with a particular quest_state_id.
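
For reference, items 2 and 3 correspond roughly to the following at the Python level (a sketch only; the constructor argument names follow quest.py, and the values are arbitrary examples):

from quest import QuestObject

q = QuestObject(tGuess=0.75, tGuessSd=0.2, pThreshold=0.8,
                beta=3.5, delta=0.1, gamma=0.5,
                grain=0.01, range=2)          # internal table covers [0.75 - 1, 0.75 + 1]

suggested = q.mean()
intensity = min(max(suggested, 0.1), 1.0)     # quest_intensity_min/max-style clamping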

When you have a chance, please try out these changes, and let me know if you run into any issues or have further suggestions for improvement.

Cheers,
Chris
quest_staircase_v2.zip (9.7 KB)