Calibration Algorithm

Hi Chris,

We have an NHP who’s having specific eye position issues, and while digging into the eye position data we’ve found that, on a small proportion of trials, the calibrated eye data will include large values outside of the fixation window, but the program does not consider this a ‘break fix’. We wanted to dig deeper into the calibration’s algorithm; we are using the linear one, so it should be pretty simple. I couldn’t find the algorithm in the knowledge link: would you happen to know where it is, or could you copy and paste a description of the calibration process and the algorithm itself?

Hi Travis,

the calibrated eye data will include large values outside of the fixation window, but the program does not consider this a ‘break fix’

This doesn’t sound like an issue with the calibration. It sounds like there’s a problem with a fixation point or eye monitor, or it could be caused by an error in your experiment’s logic for testing whether fixation has been broken.

Can you send me the code (XML or MWEL) of the experiment where you’re seeing this issue? I’ll take a look and see if I can spot any problems.

Thanks,
Chris

Hi Chris,

I agree, but we’re just covering our bases, and since I’ve had the most contact with you, they tasked me with getting the linear algorithm from you :slight_smile: One scenario: if we know this NHP is particularly jittery, maybe increasing the box filter to 6 (its default is 5) or even 7 might help. But we’d like to understand the system a bit better than we do at the moment before tinkering.

I’m attaching the xml just in case you notice anything.

Cajal_2AFC_Training_v5_VE_Long_LargeStim_Mask.xml (108 KB)

Hi Travis,

This discussion describes the equations on which the linear eye calibrator performs a fit. The actual fitting is done by the SGELSD function from LAPACK (via Apple’s Accelerate framework).
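In case it helps your detective work, here’s a rough sketch of that kind of fit in Python/NumPy (`np.linalg.lstsq` also calls LAPACK’s least-squares drivers under the hood; this function is illustrative, not MWorks’ actual code):

```python
import numpy as np

def fit_linear_calibration(h_raw, v_raw, h_target, v_target):
    # Per-axis linear model:
    #   h_cal = offset_h + gain_hh * h_raw + gain_hv * v_raw
    #   v_cal = offset_v + gain_vh * h_raw + gain_vv * v_raw
    # Design matrix: one row per sample, columns [1, h_raw, v_raw].
    A = np.column_stack([np.ones_like(h_raw), h_raw, v_raw])
    h_params, *_ = np.linalg.lstsq(A, h_target, rcond=None)
    v_params, *_ = np.linalg.lstsq(A, v_target, rcond=None)
    # Each result is (offset, gain_from_h_raw, gain_from_v_raw).
    return h_params, v_params
```

The targets here would be the known fixation point positions, and the raw values the eye samples collected while the animal fixated each point.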

Thanks for sending the XML. I’ll take a look and let you know if I see any potential issues.

Cheers,
Chris

Hi Travis,

When you say “the calibrated eye data will include large values outside of the fixation window,” are you referring to the pre-boxcar filter data (eye_h_calibrated and eye_v_calibrated) or the post-boxcar filter data (eye_h and eye_v)?

In your experiment, the eye monitor and fixation points are watching the post-boxcar filter data (which is normal). If you’re seeing the large outlier values in the pre-boxcar filter data, then I assume that the boxcar filter is averaging out these variations, such that eye_h and eye_v never reach the limits of saccade entry and/or broken fixation. In this case, the right solution may be to reduce the size of the filters’ width_samples parameter.
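For intuition, here’s a toy boxcar (moving-average) filter in Python. It’s a sketch, not MWorks’ actual implementation (edge handling may differ), but it shows why a wider filter hides brief outliers:

```python
import numpy as np

def boxcar(samples, width_samples=5):
    # Average each window of `width_samples` consecutive values.
    kernel = np.ones(width_samples) / width_samples
    return np.convolve(samples, kernel, mode="valid")

# A single-sample spike of amplitude 10 contributes only 10/width
# to the filtered output, so a wider filter suppresses it more.
signal = np.zeros(20)
signal[10] = 10.0
print(boxcar(signal, 5).max())   # -> 2.0
print(boxcar(signal, 10).max())  # -> 1.0
```

So increasing `width_samples` makes the filtered trace smoother but less responsive to brief excursions, and decreasing it does the opposite.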

On the other hand, if you see the large outliers in the post-boxcar filter data, then I suspect a logic error in your experiment. That said, I’m not seeing any obvious issues in your code. I do notice that, in your main protocol (“Reaction Time 2-AFC”), you aren’t checking the saccade variable at all. However, if anything, that should result in more broken fixations, not fewer.

Cheers,
Chris

Hi Chris,

Thanks for the info! Yes, the values we are referring to are eye_h and eye_v, the post-boxcar filter values. When we calculated the percentage of outlier values, it was actually quite small, around 3%, so we are becoming less worried by that value. We’re starting to think the issue is more that, as she is calibrating, her eye seems to be ‘snaking’ into the window on a few calibration trials, and this may be throwing her calibration off. So we are using the calibration variables you mentioned to dig deeper into this. (1) We don’t know what is causing this ‘snaking’, and (2) we are trying to see if there is a hardware or software fix that can reduce it. The anomaly is such that during a trial her eye will appear up and to the left of the fixation point on a lot of trials. Have you had any experience with such a calibration issue, and if so do you have any tips?

FYI, Barnes, a grad student who is spearheading this problem, will probably be posting in the coming days; just giving you a heads-up so you can quickly onboard him into the conversation.

Hi Chris,
Thanks for all of your help through this process!
As Travis mentioned above, we have been having some difficulties with broken fixations and are attempting to troubleshoot that number down. One thing we noticed is that after calibration, the monkey’s eye is consistently up and to the left of center (looking at eye_h and eye_v, which I believe to be the calibrated and filtered data). I have attached a histogram of the monkey’s eye positions during fixation on successfully completed trials, which indicates to me that she is not being calibrated correctly. The concern is that this position (and the high variance around it) might be an indicator of a miscalibrated eye.

What I would love is the ability to visualize what the monkey is doing during the calibration phase, and how the uncalibrated values compare to what they would be after calibration. Essentially, I would love to be able to plot her eye positions as a grid and see how the uncalibrated and (hypothetical) calibrated signals compare, to get a better sense of what is going wrong. Do you know of a plotting script (ideally in MATLAB) that would be able to create a plot like that? If not, is it obvious where one would begin in making one?

Two hypotheses of what could be going wrong below, but would welcome more:

  • We think that at least some aspect of her miscalibration is coming from her behavior: she seems to be slowly dragging her eye into the calibration targets, and we fear this might be contributing to the calibration offset, but we are unsure.
  • We also have concerns that the camera position could be contributing to miscalibration and wonder if we should move it to a different position. (It is currently just below the monitor’s right corner, pointed at the left eye of the monkey.) It is worth noting that this camera position was successful for other monkeys in the past, but we are hoping to be able to determine from the plots I mentioned above whether there is some strange warping going on.

Very Best,
Barnes

Hi Travis & Barnes,

The anomaly is such that during a trial her eye will appear up and to the left of the fixation point on a lot of trials. Have you had any experience with such a calibration issue, and if so do you have any tips?

Sorry, I don’t recall seeing any similar issues in the past.

What I would love is the ability to visualize what the monkey is doing during the calibration phase, and how the uncalibrated values compare to what they would be after calibration. Essentially, I would love to be able to plot her eye positions as a grid and see how the uncalibrated and (hypothetical) calibrated signals compare, to get a better sense of what is going wrong. Do you know of a plotting script (ideally in MATLAB) that would be able to create a plot like that? If not, is it obvious where one would begin in making one?

I think you want to compare eye_h_raw and eye_v_raw (uncalibrated eye positions) to eye_h_calibrated and eye_v_calibrated (calibrated, but pre-boxcar filter eye positions). Currently, your experiment saves none of these variables, because they all have their logging parameter set to never. You need to change this to when_changed in order for these values to be saved in the event file. Once you have an event file with these variables, you can use the standard data analysis tools to extract their values and make plots.

Cheers,
Chris

Hi Chris,
Thank you very much; this was most helpful. I have taken a stab at plotting the calibration, and I got some results that are confusing to me, so I was hoping I could get your thoughts on them. (I have attached some plots I generated. I have also emailed you the .xml file, as I was notified that new users could not attach files; I am also required to upload each plot individually, I believe for the same reason.)
Initial Calibration 28Mar23 Linear Zoomed Out

I plotted both the raw and the calibrated eye signals (eye_h/v_raw and eye_h/v_calibrated) for the last 100 ms of each calibration loop (I parsed this by inserting a variable “CalibrationOC” at the beginning and end of the calibration trials). I also plotted the means to make it visually less cluttered. The color bar represents ms, with yellow being the last recorded value in each calibration trial (t) and blue being the location at (t−100).

The red squares are my understanding of where the fixation cues were placed on the screen (taken from the variables fixation_pos_x and fixation_pos_y).

My expectation was that I should see a rough grid of fixation positions which loosely match the fixation cue target locations. I am fairly confident that I am plotting the correct data here, but I am having trouble making sense of it. If the calibration were really this far off, wouldn’t the monkey be unable to calibrate at all? (The monkey successfully calibrated with a 3.5 deg window.)

As a sanity check, I used the calibration parameters from #announceCalibrator and confirmed that applying the linear calibration below to my raw values reproduces the calibrated values:
h_cal = offset_h + gain_hh * h_raw + gain_hv * v_raw
v_cal = offset_v + gain_vh * h_raw + gain_vv * v_raw
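In code, that check amounts to something like this (a Python sketch; the parameter ordering follows the equations above):

```python
def apply_calibration(h_raw, v_raw, h_params, v_params):
    # Each params triple is (offset, gain_from_h_raw, gain_from_v_raw),
    # matching the equations above.
    offset_h, gain_hh, gain_hv = h_params
    offset_v, gain_vh, gain_vv = v_params
    h_cal = offset_h + gain_hh * h_raw + gain_hv * v_raw
    v_cal = offset_v + gain_vh * h_raw + gain_vv * v_raw
    return h_cal, v_cal
```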

I think I must be fundamentally misunderstanding something about the way that MWorks reads in these values, but I can’t seem to figure out what it is. Any thoughts or suggestions you have would be greatly appreciated!
Many Thanks!
Barnes

Plot 2: see above. (I was unable to upload the mean value plot; I believe, as a new user, I am limited to 3 replies per post?)
Raw Values Calibration 28Mar23 Linear

Hi Barnes,

Sorry you were having trouble uploading files. I boosted your user trust level, so hopefully that won’t happen again.

Also, I didn’t receive the XML file. Can you try uploading it again?

Thanks,
Chris

Hi Chris,
Thank you for your help with the user trust level, and sorry for the delay on the xml file. Here is a copy of the program.
Cajal_2AFC_Training_v5_VE_Long_LargeStim_Mask_V3.xml (108.9 KB)
Very Best,
Barnes

Hi Barnes,

Thanks for posting the XML.

My expectation was that I should see a rough grid of fixation positions which loosely match the fixation cue target locations.

That’s what I would expect, too. I’m not sure why your plots don’t match that expectation, but I have a few thoughts:

  1. I’m not sure “last 100ms” is the right window for looking at the eye positions. What you really want is to see the eye positions between begin_calibration_average (in state “cal fixation”) and end_calibration_average_and_take_sample (in state “cal success”). That’s the time window where you’ve determined that the animal is fixating and are collecting eye positions that will be used to compute the calibration.

  2. Your experiment uses eye_lx and eye_ly as the “raw” eye coordinates, but it really should be using pupil_lx and pupil_ly. See this discussion for more info. Presumably this isn’t having a big effect (if any), since you’re able to calibrate successfully. Still, I don’t know how the EyeLink assigns values to eye_lx/ly in the absence of an EyeLink-side calibration, so it’s probably better to start with the true raw data (pupil_lx/ly) when calibrating in MWorks.

    Also, the bit where it says there may be a non-linear relationship between raw (pupil) data and true gaze position is interesting. It’s hard to imagine the data being sufficiently non-linear to explain your plots, since you can successfully calibrate using MWorks’ linear eye calibrator. Still, it’d be worth checking if/how things change when you switch to pupil_lx/ly.

  3. My recommendation to compare eye_h/v_raw and eye_h/v_calibrated doesn’t make sense during a calibration. The calibration protocol starts by executing Clear Calibration, meaning that the raw and “calibrated” values will be identical until you complete the calibration (except for gains and/or offsets you’ve set manually via MWClient’s eye calibrator window, which are included in the “calibrated” values). I was thinking that you’d compare them after completing a calibration, to get a sense for how the computed calibration transformed the raw eye coordinates.

    If you want to see how the raw eye positions used to compute the calibration are transformed by the calibration, then you’ll need to manually apply the final calibration parameters to the raw data and plot that. I assume you haven’t done this already, because, apart from a scale change, your raw vs. calibrated plots look identical.

  4. At some point, it might be worth trying a tracker-driven calibration. This is described in the docs as well as this discussion. The lab that requested this feature seemed pretty happy with it, but your results may vary.
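A minimal sketch of the per-trial windowing in point 1 (Python; the data layout here is illustrative — in practice you’d first pull the timestamped begin/end events and eye samples out of the event file with the standard analysis tools):

```python
import numpy as np

def fixation_window_samples(times, values, t_begin, t_end):
    # Keep only the samples recorded between the times of the
    # begin_calibration_average and
    # end_calibration_average_and_take_sample events.
    mask = (times >= t_begin) & (times <= t_end)
    return values[mask]
```

You’d run this once per calibration trial, using that trial’s begin/end event times, and pool the surviving samples for plotting.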

Sorry I can’t provide any definitive answers. Hopefully some of the above suggestions will help you move forward.

Cheers,
Chris

Hi Chris,
Thank you for all your suggestions, they are most helpful.

1/2) Thank you for these suggestions. I plan to recollect calibration data tomorrow using pupil_lx and pupil_ly instead as the raw values for calibration. I will also be taking samples between the start and stop of average collection, as you suggested, in order to more accurately plot where the monkey is looking.

  3. Absolutely; I actually did compute the plot of those values, which I have called “recalibrated”. Here is a plot of the recalibrated values, as well as a plot showing both the “initial calibration” and the “recalibrated” means. These are still for the last 100 ms, so, as you pointed out, this might not be the right range to analyze. However, I am confident that they are the correct transformation, as I used the calibration parameters below for the recalibrated version. My understanding is that because the 3rd H_parameter and 2nd V_parameter are near zero, the raw and recalibrated values would be identical in shape and merely stretched slightly and shifted to a new center?
    H_params: -82.2595 0.1411 -0.0010
    V_Params: -93.3829 -0.0032 0.1423
    Recalibrated 28Mar23 Linear
    Means Initial vs Recalibrated 28Mar23 Linear

For the “initial” calibration plot above, I used the eye_v/h_calibrated values, which I believed to be computed using the calibration variable set that I loaded in before calibration. These are the calibration settings for the same monkey from a previous day, and I am confident that loading them aids in calibration. However, I am now confused: if the calibration presets are cleared completely at the beginning of each calibration run, why does the calibration improve from one calibration run to the next?

  4. This is also a good suggestion; if tomorrow’s calibration with the pupil values does not yield anything productive, I think this will likely be the next step.

Thank you again for your help getting to the bottom of this.
Very Best,
Barnes

Hi Chris,
Thank you for all your help. Here is the result of calibrating with the pupil data, as well as sampling only the values in the range you suggested. Things look mostly good and as expected!
Recalibrated Position Traces- Velvet 05Apr23
Means Initial vs Recalibrated - Velvet 05Apr23

One thing that we have noticed is that the monkey is fixating up and to the left of the fixation dot during our task. We believe this is because the calibration offset is slightly off. Our task is entirely based around the center of the screen; is there a way to increase the accuracy near the center specifically? Either algorithmically, or perhaps by increasing the sampling of fixation dots in the center of the screen?
Very Best,
Barnes

Hi Barnes,

My understanding is that because the 3rd H_parameter and 2nd V_parameter are near zero, the raw and recalibrated values would be identical in shape and merely stretched slightly and shifted to a new center?

Yes, that’s correct.

However, I am now confused: if the calibration presets are cleared completely at the beginning of each calibration run, why does the calibration improve from one calibration run to the next?

Sorry, that was my mistake. Clear Calibration clears any old samples, so that only subsequent samples will be used to compute the new calibration. But, as your plot illustrates, it does not clear the existing calibration parameters. (I must have been thinking of the “Reset” button in MWClient’s calibrator window, which does clear the calibration parameters.)

Here is the result of calibrating with the pupil data, as well as sampling only the values in the range you suggested. Things look mostly good and as expected!

Yes, that all looks much more reasonable!

One thing that we have noticed is that the monkey is fixating up and to the left of the fixation dot during our task. We believe this is because the calibration offset is slightly off. Our task is entirely based around the center of the screen; is there a way to increase the accuracy near the center specifically? Either algorithmically, or perhaps by increasing the sampling of fixation dots in the center of the screen?

You might try reducing the value of fixation_width between successive calibrations, so that the monkey has to look more precisely at the fixation point before you take a calibration sample. That could increase the overall accuracy of the calibration.

You certainly could try taking more calibration samples near the center, but I can’t say whether that would help. Also, you previously questioned whether the camera position could be having an effect. I don’t know the answer, but it might be worth investigating. The EyeLink 1000 (Plus) user manual has a lot to say about camera positioning, so maybe consult that, too.

Cheers,
Chris

Hi Chris,
Thanks again for all the insights. I will definitely be attempting to better understand things on the EyeLink end. Two outstanding questions I have that I would love your insight on are as follows:

  1. If you look at the plot I posted above, I have plotted each data point during each calibration trial. What is strange to me is that there is a wide variance in the number of samples for each fixation cue, ranging from ~100 to ~5. Do you have any idea what might cause this discrepancy?

  2. We are still getting a lot of error (distance of the calibrated points from the actual displayed cue). This persisted even after we reduced the fixation window (we can reliably get down to a ~2.7 fixation window). Does this seem like what you would expect from a normal calibration, or does something seem off to you?

Here is the xml file in case it is helpful. The integration time looks at changes in the CalibrationOC variable (changes from 1->2 or 1->3), and fixation locations are taken between (0->2 and 0->3).
Cajal_2AFC_Training_v5_VE_Long_LargeStim_Mask_Calibration_with_Pupil_NonLinear.xml (109.3 KB)
Many Thanks!
Barnes

Hi Barnes,

The integration time looks at changes in the CalibrationOC variable (changes from 1->2 or 1->3), and fixation locations are taken between (0->2 and 0->3).

Shouldn’t fixation just be between 1 and 2? When CalibrationOC is 0, the animal isn’t yet fixating, and when it’s 3, you know the animal has broken fixation, and you’re discarding the calibration samples. Or am I misunderstanding?

If you look at the plot I posted above, I have plotted each data point during each calibration trial. What is strange to me is that there is a wide variance in the number of samples for each fixation cue, ranging from ~100 to ~5. Do you have any idea what might cause this discrepancy?

Does that data include trials where the animal broke fixation and the calibration samples were discarded? If so, then I think that’s probably the source of the variability.

Based on your latest experiment code, successful trials should have 100ms of fixation (split between states “cal fixation” and “cal fixation monitor”, which wait on cal_timer running for dur_fixationCal) plus up to an additional 30ms in state “cal pre reward”. If the EyeLink is sampling at 1000Hz, then you’d expect around 100-130 eye positions in that interval.

We are still getting a lot of error (distance of the calibrated points from the actual displayed cue). This persisted even after we reduced the fixation window (we can reliably get down to a ~2.7 fixation window). Does this seem like what you would expect from a normal calibration, or does something seem off to you?

I really don’t know. I haven’t done many eye calibrations myself. (I didn’t write MWorks’ calibration routines, and I don’t work with animals.) Maybe check with other folks in your lab?

Also, I still think it’d be worth trying a tracker-driven calibration. Presumably the EyeLink folks know how to calibrate their hardware better than you or I do, so maybe their method would yield better results.

Cheers,
Chris