Update eye calibration trial by trial to obtain drift correction

Dear MWORKS community!
We are working on an experiment with eye fixation control. For calibration, we use a simple 9-point calibration experiment that, at the end, updates a linear eye calibrator (with X/Y gain and X/Y offset). As we observe a drift in the eye tracker signal offset over time, we would like to correct for this drift before each trial. Our idea so far: Use the same linear eye calibrator, take an eye position sample before the start of each trial, and use it to update the calibrator to the new offset, in effect a small one-point calibration that changes only the offset, not the gain.
The related questions are:

  • Is this approach going to work, or are there more elegant approaches than using the “update” functions that are part of the calibrator?
  • Has anyone already developed such an approach/plugin/etc. to correct for eye tracker signal drift on an automatic (or even trial-by-trial) basis?
  • Is there documentation available about the eye calibrator object that could give us some insight into what would happen to the calibration values during a one-point calibration?

Happy for any tips on this issue,
Jan
(DPZ, Goettingen)

Hi Jan,

Our idea so far: Use the same linear eye calibrator, take an eye position sample before the start of each trial, and use it to update the calibrator to the new offset, in effect a small one-point calibration that changes only the offset, not the gain.

Is this approach going to work, or are there more elegant approaches than using the “update” functions that are part of the calibrator?

No, that isn’t going to work, as we don’t provide a way to fit only the offsets while keeping the gains constant. (The current “update calibration” action always fits both.) If you try to calibrate using a single eye position sample, the calibration will fail, and the fit parameters will remain unchanged.

If the eye tracker signal is drifting over time, then it seems like you’re just going to have to recalibrate periodically.

Has anyone already developed such an approach/plugin/etc. to correct for eye tracker signal drift on an automatic (or even trial-by-trial) basis?

Not to my knowledge, but I can ask around. On what eye tracker are you seeing the signal drift? (EyeLink 1000?)

Is there documentation available about the eye calibrator object that could give us some insight into what would happen to the calibration values during a one-point calibration?

Unfortunately, no – but as I said above, a one-point calibration is just going to fail and leave the calibration values unchanged.

Cheers,
Chris Stawarz

Hi Chris,
thanks for your reply (and your offer to ask around for somebody who might have encountered the same problem).
We are using the 2006 version of the SMI iView X Hi-Speed (http://www.smivision.com/en/gaze-and-eye-tracking-systems/products/iview-x-hi-speed.html).

What you said sounds like we would be forced to collect at least 2 sample points / eye positions each trial in order to change the calibration fit parameters. I assume the respective eye positions should correspond to 2 different (fixation) points in space to provide enough information for the update?

(Without knowing how realistic this is in terms of how much it would bother the subjects:) Would such a trial-by-trial 2-point re-calibration work, based on an initial calibration that uses more fixation targets? We have been using 9-point calibrations so far. How would the “old” calibration fit from the previous calibration(s) be taken into account if an update was performed? Would it average information with the old data, or use only the 2 new points for fitting?

Cheers!
Jan

Hi Jan,

Sorry for the delay in responding. I’ve been out of the office for a couple days.

What you said sounds like we would be forced to collect at least 2 sample points / eye positions each trial in order to change the calibration fit parameters.

Actually, you’ll need 3 sample points. The linear calibrator fits equations of the form

x_cal = x_offset + x_gain * x_raw + x_other * y_raw
y_cal = y_offset + y_gain * y_raw + y_other * x_raw

There are probably better names for x_other and y_other, but the key point is that the equation for x_cal has a term that includes y_raw, and vice versa. Each equation has three unknowns, and each sample supplies only one equation per axis, so you need at least 3 samples to perform a fit.
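
To make that concrete, here's a rough sketch of one way to fit those equations (ordinary least squares). This is plain Python/NumPy for illustration only, not MWorks code, and the function and variable names are my own:

import numpy as np

def fit_linear_calibration(raw_xy, target_xy):
    """Fit x_offset, x_gain, x_other (and the y counterparts) from raw
    eye samples paired with the known fixation target locations.
    raw_xy and target_xy are arrays of shape (n_samples, 2)."""
    raw_xy = np.asarray(raw_xy, dtype=float)
    target_xy = np.asarray(target_xy, dtype=float)
    ones = np.ones(len(raw_xy))

    # Each sample contributes one equation per axis, and each axis has
    # three unknowns, so at least 3 non-collinear samples are needed.

    # x_cal = x_offset + x_gain * x_raw + x_other * y_raw
    A_x = np.column_stack([ones, raw_xy[:, 0], raw_xy[:, 1]])
    x_offset, x_gain, x_other = np.linalg.lstsq(A_x, target_xy[:, 0], rcond=None)[0]

    # y_cal = y_offset + y_gain * y_raw + y_other * x_raw
    A_y = np.column_stack([ones, raw_xy[:, 1], raw_xy[:, 0]])
    y_offset, y_gain, y_other = np.linalg.lstsq(A_y, target_xy[:, 1], rcond=None)[0]

    return (x_offset, x_gain, x_other), (y_offset, y_gain, y_other)

With a single sample, you'd have one equation and three unknowns per axis, which is why a one-point “calibration” can't produce a fit.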

I assume the respective eye positions should correspond to 2 different (fixation) points in space to provide enough information for the update?

Yes, I believe the fit will fail unless you use three distinct samples. However, a fit with only three samples, while mathematically valid, will still be of very low quality, right? So I’m not sure that’s an approach you’d want to take.

How would the “old” calibration fit from the previous calibration(s) be taken into account if an update was performed? Would it average information with the old data, or use only the 2 new points for fitting?

The calibrator holds all previous calibration samples in memory until either (1) you execute a “Clear Calibration” action or (2) you close your experiment. When you create a saved variable set, only the calibrator’s fit parameters are saved/restored, not the samples that led to them.

So, if the initial, 9-point calibration is performed in the same “session” – that is, you load the experiment, run your calibration protocol, and then run all your trials without closing the experiment at any point – then subsequent calibration samples will be added to the initial set, and subsequent calibration updates will fit to all accumulated samples. On the other hand, if you perform the initial calibration, store the fit parameters in a saved variable set, and then close and reload the experiment, all previous samples are forgotten, and subsequent calibration updates will take into account only samples from the current session.

However, it seems like neither of those scenarios is what you really want. If the eye signal is steadily drifting over time, then it seems like you ought to be fitting to the N most recent samples (for some N > 3), as older samples become less meaningful over time and should eventually be discarded. MWorks doesn’t support this type of calibration at present, but it’s something we could add.
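
To sketch what I mean, a sliding-window scheme could look roughly like the following. Again, this is just illustrative Python, not anything MWorks provides today, and fit_linear_calibration is the hypothetical fitting routine from the sketch above:

from collections import deque

class SlidingWindowCalibration:
    """Keep only the N most recent calibration samples and refit after
    each new sample, so that old, drifted samples eventually drop out."""

    def __init__(self, max_samples=9):
        self.raw = deque(maxlen=max_samples)     # raw eye samples
        self.target = deque(maxlen=max_samples)  # known target locations
        self.params = None                       # most recent fit parameters

    def add_sample(self, raw_xy, target_xy):
        # Once the window is full, appending discards the oldest sample.
        self.raw.append(raw_xy)
        self.target.append(target_xy)
        if len(self.raw) >= 3:  # need at least 3 samples for a fit
            self.params = fit_linear_calibration(list(self.raw), list(self.target))
        return self.params

You could, for example, seed the window with the samples from the initial 9-point calibration and then add a sample or two per trial, refitting each time.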

Alternatively, if you have some other means of determining new offsets, then we could add an action that just updates those. How are you detecting the drift in the eye signal?
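
To give an example of the kind of offset-only update I mean: if the subject fixates a single known target at the start of each trial, new offsets follow directly from the existing gains by rearranging the equations above. Here's a rough sketch of that computation, again in illustrative Python rather than as an existing MWorks action:

def offset_only_update(raw_xy, target_xy, x_params, y_params):
    """Recompute only the offsets from one fixation sample, keeping the
    gain and cross terms from the last full calibration fixed.
    x_params = (x_offset, x_gain, x_other); y_params likewise."""
    x_raw, y_raw = raw_xy
    x_target, y_target = target_xy
    _, x_gain, x_other = x_params
    _, y_gain, y_other = y_params

    # Rearranging:  x_target = x_offset + x_gain * x_raw + x_other * y_raw
    #               y_target = y_offset + y_gain * y_raw + y_other * x_raw
    new_x_offset = x_target - x_gain * x_raw - x_other * y_raw
    new_y_offset = y_target - y_gain * y_raw - y_other * x_raw
    return new_x_offset, new_y_offset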

Chris