EyeCalibrator and Vergence

Hey Christopher,

since we are moving heavily toward MWorks 3D, I have now finished writing a plugin for the Eyelink 1000 binocular eye tracker, which connects over a TCP socket.
It reports gaze coordinates for each eye separately. This information now has to be combined to compute the vergence of the eyes (the third dimension), which creates the need for a 3D eye calibrator and a 3D eye window. I am working on the latter, but I can hardly test it before the calibrator supports 3D.

And where can I upload my eye-tracker plugin?

-Philipp

Hi Philipp,

since we are moving heavily toward MWorks 3D, I have now finished writing a plugin for the Eyelink 1000 binocular eye tracker, which connects over a TCP socket.
It reports gaze coordinates for each eye separately. This information now has to be combined to compute the vergence of the eyes (the third dimension), which creates the need for a 3D eye calibrator and a 3D eye window. I am working on the latter, but I can hardly test it before the calibrator supports 3D.

Can you tell me some more about how you envision these components working together? Does the Eyelink 1000 plugin supply uncalibrated, 3-D eye positions (x, y, and z/vergence) that you would then feed into the 3-D eye calibrator (which would have 3 inputs and 3 outputs)?

Such a calibrator could be implemented as a plugin, although you’d probably end up duplicating much of the existing EyeCalibrator code. Maybe we can clean that up a bit and make it easier to reuse/specialize.

And where can I upload my eye-tracker plugin?

I recommend creating an account on GitHub and hosting the code there. If you want to distribute pre-compiled binaries, you can host those on GitHub, too.

Chris

Hi Chris,

the Eyelink 1000 plugin currently supplies raw data for each eye separately.
It seems like a good idea to compute the z coordinate within the Eyelink plugin itself; I believe calibration will be much easier that way. It would also reduce the amount of data that has to be streamed to the data file. I am going to code that in the near future.
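
As a rough illustration of the geometry involved, combining the two eyes' horizontal gaze angles into a depth estimate is a simple triangulation. This is only a sketch in Python, not the plugin code; the function name, the angle convention, and the default interocular distance are all assumptions made for the example:

```python
import math

def depth_from_vergence(theta_left, theta_right, ipd_mm=63.0):
    """Estimate fixation depth (mm) from each eye's horizontal gaze angle.

    theta_left/theta_right: inward rotation of each eye in degrees
    (0 = looking straight ahead).  Assumes the fixation point lies
    near the midline between the two eyes, which are ipd_mm apart.
    """
    vergence = math.radians(theta_left + theta_right)  # total vergence angle
    if vergence <= 0.0:
        return float('inf')  # parallel or diverging gaze: effectively infinite depth
    # Each eye turns inward by ~vergence/2 toward a midline target at depth z,
    # so tan(vergence/2) = (ipd/2) / z
    return (ipd_mm / 2.0) / math.tan(vergence / 2.0)
```

For example, with a 60 mm interocular distance, a target 300 mm away on the midline turns each eye inward by atan(30/300), roughly 5.71 degrees, and the function recovers that depth.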

So let’s say the calibrator takes three coordinates instead of two: how much work would it be to change the existing code to take this third coordinate into account?

Philipp

So let’s say the calibrator takes three coordinates instead of two: how much work would it be to change the existing code to take this third coordinate into account?

Probably the right approach is to make the existing eye calibrator more generic, so that it supports an arbitrary number of eye coordinates. Then both 2D and 3D calibrators would be instances of the same class. I don’t think it would be a lot of work, but let me consider it a bit more.
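
As a sketch of what that generalization might look like (hypothetical Python, not the actual MWorks calibrator API; every name below is invented for the example), the same class can serve both 2D and 3D by fitting one affine map per output coordinate via least squares:

```python
import numpy as np

class LinearCalibrator:
    """Dimension-agnostic linear calibrator sketch: 2D (h, v) and
    3D (h, v, d) are just instances with different n_dims."""

    def __init__(self, n_dims):
        self.n_dims = n_dims
        self.raw = []    # uncalibrated samples
        self.gold = []   # known "gold standard" values

    def add_sample(self, raw, gold):
        assert len(raw) == len(gold) == self.n_dims
        self.raw.append(raw)
        self.gold.append(gold)

    def update_calibration(self):
        # Fit calibrated = offset + raw @ gains: design matrix gets a
        # constant column for the offset, one solution column per output
        X = np.column_stack([np.ones(len(self.raw)), np.asarray(self.raw)])
        self.coeffs, *_ = np.linalg.lstsq(X, np.asarray(self.gold), rcond=None)

    def calibrate(self, raw):
        return np.concatenate(([1.0], raw)) @ self.coeffs
```

A 2D calibrator would simply be `LinearCalibrator(2)`; a second-order variant would add quadratic columns to the design matrix in the same dimension-agnostic way.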

Chris

Hi Philipp,

I’ve thought about the 3D eye calibrator some more. While I still think that the correct approach is for the 2D and 3D calibrators to be instances of the same class, I’m not comfortable with just replacing the existing 2D calibrator, since many folks use and rely on it in their experiments.

Instead, I’d like to implement the 3D calibrator in a plugin. This will allow you to move forward without any changes to (or potential breakage in) the MWorks core. Once the 3D calibrator is fully developed and well tested, we can consider moving it into the core, potentially replacing the current eye calibrator.

I’m happy to work on the 3D calibrator implementation, but I’ll have to schedule it in with the other stuff I’m working on. What time frame do you need it in? A week? A month? Please let me know so I can prioritize it appropriately.

Thanks,
Chris

Hi Chris,

I think I will be done building and calibrating the 3D hardware within two weeks from now. The work on the 3D EyeWindow will take more time, though. I haven’t yet begun, and it’s not exactly at the top of my queue right now.
Anyway, it would be nice to test the setup in two to three weeks if you can make it by then.

As for the Eyelink code sharing, I will set up a GitHub account later today.

Thank you!
Philipp

… and here it is

Anyway, it would be nice to test the setup in two to three weeks if you can make it by then.

That sounds fine. I should be able to have the 3D calibrator done by then.

… and here it is

Thanks!

Chris

Hi Philipp,

I have a quick question: In the current (2D) eye calibrator, the parameter names are eyeh_raw/calibrated and eyev_raw/calibrated. In the new 3D calibrator, what should the parameters for the third coordinate be called? I’m thinking eyed_raw/calibrated, where “d” stands for “depth”. Does that sound OK, or is there another name that would be more appropriate or meaningful?

Thanks,
Chris

Hi Chris,

eyed_raw/calibrated sounds fine to me, although I don’t use that terminology myself, because the depth axis is technically also horizontal. Since “horizontal” is already defined to be the x-axis in the plugin, I have been mapping my notation (eye_x, eye_y) onto eyeh and eyev, and likewise I would use names like eye_foo_z for the “depth” axis.
But I think this is only a matter of taste. Everybody will understand that “d” stands for “depth”.

Greetings,
Philipp

Hi Philipp,

I have an initial version of the 3D eye calibrator ready for you to try. You can get the code from GitHub at

I haven’t made any binary packages, so you’ll have to download and compile it yourself. Note that you’ll need a very recent nightly build (2011/2/12 or later) in order to compile and use the plugin, as it relies on some changes to the MWorks core that I made last week.

The plugin provides both linear and second-order calibrators. They work just like the existing 2D calibrators, except that they have two additional parameters (eyed_raw and eyed_calibrated, as we’ve discussed). It also implements a component called “Fake Calibratable Object 3D”, which provides known, “gold standard” values to calibrate against and serves as a stand-in for a 3D fixation point. The plugin includes editor definitions for all the objects, so once you’ve compiled and installed it, you should see them in the editor’s library pane.

For this first pass, I took the easy path, starting with the existing 2D calibrator code and extending it to 3D. As I’ve said before, I don’t think this is the “right” way to do it, but it was the quickest way to get you something you can use. I still plan to make the code more general and merge the 2D and 3D calibrators.

Also, I should warn you that I’ve done almost no testing on the new calibrators. I’ll be doing more in the coming days and will fix/update the code as needed, but just be aware that you may run into some bugs.

Please give this a try and let me know how it works for you. I’ll keep you posted on further updates.

Thanks,
Chris

Hi Chris,

today I tested the plugin, but I couldn’t get it to work…

The first issue was that the server crashed whenever the client was running in 32-bit mode. I don’t know whether this was caused by the recent updates in the nightly build or by the plugin, but there was no reason for the client to run in that mode, so I changed it, and the problem is resolved.

I was able to compile your plugin and insert it into an experiment. However, I don’t quite get the concept of the “Fake Calibratable Object 3D”. How do I use it? Of course, an attempt to use a standard fixation point to provide the gold standards fails. But even when I use the “Take Calibration Sample” action and explicitly define the fake object to provide the reference values, it fails by throwing an error (not enough input arguments).

I guess this is because I need at least one old-style fixation point to be present; otherwise I lose the trigger-watching functionality and can’t run my calibration with a real subject. Maybe “Take Calibration Sample” doesn’t work and ignores the settings? Up to now I have just used the “Begin Averaged Calibration Sample” routine, which lacks a parameter for specifying the calibratable object…

So how exactly is this supposed to work?

Sorry for the confusion,
Philipp

Hi Philipp,

The first issue was that the server crashed whenever the client was running in 32-bit mode.

That’s strange. Can you send me a crash report?

However, I don’t quite get the concept of the “Fake Calibratable Object 3D”. How do I use it? Of course, an attempt to use a standard fixation point to provide the gold standards fails. But even when I use the “Take Calibration Sample” action and explicitly define the fake object to provide the reference values, it fails by throwing an error (not enough input arguments).

I’ve attached an experiment that demonstrates how to use the 3D calibrator. Basically, the x_gold, y_gold, and z_gold parameters of the fake calibratable object take the place of the fixation dot’s position coordinates. In the experiment, I use a linear 3D calibrator and generate 10 calibration samples, incrementing the raw coordinates from 1 to 5 in steps of 0.5 (adding a small random offset each time) and using known linear functions to generate the gold-standard values for each sample. I then update the calibration and verify that the calibrator accurately predicts the calibrated values when the raw coordinates are set to 6. The experiment runs fine for me (although the server is crashing intermittently, which I need to investigate).
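
For anyone following along without MWorks at hand, the logic of that demo can be sketched in a few lines of Python. The particular linear gold-standard functions below are illustrative choices, not necessarily the ones in the attached experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

# Known linear gold-standard functions (illustrative choices)
def gold(raw):
    x, y, z = raw
    return np.array([2.0 * x + 1.0, -1.5 * y + 4.0, 0.5 * z - 3.0])

raws, golds = [], []
for r in np.arange(1.0, 5.01, 0.5):            # raw coords from 1 to 5, step 0.5
    raw = r + rng.normal(scale=0.05, size=3)   # small random offset per axis
    raws.append(raw)
    golds.append(gold(raw))                    # gold-standard values for this sample

# "Update calibration": fit one affine map per output coordinate
X = np.column_stack([np.ones(len(raws)), np.asarray(raws)])
coeffs, *_ = np.linalg.lstsq(X, np.asarray(golds), rcond=None)

# Verify: with all raw coordinates set to 6, the calibrated values
# should match the gold-standard functions evaluated at 6
pred = np.array([1.0, 6.0, 6.0, 6.0]) @ coeffs
print(pred)   # close to gold((6, 6, 6)) == [13, -5, 0]
```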

Can you try my example and see if it works for you?

Chris

Attachment: linear_cal_3d_demo.xml (3.78 KB)

(although the server is crashing intermittently, which I need to investigate)

Ugh – it turns out there are a bunch of memory bugs in the core calibrator code. I’ll check in the fixes this afternoon, so tonight’s nightly build should be a lot less crashy.

Chris

Hi Chris,

first, for some reason I wasn’t able to reproduce the crashing issue. It used to occur whenever I tried to load an experiment, but now it’s gone… strange. Sorry for the false alarm.

With the help of your example I now managed to have a (supposedly) 3D calibrated system! Thanks!

Apparently, the error was that I tried to set the gold-standard z coordinate to the literal value zero. Once I used a variable for it instead, it didn’t throw any errors anymore. Second, because I don’t have the 3D stimuli yet, I calibrated against only one value of z, namely zero, and that failed as well: it didn’t throw an error, but the calibrated z output was always zero, regardless of the input. With some random values for the z reference, the calibrator produced some (random) calibration for the depth axis. X and y, on the other hand, calibrated accurately as usual, which is good!
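
The always-zero output with a single z value is exactly what a linear fit has to do: when every gold-standard z is the same constant, the best-fit (and only zero-residual) line is that constant, so the calibrated output ignores the raw input entirely. A toy illustration in Python, with made-up values:

```python
import numpy as np

# Raw z varies, but every gold-standard z is the same constant (0)
raw_z = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
gold_z = np.zeros_like(raw_z)

# Fit calibrated_z = a + b * raw_z by least squares
X = np.column_stack([np.ones_like(raw_z), raw_z])
(a, b), *_ = np.linalg.lstsq(X, gold_z, rcond=None)
print(a, b)        # both ~0: the fit is the constant function 0

# Any raw input now "calibrates" to zero -- the depth axis carries
# no information until the samples span more than one gold z value
print(a + b * 10.0)
```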

I am planning to do a real 3D calibration routine with a subject at some point in the future. To do this, I might just move the monitor, as long as we don’t have the 3D fixation point yet. Once that’s done, I will confirm that everything works fine. For now, however, it seems OK!

Something that would be nice, though, is to have the Averaged Calibration Sample method back. I have the feeling that recording samples over a period of about a second is much more precise than taking a single sample at a semi-arbitrary point in time. Do you think it would be possible to get that working again?
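
That intuition is easy to quantify: for independent noise, the standard error of an n-sample average shrinks by a factor of sqrt(n). A quick simulation, with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

true_pos, noise_sd = 5.0, 0.5      # hypothetical fixation value and eye-signal noise
trials, n = 2000, 100              # 2000 calibrations, 100 samples per average

single = rng.normal(true_pos, noise_sd, size=trials)                # one sample each
averaged = rng.normal(true_pos, noise_sd, size=(trials, n)).mean(axis=1)

# The averaged estimates cluster ~sqrt(n) = 10x more tightly around true_pos
print(single.std(), averaged.std())
```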

Thank you!!
Philipp

for some reason I wasn’t able to reproduce the crashing issue. It used to occur whenever I tried to load an experiment, but now it’s gone… strange. Sorry for the false alarm.

No problem. If it happens again, please let me know.

Something that would be nice, though, is to have the Averaged Calibration Sample method back.

It’s still there, and it works with the 3D calibrator. I’ve attached a new version of my example experiment that uses averaged samples.

Chris

Attachment: linear_cal_3d_averaged_demo.xml (4.35 KB)

Indeed! Thank you!!