Basic eye monitor backend

Hi Chris,

I’m looking to optimize my online saccade detection, as it needs to be precise for measuring stimulation-generated saccades and for adapting my experiment online.

I am using the following to keep track of whether the eye is in a saccade:

boxcar_filter_1d (
    in1 = eye_x_calibrated
    out1 = eye_x
    width_samples = 5
    )

boxcar_filter_1d (
    in1 = eye_y_calibrated
    out1 = eye_y
    width_samples = 5
    )

basic_eye_monitor (
    eyeh_calibrated = eye_x
    eyev_calibrated = eye_y
    eye_state = eye_in_saccade
    width_samples = 5
    saccade_entry_speed = 40
    saccade_exit_speed = 30
    )

I plan to optimize by taking the raw data, running offline saccade detection algorithms on it, and tuning the parameters of the basic eye monitor to align with those results.

I have a couple questions to that end:

  1. Is it possible to get the backend code for basic_eye_monitor? I can infer the general method based on the entry and exit speeds, but it would be nice to have the exact implementation so that I can test different parameters offline on the raw eye data.
  2. Would there be a way to define and use an online, custom eye monitor that I implement myself, e.g., a saccade detection algorithm that best captures what I need?

Thank you for your guidance,
Hokyung

Hi Hokyung,

Apologies for the delayed response.

Is it possible to get the backend code for basic_eye_monitor?

The eye monitor code is old and pretty gnarly. If you want to look at it, see EyeMonitors.cpp and FilterTransforms.cpp, but I’ll warn you that it’s not easy to follow.

Having looked through it myself, I don’t see anything surprising in how the eye state is determined. I think your inference from the parameters would be close enough.
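To give you a concrete starting point for your offline testing, here’s a sketch of that kind of entry/exit-speed detector in Python. To be clear, this is my reconstruction from the parameter names, not the actual code in EyeMonitors.cpp; the sample rate is a placeholder, and I’m assuming positions in degrees and speeds in degrees/second:

```python
import math

def detect_saccades(xs, ys, sample_rate_hz,
                    entry_speed=40.0, exit_speed=30.0):
    """Hysteresis-based saccade detector (a sketch, not MWorks' code).

    xs, ys: calibrated eye positions in degrees.
    entry_speed / exit_speed: thresholds in degrees/second, mirroring
    saccade_entry_speed and saccade_exit_speed.
    Returns a boolean "in saccade" flag for each sample.
    """
    in_saccade = False
    states = []
    prev_x, prev_y = xs[0], ys[0]
    for x, y in zip(xs, ys):
        # Instantaneous speed from consecutive samples
        speed = math.hypot(x - prev_x, y - prev_y) * sample_rate_hz
        if in_saccade:
            # Stay in the saccade until speed drops below the exit threshold
            if speed < exit_speed:
                in_saccade = False
        else:
            # Enter a saccade when speed exceeds the entry threshold
            if speed > entry_speed:
                in_saccade = True
        states.append(in_saccade)
        prev_x, prev_y = x, y
    return states
```

The two thresholds give you hysteresis: once a saccade starts, speed has to fall below the (lower) exit threshold to end it, which avoids flickering in and out of the saccade state near a single threshold.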

Would there be a way to define and use an online, custom eye monitor that I implement myself, e.g., a saccade detection algorithm that best captures what I need?

Sure. While you could probably do it in pure MWEL, I think it would be easier with Python code.

If you want, I can work up an example of how to do this (and maybe extract the gist of MWorks’ algorithm in the process). Just FYI, I’m going to be away on vacation next week, so I wouldn’t be able to work on this until the following week at the earliest.

Cheers,
Chris

Hi Chris,

Yes, it would be great if you could show me a minimal example of how a Python-based eye monitor would work. I am planning to put it to use in a few weeks, so toward the end of next week would work well for me. Thank you!

Best,
Hokyung

Hi Hokyung,

I’ve attached an example of a custom eye monitor implemented in Python. As written, it mostly reproduces MWorks’ standard eye monitor. The only (intentional) difference is that it omits the internal boxcar filters that the standard eye monitor applies to both the incoming eye positions and the computed velocities. If you want those, adding them should be straightforward.
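If it helps, a boxcar filter is just a trailing moving average, so adding one to the example could look something like this (a sketch under my own assumptions; the edge handling in MWorks’ filter may differ):

```python
from collections import deque

def boxcar_filter(samples, width_samples=5):
    """Trailing moving-average (boxcar) filter (sketch).

    Averages each sample with up to width_samples - 1 preceding
    samples; the window is simply shorter at the start of the stream.
    """
    window = deque(maxlen=width_samples)
    out = []
    for s in samples:
        window.append(s)
        out.append(sum(window) / len(window))
    return out
```

You’d apply one instance to each position stream (and, if you want to match the standard eye monitor more closely, another to the computed velocities) before the threshold comparison.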

If you have any questions, please let me know.

Cheers,
Chris
custom_eye_monitor.zip (1.7 KB)

Hi Chris,

Thanks a lot for this! I’ll implement it and let you know if I have any questions.