Hi Hokyung,
If the latency patterns look like the last graph I showed above (copied below), where certain “tiers” form and the latency stays relatively stable within each tier for a prolonged period, I thought I could assume that the latency would be roughly constant within a trial. To properly align and analyze with the neural data, I would then add the latency I measure at each trial start to the event times of the subsequent display updates throughout that trial. Does this make sense? Let me know if I’m mistaken.
I understand what you’re saying. Ideally, the latency inherent in your display updates would be constant, and adding that constant latency to predicted_output_time would get you very close to the actual display update time.
But you’ve demonstrated that the latency is not constant. While there are “tiers” within which the latency isn’t changing, we don’t know what causes them, and we can’t predict when the latency will shift between them. Until we get a handle on why these shifts are happening, I wouldn’t be comfortable assuming that the latency will be consistent within a trial.
As I said previously, my guess is that your Mac isn’t consistently providing frames at 120Hz. It’d be interesting to see what happens if you perform your tests again with the display running at 60Hz. If the “tiers” disappear, that would be strong evidence that my guess is correct.
From what I’ve understood, MWorks determines there was a frame skip based on the predicted_output_time being more than one refresh period in the past.
MWorks compares the predicted output time for the current display update to the predicted output time for the previous display update. If the time for the previous update is more than one refresh period in the past, then we missed one or more display refreshes in between. For example, if the current predicted output time is t, the display’s refresh period is T, and the predicted output time for the previous update was t-2T, then we missed the update at t-T.
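To make the arithmetic concrete, here’s a small sketch of that check. This is not MWorks’ actual implementation, and the function name is made up; it just restates the comparison described above:

```python
def count_skipped_frames(t_prev: float, t_curr: float, refresh_period: float) -> int:
    """Return how many refreshes were missed between two consecutive updates.

    t_prev, t_curr: predicted output times of the previous and current
    display updates (seconds).
    refresh_period: the display's refresh period T (seconds).
    """
    # If the previous update's predicted output time is more than one
    # refresh period before the current one, the intervening refreshes
    # were skipped.
    elapsed = t_curr - t_prev
    return max(round(elapsed / refresh_period) - 1, 0)

# The example from above: previous update at t - 2T, current at t,
# so the refresh at t - T was missed (1 skipped frame).
T = 1 / 120  # 120Hz display
t = 10.0
print(count_skipped_frames(t - 2 * T, t, T))  # 1
```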
But you also say:
A frame skip may also mean that the previous frame made it to the display later than expected (i.e. on the “missed” refresh cycle).
In this case, how does MWorks know that there was a frame skip, if predicted_output_time was within one refresh period in the past?
Sorry, I wasn’t trying to say this was a different condition for a frame skip. The frame skips reported by MWorks are always for the reason I cited above. But, in addition, a frame skip reported by MWorks may indicate that the previous frame made it to the display late. In fact, this is probably what happened, but unless you were monitoring display updates via a photodiode, there’s no way to know.
Is there a predicted output time associated with every frame MWorks renders, even if there was no update_stimulus_display() call?
The OS is supposed to give MWorks a chance to draw on every display refresh. Every time this happens, MWorks compares the predicted output time for the current refresh with that of the previous refresh and reports skipped frames if the difference is greater than one refresh period. This happens even when MWorks doesn’t need to update the display. As long as the stimulus display window is visible, these checks are happening.
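The key point is that this check is stateful and runs on every refresh callback, whether or not anything was redrawn. A sketch of that idea (hypothetical code, not MWorks internals):

```python
class FrameSkipMonitor:
    """Tracks predicted output times across refreshes.

    The on_refresh method is meant to be called on every display refresh,
    even when no stimulus display update was requested.
    """

    def __init__(self, refresh_period: float):
        self.refresh_period = refresh_period
        self.last_time = None

    def on_refresh(self, predicted_output_time: float) -> int:
        """Return the number of frames skipped since the previous refresh."""
        skipped = 0
        if self.last_time is not None:
            gap = predicted_output_time - self.last_time
            # Report a skip when the gap exceeds one refresh period.
            skipped = max(round(gap / self.refresh_period) - 1, 0)
        self.last_time = predicted_output_time
        return skipped
```

For example, a monitor fed refresh times 0, T, then 3T would report one skipped frame on the last call, because the refresh at 2T never arrived.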
Taking a step back, I’m curious if you still think the plan to add the latency is good, or if you think there is a better way to handle this. I suppose I could turn the photodiode on and off multiple times within a trial, but I have multiple types of tasks with different phases and I think it would be ideal for analysis and later parsing to have a single photodiode signal for each trial. I’d appreciate your guidance.
Like I said already, I would first want to better understand what’s happening with your display to produce the variable latency. If you could get the measured latency to be constant (e.g. by switching to a 60Hz refresh rate), then I would probably feel OK about your plan.
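If the latency does turn out to be constant within a trial, the plan you describe is just an offset applied to that trial’s predicted update times. A minimal sketch, with made-up names (the latency measurement itself would come from your photodiode at trial start):

```python
def align_event_times(event_times, trial_latency):
    """Shift predicted display-update times by the latency measured at
    the start of the trial, yielding estimated actual update times.

    event_times: predicted output times for updates within one trial (seconds).
    trial_latency: latency measured at that trial's start (seconds).
    """
    return [t + trial_latency for t in event_times]

# e.g. a 12.5 ms latency measured at trial start
aligned = align_event_times([1.000, 1.250, 1.500], 0.0125)
# aligned[0] is approximately 1.0125
```

Again, this only works if the latency really doesn’t shift between tiers mid-trial, which is exactly the assumption I’d want to verify first.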
That said, if you really need to know when a particular display update happened, you need to use a photodiode. Ideally, the photodiode signal would be recorded by the same system as the neural data, so that the two would be on the same clock. But, if your stimulus presentation is complex, and particularly if it includes dynamic stimuli (e.g. videos, drifting gratings), this could be tricky to implement.
If it would help, I’d be happy to chat more about this over Zoom.
Cheers,
Chris