Latency from MWorks display time to actual display

Hi Chris,

I have some questions about the latency from MWorks' reported display time to the actual display time.

I tested the latency with a photodiode on my display (which sends an analog input back to MWorks via an Arduino) and compared the value of the predicted_output_time variable to the times at which the photodiode signal crossed a threshold. Here is the latency plot I got.


Latency is sync_pulse_crossing - t_stimuli_on, where t_stimuli_on is set in MWorks every time the photodiode comes on:

queue_stimulus(photodiode)
update_stimulus_display(predicted_output_time=t_stimuli_on)
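
For reference, the comparison I'm doing is roughly the following (just a sketch, assuming the mworks.data Python API; 'photodiode_input' stands in for my actual analog input variable, and the threshold value is arbitrary):

    import numpy as np
    from mworks.data import MWKFile

    THRESHOLD = 2.5  # photodiode threshold; depends on the circuit

    f = MWKFile('my_session.mwk2')
    f.open()

    # Predicted display times: the *values* of t_stimuli_on (MWorks time, microseconds)
    predicted = np.array([e.data for e in f.get_events(codes=['t_stimuli_on'])
                          if e.data], dtype=float)

    # Actual display times: rising threshold crossings of the photodiode signal
    pd_events = f.get_events(codes=['photodiode_input'])
    pd_times = np.array([e.time for e in pd_events], dtype=float)
    pd_values = np.array([e.data for e in pd_events], dtype=float)
    rising = np.flatnonzero((pd_values[1:] >= THRESHOLD) & (pd_values[:-1] < THRESHOLD))
    crossings = pd_times[rising + 1]

    f.close()

    # Latency: first crossing at or after each predicted display time
    latencies = []
    for t in predicted:
        later = crossings[crossings >= t]
        if later.size:
            latencies.append((later[0] - t) / 1e6)  # seconds

    print('mean latency: %.4f s, std: %.4f s' % (np.mean(latencies), np.std(latencies)))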

As you can see, the average latency is around 20ms. The standard deviation of the latency values is 6.59ms. Is this expected? If not, is there anything wrong with the test? FYI, I am using a brand new OLED display as my monitor so I expect it to be low-latency.

One other thing I noticed was that if I instead use the time of the t_stimuli_on variable events rather than the value (i.e. predicted_output_time), I see an almost identical graph, with the values just shifted by approximately 25ms. Here's the graph:

Given this, I went ahead and computed the difference between the t_stimuli_on value and the corresponding event time for 64 such pairs.

Mean difference: 0.02451 seconds
Standard deviation: 0.00029 seconds
Maximum difference: 0.02478 seconds
Minimum difference: 0.02371 seconds

So it seems like predicted_output_time effectively adjusts by at most about 1ms, and mostly just adds around 24ms to whenever update_stimulus_display() was called. Is this also to be expected? With a 60Hz refresh rate on the display, each frame would be updated every 16.6ms or so. If update_stimulus_display() calls happen anywhere within this window, shouldn’t the adjustment made for predicted_output_time also reflect this? The adjustment seems relatively small.

Let me know if I’m misunderstanding anything here. Thank you for your help!

Hi Hokyung,

As you can see, the average latency is around 20ms. The standard deviation of the latency values is 6.59ms. Is this expected? If not, is there anything wrong with the test? FYI, I am using a brand new OLED display as my monitor so I expect it to be low-latency.

The average latency is entirely plausible, but the variance is definitely not expected.

There are a couple of reasons why the average latency might be higher than you expect:

First, you shouldn’t assume that your display is low latency. Most displays and TVs apply some kind of image processing (e.g. motion smoothing) by default, and that can add significant latency. Some or most of the 20ms average latency could be due to this. If your display has a “game mode”, you should enable that, as the whole point of such modes is to minimize latency. If you really want to know the inherent latency of your display, you should use a lag tester. (I believe the Jazayeri lab purchased a few of these some years ago. I have an old, 720p model that you could borrow, but you’ll probably want to test at the actual resolution you use in your experiments.)

Second, where you place the photodiode on the display generally matters. Almost all of the displays I’ve tested refresh left to right, top to bottom, over the course of the refresh period. Therefore, if you place the photodiode on the lower-right corner of the display, you’ll measure a threshold-crossing time almost one refresh cycle later than if you’d placed it on the upper-left corner. For a 60Hz display, this could be the difference between a 3-4ms latency (upper left) and a 20ms latency (lower right).
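
To put rough numbers on that (back-of-the-envelope arithmetic only, not anything MWorks computes for you):

    # Approximate extra scan-out delay vs. vertical photodiode position,
    # assuming a steady top-to-bottom scan and ignoring the blanking interval
    refresh_rate = 60.0                       # Hz
    refresh_period_ms = 1000.0 / refresh_rate
    for fraction_down in (0.0, 0.5, 1.0):     # top, middle, bottom of the screen
        print('%3.0f%% down the screen: ~%.1f ms extra' %
              (fraction_down * 100, fraction_down * refresh_period_ms))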

As for the variance in latency, one possibility is that you’re just getting the data at irregular intervals. Regarding your Arduino:

  • What specific model are you using?

    Some models (like the original Uno) are just really slow, so you may not be getting analog reads as regularly as you expect.

  • What is the value of the Firmata device’s data_interval parameter in your experiment?

    This parameter “discretizes” the possible times associated with the threshold-crossing events, so it may be making the variance look larger than it really is.
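
To illustrate that last point, here's a quick simulation (my own sketch in Python, not MWorks code) of how a fixed sampling interval alone can inflate the apparent variance, even when the true latency is perfectly constant:

    import numpy as np

    rng = np.random.default_rng(0)
    true_latency = 0.015      # seconds; assumed constant for the simulation
    data_interval = 0.019     # seconds; just an example sampling interval
    onsets = rng.uniform(0, 100, size=1000)       # stimulus onset times (s)
    true_crossings = onsets + true_latency
    # Each crossing is only reported at the next scheduled analog read
    measured = np.ceil(true_crossings / data_interval) * data_interval
    measured_latency = measured - onsets

    # ~data_interval / sqrt(12), even though the true latency never varied
    print('std of measured latency: %.4f s' % measured_latency.std())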

I wouldn’t expect the display itself to have a variable latency, though I suppose it’s possible. If it does, a lag tester might reveal it.

When you run your test, do you see any warnings about skipped refresh cycles? I’m inclined to think there must have been one associated with the outlier at index 43 on your graph, if nowhere else.

One other thing I noticed was that if I instead use the time of the t_stimuli_on variable events rather than the value (i.e. predicted_output_time), I see an almost identical graph, with the values just shifted by approximately 25ms.

The interval between the invocation of update_stimulus_display and when the rendered frame is actually sent to the display (i.e. the time predicted via predicted_output_time) depends on how far “ahead” of the display refresh cycle the graphics hardware is running. Often it’s just one frame ahead (i.e. rendering the next frame while the current one is on screen), but 2-3 frames ahead is also quite possible. Hence, I don’t think the 25ms offset you measured is at all unreasonable.

If update_stimulus_display() calls happen anywhere within this window, shouldn’t the adjustment made for predicted_output_time also reflect this?

update_stimulus_display waits for the thread that updates the stimulus display to submit all drawing commands to the graphics hardware. The execution loop of that thread is tied to the refresh cycle of the display. If everything is running smoothly, with a consistent load on the GPU, you would expect the executions to begin and end at regular intervals. Therefore, no matter when an update_stimulus_display invocation begins, it should end around the same point in the refresh cycle, which corresponds to the time that t_stimuli_on gets its value. Given this, I think the small variance you’re seeing here makes sense.

Cheers,
Chris

Hi Chris,

Thank you for the detailed reply. My monitor supports up to 240Hz, but I think the HDMI cables weren’t supporting that; I have ordered new cables and will test again when they arrive. My photodiode is in the bottom-right corner of the OLED monitor, so, as you said, it makes sense that the latency is a bit large.

Meanwhile, I tried doing the test again while setting data_interval, which I hadn’t set at all before. I am using an Arduino Nano 33 IoT for my Firmata device. I tried a test with data_interval = 5ms, and another with data_interval = 1ms. I did about 80 trials of photodiode onset for each setting. These are the results I got.
For data_interval = 5ms,
Latency Statistics:
Mean latency: 0.019282 seconds
Std deviation: 0.005747 seconds
Min latency: 0.009946 seconds
Max latency: 0.029778 seconds

For data_interval = 1ms,
Latency Statistics:
Mean latency: 0.017162 seconds
Std deviation: 0.005742 seconds
Min latency: 0.008711 seconds
Max latency: 0.025796 seconds

What I’m seeing generally feels mysterious to me. The latency seems to consistently decrease with small variance, then jump up all of a sudden, and then start going back down. Each test of 80 trials was around 5 minutes long in real time. Do you have a sense of what might be going on? Could it be related to the OLED monitor refreshing row by row?

Thank you for your guidance!
Hokyung

Hi Hokyung,

I tried doing the test again while setting data_interval, which I hadn’t set at all before.

Ah, I forgot that parameter wasn’t required for analog input. FYI, the default (at least in StandardFirmata and StandardFirmataBLE) is 19ms.

I am using an Arduino Nano 33 IoT for my Firmata device.

Great. That board has a modern, relatively fast processor, so I’d expect that your analog reads are executing close to the schedule you requested.

What I’m seeing generally feels mysterious to me. The latency seems to consistently decrease with small variance, then jump up all of a sudden, and then start going back down. Each test of 80 trials was around 5 minutes long in real time. Do you have a sense of what might be going on? Could it be related to the OLED monitor refreshing row by row?

That is indeed mysterious. The “jump” looks to be almost exactly one refresh period (~17ms). It’s almost like the starting location of the refresh cycle is oscillating, steadily moving from the top to the bottom of the display and then jumping back. Or could the rate at which the GPU delivers frames and the actual display refresh rate be slightly different, requiring a periodic re-sync that produces the jump? I don’t know how or why that would be happening, but maybe it’s possible. Either way, this isn’t something I’ve seen before, although I haven’t measured any OLED displays with a photodiode, either.

I was hoping to find the equivalent of this video (which is still super interesting) for the refresh cycle of an OLED display. This one is pretty good, although it doesn’t shed any light on what might be happening with your display.

Chris

Another question: Is the display connected via HDMI cable directly to an HDMI port on the Mac, or does it go through any adaptors and/or I/O hubs?

Hi Chris,

Thanks for the reply.
For all previous tests, the Mac sent its HDMI output to a splitter adapter, which split the signal to two displays, one OLED and the other standard (to mirror and monitor the OLED). I was also using a standard HDMI cable, which was preventing the OLED display from doing better than 60Hz.

Given this, I modified my setup and tried another test without the mirroring screen. Now I have confirmed that the OLED display refreshes at 120Hz, which is the refresh_rate() value that I see in MWorks as well. This is the result with the new setup:

Latency Statistics:
  Mean latency: 0.023788 seconds
  Std deviation: 0.004647 seconds
  Min latency: 0.012056 seconds
  Max latency: 0.035564 seconds


This also seems quite mysterious, in a way quite different from previous behavior. The standard deviation on the latency slightly decreased, but the latencies seem to be grouped in tiers. I think the one 35+ms latency event is likely a skipped frame issue. Other than that, I’m not too sure how to interpret this.

By the way, just FYI, when I connect a third standard monitor (capped at 60Hz) with a separate HDMI cable to the Mac and ask it to mirror the OLED display, MWorks starts consistently skipping 60 frames per second. If I stop mirroring, frame skipping will go back to normal. I could get another monitor with a fast refresh rate and the necessary adapters to test again. But in the meantime, any guidance with the 120Hz OLED’s behavior would be appreciated!

Thanks,
Hokyung

Hi Hokyung,

For all previous tests, the Mac sent its HDMI output to a splitter adapter, which split the signal to two displays, one OLED and the other standard (to mirror and monitor the OLED).

Interesting. The splitter could potentially be the source of the “oscillating” behavior you measured. If the refresh rates of the two displays connected to the splitter are slightly different (e.g. 59.94Hz vs 60Hz), the Mac is only going to drive one of them at its true refresh rate. The other will necessarily be out of sync with the updates the Mac is sending out, with the offset growing or shrinking over time until it eventually comes back into alignment.

Note that I’m just speculating here. I have no idea if an HDMI splitter actually would or could work this way, but it at least seems plausible. Also, the fact that the oscillation seems to be absent from your single-monitor results provides some evidence in favor of this theory.
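
To put a rough number on that drift (illustrative arithmetic only, using the hypothetical 59.94Hz vs 60Hz mismatch above):

    # How fast the phase offset between two nominally-60Hz refresh rates drifts
    f1, f2 = 60.0, 59.94                   # Hz (assumed rates)
    drift_per_second_ms = abs(1.0/f2 - 1.0/f1) * f1 * 1000.0
    beat_period_s = 1.0 / abs(f1 - f2)     # time for the offset to wrap a full frame
    print('offset grows by ~%.2f ms per second' % drift_per_second_ms)   # ~1.0 ms
    print('offset wraps around every ~%.1f s' % beat_period_s)           # ~16.7 s

With a smaller rate mismatch, the cycle would be correspondingly longer, so the actual period of any such oscillation depends entirely on the real rates involved.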

This also seems quite mysterious, in a way quite different from previous behavior. The standard deviation on the latency slightly decreased, but the latencies seem to be grouped in tiers. I think the one 35+ms latency event is likely a skipped frame issue. Other than that, I’m not too sure how to interpret this.

It looks like the two main tiers are separated by a single 120Hz refresh period (~8ms). This suggests that the Mac isn’t consistently providing frames at 120Hz. Are you seeing skipped refresh warnings in MWorks? If not, that’s a little distressing.

By the way, just FYI, when I connect a third standard monitor (capped at 60Hz) with a separate HDMI cable to the Mac and ask it to mirror the OLED display, MWorks starts consistently skipping 60 frames per second. If I stop mirroring, frame skipping will go back to normal.

Well, more displays means more work for the graphics card, although you’d think that display mirroring would be a pretty cheap operation.

This may be due to the different refresh rates: Each update on the mirror display (60Hz) may block the next update on the OLED display (120Hz), making the OLED miss every other frame. It’d be interesting to see if the issue goes away if the mirror display is also 120Hz.

Also, when you say “frame skipping will go back to normal”, is “normal” no frame skips?

Chris

Hey Chris,

Thanks for your input. By “normal”, I mean little or no frame skipping; it still happens occasionally. On a related note, I was wondering if there was a way to know whether a frame skip happened by looking at the event file?

I’m considering just always going by the photodiode onset time for analysis, and adding whatever latency I see at the beginning of the trial to the rest of the events for that trial. But then I’m not sure how to handle latencies that get created due to frame skips happening right at photodiode onset. If you have any suggestions for making this robust, I’d appreciate it. Thanks!

Best,
Hokyung

Hi Hokyung,

I was wondering if there was a way to know whether a frame skip happened by looking at the event file?

Yes, you can extract any warning messages and look for ones that contain “display refresh cycle”. I’ve attached a Python script that demonstrates how to get the warning messages.
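
In outline, it does something like this (a minimal sketch assuming the mworks.data API and that message events store their text under a 'message' key; see the attached file for the real thing):

    from mworks.data import MWKFile

    f = MWKFile('my_session.mwk2')
    f.open()
    for e in f.get_events(codes=['#announceMessage']):
        msg = e.data.get('message', '') if isinstance(e.data, dict) else ''
        if 'display refresh cycle' in msg:
            print('%d: %s' % (e.time, msg))
    f.close()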

I’m considering just always going by the photodiode onset time for analysis

Yes, that makes sense, as the photodiode time is the only actual measurement you have of when the display update occurred. (Again, predicted_output_time only gives a prediction of when this should happen, without any guarantee as to when it will actually happen.)

and adding whatever latency I see at the beginning of the trial to the rest of the events for that trial

I don’t understand why you would do this. The “latency” we’ve been discussing is just the difference between when you ask MWorks to update the display and when it actually updates. It doesn’t affect the timing of other events in your experiment (e.g. receipt of input data).

But then I’m not sure how to handle latencies that get created due to frame skips happening right at photodiode onset.

A frame skip doesn’t mean that MWorks attempted to draw something but failed. It means that MWorks wasn’t given the opportunity to draw on a particular display refresh.

We only know this happened retrospectively: When drawing the current frame, we look at the predicted output time for the previous frame we rendered. If it’s more than one refresh period in the past, we know that we missed a drawing opportunity.
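
In code terms, the check amounts to something like this (an illustrative sketch of the logic, not the actual MWorks source):

    # Number of display refreshes missed between two consecutive rendered frames
    def skipped_refreshes(prev_predicted, current_predicted, refresh_period):
        gap = current_predicted - prev_predicted
        # If the previous frame's predicted output time is more than one refresh
        # period before the current one, the refreshes in between were missed.
        return max(int(round(gap / refresh_period)) - 1, 0)

    # Example: at 60Hz, a gap of two refresh periods means one missed refresh
    print(skipped_refreshes(0.0, 2 * (1.0 / 60.0), 1.0 / 60.0))   # -> 1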

A frame skip may also mean that the previous frame made it to the display later than expected (i.e. on the “missed” refresh cycle). We can’t know for sure via software alone, but that’s what the photodiode is for: to tell you when the display was really updated, irrespective of how long it took the update to reach it.

So, as long as you’re using photodiode onset as the “on” time for your stimulus, I don’t see why frame skips would affect your data analysis.

Cheers,
Chris
print_warnings.py (587 Bytes)

Hi Chris,

Thanks for the reply and the demo for frame skip analysis.

I didn’t explain very well what I am trying to do and why.
In analyzing neural data in relation to MWorks data, I am not particularly interested in the moment of trial start, when the first fixation stimulus pops up. Different kinds of stimuli pop up and move throughout the trial, and ideally, I would have a ground-truth measurement of when all of those display changes actually occur. But I only have an on/off photodiode signal, so my plan was to use the latency at trial start to infer the display change times for the rest of the trial.

If the latency pattern looks like the last graph I showed above (copied below), where the latencies form “tiers” and stay relatively stable over prolonged stretches, I thought I could assume that the latency would be roughly the same within a trial. Thus, to properly align and analyze with the neural data, I would add the latency measured at each trial start to the event times of the subsequent display updates throughout that trial. Does this make sense? Let me know if I’m mistaken.

Assuming the above logic makes sense, I was asking how to handle cases where the assumption breaks, namely when the display latency at trial start differs markedly from the rest of the trial because of a one-time frame skip right at trial start. You mentioned before that the peak latency around 330s in the plot above could be due to such a frame skip. I was asking what the best way to handle such cases would be.

Given your reply, I think I’m a bit confused. From what I’ve understood, MWorks determines there was a frame skip based on the predicted_output_time being more than one refresh period in the past. But you also say:

A frame skip may also mean that the previous frame made it to the display later than expected (i.e. on the “missed” refresh cycle).

In this case, how does MWorks know that there was a frame skip, if predicted_output_time was within one refresh period in the past? Is there a predicted output time associated with every frame MWorks renders, even if there was no update_stimulus_display() call? In general, isn’t it still problematic to add the latency value measured at the trial-start photodiode signal to the rest of the trial, if that particular display update had a longer latency (due to a frame skip) than adjacent display updates?

Taking a step back, I’m curious if you still think the plan to add the latency is good, or if you think there is a better way to handle this. I suppose I could turn the photodiode on and off multiple times within a trial, but I have multiple types of tasks with different phases and I think it would be ideal for analysis and later parsing to have a single photodiode signal for each trial. I’d appreciate your guidance.

Best,
Hokyung

Hi Hokyung,

If the latency pattern looks like the last graph I showed above (copied below), where the latencies form “tiers” and stay relatively stable over prolonged stretches, I thought I could assume that the latency would be roughly the same within a trial. Thus, to properly align and analyze with the neural data, I would add the latency measured at each trial start to the event times of the subsequent display updates throughout that trial. Does this make sense? Let me know if I’m mistaken.

I understand what you’re saying. Ideally, the latency inherent in your display updates would be constant, and adding that constant latency to predicted_output_time would get you very close to the actual display update time.

But you’ve demonstrated that the latency is not constant. While there are the “tiers” where the latency isn’t changing, we don’t know what causes them, and we can’t predict when the latency will shift between them. Until we get a handle on why these shifts are happening, I don’t think I’d be comfortable assuming that the latency would be consistent within a trial.

As I said previously, my guess is that your Mac isn’t consistently providing frames at 120Hz. It’d be interesting to see what happens if you perform your tests again with the display running at 60Hz. If the “tiers” disappear, that would be strong evidence that my guess is correct.

From what I’ve understood, MWorks determines there was a frame skip based on the predicted_output_time being more than one refresh period in the past.

MWorks compares the predicted output time for the current display update to the predicted output time for the previous display update. If the time for the previous update is more than one refresh period in the past, then we missed one or more display refreshes in between. For example, if the current predicted output time is t, the display’s refresh period is T, and the predicted output time for the previous update was t-2T, then we missed the update at t-T.

But you also say:

A frame skip may also mean that the previous frame made it to the display later than expected (i.e. on the “missed” refresh cycle).

In this case, how does MWorks know that there was a frame skip, if predicted_output_time was within one refresh period in the past?

Sorry, I wasn’t trying to say this was a different condition for a frame skip. The frame skips reported by MWorks are always for the reason I cited above. But, in addition, a frame skip reported by MWorks may indicate that the previous frame made it to the display late. In fact, this is probably what happened, but unless you were monitoring display updates via a photodiode, there’s no way to know.

Is there a predicted output time associated with every frame MWorks renders, even if there was no update_stimulus_display() call?

The OS is supposed to give MWorks a chance to draw on every display refresh. Every time this happens, MWorks compares the predicted output time for the current refresh with that of the previous refresh and reports skipped frames if the difference is greater than one refresh period. This happens even when MWorks doesn’t need to update the display. As long as the stimulus display window is visible, these checks are happening.

Taking a step back, I’m curious if you still think the plan to add the latency is good, or if you think there is a better way to handle this. I suppose I could turn the photodiode on and off multiple times within a trial, but I have multiple types of tasks with different phases and I think it would be ideal for analysis and later parsing to have a single photodiode signal for each trial. I’d appreciate your guidance.

Like I said already, I would first want to better understand what’s happening with your display to produce the variable latency. If you could get the measured latency to be constant (e.g. by switching to a 60Hz refresh rate), then I would probably feel OK about your plan.

That said, if you really need to know when a particular display update happened, you need to use a photodiode. Ideally, the photodiode signal would be recorded by the same system as the neural data, so that the two would be on the same clock. But, if your stimulus presentation is complex, and particularly if it includes dynamic stimuli (e.g. videos, drifting gratings), this could be tricky to implement.

If it would help, I’d be happy to chat more about this over Zoom.

Cheers,
Chris

Hey Chris,

I was in the process of updating the code to use a render_actions photodiode stimulus. One quick question: let’s say I am changing photodiode_alpha within the render_actions stimulus. Does this get reflected immediately, i.e. at the time of the photodiode_alpha event? If not, is there a way to know when the stimulus update happened (similar to predicted_output_time in update_stimulus_display)?

Hi Hokyung,

is there a way to know when the stimulus update happened (similar to predicted_output_time in update_stimulus_display)?

#stimDisplayUpdate reports the parameters of all stimuli drawn in a given update. The time stamp of a #stimDisplayUpdate event is the same predicted time that would be reported via predicted_output_time for that update. So, if you find the #stimDisplayUpdate event where the alpha_multiplier parameter assumes a given value, the time of that event gives the operating system’s prediction of when that update appeared on the display.
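
For example, something along these lines should work (a sketch assuming the mworks.data Python API and that #stimDisplayUpdate data is a list of per-stimulus parameter dicts; 'photodiode' is a placeholder for your stimulus's name):

    from mworks.data import MWKFile

    def find_update_time(filename, stim_name, target_alpha):
        f = MWKFile(filename)
        f.open()
        try:
            for e in f.get_events(codes=['#stimDisplayUpdate']):
                for stim in (e.data or []):
                    if (isinstance(stim, dict)
                            and stim.get('name') == stim_name
                            and stim.get('alpha_multiplier') == target_alpha):
                        return e.time   # predicted display time (microseconds)
        finally:
            f.close()
        return None

    print(find_update_time('my_session.mwk2', 'photodiode', 1.0))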

Chris