Latency from MWorks display time to actual display

Hi Chris,

I had some questions regarding the latency from MWorks time to actual display time.

I tested the latency with a photodiode on my display (which sends an analog input back to MWorks via an arduino) and compared the predicted_output_time variable value to the times that the photodiode signals crossed a threshold. Here is the latency plot I got.


Latency is sync_pulse_crossing - t_stimuli_on, where t_stimuli_on is set in MWorks every time the photodiode stimulus comes on:

queue_stimulus(photodiode)
update_stimulus_display(predicted_output_time=t_stimuli_on)
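
On the analysis side, this is roughly the computation I'm doing. Here's a minimal Python sketch (the variable names, threshold, and toy data are stand-ins for the actual exported event data, not my real analysis code):

```python
# Hypothetical sketch of the latency computation described above.
# 'samples' stands in for (time, value) pairs from the Arduino's analog
# input; the threshold and data are assumptions for illustration.

def threshold_crossings(samples, threshold):
    """Return times where the analog signal first rises above threshold."""
    crossings = []
    above = False
    for t, v in samples:
        if v >= threshold and not above:
            crossings.append(t)
            above = True
        elif v < threshold:
            above = False
    return crossings

def latencies(predicted_times, samples, threshold):
    """Latency = photodiode crossing time - predicted display time."""
    crossings = threshold_crossings(samples, threshold)
    return [c - p for p, c in zip(predicted_times, crossings)]

# Toy data: two stimulus onsets, photodiode crossing ~20 ms later.
samples = [(0.000, 0.0), (0.020, 1.0), (0.040, 0.0),
           (1.000, 0.0), (1.022, 1.0), (1.040, 0.0)]
print([round(x, 3) for x in latencies([0.0, 1.0], samples, 0.5)])
# [0.02, 0.022]
```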

As you can see, the average latency is around 20ms. The standard deviation of the latency values is 6.59ms. Is this expected? If not, is there anything wrong with the test? FYI, I am using a brand new OLED display as my monitor so I expect it to be low-latency.

One other thing I noticed was that if I instead use the time of the t_stimuli_on variable events rather than the value (i.e. predicted_output_time), I see an almost identical graph, with values just shifted by approximately 25ms. Here’s the graph:

Given this, I went ahead and computed the difference between t_stimuli_on values and times for 64 such pairs.

Mean difference: 0.02451 seconds
Standard deviation: 0.00029 seconds
Maximum difference: 0.02478 seconds
Minimum difference: 0.02371 seconds

So it seems like predicted_output_time effectively calibrates only within about 1ms, and mostly just adds around 24ms to whenever update_stimulus_display() was called. Is this also to be expected? With a 60Hz refresh rate on the display, each frame would be updated every 16.6ms or so. If update_stimulus_display() calls can happen anywhere within this window, shouldn’t the adjustment made for predicted_output_time also reflect this? The calibration seems relatively small.

Let me know if I’m misunderstanding anything here. Thank you for your help!

Hi Hokyung,

As you can see, the average latency is around 20ms. The standard deviation of the latency values is 6.59ms. Is this expected? If not, is there anything wrong with the test? FYI, I am using a brand new OLED display as my monitor so I expect it to be low-latency.

The average latency is entirely plausible, but the variance is definitely not expected.

There are a couple reasons why the average latency might be higher than you expect:

First, you shouldn’t assume that your display is low latency. Most displays and TVs apply some kind of image processing (e.g. motion smoothing) by default, and that can add significant latency. Some or most of the 20ms average latency could be due to this. If your display has a “game mode”, you should enable that, as the whole point of such modes is to minimize latency. If you really want to know the inherent latency of your display, you should use a lag tester. (I believe the Jazayeri lab purchased a few of these some years ago. I have an old, 720p model that you could borrow, but you’ll probably want to test at the actual resolution you use in your experiments.)

Second, where you place the photodiode on the display generally matters. Almost all of the displays I’ve tested refresh left to right, top to bottom, over the course of the refresh period. Therefore, if you place the photodiode on the lower-right corner of the display, you’ll measure a threshold-crossing time almost one refresh cycle later than if you’d placed it on the upper-left corner. For a 60Hz display, this could be the difference between a 3-4ms latency (upper left) and a 20ms latency (lower right).
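To put rough numbers on the scan-out argument (a simplification that assumes the refresh sweeps the full screen over exactly one period):

```python
# Extra latency from photodiode position, assuming the display refreshes
# top to bottom over one full refresh period (a simplification).
refresh_hz = 60.0
period_ms = 1000.0 / refresh_hz  # ~16.7 ms per refresh

def scanout_delay_ms(vertical_fraction):
    """vertical_fraction: 0.0 = top of screen, 1.0 = bottom."""
    return vertical_fraction * period_ms

print(round(scanout_delay_ms(0.1), 1))   # near the top: 1.7
print(round(scanout_delay_ms(0.95), 1))  # near the bottom: 15.8
```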

As for the variance in latency, one possibility is that you’re just getting the data at irregular intervals. Regarding your Arduino:

  • What specific model are you using?

    Some models (like the original Uno) are just really slow, so you may not be getting analog reads as regularly as you expect.

  • What is the value of the Firmata device’s data_interval parameter in your experiment?

    This parameter “discretizes” the possible times associated with the threshold-crossing events, so it may be making the variance look larger than it really is.
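
To illustrate that discretization effect (a toy model, not how Firmata actually timestamps samples):

```python
import math

# Toy model of how a coarse data_interval can quantize apparent
# threshold-crossing times: the crossing is only observed at the next
# sample report, so nearby true crossings collapse onto the same time.
def reported_crossing_ms(true_crossing_ms, interval_ms):
    """Time of the first report at or after the true crossing."""
    return math.ceil(true_crossing_ms / interval_ms) * interval_ms

print([reported_crossing_ms(t, 5) for t in [101.2, 102.3, 103.4, 106.0]])
# [105, 105, 105, 110]
```

Three crossings spread over 2 ms all get reported at the same time, while the fourth jumps a full 5 ms later, so the measured variance reflects the reporting grid as much as the true timing.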

I wouldn’t expect the display itself to have a variable latency, though I suppose it’s possible. If it does, a lag tester might reveal it.

When you run your test, do you see any warnings about skipped refresh cycles? I’m inclined to think there must have been one associated with the outlier at index 43 on your graph, if nowhere else.

One other thing I noticed was that if I instead use the time of the t_stimuli_on variable events and not the value (i.e. predicted_output_time), I see an almost identical graph, with just values shifted by approximately 25ms.

The interval between the invocation of update_stimulus_display and when the rendered frame is actually sent to the display (i.e. the time predicted via predicted_output_time) depends on how far “ahead” of the display refresh cycle the graphics hardware is running. Often it’s just one frame ahead (i.e. rendering the next frame while the current one is on screen), but 2-3 frames ahead is also quite possible. Hence, I don’t think the 25ms offset you measured is at all unreasonable.
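
As a quick sanity check on that offset (illustrative arithmetic only):

```python
# How many 60 Hz refresh periods fit into the measured ~25 ms offset.
period_ms = 1000.0 / 60.0  # ~16.7 ms per frame
for frames_ahead in (1, 2, 3):
    print(frames_ahead, round(frames_ahead * period_ms, 1))
# 1 16.7
# 2 33.3
# 3 50.0
```

25 ms sits between one and two frames, consistent with a pipeline running one to two frames deep plus partial-frame alignment.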

If update_stimulus_display() calls happen anywhere within this window, shouldn’t the adjustment made for predicted_output_time also reflect this?

update_stimulus_display waits for the thread that updates the stimulus display to submit all drawing commands to the graphics hardware. The execution loop of that thread is tied to the refresh cycle of the display. If everything is running smoothly, with a consistent load on the GPU, you would expect the executions to begin and end at regular intervals. Therefore, no matter when an update_stimulus_display invocation begins, it should end around the same point in the refresh cycle, which corresponds to the time that t_stimuli_on gets its value. Given this, I think the small variance you’re seeing here makes sense.
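
One way to picture this (a simplified model that assumes updates complete only at vsync boundaries, not MWorks’ actual implementation):

```python
import math

# Simplified model: no matter when within the frame the update is
# submitted, its completion snaps to the next vsync boundary, so
# t_stimuli_on values land at nearly the same phase of the cycle.
PERIOD_S = 1.0 / 60.0

def completion_time(call_time_s, processing_s=0.002):
    """Next vsync boundary after the call plus (assumed) processing time."""
    return math.ceil((call_time_s + processing_s) / PERIOD_S) * PERIOD_S

# Calls at very different points within the frame finish at the same vsync:
print(round(completion_time(0.001), 4), round(completion_time(0.010), 4))
# 0.0167 0.0167
```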

Cheers,
Chris

Hi Chris,

Thank you for the detailed reply. My monitor supports up to 240Hz, but I think the HDMI cables weren’t supporting that – I have ordered new ones and will test again when they arrive. My photodiode is in the bottom right of the OLED monitor, so as you said, it makes sense that the latency is a bit large.

Meanwhile, I tried doing the test again while setting data_interval, which I hadn’t set at all before. I am using an Arduino Nano 33 IoT for my firmata device. I tried one test with data_interval = 5ms and another with data_interval = 1ms. I did about 80 trials of photodiode onset for each setting. These are the results I got.
For data_interval = 5ms,
Latency Statistics:
Mean latency: 0.019282 seconds
Std deviation: 0.005747 seconds
Min latency: 0.009946 seconds
Max latency: 0.029778 seconds

For data_interval = 1ms,
Latency Statistics:
Mean latency: 0.017162 seconds
Std deviation: 0.005742 seconds
Min latency: 0.008711 seconds
Max latency: 0.025796 seconds

What I’m seeing generally feels mysterious to me. The latency seems to consistently decrease with small variance, then jump up all of a sudden, and then start going back down. Each test of 80 trials was around 5 minutes long in real time. Do you have a sense of what might be going on? Could it be related to the OLED monitor refreshing row by row?

Thank you for your guidance!
Hokyung

Hi Hokyung,

I tried doing the test again while setting data_interval, which I hadn’t set at all before.

Ah, I forgot that parameter wasn’t required for analog input. FYI, the default (at least in StandardFirmata and StandardFirmataBLE) is 19ms.

I am using a Arduino Nano 33 IoT for my firmata device.

Great. That board has a modern, relatively fast processor, so I’d expect that your analog reads are executing close to the schedule you requested.

What I’m seeing generally feels mysterious to me. The latency seems to consistently decrease with small variance, then jump up all of a sudden, and then start going back down. Each test of 80 trials was around 5 minutes long in real time. Do you have a sense of what might be going on? Could it be related to the OLED monitor refreshing row by row?

That is indeed mysterious. The “jump” looks to be almost exactly one refresh period (~17ms). It’s almost like the starting location of the refresh cycle is oscillating, steadily moving from the top to the bottom of the display and then jumping back. Or could the rate at which the GPU delivers frames and the actual display refresh rate be slightly different, requiring a periodic re-sync that produces the jump? I don’t know how or why that would be happening, but maybe it’s possible. Either way, this isn’t something I’ve seen before, although I haven’t measured any OLED displays with a photodiode, either.

I was hoping to find the equivalent of this video (which is still super interesting) that showed the refresh cycle of an OLED display. This one is pretty good, although it doesn’t shed any light on what might be happening with your display.

Chris

Another question: Is the display connected via HDMI cable directly to an HDMI port on the Mac, or does it go through any adaptors and/or I/O hubs?

Hi Chris,

Thanks for the reply.
For all previous tests, the Mac was connected via an HDMI cable to a splitter adapter, which then split the signal to two displays, one OLED and the other standard (to mirror and monitor the OLED). I was also using a standard HDMI cable, which was preventing the OLED display from doing better than 60Hz.

Given this, I modified my setup and tried another test without the other mirroring screen. Now I have confirmed that the OLED display refreshes at 120Hz, which matches the refresh_rate() value I see in MWorks. This is the result with the new setup:

Latency Statistics:
  Mean latency: 0.023788 seconds
  Std deviation: 0.004647 seconds
  Min latency: 0.012056 seconds
  Max latency: 0.035564 seconds


This also seems quite mysterious, in a way quite different from previous behavior. The standard deviation on the latency slightly decreased, but the latencies seem to be grouped in tiers. I think the one 35+ms latency event is likely a skipped frame issue. Other than that, I’m not too sure how to interpret this.

By the way, just FYI, when I connect the third standard monitor (capped at 60Hz) with a separate HDMI cable to the Mac and ask it to mirror the OLED display, MWorks starts consistently skipping 60 frames per second. If I stop mirroring, frame skipping goes back to normal. I could get another monitor with a fast refresh rate and the necessary adapters to test again. But in the meantime, any guidance on the 120Hz OLED’s behavior would be appreciated!

Thanks,
Hokyung

Hi Hokyung,

For all previous tests, the Mac was connected via an HDMI cable to a splitter adapter, which then split the signal to two displays, one OLED and the other standard (to mirror and monitor the OLED).

Interesting. The splitter could potentially be the source of the “oscillating” behavior you measured. If the refresh rates of the two displays connected to the splitter are slightly different (e.g. 59.94Hz vs 60Hz), the Mac is only going to drive one of them at its true refresh rate. The other will necessarily be out of sync with the updates the Mac is sending out, with the offset growing or shrinking over time until it eventually comes back into alignment.
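
For concreteness, here’s the re-alignment period implied by two slightly mismatched refresh rates (the specific rates are illustrative, and again this is speculation about the splitter):

```python
# Beat period between two nominally-identical refresh rates. The phase
# offset between the displays drifts continuously and wraps around once
# per beat period, which could look like a periodic "jump" in latency.
f1_hz, f2_hz = 60.0, 59.94  # illustrative values
beat_period_s = 1.0 / abs(f1_hz - f2_hz)
print(round(beat_period_s, 1))  # 16.7
```

With a smaller rate mismatch, the beat period stretches accordingly (e.g. a 0.01Hz mismatch gives a jump every ~100 seconds), so the spacing of the jumps in your plots would hint at the actual mismatch.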

Note that I’m just speculating here. I have no idea if an HDMI splitter actually would or could work this way, but it at least seems plausible. Also, the fact that the oscillation seems to be absent from your single-monitor results provides some evidence in favor of this theory.

This also seems quite mysterious, in a way quite different from previous behavior. The standard deviation on the latency slightly decreased, but the latencies seem to be grouped in tiers. I think the one 35+ms latency event is likely a skipped frame issue. Other than that, I’m not too sure how to interpret this.

It looks like the two main tiers are separated by a single 120Hz refresh period (~8ms). This suggests that the Mac isn’t consistently providing frames at 120Hz. Are you seeing skipped refresh warnings in MWorks? If not, that’s a little distressing.

By the way, just FYI, when I connect the third standard monitor (capped at 60Hz) with a separate HDMI cable to the Mac and ask it to mirror the OLED display, MWorks starts consistently skipping 60 frames per second. If I stop mirroring, frame skipping goes back to normal.

Well, more displays means more work for the graphics card, although you’d think that display mirroring would be a pretty cheap operation.

This may be due to the different refresh rates: Each update on the mirror display (60Hz) may block the next update on the OLED display (120Hz), making the OLED miss every other frame. It’d be interesting to see if the issue goes away if the mirror display is also 120Hz.

Also, when you say “frame skipping will go back to normal”, is “normal” no frame skips?

Chris