Hi Hokyung,
As you can see, the average latency is around 20ms. The standard deviation of the latency values is 6.59ms. Is this expected? If not, is there anything wrong with the test? FYI, I am using a brand new OLED display as my monitor so I expect it to be low-latency.
The average latency is entirely plausible, but the variance is definitely not expected.
There are a couple reasons why the average latency might be higher than you expect:
First, you shouldn’t assume that your display is low latency. Most displays and TVs apply some kind of image processing (e.g. motion smoothing) by default, and that can add significant latency. Some or most of the 20ms average latency could be due to this. If your display has a “game mode”, you should enable that, as the whole point of such modes is to minimize latency. If you really want to know the inherent latency of your display, you should use a lag tester. (I believe the Jazayeri lab purchased a few of these some years ago. I have an old, 720p model that you could borrow, but you’ll probably want to test at the actual resolution you use in your experiments.)
Second, where you place the photodiode on the display generally matters. Almost all of the displays I’ve tested refresh left to right, top to bottom, over the course of the refresh period. Therefore, if you place the photodiode on the lower-right corner of the display, you’ll measure a threshold-crossing time almost one refresh cycle later than if you’d placed it on the upper-left corner. For a 60Hz display, this could be the difference between a 3-4ms latency (upper left) and a 20ms latency (lower right).
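To put some numbers on that, here’s a rough sketch of the scanout delay you’d expect from photodiode placement, assuming a top-to-bottom scan and a 60Hz refresh rate (the vertical positions are made-up example values):

```python
# Rough estimate of the scanout delay added by photodiode placement,
# assuming the display refreshes top to bottom over one refresh period.

def scanout_delay_ms(vertical_position, refresh_rate_hz=60.0):
    """vertical_position: 0.0 = top of the screen, 1.0 = bottom."""
    refresh_period_ms = 1000.0 / refresh_rate_hz
    return vertical_position * refresh_period_ms

print(scanout_delay_ms(0.05))  # near the top: ~0.8 ms
print(scanout_delay_ms(0.95))  # near the bottom: ~15.8 ms
```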
As for the variance in latency, one possibility is that you’re just getting the data at irregular intervals. Regarding your Arduino:
- What specific model are you using? Some models (like the original Uno) are just really slow, so you may not be getting analog reads as regularly as you expect.
- What is the value of the Firmata device’s data_interval parameter in your experiment? This parameter “discretizes” the possible times associated with the threshold-crossing events, so it may be making the variance look larger than it really is (see the sketch after this list).
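On the data_interval point, here’s a rough simulation of how sampling the photodiode only once per data interval can inflate the apparent variance. The 16ms interval and the 1ms of “true” jitter are made-up example values, not anything I’ve measured:

```python
import random

def simulate(n=1000, true_jitter_ms=1.0, data_interval_ms=16.0):
    latencies = []
    for _ in range(n):
        # "True" latency: 20 ms plus a little jitter
        true_latency = 20.0 + random.gauss(0.0, true_jitter_ms)
        # The photodiode is only sampled once per data interval, so the
        # threshold crossing isn't detected until the next sample, which
        # adds a delay of up to one full interval
        measured = true_latency + random.uniform(0.0, data_interval_ms)
        latencies.append(measured)
    return latencies

lat = simulate()
mean = sum(lat) / len(lat)
std = (sum((x - mean) ** 2 for x in lat) / len(lat)) ** 0.5
print(f"mean = {mean:.1f} ms, std = {std:.1f} ms")  # std comes out around 4-5 ms
```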
I wouldn’t expect the display itself to have a variable latency, though I suppose it’s possible. If it does, a lag tester might reveal it.
When you run your test, do you see any warnings about skipped refresh cycles? I’m inclined to think there must have been one associated with the outlier at index 43 on your graph, if nowhere else.
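If you want to flag likely skipped refreshes in your recorded latencies after the fact, something like this sketch would do it (the latency values and the 60Hz refresh rate are made-up examples):

```python
def flag_skipped_frames(latencies_ms, refresh_rate_hz=60.0):
    # Flag anything more than about one refresh period above the median latency
    refresh_period_ms = 1000.0 / refresh_rate_hz
    typical = sorted(latencies_ms)[len(latencies_ms) // 2]
    return [i for i, x in enumerate(latencies_ms)
            if x > typical + refresh_period_ms]

latencies_ms = [19.8, 20.4, 21.1, 38.2, 20.0]  # made-up values
print(flag_skipped_frames(latencies_ms))  # -> [3]
```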
One other thing I noticed was that if I instead use the time of the t_stimuli_on variable events and not the value (i.e. predicted_output_time), I see an almost identical graph, with just values shifted by approximately 25ms.
The interval between the invocation of update_stimulus_display and when the rendered frame is actually sent to the display (i.e. the time predicted via predicted_output_time) depends on how far “ahead” of the display refresh cycle the graphics hardware is running. Often it’s just one frame ahead (i.e. rendering the next frame while the current one is on screen), but 2-3 frames ahead is also quite possible. Hence, I don’t think the 25ms offset you measured is at all unreasonable.
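As a quick sanity check (assuming a 60Hz refresh rate, which you should replace with your display’s actual rate):

```python
refresh_period_ms = 1000.0 / 60.0  # ~16.7 ms
offset_ms = 25.0                   # the offset you measured
print(offset_ms / refresh_period_ms)  # ~1.5 refresh periods, i.e. 1-2 frames ahead
```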
If update_stimulus_display() calls happen anywhere within this window, shouldn’t the adjustment made for predicted_output_time also reflect this?
update_stimulus_display waits for the thread that updates the stimulus display to submit all drawing commands to the graphics hardware. The execution loop of that thread is tied to the refresh cycle of the display. If everything is running smoothly, with a consistent load on the GPU, you would expect the executions to begin and end at regular intervals. Therefore, no matter when an update_stimulus_display invocation begins, it should end around the same point in the refresh cycle, which corresponds to the time that t_stimuli_on gets its value. Given this, I think the small variance you’re seeing here makes sense.
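If it helps, here’s a toy illustration of that point (all the numbers are made up): no matter when the call arrives, its completion lands at the same phase of the refresh cycle.

```python
import random

refresh_period_ms = 1000.0 / 60.0  # ~16.7 ms
submit_point_ms = 12.0             # point within each cycle where drawing is submitted

def completion_time(call_time_ms):
    # The call completes at the first submit point at or after the call time
    cycle = int(call_time_ms // refresh_period_ms)
    submit = cycle * refresh_period_ms + submit_point_ms
    if submit < call_time_ms:
        submit += refresh_period_ms
    return submit

for _ in range(3):
    call = random.uniform(0.0, 100.0)
    done = completion_time(call)
    print(f"call at {call:5.1f} ms -> completes at {done:5.1f} ms "
          f"({done % refresh_period_ms:.1f} ms into its cycle)")
```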
Cheers,
Chris