Stereo screen implementation

Hi Chris,

I have finally finished building the hardware for our stereo setup. We can now show stimuli to the left and right eyes separately using two screens.
The next step, of course, is to modify MWorks so that it can draw to two screens simultaneously. How difficult would it be to make the necessary modifications? And is there anything I can do to help make it work?

Greetings,
Philipp

Hi Philipp,

Adding support for multiple displays will require a fair amount of work, but exactly how much will depend on how you want to use the additional displays.

Are you planning on treating the two screens as independent, 2-D displays? In that scenario, your experiment would maintain separate stimulus queues for the left and right displays, and all display-related actions (queue stimulus, update display, etc.) would take an additional argument indicating which display to act on. This would be a straightforward extension of MWorks’ existing stimulus display infrastructure.

Or are you trying to use the two displays together to produce a simulated 3-D effect? In that case, how are you going to generate 3-D stimuli? Are you using something like NVIDIA 3D Vision? This scenario seems like it would require more substantial code changes, probably a whole new “3-D Display” I/O device.

Chris

Hi Chris,

It’s a passive stereo setup.

My hardware consists of two similar projectors that project through linear polarizers onto a single screen. The observer wears glasses with polarization filters, so each eye sees only the image coming from one of the projectors.
So, to answer your question in a bit more detail: it is a rather simple setup with two screens for the left and the right eye respectively, and I would treat them as two “independent” 2-D displays.

To generate 3-D stimuli I would first need to calculate the two images separately, one for each eye. These images then have to be drawn to their corresponding screens at exactly the same time (that is a critical requirement). For that, I think it might be better to have a display update call that simply acts on all the screens that are currently active; it would not even require adding a parameter to the call. How many displays the server uses could, for example, be specified in setup_variables.xml.
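Purely as an illustration (the variable name is made up; nothing like it exists yet), such a setting could look like the variables that are already assigned in that file:

<variable_assignment variable="#numStimulusDisplays" type="integer" value="2"/>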

It would also be convenient to have a stimulus that takes x, y, z coordinates instead of having to compute two independent stimuli for the two eyes. We are currently writing a dynamic stimulus that does exactly this: compute two images from one set of x, y, z coordinates and then draw them to the two screens. However, two screens are currently not supported. In principle it should be easy to update stimuli like the fixation point to also take an additional z position and then draw to both screens (if present).
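Just to sketch the geometry we have in mind (this is only a sketch, not MWorks code, and all the names are placeholders), each eye’s image of a point gets shifted horizontally on the screen roughly like this:

#include <cmath>

// Rough sketch, not MWorks code.  Computes how far one eye's image of a point
// is shifted horizontally on the screen, in degrees of visual angle, relative
// to where the point would be drawn for a centered (cyclopean) eye.
// z is the point's depth relative to the screen plane (positive = in front of
// the screen); all distances are in the same physical units.
static double perEyeOffsetDeg(double z,
                              double viewingDistance,  // observer-to-screen distance
                              double eyeSeparation)    // eye-to-eye distance
{
    // Where the ray from one eye through the point crosses the screen plane,
    // relative to the point's own horizontal position:
    double offset = (eyeSeparation / 2.0) * z / (viewingDistance - z);
    // Convert from physical screen units to degrees of visual angle, analogous
    // to the way MWorks converts the screen width.
    return (180.0 / M_PI) * atan(offset / viewingDistance);
}

// The left eye's image is shifted by +perEyeOffsetDeg(...) and the right eye's
// by -perEyeOffsetDeg(...), so the two images are separated by twice that amount.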

I think there is a quick, hack-like solution to this request, which would only require expanding the output window to span both screens. In this scenario there would be only one display from MWorks’ point of view, but that one display would be shown across two physical screens. However, that would be a rather awful solution, yet easy to implement …

I hope that answers most of your questions,

Philipp

Thanks, I think I understand your setup now.

I think there is a quick, hack-like solution to this request, which would only require expanding the output window to span both screens. In this scenario there would be only one display from MWorks’ point of view, but that one display would be shown across two physical screens. However, that would be a rather awful solution, yet easy to implement …

That’s an interesting idea. Making the full screen window span two physical displays should be straightforward, as long as the displays are part of the same virtual screen. However, I’m not sure how that will affect timing, as I don’t expect the refresh cycles for the displays will be synchronized. I’ll have to play around with this some and see how it works.

Chris

Hi Chris,

Yes, that could work! My setup would then look like figure 1-10 in your link.

Recently I contacted Apple developer support and asked them about syncing two screens’ refresh cycles. They told me that screens that are physically similar should be synced.

I then contacted the manufacturer of the projectors, and they told me that the projectors will be exactly synced when the graphics card provides the same sync timing for both DVI outputs.

So having two screens whose refreshes are exactly synced seems to be an unusual thing to ask for, but in principle it should work. It is also crucial for my experiments.
If you have anything that I can test for you, I would be thrilled to try it out!

Greetings,
Philipp

Recently I contacted Apple developer support and asked them about syncing two screens’ refresh cycles. They told me that screens that are physically similar should be synced.

Interesting. I’ve found several threads on the mac-opengl mailing list that conclude that similar displays are not synced.

My own experimentation also indicates that the displays are not synced. I attached two identical LCDs to the Mini DisplayPort outputs on a single Radeon 5770. I then made a minor change to MWorks so that the stimulus display window spanned both monitors. When running an experiment that displayed four dynamic random dot fields (two on each monitor), MWorks reported many missed display refreshes, and the animation was visibly jittery. However, if I disabled the bit of code that waits for the GL buffer swap to complete, then the timing problems disappeared. This suggests that the buffer swaps for the two displays are not synchronized, so waiting for both swaps to complete eats up a substantial fraction of the refresh period and causes the drawing code to fall behind.

So it seems like we can’t rely on the OS to keep multiple monitors synchronized. We can keep the animation smooth by creating separate OpenGL contexts and drawing threads for each display. However, that won’t help in your case, since you need the two displays to be precisely synchronized.

On the bright side, my research also turned up a piece of hardware which may solve your problem: the Matrox DualHead2Go. It’s a box that makes two identical monitors look like one double-sized monitor to your Mac. With it, you should be able to use your two projectors as a single display, with no modifications to MWorks at all. In fact, there’s another mailing list post that mentions precisely that setup as a reliable way to do stereo display on a Mac. What do you think?

Chris

Huh… that’s bad news for me. Thank you very much for your research; I guess I will give the DualHead2Go a try.

I had wondered before whether splitting one video signal between two projectors would be a good solution, but decided against it because the DVI-D standard cannot provide enough bandwidth for two 1920x1200 displays running at 60Hz.

For some reason, however, the DisplayPort edition of the DualHead is in fact capable of driving both displays at 60Hz, which I find a bit odd (the DVI edition scales down to 58Hz at 1920x1200). So DisplayPort must have some advantages over plain DVI, and that, in turn, is good news!

So I guess I will put this request on hold until the order for the DualHead has made it through our administration (oh, we Germans love our bureaucracy … damn).

Thanks again for your help!

Philipp

Hi Chris,

So the DualHead2Go finally arrived, and I was amazed at how easy it was to integrate into the system configuration. This was an excellent suggestion, thank you again!

Displaying two fixation points for the two eyes is, as you said, pretty straightforward. Right now I am modifying the Random Dots Plugin to display with the correct disparity for both eyes. To compute the horizontal offset between the points for the two eyes, I need the size of the output display in degrees of visual angle.
I figured out that “getDisplayBounds” works from within the plugin code. However, I have not been able to find a good way to get the observer-screen distance that was presumably used at some point to convert pixels to degrees of visual angle. This value is set in the configuration file and is crucial for computing depth in degrees of visual angle.

So I have a question: is there an easy way to access the observer-screen distance value both from within the plugin code and from the XML-based protocol (e.g. to compute the positions of fixation points)?

Also, if the 3-D stuff is to be integrated into the main code at some point in the future, it would be good to have one additional configuration variable, namely the observer’s eye-to-eye distance.

Thank you,
Philipp

Hi Philipp,

So the DualHead2Go finally arrived, and I was amazed at how easy it was to integrate into the system configuration. This was an excellent suggestion, thank you again!

Great, I’m glad to hear it!

Is there an easy way to access the observer-screen distance value both from within the plugin code and from the XML-based protocol (e.g. to compute the positions of fixation points)?

I’m not sure what you mean by “the XML-based protocol”, but here’s how you can get the distance within the plugin code:

double distance = mainDisplayInfo->getValue().getElement(M_DISPLAY_DISTANCE_KEY).getFloat();

Also, if the 3-D stuff is to be integrated into the main code at some point in the future, it would be good to have one additional configuration variable, namely the observer’s eye-to-eye distance.

You’re free to add that value to the #mainScreenInfo dictionary in your configuration file and then use it in your plugin code. For example, you could add the following to your config file:

<dictionary_element>
    <key>eye_to_eye_dist</key>
    <value type="float">5.67</value>
</dictionary_element>

and then access it just like other display parameters:

double eyeToEyeDist = mainDisplayInfo->getValue().getElement("eye_to_eye_dist").getFloat();

As an aside, you can set other system variables in the config file, too. For example, I have the following in my config so that #warnOnSkippedRefresh is on by default:

<variable_assignment variable="#warnOnSkippedRefresh" type="integer" value="1"/>

Chris

Hi Chris,

All of this worked fine, although I am still struggling to transform the new distances into degrees of visual angle (to be compatible with the positions of stimuli).

What I meant by “the XML-based protocol” was the experiment protocol created with the MW-Editor. I would like to position stimuli halfway to the left for the left eye’s projector or halfway to the right for the right eye’s projector, respectively. “Halfway” would be half the width in degrees of visual angle, but I couldn’t figure out how to get the display width in degrees from within the experiment protocol. Is there perhaps an undocumented keyword that I can use in an assignment?

Thanks,
Philipp

Hi Philipp,

What I meant by “the XML-based protocol” was the experiment protocol created with the MW-Editor. I would like to position stimuli halfway to the left for the left eye’s projector or halfway to the right for the right eye’s projector, respectively. “Halfway” would be half the width in degrees of visual angle, but I couldn’t figure out how to get the display width in degrees from within the experiment protocol. Is there perhaps an undocumented keyword that I can use in an assignment?

No, the display bounds aren’t accessible within experiment files. Probably the easiest way to get that info is to load an experiment and note the announced display bounds. For example, on my system, the server reports the following:

Display bounds set to (-26.4631 left, 26.4631 right, 16.5394 top, -16.5394 bottom)

Hence, the display width in degrees of visual angle is 2*26.4631 = 52.9262. You can then store that value in a variable and use it to position your stimuli.

Chris

Hi Chris,

I have it all working now and it looks great! Thank you for your help and your suggestions!

Philipp

Hi Chris,

On second thought, there is still a minor thing that needs to be fixed here:

The width of the display in degrees of visual angle is computed in MWorks as:
GLdouble half_width_deg = (180. / M_PI) * atan((width_unknown_units/2.)/distance_unknown_units);

In the case of the DualHead2Go, the width of the virtual display is double the width that actually gets projected onto the screen. Therefore this formula would have to be changed to:
GLdouble half_width_deg = (180. / M_PI) * atan((width_unknown_units/4.)/distance_unknown_units);

Is it possible to add a global parameter (like #warnOnSkippedRefresh) that can be used to switch to the corrected formula in the case of a (virtually) doubled screen width?
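Purely as a sketch of what I mean (the parameter name is made up, and real code would need a sensible default when the entry is missing), a value in the #mainScreenInfo dictionary could select the divisor:

// Hypothetical "width_scale" entry in #mainScreenInfo: 1.0 for a normal setup,
// 2.0 when the DualHead2Go doubles the virtual screen width.
double widthScale = mainDisplayInfo->getValue().getElement("width_scale").getFloat();

GLdouble half_width_deg = (180. / M_PI) * atan((width_unknown_units / (2. * widthScale)) / distance_unknown_units);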

Thank you!
Philipp

Thinking about this some more, I have to admit that probably the cleverest way to solve this without modifying anything is to just specify the physical screen dimensions rather than the virtual ones.

The only downside is that the mirror window then gets compressed horizontally, which looks a bit ugly but is harmless…

So if there is no easy way to fix this cosmetic issue, I guess forgetting about it is the right thing to do here…

Sorry for the confusion,
Philipp