Guidance on Integrating VR Task and Camera Setup with XBI

Hi Chris,

My name is Simona Vaitekunaite, and I am setting up an XBI device in Fribourg, Switzerland, in collaboration with Michael Schmid’s group.

We have three main goals:

  • Develop new tasks in MWorks, including Unity-based virtual reality (VR) tasks
  • Set up video recordings of arm movements (side cameras)
  • Set up a face-recognition procedure for animal identification (front camera)

At this stage, we have managed to get Unity talking to the XBI touchscreen and reward system, and we are now considering which types of cameras to use.

We have three specific questions:

  • How can we best integrate (if possible at all) our VR task into MWorks?
  • In humans we use Basler cameras (model acA1920-155uc) for arm tracking. Do you think they’d be suitable for MWorks & XBI?
  • Do you have a list of recommended Mac-compatible cameras for the face-recognition procedure? We have seen a XIMEA interface described in the MWorks documentation (XIMEA Camera Device — MWorks 0.14.dev documentation), but we are aware that any Mac-compatible camera could work (as you noted previously – Integrating video camera recording into MWorks - #9 by cstawarz). Could you recommend specific camera models, whether XIMEA or otherwise, compatible with MWorks & XBI and suitable for reliable real-time recording?

I’d be happy to schedule a meeting to discuss this further.

Thanks in advance for your advice!

Sincerely,
Simona

Hi Simona,

How can we best integrate (if possible at all) our VR task into MWorks?

I’m not sure exactly what kind of integration you have in mind.

Does the application running the VR task expose some kind of API or network interface for external control? If so, you could probably use MWorks’ Python actions to communicate with and/or control it.
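
Just to illustrate the general idea (I don’t know what interface your application exposes, so the script name, port number, and command strings below are entirely hypothetical):

```
// Sketch: driving an external VR application from MWorks via Python actions.
// 'unity_client.py' is a hypothetical script that defines unity_connect() and
// unity_send(); all Python actions share one environment, so functions defined
// there remain available to later run_python_string calls.

protocol 'VR control demo' {
    run_python_file ('unity_client.py')

    // e.g. open a TCP connection to the locally running Unity task
    run_python_string ('unity_connect("127.0.0.1", 9000)')

    // Tell the VR task to start a trial, let it run, then stop it
    run_python_string ('unity_send("start_trial")')
    wait (10s)
    run_python_string ('unity_send("stop_trial")')
}
```

The Python side can also read and write MWorks variables via getvar() and setvar(), so you could pass touch positions, reward triggers, and the like back and forth that way.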

If you want to use MWorks’ fixation points or EyeLink interface with the VR task, you could use a transparent stimulus display window. You can enable this by going to MWServer → Preferences → Display → Advanced and unchecking “Make stimulus display window opaque”. Then, in your experiment, include a stimulus display device whose background color components and background alpha multiplier are all set to zero. This will make MWorks’ stimulus display window 100% transparent, so anything beneath it (such as the VR task window) will be visible.
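
For reference, the MWEL side of that could look roughly like this (the fixation point and the eye-position variables are placeholders for whatever you’re actually using):

```
// Fully transparent stimulus display; also uncheck "Make stimulus display
// window opaque" in MWServer's preferences.
stimulus_display (
    background_color = 0, 0, 0
    background_alpha_multiplier = 0
    )

// Example fixation point drawn over the (now visible) VR window underneath.
// The eye-position variables are placeholders for your eye tracker's output.
var eye_h = 0
var eye_v = 0
var eye_in_window = false

circular_fixation_point fixation (
    color = 1, 0, 0
    x_size = 0.5
    x_position = 0
    y_position = 0
    trigger_width = 2
    trigger_watch_x = eye_h
    trigger_watch_y = eye_v
    trigger_flag = eye_in_window
    )
```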

In humans we use Basler cameras (model acA1920-155uc) for arm tracking. Do you think they’d be suitable for MWorks & XBI?

To be usable with MWorks’ face recognizer, a camera must be recognized by macOS and usable via the standard system APIs. I suspect the Basler cameras are only usable via their own software, but I don’t know. Have you used them with macOS in the past?

At present, MWorks doesn’t support video recording at all. If that’s what you need, I would need to implement a new I/O device plugin for it. This shouldn’t be too hard, but it will take some time.

For questions specific to XBI, I recommend contacting Ralf Brockhausen at DPZ. I have never worked with it and know very little about it.

Could you recommend specific camera models, whether XIMEA or otherwise, compatible with MWorks & XBI and suitable for reliable real-time recording?

I don’t have any specific recommendations. As I said in one of the discussions you referenced, any camera that claims to be UVC compliant and/or work on macOS without drivers should be fine for the face recognizer. This list probably has some good options.

I think MWorks’ XIMEA interface has only ever been tested with one camera. (Sorry, I don’t remember the model, although I know it was an infrared camera.) But it should work with any camera supported by xiAPI. At the moment, it captures only 8-bit, grayscale images, but it should be straightforward to add support for other image formats.

Cheers,
Chris

Thank you, Chris, for your prompt and detailed response. We will review the various sections shortly.

Regarding your first question, yes, Unity does run a network interface for external control. We will explore the Python actions you suggested to see if they are compatible.

While this approach might work for ‘simple’ behavioral recordings in the XBI, we are concerned that it might lack the necessary control and precision when we transition to ephys recordings or microstimulation.

We have been brainstorming an idea similar to your transparent stimulus display window, but our requirements are a bit more complex than just simple fixation.

Here is our idea in bullet points:

  1. Instead of running an application with its network interface, we record a video of the VR scene along with the exact spatial locations of the items the monkey needs to touch or fixate on.
  2. We then play the video using MWorks.
  3. For each video frame, we inform MWorks about the relevant locations to be touched or looked at.

Would something like this be feasible within MWorks?

Sincerely,
Simona

Hi Simona,

Would something like this be feasible within MWorks?

Yes, I think so. You could play the video using MWorks’ video stimulus, and you could use render actions to update the fixation/touch targets on a frame-by-frame basis. As long as you don’t need the VR scene to be interactive, that sounds like a good solution.
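
To sketch what I mean (the names, the frame rate, and the idea of keeping per-frame target positions in list variables are just one way to do it, not a prescription):

```
// Rough MWEL sketch: play a pre-rendered VR video and move a touch/fixation
// target on every display refresh. All names, the file path, and the frame
// rate are placeholders.

var video_ended = false
var frame_rate = 60        // frame rate of the pre-rendered video (assumed)
var n_frames = 0           // number of entries in the target lists
var target_x_list = []     // per-frame target positions, filled in however
var target_y_list = []     // you generate them
var target_x = 0
var target_y = 0
var video_time = 0         // microseconds since playback started
var frame_index = 0

video vr_scene (
    path = 'vr_scene.mp4'
    x_size = 40
    autoplay = true
    ended = video_ended
    )

render_actions update_target (
    elapsed_time = video_time
    autoplay = true
    ) {
    // Convert elapsed time (in microseconds) into an index into the target lists
    frame_index = min(floor(video_time * frame_rate / 1000000), n_frames - 1)
    target_x = target_x_list[frame_index]
    target_y = target_y_list[frame_index]
}

protocol 'VR video playback' {
    queue_stimulus (vr_scene)
    queue_stimulus (update_target)
    update_display ()

    wait_for_condition (
        condition = video_ended
        timeout = 60s
        )

    dequeue_stimulus (update_target)
    dequeue_stimulus (vr_scene)
    update_display ()
}
```

You’d then attach target_x and target_y to whatever touch or fixation stimulus you’re using (e.g. a circular_fixation_point’s x_position and y_position). Since both stimuli start playing on the same display update, the render actions’ elapsed time should stay roughly in step with the video.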

Cheers,
Chris

Hi Chris,

I hope you’re well.

Following up on your previous advice regarding camera compatibility, I wanted to discuss the best approach for setting up video recordings of monkeys during tasks using Basler cameras (model acA1920-155uc). Specifically, we want to start and stop recordings as the monkeys perform tasks and synchronize them with MWorks.

We’ve seen that Basler cameras are macOS-compatible through their pylon Camera Software Suite, but I understand from your response that MWorks would need the cameras to be natively recognized by macOS for seamless integration. We’ll test this to see if they work directly via macOS APIs, but if they aren’t compatible, we’ll look into the alternative UVC-compliant options from the list you shared.

Additionally, I came across Campy (GitHub - ksseverson57/campy: Python package for streaming video from multiple cameras to disk. Features real-time compression and debayering using FFmpeg.), an open-source library designed for high-performance video recording with Basler cameras. Do you think this could be a viable solution for our needs, either as a standalone tool or integrated with MWorks?

Lastly, you mentioned the possibility of developing an I/O device plugin for video recording within MWorks. If we were to pursue this, regardless of camera compatibility, how long might this integration take?

Thank you again for your help!

Best regards,
Simona Vaitekunaite

Hi Simona,

We’ve seen that Basler cameras are macOS-compatible through their pylon Camera Software Suite, but I understand from your response that MWorks would need the cameras to be natively recognized by macOS for seamless integration.

They need to be natively supported to work with MWorks’ face recognizer. But as I said previously, MWorks currently has no support for video recording, so that’s something I would need to add (via a new plugin).

Additionally, I came across Campy, an open-source library designed for high-performance video recording with Basler cameras. Do you think this could be a viable solution for our needs, either as a standalone tool or integrated with MWorks?

If it works on macOS, then yes. You could integrate it into your experiment using MWorks’ Python actions.
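
Something along these lines, for instance (I haven’t used Campy myself, so the launch command, the config file name, and whether terminate() lets it shut down cleanly are all guesses you’d need to check against its README):

```
// Sketch: start/stop an external Campy recording around a trial using
// MWorks' Python actions. The "campy-acquire" command below is a guess;
// check Campy's documentation for the actual invocation.

protocol 'Record trial video' {
    run_python_string ('import subprocess; recorder = subprocess.Popen(["campy-acquire", "campy_config.yaml"])')

    // ... run the trial here ...
    wait (30s)

    // Stop the recorder at the end of the trial
    run_python_string ('recorder.terminate(); recorder.wait()')
}
```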

Lastly, you mentioned the possibility of developing an I/O device plugin for video recording within MWorks. If we were to pursue this, regardless of camera compatibility, how long might this integration take?

It would depend on the details, but I don’t think it would be a huge project – maybe 2-3 weeks of my time to get you an initial version? When would you need it by?

Chris

Hi Chris,

I hope you are doing well. I want to follow up on your suggestion to use render actions to update fixation targets. Unity can write an output file that records the X/Y position and size of the stimulus on each frame. Could this information be read by MWorks? In the render actions examples, formulas are used to describe the size and position changes of the circle, but perhaps there is a way to use the output file to drive the position and size changes instead of approximating them with a formula.

Let me know what you think!

Kind regards,
Simona

Hi Simona,

Sure, you should be able to use the output file. You could use Python actions to read the data from the file and store it in MWorks variables. (You’d probably want to do this just once, when your experiment starts.) Then, you could use render actions to pull the data out of the variables and update the relevant touch or fixation targets.
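
Roughly like this, for example (the file name and the JSON layout are assumptions on my part; if Unity writes CSV or something else, just change the parsing; setvar() is how Python actions write to MWorks variables):

```
// Rough sketch: read Unity's output file once at experiment start and store
// the per-frame target data in MWorks variables. The file name and its format
// (JSON with "x", "y", and "size" fields per frame) are assumptions; adapt the
// parsing to whatever Unity actually writes, and use a full path if needed.

var n_frames = 0
var target_x_list = []
var target_y_list = []
var target_size_list = []

protocol 'Load Unity target data' {
    run_python_string ('import json; frames = json.load(open("unity_targets.json"))')
    run_python_string ('setvar("target_x_list", [f["x"] for f in frames])')
    run_python_string ('setvar("target_y_list", [f["y"] for f in frames])')
    run_python_string ('setvar("target_size_list", [f["size"] for f in frames])')
    run_python_string ('setvar("n_frames", len(frames))')

    // ... then queue the video and render-actions stimuli, which read
    // target_x_list / target_y_list / target_size_list, as in the earlier
    // playback sketch ...
}
```

If the loading logic gets any longer than this, it would probably be cleaner to put it all in one script and call it with a single run_python_file action.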

If you need more info, please let me know.

Cheers,
Chris