Hi Christopher -
I’m a new faculty member at Columbia University - and a close colleague of Najib Majaj (I did my PhD work with Tony Movshon at NYU).
I’m setting up my lab - and am very interested in using MWorks (in combination with Blackrock, Eyelink, and a ViewPixx3D monitor).
I know that the Movshon lab already uses MWorks with Blackrock and Eyelink - but I don’t know if anyone is using it to display images on a ViewPixx monitor.
Any reason this is not possible? Thought I should ask before buying very expensive monitors!
Many thanks,
Yasmine
Hi Yasmine,
While I have no experience with VPixx monitors myself, some other folks have reported using them with MWorks, so I guess it’s possible. As long as you can connect it to a Mac via DVI/HDMI/DisplayPort/etc., there shouldn’t be any issue.
However, MWorks won’t have access to any of the VIEWPixx’s integrated I/O functions, so any signals you want to capture from the VIEWPixx will need to be conveyed to an I/O device that MWorks does support. It sounds like it may be possible to implement direct communication between MWorks and the VIEWPixx (the data sheet mentions a “low-level ANSI C API”, which sounds promising), but I’m not aware of any MWorks users who have done this. If it is possible, I could create an MWorks plugin that provides this integration, but I would need access to the actual VIEWPixx device to do so.
I hope that helps. If you have any other MWorks-related questions, please don’t hesitate to ask!
Cheers,
Chris Stawarz
Thanks for this quick response!
Looking forward to working with you on all this!
Cheers,
Yasmine
Hi Yasmine,
Two quick questions regarding the MWorks/ViewPixx integration:
- Are you planning on using a RESPONSEPixx box with your VIEWPixx?
- Do you need support for analog input and/or output?
Thanks,
Chris
Hi Chris -
Hope you’re well - and here are two quick answers to your questions:
Are you planning on using a RESPONSEPixx box with your VIEWPixx?
NO
Do you need support for analog input and/or output?
I would like to get the values for pixel refresh out of the monitor (so that it can be used to sync data) - but I believe that’s all digital, not analog, correct? Any idea what people use analog input/output for?
Thank you!
Yasmine
Hi Yasmine,
Thanks for the quick answers!
I would like to get the values for pixel refresh out of the monitor (so that it can be used to sync data) - but I believe that’s all digital, not analog, correct?
Correct. You can output either the VSYNC signal (on digital output pin 23) or the RGB value of the upper left pixel on the screen (presumably using all 24 digital output bits). Either way, it’s all digital.
Any idea what people use analog input/output for?
They can be useful for general-purpose I/O tasks. For example, some folks in the Jazayeri lab use analog levers as an input device (like a pressure/force sensor, I think), and they record that signal as an analog input. Another lab uses analog output to drive a laser.
Chris
Hi Yasmine,
Support for DATAPixx and VIEWPixx devices is now in the MWorks nightly build. Digital and analog input and output are supported, as well as pixel and VSYNC output. Also, digital output via the input port is supported, so you can use digital outputs even if you’ve enabled pixel output mode.
Here’s an example of how you’d configure a VIEWPixx device with pixel mode and a single digital output line (e.g. to control a juice pump):
// Variable bound to the digital output line (e.g. to drive a juice pump)
var output = false

datapixx viewpixx (
    update_interval = 3ms
    enable_dout_pixel_mode = true  // RGB of the upper-left pixel drives the digital output port
) {
    // With pixel mode enabled, the output port is occupied, so
    // use_input_port routes this bit through the input port instead
    datapixx_bit_output (
        bit_number = 7
        value = output
        use_input_port = true
    )
}
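You could also add analog channels to the same device declaration - say, to record a lever signal or drive a laser, as I mentioned earlier. Here’s a rough sketch (the channel numbers are placeholders, and I’m assuming the analog channels follow the same pattern as the digital ones; check the documentation for the full parameter lists):

var lever_signal = 0
var laser_level = 0

// Additional channels inside the datapixx block above
datapixx_analog_input (
    channel_number = 0
    value = lever_signal  // sampled voltage from e.g. a lever/force sensor
)
datapixx_analog_output (
    channel_number = 0
    value = laser_level  // voltage to drive e.g. a laser
)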
I can provide more specific examples, if that would be helpful.
I imagine that, given the current circumstances, it may be a while before you’re able to try the new interface, which is fine. I’ll just note that it would be very helpful if I could hold on to the device you loaned me until you’ve had a chance to test things thoroughly. That way, if/when issues come up, I’ll be able to quickly debug and fix them.
Cheers,
Chris
Hi Chris -
I hope you and all your colleagues at MIT are healthy and safe - and thanks for the follow-up! I’ve been meaning to email you, but you beat me to it!
This all sounds good - and yes, as expected, I won’t be able to troubleshoot much in the next few weeks, so let’s hold off till I can. And yes - you can hold on to the device until we know that everything is up and running smoothly!
Take care!
Yasmine
PS - What’s the status of MWorks doing closed-loop experiments? For example, evolving visual stimuli on the fly based on a neuron’s response strength?
Hi Yasmine,
What’s the status of MWorks doing closed-loop experiments? For example, evolving visual stimuli on the fly based on a neuron’s response strength?
The only issue is getting the neural data into MWorks. If/how that’s possible depends on the recording system you’re using and its ability to send out the relevant information in a format that MWorks can use.
What system will you be using to record neural data?
Chris
Hi Chris -
Thank you for elaborating on this! I plan to use MWorks with Blackrock and Eyelink. I was under the impression from talking to my close colleague Najib Majaj that Blackrock plays well with MWorks, which is why I went with them for neural recordings.
For my work, I think it would be really cool to evolve 2D shapes (along the domain of contour curvature). It definitely doesn’t have to be the first experiment we do - but it will come in handy in the near term.
In terms of experimental priorities, however, it would be incredibly important for me to be able to have a paradigm that would allow me to map visual receptive fields manually in real time, as we discussed briefly before. For this, I’d like to be able to use mouse and keyboard commands to change the size, orientation, position, and type of stimulus on the fly (i.e., to draw from a battery of stimuli in memory and to manipulate their displayed properties).
Could you please give me a sense of how feasible that would be? I know my PhD mentor Tony Movshon has also expressed interest in this functionality - so it could be a win for him too.
Cheers - and stay safe!
Yasmine
Hi Yasmine,
I was under the impression from talking to my close colleague Najib Majaj that Blackrock plays well with MWorks, which is why I went with them for neural recordings.
As far as I know, Najib uses only one-way communication between MWorks and the Blackrock system. Specifically, his experiment uses an intermediate I/O device (e.g. a NIDAQ or Firmata device) to send synchronization words from MWorks to Blackrock.
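For reference, that one-way path is simple to set up. Here’s a minimal sketch of the Firmata variant - the serial port path and pin number are placeholders, and a real sync word would use several output lines:

var sync_bit = false

firmata arduino (
    serial_port = '/dev/cu.usbmodem1101'  // placeholder; use your Arduino's actual port
) {
    // One line of the sync word, wired to a digital input on the
    // Blackrock system; toggling sync_bit raises/lowers the line
    firmata_digital_output (
        pin_number = 13
        value = sync_bit
    )
}

In the protocol, you’d set sync_bit (or write a multi-bit word across several such lines) at the moments you want time-stamped on the Blackrock side.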
What you’re asking about is two-way communication, where the Blackrock system analyzes the neural data and communicates the results back to MWorks. While that may be possible (I’m not familiar with Blackrock’s software), I’m not aware of any labs that do this currently.
In terms of experimental priorities, however, it would be incredibly important for me to be able to have a paradigm that would allow me to map visual receptive fields manually in real time, as we discussed briefly before. For this, I’d like to be able to use mouse and keyboard commands to change the size, orientation, position, and type of stimulus on the fly (i.e., to draw from a battery of stimuli in memory and to manipulate their displayed properties).
Could you please give me a sense of how feasible that would be?
As I’ve noted previously, MWorks has built-in interfaces for both keyboard and mouse input, so this should be entirely feasible. If you provide me with the specifics of how things should work, I can help you implement it in an MWorks experiment.
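To give you a flavor, here’s a rough sketch of the mouse side: a probe grating whose position tracks the cursor. (The stimulus and its parameter values are just examples; keyboard handling would go through the HID interface, which I’ve left out here.)

var pointer_x = 0
var pointer_y = 0
var probe_size = 2        // degrees
var probe_orientation = 0

// Built-in mouse interface: the cursor position (in degrees) is
// continuously stored in the given variables
mouse_input pointer (
    mouse_position_x = pointer_x
    mouse_position_y = pointer_y
)

// Probe stimulus whose position, size, and orientation follow the variables
drifting_grating probe (
    spatial_frequency = 1
    speed = 2
    grating_type = sinusoid
    mask = ellipse
    x_size = probe_size
    y_size = probe_size
    x_position = pointer_x
    y_position = pointer_y
    rotation = probe_orientation
)

Once the probe is live-queued (live_queue_stimulus (probe) followed by update_stimulus_display ()), any change to those variables - whether from the mouse or from key handlers that adjust probe_size and probe_orientation - shows up on the next display refresh.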
Cheers - and stay safe!
You, too!
Chris