MWorks interactive game design

Hi Chris,

I’m following up to let you know that our Python environment is implemented, and we’re ready to start thinking about the MWorks interaction!

See below for details, but maybe it would be good to have a zoom chat about it? I’m mostly free tomorrow and most days next week, so whatever works for you is probably good.

It seems the biggest hurdle might be the rendering. Our PIL-based Python renderer is moderately fast (~250 frames per second for 512x512, dropping to ~150 fps for 1024x1024), but you were saying that sending image arrays from Python through MWorks for display might be slow, so rendering in MWorks would be better.

In that case, we can send a serialized state of the environment to MWorks. The raw state that the environment has is (easily converted to) a list of (polygon, color) pairs, ordered by z-layer. So it would be really easy to JSON-serialize the state as a list of ([vertices], (R, G, B)) pairs.
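
For concreteness, a sketch of what that serialization could look like (placeholder values, not our actual API):

    import json

    # Hypothetical z-ordered state: ([vertices], (R, G, B)) pairs.
    state = [
        ([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]], (255, 0, 0)),              # red triangle
        ([[0.2, 0.2], [0.8, 0.2], [0.8, 0.8], [0.2, 0.8]], (0, 0, 255)),  # blue square
    ]

    message = json.dumps(state)     # tuples serialize as JSON arrays
    restored = json.loads(message)  # round-trips as nested lists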

Can MWorks render a polygon given a list of vertices and a color?

If yes, then our lives will be really easy! If no, then how do you suggest we handle the rendering?

By the way, the environment codebase is here (it’s a private repo but I’ve added you as collaborator so you should have access):
https://github.mit.edu/jazlab/object_oriented_games
Most of the codebase isn’t very relevant for the MWorks interaction, but there are some example tasks if you scroll down on that page, and the Python renderer is here:
https://github.mit.edu/jazlab/object_oriented_games/blob/master/oog/observers/pil_renderer.py
The actual rendering happens around line 100, where it iterates through the environment state’s sprites and draws their vertices filled with their color.
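
In essence, that loop boils down to something like this (a stripped-down sketch, not the actual code):

    from PIL import Image, ImageDraw

    def render(sprites, size=(512, 512)):
        # sprites: z-ordered (vertices, color) pairs, drawn back to front.
        image = Image.new("RGB", size, (0, 0, 0))
        draw = ImageDraw.Draw(image)
        for vertices, color in sprites:
            draw.polygon(vertices, fill=color)  # fill each sprite's polygon
        return image

    frame = render([([(100, 100), (400, 100), (250, 400)], (255, 0, 0))])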

Thank you,
Nick

Hi Nick,

Sorry for the delayed response. I’m currently mulling over a couple different approaches to this. I’ll get back to you with more info soon.

Cheers,
Chris

Hi Chris,

Thank you, and just checking in to see if you have some ideas for polygon rendering for the interactive games.

Here are a couple thoughts on my mind:

  1. Could we realistically use our Python renderer (which can tick up to 300Hz) and use MWorks only to control timing and display, with an image stimulus? Each image is less than 1MB; I’m not sure how slow that would be to pass through MWorks or what conversions would be involved. Ideally we’d like something fast enough to render at the screen refresh rate (around 60Hz), though it would probably still be fine rendering at half that speed.

  2. Alternative idea, but potentially an easy solution: Would it be possible to run the display directly from Python? We can precisely control the timing in Python with PsychoPy, which can directly display our renderer’s image arrays. And with the entire task structure already implemented in our game environment, the only input we need to pass into Python is the joystick (yep, no need for any fixation; trials are initiated with the joystick). We could still use MWorks for event logging, joystick inputs, photodiode, and eye tracker. Does this sound feasible?

  3. Does MWorks’ renderer use an OpenGL backend? If so, since polygons can be drawn in OpenGL, hopefully it’s possible for MWorks to render polygons. If not, we could always resort to triangulation (though that could potentially slow things down a bit; a minimal sketch follows this list).
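
On triangulation: for convex polygons, a one-pass fan triangulation is cheap; concave shapes would need something smarter (e.g. ear clipping). A minimal sketch, just to show the idea:

    def fan_triangulate(vertices):
        """Split a convex polygon into triangles sharing vertex 0."""
        v0 = vertices[0]
        return [(v0, vertices[i], vertices[i + 1])
                for i in range(1, len(vertices) - 1)]

    # A square becomes two triangles:
    fan_triangulate([(0, 0), (1, 0), (1, 1), (0, 1)])
    # -> [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]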

Thanks,

Nick

Hi Nick,

Sorry again. Things are moving slower than usual at the moment.

Could we realistically use our Python renderer (which can tick up to 300Hz) and use MWorks only to control timing and display, with an image stimulus?

I’m working on a test of this right now. It will certainly work, but you may not be able to maintain a constant 60Hz. I’ll send you the example once it’s done, and you can assess its performance.

Alternative idea, but potentially an easy solution: Would it be possible to run the display directly from Python?

Yes, it’s possible. In this scenario, you’d probably need to run Python as an external process (i.e. not within the MWServer application). Also, if you’re going to render stimuli in PsychoPy, you might want to consider running the whole experiment with PsychoPy. (I’m not very familiar with PsychoPy, so I can’t really speak to the pros/cons vs. MWorks, but I know they solve similar problems.)

Does MWorks’ renderer use an OpenGL backend? If so, since polygons can be drawn in OpenGL, hopefully it’s possible for MWorks to render polygons.

Yes, although in the near future, we’ll be switching to Apple’s Metal framework. Either way, rendering polygons won’t be an issue. If passing rendered images from Python to MWorks proves ineffective, then the next best thing will probably be a new MWorks plugin that does the rendering you need. I’ve thought about a couple ways that might work. We can discuss it if/when we decide we need it.

Cheers,
Chris

Hi Chris,

Thanks for this information!

  1. Thanks for testing the Python-rendered images passing through MWorks — looking forward to benchmarking your example.

  2. Yeah, running the whole experiment in PsychoPy crossed my mind (a rough sketch of what that could look like follows this list). I’ll check with Mehrdad and others in the lab about whether this is a good idea, but I’m definitely inclined to use MWorks if possible, since the lab already has joystick/photodiode/eye tracking/… setups with MWorks, so leveraging the existing code (and expertise) would be good.

  3. Sounds like a good plan. I don’t think there’s a need to meet at the moment, but if the image pass-through approach isn’t fast enough then there might be.
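
For reference, the PsychoPy route from item 2 might look roughly like this (an untested sketch; PsychoPy’s ImageStim takes numpy arrays with values scaled to [-1, 1]):

    import numpy as np
    from psychopy import visual, core

    win = visual.Window(size=(512, 512), units="pix", color=[-1, -1, -1])
    stim = visual.ImageStim(win, size=(512, 512))

    for _ in range(60):                      # one second at 60 Hz
        frame = np.random.rand(512, 512, 3)  # stand-in for our renderer's output
        stim.image = frame * 2.0 - 1.0       # PsychoPy expects floats in [-1, 1]
        stim.draw()
        win.flip()                           # blocks until the next screen refresh

    core.quit()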

Thanks again,

Nick

Hi Nick,

I’ve attached the promised example. I recommend running it with the current MWorks nightly build, which embeds Python 3.8. In the Python file (render_scene.py), you’ll need to set the variable local_site_packages to the directory where PIL is installed.

The example animates a red circle orbiting around the center of a black square, which is helpful for assessing the smoothness of the animation. On my machine, it runs pretty well, but there are some stutters. You can try it out and see what you think. If anything is unclear or you have any questions, please let me know.

Would it be possible to alter your renderer so that, instead of outputting arbitrary polygons, it generates a list of predetermined shape types (e.g. circle, rectangle, star) with corresponding center points, sizes, rotations, and colors? If so, then I think I can work out a way to render inside MWorks without requiring a custom plugin. (Even if you think the pre-rendered image solution is sufficient, I may play around with this just to see how it works.)
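
To illustrate, a frame in that scheme might be described by something like this (hypothetical field names, just to show the shape of the data):

    frame = [
        {"shape": "circle",    "center": (0.0, 0.0),  "size": 0.1,
         "rotation": 0.0,  "color": (255, 0, 0)},
        {"shape": "rectangle", "center": (0.3, -0.2), "size": (0.2, 0.1),
         "rotation": 45.0, "color": (0, 0, 255)},
    ]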

Cheers,
Chris

Attachment: rendered_images.zip (1.92 KB)

Hi Chris,

Thank you very much for this! I will try it out soon (but fair warning, might not be until next Thursday or Friday) and will let you know.

The Sprite object in my game engine does have shape types for some kinds of shapes (specifically, the ones in this file), and we could absolutely expose those instead of the vertices for those shapes. However, for custom shapes that are not in that set, I feed my sprite a list of vertices.

Whether that feature is used depends on the task. We haven’t settled on tasks yet, but currently most of my prototypes do have custom shapes (in particular a half-annulus shape). Once we’ve settled on tasks I’ll let you know if that’s still the case.

My current TODO list is:

  1. Test your example and measure inter-frame distribution.
  2. If it’s too laggy, investigate whether entirely using PsychoPy could be a viable approach.
  3. If using PsychoPy would be non-trivial work, see if we can formulate our tasks to use only simple shape primitives.

I’ll keep you posted!

Thank you,
Nick

Hi Chris,

Sorry for the delay in benchmarking your implementation! But just to fill you in: there are now five of us (cc’ed on this email) using, or soon to be using, the Python-based task framework we talked about for human/monkey experiments, and a couple of us are nearly ready to start collecting data on the psych rig.

The results from benchmarking the frame-passing approach that you sent (credit to Ruidong) show that it’s unfortunately too slow to be practical: 30-35ms of lag just to write the image to disk and read it on the MWorks side. In contrast, for the monkey ephys experiments we’d like to run at 60Hz, i.e. ~16.7ms per frame. Since the Python environment takes up to 4ms to step and render, that leaves at most ~12ms to send an image to MWorks. In light of this, a couple of questions:

  1. Do you think there might be a faster way to send images to MWorks than writing PNG to disk? I’m guessing the answer is “no,” but perhaps with the new MWorks updates an alternative possibility has opened up.
    If not, then we’ll have to take the approach of sending a serialized state description to MWorks for rendering there. As we discussed earlier, for this we’d need some way to render polygons in MWorks: our Python environment uses a list of polygons as its state, and while in some tasks those are all simple shapes, in others we have shapes other than ellipses and rectangles. Just to bring up what you previously wrote about that, for the sake of everyone cc’ed:
    “””
    In the near future, we’ll be switching to Apple’s Metal (https://developer.apple.com/metal/) framework. Either way, rendering polygons won’t be an issue. If passing rendered images from Python to MWorks proves ineffective, then the next best thing will probably be a new MWorks plugin that does the rendering you need. I’ve thought about a couple ways that might work. We can discuss it if/when we decide we need it.
    “””

So the second question is:

  2. What do you think is the best way to render polygons in MWorks?
    If you’d like to have a zoom chat about this, please let us know!

Thank you!
Nick

Hi Nick,

I’m on vacation this week and will get back to you about this next week.

Cheers,
Chris

Hi Chris,

Thanks very much.

Also, would it be possible for us to meet and discuss this at some point? Aside from polygon rendering, we’d like to get your advice for how to set up our experiment pipeline, which includes passing eyelink and joystick data from MWorks to python, as well as display and reward information from python to MWorks. If so, what times work best for you?

Thanks,
Nick

Hi Nick,

Do you think there might be a faster way to send images to MWorks than writing PNG to disk?

Not without some changes to MWorks, but I don’t think that can be avoided at this point.

My first thought is just to eliminate the trip to disk. It should be straightforward to allow image stimuli to get their data from a variable instead of a file. Once that’s done, your Python code could render the image to a BytesIO object, and MWorks could read the image data directly from the in-memory byte string. This change alone may be enough to make your target frame rate achievable. Let me try it out and get back to you.
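
On the Python side, that would amount to something like this (a sketch; a blank PIL image stands in for one of your rendered frames):

    import io
    from PIL import Image

    image = Image.new("RGB", (512, 512))  # stand-in for a rendered frame
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")      # encode in memory, no trip to disk
    png_bytes = buffer.getvalue()         # byte string for MWorks to decode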

Also, would it be possible for us to meet and discuss this at some point?

Sure. I could do a Zoom meeting this Thursday or Friday morning.

Cheers,
Chris

Hi Chris,

Thanks! The BytesIO route sounds well worth a try; if you wouldn’t mind sending that along once you’ve implemented it, we’ll benchmark it like the write-to-disk approach and compare.

Unfortunately, I’m fully in classes Thursday and Friday 9am-12. Does any time Thursday or Friday afternoon work for you? Hansem and I are both almost entirely free Thursday and Friday afternoons, so pick your time and we’ll be there!

Thanks,
Nick

Hi Chris,

Looking forward to talking later. Just wanted to share some new benchmarking data:

Playing my tasks in MWorks with disk-writing frame-passing (512x512 images) takes ~34ms per step. This is broken down as:

2ms: Python-side physics simulation in the game engine.

2ms: Python-side PIL rendering.

16ms: Saving PIL image to disk. Yep, lots of time spent writing to disk!

14ms: MWorks side of things (reading image from disk, clearing/displaying).

This makes it seem possible we could get within 60Hz (~16.7ms/step) by saving on the disk-writing.

I then tried writing to BytesIO instead of disk. I couldn’t get MWorks to read the BytesIO data, but I benchmarked the Python-side writes:

13ms: PIL image to BytesIO as PNG. This is still too slow, regardless of how fast MWorks can then read/display it.

4ms: PIL image to BytesIO as JPEG. That’s fast enough, though we might want to avoid JPEG if we’re worried about compression artifacts.

By the way, PIL’s image.tobytes() method takes <0.5ms. That’s super fast, so it’s surely not re-encoding anything. I don’t know what data format it produces or whether we can re-convert on the MWorks side, but maybe that’s a possibility.

Also, by the way, converting a PIL image to a numpy array takes ~4ms.
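
In case it’s useful, the timings above came from roughly this (a sketch; the noise image is a stand-in, and timings vary with image content and machine):

    import io
    import time
    import numpy as np
    from PIL import Image

    # Stand-in for a rendered 512x512 frame.
    rng = np.random.default_rng(0)
    image = Image.fromarray(rng.integers(0, 256, (512, 512, 3), dtype=np.uint8))

    def time_ms(fn, repeats=100):
        start = time.perf_counter()
        for _ in range(repeats):
            fn()
        return 1000 * (time.perf_counter() - start) / repeats

    print(time_ms(lambda: image.save(io.BytesIO(), format="PNG")))   # ~13ms on real frames
    print(time_ms(lambda: image.save(io.BytesIO(), format="JPEG")))  # ~4ms
    print(time_ms(lambda: image.tobytes()))                          # <0.5ms
    print(time_ms(lambda: np.array(image)))                          # ~4ms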

Anyway, we can talk through the options given this when we meet.

Best,

Nick

Hi Chris,

Thanks for meeting with us today!

Attached is a zip with a dummy Python task that has the same API as the ones we’ll be training monkeys on. There’s a README in there with instructions for how to play a demo with your mouse, as well as an image of my slide from today. If you have a chance to help us get started building an MWorks task to run this, that would be great; none of us have used MWEL before, so we appreciate all the help we can get!

Also, random question: Does any USB HID joystick work with MWorks? Michael has used this one before:

It’s USB HID but apparently made for Windows, so we’re slightly concerned about whether it’ll work on a Mac.

Thanks,
Nick

Attachment: dummy_task.zip (80.4 KB)

Hi Nick,

Attached is a zip with a dummy python task that has the same API as the ones we’ll be training monkeys on.

Thanks, I’ll check it out.

Does any USB HID joystick work with MWorks? Michael has used this one before: … It’s USB HID but apparently made for Windows, so we’re slightly concerned about whether it’ll work on a Mac.

If it’s USB HID, then it should work. For example, in my office I have one of these (or possibly an older variant of the same), which doesn’t advertise Mac support either. However, it’s HID compliant, and macOS has no trouble discovering and interfacing with it.

Chris

Hi Chris,

Just checking in on whether you’ve had a chance to consider the Python-MWorks setup, specifically:

  1. How we can pass images from Python to MWorks within 12ms (i.e. without converting to PNG)
  2. How to control the timing of the Python calls to 60Hz
  3. How to aggregate lists of NIDAQ/EyeLink signals to send to Python.

I think Ruidong has gotten the frame-passing via PNG-writing working on the psychophysics rig (and I’ll probably be ready to as well in a week or two), but we haven’t yet worked out how to do those three things, with (1) being the most important.

Thanks,
Nick

Hey Nick,

I guess you just sent it to me.

Setayesh

Hi Nick,

How we can pass images from Python to MWorks within 12ms (i.e. without converting to PNG)

Please see the attached example. To run it, you’ll need to use this MWorks build.

In short: I’ve added a special image type that gets its pixel data from a Python object that implements the buffer protocol. In the example, I used a persistent NumPy array as the buffer, which eliminates the need to re-allocate memory every rendering pass. Note that the pixel data must be in RGBA format. (In the future, we can make the pixel format an option, if needed.)

This approach has just about the lowest possible overhead (no image encoding/decoding, no memory copying), so hopefully it will leave plenty of time for your rendering code. If not, we’ll have to try an entirely different approach. Please try the example with the provided MWorks build; if it seems to work, I’ll get it into the MWorks nightly build.
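
In outline, the Python side of the example does something like this (the renderer here is a placeholder; the MWorks side is in the attached experiment):

    import numpy as np
    from PIL import Image

    # Persistent RGBA buffer: allocated once, reused on every frame.
    framebuffer = np.zeros((512, 512, 4), dtype=np.uint8)

    def render_scene():
        # Placeholder for the game renderer; returns an RGBA PIL image.
        return Image.new("RGBA", (512, 512))

    def update_frame():
        framebuffer[...] = np.asarray(render_scene())  # in-place copy, no reallocation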

How to control the timing of the Python calls to 60Hz

The Python image will redraw itself on every display refresh, so as long as the rendering completes within the allocated per-frame time, this is covered.

How to aggregate lists of NIDAQ/EyeLink signals to send to Python.

I will try to get an example of this to you soon.

Chris