Movement data – what to do with it?



Now that we have recorded approximately 450 MB of movement data, it’s time to start processing and analysing it, and getting it ready for the next stages of the project. This post, and probably the next couple, are about movement data: in this one I’ll explain briefly how optical motion capture works, and in the following ones what we can do with the data.

Optical motion capture is a great way to digitise the full-body movements of participants. In optical motion capture, the position of reflective markers in the capture space is tracked with multiple cameras. The cameras emit infra-red light and record its reflections from the markers. To triangulate the position of a marker, the reflections need to be seen by at least two cameras at the same time. In our project, we tracked 2 × 37 markers (37 per dancer) using 20 cameras.
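To give a feel for the triangulation step, here is a minimal sketch in Python with NumPy, not the actual OptiTrack pipeline: with two cameras, the marker sits (approximately) where the two viewing rays come closest. The camera positions and directions below are made up for illustration.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Closest-point triangulation from two camera rays.
    o = camera position, d = unit direction toward the reflection."""
    # Solve for t1, t2 minimising |(o1 + t1*d1) - (o2 + t2*d2)|.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    # Return the midpoint between the two closest points on the rays.
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2

# Two hypothetical cameras a metre either side of the origin,
# both seeing a marker two metres up above the origin.
o1, o2 = np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = np.array([1.0, 0.0, 2.0]) / np.sqrt(5)
d2 = np.array([-1.0, 0.0, 2.0]) / np.sqrt(5)
print(triangulate(o1, d1, o2, d2))  # ≈ [0. 0. 2.]
```

With one camera you only know the direction of the marker; the second ray pins down the depth, which is why at least two cameras must see each reflection.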

Where the reflective markers are placed in the OptiTrack Motive system.


Although the cameras are carefully positioned around the capture space, angled and tilted to cover every part of it, there are always instances where one or more markers are not visible to the system. In our data this happened occasionally when Jarkko was reaching really high with his arms: his hand markers ended up too high to be seen. We tried to minimise this by re-configuring the cameras to match the choreography and extending the capture space where needed.

Another way to “lose” data is that a marker is occluded by other body parts. For example, if a performer’s hand covers the marker on her hip, the cameras obviously can’t see it, and there will be a gap in the data.

Gappy data is problematic not only because of the missing frames, but also because the mocap software works on trajectories. As long as a marker is continuously in view, the system recognises that it sees this one marker and builds a trajectory out of the measurements of its position. Our system measures the position of the markers 100 times every second (in other words, its sampling rate is 100 Hz), so it is important to organise the measurements of each marker neatly into labelled bins. Only with a complete time series of the position (in X, Y and Z) can further analysis (e.g. calculating the speed of the movement) be done.
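As a sketch of that last point: once a marker's positions sit in a tidy array, speed falls out of frame-to-frame differences scaled by the sampling rate. The trajectory below is random toy data, not our recordings.

```python
import numpy as np

FS = 100.0  # sampling rate in Hz, as in our system

# Toy marker trajectory: 500 frames of (x, y, z) positions in metres,
# generated as a random walk purely for illustration.
rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(0.0, 0.001, size=(500, 3)), axis=0)

# Velocity = frame-to-frame differences times the sampling rate (m/s);
# speed = magnitude of the velocity vector at each frame.
velocity = np.diff(positions, axis=0) * FS   # shape (499, 3)
speed = np.linalg.norm(velocity, axis=1)     # shape (499,)
```

The `np.diff` step is exactly why gaps hurt: one missing frame breaks the difference on both sides of it.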

If the marker goes out of view, however, the trajectory breaks, and to put together the full trajectory of any marker these fragments need to be combined, sometimes from dozens or even hundreds of pieces, which can be very time-consuming. Luckily the OptiTrack system that we have is pretty good at automatically joining these fragments into trajectories. The markers are placed on the performers’ bodies at pre-defined spots (see the figure above), and a model of each performer is made before a recording session. The system then fits that model onto the cloud of markers it sees, and is pretty accurate at recognising which marker is Jarkko’s right knee and which is Johanna’s left elbow.
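A toy version of this fragment joining might look like the following; it simply chains fragments whose endpoints are close in space, whereas the real software can also lean on the body model. The fragments and the `max_jump` threshold are made-up examples.

```python
import numpy as np

def join_fragments(fragments, max_jump=0.05):
    """Greedily chain fragments of one marker's trajectory.

    fragments: list of (start_frame, positions), positions an (n, 3)
    array; assumed non-overlapping in time. Two fragments are joined
    when the spatial jump between the end of one and the start of the
    next is below max_jump (metres). Purely illustrative.
    """
    chains = []
    for start, pos in sorted(fragments, key=lambda f: f[0]):
        if chains:
            c_start, c_pos = chains[-1]
            if np.linalg.norm(pos[0] - c_pos[-1]) < max_jump:
                # Same marker, most likely: append to the current chain.
                chains[-1] = (c_start, np.vstack([c_pos, pos]))
                continue
        chains.append((start, pos))
    return chains

# Two fragments of the same marker, separated by a short occlusion.
a = (0, np.array([[0.00, 0.0, 1.0], [0.01, 0.0, 1.0]]))
b = (4, np.array([[0.03, 0.0, 1.0], [0.04, 0.0, 1.0]]))
print(len(join_fragments([a, b])))  # 1 chain
```

Note that joining the fragments still leaves the occluded frames missing; filling those is the interpolation step described next.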

Once the trajectories are fixed, we can fill in the gaps by interpolation. This means that we look at where the marker was just before it disappeared and where it reappeared, and fill the gap with the most likely path in between. Short gaps (less than a second) can be filled easily, without anyone noticing. Filling larger gaps may need more work, but again the software helps, as it can use the neighbouring markers to estimate where the missing one might have gone. Our research assistant Maija is very proficient in cleaning up and preprocessing movement data, and our hard work in preparing the setup paid off: the data was very clean, and the occasional small gaps were easy to fix. So, now we have the dataset ready.
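The simplest kind of gap filling is linear interpolation per axis. This sketch, with a hypothetical `fill_gaps` helper and toy data, shows the idea; the model-based filling the software uses for longer gaps is more sophisticated.

```python
import numpy as np

def fill_gaps(traj):
    """Linearly interpolate NaN gaps in an (N, 3) trajectory, per axis."""
    traj = traj.copy()
    idx = np.arange(len(traj))
    for axis in range(traj.shape[1]):
        col = traj[:, axis]
        valid = ~np.isnan(col)
        # np.interp draws a straight line between the known samples.
        traj[:, axis] = np.interp(idx, idx[valid], col[valid])
    return traj

# Toy example: a marker occluded for frames 2 and 3.
traj = np.array([[0.0, 0.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [np.nan, np.nan, np.nan],
                 [np.nan, np.nan, np.nan],
                 [4.0, 4.0, 4.0]])
print(fill_gaps(traj)[2])  # [2. 2. 2.]
```

For a gap of a few frames at 100 Hz a straight line is usually indistinguishable from the true path, which is why short gaps can be filled "without anyone noticing".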

As with any other data, the first step is visualisation and descriptive stats. I’ll return to these in more detail in a future post, but I will share what I always do first in these projects: plot some frames of the data and make a short animation. These are done in Matlab using the awesome MoCap Toolbox that Petri Toiviainen and Birgitta Burger from Jyväskylä University have made. In the gif at the top of the post, you can see a 10-second snapshot of one take, at a low 10 Hz frame rate. The black dots are the positions of the joints, calculated from the marker positions. At this point, the 37 markers attached to each dancer have already been reduced to these 21 points representing body joints. A set of “bones” or connectors is drawn between the joints to represent the limbs, trunk etc. This stick-figure animation is a good representation of the data and could, with some editing, be used as a stimulus in a perceptual experiment. For the next step in the project, however, the visualisation is just for fun, as now it is time to start crunching the numbers.
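The MoCap Toolbox itself is Matlab, but the 100 Hz to 10 Hz reduction for such an animation amounts to keeping every tenth frame. A quick Python sketch, with made-up array shapes matching the numbers in this post:

```python
import numpy as np

FS_IN, FS_OUT = 100, 10        # recording and animation frame rates (Hz)
step = FS_IN // FS_OUT         # keep every 10th frame

# Hypothetical joint data: 10 s of 21 joints in 3-D at 100 Hz.
joints = np.zeros((1000, 21, 3))
frames = joints[::step]        # frames used for the 10 Hz animation
print(frames.shape)  # (100, 21, 3)
```

Each of those 100 frames would then be drawn as 21 dots plus the connector "bones" between chosen joint pairs to make the stick figure.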
