With Oscillations’ connection to the movement arts, it made sense to experiment with existing motion capture technology to find accurate, consistent, and scalable ways to obtain three-dimensional motion data for purposes such as animation or machine learning to augment performances in virtual reality.

An additional motivation to learn more about motion capture was connected to our early experiments with spatial audio (read more about them in our spatial audio blog post). Apart from using ambisonic recordings, we attempted to sharpen the effect of audio-visual synchrony by connecting sounds to movements in post-production, but this proved to be difficult using only two-dimensional motion tracking software.

Although we did not have time to return to our spatial audio prototype with 3D data, below we describe the results of having played with five different hardware and software options for motion capture. Each description is formatted similarly to a tech review you would see online, and at the end we summarize our findings in a chart. Our experiments are by no means scientifically sound or rigorous, but we hope to give readers an intuition of each technology’s efficacy, and even some ideas for how to improve upon them for motion capture purposes.

Mocha is motion tracking software by Boris FX known for its planar tracking capabilities. The company has a VR version (by which they really mean 360) that works specifically with equirectangular footage. Ranging from $995 to $1695, this option is on the relatively cheap side of motion capture technology and benefits from the fact that, except for a computer, Mocha does not need any additional hardware to work. However, right off the bat we noticed that tracking a 360 video is still just motion tracking in two dimensions. This is a limitation, especially if Oscillations were to incorporate depth into their virtual reality experiences. This was the software we initially used to attempt a prototype with spatialized audio, and while we could visually fake spatialization, we could not spatialize audio without Z-axis information.
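To make the Z-axis limitation concrete, here is a minimal Python sketch of the difference (our own illustration, not code from the prototype; the gain and pan formulas are deliberately simplistic). With a full 3D position you can derive both a distance-based gain and a left/right angle; with only a 2D track you can fake the pan, but there is nothing to drive the distance term.

```python
import math

def spatialize(source_xyz, listener_xyz=(0.0, 0.0, 0.0)):
    """Rough point-source spatialization from a 3D tracked position.

    Returns (gain, pan): gain follows an inverse-distance falloff and pan is
    in [-1, 1] (left to right), taken from the azimuth around the listener.
    """
    dx = source_xyz[0] - listener_xyz[0]
    dy = source_xyz[1] - listener_xyz[1]
    dz = source_xyz[2] - listener_xyz[2]
    distance = max(math.sqrt(dx * dx + dy * dy + dz * dz), 0.1)  # avoid divide-by-zero
    gain = 1.0 / distance            # farther away -> quieter
    azimuth = math.atan2(dx, dz)     # angle to the left/right of straight ahead
    pan = math.sin(azimuth)          # crude stereo pan value
    return gain, pan

def fake_spatialize_2d(track_x, frame_width):
    """What a 2D track allows: pan from horizontal screen position only.

    With no depth information the gain has to stay constant (or be guessed),
    which is the wall we hit with two-dimensional tracks.
    """
    pan = 2.0 * (track_x / frame_width) - 1.0  # map [0, width] to [-1, 1]
    return 1.0, pan

# A dancer two meters ahead and one meter to the right of the listener:
print(spatialize((1.0, 0.0, 2.0)))      # reduced gain, panned right
# The same dancer as a pixel position in a 1920-wide frame:
print(fake_spatialize_2d(1440, 1920))   # pan only; gain is a constant 1.0
```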
We moved on to another motion capture technique after reaching this result, but the jury is still out on whether Mocha would be a completely useless solution for Oscillations’ purposes. Consider applications with artificial intelligence to augment a VR performance. Could a machine use only 2D tracked data to learn phrases of movement to which it could react, or from which it could generate its own dance? We hope to answer questions like these with more research with this software.

Kinect

After Mocha, we tried out Microsoft’s Kinect (Version 2) sensor with an application called Kinect Animation Studio (KAS). Out of the box, the Kinect can be used with Processing and some libraries to access the depth sensor and do things like track facial expressions and create skeletons from tracked bodies. The cost of all this is what it would cost for a strong computer, a Kinect, and a Kinect-to-Windows adapter, if you can still get your hands on one now that they are out of production. KAS takes this tracking a step further, providing an interface for recording 3D skeleton data that is saved as an .fbx file, which can then be edited and altered in motion graphics software like Autodesk’s MotionBuilder.

After downloading the software, we just plugged in the Kinect and hit record. From there, it was also easy to transfer the .fbx file to MotionBuilder; the developers had some helpful video tutorials on their documentation page that detailed the workflow. Then, we analyzed and played with the data. The Kinect is less accurate when a limb is obscured, like an arm going behind a person’s back. This makes sense considering the Kinect can only see one side of a body at any given time. The Kinect collected data at thirty frames per second with a fairly smooth and consistent result, although there was a quirky error in which the feet would glide around instead of staying steadily grounded.

One common thing to do with motion capture data is retargeting: taking the animated data and rigging it onto an avatar that is different from the original one. In this way, you can have many characters doing the same dance, or simply change the appearance of the dancer virtually without having to re-record another animated sequence. Naturally, we tried this and got some uncanny dancing characters. The avatars did not take the retargeting well, and their limbs ended up twisted and even more glitchy than the originally recorded skeleton; our lack of experience with MotionBuilder meant we could not fix these errors (if they were even fixable in the first place).
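For the curious, the core idea of retargeting can be sketched in a few lines. The snippet below is our own simplified illustration (not how MotionBuilder does it), assuming both rigs use the same joint names and store local rotations as quaternions: take each source joint’s rotation relative to its rest pose and re-apply that offset on top of the target joint’s rest pose.

```python
from scipy.spatial.transform import Rotation as R

def retarget_frame(source_anim, source_rest, target_rest):
    """Naive per-joint rotation retargeting for a single frame.

    Each argument is a dict mapping joint names to quaternions in (x, y, z, w)
    order: the captured local rotations for this frame, the source skeleton's
    rest-pose rotations, and the target skeleton's rest-pose rotations.
    """
    target_anim = {}
    for joint, quat in source_anim.items():
        if joint not in source_rest or joint not in target_rest:
            continue  # no matching joint on the target rig; skip it
        # How far the source joint has rotated away from its own rest pose...
        delta = R.from_quat(source_rest[joint]).inv() * R.from_quat(quat)
        # ...re-applied on top of the target joint's rest pose. When the two
        # rigs' rest poses are oriented differently, this naive composition is
        # exactly where twisted-limb artifacts come from.
        target_anim[joint] = (R.from_quat(target_rest[joint]) * delta).as_quat()
    return target_anim

# Toy example: identical rest poses, so a 45-degree elbow bend transfers cleanly.
rest = {"LeftForeArm": R.identity().as_quat()}
frame = {"LeftForeArm": R.from_euler("y", 45, degrees=True).as_quat()}
print(retarget_frame(frame, rest, rest))
```

A real retargeting solver also compensates for differences in bone lengths and rest poses and enforces constraints such as keeping the feet planted, which is well beyond this toy version.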