This page describes my Master's degree thesis: a brief description, a demonstration, and a theoretical and technical explanation. The thesis concerned 3D animation and physics simulation, and consisted of implementing a large library in C++.
Animation comes in two forms:
- Keyframes (i.e. pre-recorded motion capture, or an export from an animation suite)
- Procedural or dynamic (e.g. ragdoll physics)
The project aim was to create an entirely standalone animation engine which could handle both of these on any given input model. Most animation engines tend to be hard to re-use as they rely on assumptions or private metadata about the models, and/or about the software environment (i.e. interfacing with other subsystems such as graphics, physics, collision detection, etc.). The project aimed to generalise the process.
It also used the ragdoll model to develop techniques to create so-called 'animation synthesis' via a process known as Inverse Kinematics (IK). Further methods were devised to allow blending of animation between a separate keyframe and dynamic source to produce a single animation stream which responded dynamically to world events, but followed a keyframed recording as best it could.
A small sample of what the library was capable of is provided here in the form of a few gif animations, which should display in your browser (if you have problems, try using Firefox). Please note: these run at a lower frame rate than the library really renders, making them appear jerky, and the screenshot code was quite inefficient and caused slowdown, so the time-spacing is not perfectly uniform.
Thesis writeup: pdf
So how does it all work?
It's all maths and physics so if you aren't into that, look away now.
Basically, a model is represented as a tree structure; each node is a limb. The root node is usually the waist, and, for example, a hand is a child node of the (fore)arm. The actual animation concerns the nodes' positions, which are represented using a transformation matrix describing their orientation in their own 'local space' (which is independent of their parent). To build the whole model, the matrices are recursively applied down the tree, so each child node inherits its parent's orientation, then applies its own. This means that when you move a character's arm, its hand moves with the arm without having to apply any special calculation to the hand. If you then make it move the hand, the fingers move accordingly. This keeps the model's limbs connected.
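As a minimal sketch of that recursive tree walk (using a plain 2D translation in place of a full 4x4 transformation matrix, and with names that are illustrative rather than the thesis library's own):

```cpp
#include <vector>

// Hypothetical sketch: each node stores its transform in local space;
// world-space positions are computed by walking the tree so that every
// child inherits its parent's transform before applying its own.
struct Node {
    float localX = 0, localY = 0;  // node's own local-space offset
    float worldX = 0, worldY = 0;  // computed world-space position
    std::vector<Node*> children;   // child limbs
};

// Recursively apply transforms down the tree. Moving a parent (e.g. an
// arm) automatically moves all of its descendants (the hand, the fingers).
void updateWorld(Node& n, float parentX, float parentY) {
    n.worldX = parentX + n.localX;
    n.worldY = parentY + n.localY;
    for (Node* c : n.children)
        updateWorld(*c, n.worldX, n.worldY);
}
```

With matrices instead of offsets, the addition above becomes a matrix multiply, but the recursion is identical.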
Keyframe animation is encoded on disk as a discrete sequence of transformation matrices for each node at a given time. These tend to be encoded at a lower frame rate than you'd expect in playback. The animation engine's job is to create a satisfactory continuous (or at least, less discrete) transition between these matrices which is independent of frame rate, which is usually done by linear interpolation based on time.
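The interpolation step can be sketched like this (hypothetical names, and a single scalar component standing in for a whole transform; a real engine interpolates per matrix element, or better, on a decomposed rotation/translation):

```cpp
// Hypothetical sketch: given two keyframes k0 at time t0 and k1 at time t1,
// produce the in-between value at playback time t by linear interpolation.
// Because this is driven by time, the result is frame-rate independent.
float lerpKeyframe(float k0, float t0, float k1, float t1, float t) {
    float alpha = (t - t0) / (t1 - t0);  // 0 at k0's time, 1 at k1's time
    return k0 + alpha * (k1 - k0);
}
```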
The ragdoll system breaks from the aforementioned description, and uses a 'constrained particle system' to represent a skeleton. There is a set of particles (3 or 4) which make up a node, and between them an imaginary infinitely tense spring ensures they stay the correct distance apart. If at any point they aren't, they are moved closer together, or further away, as appropriate. A ragdoll system need only consist of some simple Newtonian physics, a differential equation solver and a constraint evaluator, and the result should be a ragdoll skeleton which stays sort-of human-shaped after you push it over, although to do it well more detail has to be considered.
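The constraint evaluator's core operation can be sketched as a single relaxation step in the style of classic constrained-particle ragdolls (a simplification in 2D with hypothetical names, not the thesis library's API):

```cpp
#include <cmath>

struct Particle { float x, y; };

// Hypothetical sketch of the "infinitely tense spring": if two particles
// have drifted from their rest distance, move each of them halfway back
// along the line between them. Repeating this over all constraints each
// frame keeps the skeleton the right shape.
void satisfyDistance(Particle& a, Particle& b, float restLen) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    float diff = (len - restLen) / len;  // signed fraction to correct by
    a.x += 0.5f * diff * dx;  a.y += 0.5f * diff * dy;
    b.x -= 0.5f * diff * dx;  b.y -= 0.5f * diff * dy;
}
```

In practice all constraints in the skeleton are iterated a few times per frame, since satisfying one constraint can slightly violate its neighbours.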
The trick to doing this generically is to add in a skeletal retargeting system so that you can take the input model, retarget it to a ragdoll model, apply physics to the ragdoll, then translate the result back into the standard model.
Here's an example of a model and its ragdoll skeleton.
Blending two animation sources (e.g. to make a character walk AND shoot a gun at the same time) is easy if they're both keyframes, as one can use interpolation to take a weighted average between the two (or more) sets of data. This method leads to visual artifacts when trying to blend a ragdoll and keyframe stream; instead this project implemented a powered-ragdoll approach where the ragdoll's limbs gained 'motors' which propelled them towards the 'target' positions given by the keyframe stream.
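Both blending approaches can be sketched in a few lines (scalar values standing in for full transforms, and all names hypothetical):

```cpp
struct Vec2 { float x, y; };

// Keyframe-with-keyframe blending: a weighted average of the two sources.
// wA = 1 plays only source A, wA = 0 only source B.
float blendKeyframes(float a, float b, float wA) {
    return wA * a + (1.0f - wA) * b;
}

// Powered-ragdoll blending: instead of averaging positions, each ragdoll
// particle gets a 'motor' force pushing it towards the target position
// supplied by the keyframe stream (a simple proportional controller here).
// The physics then remains free to respond to world events.
Vec2 motorForce(Vec2 current, Vec2 target, float gain) {
    return { gain * (target.x - current.x),
             gain * (target.y - current.y) };
}
```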
The library is about 10,000 lines of object-oriented C++, and a simple demonstration program added another few thousand. It uses the Open Asset Import Library (Assimp, http://assimp.sourceforge.net) to load the models from disk into memory (i.e. Assimp does all the file parsing and puts the data into appropriate structures).
The library itself and Assimp should compile on pretty much anything, although Assimp also requires Boost.
I used Valgrind and GDB for debugging and GNU Gprof for profiling.
The average update loop execution time (as recorded on one core of an Athlon X2 3000+) is as follows:
| Behaviour | Time (s) |
|---|---|
| 1 keyframe animation | 3.88 × 10⁻⁶ |
| 2 keyframe animations | 4.20 × 10⁻⁶ |
| 3 keyframe animations | 4.71 × 10⁻⁶ |
| Pure ragdoll | 1.27 × 10⁻³ |
| Ragdoll and 1 keyframe blend | 1.42 × 10⁻³ |
RAM usage was roughly 20 MiB, plus 2-3 MiB per model loaded. This is very suitable for desktop PC/console games, but likely too memory-heavy for portable devices.
Infrequently Asked Questions (but the most interesting)
You've got the library online, why not the demonstration program too?
- Although the library is cross-platform, the actual demo program uses some POSIX extras so you couldn't easily compile it on Windows, which I assume the vast majority of visitors are using.
- The models it's set up to work with are proprietary, so I couldn't legally redistribute them. Even if you could compile it, you wouldn't be able to run it without setting it up for new models, which isn't a trivial process due to the retargeting system.
- Bonus reason: The code was hideous.
Your screenshots are really ugly!
Yes, it's just rendering a wireframe. The animation system isn't really a visual thing, except that its results are intended to be rendered, but it didn't affect anything above the wireframe level (i.e. skins or meshes or textures, etc) and there weren't any marks for prettiness.
How successful was the generic aspect, really?
Not perfectly, but acceptably so. The main drawback is that the library's caller has to have some involvement with the ragdoll retargeting system, as the library is not capable of determining a sensible retargeting specification itself. It can't look at a model and say "there's a leg, oh, and another one". That's an entirely different problem.
Collision detection (CD) has to come from somewhere, but it is not at all sensible to build a full CD system into an animation library. This is solved by function pointers; the caller provides the library with a way to register with a CD system. The library then relies on the CD system to keep the model within legal world space boundaries.
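The function-pointer arrangement can be sketched roughly like this (all names are illustrative, not the thesis library's actual interface):

```cpp
// Hypothetical sketch: the caller registers a collision query with the
// animation library, which then consults it instead of implementing any
// collision detection itself.
using CollisionQuery = bool (*)(float x, float y, float z);

struct AnimationEngine {
    CollisionQuery query = nullptr;

    void registerCollision(CollisionQuery q) { query = q; }

    // Ask the caller's CD system whether a position is legal; if no CD
    // system was registered, everything is allowed.
    bool positionLegal(float x, float y, float z) const {
        return query ? query(x, y, z) : true;
    }
};

// Example caller-side CD: keep everything above the ground plane y = 0.
bool aboveGround(float, float y, float) { return y >= 0.0f; }
```

This keeps the animation library decoupled: it never needs to know how the caller's CD system works, only how to ask it a question.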
A minor problem I had was that models' tree structures are sometimes arranged quite bizarrely, which causes problems when you start pulling them around and expecting limbs to follow other limbs sensibly. The generic ragdoll goes a long way towards abstracting away any unrealistic structures, but in some of the gifs on this page you can see that the model shown had a couple of vertices at the back of her head that don't stay attached as they should. I'd argue this is a problem with the model, not with the library.
Apart from that it was very successful. The retargeting to a generic ragdoll skeleton proved very helpful, and my supervisor thought it may have been an original idea(?).
Would you recommend this for real use?
Not really. The main reason I have put this online is for others' personal or academic interest, and to demonstrate that I can write non-trivial software.
I wouldn't recommend actually using it because:
- The university asserts that it owns the IP (although whether or not this is true is debatable) which may cause problems if you were planning on redistributing it for any purposes more nefarious than amateur interest.
- There's a big difference between writing some software for a university project and writing some software with the intent of actually using it. It's probably in 'alpha' quality; it's pretty much feature complete but hasn't seen much real-life testing and the ragdoll system could do with a lot of tweaking.
Are there any clever extensions you could add to this?
Yes, a few interesting ones:
- Level of Detail: The generic ragdoll skeleton could be made such that it contains more nodes or fewer, dependent upon how close the model is to the viewer. Obviously, the more nodes, the higher the computational cost, but it will give higher detail and articulation. Scaling this based on how much detail the viewer is likely to appreciate (i.e. by distance) would be beneficial.
- The animations could be expanded to include a caller-set speed (e.g. walk: 1, run: 5). When going from keyframe to ragdoll mode, the ragdoll system would take the speed as an initial condition for its differential equations, thereby giving some interesting subtle effects by effectively altering limbs' resistance to forces depending on the direction the force comes from.
- The ragdoll retargeting is not fully automated; making it so would be hugely beneficial, but is probably a thesis in its own right.
I am doing a similar/related project, do you know of any useful resources?
The ones I found most useful were: