So Raf Anzovin’s got some cool demos of a rig on cg-char.
It got me thinking about how a person might do autoskinning. The first thing that came to mind was using the bind joints to build a polygonal mesh that could have skin data applied vertex by vertex, and then using transfer skin weights to get that data onto your mesh. It would need to account for meshes of different sizes and shapes, though. You could handle that by giving it layers, with the weights repeated, essentially making a point cloud of skin weights.
It wouldn’t be too hard to do with NURBS modelling tools: take a joint chain and build a cylinder around it by snapping a NURBS circle to each joint, orienting each circle between the two joints on either side of it, and then lofting from one to the next. Do that for a few different-sized circles, so that you end up with several layers of cylinders that conform to the shape of the joints. Since NURBS CVs are all organized and structured, it would be easy to assign weight gradients to them.
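The ring-placement and gradient part of that could be sketched without any Maya API at all. This is a hypothetical, plain-Python illustration: the joint positions, layer radii, and linear falloff are all made up, and a real version would snap actual NURBS circles and write real skin weights.

```python
# Sketch of the "lofted rings" idea: for a joint chain given as 3D positions,
# place a ring at each joint, oriented along the bone, and give each
# concentric layer a skin weight that fades with radius.
# All values here (joints, radii, falloff) are illustrative assumptions.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def ring_frames(joints):
    """For each joint, return (center, axis), with the axis oriented
    between the neighboring joints, like orienting the NURBS circle."""
    frames = []
    for i, center in enumerate(joints):
        a = joints[max(i - 1, 0)]
        b = joints[min(i + 1, len(joints) - 1)]
        axis = normalize(tuple(bb - aa for aa, bb in zip(a, b)))
        frames.append((center, axis))
    return frames

def layered_weights(radii):
    """Weight gradient across layers: innermost layer weighted most,
    outer layers fading linearly to a small residual."""
    rmax = max(radii)
    return [1.0 - 0.8 * (r / rmax) for r in radii]

joints = [(0, 0, 0), (0, 5, 0), (0, 9, 1)]
frames = ring_frames(joints)            # ring centers and bone-aligned axes
weights = layered_weights([0.5, 1.0, 1.5])  # one weight per cylinder layer
```

The gradient function is the interesting bit: because the CVs are structured, every layer gets a predictable weight just from its radius, no painting required.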
The problem comes with anything more complicated than a single joint chain. 🙂 Plus, to do the transfer you’ve got to have one solid mesh, as far as I know, and that is only possible with polys (right?). So maybe you convert them to polys (does that maintain skin weights?). Or I suppose you could store all the weights and vert positions in a couple of huge arrays, and apply them that way.
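The arrays idea could look something like this. It's a hypothetical brute-force sketch with no Maya API: the template points, joint names, and nearest-point lookup are all assumptions, and a real version would normalize weights across joints and use a spatial lookup structure instead of scanning every point.

```python
# Bake the template's vertex positions and per-joint weights into plain
# arrays, then weight each target vertex by copying from its nearest
# template point. Joint names and points below are made up for illustration.

def dist_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def transfer_weights(template_points, template_weights, target_points):
    """template_weights[i] is a dict {joint_name: weight} for
    template_points[i]. Returns one weight dict per target vertex,
    copied from the nearest template point."""
    result = []
    for p in target_points:
        nearest = min(range(len(template_points)),
                      key=lambda i: dist_sq(p, template_points[i]))
        result.append(dict(template_weights[nearest]))
    return result

cloud = [(0, 0, 0), (0, 5, 0)]
cloud_w = [{"hip": 1.0}, {"knee": 1.0}]
mesh = [(0.2, 0.1, 0), (0.1, 4.8, 0.3)]
print(transfer_weights(cloud, cloud_w, mesh))
# → [{'hip': 1.0}, {'knee': 1.0}]
```

That sidesteps the single-solid-mesh question entirely, since nothing ever has to be one connected surface; it's just a point cloud of weights, like the layers idea above.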
Anyway, it sounds like a complicated approach, but that was the first thing that came to mind. I’m not enough of a programmer to do something like this without actually building a structure that I can understand. Maybe I’ll try it someday.
Also, if anyone reading this has been using mlHogan or Norman: do you like having attributes for upper and lower limb stretch, or would you prefer something like the knee slide in one of those videos? (I don’t know if that would be good or bad with FK, and I’d like to have a universal attribute that affects both.)