Thursday, 25 August 2016

Blendshape: Instancing Scene Data using inMesh

Whilst looking into blendshapes for the purpose of corrective targets in a facial rig, I discovered something interesting.
An empty shape node can be created and then have the outMesh of another plugged into its inMesh. It will then inherit the component data from the input mesh, and this appears to have no cost on scene size.
Try it out:

Create a cube
Set its subdivisions to 100 in all axes
Save the scene
Inspect the file and check its size

Now create another cube and plug the outMesh of the first into the inMesh of the second (from and to the shape nodes). You should notice that the second cube ends up looking identical to the first.
Save the scene
Inspect the file and check its size
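The steps above can be scripted with maya.cmds inside a Maya session; here is a minimal sketch (the exact node names Maya returns may differ in your scene):

```python
# Requires a running Maya session.
from maya import cmds

# First cube: heavy, with 100 subdivisions in all axes
src = cmds.polyCube(subdivisionsX=100, subdivisionsY=100, subdivisionsZ=100)[0]
src_shape = cmds.listRelatives(src, shapes=True)[0]

# Second cube: a default cube whose shape we will drive from the first
dst = cmds.polyCube()[0]
dst_shape = cmds.listRelatives(dst, shapes=True)[0]

# Plug outMesh of the first shape into inMesh of the second;
# the second cube now displays the same dense mesh as the first
cmds.connectAttr(src_shape + '.outMesh', dst_shape + '.inMesh')
```

Saving before and after this connection lets you compare the file sizes as described above.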

You should notice that the file size has only increased by a nominal amount.
It would appear that, as the data is in effect instanced, the cost is only paid for one of the cubes. Maya will only make you pay for the differences between the first and second cubes: their deltas.

Try editing the second cube. If you move a large group of faces and save the file you will now see that the file size has increased again.
What use is this? I don't need a bunch of identical mesh data within a scene. Well, actually, this could be useful for populating large scenes with things like trees and rocks. However, I propose that it could also be useful for storing the deltas used when working with blendshape targets.
For instance, I have noticed people online complaining that file sizes get very large when they work with large amounts of corrective data. Most of the time the fix is either to delete the targets or to store them away in another file. It can be useful to keep them in the same file, however, for tasks such as editing or retrieving normal data.
So, as an alternative, create a 'base mesh' (a duplicate of the object you are working with), and then each time you need a new corrective, duplicate the base mesh and plug its outMesh into the inMesh of the new corrective. You should be able to make your edits and apply them as a blendshape corrective. Maya will only tot up a cost for the vertex differences, so scene size will be lowered. You also get the added bonus that any edits to the base mesh will propagate through all target shapes, so you can make relatively complex changes quickly without the problem of passing the data back to the correctives.
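That workflow could be wrapped in a small helper; this is a sketch only (the function name is my own, and as noted below the approach is untested), again using maya.cmds:

```python
# Requires a running Maya session.
from maya import cmds

def new_corrective(base_mesh, name):
    """Hypothetical helper: duplicate the base mesh and drive the duplicate's
    shape from the base via outMesh -> inMesh, so that only the sculpted
    deltas should add to the saved file size."""
    corrective = cmds.duplicate(base_mesh, name=name)[0]
    base_shape = cmds.listRelatives(base_mesh, shapes=True)[0]
    corr_shape = cmds.listRelatives(corrective, shapes=True)[0]
    cmds.connectAttr(base_shape + '.outMesh', corr_shape + '.inMesh')
    return corrective

# Example usage: sculpt on the returned mesh, then apply it as a target
# corrective = new_corrective('baseMesh', 'smile_corrective')
```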

By the way, I haven't actually implemented this yet... but in theory it should work.

Friday, 26 February 2016

MFnMesh Constraint Node

I recently decided to have a play with some of the functions available in the MFnMesh class within the Maya Python API. This led to the creation of a basic constraint that allows a transform to follow a point based on the average position of a set of vertices, derived from a combination of getClosestPoint, getPolygonVertices and getPoints. Initial attempts failed as I fell into the clutches of the dreaded cycle error. This happened because, when testing the idea in a custom node, I created an input plug that took the worldMatrix of the transform, passed it through the node, decomposed the result and dropped it back onto the translate plug of the same transform.
I then altered the way the matrix data was sampled from the transform. Rather than connecting it and having the plug refresh on each update, I read the worldMatrix manually and performed the required calculations before storing the result statically within the node. This pre-calculation was more than sufficient to give me the desired result and break the cycle.
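The sampling combination described above can be sketched with the Python API 2.0 version of MFnMesh; the function and variable names here are my own, not taken from the node itself:

```python
# Requires a running Maya session.
import maya.api.OpenMaya as om

def average_position_near(mesh_name, point):
    """Sketch: return the average world-space position of the vertices of
    the polygon closest to 'point' on the given mesh."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    fn_mesh = om.MFnMesh(sel.getDagPath(0))

    # getClosestPoint returns the closest point and the polygon it lies on
    _, face_id = fn_mesh.getClosestPoint(om.MPoint(point), om.MSpace.kWorld)

    # Gather that polygon's vertex indices and the mesh's point array
    vert_ids = fn_mesh.getPolygonVertices(face_id)
    points = fn_mesh.getPoints(om.MSpace.kWorld)

    # Average the vertex positions
    total = om.MVector()
    for i in vert_ids:
        total += om.MVector(points[i])
    return total / len(vert_ids)
```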
This took me back to an idea I had toyed with a year or so ago: switching between constraints without offsetting the affected transform. This would be an incredibly useful tool for disciplines such as character technical direction and character rigging. When I originally tested this I had also hit the cycle wall. A colleague suggested that Maya, due to the nature of its Dependency Graph, would not behave well when trying to implement such functionality. Never being one to give up, I have decided to have another stab at it.
In addition to the implementation of the node described above, I have concluded that, aside from the usual cosmetic bits and bobs, these extra steps will need to be added:

Read the local matrix of the driven transform in relation to driver A and store it in an offset matrix (the offset matrix is not exposed to the user)

On every update, calculate the driven position in relation to the current driver

On switch, multiply the local matrix by the current driver's world matrix, giving the world space of the driven transform

Multiply this new matrix by the inverse world matrix of the other driver to get the local matrix in relation to that driver, and overwrite the stored local matrix.
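The steps above are plain matrix algebra and can be checked outside Maya. Below, NumPy stands in for Maya's row-vector matrix convention (point @ matrix); all names are illustrative, and the matrices are simple translations for clarity:

```python
import numpy as np

def translation_matrix(x, y, z):
    """4x4 translation matrix in Maya's row-vector convention."""
    m = np.eye(4)
    m[3, :3] = (x, y, z)
    return m

driver_a = translation_matrix(1.0, 2.0, 3.0)
driver_b = translation_matrix(-4.0, 0.5, 2.0)
driven_world = translation_matrix(5.0, 5.0, 5.0)

# Step 1: store the driven transform's offset (local matrix relative to driver A)
offset = driven_world @ np.linalg.inv(driver_a)

# Steps 2-3: on every update, world = offset * current driver's world matrix
assert np.allclose(offset @ driver_a, driven_world)

# Step 4: on switch, rebuild the world matrix from the outgoing driver, then
# multiply by the inverse world of the incoming driver to get the new offset
world_at_switch = offset @ driver_a
offset_b = world_at_switch @ np.linalg.inv(driver_b)

# The driven transform's world matrix is unchanged after the switch
print(np.allclose(offset_b @ driver_b, driven_world))  # True
```

This confirms the algebra: recomputing the offset at the moment of the switch leaves the driven transform's world matrix untouched, which is exactly the no-offset behaviour being aimed for.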

It does seem deceptively simple, which begs the question: 'Why has no one else tried it?' The answer, of course, is that they probably have, which suggests it will not work; otherwise we would have hundreds of solutions for this by now. However, succeed or fail, part of the process is all about the journey and what I learn from it. I'll post up the results in due course with an explanation.