Friday, 8 December 2017

RBF Based Bind Conversion

Continuing the foray into RBF has produced a Python function that is able to convert the joint positions in one bind file to suit another mesh, based purely on the vertex data from the two meshes. This concept has been demonstrated in the past by Hans Godard (3m 30s in), who shows the conversion of mesh data such as clothing from one mesh to another in his video.
Building on this with a bind conversion seems like a logical progression. It should be possible to iterate quickly on multitudes of character types by taking a base mesh with clothing, bound to a skeleton, converting the clothing to a different mesh, and then binding it to a skeleton that has been regenerated using the same process. As long as the source skin weights are decent, there is no reason the target mesh should not achieve the same level of deformation.
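
The core of the transfer step fits in a few lines. Below is a minimal Python sketch, assuming both meshes share vertex count and ordering and using a plain linear kernel; the function name and the small regularisation term are my own, and this is only an illustration of the idea, not the function shown in the video. A production version would subsample the node points rather than solve against every vertex of a dense mesh.

import numpy as np

def transfer_positions(src_verts, dst_verts, joint_positions):
    """Carry points across the deformation implied by src -> dst vertices.

    src_verts, dst_verts : (n, 3) corresponding vertex positions
    joint_positions      : (m, 3) points to convert
    """
    src = np.asarray(src_verts, dtype=float)
    dst = np.asarray(dst_verts, dtype=float)
    pts = np.asarray(joint_positions, dtype=float)

    # pairwise distances between the RBF node points (the source vertices)
    phi = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)

    # per-axis weights that reproduce the vertex offsets; the tiny diagonal
    # term keeps the linear-kernel system well behaved
    weights = np.linalg.solve(phi + 1e-8 * np.eye(len(src)), dst - src)

    # evaluate the interpolant at each joint position
    dist = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return pts + dist.dot(weights)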

We have tried combining the two processes at our studio with a high level of success. This video highlights the bind section of the process.

RBF_based_bind_conversion from SBGrover on Vimeo.

Wednesday, 6 December 2017

Animation Ghosting Locator

I have played around with locators in Maya before and have produced a couple that have actually been of some use in the daily grind. During the transition from previous versions to Maya 2016 I had to convert these to the new draw override method required to display them in Viewport 2.0. I found it a headache, as the new way of assembling the data seemed rather opaque compared to the simpler Legacy Viewport days.
More recently I decided to return to this. Now that I have a better knowledge of the API and am more confident using C++, I wanted to understand the process more completely.
The classes now available to me include MUIDrawManager, which encapsulates the tools needed to create the visual aspects of the locators.

I opted to create a locator that would provide the animation team with some kind of ghosting on the meshes in the scene, allowing them to compare the current frame with others without changing the frame they are on. For this I identified a number of things I would need:

1. A way of returning the mesh at a different frame
2. A way of rendering that returned data in the current view
3. Options to change the visual aspects as required
4. Options to change the frame read
5. An option to have the result track the current frame with the existing offset

I was already aware of a class that would give me the ability to read items at a frame other than the current one: MDGContext. I had never used it before, but doing so did not prove difficult. The sample code below shows the section that deals with the read-ahead for the ghosting data.

void GhostingNodeDrawOverride::getMeshPoints(const MDagPath& objPath)
{
    MStatus status;
    MObject ghostObj = objPath.node(&status);

    if (status)
    {
        // get the plug to the mesh input
        MPlug plug(ghostObj, GhostingNode::mesh);

        // if data exists do something
        if (!plug.isNull())
        {
            // get user input data (desired frame/tracking switch)
            double frame = getFrame(objPath);
            int isTracking = getTracking(objPath);

            // is tracking on for the first time?
            if (isTracking == 1 && switched == 0)
            {
                // get the current time as an NTSC frame number
                MTime currentTime = MAnimControl::currentTime();
                int curTime = (int)currentTime.as(MTime::kNTSCFrame);

                // store the offset between the current and desired frames
                difference = curTime - (int)frame;
                switched = 1;
            }

            // or is tracking already on?
            else if (isTracking == 1)
            {
                // get the current time as an NTSC frame number
                MTime currentTime = MAnimControl::currentTime();
                int curTime = (int)currentTime.as(MTime::kNTSCFrame);

                // offset the frame by the previously stored difference
                frame = curTime - difference;

                // convert the calculated frame back to an MTime
                MTime desiredFrame(frame, MTime::kNTSCFrame);

                // set up an MDGContext at the desired frame
                MDGContext prevCtx(desiredFrame);

                // evaluate the mesh plug at that context
                MObject geoPrev = plug.asMObject(prevCtx);

                // create a MeshPolygon iterator from the returned MObject
                MItMeshPolygon meshIt(geoPrev);

                // ... the point, edge and colour lists are then filled
                // from this iterator (trimmed for the post)
            }

            // else everything is off
            else
            {
                switched = 0;
                difference = 0;
            }
        }
    }
}

Something to bear in mind is that making use of a context-based read can be expensive: Maya has to re-evaluate the frame under the hood to pass back the correct data, which can slow performance somewhat.
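
If you fancy experimenting with the context read without compiling anything, the same idea can be tried through the legacy Python API. A rough sketch, where the pPlaneShape1 mesh and the ten-frame offset are just placeholders:

import maya.OpenMaya as om
import maya.OpenMayaAnim as oma

# grab the outMesh plug on an assumed mesh shape
sel = om.MSelectionList()
sel.add('pPlaneShape1')
meshObj = om.MObject()
sel.getDependNode(0, meshObj)
plug = om.MFnDependencyNode(meshObj).findPlug('outMesh', False)

# build a context ten frames behind the current time
current = oma.MAnimControl.currentTime()
ctx = om.MDGContext(om.MTime(current.value() - 10.0, current.unit()))

# evaluate the plug at that time and iterate the historical mesh data
geoPrev = plug.asMObject(ctx)
meshIt = om.MItMeshPolygon(geoPrev)
print(meshIt.count())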

Items 3, 4 and 5 are easily dealt with, as they are simply extra attributes on the node that give the user options for interacting with it.
Item 2 involved the actual rendering of the locator, which meant finding out how a number of the MPxDrawOverride functions operate with each other. Amongst them a key function is addUIDrawables, in which the locator is 'built'. Most of your code will sit in this function, as it is where you assemble points, edges, colours and so on, and package them ready to be rendered. The bulk of the locator code, at least in my case, sits under this override.

For my locator I wanted to allow a user to draw a whole mesh, a wireframe and a set of points, either one at a time or all together. I also wanted to allow control over colour and transparency, which would be very important for distinguishing the locator in a busy scene. Below is the section of code where this is set up. I have previously created a point list, edge list and colour list from the iterator in the snippet above and am passing them to MUIDrawManager functions to get the expected result.

if (trianglePointList.length() > 0)
{

 // get the color of the points
 MColor pointColor = colorWeights[0];

 // reset and start drawing
 drawManager.beginDrawable();

 //user defined display settings
 
 // render edges
 if (displaySettings[2] == 1)
 {
  drawManager.setPaintStyle(MHWRender::MUIDrawManager::kFlat);
  drawManager.mesh(MHWRender::MUIDrawManager::kLines, edgePointList, NULL, NULL, NULL, NULL);
 }

 // render faces
 if (displaySettings[1] == 1)
 {
  drawManager.setPaintStyle(MHWRender::MUIDrawManager::kShaded);
  drawManager.mesh(MHWRender::MUIDrawManager::kTriangles, trianglePointList, NULL, &colorWeights, NULL, NULL);
 }

 // render points
 if (displaySettings[0] == 1)
 {
  drawManager.setPaintStyle(MHWRender::MUIDrawManager::kFlat);
  drawManager.setPointSize(3);
  drawManager.setColor(pointColor);
  drawManager.points(trianglePointList, false); //second parameter is draw in 2d
 }

 // finish drawing for now
 drawManager.endDrawable();
} 

As you can see from above, I am using a mixture of kLines, kTriangles and plain points. The first two are supplied to the mesh function and come from the MUIDrawManager::Primitive enum.

Even though I was initially a little put off by the way Maya now needs the code structured for Viewport 2.0 locator drawing, it did not take long to get a grasp on how the class should be utilised, and although I am sure I still have much to learn I feel a good start has been made.

Maya Animation Ghosting Locator from SBGrover on Vimeo.

RBF Based Colour Reader

Years back I produced a node-based colour reader that utilised the closest point utility node to read back texel values at a UV position. By supplying a ramp texture you could have the system spit out RGB values, which, being 0 to 1 based, lend themselves perfectly to driving other systems. One drawback of this method was that the texture had to be procedurally generated; textures created by hand in Photoshop would not work, which made it harder to customise the colours for a specific output. Another was that it was limited to NURBS surfaces.

However it did work well, especially in areas where setting an extreme position was not always simple. For example, we have a couple of other pose space solutions available at our studio. One is the standard cone reader, which takes the angle between two vectors and returns a value based on how far that angle sits between the centre and the outer edge of a given radius.
The second takes the vector magnitude between a radial centre point and a target point and again tests whether that magnitude sits inside or outside a given radius. These two approaches suffer from the same shortcoming. When the target point or angle passes into the given radius and hits zero, the output is at maximum; continue on, though, and the target passes the centre and travels back outside the radius. This leads to problems. For instance, driving a blendshape with the output of either of these setups will see the blendshape target climb to full application before decreasing again. In areas such as a character's shoulders this can lead to unpredictable deformation unless many of these readers are used to counteract the problem.
Hitting and passing these extremes with the colour space reader instead leaves the extreme value reading maximum, meaning that when a limb is pushed a bit too far the driven shapes do not start collapsing inwards again.
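
To make that shortcoming concrete, here is a toy version of the cone reader. None of this comes from our in-house node; it simply shows the falloff shape, and the sweep at the end shows the value climbing to maximum at the centre and then falling away again as the driver carries on through:

import math

def cone_reader(driver, centre, radius_deg):
    """Return 0..1 from the angle between two (assumed unit) vectors."""
    dot = sum(a * b for a, b in zip(driver, centre))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return max(0.0, 1.0 - angle / radius_deg)

# sweep a driver vector through the cone centre and out the other side
for deg in range(-120, 121, 30):
    v = (math.cos(math.radians(deg)), math.sin(math.radians(deg)), 0.0)
    print('%4d -> %.2f' % (deg, cone_reader(v, (1.0, 0.0, 0.0), 90.0)))
# -90 -> 0.00, 0 -> 1.00, 90 -> 0.00: full application, then collapse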

My colleagues and I have recently been investigating the application of RBF-based solving to large amounts of data. I decided it was time to rewrite the colour space reader, this time utilising the Maya API, C++ and the new RBF learnings.
By using each mesh vertex on the reader as a 'node point' for the RBF, and the colours at each of those vertices as values, it was possible to extrapolate a weighted result that could be output as a single RGBA value. The beauty of this method is that rather than sampling texels, which are more difficult to feed into the method, the user can simply create any mesh, apply vertex colours as they wish to any of its vertices and get a result. Want to change the result? Change the colours or reshape the mesh. It's nice and simple.
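
The machinery is the same as in the bind conversion post above, just with colours as the values. A minimal sketch of the idea (numpy, linear kernel, my own names; the real node is C++ and adds the closest-point lookup described below):

import numpy as np

def solve_colour_weights(verts, colours):
    """Pre-solve RBF weights; verts are the node points, colours the values."""
    verts = np.asarray(verts, dtype=float)
    phi = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=-1)
    return np.linalg.solve(phi + 1e-8 * np.eye(len(verts)),
                           np.asarray(colours, dtype=float))

def sample_colour(point, verts, weights):
    """Evaluate the interpolant at an arbitrary point on or near the mesh."""
    d = np.linalg.norm(np.asarray(verts, dtype=float) -
                       np.asarray(point, dtype=float), axis=-1)
    return d.dot(weights)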

The solution still uses a closest-point lookup, but through MFnMesh this time.
I have included a Python version of this node with the post to get you started if you fancy a stab at it. This version does not use RBF; instead it weights the colours by inverse distance.

RBF Based Colour Reader from SBGrover on Vimeo.

It's worth noting that whilst creating this node I found an issue with Maya and the worldMesh output. If worldMesh is used then the colour output does not update when colours on the mesh change. This does not appear to happen with the Python version, but it is worth keeping your eye on. If you do hit this, you will need to adjust the node to use outMesh, which involves multiplying the inMesh points by the mesh's worldMatrix to convert them to world space. You will also need to multiply the centre object MPoint by the inverse of this worldMatrix and use this new MPoint in the closestPoint calculation.
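
In code the adjustment looks something like this sketch against the legacy Python API, where worldMatrix would come from a hypothetical extra matrix input on the node:

import maya.OpenMaya as om

def closest_point_world(meshFn, centrePos, worldMatrix):
    """meshFn is built from outMesh data; centrePos is a world-space MPoint."""
    # bring the world-space centre into the mesh's object space
    centreLocal = centrePos * worldMatrix.inverse()

    closestPoint = om.MPoint()
    util = om.MScriptUtil()
    util.createFromInt(0)
    polygon = util.asIntPtr()
    meshFn.getClosestPoint(centreLocal, closestPoint, om.MSpace.kObject, polygon)

    # push the object-space result back out to world space
    return closestPoint * worldMatrix, util.getInt(polygon)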

"""

import maya.cmds as mc
mc.delete(mc.ls(type='ColorSpaceReaderPy'))
mc.flushUndo()
mc.unloadPlugin('ColorSpaceReaderPy')
mc.loadPlugin('ColorSpaceReaderPy.py')
mc.createNode('ColorSpaceReaderPy')


mc.connectAttr('pPlaneShape1.worldMesh[0]', 'ColorSpaceReaderPy1.inMesh', f=True)
mc.connectAttr('locator1.worldMatrix[0]', 'ColorSpaceReaderPy1.centre', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outClosest', 'pSphere1.translate', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outColor', 'lambert2.color', f=True)

"""

import maya.OpenMaya as om
import maya.OpenMayaMPx as omMPx

kPluginNodeTypeName = "ColorSpaceReaderPy"
kPluginNodeClassify = 'utility/general'
kPluginNodeId = om.MTypeId(0x81012)

class ColorSpaceReader(omMPx.MPxNode):

 inMesh = om.MObject()
 inCentre = om.MObject()
 outClosest = om.MObject()
 outColor = om.MObject()

 def __init__(self):

  omMPx.MPxNode.__init__(self)

 def compute(self, plug, data):

  inMeshData = data.inputValue(ColorSpaceReader.inMesh).asMesh()
  inCentreMatrix = data.inputValue(ColorSpaceReader.inCentre).asMatrix()
  outColorHandle = data.outputValue(ColorSpaceReader.outColor)
  outClosestHandle = data.outputValue(ColorSpaceReader.outClosest)

  if not inMeshData.isNull():
   meshFn = om.MFnMesh(inMeshData)
   sourceVerts = om.MPointArray()
   colors = om.MColorArray()
   meshFn.getPoints(sourceVerts, om.MSpace.kWorld)
   meshFn.getVertexColors(colors)
   centrePos = om.MPoint(inCentreMatrix(3, 0), inCentreMatrix(3, 1), inCentreMatrix(3, 2))
   closestPoint = om.MPoint()
   mu = om.MScriptUtil()
   mu.createFromInt(0)
   polygon = mu.asIntPtr()
   meshFn.getClosestPoint(centrePos, closestPoint, om.MSpace.kWorld, polygon)
   closestPolygon = mu.getInt(polygon)

   faceVertArray = om.MIntArray()
   meshFn.getPolygonVertices(closestPolygon, faceVertArray)

   colorArray = om.MColorArray()
   magArray = om.MFloatArray()

   for i in range(faceVertArray.length()):
    mag = closestPoint.distanceTo(sourceVerts[faceVertArray[i]])
    magArray.append(mag)
    colorArray.append(colors[faceVertArray[i]])

   weights = om.MFloatArray(magArray.length(), 0.0)
   foundOne = 0
   weightsTotal = 0.0

   for i in range(magArray.length()):

    if magArray[i] == 0.0:
     weights.set(1.0, i)
     foundOne = 1
     weightsTotal = 1

   if foundOne == 0:

    for i in range(magArray.length()):
     weights.set(1.0 / magArray[i], i)
     weightsTotal += weights[i]

   unit = 1.0 / weightsTotal
   weightedColor = [0, 0, 0]

   for i in range(magArray.length()):
    w = unit * weights[i]
    weightedColor[0] += colorArray[i][0] * w
    weightedColor[1] += colorArray[i][1] * w
    weightedColor[2] += colorArray[i][2] * w

   weightedColorVec = om.MFloatVector(weightedColor[0], weightedColor[1], weightedColor[2])
   outClosestHandle.setMFloatVector(om.MFloatVector(closestPoint))
   outColorHandle.setMFloatVector(weightedColorVec)

  # mark the output plug clean so the node is not recomputed needlessly
  data.setClean(plug)


def nodeCreator():

 return omMPx.asMPxPtr(ColorSpaceReader())
 
def nodeInitializer():
 nAttr = om.MFnNumericAttribute()
 mAttr = om.MFnMatrixAttribute()
 tAttr = om.MFnTypedAttribute()

 ColorSpaceReader.inMesh = tAttr.create("inMesh", "im", om.MFnData.kMesh)
 tAttr.setReadable(0)
 tAttr.setKeyable(1)

 # note: each attribute must be configured through its own function set
 ColorSpaceReader.inCentre = mAttr.create("centre", "c")
 mAttr.setReadable(0)
 mAttr.setKeyable(1)

 ColorSpaceReader.outClosest = nAttr.createPoint("closestPoint", "cp")
 nAttr.setReadable(1)
 nAttr.setWritable(0)

 ColorSpaceReader.outColor = nAttr.createPoint("outColor", "col")
 nAttr.setReadable(1)
 nAttr.setWritable(0)

 ColorSpaceReader.addAttribute(ColorSpaceReader.inMesh)
 ColorSpaceReader.addAttribute(ColorSpaceReader.inCentre)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outClosest)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outColor)

 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outClosest)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outClosest)

def initializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.registerNode(kPluginNodeTypeName,kPluginNodeId,nodeCreator,nodeInitializer,omMPx.MPxNode.kDependNode,kPluginNodeClassify)
 
def uninitializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.deregisterNode(kPluginNodeId)