Thursday, 9 August 2018

RBF Character Conversion

Not more RBF stuff!? Oh yes! Recently I extended the use of the RBF Bind Retargeter to include mesh retargeting.

Similar to the bind retargeter, this tool will make use of two source meshes to perform the retarget, but instead of joints being fed into its hungry mouth, a set of conversion meshes will be offered up - for instance a male figure, a female figure and a set of clothing built for the male. If successful, the tool should retarget all the clothing to fit the female.
The character modelling department was very happy about this, as it extends the use of our NPCs and other simple character types without creating any remodelling work for them. It can also work with pretty much anything - bangles, watches, hats, t-shirts, jackets, armour - onto taller, shorter, fatter, thinner or differently shaped limbs.
However, in the rigging department this would mean a sudden backlog of new NPC meshes to skin. They may be simple but skinning still takes time, as there are lots of files to process and... repetition = time to write a tool :-P

So rather than bothering the character modellers and rigging department with any part of the process, I decided the tool should take a male NPC file and transfer everything to a female counterpart, including any skinning, LODs and rig behaviours. Obviously the bulk of the work would still be the main retarget, but now this would include a number of other steps: deriving skinning, adding a skeleton with its automation intact, checking for and re-creating LODGroups, and maintaining the scene structure.
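For the skinning step, Maya's own copySkinWeights command can do most of the heavy lifting. This is only a hedged sketch of that one step with placeholder skinCluster names, not the tool's actual code:

import maya.cmds as mc

# bind the retargeted female mesh to the new skeleton first, then
# pull the weights across from the original male mesh
mc.copySkinWeights(sourceSkin='maleBody_skinCluster',
                   destinationSkin='femaleBody_skinCluster',
                   surfaceAssociation='closestPoint',
                   influenceAssociation='oneToOne')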
The prerequisites for this tool would be a male-to-female source file containing the two topologically identical counterparts which the retarget requires for its calculations, plus a skeleton for the female built using the previous Bind Retargeter tool. By making the skeleton a separate step I was able to add in any functionality the skeleton file requires beyond the retarget, such as the automation, and by doing this to the one file I avoided having to go back into each and every retargeted female file to add in the skeleton automation.
In addition to the above I would also require a set of source assets for retargeting, such as clothing or accessories, and a basic UI to allow me to batch the task over any number of files whilst I go off for a croissant and a cup of coffee.

So to sum up, the prerequisite steps consisted of:

Creating a conversion file - the male to female topologically identical examples

Creating a female base skeleton - rather than consuming time by wrapping this into the process, which would repeat multiple times, I use the older Bind Retargeter to generate the skeleton and add in the basic automation. This is only done once.

Creating sets of clothing intended for the original male, skinned and sitting on a skeleton with the same joint count.
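Before moving on, it is worth noting that the core of the retarget itself is plain RBF interpolation: learn a delta field from the male-to-female vertex pairs, then evaluate it at every clothing vertex. As a rough numpy illustration only - assumed array names, a simple linear kernel and none of the production error handling or conditioning concerns:

import numpy as np

def rbf_solve(source_pts, values, kernel=lambda r: r):
    # pairwise distances between all source points
    r = np.linalg.norm(source_pts[:, None] - source_pts[None, :], axis=-1)
    # solve for weights that reproduce the values exactly at each source point
    return np.linalg.solve(kernel(r), values)

def rbf_eval(weights, source_pts, query_pts, kernel=lambda r: r):
    r = np.linalg.norm(query_pts[:, None] - source_pts[None, :], axis=-1)
    return kernel(r) @ weights

# male_verts / female_verts: matching (N, 3) arrays from the conversion file
weights = rbf_solve(male_verts, female_verts - male_verts)

# clothing_verts: (M, 3) array; each vertex is pushed by its interpolated delta
retargeted = clothing_verts + rbf_eval(weights, male_verts, clothing_verts)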

Once completed and run, the tool succeeded in rebuilding and exporting dozens of NPC parts into our game engine without anyone having to work on them. Check out the video below for a quick example of the tool doing its 'thang'.

Retargeting Character Geometry using RBF from SBGrover on Vimeo.

By the way, if you wish to read up a little more on RBF, which is the core element of this retarget process, then take a look at this site. It belongs to an ex-colleague of mine and I think he wraps up some of the basics very nicely; I still check it when I forget how the process works. Additionally, this site is a great place for digging into the workings a little more deeply. It has some code examples which you could use to practice with.

Monday, 6 August 2018

PyQt Character Picker

I finally got my lazy ass into gear and posted up my example of a character picker. This is used by our animation department who happily don’t seem to have much in the way of complaints about it.
In this post I will run quickly through the main tool. In a further post I will describe the building process in a little more detail.

The tool is built using PyQt. The main display runs on a QGraphicsView which takes a list of shapes stored in a json file and rebuilds them as QGraphicsPolygonItems before display. These shapes are simply a stored set of points with extra details such as colour, functionality and applied commands. They are all derived from the building process within Maya, where a set of curve objects is created to define the appearance of the picker view. It is during this creation process that the behaviours of the picker items are also defined, before being written out to the json file, which then becomes available for that character or object whenever it is loaded into a scene.
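To give a flavour of the rebuild step, here is a minimal sketch of turning one stored shape into a QGraphicsPolygonItem. The json layout, the picker_path variable and the scene are assumptions made for illustration; I use PySide2 imports, but the PyQt equivalents differ only in the import lines:

import json
from PySide2.QtCore import QPointF
from PySide2.QtGui import QPolygonF, QColor, QBrush
from PySide2.QtWidgets import QGraphicsPolygonItem

def build_item(shape):
    # shape is e.g. {"points": [[0, 0], [40, 0], [40, 20]], "colour": [255, 128, 0], "command": "..."}
    poly = QPolygonF([QPointF(x, y) for x, y in shape["points"]])
    item = QGraphicsPolygonItem(poly)
    item.setBrush(QBrush(QColor(*shape["colour"])))
    # stash the stored command on the item so a click can run it later
    item.setData(0, shape.get("command", ""))
    return item

with open(picker_path) as f:
    for shape in json.load(f)["shapes"]:
        scene.addItem(build_item(shape))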
When an animator runs the viewer it looks in the scene for an attribute that holds the path to the json file. There is usually one per character but there can be any number of these for any use case. We use them for complex characters but they could also be utilised for anything else, such as a set of basic props or more complex mechanical objects. The tool lists the characters in a drop down and switches its visual context based on which one is currently active. The use of the QGraphicsView class gives the ability to zoom, pan and drag-select items. The QGraphicsPolygonItem class also allows you to control the way picker objects are highlighted, and their standard appearance down to features such as the size of outlines.
By right clicking on the picker view the animator gains access to a set of hidden commands that appear in a menu, as sketched below. Again, these can be anything at all, but they are obviously best suited to commands where a clickable button does not make sense, or simply to removing clutter from the picker.
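A hedged sketch of how such a menu might hang off the view; the action names and slots here are made up for illustration:

from PySide2.QtWidgets import QGraphicsView, QMenu

class PickerView(QGraphicsView):

    def contextMenuEvent(self, event):
        # build the hidden command menu on demand
        menu = QMenu(self)
        menu.addAction('Reset selected controls', self.resetSelected)  # hypothetical slot
        menu.addAction('Key all controls', self.keyAllControls)        # hypothetical slot
        menu.exec_(event.globalPos())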
Additional features include the use of a QDockWidget for the dockable menus at the bottom of the interface. In our case these contain common animation tools, so that animators do not need to move far from the picker to find the tools they need. Of course, some do not like them visible, so in addition to being positionable anywhere on the screen they can also be hidden away if needed.
All of this is pulled together in a standard QMainWindow, which allows animators to resize the window for certain purposes. When closed, the picker remembers the way it has been laid out and adopts this layout the next time it is opened.
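The layout persistence needs very little code; a minimal sketch using QSettings, where the 'studio'/'characterPicker' keys are placeholders:

from PySide2.QtCore import QSettings
from PySide2.QtWidgets import QMainWindow

class PickerWindow(QMainWindow):

    def closeEvent(self, event):
        # remember window size/position and dock widget layout
        settings = QSettings('studio', 'characterPicker')
        settings.setValue('geometry', self.saveGeometry())
        settings.setValue('state', self.saveState())
        super(PickerWindow, self).closeEvent(event)

    def restoreLayout(self):
        settings = QSettings('studio', 'characterPicker')
        if settings.contains('geometry'):
            self.restoreGeometry(settings.value('geometry'))
            self.restoreState(settings.value('state'))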

I have to give a shout out to Cesar Saez, who was my inspiration in the way I approached this picker. Check out his example here. You can navigate his document by using the cursor keys.

Here is an example video of the final picker.

PyQT Character Picker for Maya from SBGrover on Vimeo.

Friday, 8 December 2017

RBF Based Bind Conversion

Continuing the foray into RBF has produced a python function that is able to convert the joint positions in one bind file to another, based simply on the vertex data from the two meshes. This concept has been demonstrated in the past by Hans Godard (3m 30s in), who shows the conversion of mesh data such as clothing from one mesh to another in his video.
Building on this with a bind conversion seems like a logical progression. It should be possible to make quick iterations on multitudes of character types by taking a base mesh with clothing bound to a skeleton, converting the clothing to a differing mesh, and then binding it to a skeleton that has been regenerated using the same process. As long as the source skin weights are decent there is no reason the target mesh should not achieve the same level of deformation. A sketch of the joint conversion follows.
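Since the joints are just more points in space, the same interpolation used for the vertices applies to them directly. A hedged sketch using scipy - the array names are assumptions, and scipy.interpolate.RBFInterpolator requires scipy 1.7+:

from scipy.interpolate import RBFInterpolator

# male_verts / female_verts: matching (N, 3) vertex arrays from the two meshes
# joint_positions: (J, 3) world positions of the joints in the male bind file
deltas = RBFInterpolator(male_verts, female_verts - male_verts)

# each joint is pushed by the delta field learned from the two meshes
retargeted_joints = joint_positions + deltas(joint_positions)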

We have tried combining the two processes at our studio with a high level of success. This video highlights the bind section of the process.

RBF_based_bind_conversion from SBGrover on Vimeo.

Wednesday, 6 December 2017

Animation Ghosting Locator

I have played around with locators in Maya before and have produced a couple that have actually been of some use in the daily grind. During the transition to Maya 2016 from previous versions I had to convert these to run with the new drawOverride method required to display them in Viewport 2.0. I found it a headache, as the new way of assembling the data seemed rather opaque, especially compared to the simpler Legacy Viewport days.
More recently I decided to return to this. Now that I have a better knowledge of the API and am more confident using C++, I wanted to understand the process more completely.
The new classes available to me now include MUIDrawManager, which encapsulates the tools needed to create the visual aspects of the locators.

I opted to create a locator that would provide the animation team with some kind of ghosting of the meshes in the scene, allowing them to compare the current frame against others without changing the frame they are on. For this I identified a number of things I would need.

1. A way of returning the mesh at a different frame
2. A way of rendering that returned data in the current view
3. Options to change the visual aspects as required
4. Options to change the frame read
5. An option to have the result track the current frame with the existing offset

I was already aware of a class that would give me the ability to read items at a frame other than the current one: MDGContext. I had never used it before but doing so did not prove difficult. The sample code below shows the section that deals with the read-ahead for the ghosting data.

void GhostingNodeDrawOverride::getMeshPoints(const MDagPath& objPath)
{
 MStatus status;
 MObject GhostObj = objPath.node(&status);

 if (status)
 {
  // get plug to mesh
  MPlug plug(GhostObj, GhostingNode::mesh);
  
  // if data exists do something
  if (!plug.isNull())
  {
   // get user input data (desired frame/tracking switch)
   double frame = getFrame(objPath);
   int isTracking = getTracking(objPath);
 
   // is tracking on for the first time?
   if (isTracking == 1 && switched == 0)
   {
    // get current frame
    MTime currentTime = MAnimControl::currentTime();
    
    // returns the current time as NTSC
    int cur_time = (int)currentTime.as(MTime::kNTSCFrame);
    
    // calculate the difference between desired frame and current frame
    difference = (int)cur_time - (int)frame;
    
    switched = 1;
   }

   // or is tracking already on? 
   else if(isTracking == 1) 
   {
    // get current frame
    MTime currentTime = MAnimControl::currentTime();
    
    // return the current time as NTSC
    int cur_time = (int)currentTime.as(MTime::kNTSCFrame);
    
    // calculate offset based on previously calculated difference
    frame = cur_time - difference;
    
    // convert the calculated frame to NTSC
    MTime desiredFrame(frame, MTime::kNTSCFrame);
    
    // set up an MDGContext to the desired frame
    MDGContext prevCtx(desiredFrame);
    
    // get MObject from plug at given context
    MObject geoPrev = plug.asMObject(prevCtx);
    
     // create a polygon iterator from the mesh as it was at the offset frame
     MItMeshPolygon meshIt(geoPrev);

     // ... meshIt then feeds the point, edge and colour lists
     // consumed by addUIDrawables (see the next snippet)
    }

    // else tracking is off, so clear the stored state
    else
    {
     switched = 0;
     difference = 0;
    }
   }
  }
}

Something to bear in mind is that making use of a context-based read can be expensive. Maya has to re-evaluate the frame under the hood to pass back the correct data, which can slow down performance somewhat.
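The same read-at-another-time trick is available from Python too. A quick, hedged example assuming a mesh shape called pSphereShape1 exists in the scene:

import maya.OpenMaya as om

# grab the outMesh plug of an existing shape
sel = om.MSelectionList()
sel.add('pSphereShape1.outMesh')
plug = om.MPlug()
sel.getPlug(0, plug)

# evaluate the plug as it was at frame 12 without moving the timeline
ctx = om.MDGContext(om.MTime(12.0, om.MTime.kNTSCFrame))
meshAtFrame12 = plug.asMObject(ctx)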

Items 3, 4 and 5 are easily dealt with, as they are simply extra attributes on the node that give a user options as to how to interact with it.
Item 2 involved the actual rendering of the locator, which meant finding out how a number of the MPxDrawOverride functions operate with each other. Amongst them a key function is addUIDrawables, in which the locator is 'built'. Most of your code will sit in this function, as it is where you assemble points, edges, colours and so on, and package them ready to be rendered. The bulk of the locator code, at least in my case, sits under this override.

For my locator I wanted to allow a user to draw a whole mesh, a wireframe and a set of points, either one at a time or all together. I also wanted to allow control over colour and transparency, which would be very important for distinguishing the locator in a busy scene. Below is the section of code where this is set up. I have previously created a point list, edge list and colour list from the iterator in the code snippet above, and am passing them to the MUIDrawManager functions to get the expected result.

if (trianglePointList.length() > 0)
{

 // get the color of the points
 MColor pointColor = colorWeights[0];

 // reset and start drawing
 drawManager.beginDrawable();

 // user-defined display settings control which elements are drawn
 
 // render edges
 if (displaySettings[2] == 1)
 {
  drawManager.setPaintStyle(MHWRender::MUIDrawManager::kFlat);
  drawManager.mesh(MHWRender::MUIDrawManager::kLines, edgePointList, NULL, NULL, NULL, NULL);
 }

 // render faces
 if (displaySettings[1] == 1)
 {
  drawManager.setPaintStyle(MHWRender::MUIDrawManager::kShaded);
  drawManager.mesh(MHWRender::MUIDrawManager::kTriangles, trianglePointList, NULL, &colorWeights, NULL, NULL);
 }

 // render points
 if (displaySettings[0] == 1)
 {
  drawManager.setPaintStyle(MHWRender::MUIDrawManager::kFlat);
  drawManager.setPointSize(3);
  drawManager.setColor(pointColor);
  drawManager.points(trianglePointList, false); //second parameter is draw in 2d
 }

 // finish drawing for now
 drawManager.endDrawable();
} 

As you can see from the above, I am using a mixture of kLines, kTriangles and plain points. The first two are values from the MUIDrawManager::Primitive enum and are supplied to the mesh function.

Even though I was initially a little put off by the way Maya now needs the code structured for Viewport 2.0 locator drawing, it did not take long to get a grasp of how the class should be utilised, and although I am sure I still have much to learn, I feel a good start has been made.

Maya Animation Ghosting Locator from SBGrover on Vimeo.

RBF Based Colour Reader

Years back I produced a node-based colour reader that utilised the closest point utility node to read back texel values at a UV position. By supplying a ramp texture you could have the system spit out RGB values which, being 0 to 1 based, lend themselves perfectly to driving other systems. A drawback of this method was that the texture provided had to be procedurally generated; textures created by hand in Photoshop would not work, which made it more difficult to customise the colours for a specific output. Another drawback was that it was limited to NURBS surfaces.


However it did work well, especially in areas where setting an extreme position was not always simple. For example, we have a couple of other pose space solutions available at our studio. One is the standard cone reader, which takes the angle between two vectors and returns a value based on how far that angle sits between the centre and the outer edge of a given radius.
The second takes the vector magnitude between a radial centre point and a target point and again tests whether that magnitude sits inside or outside a given radius. These two approaches suffer from the same shortcoming. When the target point or angle passes into the given radius and reaches 0, the output is at maximum. Continue on, though, and the target passes the centre and travels back outside the radius. This leads to problems. For instance, driving a blendshape with the output of either of these setups will result in the blendshape target climbing to full application before decreasing again. In areas such as a character's shoulders this can lead to unpredictable deformation unless many of these readers are utilised to counteract the problem.
Hitting and passing these extremes with the colour space reader instead results in the extreme value always reading maximum, meaning that when a limb is pushed a bit too far the driven shapes do not start collapsing inwards again. The sketch below illustrates the cone reader's problem.
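To make the shortcoming concrete, here is a hedged sketch of a linear cone-reader falloff; sweeping a limb through the cone shows the weight peaking at the centre and then decreasing again, dragging any driven blendshape back down with it:

def cone_weight(angle_deg, radius_deg):
    # 1.0 when the target vector lines up with the cone centre,
    # falling off linearly to 0.0 at the outer edge of the radius
    return max(0.0, 1.0 - abs(angle_deg) / radius_deg)

# the weight climbs to 1.0 and collapses again as the target passes through
for angle in (60, 30, 0, -30, -60):
    print(angle, cone_weight(angle, 45.0))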

My colleagues and I have recently been investigating the application of RBF-based solving to large amounts of data. I decided it was time to rewrite the colour space reader, this time utilising the Maya API, C++ and the new RBF learnings.
So by using each mesh vertex on the reader as a 'node point' for the RBF, and then throwing in the colours at each of these vertices as values, it was possible to extrapolate a weighted result that could be output as a single RGBA value. The beauty of this method is that rather than sampling texels, which are more difficult to feed into the method, the user can simply create any mesh, apply vertex colours to any of its vertices and get a result. Want to change the result? Change the colours or reshape the mesh. It's nice and simple.
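For the curious, the RBF variant boils down to the same solve as the retargeting posts above, just with colours as the interpolated values. A rough numpy illustration with assumed names, not the node's actual C++, and ignoring conditioning concerns:

import numpy as np

# verts: (N, 3) reader mesh vertices, vert_colors: (N, 3) RGB at each vertex
r = np.linalg.norm(verts[:, None] - verts[None, :], axis=-1)
weights = np.linalg.solve(r, vert_colors)  # linear kernel, phi(r) = r

def sample_color(p):
    # weighted colour at any point p on or near the mesh
    return np.linalg.norm(p - verts, axis=-1) @ weights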

The solution still uses closest point, but through MFnMesh this time.
I have included a python version of this node with the post to get you started if you fancy a stab at it. This version does not use RBF, instead weighting the colours by inverse distance.

RBF Based Colour Reader from SBGrover on Vimeo.

It's worth noting that whilst creating this node I found an issue with Maya and the worldMesh output. If worldMesh is used then the colour output does not update when colours on the mesh change. This does not appear to happen with the python version, but it is worth keeping an eye on. If you hit this, you will need to adjust the node to use outMesh instead, which will involve multiplying the inMesh points by the mesh's own worldMatrix to convert them to world space. You will also need to multiply the centre object's MPoint by the inverse of this worldMatrix and use this new MPoint in the closestPoint calculation.
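A hedged sketch of that workaround, assuming the node is handed the mesh's worldMatrix as an extra input:

import maya.OpenMaya as om

def closest_point_world(meshFn, meshWorldMatrix, centreWorld):
    # bring the world-space centre into the mesh's local space
    centreLocal = centreWorld * meshWorldMatrix.inverse()
    closest = om.MPoint()
    meshFn.getClosestPoint(centreLocal, closest, om.MSpace.kObject)
    # push the local-space result back out to world space
    return closest * meshWorldMatrix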

"""

import maya.cmds as mc
mc.delete(mc.ls(type='ColorSpaceReaderPy'))
mc.flushUndo()
mc.unloadPlugin('ColorSpaceReaderPy')
mc.loadPlugin('ColorSpaceReaderPy.py')
mc.createNode('ColorSpaceReaderPy')


mc.connectAttr('pPlaneShape1.worldMesh', 'ColorSpaceReaderPy1.inMesh', f=True)
mc.connectAttr('locator1.worldMatrix', 'ColorSpaceReaderPy1.centre', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outClosest', 'pSphere1.translate', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outColor', 'lambert2.color', f=True)

"""

import maya.OpenMaya as om
import maya.OpenMayaMPx as omMPx

kPluginNodeTypeName = "ColorSpaceReaderPy"
kPluginNodeClassify = 'utility/general'
kPluginNodeId = om.MTypeId(0x81012)

class ColorSpaceReader(omMPx.MPxNode):

 inMesh = om.MObject()
 inCentre = om.MObject()
 outClosest = om.MObject()
 outColor = om.MObject()

 def __init__(self):

  omMPx.MPxNode.__init__(self)

 def compute(self, plug, data):

  inMeshData = data.inputValue(ColorSpaceReader.inMesh).asMesh()
  inCentreMatrix = data.inputValue(ColorSpaceReader.inCentre).asMatrix()
  outColorHandle = data.outputValue(ColorSpaceReader.outColor)
  outClosestHandle = data.outputValue(ColorSpaceReader.outClosest)

  if not inMeshData.isNull():
   meshFn = om.MFnMesh(inMeshData)
   sourceVerts = om.MPointArray()
   colors = om.MColorArray()
   meshFn.getPoints(sourceVerts, om.MSpace.kWorld)
   meshFn.getVertexColors(colors)
   centrePos = om.MPoint(inCentreMatrix(3, 0), inCentreMatrix(3, 1), inCentreMatrix(3, 2))
   closestPoint = om.MPoint()
   mu = om.MScriptUtil()
   mu.createFromInt(0)
   polygon = mu.asIntPtr()
   meshFn.getClosestPoint(centrePos, closestPoint, om.MSpace.kWorld, polygon)
   closestPolygon = mu.getInt(polygon)

   faceVertArray = om.MIntArray()
   meshFn.getPolygonVertices(closestPolygon, faceVertArray)

   colorArray = om.MColorArray()
   magArray = om.MFloatArray()

   # gather distances to, and colours of, the closest face's vertices
   for i in range(faceVertArray.length()):
    mag = closestPoint.distanceTo(sourceVerts[faceVertArray[i]])
    magArray.append(mag)
    colorArray.append(colors[faceVertArray[i]])

   weights = om.MFloatArray(magArray.length(), 0.0)
   foundOne = 0
   weightsTotal = 0.0

   # if the closest point lands exactly on a vertex, use that colour alone
   for i in range(magArray.length()):
    if magArray[i] == 0.0:
     weights.set(1.0, i)
     foundOne = 1
     weightsTotal = 1.0
     break

   # otherwise weight each vertex colour by inverse distance
   if foundOne == 0:

    for i in range(magArray.length()):
     weights.set(1.0 / magArray[i], i)
     weightsTotal += weights[i]

   # normalise the weights so they sum to 1.0
   unit = 1.0 / weightsTotal
   weightedColor = [0, 0, 0]

   for i in range(magArray.length()):
    w = unit * weights[i]
    weightedColor[0] += colorArray[i][0] * w
    weightedColor[1] += colorArray[i][1] * w
    weightedColor[2] += colorArray[i][2] * w

   weightedColorVec = om.MFloatVector(weightedColor[0], weightedColor[1], weightedColor[2])
   outClosestHandle.setMFloatVector(om.MFloatVector(closestPoint))
   outColorHandle.setMFloatVector(weightedColorVec)

  # mark the plug clean so Maya knows the outputs have been recomputed
  data.setClean(plug)


def nodeCreator():

 return omMPx.asMPxPtr(ColorSpaceReader())
 
def nodeInitializer():
 nAttr = om.MFnNumericAttribute()
 mAttr = om.MFnMatrixAttribute()
 tAttr = om.MFnTypedAttribute()

 ColorSpaceReader.inMesh = tAttr.create("inMesh", "im", om.MFnData.kMesh)
 tAttr.setReadable(0)
 tAttr.setKeyable(1)

 # each attribute must be configured through the function set that created it
 ColorSpaceReader.inCentre = mAttr.create("centre", "c")
 mAttr.setReadable(0)
 mAttr.setKeyable(1)

 ColorSpaceReader.outClosest = nAttr.createPoint("closestPoint", "cp")
 nAttr.setReadable(1)
 nAttr.setWritable(0)

 ColorSpaceReader.outColor = nAttr.createPoint("outColor", "col")
 nAttr.setReadable(1)
 nAttr.setWritable(0)

 ColorSpaceReader.addAttribute(ColorSpaceReader.inMesh)
 ColorSpaceReader.addAttribute(ColorSpaceReader.inCentre)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outClosest)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outColor)

 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outClosest)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outClosest)

def initializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.registerNode(kPluginNodeTypeName,kPluginNodeId,nodeCreator,nodeInitializer,omMPx.MPxNode.kDependNode,kPluginNodeClassify)
 
def uninitializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.deregisterNode(kPluginNodeId)

Friday, 3 November 2017

Blood and Truth (a.k.a What I have been working on)

It's always nice when titles that you work on are finally shown to the public for the first time, especially if the title is relatively well received, which came as a nice surprise.
Blood and Truth is the successor to The London Heist, the bite-size experience released for PlayStation VR by London Studio over a year ago. Naturally, being VR, it is a first person shooter that combines on-rails and waypoint-based movement. It's worth noting that the waypoint movement is not teleportation, which makes a nice change, and that in many areas the routes diverge, so the player has options as to how they wish to move around the level.
The gun-play can be a bit frantic at times, but this is at least mixed with the possibility of stealthy movement so that enemies simply do not spot you.
Anyway, take a look and if you feel like it, leave a comment. I would be interested in hearing some thoughts.

Thursday, 28 September 2017

Blendshape Conversion Tool

Just a quick update to the UV Based Blendshape Conversion post I added a while back.
I have now embedded the logic used in the previous post into a useful tool that can, amongst one or two other things, project blendshape target data through multiple LODs. Watch the video to see an example.

Blendshape Conversion Tool from SBGrover on Vimeo.