PyQt: Maya Character Picker

Example of a fully featured character picker using the magic of PySide

Texture Based Deformer

Deform a mesh based on the colour values derived from a procedural texture

Visibility Node v2.0

A tool to help visualise hidden mesh objects by utilising componentModifiers

UV Based Blendshape Conversion

Convert blendshape targets on meshes with differing topologies

Python and PyQt Image Compare Tool

Investigation into writing a standalone application that can be compiled and run within Windows

Friday, 8 December 2017

RBF Based Bind Conversion

Continuing the foray into RBF has produced a Python function that can convert the joint positions in one bind file to another, based purely on the vertex data of the two meshes. This concept has been demonstrated in the past by Hans Goddard (3m 30s in), who shows the conversion of mesh data such as clothing from one mesh to another in his video.
Building on this with a bind conversion seems like a logical progression. It should be possible to iterate quickly on multitudes of character types by taking a base mesh with clothing, bound to a skeleton, converting the clothing to a different mesh, and then binding it to a skeleton regenerated using the same process. As long as the source skin weights are decent there is no reason the target mesh should not achieve the same level of deformation.
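The core idea can be sketched in plain Python. This is a toy illustration, not the production solver: the `rbf_fit`/`rbf_eval` names are hypothetical and it uses a simple linear kernel, but it shows the shape of the technique — fit one weight per 'node point' so known values are reproduced exactly, then evaluate at new positions.

```python
def _dist(p, q):
    # Euclidean distance between two points given as tuples
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def _solve(A, b):
    # naive Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rbf_fit(points, values):
    # one weight per node point; linear kernel phi(r) = r
    A = [[_dist(p, q) for q in points] for p in points]
    return _solve(A, values)

def rbf_eval(points, weights, x):
    # weighted sum of kernel responses at the query position
    return sum(w * _dist(x, p) for w, p in zip(weights, points))
```

In the bind-conversion case the 'values' would be, for example, one component of a joint position, fitted against the source mesh vertices and evaluated against the target mesh vertices.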

We have tried combining the two processes at our studio with a high level of success. This video highlights the bind section of the process.

RBF_based_bind_conversion from SBGrover on Vimeo.

Wednesday, 6 December 2017

Animation Ghosting Locator

I have played around with locators before in Maya and have produced a couple that have actually been of some use in the daily grind. During the transition from previous versions to Maya 2016 I had to convert these to run with the new drawOverride method required to display them in Viewport 2.0. I found it a headache, as the new way of assembling the data seemed rather opaque compared to the simpler Legacy Viewport days.
More recently I decided to return to this. Now that I have a better knowledge of the API and am more confident using C++, I wanted to understand the process more completely.
The new classes available to me include MUIDrawManager, which encapsulates the tools needed to create the visual aspects of locators.

I opted to create a locator that would provide the animation team with some kind of ghosting on the meshes in the scene, allowing them to compare the current frame with others without changing the frame they are on. For this I identified a number of things I would need.

1. A way of returning the mesh at a different frame
2. A way of rendering that returned data in the current view
3. Options to change the visual aspects as required
4. Options to change the frame read
5. An option to have the result track the current frame with the existing offset

I was already aware of a class that would give me the ability to read items at a different frame from the current one: MDGContext. I had never used it before, but doing so did not prove difficult. The sample code below shows the section that deals with the read-ahead for the ghosting data.

void GhostingNodeDrawOverride::getMeshPoints(const MDagPath& objPath)
{
 MStatus status;
 MObject GhostObj = objPath.node(&status);

 if (status)
 {
  // get plug to mesh
  MPlug plug(GhostObj, GhostingNode::mesh);
  // if data exists do something
  if (!plug.isNull())
  {
   // get user input data (desired frame/tracking switch)
   double frame = getFrame(objPath);
   int isTracking = getTracking(objPath);
   // is tracking on for the first time?
   if (isTracking == 1 && switched == 0)
   {
    // get current frame
    MTime currentTime = MAnimControl::currentTime();
    // return the current time as NTSC
    int cur_time = (int)currentTime.asUnits(MTime::kNTSCFrame);
    // calculate the difference between desired frame and current frame
    difference = cur_time - (int)frame;
    switched = 1;
   }
   // or is tracking already on?
   else if (isTracking == 1)
   {
    // get current frame
    MTime currentTime = MAnimControl::currentTime();
    // return the current time as NTSC
    int cur_time = (int)currentTime.asUnits(MTime::kNTSCFrame);
    // calculate offset based on previously calculated difference
    frame = cur_time - difference;
   }
   // else everything is off
   else
   {
    switched = 0;
    difference = 0;
   }
   // convert the calculated frame to NTSC
   MTime desiredFrame(frame, MTime::kNTSCFrame);
   // set up an MDGContext at the desired frame
   MDGContext prevCtx(desiredFrame);
   // get MObject from plug at given context
   MObject geoPrev = plug.asMObject(prevCtx);
   // create MeshPolygon iterator from MObject
   MItMeshPolygon meshIt(geoPrev);
  }
 }
}

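Outside of Maya, the tracking branches above boil down to a small state machine. The `GhostTracker` class below is a hypothetical plain-Python illustration of that logic, not part of the node:

```python
class GhostTracker(object):
    """Mimics the switched/difference state from the draw override."""

    def __init__(self):
        self.switched = 0
        self.difference = 0

    def desired_frame(self, is_tracking, current_frame, user_frame):
        if is_tracking and not self.switched:
            # tracking just turned on: remember the offset to the user frame
            self.difference = current_frame - user_frame
            self.switched = 1
            return user_frame
        elif is_tracking:
            # keep the same offset as the current frame changes
            return current_frame - self.difference
        else:
            # tracking off: reset and just read the user frame
            self.switched = 0
            self.difference = 0
            return user_frame
```

So if tracking is enabled at frame 100 with the ghost set to frame 90, scrubbing to frame 110 moves the ghost to frame 100, preserving the ten-frame offset.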
Something to bear in mind is that making use of a context based read can be expensive. Maya has to re-evaluate a frame under the hood to pass back the correct data which can slow down performance somewhat.

Items 3, 4 and 5 are easily dealt with, as they are simply extra attributes on the node that give the user options for interacting with it.
Item 2 involved the actual rendering of the locator, which meant finding out how a number of the MPxDrawOverride functions operate with each other. Amongst them a key function is addUIDrawables, in which the locator is 'built'. Most of your code will sit in this function, as it is where you assemble points, edges, colours and so on, and package them ready to be rendered. A large bulk of locator code, at least in my case, sits under this override.

For my locator I wanted to allow a user to draw a whole mesh, a wireframe and a set of points, either one at a time or all together. I also wanted to allow control over colour and transparency, which is very important for distinguishing the locator in a busy scene. Below is the section of code where this is set up. I have previously created a point list, edge list and colour list from the iterator in the code snippet above, and am passing them to MUIDrawManager functions to get the expected result.

if (trianglePointList.length() > 0)
{
 // get the color of the points
 MColor pointColor = colorWeights[0];

 // reset and start drawing
 drawManager.beginDrawable();

 // user defined display settings
 // render edges
 if (displaySettings[2] == 1)
  drawManager.mesh(MHWRender::MUIDrawManager::kLines, edgePointList, NULL, NULL, NULL, NULL);

 // render faces
 if (displaySettings[1] == 1)
  drawManager.mesh(MHWRender::MUIDrawManager::kTriangles, trianglePointList, NULL, &colorWeights, NULL, NULL);

 // render points
 if (displaySettings[0] == 1)
  drawManager.points(trianglePointList, false); // second parameter is draw in 2d

 // finish drawing for now
 drawManager.endDrawable();
}

As you can see from the above, I am using a mixture of kLines, kTriangles and plain points. The first two I supply to the mesh function; they come from MUIDrawManager::Primitive.

Even though I was initially a little put off by the way Maya now needs the code structured for Viewport 2.0 drawing of locators, it did not take long to grasp how the class should be utilised, and although I am sure I still have much to learn I feel a good start has been made.

Maya Animation Ghosting Locator from SBGrover on Vimeo.

RBF Based Colour Reader

Years back I produced a node based colour reader that utilised the closest point utility node to read back texel values at a UV position. By supplying a ramp texture you could have the system spit out RGB values which, being 0 to 1 based, lend themselves perfectly to driving other systems. One drawback of this method was that the texture provided had to be procedurally generated; textures created by hand in Photoshop would not work, which made it more difficult to customise the colours for a specific output. Another was that it was limited to NURBS surfaces.

However it did work well, especially in areas where setting an extreme position was not always simple. For example, we have a couple of other pose space solutions available at our studio. One is the standard cone reader, which takes the angle between two vectors and returns a value based on how far that angle sits between the centre and outer edge of a given radius.
The second takes the vector magnitude between a radial centre point and a target point and again tests whether that magnitude sits inside or outside a given radius. These two approaches suffer from the same shortcoming. When the target point or angle passes into the given radius and hits 0 the output is at maximum; continue on, though, and the target passes the centre and travels back outside the radius. This leads to problems. For instance, driving a blendshape with the output of either of these setups will result in the blendshape target climbing to full application before decreasing again. In areas such as a character's shoulders this can lead to unpredictable deformation unless many of these readers are utilised to counteract the problem.
Hitting and passing these extremes with the colour space reader instead results in the extreme value always reading maximum. This means that when a limb is pushed a bit too far the driven shapes do not start collapsing inwards again.
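The pass-through problem can be illustrated with a toy falloff function (hypothetical, not the studio's reader): sweep a target straight through the centre of a radial reader and the output climbs to maximum, then falls away again on the other side.

```python
def radial_reader(distance_to_centre, radius):
    """1.0 at the centre, falling linearly to 0.0 at or beyond the radius."""
    d = abs(distance_to_centre)
    if d >= radius:
        return 0.0
    return 1.0 - d / radius

# sweep a target straight through the centre of a radius-2 reader
sweep = [radial_reader(d, 2.0) for d in (-3, -1, 0, 1, 3)]
# the output rises to full application and then decreases again,
# which is exactly the behaviour that causes trouble when driving blendshapes
```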

My colleagues and I have recently been investigating the application of RBF based solving to large amounts of data. I decided it was time to rewrite the colour space reader, this time utilising the Maya API, C++ and the new RBF learnings.
By using each mesh vertex on the reader as a 'node point' for the RBF, and then throwing in the colours at each of these vertices as values, it was possible to extrapolate a weighted result that could be output as a single RGBA value. The beauty of this method is that rather than sampling texels, which are more difficult to feed into the method, the user can simply create any mesh, apply vertex colours to any of its vertices and get a result. Want to change the result? Change the colours or reshape the mesh. It's nice and simple.

The solution still uses a closest point lookup, but through MFnMesh this time.
I have included a Python version of this node with the post to get you started if you fancy a stab at it. This version does not use RBF; instead it weights the colours based on distance.
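That distance weighting can be sketched on its own in plain Python (the `weight_colors` function below is a hypothetical stand-in for the node's compute): an exact hit takes the whole weight, otherwise each vertex contributes in proportion to 1/distance, normalised so the weights sum to one.

```python
def weight_colors(distances, colors):
    """Blend RGB colours by inverse distance.

    distances -- distance from the sample point to each vertex
    colors    -- matching (r, g, b) tuple per vertex
    """
    for d, c in zip(distances, colors):
        if d == 0.0:
            return list(c)  # sitting exactly on a vertex: take its colour
    weights = [1.0 / d for d in distances]
    total = sum(weights)
    return [
        sum(c[ch] * w / total for c, w in zip(colors, weights))
        for ch in range(3)
    ]
```

A point equidistant from a red and a green vertex comes back as an even mix of the two.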

RBF Based Colour Reader from SBGrover on Vimeo.

It’s worth noting that whilst creating this node I found an issue with Maya and the worldMesh output. If worldMesh is used then the colour output does not update when colours on the mesh change. This does not appear to happen with the Python version, but is worth keeping an eye on. If you see this behaviour you will need to adjust the node to use outMesh instead, which will involve multiplying the inMesh points by the mesh's worldMatrix to convert them to world space. You will also need to multiply the centre object's MPoint by the inverse of this worldMatrix and use the resulting MPoint in the closestPoint calculation.


import maya.cmds as mc

mc.connectAttr('pPlaneShape1.worldMesh', 'ColorSpaceReaderPy1.inMesh', f=True)
mc.connectAttr('locator1.worldMatrix', 'ColorSpaceReaderPy1.centre', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outClosest', 'pSphere1.translate', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outColor', 'lambert2.color', f=True)


import maya.OpenMaya as om
import maya.OpenMayaMPx as omMPx

kPluginNodeTypeName = "ColorSpaceReaderPy"
kPluginNodeClassify = 'utility/general'
kPluginNodeId = om.MTypeId(0x81012)

class ColorSpaceReader(omMPx.MPxNode):

 inMesh = om.MObject()
 inCentre = om.MObject()
 outClosest = om.MObject()
 outColor = om.MObject()
 output = om.MObject()

 def __init__(self):
  omMPx.MPxNode.__init__(self)

 def compute(self, plug, data):

  inMeshData = data.inputValue(ColorSpaceReader.inMesh).asMesh()
  inCentreMatrix = data.inputValue(ColorSpaceReader.inCentre).asMatrix()
  outColorHandle = data.outputValue(ColorSpaceReader.outColor)
  outClosestHandle = data.outputValue(ColorSpaceReader.outClosest)

  if not inMeshData.isNull():
   meshFn = om.MFnMesh(inMeshData)
   sourceVerts = om.MPointArray()
   colors = om.MColorArray()
   meshFn.getPoints(sourceVerts, om.MSpace.kWorld)
   # vertex colours, indexed by vertex id
   meshFn.getVertexColors(colors)
   centrePos = om.MPoint(inCentreMatrix(3, 0), inCentreMatrix(3, 1), inCentreMatrix(3, 2))
   closestPoint = om.MPoint()
   mu = om.MScriptUtil()
   polygon = mu.asIntPtr()
   meshFn.getClosestPoint(centrePos, closestPoint, om.MSpace.kWorld, polygon)
   closestPolygon = mu.getInt(polygon)

   faceVertArray = om.MIntArray()
   meshFn.getPolygonVertices(closestPolygon, faceVertArray)

   colorArray = om.MColorArray()
   magArray = om.MFloatArray()

   # store the distance to, and colour of, each vertex on the closest polygon
   for i in range(faceVertArray.length()):
    mag = closestPoint.distanceTo(sourceVerts[faceVertArray[i]])
    magArray.append(mag)
    colorArray.append(colors[faceVertArray[i]])

   weights = om.MFloatArray(magArray.length(), 0.0)
   foundOne = 0
   weightsTotal = 0.0

   for i in range(magArray.length()):

    if magArray[i] == 0.0:
     weights.set(1.0, i)
     foundOne = 1
     weightsTotal = 1

   if foundOne == 0:

    for i in range(magArray.length()):
     weights.set(1.0 / magArray[i], i)
     weightsTotal += weights[i]

   unit = 1.0 / weightsTotal
   weightedColor = [0, 0, 0]

   for i in range(magArray.length()):
    w = unit * weights[i]
    weightedColor[0] += colorArray[i][0] * w
    weightedColor[1] += colorArray[i][1] * w
    weightedColor[2] += colorArray[i][2] * w

   weightedColorVec = om.MFloatVector(weightedColor[0], weightedColor[1], weightedColor[2])

   # push the results to the output plugs
   outColorHandle.setMFloatVector(weightedColorVec)
   outClosestHandle.setMFloatVector(om.MFloatVector(closestPoint.x, closestPoint.y, closestPoint.z))
   data.setClean(plug)

def nodeCreator():

 return omMPx.asMPxPtr(ColorSpaceReader())
def nodeInitializer():
 nAttr = om.MFnNumericAttribute()
 mAttr = om.MFnMatrixAttribute()
 tAttr = om.MFnTypedAttribute()

 ColorSpaceReader.inMesh = tAttr.create("inMesh", "im", om.MFnData.kMesh)
 ColorSpaceReader.addAttribute(ColorSpaceReader.inMesh)

 ColorSpaceReader.inCentre = mAttr.create("centre", "c")
 ColorSpaceReader.addAttribute(ColorSpaceReader.inCentre)

 ColorSpaceReader.outClosest = nAttr.createPoint("closestPoint", "cp")
 nAttr.setWritable(False)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outClosest)

 ColorSpaceReader.outColor = nAttr.createPoint("outColor", "col")
 nAttr.setWritable(False)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outColor)


 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outClosest)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outClosest)

def initializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.registerNode(kPluginNodeTypeName, kPluginNodeId, nodeCreator, nodeInitializer, omMPx.MPxNode.kDependNode, kPluginNodeClassify)

def uninitializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.deregisterNode(kPluginNodeId)

Friday, 3 November 2017

Blood and Truth (a.k.a What I have been working on)

It's always nice when titles that you work on are finally shown to the public for the first time, especially if the title is relatively well received, which came as a nice surprise.
Blood and Truth is the successor to The London Heist, the bite-size experience released for PlayStation VR by London Studio over a year ago. Naturally, being VR, it is a first person shooter that combines on-rails and waypoint based movement. It's worth noting that the waypoint movement is not teleportation, which makes a nice change, and that in many areas the routes diverge, so the player has options in how they move around the level.
The gun-play can be a bit frantic at times, but this is at least mixed with the possibility of stealthy movement so that enemies simply do not spot you.
Anyway, take a look and if you feel like it, leave a comment. I would be interested in hearing some thoughts.

Thursday, 28 September 2017

Blendshape Conversion Tool

Just a quick update to the UV Based Blendshape Conversion post I added a while back.
I have now embedded the logic from that post into a useful tool that can, amongst one or two other things, project blendshape target data through multiple LODs. Watch the video to see an example.

Blendshape Conversion Tool from SBGrover on Vimeo.

Wednesday, 23 August 2017

Tip #2: MDoubleArray Bug

I recently discovered that there is a bug in the Maya Python API when setting data in an MDoubleArray.

I was writing a few tools to deal with skinning information, but found that my previously normalised values kept being adjusted to slightly below or slightly above 1. Visually this had little impact, although seams on meshes showed evidence of splitting under extreme deformation. However, I wanted precise results and so was confused by the values I was getting out of the tool.

When passing skinning information between different meshes on the same skeletal hierarchy it can be necessary to re-order the information to match the influence order of one skin cluster to another. To deal with this part of the process, one of my tools would read source skin data into an MDoubleArray, then build a new MDoubleArray the length of the number of influences on the target cluster. Part of this process involved reading weighting values per influence into a Python variable and then dropping them into the right location in the new MDoubleArray. I found that passing the value back into the MDoubleArray was where it was altered. Python does not have a separate double type, but its floats are double precision, and at first I thought this was to blame.

Try out the code below in the script editor to see what I mean. I use Maya 2016, but I am under the impression that this happens in all versions of Maya.

import maya.OpenMaya as om
source = om.MDoubleArray()
source.set(0.1234567890123, 0)
print source

When speaking with someone from Autodesk it turned out that the Python wrapper for the API only passes data using the 'float' method of the MDoubleArray class, so the type conversion causes the error in the value. This must affect hundreds of Maya Python scripts around the world, so I am hoping it might get addressed in the future. In the meantime, this is the way around the problem.

import maya.OpenMaya as om
source = om.MDoubleArray()
source[0] = 0.1234567890123
print source

For some reason this way of setting the value gives a different, more accurate result. Make sure you have set the length of your MDoubleArray first or this will not work.
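The value drift itself can be reproduced outside Maya with the standard struct module: round-tripping the double through a 32-bit float loses precision in the same way the wrapper's 'float' path does.

```python
import struct

value = 0.1234567890123

# pack the Python double into 4 bytes (C float) and read it back
as_float32 = struct.unpack('f', struct.pack('f', value))[0]

# as_float32 is now only accurate to roughly 7 significant digits,
# which is exactly the kind of drift seen in normalised skin weights
```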

Friday, 4 August 2017

Stretch Compress Deformer

Although working in games limits me to joint and blendshape solutions for achieving reasonable levels of deformation on characters, sometimes it's nice to take a departure from this and think a bit further afield.
A typical problem we have in our engine is that, like many others, it does not support joint scaling, either uniform or non-uniform. This can be a bit of a challenge when trying to maintain volume in characters, as something that could be driven by one scaling joint and some simple skinning has to be driven instead by three or four joints that translate away from each other. When time is critical it can be frustrating to set up basic stuff like this, as adjusting the weighting and driving to give the right effect takes time.
As for the driving, a pose space solution is generally relied on (at least where I work) to move these joints in the right manner. Setting this up takes time and can sometimes break when a character twists too far or away from the pose readers.

This is where the Stretch Compress Deformer could be of use.

This plugin is applied directly to a skinned mesh and its result is driven entirely by the measured area of the polygons within the mesh rather than by an external reader. Input target shapes give an example of the shape the mesh must achieve in the areas that compress or stretch. It can also be weighted so that only small areas are considered, which will of course aid performance.
In approaching the plugin I knew that I would need to calculate the area of a polygon. I did not realise that MItMeshPolygon has its own function specifically for this, getArea, so instead I used Heron's formula, although there are a number of ways of finding the result.
By storing the area of all triangles on the deformed mesh initially, and then on each update comparing this original set to a new set, it is possible to obtain a shortlist of triangles whose surface area has decreased (compression) and those whose area has increased (stretching). Converting those faces to vertices then means that the current shape can be adjusted to match the input target shapes, based on a weight that can be controlled by the user.
Having also stored the vertex positions of the bind (the shape of the mesh before deformation), target and stretch shapes, we can obtain the deltas between their corresponding vertices. Taking the dot product of each delta with the corresponding normal vector from the bind gives a scalar. Multiplying the deformed normal vector by this scalar and adding the result to the current point position pushes the deformed vertex inwards or outwards depending on triangle surface area.
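Heron's formula itself is straightforward; a minimal version of the area calculation used in the script below takes the three edge lengths of a triangle:

```python
import math

def triangle_area(a, b, c):
    """Heron's formula: triangle area from its three edge lengths."""
    s = (a + b + c) / 2.0  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))
```

A 3-4-5 right triangle, for example, comes out at an area of 6.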

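The delta projection can also be shown in isolation (the `project_delta` helper below is hypothetical, working on plain tuples rather than MVector): dot the delta with the bind normal to get a scalar, then push the point along the deformed normal by that amount.

```python
def project_delta(delta, bind_normal, deformed_normal, point):
    """Push a point along its deformed normal by the projection of
    delta onto the bind normal (all arguments are (x, y, z) tuples)."""
    scale = sum(d * n for d, n in zip(delta, bind_normal))  # dot product
    return tuple(p + n * scale for p, n in zip(point, deformed_normal))
```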

Stretch Compress Deformer from SBGrover on Vimeo.

I provide Python code below. Note that this is not a version of the plugin I wrote, but a Python script example intended to be run in the script editor. As a result it has certain caveats: the logic is exactly the same, but it can only be run once on a mesh, and if the mesh has been posed using joints it will need to be unbound beforehand. The provided script is meant purely as an aid to learning and not as a complete solution to the problem. I leave it to you to push it further and convert it into a plugin.

To use the script:

1. Create a Base shape and a compressed and stretched version of the Base shape. The topology must match EXACTLY.
2. If you wish to, skin the Base shape to joints.
3. Select the Base, Stretch and Compress in that order.
4. Run the first part of the script.
5. Pose the Base shape either by moving the geometry or moving the joints.
6. Delete the history on the Base shape if it is skinned or has been adjusted using a deformer.
7. Run the second script.


import maya.OpenMaya as om
import math

# Need: one compress and one stretch target, and a skinned mesh in bind pose

# have the three meshes selected in the following order: bind, stretch, compress
sel = om.MSelectionList()
om.MGlobal.getActiveSelectionList(sel)

# bind
dag_path = om.MDagPath()
sel.getDagPath(0, dag_path)
bind_fn = om.MFnMesh(dag_path)

# stretch
sel.getDagPath(1, dag_path)
stretch_fn = om.MFnMesh(dag_path)
stretch_points = om.MPointArray()
stretch_fn.getPoints(stretch_points, om.MSpace.kObject)

# compress
sel.getDagPath(2, dag_path)
compress_fn = om.MFnMesh(dag_path)
compress_points = om.MPointArray()
compress_fn.getPoints(compress_points, om.MSpace.kObject)

# variables
overall_weight = 2 # change this to increase / decrease the overall effect
compress_weight = 5 # change this to increase / decrease the compress effect. 0 means not calculated
stretch_weight = 5 # change this to increase / decrease the stretch effect. 0 means not calculated

# arrays
bind_points = om.MPointArray()
bind_fn.getPoints(bind_points, om.MSpace.kObject)

bind_triangle_count = om.MIntArray()
bind_triangle_indices = om.MIntArray()
bind_fn.getTriangles(bind_triangle_count, bind_triangle_indices)

bind_normal_array = om.MFloatVectorArray()
bind_fn.getVertexNormals(0, bind_normal_array, om.MSpace.kObject)

# get the bind area array from the bind triangles and bind points
bind_area_array_dict = {}
length = bind_triangle_indices.length()
triangle_index = 0

for count in range(0, length, 3):
 triangle = (bind_triangle_indices[count], bind_triangle_indices[count + 1], bind_triangle_indices[count + 2])
 triangleAB = bind_points[triangle[0]] - bind_points[triangle[1]]
 triangleAC = bind_points[triangle[0]] - bind_points[triangle[2]]
 triangleBC = bind_points[triangle[1]] - bind_points[triangle[2]]
 triangleAB_magnitude = triangleAB.length()
 triangleAC_magnitude = triangleAC.length()
 triangleBC_magnitude = triangleBC.length()
 heron = (triangleAB_magnitude + triangleAC_magnitude + triangleBC_magnitude) / 2
 area = math.sqrt(heron * (heron - triangleAB_magnitude) * (heron - triangleAC_magnitude) * (heron - triangleBC_magnitude))
 bind_area_array_dict[triangle_index] = [triangle, area]
 triangle_index += 1


# NOW POSE YOUR MESH AND RUN THIS. If the mesh is bound you will need to unbind it for this part to work. If you decide to build this as a deformer you will not need to address this

sel.getDagPath(0, dag_path)
deformed_fn = om.MFnMesh(dag_path)
# get the point positions for the deformed mesh
deformed_points = om.MPointArray()
deformed_fn.getPoints(deformed_points, om.MSpace.kObject )

# get the deformed area array from the bind triangles and deformed points
deformed_area_array_dict = {}
length = bind_triangle_indices.length()
triangle_index = 0

for count in range(0, length, 3):
 triangle = (bind_triangle_indices[count], bind_triangle_indices[count + 1], bind_triangle_indices[count + 2])
 triangleAB = deformed_points[triangle[0]] - deformed_points[triangle[1]]
 triangleAC = deformed_points[triangle[0]] - deformed_points[triangle[2]]
 triangleBC = deformed_points[triangle[1]] - deformed_points[triangle[2]]
 triangleAB_magnitude = triangleAB.length()
 triangleAC_magnitude = triangleAC.length()
 triangleBC_magnitude = triangleBC.length()
 heron = (triangleAB_magnitude + triangleAC_magnitude + triangleBC_magnitude) / 2
 area = math.sqrt(heron * (heron - triangleAB_magnitude) * (heron - triangleAC_magnitude) * (heron - triangleBC_magnitude))
 deformed_area_array_dict[triangle_index] = [triangle, area]
 triangle_index += 1

#get the vertex normals for the deformed mesh
deformed_normal_array = om.MFloatVectorArray()
deformed_fn.getVertexNormals(0, deformed_normal_array, om.MSpace.kObject)

length = len(deformed_area_array_dict)
done_array = []

for num in range(length):

 # check to see if the triangle area between the bind and current is different. If less its compressing, if more its stretching
 deformation_amount = deformed_area_array_dict[num][1] - bind_area_array_dict[num][1]

 if (deformation_amount < -0.0001 and compress_weight != 0) or (deformation_amount > 0.0001 and stretch_weight != 0):

  compress = False
  stretch = False

  if deformation_amount < -0.0001:
   compress = True

  if deformation_amount > 0.0001:
   stretch = True

  # get list of all indices in current triangle
  idx1 = deformed_area_array_dict[num][0][0]
  idx2 = deformed_area_array_dict[num][0][1]
  idx3 = deformed_area_array_dict[num][0][2]

  # get the current position of each vertex using the indices
  vtx1 = deformed_points[idx1]
  vtx2 = deformed_points[idx2]
  vtx3 = deformed_points[idx3]

  # calculate the delta of the vertices between the bind and the input compress shape
  if compress:
   delta1 = compress_points[idx1] - bind_points[idx1]
   delta2 = compress_points[idx2] - bind_points[idx2]
   delta3 = compress_points[idx3] - bind_points[idx3]

  if stretch:
   delta1 = stretch_points[idx1] - bind_points[idx1]
   delta2 = stretch_points[idx2] - bind_points[idx2]
   delta3 = stretch_points[idx3] - bind_points[idx3]

  # multiply the weights. delta * deformation amount * compress or stretch weight * overall weight
  if compress:
   delta1 *= compress_weight * overall_weight * abs(deformation_amount)
   delta2 *= compress_weight * overall_weight * abs(deformation_amount)
   delta3 *= compress_weight * overall_weight * abs(deformation_amount)

  if stretch:
   delta1 *= stretch_weight * overall_weight * abs(deformation_amount)
   delta2 *= stretch_weight * overall_weight * abs(deformation_amount)
   delta3 *= stretch_weight * overall_weight * abs(deformation_amount)
  # get the current normal direction on the deformed shape - object space - and convert to a MVector from MFloatVector
  deformed_nor1 = om.MVector(deformed_normal_array[idx1])
  deformed_nor2 = om.MVector(deformed_normal_array[idx2])
  deformed_nor3 = om.MVector(deformed_normal_array[idx3])

  # get the corresponding normal direction on the bind shape - object space - and convert to a MVector from MFloatVector
  bind_nor1 = om.MVector(bind_normal_array[idx1])
  bind_nor2 = om.MVector(bind_normal_array[idx2])
  bind_nor3 = om.MVector(bind_normal_array[idx3])

  # get the dot product of the delta and the bind .This will give us a scaler based on how far the delta vector is projected along the bind vector
  delta_dot1 = bind_nor1 * delta1
  delta_dot2 = bind_nor2 * delta2
  delta_dot3 = bind_nor3 * delta3

  # multiply the deformed normal vector by the delta dot to scale it accordingly
  deformed_nor1 *= delta_dot1
  deformed_nor2 *= delta_dot2
  deformed_nor3 *= delta_dot3

  # add this value to the current deformed vertex position
  vtx1 = (vtx1.x + deformed_nor1.x, vtx1.y + deformed_nor1.y, vtx1.z + deformed_nor1.z)
  vtx2 = (vtx2.x + deformed_nor2.x, vtx2.y + deformed_nor2.y, vtx2.z + deformed_nor2.z)
  vtx3 = (vtx3.x + deformed_nor3.x, vtx3.y + deformed_nor3.y, vtx3.z + deformed_nor3.z)
  # put the result back into the deformed point array at the correct vertex index
  deformed_points.set(idx1, vtx1[0], vtx1[1], vtx1[2], 1)
  deformed_points.set(idx2, vtx2[0], vtx2[1], vtx2[2], 1)
  deformed_points.set(idx3, vtx3[0], vtx3[1], vtx3[2], 1)

# apply the result back to the vertices
deformed_fn.setPoints(deformed_points, om.MSpace.kObject)

Thursday, 29 June 2017

Python and PyQt Image Compare Tool

As my work is predominantly focused within Maya, I do not have a lot of experience of creating standalone tools for use outside a Maya environment.
These days Maya has everything you need to write decent tools. Python is embedded along with PySide for PyQt, and now that the API has Python support it is much easier to write complex tooling than with the purely MEL and C++ offering of the bad old days.
However, writing tooling outside Maya is a different beast, as none of this comes pre-installed.

ImageCompare Tool

For my first foray into the world of standalone applications I decided to write a simple tool with the aim of getting used to things. In the end I went for one that finds similar or identical images based on a source image, giving you the option of checking or deleting the duplicates.

Python and PyQt
Python Interpreter: Python 2.7 64 bit
PyQt: PyQt for 2.7 64 bit exe

When I first approached this, the most confusing thing was looking at all the different versions of the software and libraries and working out which were applicable. 64-bit or 32-bit? 2.7, 3.3, source, binary, exe, tar... blah, blah, blah. After some false starts I found a combination that worked well for me, although depending on your requirements you may need different versions of the software.
The links above are enough to get you up and writing tools without much hassle, and as all of the above are installers they will be set up automatically, skipping the more complicated requirements of the manual PyQt install, which includes fiddling with sip.

Python Imaging Library
Python Imaging Library: PIL 64 bit exe

As this tool was intended for finding duplicate images I needed an extra library: PIL, the Python Imaging Library.
I wanted to be able to open an image and then, provided its dimensions matched the source, return the pixel RGB values at a given sample rate, for instance every tenth pixel. As long as the values match, the compare continues until the end of the image is reached; if there is no disparity a match has been found. PIL gives all of this and more, and seemed incredibly quick at pixel sampling.
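The sampled compare can be sketched without PIL over plain lists of RGB tuples (the `images_match` function below is a hypothetical illustration of the approach, not the tool's code). Note the trade-off: a coarse sample rate can miss small differences.

```python
def images_match(pixels_a, pixels_b, sample_rate=10):
    """Compare two flat lists of (r, g, b) tuples by sampling every Nth pixel."""
    if len(pixels_a) != len(pixels_b):
        return False  # dimensions differ, no need to sample
    # bail out on the first sampled disparity
    for i in range(0, len(pixels_a), sample_rate):
        if pixels_a[i] != pixels_b[i]:
            return False
    return True
```

With PIL the pixel lists would come from something like `Image.open(path).getdata()`; the comparison logic stays the same.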

Compile your python: Py2exe 64 bit

To test it, rather than running it over and over in the Python environment, I opted to convert it to an executable using py2exe. It was quick to convert the Python code into a tool that could be run with a simple double click. The only drawback of this method was that when there was a fault in my code the error was lost, as the window would close before it could be read. In the end I had to create a little batch file to run the executable with a pause at the end, allowing me to read each problem as it appeared.

To work with py2exe you will need a setup file. This is used by py2exe and gives it basic instructions about how to compile your py file(s). I found that to run my main program I needed to call it from a separate py file, which is referenced in the setup file so that when compiling, py2exe makes sure your program is run correctly. Py2exe also sources all of the libraries you are using and includes them with the executable.
In addition, you might consider using the batch file mentioned earlier to run your program whilst you are iterating and testing. This way you can catch any errors that occur.
These files are placed in a location relative to the Python folder. In my case I placed them directly at the root of the Python27 folder.

If compilation succeeds, the executable will be placed into a 'dist' folder along with the other libraries that the program requires.
One other thing to note is that if you wish to add an icon to your new application, all you need do is specify the filename after the 'icon_resources' key in setup.py. The caveat is that the setup appears to need to run twice to properly embed the icon. This is probably a bug, or perhaps simply something I have missed; either way, running it twice obviously doubles the compilation time.

Check out the video below to see what I have so far and then below that is the source code for the application.

ImageCompare: Python, PyQt, PIL and py2exe from SBGrover on Vimeo.

Try it out!!
Below I include the files I have created for my application simply to give you an idea of the setup and to have something to try out.

1. Write your code

 # Import the modules  
 import sys  
 from PyQt4 import QtCore, QtGui  
 from functools import partial  
 import Image  
 import os  
 import subprocess  
 class VerticalWidget(QtGui.QWidget):  
   def __init__(self):  
     super(VerticalWidget, self).__init__()  
     self.layout = QtGui.QVBoxLayout(self)  
 class HorizontalWidget(QtGui.QHBoxLayout):  
   def __init__(self, layout):  
     super(HorizontalWidget, self).__init__()  
     layout.addLayout(self, QtCore.Qt.AlignLeft)  
 class MainButtonWidget(QtGui.QPushButton):
   def __init__(self, layout, name, main_object, command, width):
     super(MainButtonWidget, self).__init__()
     self.setText(name)
     self.setFixedWidth(width)
     layout.addWidget(self, QtCore.Qt.AlignLeft)
     self.main_object = main_object
     self.command = command
   def mouseReleaseEvent(self, event):
     if event.button() == QtCore.Qt.LeftButton:
       self.run_command(self.command)
   def run_command(self, command):
     exec command
 class TabLayout(QtGui.QTabWidget):  
   def __init__(self, tab_dict):  
     super(TabLayout, self).__init__()  
     for tab in tab_dict:  
       self.addTab(tab[0], tab[1])  
 class OutputView(QtGui.QListWidget):  
   def __init__(self, layout):  
     super(OutputView, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
 class ListView(QtGui.QListWidget):
   def __init__(self, layout):
     super(ListView, self).__init__()
     layout.addWidget(self, QtCore.Qt.AlignLeft)
     self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
     self.connect(self, QtCore.SIGNAL("customContextMenuRequested(QPoint)" ), self.rightClicked)
   def rightClicked(self, QPos):
     self.listMenu = QtGui.QMenu()
     menu_item_a = QtGui.QAction("Open in Explorer", self.listMenu)
     menu_item_b = QtGui.QAction("Delete File", self.listMenu)
     menu_item_c = QtGui.QAction("Open File", self.listMenu)
     menu_item_a.triggered.connect(self.open_in_explorer)
     menu_item_b.triggered.connect(self.delete_file)
     menu_item_c.triggered.connect(self.open_file)
     self.listMenu.addAction(menu_item_a)
     self.listMenu.addAction(menu_item_b)
     self.listMenu.addAction(menu_item_c)
     parentPosition = self.mapToGlobal(QtCore.QPoint(0, 0))
     self.listMenu.move(parentPosition + QPos)
     self.listMenu.show()
   def open_in_explorer(self):
     path = self.currentItem().text()
     path = path.replace("/", "\\")
     subprocess.Popen('explorer /select,' + r'%s' % path)
   def open_file(self):
     path = self.currentItem().text()
     os.startfile(str(path))
   def delete_file(self):
     path_list = self.selectedItems()
     for path in path_list:
       os.remove(str(path.text()))
       self.takeItem(self.row(path))
 class SpinBox(QtGui.QSpinBox):  
   def __init__(self, layout):  
     super(SpinBox, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
 class FileTextEdit(QtGui.QTextEdit):  
   def __init__(self, layout):  
     super(FileTextEdit, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
 class Text(QtGui.QLabel):
   def __init__(self, layout, text):
     super(Text, self).__init__()
     self.setText(text)
     layout.addWidget(self, QtCore.Qt.AlignRight)
 class ImageCompare_Helpers():  
   def get_image(self, path):
     try:
       im = Image.open(path)
     except IOError:
       print "Pick a VALID image file (.jpg, .gif, .tga, .bmp, .png, .tif)"
       return None
     rgb_im = im.convert('RGB')
     return rgb_im
   def get_pixel_color(self, img, sample_size):  
     width, height = img.size  
     pixel_total = width * height  
     pixel_set = []
     # sample every 'sample_size'-th pixel across the whole image
     for pixel in range(0, pixel_total, sample_size):
       x, y = self.convert_to_pixel_position(pixel, width)  
       r, g, b = img.getpixel((x, y))  
       pixel_set.append([r, g, b])  
     return pixel_set  
   def convert_to_pixel_position(self, val, width):  
     x = val % width  
     y = val / width  
     return x, y  
   def get_file(self, main_object):
     file = QtGui.QFileDialog.getOpenFileName(None, 'Select Source File', 'c:/',
                        "Images (*.jpg *.gif *.tga *.bmp *.png *.tif)")
     if file:
       main_object.file_text.setText(file)
   def get_folder(self, main_object):
     folder = QtGui.QFileDialog.getExistingDirectory(None, 'Select Search Folder')
     if folder:
       main_object.folder_text.setText(folder)
   def do_it(self, main_object):  
     output_view = main_object.output_view  
     directory = main_object.folder_text.toPlainText()  
     filename = main_object.file_text.toPlainText()  
     if filename and directory:  
       matching_images = []  
       self.sample_size = main_object.samples.value()  
       self.accuracy = main_object.threshold.value()  
       self.accuracy = abs(self.accuracy - 100)  
       img1 = self.get_image(str(filename))  
       img1_pixels = self.get_pixel_color(img1, self.sample_size)  
       all_images = self.read_all_subfolders(str(directory))  
       width, height = img1.size  
       img1_size = width * height  
       output_view.addItem("SOURCE IMAGE ----> " + str(filename))  
       for image in all_images:  
         image = image.replace("\\", "/")  
         if image != filename:  
           output_view.addItem("Comparing: " + image)  
           img2 = self.get_image(image)  
           img2_width, img2_height = img2.size  
           if img2_width * img2_height == img1_size:  
             img2_pixels = self.get_pixel_color(img2, self.sample_size)  
             same = self.compare_images(img1_pixels, img2_pixels)  
            if same:
              matching_images.append(image)
      output_view.addItem("MATCHING IMAGES")
      if matching_images:
        for i in matching_images:
          output_view.addItem(i)
   def compare_images(self, img1_pixel_set, img2_pixel_set):
     length = len(img1_pixel_set)
     for count, i in enumerate(img1_pixel_set):
       img1_total = i[0] + i[1] + i[2]
       img2_total = img2_pixel_set[count][0] + img2_pixel_set[count][1] + img2_pixel_set[count][2]
       img2_upper = img2_total + self.accuracy
       img2_lower = img2_total - self.accuracy
       if img2_lower <= img1_total <= img2_upper:
         if count == length - 1:
           return 1
       else:
         # any sample outside the threshold means the images differ
         return 0
   def read_all_subfolders(self, path):  
     all_images = []  
     suffix_list = [".jpg", ".gif", ".tga", ".bmp", ".png", ".tif"]  
     for root, dirs, files in os.walk(path):  
       for file in files:  
         for suffix in suffix_list:  
           if file.lower().endswith(suffix):  
             all_images.append(os.path.join(root, file))  
     return all_images  
 class ImageCompare_UI(ImageCompare_Helpers):
   def __init__(self):
     self.app = None
     self.main_widget = None
   def run_ui(self, ImageCompare):
     self.app = QtGui.QApplication(sys.argv)
     self.main_widget = QtGui.QWidget()
     self.main_widget.resize(600, 600)
     main_layout = VerticalWidget()  
     # VIEW FILES  
     vertical_layout_list = VerticalWidget()  
     self.list_view = ListView(vertical_layout_list.layout)  
     MainButtonWidget(vertical_layout_list.layout, "Delete All", self,  
              "", 128)  
     # FIND FILES  
     vertical_layout = VerticalWidget()  
     horizontal_layout_a = HorizontalWidget(vertical_layout.layout)  
     Text(horizontal_layout_a, "Source File")  
     self.file_text = FileTextEdit(horizontal_layout_a)  
     MainButtonWidget(horizontal_layout_a, "<<", self,  
              "file = self.main_object.get_file(self.main_object)", 32)  
     horizontal_layout_b = HorizontalWidget(vertical_layout.layout)  
     Text(horizontal_layout_b, "Source Folder")  
     self.folder_text = FileTextEdit(horizontal_layout_b)  
     MainButtonWidget(horizontal_layout_b, "<<", self,  
              "file = self.main_object.get_folder(self.main_object)", 32)  
     horizontal_layout_c = HorizontalWidget(vertical_layout.layout)  
     Text(horizontal_layout_c, "Accuracy %")  
     self.threshold = SpinBox(horizontal_layout_c)  
     self.threshold.setToolTip("Deviation threshold for RGB Values. Higher means more deviation but more inaccuracy")  
     Text(horizontal_layout_c, "  Pixel Steps")  
     self.samples = SpinBox(horizontal_layout_c)  
     self.samples.setToolTip("Steps between each pixel sample. Higher is faster but less accurate")  
     self.output_view = OutputView(vertical_layout.layout)  
     MainButtonWidget(vertical_layout.layout, "Cancel Search", self,  
              "", 128)  
     MainButtonWidget(vertical_layout.layout, "Run Image Compare", self, "self.main_object.do_it(self.main_object)", 128)  
     vertical_layout.layout.addStretch()
     self.tab = TabLayout([[vertical_layout, "Find Files"], [vertical_layout_list, "Results"]])
     window_layout = QtGui.QVBoxLayout(self.main_widget)
     window_layout.addWidget(self.tab)
     self.main_widget.show()
     sys.exit(self.app.exec_())
 class ImageCompare(ImageCompare_UI):
   def __init__(self):
     super(ImageCompare, self).__init__()
     print "RUNNING Compare"
     self.run_ui(self)

 from distutils.core import setup
 from py2exe.build_exe import py2exe
 setup_dict = dict(
   windows = [{'script': "run_image_compare.py",  # the small runner file shown below; filename is illustrative
         "icon_resources": [(1, "image_compare.ico")], "dest_base": "ImageCompare"}],
 )
 setup(**setup_dict)

 import ImageCompare as ic  
 compare = ic.ImageCompare()  

2. Run py2exe to compile it
In cmd type:

 python setup.py py2exe
Unless the Python folder is in your environment variables, you will need to run this from within the Python27 folder.

3. Run a batch file to catch errors and iterate on your code
Create a new batch file that contains something like the following (the executable name comes from the 'dest_base' key in setup.py):

 ImageCompare.exe
 pause

Friday, 23 June 2017

UV Based Blendshape Conversion

Something we work with a lot in the games industry is LODs. For those who don't know, this stands for Level Of Detail, and their purpose is to provide incrementally more or less detailed versions of a mesh (and possibly skeleton) based on the current view distance.
These days it is trivial to set these up using third party software, but certain things do not always get taken into account. One of these is blendshapes. If our top level mesh - the one that was originally modelled - has a set of corrective shapes, these do not get transferred to the LODs when they are created.
The point of the tool in this post is to help alleviate that issue by taking the blendshape targets on the top LOD and converting them down to each LOD in turn, even though the topology is completely different.
Historically blendshape targets have always had issues with differing topology, as they require an exact match in vertex order between shapes. If this changes, the results can be diabolical.
This tool gets around that by ignoring vertex order. One thing each LOD has in common with the original mesh is the UV layout, such as in the image below. Although these do not match exactly, they are relatively close.

It is easier to find matches for positions in 2D space than in 3D space, so it makes sense to leverage the UV layout of each LOD to produce a better 3D representation of the mesh.
Looking into this I broke the process down into the following steps, with the idea of converting it into a node for V2.0.

# ON BOOT (or forced update) - HEAVY
# set up iterators for the source and target objects
# get per vertex the uv indices for each mesh ( itr.getUVIndices() )
# set up array for vertex idx ---> uv idx
# set up array for the inverse uv idx ---> vertex idx
# get the UV positions for the source mesh ( MFnMesh.getUVs() )
# get the vertex positions for the source and target mesh ( MFnMesh.getPoints() )

# iterate through the target vertex index to uv index array and set up a vertex to vertex mapping ( target --> source )
# get the uv position for the target uv index
# compare this position to ALL positions in the array returned from the UV positions for the source mesh. The aim is to find the minimum euclidean distance between two sets of UV coordinates
# get the index of the UV that this value applies to and convert it to a vertex index using the source uv to vertex index array
# mapping: vertex_map_array[target vertex index] = source_vertex_index


# iterate through the vertex mapping
# target_idx = current iteration
# get the source index from the current iteration index in the vertex mapping array
# get the source point position for the current index
# set the target point array at the index of the current iteration to the source point

# set all points onto the target object
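Stripped of the Maya API, the mapping steps above are a nearest-neighbour search in UV space followed by an index lookup. A minimal pure-Python sketch of the idea, assuming a simple data layout of flat lists of (u, v) tuples and a uv-index-to-vertex-index list:

```python
def closest_uv_index(uv, source_uvs):
    # index of the source UV with minimum euclidean distance to 'uv'
    best, best_dist = 0, float("inf")
    for i, (u, v) in enumerate(source_uvs):
        d = (uv[0] - u) ** 2 + (uv[1] - v) ** 2  # squared distance is enough to compare
        if d < best_dist:
            best, best_dist = i, d
    return best

def build_vertex_map(target_uvs, source_uvs, source_uv_to_vertex):
    # for each target UV, find the closest source UV and convert it to a vertex index
    return [source_uv_to_vertex[closest_uv_index(uv, source_uvs)]
            for uv in target_uvs]

def apply_mapping(vertex_map, source_points):
    # target point i takes the position of its mapped source vertex
    return [source_points[src] for src in vertex_map]
```

A real implementation would accelerate the search (a kd-tree, for instance), since the brute-force comparison is quadratic in vertex count.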

This appears to work reasonably well at least as a first step.
There are potential pitfalls to take into account. For example, what do we do with target meshes that have more vertices? At the moment the closest point will be found, which means we will probably end up with overlapping vertices. Similarly, what do we do with meshes that have fewer vertices? At the moment the areas that are missing the desired geometry may not fit the source mesh nicely. These are expected issues that could potentially be circumvented, but that is for another day. In the meantime, take a look at the video below, which shows the secret of turning a sphere into a torus.

Blendshape Target Convertor V1.0 from SBGrover on Vimeo.

Tuesday, 6 June 2017

Collision Based Deformer

This post briefly details three examples of a deformer that can react to collisions with another object. The video at the end shows all three examples in action, and as ever a basic Python version of the compiled plugin is included to get you started.

Direct deformation
The first example is the most basic implementation of the node to achieve direct deformation.
Using MFnMesh::allIntersections to detect intersections between two meshes, then extracting and applying the delta between the intersecting points, it is possible to create the effect of direct deformation.
It is worth noting that allIntersections has some caveats.
- The first is that any mesh you work with must be a closed surface. Intersection is calculated by firing a ray from a given point and counting how many surfaces it passes through before it dies: an odd count means the point is inside the mesh, an even count means it is outside. An open mesh risks returning a misleading hit count, breaking this inside/outside test.
- The second is that, because deltas are obtained by returning the closest point on the collision object's surface from a given point on the colliding object, the returned closest point can end up on the opposite side of the collision object once the colliding object travels past its centre line. This gives the result of vertices snapping to the wrong side of the mesh, although the effect can be quite interesting.
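The hit-count rule is the classic ray-casting parity test. Here is the same principle illustrated in 2D for a point-in-polygon check; this is an illustration of the concept only, not the allIntersections call itself:

```python
def inside_polygon(point, polygon):
    # cast a ray along +x and count edge crossings; an odd count means inside
    x, y = point
    hits = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge spans the ray's y value
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                hits += 1
    return hits % 2 == 1
```

Remove one edge of the polygon and the count lies, which is exactly why allIntersections needs a closed surface.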

Secondary deformation
The second example expands on the first and adds secondary deformation. This version retains all the features of the first but also pushes the intersecting vertex out along its normal to give an impression of volume retention. This is adjustable, so the result can be exaggerated or switched off altogether. The deformer gives control of the falloff shape using an MRampAttribute, plus an attribute to define how much of the surface the effect covers. It is also possible to paint its attributes for fine control over the end result.
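The push along the normal can be sketched as follows; a simple linear falloff is assumed here, whereas in the plugin the MRampAttribute defines the falloff curve:

```python
def push_along_normal(point, normal, distance, radius, strength):
    # push a vertex out along its normal, scaled by a linear falloff:
    # full effect at the contact point, fading to zero 'radius' away
    falloff = max(0.0, 1.0 - distance / radius) if radius > 0 else 0.0
    return tuple(p + n * strength * falloff for p, n in zip(point, normal))
```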

Sticky deformation
The third example changes direction and stores all colliding deformed points in an array, only updating their positions if their delta increases. Added to this is a compute-based timer that gradually returns the mesh to its original shape unless it is collided with again.
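The sticky behaviour amounts to keeping a per-vertex delta that only grows on contact and decays back toward zero over time. A minimal sketch with scalar deltas, the fixed decay rate standing in for the plugin's compute-based timer:

```python
def update_sticky_deltas(stored, incoming, decay=0.1):
    # keep the larger delta per vertex, then relax everything toward rest
    out = []
    for old, new in zip(stored, incoming):
        d = new if abs(new) > abs(old) else old  # only update if the delta increases
        d *= (1.0 - decay)                       # timer gradually returns mesh to rest
        out.append(d)
    return out
```

Calling this every frame with zero incoming deltas relaxes the mesh back to its original shape.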

Collision Based Deformer from SBGrover on Vimeo.

Below is a Python implementation of the first example to get you started; it provides the direct deformation. Be aware that as this is Python the results are much slower than a compiled plugin, so it is best not to throw it at dense geometry. A helper snippet to build a test scene with the plugin is included.


 import maya.OpenMaya as OpenMaya  
 import maya.OpenMayaAnim as OpenMayaAnim  
 import maya.OpenMayaMPx as OpenMayaMPx  
 class collisionDeformer(OpenMayaMPx.MPxDeformerNode):  
      kPluginNodeId = OpenMaya.MTypeId(0x00000012)  
      kPluginNodeTypeName = "collisionDeformer"  
      def __init__(self):  
           OpenMayaMPx.MPxDeformerNode.__init__( self )  
           self.accelParams = OpenMaya.MMeshIsectAccelParams() #speeds up intersect calculation  
           self.intersector = OpenMaya.MMeshIntersector() #contains methods for efficiently finding the closest point to a mesh, required for collider  
      def deform( self, block, geoItr, matrix, index ):  
           #get ENVELOPE  
           envelope = OpenMayaMPx.cvar.MPxGeometryFilter_envelope  
           envelopeHandle = block.inputValue(envelope)  
           envelopeVal = envelopeHandle.asFloat()  
           if envelopeVal!=0:  
                #get COLLIDER MESH (as worldMesh)  
                colliderHandle = block.inputValue(self.collider)  
                inColliderMesh = colliderHandle.asMesh()  
                if not inColliderMesh.isNull():  
                     #get collider fn mesh  
                     inColliderFn = OpenMaya.MFnMesh(inColliderMesh)  
                     #get DEFORMED MESH  
                     inMesh = self.get_input_geom(block, index)  
                     #get COLLIDER WORLD MATRIX to convert the bounding box to world space  
                     colliderMatrixHandle = block.inputValue(self.colliderMatrix)  
                     colliderMatrixVal = colliderMatrixHandle.asMatrix()  
                     #get BOUNDING BOX MIN VALUES  
                     colliderBoundingBoxMinHandle = block.inputValue(self.colliderBoundingBoxMin)  
                     colliderBoundingBoxMinVal = colliderBoundingBoxMinHandle.asFloat3()  
                     #get BOUNDING BOX MAX VALUES  
                     colliderBoundingBoxMaxHandle = block.inputValue(self.colliderBoundingBoxMax)  
                     colliderBoundingBoxMaxVal = colliderBoundingBoxMaxHandle.asFloat3()  
                     #build new bounding box based on given values  
                     bbox = OpenMaya.MBoundingBox()  
                     bbox.expand(OpenMaya.MPoint(colliderBoundingBoxMinVal[0], colliderBoundingBoxMinVal[1], colliderBoundingBoxMinVal[2]))  
                     bbox.expand(OpenMaya.MPoint(colliderBoundingBoxMaxVal[0], colliderBoundingBoxMaxVal[1], colliderBoundingBoxMaxVal[2]))  
                     #set up point on mesh and intersector for returning closest point and accelParams if required  
                     pointOnMesh = OpenMaya.MPointOnMesh()   
                     self.intersector.create(inColliderMesh, colliderMatrixVal)  
                     #set up constants for allIntersections  
                     faceIds = None  
                     triIds = None  
                     idsSorted = False  
                     space = OpenMaya.MSpace.kWorld  
                     maxParam = 100000  
                     testBothDirs = False  
                     accelParams = None  
                     sortHits = False  
                     hitRayParams = None  
                     hitFaces = None  
                     hitTriangles = None  
                     hitBary1 = None  
                     hitBary2 = None  
                     tolerance = 0.0001  
                     floatVec = OpenMaya.MFloatVector(0, 1, 0) #set up arbitrary vector n.b this is fine for what we want here but anything more complex may require vector obtained from vertex  
                     #deal with main mesh  
                     inMeshFn = OpenMaya.MFnMesh(inMesh)  
                     inPointArray = OpenMaya.MPointArray()  
                     inMeshFn.getPoints(inPointArray, OpenMaya.MSpace.kWorld)  
                      #create array to store final points and set to correct length  
                      length = inPointArray.length()  
                      finalPositionArray = OpenMaya.MPointArray()  
                      finalPositionArray.setLength(length)  
                     #loop through all points. could also be done with geoItr  
                     for num in range(length):  
                          point = inPointArray[num]  
                          #if point is within collider bounding box then consider it  
                          if bbox.contains(point):  
                               ##-- allIntersections variables --##  
                               floatPoint = OpenMaya.MFloatPoint(point)  
                               hitPoints = OpenMaya.MFloatPointArray()  
                               inColliderFn.allIntersections( floatPoint, floatVec, faceIds, triIds, idsSorted, space, maxParam, testBothDirs, accelParams, sortHits, hitPoints, hitRayParams, hitFaces, hitTriangles, hitBary1, hitBary2, tolerance )  
                                if hitPoints.length()%2 == 1:  
                                     #odd hit count means the point is inside the collider: work out closest point  
                                     closestPoint = OpenMaya.MPoint()  
                                     inColliderFn.getClosestPoint(point, closestPoint, OpenMaya.MSpace.kWorld, None)  
                                     #calculate delta and snap the point to the collider surface  
                                     delta = point - closestPoint  
                                     finalPositionArray.set(point - delta, num)  
                                else:  
                                     finalPositionArray.set(point, num)  
                           else:  
                                #if point is not in bounding box simply add the position to the final array  
                                finalPositionArray.set(point, num)  
                     inMeshFn.setPoints(finalPositionArray, OpenMaya.MSpace.kWorld)  
      def get_input_geom(self, block, index):  
           input_attr = OpenMayaMPx.cvar.MPxGeometryFilter_input  
           input_geom_attr = OpenMayaMPx.cvar.MPxGeometryFilter_inputGeom  
           input_handle = block.outputArrayValue(input_attr)  
           input_handle.jumpToElement(index)  
           input_geom_obj = input_handle.outputValue().child(input_geom_attr).asMesh()  
           return input_geom_obj  
 def creator():  
      return OpenMayaMPx.asMPxPtr(collisionDeformer())  
 def initialize():  
      gAttr = OpenMaya.MFnGenericAttribute()  
      mAttr = OpenMaya.MFnMatrixAttribute()  
      nAttr = OpenMaya.MFnNumericAttribute()  
      collisionDeformer.collider = gAttr.create( "colliderTarget", "col")  
      gAttr.addDataAccept( OpenMaya.MFnData.kMesh )  
      collisionDeformer.colliderBoundingBoxMin = nAttr.createPoint( "colliderBoundingBoxMin", "cbbmin")  
      collisionDeformer.colliderBoundingBoxMax = nAttr.createPoint( "colliderBoundingBoxMax", "cbbmax")  
      collisionDeformer.colliderMatrix = mAttr.create("colliderMatrix", "collMatr", OpenMaya.MFnMatrixAttribute.kFloat )  
      collisionDeformer.multiplier = nAttr.create("multiplier", "mult", OpenMaya.MFnNumericData.kFloat, 1)  
      collisionDeformer.addAttribute( collisionDeformer.collider )  
      collisionDeformer.addAttribute( collisionDeformer.colliderMatrix )  
      collisionDeformer.addAttribute( collisionDeformer.colliderBoundingBoxMin )  
      collisionDeformer.addAttribute( collisionDeformer.colliderBoundingBoxMax )  
      collisionDeformer.addAttribute( collisionDeformer.multiplier )  
      outMesh = OpenMayaMPx.cvar.MPxGeometryFilter_outputGeom  
      collisionDeformer.attributeAffects( collisionDeformer.collider, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.colliderBoundingBoxMin, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.colliderBoundingBoxMax, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.colliderMatrix, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.multiplier, outMesh )  
 def initializePlugin(obj):  
      plugin = OpenMayaMPx.MFnPlugin(obj, 'Grover', '1.0', 'Any')  
      try:  
           plugin.registerNode('collisionDeformer', collisionDeformer.kPluginNodeId, creator, initialize, OpenMayaMPx.MPxNode.kDeformerNode)  
      except:  
           raise RuntimeError, 'Failed to register node'  
 def uninitializePlugin(obj):  
      plugin = OpenMayaMPx.MFnPlugin(obj)  
      try:  
           plugin.deregisterNode(collisionDeformer.kPluginNodeId)  
      except:  
           raise RuntimeError, 'Failed to deregister node'  
 #simply create two polygon spheres. Move the second away from the first, select the first and run the code below.  
 import maya.cmds as cmds  
 cmds.deformer(type='collisionDeformer')  
 cmds.connectAttr('pSphere2.worldMesh', 'collisionDeformer1.colliderTarget')  
 cmds.connectAttr('pSphere2.matrix', 'collisionDeformer1.colliderMatrix')  
 cmds.connectAttr('pSphere2.boundingBox.boundingBoxMin', 'collisionDeformer1.colliderBoundingBoxMin')  
 cmds.connectAttr('pSphere2.boundingBox.boundingBoxMax', 'collisionDeformer1.colliderBoundingBoxMax')  