PyQt: Maya Character Picker

Coming Soon: An example of a fully featured character picker using the magic of PySide

Texture Based Deformer

Deform a mesh based on the colour values derived from a procedural texture

Visibility Node v2.0

A tool to help visualise hidden mesh objects by utilising componentModifiers

UV Based Blendshape Conversion

Convert blendshape targets on meshes with differing topologies

Python and PyQt Image Compare Tool

Investigation into writing a standalone application that can be compiled and run within Windows

Friday, 8 December 2017

RBF Based Bind Conversion

Continuing the foray into RBF has produced a Python function that can convert the joint positions in one bind file to another based purely on the vertex data from the two meshes. This concept has been demonstrated in the past by Hans Goddard (3m 30 in), who shows the conversion of mesh data such as clothing from one mesh to another in his video.
Building on this with a bind conversion seems like a logical progression. It should be possible to make quick iterations on multitudes of character types by taking a base mesh with clothing, bound to a skeleton, converting the clothing to a differing mesh, and then binding it to a skeleton that has been regenerated using the same process. As long as the source skin weights are decent there is no reason the target mesh should not achieve the same level of deformation.

We have tried combining the two processes at our studio with a high level of success. This video highlights the bind section of the process.
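The transfer itself boils down to standard RBF interpolation: treat the source mesh vertices as the interpolation nodes and the corresponding target mesh vertices as the values, then evaluate the resulting function at each joint position. Below is a minimal NumPy sketch of that idea, assuming a linear kernel; the function names and kernel choice are illustrative, not the code used in production:

```python
import numpy as np

def rbf_solve(nodes, values, reg=1e-9):
    """Solve for RBF weights mapping node positions to values.

    nodes:  (n, 3) source mesh vertex positions
    values: (n, 3) corresponding target mesh vertex positions
    """
    # pairwise distances between nodes, with a linear kernel phi(r) = r
    dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    # a small regularisation keeps the system solvable if vertices coincide
    return np.linalg.solve(dist + reg * np.eye(len(nodes)), values)

def rbf_eval(nodes, weights, points):
    """Evaluate the interpolated function at arbitrary points (e.g. joint positions)."""
    dist = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=-1)
    return dist @ weights
```

A joint that sits exactly on a source vertex lands exactly on the matching target vertex; anything in between is blended smoothly across the surrounding surface.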

RBF_based_bind_conversion from SBGrover on Vimeo.

Wednesday, 6 December 2017

Animation Ghosting Locator

I have played around with locators before in Maya and have produced a couple that have actually been of some use in the daily grind. During the transition to Maya 2016 from previous versions I had to convert these to run with the new drawOverride method required to display them in Viewport 2.0. I found it a headache, as the new way of assembling the data seemed rather opaque, especially compared to the simpler Legacy Viewport days.
More recently I decided to return to this. Now that I have a better knowledge of the API and am more confident using C++ I wanted to understand the process more completely.
The new classes available to me include MUIDrawManager, which encapsulates the tools needed to create the visual aspects of the locators.

I had opted to create a locator that would provide the animation team with some kind of ghosting on the meshes in the scene to allow them to compare between the current frame and others without changing the frame they are on. For this I identified a number of things I would need.

1. A way of returning the mesh at a different frame
2. A way of rendering that returned data in the current view
3. Options to change the visual aspects as required
4. Options to change the frame read
5. An option to have the result track the current frame with the existing offset

I was already aware of a class that would give me the ability to read items at a different frame from the current one: MDGContext. I had never used it before, but doing so did not prove difficult. The sample code below shows the section that deals with the read ahead for the ghosting data.

void GhostingNodeDrawOverride::getMeshPoints(const MDagPath& objPath)
{
 MStatus status;
 MObject ghostObj = objPath.node(&status);

 if (status)
 {
  // get plug to mesh
  MPlug plug(ghostObj, GhostingNode::mesh);
  // if data exists do something
  if (!plug.isNull())
  {
   // get user input data (desired frame/tracking switch)
   double frame = getFrame(objPath);
   int isTracking = getTracking(objPath);
   // is tracking on for the first time?
   if (isTracking == 1 && switched == 0)
   {
    // get the current time as an NTSC frame number
    MTime currentTime = MAnimControl::currentTime();
    int curTime = (int)currentTime.as(MTime::kNTSCFrame);
    // calculate the difference between desired frame and current frame
    difference = curTime - (int)frame;
    switched = 1;
   }
   // or is tracking already on?
   else if (isTracking == 1)
   {
    // get the current time as an NTSC frame number
    MTime currentTime = MAnimControl::currentTime();
    int curTime = (int)currentTime.as(MTime::kNTSCFrame);
    // calculate offset based on previously calculated difference
    frame = curTime - difference;
   }
   // else everything is off
   else
   {
    switched = 0;
    difference = 0;
   }

   // convert the calculated frame to NTSC
   MTime desiredFrame(frame, MTime::kNTSCFrame);
   // set up an MDGContext at the desired frame
   MDGContext prevCtx(desiredFrame);
   // get MObject from plug at the given context
   MObject geoPrev = plug.asMObject(prevCtx);
   // create a MeshPolygon iterator from the MObject
   MItMeshPolygon meshIt(geoPrev);
  }
 }
}

Something to bear in mind is that making use of a context based read can be expensive. Maya has to re-evaluate a frame under the hood to pass back the correct data which can slow down performance somewhat.
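The tracking-offset bookkeeping in the snippet above can be reduced to plain Python, which may make the three branches easier to follow. The class and method names here are just for illustration:

```python
class FrameTracker:
    """Mirrors the switched/difference state held on the draw override."""

    def __init__(self):
        self.switched = 0    # has tracking just been turned on?
        self.difference = 0  # offset between current frame and desired frame

    def frame_to_read(self, current_frame, desired_frame, tracking):
        if tracking and not self.switched:
            # tracking turned on for the first time: record the offset
            self.difference = current_frame - desired_frame
            self.switched = 1
            return desired_frame
        if tracking:
            # tracking already on: hold the recorded offset from the current frame
            return current_frame - self.difference
        # tracking off: reset and read the user's frame directly
        self.switched = 0
        self.difference = 0
        return desired_frame
```

Scrub from frame 10 to frame 12 with the ghost set to frame 5 and the read-ahead follows along at frame 7.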

Items 3, 4 and 5 are easily dealt with as they are simply extra attributes on the node that give a user options as to how to interact with it.
Item 2 involved the actual rendering of the locator, which meant finding out how a number of the MPxDrawOverride functions operate with each other. Amongst them a key function is addUIDrawables, in which the locator is 'built'. Most of your code will sit in this function, as it is where you assemble points, edges, colors and so on, and package them ready to be rendered. A large bulk of locator code, at least in my case, sits under this override.

For my locator I wanted to allow a user to draw a whole mesh, a wireframe and a set of points either one at a time or altogether. I also wanted to allow control over colour and transparency which would be very important to distinguish the locator in a busy scene. Below is the section of code where this is set up. I have previously created a point list, edge list and color list from the iterator in the code snippet above and am passing them to MUIDrawManager functions to get the expected result.

if (trianglePointList.length() > 0)
{
 // get the color of the points
 MColor pointColor = colorWeights[0];

 // reset and start drawing
 drawManager.beginDrawable();

 // user defined display settings
 // render edges
 if (displaySettings[2] == 1)
  drawManager.mesh(MHWRender::MUIDrawManager::kLines, edgePointList, NULL, NULL, NULL, NULL);

 // render faces
 if (displaySettings[1] == 1)
  drawManager.mesh(MHWRender::MUIDrawManager::kTriangles, trianglePointList, NULL, &colorWeights, NULL, NULL);

 // render points
 if (displaySettings[0] == 1)
 {
  drawManager.setColor(pointColor);
  drawManager.points(trianglePointList, false); // second parameter is draw in 2d
 }

 // finish drawing for now
 drawManager.endDrawable();
}

As you can see from above I am using a mixture of kLines, kTriangles and plain points. The first two are MUIDrawManager::Primitive values that I supply to the mesh function.

Even though I was initially a little put off with the way Maya now needs the code structured for Viewport 2.0 drawing of locators it did not take long to get a grasp on how the class should be utilised and although I am sure I still have much to learn I feel a good start has been made.

Maya Animation Ghosting Locator from SBGrover on Vimeo.

RBF Based Colour Reader

Years back I produced a node based colour reader that utilised the closest point utility node to read back texel values at a UV position. By supplying a ramp texture you could have the system spit out RGB values which, being 0 to 1 based, lend themselves perfectly to driving other systems. A drawback of this method was that the texture provided had to be procedurally generated; textures created by hand in Photoshop would not work, which made it more difficult to customise the colours for a specific output. Another was that it was limited to NURBS surfaces.

However it did work well, especially in areas where setting an extreme position was not always simple. For example, we have a couple of other pose space solutions available to us at our studio. One is the standard cone reader, which takes the angle between two vectors and returns a value based on how far that angle sits between the centre and outer edge of a given radius.
The second takes a vector magnitude between a radial centre point and a target point and again tests whether that magnitude sits inside or outside a given radius. These two ways of returning a value suffer from the same shortcoming. When the target point or angle passes into the given radius and hits 0 the output is at maximum, but continue on and the target passes the centre and travels back outside the radius. This leads to problems. For instance, driving a blendshape with the output of either of these setups will result in the blendshape target climbing to full application before decreasing again. In areas such as shoulders on characters this can lead to unpredictable deformation unless many of these readers are utilised to counteract the problem.
Hitting and passing these extremes with the colour space reader results in the extreme value always reading maximum. This means that when a limb is pushed a bit too far the driven shapes do not start collapsing inwards again.
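The difference is easy to see with a toy version of both behaviours. A radius based reader ramps back down once the driver passes the centre, while a clamped reader, which is effectively what the colour space reader gives you, holds its maximum. Both functions are simplified illustrations, not the studio tools:

```python
def radial_reader(angle, radius):
    # output rises to 1.0 as the angle reaches the centre, then falls again past it
    return max(0.0, 1.0 - abs(angle) / radius)

def clamped_reader(angle, radius):
    # output rises to 1.0 and stays there once the driver passes the centre
    return max(0.0, min(1.0, 1.0 - angle / radius))
```

At angle 0 both read 1.0; push past the centre to -0.5 of the radius and the radial reader drops back to 0.5 while the clamped one stays at 1.0, so a driven blendshape no longer collapses inwards.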

My colleagues and I have recently been investigating the application of RBF based solving to large amounts of data. I decided it was time to rewrite the colour space reader, this time utilising the Maya API, C++ and the new RBF learnings.
By using each mesh vertex on the reader as a 'node point' for the RBF, and then throwing in the colours at each of these vertices as values, it was possible to extrapolate a weighted result that could be output as a single RGBA value. The beauty of this method is that rather than sampling texels, which are more difficult to apply to the method, the user can simply create any mesh, apply vertex colours to any of its vertices and get a result. Want to change the result? Change the colours or reshape the mesh. It's nice and simple.

The solution still uses closest point but through MFnMesh this time.
I have included a Python version of this node with the post to get you started if you fancy a stab at it. This version does not use RBF; instead it weights the colours based on distances.
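The distance based weighting the Python version uses boils down to a few lines: weight each vertex colour by the inverse of its distance to the sample point, unless the point lands exactly on a vertex. This helper is written purely for illustration:

```python
def inverse_distance_blend(colors, distances):
    """Blend RGB colours by inverse distance.

    colors:    list of (r, g, b) tuples at the surrounding vertices
    distances: distance from the sample point to each vertex
    """
    # exact hit on a vertex: return that colour outright
    for color, dist in zip(colors, distances):
        if dist == 0.0:
            return color
    # otherwise closer vertices contribute more, normalised to sum to 1
    weights = [1.0 / d for d in distances]
    total = sum(weights)
    return tuple(sum(c[i] * w for c, w in zip(colors, weights)) / total for i in range(3))
```

Two vertices at equal distance blend to the midpoint colour; slide the sample point towards one of them and its colour takes over.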

RBF Based Colour Reader from SBGrover on Vimeo.

It’s worth noting that whilst creating this node I found an issue with Maya and the worldMesh output. If worldMesh is used then the colour output does not update when colours on the mesh change. This does not appear to happen with the Python version but is worth keeping your eye on. If you find that you get this result you will need to adjust the node to use outMesh, which will involve multiplying the inMesh by its own worldMatrix to convert the points to world space. You will also need to multiply the centre object MPoint by the inverse of this worldMatrix and use this new MPoint with the closestPoint calculation.


import maya.cmds as mc

mc.connectAttr('pPlaneShape1.worldMesh', 'ColorSpaceReaderPy1.inMesh', f=True)
mc.connectAttr('locator1.worldMatrix', 'ColorSpaceReaderPy1.centre', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outClosest', 'pSphere1.translate', f=True)
mc.connectAttr('ColorSpaceReaderPy1.outColor', 'lambert2.color', f=True)


import maya.OpenMaya as om
import maya.OpenMayaMPx as omMPx

kPluginNodeTypeName = "ColorSpaceReaderPy"
kPluginNodeClassify = 'utility/general'
kPluginNodeId = om.MTypeId(0x81012)

class ColorSpaceReader(omMPx.MPxNode):

 inMesh = om.MObject()
 inCentre = om.MObject()
 outClosest = om.MObject()
 outColor = om.MObject()

 def __init__(self):
  omMPx.MPxNode.__init__(self)

 def compute(self, plug, data):

  inMeshData = data.inputValue(ColorSpaceReader.inMesh).asMesh()
  inCentreMatrix = data.inputValue(ColorSpaceReader.inCentre).asMatrix()
  outColorHandle = data.outputValue(ColorSpaceReader.outColor)
  outClosestHandle = data.outputValue(ColorSpaceReader.outClosest)

  if not inMeshData.isNull():
   meshFn = om.MFnMesh(inMeshData)
   sourceVerts = om.MPointArray()
   colors = om.MColorArray()
   meshFn.getPoints(sourceVerts, om.MSpace.kWorld)
   meshFn.getVertexColors(colors)
   centrePos = om.MPoint(inCentreMatrix(3, 0), inCentreMatrix(3, 1), inCentreMatrix(3, 2))
   closestPoint = om.MPoint()
   mu = om.MScriptUtil()
   mu.createFromInt(0)
   polygon = mu.asIntPtr()
   meshFn.getClosestPoint(centrePos, closestPoint, om.MSpace.kWorld, polygon)
   closestPolygon = mu.getInt(polygon)

   faceVertArray = om.MIntArray()
   meshFn.getPolygonVertices(closestPolygon, faceVertArray)

   colorArray = om.MColorArray()
   magArray = om.MFloatArray()

   # distance from the closest point to each vertex of the closest polygon
   for i in range(faceVertArray.length()):
    mag = closestPoint.distanceTo(sourceVerts[faceVertArray[i]])
    magArray.append(mag)
    colorArray.append(colors[faceVertArray[i]])

   weights = om.MFloatArray(magArray.length(), 0.0)
   foundOne = 0
   weightsTotal = 0.0

   # if the closest point sits exactly on a vertex use that vertex colour outright
   for i in range(magArray.length()):

    if magArray[i] == 0.0:
     weights.set(1.0, i)
     foundOne = 1
     weightsTotal = 1.0

   # otherwise weight each colour by inverse distance
   if foundOne == 0:

    for i in range(magArray.length()):
     weights.set(1.0 / magArray[i], i)
     weightsTotal += weights[i]

   unit = 1.0 / weightsTotal
   weightedColor = [0, 0, 0]

   for i in range(magArray.length()):
    w = unit * weights[i]
    weightedColor[0] += colorArray[i][0] * w
    weightedColor[1] += colorArray[i][1] * w
    weightedColor[2] += colorArray[i][2] * w

   weightedColorVec = om.MFloatVector(weightedColor[0], weightedColor[1], weightedColor[2])

   # push the results to the output plugs
   outColorHandle.setMFloatVector(weightedColorVec)
   outClosestHandle.setMFloatVector(om.MFloatVector(closestPoint.x, closestPoint.y, closestPoint.z))

  data.setClean(plug)

def nodeCreator():

 return omMPx.asMPxPtr(ColorSpaceReader())

def nodeInitializer():
 nAttr = om.MFnNumericAttribute()
 mAttr = om.MFnMatrixAttribute()
 tAttr = om.MFnTypedAttribute()

 ColorSpaceReader.inMesh = tAttr.create("inMesh", "im", om.MFnData.kMesh)

 ColorSpaceReader.inCentre = mAttr.create("centre", "c")

 ColorSpaceReader.outClosest = nAttr.createPoint("closestPoint", "cp")

 ColorSpaceReader.outColor = nAttr.createPoint("outColor", "col")

 ColorSpaceReader.addAttribute(ColorSpaceReader.inMesh)
 ColorSpaceReader.addAttribute(ColorSpaceReader.inCentre)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outClosest)
 ColorSpaceReader.addAttribute(ColorSpaceReader.outColor)

 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outColor)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inMesh, ColorSpaceReader.outClosest)
 ColorSpaceReader.attributeAffects(ColorSpaceReader.inCentre, ColorSpaceReader.outClosest)

def initializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.registerNode(kPluginNodeTypeName, kPluginNodeId, nodeCreator, nodeInitializer, omMPx.MPxNode.kDependNode, kPluginNodeClassify)

def uninitializePlugin(mobject):
 fnPlugin = omMPx.MFnPlugin(mobject)
 fnPlugin.deregisterNode(kPluginNodeId)

Friday, 3 November 2017

Blood and Truth (a.k.a What I have been working on)

It's always nice when titles you work on are finally shown to the public for the first time, especially if the title is relatively well received, which came as a nice surprise.
Blood and Truth is the successor to The London Heist, the bite-size experience released for PlayStation VR by London Studio over a year ago. Naturally, being VR, it is a first person shooter that combines on-rails and waypoint based movement. It's worth noting that the waypoint movement is not teleportation, which makes a nice change, and that in many areas the routes diverge, so the player has options as to how they wish to move around the level.
The gun-play can be a bit frantic at times but this is at least mixed with the possibility of stealthy movement so that enemies simply do not spot you.
Anyway, take a look and if you feel like it, leave a comment. I would be interested in hearing some thoughts.

Thursday, 28 September 2017

Blendshape Conversion Tool

Just a quick update to the UV Based Blendshape Conversion post I added a while back.
I have now embedded the logic from the previous post into a useful tool that can, amongst one or two other things, project blendshape target data through multiple LODs. Watch the video to see an example.

Blendshape Conversion Tool from SBGrover on Vimeo.

Wednesday, 23 August 2017

Tip #2: MDoubleArray Bug

I recently discovered that there is a bug with the Maya Python API when dealing with setting data in an MDoubleArray.

I was writing a few tools to deal with skinning information but found that my previously normalised values kept being adjusted to slightly below or slightly above 1. Visually this had little impact, although seams on meshes showed evidence of splitting under extreme deformation. I wanted precise results, however, and so was confused by the values I was getting out of the tool.

When passing skinning information between different meshes on the same skeletal hierarchy it can be necessary to re-order the information to match the influence order of one skin cluster to another. To deal with this part of the process, one of my tools would read source skin data into an MDoubleArray, then build a new MDoubleArray with a length matching the number of influences on the target cluster. Part of this process involved passing weighting values per influence into a Python variable to store them and then dropping that into the right location in the new MDoubleArray. Passing the value back into the MDoubleArray was where it was altered. Python floats are double precision, so at first I struggled to see where the loss could come from.

Try out the below in the script editor to see what I mean. I use Maya 2016 but I am under the impression that this happens in all versions of Maya.

import maya.OpenMaya as om
source = om.MDoubleArray()
source.set(0.1234567890123, 0)
print source

When speaking with someone from Autodesk it turned out that the python wrapper for the API only passes data using the 'float' method of the MDoubleArray class and so the type conversion causes the error in value. This must affect hundreds of Maya Python scripts around the world so I am hoping this might get addressed in the future. In the meantime this is the way around the problem.

import maya.OpenMaya as om
source = om.MDoubleArray()
source[0] = 0.1234567890123
print source

For some reason this way of setting the value gives a different, more accurate result. Make sure you have set the length of your MDoubleArray first or this will not work.
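The size of the error is consistent with a double being squeezed through a 32-bit float on the way in. You can reproduce the same loss outside Maya with the struct module; this only demonstrates the precision effect, it does not touch the API:

```python
import struct

def through_float32(value):
    # round-trip a Python double through a single-precision float
    return struct.unpack('f', struct.pack('f', value))[0]

original = 0.1234567890123
stored = through_float32(original)
# the stored value now differs from the original around the eighth significant digit
```

A 32-bit float carries roughly seven significant decimal digits, which is exactly the kind of drift seen in the normalised skin weights.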

Friday, 4 August 2017

Stretch Compress Deformer

Although working in games limits me to joint and blendshape solutions to achieve reasonable levels of deformation on characters, sometimes it's nice to take a departure from this and think a bit further afield.
A typical problem we have in our engine is that, like many others, it does not support joint scaling, either uniform or non-uniform. This can be a bit of a challenge when trying to maintain volume in characters, as something that could be driven by one scaling joint and some simple skinning has to end up being driven by three or four joints that translate away from each other. When time is critical it can be frustrating to set up basic stuff like this, as it takes time to adjust the weighting and driving to give the right effect.
As for the driving, a pose space solution is generally relied on (at least where I work) to help drive these joints in the right manner. Setting this up takes time and can sometimes break when a character twists too far or away from the pose readers.

This is where the Stretch Compress Deformer could be of use.

This plugin is applied directly to a skinned mesh and it's result is entirely driven by the measured area of all polygons within the mesh rather than an external reader. Input target shapes give an example of the shape the mesh must achieve in the areas that compress or stretch. It can also be weighted so that only small areas are considered which will of course aid performance.
In approaching the plugin I knew that I would need to calculate the area of a polygon. I did not realise that MItMeshPolygon had its own function specifically for this: getArea.
Instead I used Heron's Formula, although there are a number of ways of finding the result.
By storing the area of all triangles on the deformed mesh initially, and then on each update comparing this original set to a new set, it is possible to obtain a shortlist of triangles whose surface area has decreased (compression) and those that have increased (stretching). Converting those faces to vertices then means that the current shape can be adjusted to match that of the input target shapes, based on a weight that can be controlled by the user.
Since we will also have stored off the vertex positions from the bind (the shape of the mesh before deformation), target and stretch shapes, we can obtain the deltas between their corresponding vertices. By taking the dot product of each delta with the corresponding normal vector from the bind, a scalar is obtained. Multiplying the deformed normal vector by this scalar before adding the result to the current point position pushes the deformed vertex inwards or outwards depending on triangle surface area.
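Heron's formula itself is only a few lines: compute the three edge lengths, take the semi-perimeter, and the area falls out. Here is a standalone version of the calculation:

```python
import math

def triangle_area(a, b, c):
    """Area of the triangle with 3D corners a, b, c via Heron's formula."""
    def length(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

    ab, ac, bc = length(a, b), length(a, c), length(b, c)
    s = (ab + ac + bc) / 2.0  # semi-perimeter
    return math.sqrt(s * (s - ab) * (s - ac) * (s - bc))
```

Comparing this value per triangle between the bind mesh and the deformed mesh is what classifies each triangle as compressing or stretching.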


Stretch Compress Deformer from SBGrover on Vimeo.

I provide python code below. Note that this is not a version of the plugin I wrote but instead a python script example intended to be run in the script editor. As a result it does have certain caveats. The logic is exactly the same but it can only be run once on a mesh and if the mesh has been posed using joints then it will need to be unbound beforehand. The provided script is meant purely as an aid to learning and not as a complete solution to the problem. I leave it to you to push it further and convert it into a plugin.

To use the script:

1. Create a Base shape and a compressed and stretched version of the Base shape. The topology must match EXACTLY.
2. If you wish to, skin the Base shape to joints.
3. Select the Base, Stretch and Compress in that order.
4. Run the first part of the script.
5. Pose the Base shape either by moving the geometry or moving the joints.
6. Delete the history on the Base shape if it is skinned or has been adjusted using a deformer.
7. Run the second script.


import maya.OpenMaya as om
import math

# Need: One compress and one stretch target and a skinned mesh in bind pose

# have the three meshes selected in the following order: bind, stretch, compress
sel = om.MSelectionList()
om.MGlobal.getActiveSelectionList(sel)

# bind
dag_path = om.MDagPath()
sel.getDagPath(0, dag_path)
bind_fn = om.MFnMesh(dag_path)

# stretch
sel.getDagPath(1, dag_path)
stretch_fn = om.MFnMesh(dag_path)
stretch_points = om.MPointArray()
stretch_fn.getPoints(stretch_points, om.MSpace.kObject)

# compress
sel.getDagPath(2, dag_path)
compress_fn = om.MFnMesh(dag_path)
compress_points = om.MPointArray()
compress_fn.getPoints(compress_points, om.MSpace.kObject)

# variables
overall_weight = 2 # change this to increase / decrease the overall effect
compress_weight = 5 # change this to increase / decrease the compress effect. 0 means not calculated
stretch_weight = 5 # change this to increase / decrease the stretch effect. 0 means not calculated

# arrays
bind_points = om.MPointArray()
bind_fn.getPoints(bind_points, om.MSpace.kObject)

bind_triangle_count = om.MIntArray()
bind_triangle_indices = om.MIntArray()
bind_fn.getTriangles(bind_triangle_count, bind_triangle_indices)

bind_normal_array = om.MFloatVectorArray()
bind_fn.getVertexNormals(0, bind_normal_array, om.MSpace.kObject)

# get the bind area array from the bind triangles and bind points
bind_area_array_dict = {}
length = bind_triangle_indices.length()
triangle_index = 0

for count in range(0, length, 3):
 triangle = (bind_triangle_indices[count], bind_triangle_indices[count + 1], bind_triangle_indices[count + 2])
 triangleAB = bind_points[triangle[0]] - bind_points[triangle[1]]
 triangleAC = bind_points[triangle[0]] - bind_points[triangle[2]]
 triangleBC = bind_points[triangle[1]] - bind_points[triangle[2]]
 triangleAB_magnitude = triangleAB.length()
 triangleAC_magnitude = triangleAC.length()
 triangleBC_magnitude = triangleBC.length()
 heron = (triangleAB_magnitude + triangleAC_magnitude + triangleBC_magnitude) / 2
 area = math.sqrt(heron * (heron - triangleAB_magnitude) * (heron - triangleAC_magnitude) * (heron - triangleBC_magnitude))
 bind_area_array_dict[triangle_index] = [triangle, area]
 triangle_index += 1


# NOW POSE YOUR MESH AND RUN THIS. If the mesh is bound you will need to unbind it for this part to work. If you decide to build this as a deformer you will not need to address this

sel.getDagPath(0, dag_path)
deformed_fn = om.MFnMesh(dag_path)
# get the point positions for the deformed mesh
deformed_points = om.MPointArray()
deformed_fn.getPoints(deformed_points, om.MSpace.kObject )

# get the deformed area array from the bind triangles and deformed points
deformed_area_array_dict = {}
length = bind_triangle_indices.length()
triangle_index = 0

for count in range(0, length, 3):
 triangle = (bind_triangle_indices[count], bind_triangle_indices[count + 1], bind_triangle_indices[count + 2])
 triangleAB = deformed_points[triangle[0]] - deformed_points[triangle[1]]
 triangleAC = deformed_points[triangle[0]] - deformed_points[triangle[2]]
 triangleBC = deformed_points[triangle[1]] - deformed_points[triangle[2]]
 triangleAB_magnitude = triangleAB.length()
 triangleAC_magnitude = triangleAC.length()
 triangleBC_magnitude = triangleBC.length()
 heron = (triangleAB_magnitude + triangleAC_magnitude + triangleBC_magnitude) / 2
 area = math.sqrt(heron * (heron - triangleAB_magnitude) * (heron - triangleAC_magnitude) * (heron - triangleBC_magnitude))
 deformed_area_array_dict[triangle_index] = [triangle, area]
 triangle_index += 1

#get the vertex normals for the deformed mesh
deformed_normal_array = om.MFloatVectorArray()
deformed_fn.getVertexNormals(0, deformed_normal_array, om.MSpace.kObject)

length = len(deformed_area_array_dict)
done_array = []

for num in range(length):

 # check to see if the triangle area between the bind and current is different. If less it's compressing, if more it's stretching
 deformation_amount = deformed_area_array_dict[num][1] - bind_area_array_dict[num][1]

 if (deformation_amount < -0.0001 and compress_weight != 0) or (deformation_amount > 0.0001 and stretch_weight != 0):

  compress = False
  stretch = False

  if deformation_amount < -0.0001:
   compress = True

  if deformation_amount > 0.0001:
   stretch = True

  # get list of all indices in current triangle
  idx1 = deformed_area_array_dict[num][0][0]
  idx2 = deformed_area_array_dict[num][0][1]
  idx3 = deformed_area_array_dict[num][0][2]

  # get the current position of each vertex using the indices
  vtx1 = deformed_points[idx1]
  vtx2 = deformed_points[idx2]
  vtx3 = deformed_points[idx3]

  # calculate the delta of the vertices between the bind and the input compress shape
  if compress:
   delta1 = compress_points[idx1] - bind_points[idx1]
   delta2 = compress_points[idx2] - bind_points[idx2]
   delta3 = compress_points[idx3] - bind_points[idx3]

  if stretch:
   delta1 = stretch_points[idx1] - bind_points[idx1]
   delta2 = stretch_points[idx2] - bind_points[idx2]
   delta3 = stretch_points[idx3] - bind_points[idx3]

  # multiply the weights. delta * deformation amount * compress or stretch weight * overall weight
  if compress:
   delta1 *= compress_weight * overall_weight * abs(deformation_amount)
   delta2 *= compress_weight * overall_weight * abs(deformation_amount)
   delta3 *= compress_weight * overall_weight * abs(deformation_amount)

  if stretch:
   delta1 *= stretch_weight * overall_weight * abs(deformation_amount)
   delta2 *= stretch_weight * overall_weight * abs(deformation_amount)
   delta3 *= stretch_weight * overall_weight * abs(deformation_amount)
  # get the current normal direction on the deformed shape - object space - and convert to a MVector from MFloatVector
  deformed_nor1 = om.MVector(deformed_normal_array[idx1])
  deformed_nor2 = om.MVector(deformed_normal_array[idx2])
  deformed_nor3 = om.MVector(deformed_normal_array[idx3])

  # get the corresponding normal direction on the bind shape - object space - and convert to a MVector from MFloatVector
  bind_nor1 = om.MVector(bind_normal_array[idx1])
  bind_nor2 = om.MVector(bind_normal_array[idx2])
  bind_nor3 = om.MVector(bind_normal_array[idx3])

  # get the dot product of the delta and the bind .This will give us a scaler based on how far the delta vector is projected along the bind vector
  delta_dot1 = bind_nor1 * delta1
  delta_dot2 = bind_nor2 * delta2
  delta_dot3 = bind_nor3 * delta3

  # multiply the deformed normal vector by the delta dot to scale it accordingly
  deformed_nor1 *= delta_dot1
  deformed_nor2 *= delta_dot2
  deformed_nor3 *= delta_dot3

  # add this value to the current deformed vertex position
  vtx1 = (vtx1.x + deformed_nor1.x, vtx1.y + deformed_nor1.y, vtx1.z + deformed_nor1.z)
  vtx2 = (vtx2.x + deformed_nor2.x, vtx2.y + deformed_nor2.y, vtx2.z + deformed_nor2.z)
  vtx3 = (vtx3.x + deformed_nor3.x, vtx3.y + deformed_nor3.y, vtx3.z + deformed_nor3.z)
  # put the result back into the deformed point array at the correct vertex index
  deformed_points.set(idx1, vtx1[0], vtx1[1], vtx1[2], 1)
  deformed_points.set(idx2, vtx2[0], vtx2[1], vtx2[2], 1)
  deformed_points.set(idx3, vtx3[0], vtx3[1], vtx3[2], 1)

# apply the result back to the vertices
deformed_fn.setPoints(deformed_points, om.MSpace.kObject)