PyQt: Maya Character Picker

Coming Soon: An example of a fully featured character picker using the magic of PySide

Texture Based Deformer

Deform a mesh based on the colour values derived from a procedural texture

Visibility Node v2.0

A tool to help visualise hidden mesh objects by utilising componentModifiers

UV Based Blendshape Conversion

Convert blendshape targets on meshes with differing topologies

Python and PYQT Image Compare Tool

Investigation into writing a standalone application that can be compiled and run within Windows

Friday, 3 November 2017

Blood and Truth (a.k.a What I have been working on)

It's always nice when titles that you work on are finally shown to the public for the first time, especially if the title is relatively well received, which came as a nice surprise.
Blood and Truth is the successor to The London Heist, the bite-size experience released for PlayStation VR by London Studio over a year ago. Naturally, being VR, it is a first person shooter that combines on-rails and waypoint based movement. It's worth noting that the waypoint movement is not teleportation, which makes a nice change, and that in many areas the routes diverge, so the player has a choice of how to move around the level.
The gun-play can be a bit frantic at times, but this is at least mixed with the possibility of stealthy movement so that enemies simply do not spot you.
Anyway, take a look and if you feel like it, leave a comment. I would be interested in hearing some thoughts.

Thursday, 28 September 2017

Blendshape Conversion Tool

Just a quick update to the UV Based Blendshape Conversion post I added a while back.
I have now embedded the logic used in the previous post into a useful tool that can, amongst one or two other things, project blendshape target data through multiple LODs. Watch the video to see an example.

Blendshape Conversion Tool from SBGrover on Vimeo.

Wednesday, 23 August 2017

Tip #2: MDoubleArray Bug

I recently discovered that there is a bug in the Maya Python API when setting data in an MDoubleArray.

I was writing a few tools to deal with skinning information but found that my previously normalised values kept being adjusted to slightly below or slightly above 1. Visually this had little impact, although seams on meshes showed evidence of splitting under extreme deformation. However, I wanted precise results and so was confused by the values I was getting out of the tool.

When passing skinning information between different meshes on the same skeletal hierarchy it can be necessary to re-order the information to match the influence order of one skin cluster to another. To deal with this part of the process one of my tools would read source skin data into an MDoubleArray, then build a new MDoubleArray with a length matching the number of influences on the target cluster. Part of this process involved storing weighting values per influence in a Python variable and then dropping them into the right location in the new MDoubleArray. I found that the values were being altered at the point they were passed back into the MDoubleArray. Python does not have a separate double type, but its float type is double precision, and at first I thought this mismatch was to blame.
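
To make that re-ordering step concrete, here is a minimal sketch of remapping one vertex's weights from a source influence order to a target influence order. It is not the actual tool; the joint names and weight values are purely hypothetical, and the influence lists would normally come from MFnSkinCluster.influenceObjects().

import maya.OpenMaya as om

# hypothetical influence orders for the source and target skin clusters
source_influences = ['joint1', 'joint2', 'joint3']
target_influences = ['joint3', 'joint1', 'joint2']

# one vertex worth of normalised weights, stored in the source influence order
source_weights = om.MDoubleArray()
source_weights.setLength(len(source_influences))
for i, value in enumerate([0.5, 0.25, 0.25]):
    source_weights[i] = value

# rebuild the same weights in the target influence order
target_weights = om.MDoubleArray()
target_weights.setLength(len(target_influences))

for target_idx, name in enumerate(target_influences):
    source_idx = source_influences.index(name)
    target_weights[target_idx] = source_weights[source_idx]

print target_weights # the same weights, re-ordered to suit the target cluster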

Try out the code below in the script editor to see what I mean. I use Maya 2016, but I am under the impression that this happens in all versions of Maya.

import maya.OpenMaya as om
source = om.MDoubleArray()
source.setLength(1)
source.set(0.1234567890123, 0)
print source

When speaking with someone from Autodesk it turned out that the Python wrapper for the API only passes data through the 'float' method of the MDoubleArray class, and so the type conversion causes the error in the value. This must affect hundreds of Maya Python scripts around the world, so I am hoping it might get addressed in the future. In the meantime, this is the way around the problem.

import maya.OpenMaya as om
source = om.MDoubleArray()
source.setLength(1)
source[0] = 0.1234567890123
print source

For some reason this way of setting the value gives a different, more accurate result. Make sure you have set the length of your MDoubleArray first or this will not work.

Friday, 4 August 2017

Stretch Compress Deformer

Although working in games limits me to joint and blendshape solutions to achieve reasonable levels of deformation on characters, sometimes it's nice to take a departure from this and think a bit further afield.
A typical problem we have in our engine is that, like many others, it does not support joint scaling, either uniformly or non-uniformly. This can be a bit of a challenge when trying to maintain volume in characters, as something that could be driven by one scaling joint and some simple skinning ends up being driven by three or four joints that translate away from each other. When time is critical it can be frustrating to set up basic stuff like this, as it takes time to adjust the weighting and driving to give the right effect.
When it comes to driving, a pose space solution is generally relied on (at least where I work) to drive these joints in the right manner. Setting this up takes time and can break when a character twists too far or away from the pose readers.

This is where the Stretch Compress Deformer could be of use.

This plugin is applied directly to a skinned mesh and its result is driven entirely by the measured area of all polygons within the mesh rather than by an external reader. Input target shapes give an example of the shape the mesh must achieve in the areas that compress or stretch. It can also be weighted so that only small areas are considered, which will of course aid performance.
In approaching the plugin I knew that I would need to calculate the area of a polygon. I did not realise that MItMeshPolygon has its own function specifically for this: getArea.
Instead I used Heron's formula, although there are a number of ways of finding the result.
By storing the area of all triangles on the deformed mesh initially, and then on each update comparing this original set to a new set, it is possible to obtain a shortlist of triangles whose surface area has decreased (compression) and those whose area has increased (stretching). Converting those faces to vertices then means that the current shape can be adjusted to match the input target shapes, based on a weight that can be controlled by the user.
Having also initially stored the vertex positions of the bind (the shape of the mesh before deformation), compress and stretch shapes, we can obtain the deltas between their corresponding vertices. Taking the dot product of each delta with the corresponding normal vector from the bind gives a scalar. Multiplying the deformed normal vector by this scalar and adding the result to the current point position pushes the deformed vertex inwards or outwards depending on the change in triangle surface area.
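
As an aside, the getArea route looks like the minimal sketch below (assuming a mesh transform is selected); it can be compared against the Heron's formula approach used in the scripts further down.

import maya.OpenMaya as om

# prints the object-space area of every polygon on the selected mesh
sel = om.MSelectionList()
om.MGlobal.getActiveSelectionList(sel)
dag_path = om.MDagPath()
sel.getDagPath(0, dag_path)

# getArea returns its result through a double pointer, so MScriptUtil is needed
area_util = om.MScriptUtil()
area_util.createFromDouble(0.0)
area_ptr = area_util.asDoublePtr()

itr = om.MItMeshPolygon(dag_path)
while not itr.isDone():
    itr.getArea(area_ptr, om.MSpace.kObject)
    print itr.index(), om.MScriptUtil.getDouble(area_ptr)
    itr.next()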

EXAMPLE VIDEO

Stretch Compress Deformer from SBGrover on Vimeo.

I provide Python code below. Note that this is not a version of the plugin I wrote, but a Python script example intended to be run in the script editor. As a result it has certain caveats. The logic is exactly the same, but it can only be run once on a mesh, and if the mesh has been posed using joints it will need to be unbound beforehand. The provided script is meant purely as an aid to learning and not as a complete solution to the problem. I leave it to you to push it further and convert it into a plugin.

To use the script:

1. Create a Base shape and a compressed and stretched version of the Base shape. The topology must match EXACTLY.
2. If you wish to, skin the Base shape to joints.
3. Select the Base, Stretch and Compress in that order.
4. Run the first part of the script.
5. Pose the Base shape either by moving the geometry or moving the joints.
6. Delete the history on the Base shape if it is skinned or has been adjusted using a deformer.
7. Run the second script.

SCRIPT PART 1 TO BE RUN ON BASE, STRETCH, COMPRESS

import maya.OpenMaya as om
import math

# Need: one compress and one stretch target and a skinned mesh in bind pose
# RUN THIS ONCE WITH MESH IN BIND POSE

# have the three meshes selected in the following order: bind, stretch, compress
sel = om.MSelectionList()
om.MGlobal.getActiveSelectionList(sel)

# bind
dag_path = om.MDagPath()
sel.getDagPath(0, dag_path)
bind_fn = om.MFnMesh(dag_path)

# stretch
sel.getDagPath(1, dag_path)
stretch_fn = om.MFnMesh(dag_path)
stretch_points = om.MPointArray()
stretch_fn.getPoints(stretch_points, om.MSpace.kObject)

# compress
sel.getDagPath(2, dag_path)
compress_fn = om.MFnMesh(dag_path)
compress_points = om.MPointArray()
compress_fn.getPoints(compress_points, om.MSpace.kObject)

# variables
overall_weight = 2 # change this to increase / decrease the overall effect
compress_weight = 5 # change this to increase / decrease the compress effect. 0 means not calculated
stretch_weight = 5 # change this to increase / decrease the stretch effect. 0 means not calculated

# arrays
bind_points = om.MPointArray()
bind_fn.getPoints(bind_points, om.MSpace.kObject)

bind_triangle_count = om.MIntArray()
bind_triangle_indices = om.MIntArray()
bind_fn.getTriangles(bind_triangle_count, bind_triangle_indices)

bind_normal_array = om.MFloatVectorArray()
bind_fn.getVertexNormals(0, bind_normal_array, om.MSpace.kObject)

# get the bind area array from the bind triangles and bind points
bind_area_array_dict = {}
length = bind_triangle_indices.length()
triangle_index = 0

for count in range(0, length, 3):
 triangle = (bind_triangle_indices[count], bind_triangle_indices[count + 1], bind_triangle_indices[count + 2])
 triangleAB = bind_points[triangle[0]] - bind_points[triangle[1]]
 triangleAC = bind_points[triangle[0]] - bind_points[triangle[2]]
 triangleBC = bind_points[triangle[1]] - bind_points[triangle[2]]
 triangleAB_magnitude = triangleAB.length()
 triangleAC_magnitude = triangleAC.length()
 triangleBC_magnitude = triangleBC.length()
 heron = (triangleAB_magnitude + triangleAC_magnitude + triangleBC_magnitude) / 2
 area = math.sqrt(heron * (heron - triangleAB_magnitude) * (heron - triangleAC_magnitude) * (heron - triangleBC_magnitude))
 bind_area_array_dict[triangle_index] = [triangle, area]
 triangle_index += 1

SCRIPT PART 2 TO BE RUN ON DEFORMED SHAPE

# NOW POSE YOUR MESH AND RUN THIS. If the mesh is bound you will need to unbind it for this part to work. If you decide to build this as a deformer you will not need to address this

sel.getDagPath(0, dag_path)
deformed_fn = om.MFnMesh(dag_path)
 
# get the point positions for the deformed mesh
deformed_points = om.MPointArray()
deformed_fn.getPoints(deformed_points, om.MSpace.kObject )

# get the deformed area array from the bind triangles and deformed points
deformed_area_array_dict = {}
length = bind_triangle_indices.length()
triangle_index = 0

for count in range(0, length, 3):
 triangle = (bind_triangle_indices[count], bind_triangle_indices[count + 1], bind_triangle_indices[count + 2])
 triangleAB = deformed_points[triangle[0]] - deformed_points[triangle[1]]
 triangleAC = deformed_points[triangle[0]] - deformed_points[triangle[2]]
 triangleBC = deformed_points[triangle[1]] - deformed_points[triangle[2]]
 triangleAB_magnitude = triangleAB.length()
 triangleAC_magnitude = triangleAC.length()
 triangleBC_magnitude = triangleBC.length()
 heron = (triangleAB_magnitude + triangleAC_magnitude + triangleBC_magnitude) / 2
 area = math.sqrt(heron * (heron - triangleAB_magnitude) * (heron - triangleAC_magnitude) * (heron - triangleBC_magnitude))
 deformed_area_array_dict[triangle_index] = [triangle, area]
 triangle_index += 1


#get the vertex normals for the deformed mesh
deformed_normal_array = om.MFloatVectorArray()
deformed_fn.getVertexNormals(0, deformed_normal_array, om.MSpace.kObject)

length = len(deformed_area_array_dict)
done_array = []

for num in range(length):

 # check to see if the triangle area differs between the bind and the current mesh. If it is less the triangle is compressing, if more it is stretching
 deformation_amount = deformed_area_array_dict[num][1] - bind_area_array_dict[num][1]

 if deformation_amount < -0.0001 and compress_weight != 0 or deformation_amount > 0.0001 and stretch_weight != 0:

  compress = False
  stretch = False

  if deformation_amount < -0.0001:
   compress = True

  if deformation_amount > 0.0001:
   stretch = True

  # get list of all indices in current triangle
  idx1 = deformed_area_array_dict[num][0][0]
  idx2 = deformed_area_array_dict[num][0][1]
  idx3 = deformed_area_array_dict[num][0][2]

  # get the current position of each vertex using the indices
  vtx1 = deformed_points[idx1]
  vtx2 = deformed_points[idx2]
  vtx3 = deformed_points[idx3]

  # calculate the delta of the vertices between the bind and the input compress shape
  if compress:
   delta1 = compress_points[idx1] - bind_points[idx1]
   delta2 = compress_points[idx2] - bind_points[idx2]
   delta3 = compress_points[idx3] - bind_points[idx3]

  if stretch:
   delta1 = stretch_points[idx1] - bind_points[idx1]
   delta2 = stretch_points[idx2] - bind_points[idx2]
   delta3 = stretch_points[idx3] - bind_points[idx3]

  # multiply the weights. delta * deformation amount * compress or stretch weight * overall weight
  if compress:
   delta1 *= compress_weight * overall_weight * abs(deformation_amount)
   delta2 *= compress_weight * overall_weight * abs(deformation_amount)
   delta3 *= compress_weight * overall_weight * abs(deformation_amount)

  if stretch:
   delta1 *= stretch_weight * overall_weight * abs(deformation_amount)
   delta2 *= stretch_weight * overall_weight * abs(deformation_amount)
   delta3 *= stretch_weight * overall_weight * abs(deformation_amount)
   
  # get the current normal direction on the deformed shape - object space - and convert to a MVector from MFloatVector
  deformed_nor1 = om.MVector(deformed_normal_array[idx1])
  deformed_nor2 = om.MVector(deformed_normal_array[idx2])
  deformed_nor3 = om.MVector(deformed_normal_array[idx3])

  # get the corresponding normal direction on the bind shape - object space - and convert to a MVector from MFloatVector
  bind_nor1 = om.MVector(bind_normal_array[idx1])
  bind_nor2 = om.MVector(bind_normal_array[idx2])
  bind_nor3 = om.MVector(bind_normal_array[idx3])

  # get the dot product of the delta and the bind normal. This gives us a scalar based on how far the delta vector projects along the bind normal
  delta_dot1 = bind_nor1 * delta1
  delta_dot2 = bind_nor2 * delta2
  delta_dot3 = bind_nor3 * delta3

  # multiply the deformed normal vector by the delta dot to scale it accordingly
  deformed_nor1 *= delta_dot1
  deformed_nor2 *= delta_dot2
  deformed_nor3 *= delta_dot3

  # add this value to the current deformed vertex position
  vtx1 = (vtx1.x + deformed_nor1.x, vtx1.y + deformed_nor1.y, vtx1.z + deformed_nor1.z)
  vtx2 = (vtx2.x + deformed_nor2.x, vtx2.y + deformed_nor2.y, vtx2.z + deformed_nor2.z)
  vtx3 = (vtx3.x + deformed_nor3.x, vtx3.y + deformed_nor3.y, vtx3.z + deformed_nor3.z)
  
  # put the result back into the deformed point array at the correct vertex index
  deformed_points.set(idx1, vtx1[0], vtx1[1], vtx1[2], 1)
  deformed_points.set(idx2, vtx2[0], vtx2[1], vtx2[2], 1)
  deformed_points.set(idx3, vtx3[0], vtx3[1], vtx3[2], 1)

# apply the result back to the vertices
deformed_fn.setPoints(deformed_points, om.MSpace.kObject)

Thursday, 29 June 2017

Python and PYQT Image Compare Tool

As my work is predominantly focused within Maya, I do not have a lot of experience of creating standalone tools for use outside of a Maya environment.
These days Maya has everything you need to write decent tools. Python is embedded along with PySide for Qt, and now that the API has Python support it is much easier to write complex tooling than with the purely MEL and C++ offering of the bad old days.
However, writing tooling outside of Maya is a different beast, as none of this comes pre-installed.

ImageCompare Tool

For my first foray into the world of standalone applications I decided to write a simple tool with the aim of getting used to things. In the end I went for one that finds similar or identical images based on a source image, giving you the option of checking or deleting the duplicates.


Python and PyQt
Python Interpreter: Python 2.7 64 bit
PyQt: PyQt for 2.7 64 bit exe

When I first approached this, the most confusing thing was looking at all the different versions of the software and libraries and working out which one was applicable. 64 bit or 32 bit? 2.7, 3.3, source, binary, exe, tar... blah, blah, blah. After some false starts I found a combination that worked well for me, although depending on your requirements you may need different versions of the software.
The links above give you enough to get up and running without much hassle, and as all of the above are installers they will be set up automatically, skipping the more complicated requirements of the manual PyQt install, which includes fiddling with SIP.


Python Imaging Library
Python Imaging Library: PIL 64 bit exe

As this tool was intended for finding duplicate images I needed an extra library: PIL, the Python Imaging Library.
I wanted to be able to open an image and then, provided its dimensions matched the source, return the pixel RGB values based on a sample rate, for instance every tenth pixel. As long as the values match, the compare continues until the end of the image is reached. If there is no disparity, a match has been found. PIL gives all of this and more, and seemed to be incredibly quick at pixel sampling.
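
As a rough illustration of that idea, below is a minimal sketch (not the tool itself) of comparing two same-sized images with PIL by sampling every tenth pixel; the function name and tolerance handling are just for illustration.

import Image # classic PIL import; with Pillow this would be 'from PIL import Image'

def images_match(path_a, path_b, step=10, tolerance=0):
    img_a = Image.open(path_a).convert('RGB')
    img_b = Image.open(path_b).convert('RGB')

    # different dimensions means no match, so bail out early
    if img_a.size != img_b.size:
        return False

    width, height = img_a.size

    # sample every 'step' pixels rather than the whole image
    for index in xrange(0, width * height, step):
        x, y = index % width, index / width
        r1, g1, b1 = img_a.getpixel((x, y))
        r2, g2, b2 = img_b.getpixel((x, y))

        # any sampled pixel outside the tolerance fails the whole compare
        if abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2) > tolerance:
            return False

    return True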


Py2exe
Compile your python: Py2exe 64 bit

To test it, rather than running it over and over in the Python environment, I opted to convert it to an executable using py2exe. It was quick to turn the Python code into a tool that could be run with a simple double click. The only drawback of this method was that when there was a fault in my code the error was lost, as the window would close before it could be read. In the end I had to create a little batch file to run the executable with a pause at the end, allowing me to read each problem as it appeared.

To work with py2exe you will need a setup.py file. This is used by py2exe and gives it basic instructions about how to compile your .py file(s). I found that to run my main program I needed to call it from a separate .py file. This launcher is referenced in the setup.py file so that, when compiling, py2exe makes sure your program is run correctly based on what is in this file. Py2exe also sources all of the libraries you are using and includes them with the executable.
In addition you might consider using the batch file mentioned earlier to actually run your program whilst you are iterating and testing. This way you can catch any errors that occur.
These files are placed in a location relative to the Python folder. In my case I placed them directly at the root of the Python27 folder.

If compilation succeeds, the executable will be placed into a 'dist' folder along with the other libraries that the program requires.
One other thing to note is that if you wish to add an icon to your new application, all you need do is specify the filename after the 'icon_resources' key in the setup.py file. The caveat here is that the setup appears to need to run twice to properly embed the icon. This is probably a bug, or perhaps simply something I have missed. Running it twice obviously doubles the compilation time.

Check out the video below to see what I have so far and then below that is the source code for the application.

ImageCompare: Python, PyQt, PIL and py2exe from SBGrover on Vimeo.


Try it out!!
Below I include the files I have created for my application, simply to give you an idea of the setup and to have something to try out.

1. Write your code

ImageCompare.py

 # Import the modules  
 import sys  
 from PyQt4 import QtCore, QtGui  
 from functools import partial  
 import Image  
 import os  
 import subprocess  
   
   
 class VerticalWidget(QtGui.QWidget):  
   
   def __init__(self):  
     super(VerticalWidget, self).__init__()  
     self.layout = QtGui.QVBoxLayout(self)  
     self.setLayout(self.layout)  
   
   
 class HorizontalWidget(QtGui.QHBoxLayout):  
   
   def __init__(self, layout):  
     super(HorizontalWidget, self).__init__()  
     layout.addLayout(self, QtCore.Qt.AlignLeft)  
   
   
 class MainButtonWidget(QtGui.QPushButton):  
   
   def __init__(self, layout, name, main_object, command, width):  
     super(MainButtonWidget, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
     self.setMaximumWidth(width)  
     self.setMaximumHeight(24)  
     self.setText(name)  
     self.main_object = main_object  
     self.command = command  
   
   def mouseReleaseEvent(self, event):  
   
     if event.button() == QtCore.Qt.LeftButton:  
       self.run_command(self.command)  
   
   def run_command(self, command):  
     exec command  
   
   
 class TabLayout(QtGui.QTabWidget):  
   def __init__(self, tab_dict):  
     super(TabLayout, self).__init__()  
   
     for tab in tab_dict:  
       self.addTab(tab[0], tab[1])  
   
   
 class OutputView(QtGui.QListWidget):  
   def __init__(self, layout):  
     super(OutputView, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
     self.setSelectionMode(0)  
     self.setUpdatesEnabled(True)  
   
   
 class ListView(QtGui.QListWidget):  
   def __init__(self, layout):  
     super(ListView, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
     self.setSelectionMode(QtGui.QAbstractItemView.ExtendedSelection)  
     self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)  
     self.connect(self, QtCore.SIGNAL("customContextMenuRequested(QPoint)" ), self.rightClicked)  
   
   def rightClicked(self, QPos):  
     self.listMenu = QtGui.QMenu()  
   
     menu_item_a = QtGui.QAction("Open in Explorer", self.listMenu)  
     self.listMenu.addAction(menu_item_a)  
     menu_item_a.triggered.connect(self.open_in_explorer)  
   
     menu_item_b = QtGui.QAction("Delete File", self.listMenu)  
     self.listMenu.addAction(menu_item_b)  
     menu_item_b.triggered.connect(self.delete_file)  
   
     menu_item_c = QtGui.QAction("Open File", self.listMenu)  
     self.listMenu.addAction(menu_item_c)  
     menu_item_c.triggered.connect(self.open_file)  
   
     parentPosition = self.mapToGlobal(QtCore.QPoint(0, 0))  
     self.listMenu.move(parentPosition + QPos)  
   
     self.listMenu.show()  
   
   def open_in_explorer(self):  
     path = self.currentItem().text()  
     path = path.replace("/", "\\")  
     subprocess.Popen('explorer /select,' + r'%s' %path)  
   
   def open_file(self):  
     path = self.currentItem().text()  
     os.startfile(str(path))  
   
   def delete_file(self):  
     path_list = self.selectedItems()  
   
     for path in path_list:  
       os.remove(str(path.text()))  
       self.takeItem(self.row(path))  
   
   
 class SpinBox(QtGui.QSpinBox):  
   
   def __init__(self, layout):  
     super(SpinBox, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
     self.setValue(10)  
     self.setMaximumWidth(40)  
     self.setMaximum(100)  
     self.setMinimum(0)  
   
   
 class FileTextEdit(QtGui.QTextEdit):  
   
   def __init__(self, layout):  
     super(FileTextEdit, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignLeft)  
     self.setMaximumHeight(24)  
     self.setWordWrapMode(0)  
     self.setHorizontalScrollBarPolicy(1)  
     self.setVerticalScrollBarPolicy(1)  
   
   
 class Text(QtGui.QLabel):  
   
   def __init__(self, layout, text):  
     super(Text, self).__init__()  
     layout.addWidget(self, QtCore.Qt.AlignRight)  
     self.setText(text)  
     self.setMaximumWidth(70)  
   
   
 class ImageCompare_Helpers():  
   
   def get_image(self, path):  
     try:  
       im = Image.open(path)  
     except:  
       print "Pick a VALID image file (.jpg, .gif, .tga, .bmp, .png, .tif)"  
     rgb_im = im.convert('RGB')  
   
     return rgb_im  
   
   def get_pixel_color(self, img, sample_size):  
     width, height = img.size  
     pixel_total = width * height  
     pixel_set = []  
     pixel_range = pixel_total / sample_size  
   
     for pixel in range(pixel_range)[0::10]:  
       x, y = self.convert_to_pixel_position(pixel, width)  
       r, g, b = img.getpixel((x, y))  
       pixel_set.append([r, g, b])  
   
     return pixel_set  
   
   def convert_to_pixel_position(self, val, width):  
     x = val % width  
     y = val / width  
   
     return x, y  
   
   def get_file(self, main_object):  
   
     file = QtGui.QFileDialog.getOpenFileName(None, 'Select Source File', 'c:/',  
                            selectedFilter='*.jpg')  
   
     if file:  
       main_object.file_text.setText(file)  
   
   def get_folder(self, main_object):  
   
     folder = QtGui.QFileDialog.getExistingDirectory(None, 'Select Search Folder')  
   
     if folder:  
       main_object.folder_text.setText(folder)  
   
   
   def do_it(self, main_object):  
     #main_object.main_widget.close()  
     #main_object.app.quit()  
   
     output_view = main_object.output_view  
     output_view.clear()  
     main_object.list_view.clear()  
   
     directory = main_object.folder_text.toPlainText()  
     filename = main_object.file_text.toPlainText()  
   
     if filename and directory:  
       matching_images = []  
       self.sample_size = main_object.samples.value()  
       self.accuracy = main_object.threshold.value()  
       self.accuracy = abs(self.accuracy - 100)  
   
       img1 = self.get_image(str(filename))  
       img1_pixels = self.get_pixel_color(img1, self.sample_size)  
       all_images = self.read_all_subfolders(str(directory))  
       width, height = img1.size  
       img1_size = width * height  
       output_view.addItem("##################")  
       output_view.addItem("SOURCE IMAGE ----> " + str(filename))  
       output_view.addItem("")  
   
       for image in all_images:  
   
         output_view.scrollToBottom()  
         main_object.app.processEvents()  
         image = image.replace("\\", "/")  
   
         if image != filename:  
           output_view.addItem("Comparing: " + image)  
           img2 = self.get_image(image)  
           img2_width, img2_height = img2.size  
   
           if img2_width * img2_height == img1_size:  
             img2_pixels = self.get_pixel_color(img2, self.sample_size)  
             same = self.compare_images(img1_pixels, img2_pixels)  
   
             if same:  
               matching_images.append(image)  
   
       output_view.addItem("")  
       output_view.addItem("##################")  
       output_view.addItem("MATCHING IMAGES")  
   
       if matching_images:  
   
         for i in matching_images:  
           output_view.addItem(i)  
           main_object.list_view.addItem(i)  
   
       else:  
         output_view.addItem("NONE")  
   
       output_view.scrollToBottom()  
   
   def compare_images(self, img1_pixel_set, img2_pixel_set):  
     length = len(img1_pixel_set)  
   
     for count, i in enumerate(img1_pixel_set):  
       img1_total = i[0] + i[1] + i[2]  
       img2_total = img2_pixel_set[count][0] + img2_pixel_set[count][1] + img2_pixel_set[count][2]  
       img2_upper = img2_total + self.accuracy  
       img2_lower = img2_total - self.accuracy  
   
       if img2_lower <= img1_total <= img2_upper:  
   
         if count == length - 1:  
           return 1  
   
       else:  
         return 0  
         break  
   
   def read_all_subfolders(self, path):  
     all_images = []  
     suffix_list = [".jpg", ".gif", ".tga", ".bmp", ".png", ".tif"]  
   
     for root, dirs, files in os.walk(path):  
   
       for file in files:  
   
         for suffix in suffix_list:  
   
           if file.lower().endswith(suffix):  
             all_images.append(os.path.join(root, file))  
   
     return all_images  
   
   
 class ImageCompare_UI(ImageCompare_Helpers):  
   
   def __init__(self):  
     self.app = None  
     self.main_widget = None  
   
   def run_ui(self, ImageCompare):  
     self.app = QtGui.QApplication(sys.argv)  
     self.main_widget = QtGui.QWidget()  
     self.main_widget.resize(600, 600)  
   
     main_layout = VerticalWidget()  
   
     # VIEW FILES  
     vertical_layout_list = VerticalWidget()  
     self.list_view = ListView(vertical_layout_list.layout)  
     #vertical_layout_list.layout.setContentsMargins(0,0,0,0)  
     MainButtonWidget(vertical_layout_list.layout, "Delete All", self,  
              "", 128)  
   
     # FIND FILES  
     vertical_layout = VerticalWidget()  
     horizontal_layout_a = HorizontalWidget(vertical_layout.layout)  
   
     Text(horizontal_layout_a, "Source File")  
     self.file_text = FileTextEdit(horizontal_layout_a)  
     MainButtonWidget(horizontal_layout_a, "<<", self,  
              "file = self.main_object.get_file(self.main_object)", 32)  
   
     horizontal_layout_b = HorizontalWidget(vertical_layout.layout)  
   
     Text(horizontal_layout_b, "Source Folder")  
     self.folder_text = FileTextEdit(horizontal_layout_b)  
     MainButtonWidget(horizontal_layout_b, "<<", self,  
              "file = self.main_object.get_folder(self.main_object)", 32)  
   
     horizontal_layout_c = HorizontalWidget(vertical_layout.layout)  
   
     Text(horizontal_layout_c, "Accuracy %")  
     self.threshold = SpinBox(horizontal_layout_c)  
     self.threshold.setValue(50)  
     self.threshold.setToolTip("Deviation threshold for RGB Values. Higher means more deviation but more inaccuracy")  
     Text(horizontal_layout_c, "  Pixel Steps")  
     self.samples = SpinBox(horizontal_layout_c)  
     self.samples.setToolTip("Steps between each pixel sample. Higher is faster but less accurate")  
     horizontal_layout_c.addStretch()  
   
     # OUTPUT WINDOW  
     self.output_view = OutputView(vertical_layout.layout)  
     MainButtonWidget(vertical_layout.layout, "Cancel Search", self,  
              "", 128)  
     MainButtonWidget(vertical_layout.layout, "Run Image Compare", self, "self.main_object.do_it(self.main_object)", 128)  
   
     vertical_layout.layout.addStretch()  
   
     self.tab = TabLayout([[vertical_layout, "Find Files"], [vertical_layout_list, "Results"]])  
     main_layout.layout.addWidget(self.tab)  
   
     self.main_widget.setLayout(main_layout.layout)  
     self.main_widget.show()  
     sys.exit(self.app.exec_())  
   
   
 class ImageCompare(ImageCompare_UI):  
   
   def __init__(self):  
     print "RUNNING Compare"  
     QtCore.pyqtRemoveInputHook()  
     self.run_ui(self)  
   
   

setup.py

 from distutils.core import setup  
 from py2exe.build_exe import py2exe  
   
 setup_dict = dict(  
   windows = [{'script': "main.py",  
         "icon_resources": [(1, "image_compare.ico")], "dest_base": "ImageCompare"}],  
 )  
 setup(**setup_dict)  
 setup(**setup_dict)  
   

main.py

 import ImageCompare as ic  
 compare = ic.ImageCompare()  

2. Run py2exe to compile it
In cmd type:

 python setup.py py2exe   
Unless the Python folder is in your environment variables you will need to run this from within the Python27 folder.

3. Run a batch file to catch errors and iterate on your code
Create a new batch file that contains:

 ImageCompare.exe  
 pause  

Friday, 23 June 2017

UV Based Blendshape Conversion

Something we work with a lot in the games industry is LODs. For those who don't know, this stands for Level Of Detail, and the purpose of them is to provide incrementally more or less detailed versions of a mesh, and possibly a skeleton, based on the current view distance.
These days it is trivial to set these up using third party software, but certain things do not always get taken into account. One of these is blendshapes. If our top level mesh - the one that was originally modelled - has a set of corrective shapes then these do not get transferred to the LODs when they are created.
The point of the tool in this post is to help alleviate that issue by taking the blendshape targets on the top LOD and converting them down to each LOD in turn, even though the topology is completely different.
Historically blendshape targets have always had issues working with differing topology, as they require an exact match for vertex order between shapes. If this changes, the results can be diabolical.
This tool gets around that by ignoring vertex order. One thing that each LOD has in common with the original mesh is the UV layout, such as in the image below. Although these do not match exactly, they are relatively close.

It is easier to find matches for positions in 2D space than in 3D space, so it makes sense to leverage the UV layout of each LOD to produce a better 3D representation of the mesh.
Looking into this I broke the process down into the following steps, with the idea of converting it into a node for V2.0. A small script sketch of the mapping stage follows the breakdown.

# ON BOOT (or forced update) - HEAVY
# set up iterators for the source and target objects
# get per vertex the uv indices for each mesh ( itr.getUVIndices() )
# set up array for vertex idx ---> uv idx
# set up array for the inverse uv idx ---> vertex idx
# get the UV positions for the source mesh ( MFnMesh.getUVs() )
# get the vertex positions for the source and target mesh ( MFnMesh.getPoints() )

# iterate through the target vertex index to uv index array and set up a vertex to vertex mapping ( target --> source )
# get the uv position for the target uv index
# compare this position to ALL positions in the array returned from the UV positions for the source mesh. The aim is to find the minimum Euclidean distance between two sets of UV coordinates
# get the index of the UV that this value applies to and convert it to a vertex index using the source uv to vertex index array
# mapping: vertex_map_array[target vertex index] = source_vertex_index

# EVALUATION - LIGHT

# iterate through the vertex mapping
# target_idx = current iteration
# get the source index from the current iteration index in the vertex mapping array
# get the source point position for the current index
# set the target point array at the index of the current iteration to the source point

# set all points onto the target object
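
Below is a minimal script sketch of the mapping stage described above, written against the same OpenMaya API as the other posts. It assumes a single UV set with one UV per vertex (so UV seams are not handled), uses a brute force closest-UV search, and simply snaps the target vertices onto their mapped source positions rather than transferring blendshape deltas. Select the source mesh then the target mesh before running it.

import maya.OpenMaya as om

sel = om.MSelectionList()
om.MGlobal.getActiveSelectionList(sel)

source_dag = om.MDagPath()
target_dag = om.MDagPath()
sel.getDagPath(0, source_dag)
sel.getDagPath(1, target_dag)

source_fn = om.MFnMesh(source_dag)
target_fn = om.MFnMesh(target_dag)

# UV coordinates and object space points for the source mesh
source_u = om.MFloatArray()
source_v = om.MFloatArray()
source_fn.getUVs(source_u, source_v)
source_points = om.MPointArray()
source_fn.getPoints(source_points, om.MSpace.kObject)

# uv idx ---> vertex idx for the source mesh (first UV per vertex only)
source_uv_to_vertex = {}
itr = om.MItMeshVertex(source_dag)
while not itr.isDone():
    uv_ids = om.MIntArray()
    itr.getUVIndices(uv_ids)
    if uv_ids.length():
        source_uv_to_vertex[uv_ids[0]] = itr.index()
    itr.next()

# UV coordinates for the target mesh
target_u = om.MFloatArray()
target_v = om.MFloatArray()
target_fn.getUVs(target_u, target_v)

# ON BOOT (heavy): map each target vertex to the source vertex with the closest UV
vertex_map = {}
itr = om.MItMeshVertex(target_dag)
while not itr.isDone():
    uv_ids = om.MIntArray()
    itr.getUVIndices(uv_ids)
    if uv_ids.length():
        u = target_u[uv_ids[0]]
        v = target_v[uv_ids[0]]
        closest_uv = min(source_uv_to_vertex,
                         key=lambda i: (source_u[i] - u) ** 2 + (source_v[i] - v) ** 2)
        vertex_map[itr.index()] = source_uv_to_vertex[closest_uv]
    itr.next()

# EVALUATION (light): copy the mapped source positions onto the target points
target_points = om.MPointArray()
target_fn.getPoints(target_points, om.MSpace.kObject)

for target_idx, source_idx in vertex_map.items():
    target_points.set(source_points[source_idx], target_idx)

target_fn.setPoints(target_points, om.MSpace.kObject)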

This appears to work reasonably well, at least as a first step.
There are potential pitfalls to take into account. For example, what do we do with target meshes that have more vertices? At the moment the closest point is used, which means we will probably end up with overlapping vertices. And what do we do with meshes that have fewer vertices? At the moment the areas that are missing the desired geometry may not fit the source mesh nicely. These are expected issues but could potentially be circumvented. That is for another day. In the meantime take a look at the video below, which shows the secret of how to turn a sphere into a torus.

Blendshape Target Convertor V1.0 from SBGrover on Vimeo.

Tuesday, 6 June 2017

Collision Based Deformer

This post briefly details three examples of a deformer that can react to collisions with another object. The video at the end shows all three examples in action and, as ever, a basic Python version of the compiled plugin is included to get you started.


Direct deformation
The first example is the most basic implementation of the node to achieve direct deformation.
Using MFnMesh::allIntersections to detect intersection between two meshes, then extracting and applying the delta between the intersecting points, it is possible to create the effect of direct deformation.
It is worth noting that allIntersections has some caveats.
- The first is that any mesh you are working with must be a closed surface. This is because collision is calculated by firing a ray from a given point and counting how many surfaces it passes through before it dies. If it passes through one surface the point must be inside the mesh; if two, it must be outside. An open mesh risks returning a misleading hit count (for example a single hit for a point that is actually outside), which breaks this test. A small sketch of the test follows this list.
- The second is that because all deltas are obtained by returning the closest point on the collision object's surface from a given point on the colliding object, it is possible that the returned closest point lies on the opposite side of the collision object. This happens once the colliding object has travelled past a centre line, switching where the closest point now sits. The result is vertices snapping to the wrong side of the mesh, although the effect can be quite interesting.
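
To make the first caveat concrete, here is a minimal sketch of the parity test the deformer relies on; the helper name is just for illustration, collider_fn would be an MFnMesh built from the collider's world mesh and point an MPoint in world space.

import maya.OpenMaya as om

def point_is_inside(collider_fn, point):
    # fire a ray from the point along +Y and count how many times it crosses
    # the (closed) collider surface. An odd hit count means the point is inside
    hit_points = om.MFloatPointArray()
    collider_fn.allIntersections(om.MFloatPoint(point), om.MFloatVector(0, 1, 0),
                                 None, None, False, om.MSpace.kWorld, 100000,
                                 False, None, False, hit_points,
                                 None, None, None, None, None, 0.0001)
    return hit_points.length() % 2 == 1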



Secondary deformation
The second example expands on the first and adds secondary deformation. This version retains all the features of the first but also pushes the intersecting vertex out along its normal to give an impression of volume retention. This is adjustable so that the effect can be extended or switched off altogether. The deformer gives control of the falloff shape using an MRampAttribute and an attribute to define how much of the surface the effect covers. It is also possible to paint its attributes for fine control over the end result.



Sticky deformation
The third example changes direction and stores all colliding deformed points in an array, only updating their stored position if their delta increases. Added to this is a compute-based timer that gradually returns the mesh to its original shape unless it is collided with again.
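
A minimal sketch of the bookkeeping this relies on (not the plugin itself, and the decay rate is just an illustrative value) might look like this:

import maya.OpenMaya as om

stored_deltas = {} # vertex index -> MVector delta from the rest position
decay_rate = 0.05  # illustrative fraction of the delta removed per evaluation

def record_collision(vertex_index, delta):
    # only overwrite if the new delta is larger than the one already stored
    previous = stored_deltas.get(vertex_index)
    if previous is None or delta.length() > previous.length():
        stored_deltas[vertex_index] = delta

def decay_deltas():
    # called on a timer from compute() so the mesh creeps back to its rest shape
    for index in stored_deltas.keys():
        delta = stored_deltas[index] * (1.0 - decay_rate)
        if delta.length() < 0.0001:
            del stored_deltas[index]
        else:
            stored_deltas[index] = delta

record_collision(0, om.MVector(0.0, 1.0, 0.0))
decay_deltas()
print stored_deltas[0].y # slightly less than 1.0 after one decay step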

Collision Based Deformer from SBGrover on Vimeo.

Below is a Python implementation of the first example to get started with. This will give you the direct deformation. Be aware that because this is Python the results are much slower than a compiled plugin, so it is best not to throw it at dense geometry. Included is some helper code to build a scene with the plugin.

PLUGIN

 import maya.OpenMaya as OpenMaya  
 import maya.OpenMayaAnim as OpenMayaAnim  
 import maya.OpenMayaMPx as OpenMayaMPx  
   
 class collisionDeformer(OpenMayaMPx.MPxDeformerNode):  
      kPluginNodeId = OpenMaya.MTypeId(0x00000012)  
      kPluginNodeTypeName = "collisionDeformer"  
        
      def __init__(self):  
           OpenMayaMPx.MPxDeformerNode.__init__( self )  
           self.accelParams = OpenMaya.MMeshIsectAccelParams() #speeds up intersect calculation  
           self.intersector = OpenMaya.MMeshIntersector() #contains methods for efficiently finding the closest point to a mesh, required for collider  
   
      def deform( self, block, geoItr, matrix, index ):  
             
           #get ENVELOPE  
           envelope = OpenMayaMPx.cvar.MPxGeometryFilter_envelope  
           envelopeHandle = block.inputValue(envelope)  
           envelopeVal = envelopeHandle.asFloat()  
             
           if envelopeVal!=0:  
        
                #get COLLIDER MESH (as worldMesh)  
                colliderHandle = block.inputValue(self.collider)  
                inColliderMesh = colliderHandle.asMesh()  
                  
                if not inColliderMesh.isNull():  
                       
                     #get collider fn mesh  
                     inColliderFn = OpenMaya.MFnMesh(inColliderMesh)  
                       
                     #get DEFORMED MESH  
                     inMesh = self.get_input_geom(block, index)  
                       
                     #get COLLIDER WORLD MATRIX to convert the bounding box to world space  
                     colliderMatrixHandle = block.inputValue(self.colliderMatrix)  
                     colliderMatrixVal = colliderMatrixHandle.asMatrix()  
                       
                     #get BOUNDING BOX MIN VALUES  
                     colliderBoundingBoxMinHandle = block.inputValue(self.colliderBoundingBoxMin)  
                     colliderBoundingBoxMinVal = colliderBoundingBoxMinHandle.asFloat3()  
                       
                     #get BOUNDING BOX MAX VALUES  
                     colliderBoundingBoxMaxHandle = block.inputValue(self.colliderBoundingBoxMax)  
                     colliderBoundingBoxMaxVal = colliderBoundingBoxMaxHandle.asFloat3()  
                       
                     #build new bounding box based on given values  
                     bbox = OpenMaya.MBoundingBox()  
                     bbox.expand(OpenMaya.MPoint(colliderBoundingBoxMinVal[0], colliderBoundingBoxMinVal[1], colliderBoundingBoxMinVal[2]))  
                     bbox.expand(OpenMaya.MPoint(colliderBoundingBoxMaxVal[0], colliderBoundingBoxMaxVal[1], colliderBoundingBoxMaxVal[2]))  
                       
                     #set up point on mesh and intersector for returning closest point and accelParams if required  
                     pointOnMesh = OpenMaya.MPointOnMesh()   
                     self.intersector.create(inColliderMesh, colliderMatrixVal)  
                       
                     #set up constants for allIntersections  
                     faceIds = None  
                     triIds = None  
                     idsSorted = False  
                     space = OpenMaya.MSpace.kWorld  
                     maxParam = 100000  
                     testBothDirs = False  
                     accelParams = None  
                     sortHits = False  
                     hitRayParams = None  
                     hitFaces = None  
                     hitTriangles = None  
                     hitBary1 = None  
                     hitBary2 = None  
                     tolerance = 0.0001  
                     floatVec = OpenMaya.MFloatVector(0, 1, 0) #set up arbitrary vector n.b this is fine for what we want here but anything more complex may require vector obtained from vertex  
                       
                     #deal with main mesh  
                     inMeshFn = OpenMaya.MFnMesh(inMesh)  
                     inPointArray = OpenMaya.MPointArray()  
                     inMeshFn.getPoints(inPointArray, OpenMaya.MSpace.kWorld)  
                       
                     #create array to store final points and set to correct length  
                     length = inPointArray.length()  
                     finalPositionArray = OpenMaya.MPointArray()  
                     finalPositionArray.setLength(length)  
   
                     #loop through all points. could also be done with geoItr  
                     for num in range(length):  
                          point = inPointArray[num]  
                            
                          #if point is within collider bounding box then consider it  
                          if bbox.contains(point):  
                               ##-- allIntersections variables --##  
                               floatPoint = OpenMaya.MFloatPoint(point)  
                               hitPoints = OpenMaya.MFloatPointArray()  
   
                               inColliderFn.allIntersections( floatPoint, floatVec, faceIds, triIds, idsSorted, space, maxParam, testBothDirs, accelParams, sortHits, hitPoints, hitRayParams, hitFaces, hitTriangles, hitBary1, hitBary2, tolerance )  
                       
                               if hitPoints.length()%2 == 1:       
                                    #work out closest point  
                                    closestPoint = OpenMaya.MPoint()  
                                    inColliderFn.getClosestPoint(point, closestPoint, OpenMaya.MSpace.kWorld, None)  
                                      
                                    #calculate delta and add to array  
                                    delta = point - closestPoint  
                                    finalPositionArray.set(point - delta, num)  
                                      
                               else:  
                                    finalPositionArray.set(point, num)  
                                      
                          #if point is not in bounding box simply add the position to the final array  
                          else:  
                               finalPositionArray.set(point, num)  
                                      
                     inMeshFn.setPoints(finalPositionArray, OpenMaya.MSpace.kWorld)  
                                           
      def get_input_geom(self, block, index):  
           input_attr = OpenMayaMPx.cvar.MPxGeometryFilter_input  
           input_geom_attr = OpenMayaMPx.cvar.MPxGeometryFilter_inputGeom  
           input_handle = block.outputArrayValue(input_attr)  
           input_handle.jumpToElement(index)  
           input_geom_obj = input_handle.outputValue().child(input_geom_attr).asMesh()  
           return input_geom_obj  
                  
             
 def creator():  
      return OpenMayaMPx.asMPxPtr(collisionDeformer())  
   
        
 def initialize():  
      gAttr = OpenMaya.MFnGenericAttribute()  
      mAttr = OpenMaya.MFnMatrixAttribute()  
      nAttr = OpenMaya.MFnNumericAttribute()  
        
      collisionDeformer.collider = gAttr.create( "colliderTarget", "col")  
      gAttr.addDataAccept( OpenMaya.MFnData.kMesh )  
             
      collisionDeformer.colliderBoundingBoxMin = nAttr.createPoint( "colliderBoundingBoxMin", "cbbmin")  
        
      collisionDeformer.colliderBoundingBoxMax = nAttr.createPoint( "colliderBoundingBoxMax", "cbbmax")  
        
      collisionDeformer.colliderMatrix = mAttr.create("colliderMatrix", "collMatr", OpenMaya.MFnNumericData.kFloat )  
      mAttr.setHidden(True)  
        
      collisionDeformer.multiplier = nAttr.create("multiplier", "mult", OpenMaya.MFnNumericData.kFloat, 1)  
        
      collisionDeformer.addAttribute( collisionDeformer.collider )  
      collisionDeformer.addAttribute( collisionDeformer.colliderMatrix )  
      collisionDeformer.addAttribute( collisionDeformer.colliderBoundingBoxMin )  
      collisionDeformer.addAttribute( collisionDeformer.colliderBoundingBoxMax )  
      collisionDeformer.addAttribute( collisionDeformer.multiplier )  
        
      outMesh = OpenMayaMPx.cvar.MPxGeometryFilter_outputGeom  
        
      collisionDeformer.attributeAffects( collisionDeformer.collider, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.colliderBoundingBoxMin, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.colliderBoundingBoxMax, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.colliderMatrix, outMesh )  
      collisionDeformer.attributeAffects( collisionDeformer.multiplier, outMesh )  
   
        
 def initializePlugin(obj):  
      plugin = OpenMayaMPx.MFnPlugin(obj, 'Grover', '1.0', 'Any')  
      try:  
           plugin.registerNode('collisionDeformer', collisionDeformer.kPluginNodeId, creator, initialize, OpenMayaMPx.MPxNode.kDeformerNode)  
      except:  
           raise RuntimeError, 'Failed to register node'  
   
             
 def uninitializePlugin(obj):  
      plugin = OpenMayaMPx.MFnPlugin(obj)  
      try:  
           plugin.deregisterNode(collisionDeformer.kPluginNodeId)  
      except:  
           raise RuntimeError, 'Failed to deregister node'  
             
   
   
HELPER CODE
 #simply create two polygon spheres. Move the second away from the first, select the first and run the code below.  
 import maya.cmds as cmds  
   
 cmds.delete(cmds.ls(type='collisionDeformer'))  
 cmds.flushUndo()  
 cmds.unloadPlugin('collisionDeformer.py')  
 cmds.loadPlugin('collisionDeformer.py')  
 cmds.deformer(type='collisionDeformer')  
 cmds.connectAttr('pSphere2.worldMesh', 'collisionDeformer1.colliderTarget')  
 cmds.connectAttr('pSphere2.matrix', 'collisionDeformer1.colliderMatrix')  
 # connect the collider's bounding box min and max to the matching deformer attributes  
 cmds.connectAttr('pSphere2.boundingBoxMinX', 'collisionDeformer1.colliderBoundingBoxMinX')  
 cmds.connectAttr('pSphere2.boundingBoxMinY', 'collisionDeformer1.colliderBoundingBoxMinY')  
 cmds.connectAttr('pSphere2.boundingBoxMinZ', 'collisionDeformer1.colliderBoundingBoxMinZ')  
 cmds.connectAttr('pSphere2.boundingBoxMaxX', 'collisionDeformer1.colliderBoundingBoxMaxX')  
 cmds.connectAttr('pSphere2.boundingBoxMaxY', 'collisionDeformer1.colliderBoundingBoxMaxY')  
 cmds.connectAttr('pSphere2.boundingBoxMaxZ', 'collisionDeformer1.colliderBoundingBoxMaxZ')