Friday, 23 June 2017

UV Based Blendshape Conversion

Something we work with a lot in the games industry is LODs. For those who don't know, LOD stands for Level Of Detail; the purpose is to provide incrementally more or less detailed versions of a mesh, and possibly its skeleton, based on the current view distance.
These days it is trivial to set these up using third party software, but certain things do not always get taken into account. One of these is blendshapes. If our top level mesh - the one that was originally modeled - has a set of corrective shapes, then these do not get transferred to the LODs when they are created.
The point of the tool in this post is to help alleviate that issue by taking the blendshape targets on the top LOD and converting them down to each LOD in turn, even though the topology is completely different.
Historically, blendshape targets have always had issues working with differing topology, as they require an exact match in vertex order between shapes. If this changes, the results can be diabolical.
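As a quick illustration of why that is (the function and argument names here are my own, not the Maya API), a blendshape target is applied by adding each delta to the base vertex at the same index, so a reordered vertex list sends deltas to the wrong vertices:

```python
def apply_blendshape(base_points, deltas, weight=1.0):
    """Add each target delta to the base vertex at the SAME index.

    There is no spatial matching here, which is why a change in
    vertex order between shapes produces mangled results.
    """
    return [
        (bx + weight * dx, by + weight * dy, bz + weight * dz)
        for (bx, by, bz), (dx, dy, dz) in zip(base_points, deltas)
    ]
```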
This tool gets around that by ignoring vertex order. One thing that each LOD has in common with the original mesh is its UV layout, such as in the image below. Although the layouts do not match exactly, they are relatively close.

It is easier to find positional matches in 2D space than in 3D space, so it makes sense to leverage the UV layout of each LOD to produce a better 3D representation of the mesh.
Looking into this, I broke the process down into the following steps, with the idea of converting it into a node for V2.0.

# ON BOOT (or forced update) - HEAVY
# set up iterators for the source and target objects
# get per vertex the uv indices for each mesh ( itr.getUVIndices() )
# set up array for vertex idx ---> uv idx
# set up array for the inverse uv idx ---> vertex idx
# get the UV positions for the source mesh ( MFnMesh.getUVs() )
# get the vertex positions for the source and target mesh ( MFnMesh.getPoints() )

# iterate through the target vertex index to uv index array and set up a vertex to vertex mapping ( target --> source )
# get the uv position for the target uv index
# compare this position to ALL positions in the array returned from the UV positions for the source mesh. The aim is to find the minimum Euclidean distance between two sets of UV coordinates
# get the index of the UV that this value applies to and convert it to a vertex index using the source uv to vertex index array
# mapping: vertex_map_array[target vertex index] = source_vertex_index
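The mapping step above can be sketched in plain Python as a brute-force search (the function and argument names are hypothetical, not the Maya API; in the tool the UV arrays come from MFnMesh.getUVs() and the uv-to-vertex arrays from the iterators):

```python
def build_vertex_map(source_uvs, source_uv_to_vertex, target_uvs, target_uv_to_vertex):
    """For every target vertex, find the source vertex whose UV sits closest.

    source_uvs / target_uvs: lists of (u, v) tuples, one per UV index.
    *_uv_to_vertex: lists mapping each UV index to its vertex index.
    Returns a dict: target vertex index -> source vertex index.
    """
    vertex_map = {}
    for tgt_uv_idx, (tu, tv) in enumerate(target_uvs):
        # minimum squared Euclidean distance in UV space
        # (squared distance preserves the ordering, so no sqrt needed)
        best_idx = min(
            range(len(source_uvs)),
            key=lambda i: (source_uvs[i][0] - tu) ** 2 + (source_uvs[i][1] - tv) ** 2,
        )
        vertex_map[target_uv_to_vertex[tgt_uv_idx]] = source_uv_to_vertex[best_idx]
    return vertex_map
```

This is O(n*m) as written, which is why the boot step is marked HEAVY; a spatial structure such as a k-d tree over the source UVs would cut the search time considerably.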


# iterate through the vertex mapping
# target_idx = current iteration
# get the source index from the current iteration index in the vertex mapping array
# get the source point position for the current index
# set the target point array at the index of the current iteration to the source point

# set all points onto the target object
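The final transfer step is a straight lookup through that mapping, again as a plain-Python sketch with hypothetical names; in Maya the resulting array would be handed to MFnMesh.setPoints() on the target object:

```python
def transfer_points(source_points, target_points, vertex_map):
    """Move each target vertex to the position of its matched source vertex.

    source_points / target_points: lists of (x, y, z) tuples.
    vertex_map: target vertex index -> source vertex index (from the UV match).
    Returns a new list of target points, leaving the inputs untouched.
    """
    new_points = list(target_points)
    for tgt_idx, src_idx in vertex_map.items():
        new_points[tgt_idx] = source_points[src_idx]
    return new_points
```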

This appears to work reasonably well, at least as a first step.
There are potential pitfalls to take into account. For example, what do we do with target meshes that have more vertices than the source? At the moment the closest point will be found, which means we will probably end up with overlapping vertices. Likewise, what do we do with meshes that have fewer vertices? At the moment the areas that are missing the desired geometry may not fit the source mesh nicely. These issues are expected but could potentially be circumvented; that is for another day. In the meantime, take a look at the video below, which shows the secret of how to turn a sphere into a torus.

Blendshape Target Convertor V1.0 from SBGrover on Vimeo.

