Explorable Mesh Deformation Subspaces from Unstructured Generative Models

SIGGRAPH Asia 2023

Arman Maesumi
Brown University
Paul Guerrero
Adobe Research
Vladimir G. Kim
Adobe Research
Matthew Fisher
Adobe Research
Siddhartha Chaudhuri
Adobe Research
Noam Aigerman
University of Montreal, Adobe Research
Daniel Ritchie
Brown University

[Paper, 30mb] [Supp., 2mb] [arXiv] [GitHub] [Bibtex]

Figure 1. We present a method to explore variations among a given set of input shapes (denoted by black Voronoi centers on the left) using a two-dimensional exploration space. This exploration space smoothly and naturally interpolates between the input shapes by constructing a mapping to a subspace of a pre-trained generator's latent space that optimizes the smoothness of interpolations along any trajectory. Additionally, we transfer the variation over these interpolation trajectories onto the original high-quality meshes, avoiding the loss of detail incurred by the unstructured generator output.


Abstract

Exploring variations of 3D shapes is a time-consuming process in traditional 3D modeling tools. Deep generative models of 3D shapes often feature continuous latent spaces that can, in principle, be used to explore potential variations starting from a set of input shapes; in practice, doing so can be problematic—latent spaces are high dimensional and hard to visualize, contain shapes that are not relevant to the input shapes, and linear paths through them often lead to sub-optimal shape transitions. Furthermore, one would ideally be able to explore variations in the original high-quality meshes used to train the generative model, not its lower-quality output geometry. In this paper, we present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model. We first describe how to find a mapping that spans the set of input landmark shapes and exhibits smooth variations between them. We then show how to turn the variations in this subspace into deformation fields, to transfer those variations to high-quality meshes for the landmark shapes. Our results show that our method can produce visually-pleasing and easily-navigable 2D exploration spaces for several different shape categories, especially as compared to prior work on learning deformation spaces for 3D shapes.



Method

Our method creates a two-dimensional exploration space that smoothly interpolates between a set of input shapes. We first find a mapping from the input shapes to a subspace of a pre-trained generative model. We then optimize the mapping to ensure that interpolations between the input shapes are smooth. Finally, we transfer the variations in the exploration space to the original high-quality meshes.

Finding such a mapping is non-trivial. As seen in the figure above, linearly interpolating in the latent space of generative models results in undesirable behavior. When interpolating, we need to account for the metric induced by the generator: in one-dimensional interpolation, this corresponds to finding a geodesic path connecting source and target points on the shape manifold. In our setting, however, we seek a two-dimensional surface immersed in the shape manifold, such that interpolants along the surface are as geodesic as possible.
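To build intuition for this metric, consider how one might measure the energy a discretized latent path incurs under it. The following is a minimal PyTorch sketch, not the paper's implementation: it assumes a generator G that maps a latent vector to a point cloud, and approximates the path energy by pushing each latent step through the generator's Jacobian.

    import torch

    def path_energy(G, z_path):
        # z_path: (T, latent_dim) tensor discretizing a latent curve.
        # Approximates the integral of ||J_G(z(t)) z'(t)||^2 dt: each
        # latent step dz is pushed through the generator's Jacobian via
        # a jvp, yielding the induced velocity of the generated shape.
        energy = 0.0
        for t in range(len(z_path) - 1):
            z, dz = z_path[t], z_path[t + 1] - z_path[t]
            # Jacobian-vector product J_G(z) @ dz, without materializing J_G.
            _, induced_velocity = torch.autograd.functional.jvp(G, (z,), (dz,))
            energy = energy + (induced_velocity ** 2).sum()
        return energy

A geodesic between two latents is a path that minimizes this energy; a straight line in latent space generally does not.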


Our learned mapping smoothly interpolates the landmark shapes by reparametrizing the domain of the generator such that its image is "as geodesic as possible" with respect to the metric induced by the generator. We formulate a constrained optimization objective that ensures the landmark shapes are always mapped correctly, while the embedding minimizes an energy that accounts for the metric of the shape manifold: the Dirichlet energy. The resulting surface given by our map can be thought of as a "smooth membrane" spanning a subset of the shape manifold; interpolating between shapes along this membrane yields smoother transitions.
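As a rough illustration of the kind of objective involved, the sketch below combines a landmark-reproduction penalty with a stochastic estimate of the Dirichlet energy of the composed map G∘f, where f maps 2D exploration coordinates into the generator's latent space. The names (f, G, uv_landmarks, z_landmarks), the soft-penalty form, and the sampling scheme are illustrative assumptions; the paper formulates the landmark condition as a hard constraint.

    import torch

    def subspace_loss(f, G, uv_landmarks, z_landmarks, n_samples=64, w=100.0):
        # Landmark term: f must reproduce each landmark latent at its
        # prescribed 2D position (a soft penalty here, for simplicity).
        landmark_loss = ((f(uv_landmarks) - z_landmarks) ** 2).sum()

        # Stochastic Dirichlet energy of G∘f: sample points in the 2D
        # domain with random unit directions, push them through the
        # Jacobian of G∘f, and penalize the induced shape velocity.
        uv = torch.rand(n_samples, 2)
        dirs = torch.randn(n_samples, 2)
        dirs = dirs / dirs.norm(dim=1, keepdim=True)
        _, induced = torch.autograd.functional.jvp(
            lambda u: G(f(u)), (uv,), (dirs,), create_graph=True)
        dirichlet = (induced ** 2).sum() / n_samples

        return dirichlet + w * landmark_loss

Minimizing the Dirichlet term flattens the membrane with respect to the generator-induced metric, while the landmark term pins it to the input shapes.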


Figure 6. Visualization of the log-energy of our mapping. Naively lifting exploration spaces into the primal domain (e.g., by barycentrically interpolating landmark latents) results in noisy interpolants. Our method incurs far less energy overall, and the remaining energy is concentrated on the facet boundaries.


Finally, we transfer the variations of the unstructured (point cloud) shapes onto the original high-quality meshes. To do this, we interpret an interpolation through our 2D exploration space as a flow on the meshes' vertices.
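A hedged sketch of this step, assuming a generator G, a learned map f from the 2D space to latents, and a mesh with vertex positions V: the motion of the generated point cloud along the trajectory is transferred to each vertex from its nearest generated point, then integrated with explicit Euler steps. Nearest-neighbor transfer here stands in for whatever smoother correspondence the actual method uses.

    import torch

    @torch.no_grad()
    def advect_mesh(V, G, f, uv_path):
        # V: (n, 3) mesh vertices; uv_path: (T, 2) trajectory in the 2D space.
        for t in range(len(uv_path) - 1):
            P0 = G(f(uv_path[t]))        # (m, 3) generated points, current step
            P1 = G(f(uv_path[t + 1]))    # (m, 3) generated points, next step
            flow = P1 - P0               # per-point displacement field
            # Transfer the flow to each vertex from its nearest generated point.
            nearest = torch.cdist(V, P0).argmin(dim=1)
            V = V + flow[nearest]        # forward-Euler step along the flow
        return V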


Results

We demonstrate results on shape-to-shape deformation tasks, as well as free-form deformation and shape space exploration (refer to supplementary video below).

Figure 9. Additional shape-to-shape deformation results from our airplane and table exploration spaces. Our method is able to capture complex deformations; for instance, in column four we see the airliner's wings bending forward to match the straight wings of the propeller plane. Additionally, in the last column we see that our method can locally deform the table top while preserving the geometry elsewhere.

Figure 8. We compare our shape-to-shape deformation results against ShapeFlow: we take random source and target meshes and deform continuously between them. We visualize the final deformed shapes (at t=1) alongside the source and target meshes. Our method exhibits deformations that better preserve the fine details of the original shapes, while matching the structure of the target shapes more closely than ShapeFlow.



Media

Supplementary video:

Paper and Supplementary Material

A. Maesumi, P. Guerrero, V. Kim, M. Fisher, S. Chaudhuri, N. Aigerman, D. Ritchie
Explorable Mesh Deformation Subspaces from Unstructured Generative Models.
In SIGGRAPH Asia, 2023.
(hosted on arXiv)


Bibtex:
@inproceedings{maesumi2023explore,
  author    = {Maesumi, Arman and Guerrero, Paul and Kim, Vladimir G. and Fisher, Matthew and Chaudhuri, Siddhartha and Aigerman, Noam and Ritchie, Daniel},
  title     = {Explorable Mesh Deformation Subspaces from Unstructured Generative Models},
  year      = {2023},
  booktitle = {ACM SIGGRAPH Asia 2023 Conference Proceedings},
  publisher = {Association for Computing Machinery},
  doi       = {10.1145/3610548.3618192},
}

Acknowledgements

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 2040433. This project page template was originally made by Phillip Isola and Richard Zhang.