Pix2NPHM: Learning to Regress NPHM Reconstructions From a Single Image

1Technical University of Munich, 2Woven by Toyota, 3Toyota Motor Europe NV/SA

Pix2NPHM combines a feed-forward transformer that predicts MonoNPHM parameters from a single image with a subsequent refinement stage that optimizes against Pixel3DMM normal maps.

Tracking Results of Pix2NPHM.

From left to right: input, overlay, reconstructions.

Abstract

Neural Parametric Head Models (NPHMs) are a recent advancement over mesh-based 3D morphable models (3DMMs) that facilitates high-fidelity geometric detail. However, fitting NPHMs to visual inputs is notoriously challenging due to the expressive nature of their underlying latent space. To this end, we propose Pix2NPHM, a vision transformer (ViT) network that directly regresses NPHM parameters from a single input image. Compared to existing approaches, the neural parametric space allows our method to reconstruct more recognizable facial geometry and more accurate facial expressions. For broad generalization, we exploit domain-specific ViTs as backbones, which are pretrained on geometric prediction tasks. We train Pix2NPHM on a mixture of 3D data, including a total of over 100K NPHM registrations that enable direct supervision in SDF space, and large-scale 2D video datasets, for which normal estimates serve as pseudo ground truth geometry. Pix2NPHM not only allows for 3D reconstruction at interactive frame rates; geometric fidelity can also be further improved by a subsequent inference-time optimization against estimated surface normals and canonical point maps. As a result, we achieve unprecedented face reconstruction quality that can run at scale on in-the-wild data.

Video

Result Comparisons on NeRSemble.

We show comparisons against two recent SotA FLAME-based reconstruction models, SHeaP (feed-forward) and Pixel3DMM (optimization-based).

Reconstructed Geometry.


Tracking Comparisons.

Comparisons of the SotA tracker Pixel3DMM against our feed-forward tracking predictions. Our NPHM-based feed-forward predictor reaches higher fidelity than its FLAME-based counterpart.

Method Overview

1. The input image is encoded using geometrically pretrained ViTs, which remain frozen during the training of Pix2NPHM. The backbones are pretrained on pixel-aligned surface normal and position prediction tasks, similar to Pixel3DMM.

2. The resulting token sequence is extended with classification tokens for identity (1 global token and 65 local tokens) and expression (1 token).

3. After several transformer layers, the classification tokens are read out for NPHM parameter prediction using a small MLP head. The implicit surface can be rendered, or extracted using marching cubes. (A minimal architecture sketch of steps 1-3 follows after this overview.)

4. For 3D datasets, we supervise using the difference between the SDF functions induced by the predicted and ground-truth NPHM parameters.

5. For 2D datasets, we use Pixel3DMM normal estimates as pseudo-ground-truth supervision for rendered normals. (Sketches of both supervision terms follow below.)
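
To make steps 1-3 concrete, the following PyTorch sketch illustrates the regressor. It is a minimal illustration under our own assumptions, not the released implementation: the class name Pix2NPHMRegressor, the latent dimensions, the number of transformer layers, and the assumption that the frozen backbone returns a (B, N, d_model) patch-token sequence are placeholders; only the token layout (1 global identity token, 65 local identity tokens, 1 expression token) follows the description above.

import torch
import torch.nn as nn

class Pix2NPHMRegressor(nn.Module):
    def __init__(self, backbone, d_model=768, n_id_local=65,
                 dim_id_global=64, dim_id_local=32, dim_expr=200, n_layers=4):
        super().__init__()
        # Step 1: frozen, geometrically pretrained ViT backbone (Pixel3DMM-style).
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Step 2: 1 global identity token + 65 local identity tokens + 1 expression token.
        self.n_id_local = n_id_local
        self.cls_tokens = nn.Parameter(0.02 * torch.randn(1, 1 + n_id_local + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Step 3: small MLP heads that read out the classification tokens.
        def head(out_dim):
            return nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, out_dim))
        self.head_id_global = head(dim_id_global)
        self.head_id_local = head(dim_id_local)
        self.head_expr = head(dim_expr)

    def forward(self, image):
        with torch.no_grad():
            tokens = self.backbone(image)                # (B, N, d_model) patch tokens
        cls = self.cls_tokens.expand(tokens.shape[0], -1, -1)
        x = self.transformer(torch.cat([cls, tokens], dim=1))
        z_id_global = self.head_id_global(x[:, 0])                    # global identity code
        z_id_local = self.head_id_local(x[:, 1:1 + self.n_id_local])  # 65 local identity codes
        z_expr = self.head_expr(x[:, 1 + self.n_id_local])            # expression code
        return z_id_global, z_id_local, z_expr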
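
Step 4 in a hedged sketch: the function below assumes a callable mononphm_sdf(points, z) standing in for the MonoNPHM SDF decoder (the actual decoder interface may differ), and query points that would in practice be sampled near the ground-truth surface.

import torch

def sdf_supervision(mononphm_sdf, z_pred, z_gt, query_points):
    # L1 difference between the SDF values induced by the predicted and
    # ground-truth NPHM parameters, evaluated at sampled 3D query points.
    sdf_pred = mononphm_sdf(query_points, z_pred)   # (B, P)
    sdf_gt = mononphm_sdf(query_points, z_gt)       # (B, P)
    return (sdf_pred - sdf_gt).abs().mean()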
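
Step 5, sketched under the assumption of a differentiable normal renderer render_normals(z, camera) for the predicted implicit surface (a placeholder, not a released API): the loss compares rendered normals against Pixel3DMM estimates inside a foreground mask.

import torch
import torch.nn.functional as F

def normal_supervision(render_normals, z_pred, camera, normals_pseudo_gt, mask):
    # Cosine distance between normals rendered from the predicted surface and
    # Pixel3DMM normal estimates (pseudo ground truth), restricted to the mask.
    normals_pred = render_normals(z_pred, camera)                      # (B, 3, H, W)
    cos = F.cosine_similarity(normals_pred, normals_pseudo_gt, dim=1)  # (B, H, W)
    return ((1.0 - cos) * mask).sum() / mask.sum().clamp(min=1.0)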

Related Links

Feed-forward FLAME prediction: SHeaP, MICA, EMOCA and TokenFace.

Optimization-based FLAME fitting: Pixel3DMM, FlowFace and MetricalTracker.

Foundations on NPHMs: MonoNPHM and NPHM.

Similar neural 3DMMs: ImHead and ImFace.

Avatars based on NPHM: DiffusionAvatar, NPGA, FaceTalk.

BibTeX

@misc{giebenhain2025pix2nphm,
title={Pix2NPHM: Learning to Regress NPHM Reconstructions From a Single Image},
author={Simon Giebenhain and Tobias Kirschstein and Liam Schoneveld and Davide Davoli and Zhe Chen and Matthias Nie{\ss}ner},
year={2025},
url={https://arxiv.org/abs/TODO},
}