A Data-driven Approach to Four-view Image-based Hair Modeling

Meng Zhang, Menglei Chai, Hongzhi Wu, Hao Yang, and Kun Zhou

SIGGRAPH 2017

Abstract

We introduce a novel four-view image-based hair modeling method. Given four hair images taken from the front, back, left and right views as input, we first estimate the rough 3D shape of the hair observed in the input using a predefined database of 3D hair models, and then synthesize a hair texture on the surface of that shape. From this texture we compute hair growth directions, which are used to construct a 3D direction field in the hair volume. Finally, we grow hair strands from the scalp, following the direction field, to produce the 3D hair model, which closely resembles the hair in all input images. Our method does not require that all input images come from the same person's hair, enabling an effective way to create compelling hair models from images of considerably different hairstyles taken at different views. We demonstrate the efficacy of our method on a wide range of examples.
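To make the pipeline concrete, below is a minimal, hypothetical Python sketch of the final stage described above: growing a single hair strand from a scalp root point by integrating a 3D direction field stored on a regular grid. The grid layout, trilinear sampling, and Euler integration step are our own illustrative assumptions, not the authors' implementation.

import numpy as np

def sample_direction(field, origin, spacing, p):
    """Trilinearly interpolate a (Nx, Ny, Nz, 3) direction field at point p."""
    g = (np.asarray(p) - origin) / spacing                    # continuous grid coordinates
    i0 = np.clip(np.floor(g).astype(int), 0, np.array(field.shape[:3]) - 2)
    t = g - i0                                                # fractional offsets in the cell
    d = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                d += w * field[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    n = np.linalg.norm(d)
    return d / n if n > 1e-8 else d

def grow_strand(field, origin, spacing, root, step=0.005, max_steps=400):
    """Trace one strand from a scalp root point by Euler integration of the field."""
    hi = origin + spacing * (np.array(field.shape[:3]) - 1)   # upper corner of the volume
    strand = [np.asarray(root, dtype=float)]
    for _ in range(max_steps):
        p = strand[-1]
        if np.any(p < origin) or np.any(p > hi):              # strand left the hair volume
            break
        strand.append(p + step * sample_direction(field, origin, spacing, p))
    return np.array(strand)

if __name__ == "__main__":
    # Toy field: every direction points straight down (-y), as if the hair hangs limply.
    field = np.zeros((32, 32, 32, 3))
    field[..., 1] = -1.0
    origin, spacing = np.array([-0.5, -0.5, -0.5]), 1.0 / 31
    strand = grow_strand(field, origin, spacing, root=[0.0, 0.4, 0.0])
    print("strand with", len(strand), "points, tip at", strand[-1])

In the actual method the direction field is derived from the synthesized hair texture and confined to the estimated hair volume; the toy constant field here only serves to show the strand-tracing step in isolation.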

PDF

BibTeX

@article{zhang2017a,
  title   = {A data-driven approach to four-view image-based hair modeling},
  author  = {Zhang, Meng and Chai, Menglei and Wu, Hongzhi and Yang, Hao and Zhou, Kun},
  journal = {ACM Transactions on Graphics},
  volume  = {36},
  number  = {4},
  pages   = {156},
  year    = {2017}
}