SIGGRAPH Asia 2021

Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN

Badour AlBahar1,2, Jingwan Lu3, Jimei Yang3, Zhixin Shu3, Eli Shechtman3, Jia-Bin Huang1,4
1Virginia Tech, 2Kuwait University, 3Adobe Research, 4University of Maryland College Park

Watch the video here if YouTube doesn't work for you.


We present an algorithm for re-rendering a person from a single image under arbitrary poses. Existing methods often have difficulties in hallucinating occluded contents photo-realistically while preserving the identity and fine details in the source image. We first learn to inpaint the correspondence field between the body surface texture and the source image with a human body symmetry prior. The inpainted correspondence field allows us to transfer/warp local features extracted from the source to the target view even under large pose changes. Directly mapping the warped local features to an RGB image using a simple CNN decoder often leads to visible artifacts. Thus, we extend the StyleGAN generator so that it takes pose as input (for controlling poses) and introduces a spatially varying modulation for the latent space using the warped local features (for controlling appearances). We show that our method compares favorably against the state-of-the-art algorithms in both quantitative evaluation and visual comparison.
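The key architectural idea above is replacing StyleGAN's per-channel modulation (one scale per channel, derived from the latent code) with a spatially varying modulation whose scales also vary per pixel, derived from the warped local features. A minimal numpy sketch of the difference, with toy shapes and a random 1x1 channel mix standing in for the learned modulation network (all names and sizes here are hypothetical, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: C feature channels over an H x W spatial grid.
C, H, W = 4, 8, 8
features = rng.standard_normal((C, H, W))  # generator conv activations
warped = rng.standard_normal((C, H, W))    # local features warped from the source image

# Standard StyleGAN-style modulation: one scale per channel from the latent code,
# so the same scale is applied at every spatial location.
w = rng.standard_normal(C)
global_mod = features * w[:, None, None]

# Spatially varying modulation (sketched): a per-pixel, per-channel scale map
# predicted from the warped local features -- here just a random 1x1 channel mix.
mix = rng.standard_normal((C, C))
scale_map = np.einsum('oc,chw->ohw', mix, warped)  # shape (C, H, W)
local_mod = features * scale_map

print(global_mod.shape, local_mod.shape)
```

The spatially varying scale map is what lets the warped source details steer appearance locally (e.g. per body region), which a single per-channel scalar cannot do.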



Badour AlBahar, Jingwan Lu, Jimei Yang, Zhixin Shu, Eli Shechtman, and Jia-Bin Huang. 2021. Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN. ACM Transactions on Graphics (2021).


@article{AlBahar2021PoseWithStyle,
   title   = {Pose with {S}tyle: {D}etail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN},
   author  = {AlBahar, Badour and Lu, Jingwan and Yang, Jimei and Shu, Zhixin and Shechtman, Eli and Huang, Jia-Bin},
   journal = {ACM Transactions on Graphics},
   year    = {2021}
}


We thank Kripasindhu Sarkar for providing the test results for StylePoseGAN.