Deep representation calibrated Bayesian neural network for semantically explainable face inpainting and editing

Hao Xiong, Chaoyue Wang, Xinchao Wang, Dacheng Tao

Research output: Contribution to journal › Article

Abstract

Image inpainting seeks to fill in corrupted areas with pixels whose texture and content are consistent with their surroundings. For highly structured data, e.g., human faces, some recent works achieve quite realistic results. However, almost all existing methods learn a deterministic mapping from a corrupted input to a final result, ignoring the multiple plausible solutions for the same input. Furthermore, they have not explored the underlying connections between those plausible solutions and semantic conditions. In this work, we propose a novel deep representation calibrated Bayesian neural network (DRCBNN) for semantically explainable face inpainting and editing. Leveraging the ability of Bayesian decision theory to deal with uncertainty, the proposed framework incorporates deep representations into Bayesian decision theory and derives a deep representation calibrated evidence lower bound (ELBO). Compared with the traditional ELBO in BNNs, the newly calibrated ELBO is a more task-specific loss function. After optimizing the calibrated ELBO, the model can infer the desired inpainting outputs in accordance with specific semantics. Finally, experiments demonstrate that our method produces multiple semantics-aware inpainting outputs and outperforms state-of-the-art methods.
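The paper's calibrated ELBO is not given in the abstract, but it is described as a task-specific variant of the standard evidence lower bound used in variational inference. As a minimal sketch of that baseline quantity, the following toy example estimates a standard ELBO for a Gaussian latent variable with a unit-Gaussian prior, using the reparameterization trick for the expected log-likelihood term. The linear `decode` function, the unit-variance Gaussian likelihood, and all variable names are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo(x, mu, log_var, decode, n_samples=64):
    # Monte Carlo estimate of E_q[log p(x|z)] via the reparameterization trick:
    # z = mu + sigma * eps, eps ~ N(0, I)
    d = mu.shape[0]
    eps = rng.standard_normal((n_samples, d))
    z = mu + np.exp(0.5 * log_var) * eps
    recon = decode(z)  # decoder maps latent samples to reconstructions
    # Gaussian likelihood with unit variance (up to an additive constant)
    log_lik = -0.5 * np.sum((recon - x) ** 2, axis=1).mean()
    return log_lik - gaussian_kl(mu, log_var)

# Toy linear "decoder" standing in for the inpainting network (assumption).
W = rng.standard_normal((4, 8))
decode = lambda z: z @ W
x = rng.standard_normal(8)

score = elbo(x, mu=np.zeros(4), log_var=np.zeros(4), decode=decode)
```

Sampling several `z` values from the learned posterior is what yields multiple plausible outputs for one corrupted input; the paper's contribution is calibrating this bound with deep representations so those samples align with specific semantics.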
Original language: English
Pages (from-to): 13457-13466
Number of pages: 10
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 2020

Keywords

  • Bayesian neural network
  • latent variable
  • face inpainting and editing
  • variational inference
