Image inpainting seeks to fill corrupted regions with pixels whose texture and content are consistent with the surroundings. For highly structured data, e.g., human faces, some recent works achieve quite realistic results. However, almost all existing methods learn a deterministic mapping from a corrupted input to a single final result, ignoring the multiple plausible solutions for the same input. Furthermore, they have not explored the underlying connections between those plausible solutions and semantic conditions. In this work, we propose a novel deep representation calibrated Bayesian neural network (DRCBNN) for semantically explainable face inpainting and editing. Leveraging the ability of Bayesian decision theory to handle uncertainty, the proposed framework incorporates deep representations into Bayesian decision theory and derives a deep representation calibrated evidence lower bound (ELBO). Compared with the traditional ELBO in a BNN, the newly calibrated ELBO is a more task-specific loss function. After optimizing the calibrated ELBO, the model can infer the desired inpainting outputs in accordance with specific semantics. Finally, experiments demonstrate that our method produces multiple semantics-aware inpainting outputs and outperforms the state of the art.
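For context on the baseline the paper calibrates against: the traditional ELBO in a mean-field Gaussian BNN is the expected data log-likelihood minus the KL divergence of the variational posterior q(w) from a standard-normal prior. The sketch below shows only this generic, uncalibrated objective (NumPy, closed-form Gaussian KL); the function names and the N(0, I) prior are illustrative assumptions, not the paper's calibrated formulation.

```python
import numpy as np

def gaussian_kl(mu, log_sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights.
    # mu, log_sigma: arrays of variational means and log std-devs (assumed names).
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * np.sum(sigma2 + mu**2 - 1.0 - 2.0 * log_sigma)

def standard_elbo(expected_log_likelihood, mu, log_sigma):
    # Generic (uncalibrated) ELBO: E_q[log p(D|w)] - KL(q(w) || p(w)).
    # The paper's contribution is replacing this with a deep-representation
    # calibrated, task-specific variant, which is not reproduced here.
    return expected_log_likelihood - gaussian_kl(mu, log_sigma)
```

When the variational posterior matches the prior (mu = 0, log_sigma = 0), the KL term vanishes and the ELBO reduces to the expected log-likelihood alone.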
- Bayesian neural network
- latent variable
- face inpainting and editing
- variational inference
Xiong, H., Wang, C., Wang, X., & Tao, D. (2020). Deep representation calibrated Bayesian neural network for semantically explainable face inpainting and editing. IEEE Access, 8, 13457-13466. https://doi.org/10.1109/ACCESS.2019.2963675