Guidefill: GPU accelerated, artist guided geometric inpainting for 3D conversion of film
SIAM Journal on Imaging Sciences
Hocking, L., MacKenzie, R., & Schönlieb, C. (2017). Guidefill: GPU accelerated, artist guided geometric inpainting for 3D conversion of film. SIAM Journal on Imaging Sciences, 10 (4), 2049-2090. https://doi.org/10.1137/16M1103737
The conversion of traditional film into stereo 3D has become an important problem in the past decade. One of the main bottlenecks is a disocclusion step, which in commercial 3D conversion is usually done by teams of artists armed with a toolbox of inpainting algorithms. A current difficulty in this is that most available algorithms either are too slow for interactive use or provide no intuitive means for users to tweak the output. In this paper we present a new fast inpainting algorithm based on transporting along automatically detected splines, which the user may edit. Our algorithm is implemented on the GPU and fills the inpainting domain in successive shells that adapt their shape on the fly. In order to allocate GPU resources as efficiently as possible, we propose a parallel algorithm to track the inpainting interface as it evolves, ensuring that no resources are wasted on pixels that are not currently being worked on. Theoretical analyses of the time and processor complexity of our algorithm without and with tracking (as well as numerous numerical experiments) demonstrate the merits of the latter. Our transport mechanism is similar to the one used in coherence transport [F. Bornemann and T. März, J. Math. Imaging Vision, 28 (2007), pp. 259-278; T. März, SIAM J. Imaging Sci., 4 (2011), pp. 981-1000] but improves upon it by correcting a "kinking" phenomenon whereby extrapolated isophotes may bend at the boundary of the inpainting domain. Theoretical results explaining this phenomenon and its resolution are presented. Although our method ignores texture, in many cases this is not a problem due to the thin inpainting domains in 3D conversion. Experimental results show that our method can achieve a visual quality that is competitive with the state of the art while maintaining interactive speeds and providing the user with an intuitive interface to tweak the results.
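The shell-by-shell filling with interface tracking described in the abstract can be illustrated with a minimal sketch. This is a hypothetical single-threaded simplification, not the paper's GPU implementation or its spline-guided transport: unknown pixels are filled in successive shells by averaging known 4-neighbours, and an explicit "active front" set is maintained so that each pass touches only pixels adjacent to known data, mirroring the idea of not wasting resources on pixels that are not currently being worked on.

```python
# Hypothetical sketch of shell-based inpainting with front tracking.
# All names here are illustrative; the actual Guidefill algorithm
# transports colour along user-editable splines on the GPU.

def inpaint_shells(img, mask):
    """img: 2D list of floats; mask: 2D list of bools, True = unknown."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    unknown = {(i, j) for i in range(h) for j in range(w) if mask[i][j]}

    def neighbours(i, j):
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                yield ni, nj

    # Initial front: unknown pixels with at least one known neighbour.
    front = {p for p in unknown
             if any(n not in unknown for n in neighbours(*p))}
    while front:
        shell = {}
        for i, j in front:  # each front pixel is independent (parallelisable)
            vals = [out[ni][nj] for ni, nj in neighbours(i, j)
                    if (ni, nj) not in unknown]
            shell[(i, j)] = sum(vals) / len(vals)
        for (i, j), v in shell.items():
            out[i][j] = v
        unknown -= front
        # Advance the front: unknown pixels touching the just-filled shell.
        front = {n for p in front for n in neighbours(*p) if n in unknown}
    return out
```

In the paper's GPU setting, the key point this sketch hints at is that every pixel of a shell can be filled concurrently, and tracking the front keeps the number of active threads proportional to the interface length rather than the image size.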
The work of the first author was supported by the Cambridge Commonwealth Trust and the Cambridge Center for Analysis. The work of the third author was supported by the Leverhulme Trust project Breaking the Nonconvexity Barrier, the EPSRC grants EP/M00483X/1 and EP/N014588/1, the Cantab Capital Institute for the Mathematics of Information, the CHiPS (Horizon 2020 RISE project grant), the Global Alliance project “Statistical and Mathematical Theory of Imaging,” and the Alan Turing Institute.
Alan Turing Institute (unknown)
European Commission Horizon 2020 (H2020) Marie Skłodowska-Curie actions (691070)
Leverhulme Trust (RPG-2015-250)
European Commission Horizon 2020 (H2020) Marie Skłodowska-Curie actions (777826)
External DOI: https://doi.org/10.1137/16M1103737
This record's URL: https://www.repository.cam.ac.uk/handle/1810/292707