[Research] Our source code for deep video inpainting!
We are proud to share the source code for "Learnable Gated Temporal Shift Module for Deep Video Inpainting" (Chang et al., BMVC 2019). Building on the Temporal Shift Module for action recognition, we developed a simple module that reduces training and testing time as well as model size for deep free-form video inpainting. It achieves results comparable to our previous work, "Free-form Video Inpainting with 3D Gated Convolution and Temporal PatchGAN" (Chang et al., arXiv 2019), with only 33% of the parameters.
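For readers unfamiliar with the underlying idea, here is a minimal NumPy sketch of the *plain* temporal shift operation that the module builds on (the gated, learnable version in the paper is more involved). The `fold_div=8` split and the zero-padded shifts follow the convention of the original TSM work; the function name and tensor layout here are illustrative assumptions, not the paper's actual API:

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift a fraction of channels along the temporal axis.

    x: array of shape (T, C, H, W). The first C // fold_div channels
    are shifted forward in time, the next C // fold_div are shifted
    backward, and the remaining channels are left unchanged. Boundary
    frames are zero-padded.
    """
    t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]                    # shift forward in time
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]    # shift backward in time
    out[:, 2 * fold:] = x[:, 2 * fold:]               # pass remaining channels through
    return out

# Toy example: 4 frames, 8 channels, 2x2 spatial resolution.
x = np.arange(4 * 8 * 2 * 2, dtype=float).reshape(4, 8, 2, 2)
y = temporal_shift(x)
```

The shift mixes information across neighboring frames at essentially zero cost, which is why it can replace heavier 3D convolutions and cut parameter count.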
Contributions and stars are welcome!
submitted by /u/amjltc295