[D] Deploying Deep Learning in Real-Time Video Chat
I’m working on a video chat application that applies a deep learning model to each video frame in real time. I’m fairly new to backend web development and model deployment, so I’ve been struggling to find the most efficient way to do this. The options I’ve come up with are:
- Run inference in the browser with TensorFlow.js, and stream video with WebRTC. As long as TensorFlow.js is fast enough, this seems to be the simplest solution. But I’m not sure whether TensorFlow.js can keep up in real time when it falls back to the CPU.
- Run inference on a server. This would give me control over the hardware the model runs on, so performance would be more consistent. However, the call would no longer be peer-to-peer, which adds a round trip of latency and makes the system harder to scale.
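One way to compare the two options is to check whether per-frame inference fits the frame budget: at 30 fps each frame gets about 33 ms, and that budget has to cover inference plus (for the server option) the network round trip. Here’s a rough sanity-check sketch in Python; `run_inference` is a hypothetical stub standing in for the real model, so substitute an actual forward pass (or add a measured network round-trip time) before trusting the numbers.

```python
import time

def run_inference(frame):
    # Hypothetical stand-in for the real model's forward pass.
    # Replace with your actual inference call; here we just
    # sleep to simulate ~5 ms of compute per frame.
    time.sleep(0.005)
    return frame

def fits_realtime_budget(fps=30, warmup=3, trials=20):
    """Measure average per-frame inference time and compare it
    to the frame budget of 1/fps seconds."""
    frame = [0] * 100  # dummy frame data
    for _ in range(warmup):       # warm-up runs, excluded from timing
        run_inference(frame)
    start = time.perf_counter()
    for _ in range(trials):
        run_inference(frame)
    avg = (time.perf_counter() - start) / trials
    budget = 1.0 / fps
    return avg <= budget, avg, budget

ok, avg, budget = fits_realtime_budget(fps=30)
print(f"avg {avg * 1000:.1f} ms vs budget {budget * 1000:.1f} ms "
      f"-> {'OK' if ok else 'too slow'}")
```

The same measurement idea applies in the browser: time the TensorFlow.js `predict` call over a few frames and see how it compares to your target frame interval before committing to either architecture.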
Is there another solution that I’m missing? So far I’ve just been trying everything to see what works, which is quite time-consuming. If you have experience deploying deep learning models, please let me know what you would suggest.
submitted by /u/Juggling_Rick