Learn About Our Meetup

5000+ Members

Join our meetup, learn, connect, share, and get to know your Toronto AI community.

Browse through the latest deep learning, AI, and machine learning postings from Indeed for the GTA.

If you are looking to sponsor space, be a speaker, or volunteer, feel free to give us a shout.

[D] Deploying Deep Learning in Real-Time Video Chat

I’m working on a video chat application that modifies each video frame in real time with deep learning. I’m fairly new to backend web development and to deploying models, so I’ve been struggling to find the most efficient way to do this. The options I’ve come up with are:

  1. Run inference in the browser with TensorFlow.js and stream video with WebRTC. As long as TensorFlow.js is fast enough, this seems to be the simplest solution, but I’m not sure TensorFlow.js running on a CPU can keep up in real time.
  2. Run inference on a server. This would give me control over the hardware the model runs on, so performance would be more consistent. However, it wouldn’t be P2P, which might add latency and make it harder to scale.
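To make option 1 concrete, here is a minimal sketch of a browser-side, per-frame inference loop with TensorFlow.js. This is not the poster's code: the model, the `<video>`/`<canvas>` elements, and the name `loop` are all assumptions for illustration. The helper `fitsFrameBudget` captures the core real-time question, i.e. whether measured inference latency fits within the per-frame time budget for a target frame rate.

```javascript
// A frame loop is "real time" only if one inference fits in the per-frame
// budget: e.g. 30 fps leaves 1000/30 ≈ 33 ms per frame for capture,
// inference, and rendering combined.
function fitsFrameBudget(inferenceMs, targetFps) {
  return inferenceMs <= 1000 / targetFps;
}

// Browser-only portion (guarded so the helper above runs anywhere).
// Assumes TensorFlow.js is loaded as the global `tf` and a model has been
// loaded, e.g.: tf.loadGraphModel(MODEL_URL).then(loop);  // MODEL_URL is a placeholder
if (typeof document !== "undefined" && typeof tf !== "undefined") {
  const video = document.querySelector("video");   // local webcam feed
  const canvas = document.querySelector("canvas"); // processed output

  async function loop(model) {
    const t0 = performance.now();
    // Grab the current <video> frame as an HxWx3 tensor.
    const frame = tf.browser.fromPixels(video);
    // Run the model; tf.tidy frees intermediate tensors.
    const out = tf.tidy(() => model.predict(frame.expandDims(0)).squeeze());
    // Draw the modified frame to the canvas (this is the stream you would
    // then send over WebRTC via canvas.captureStream()).
    await tf.browser.toPixels(out, canvas);
    frame.dispose();
    out.dispose();
    const elapsed = performance.now() - t0;
    console.log("frame time:", elapsed.toFixed(1), "ms,",
                "real-time at 30 fps:", fitsFrameBudget(elapsed, 30));
    requestAnimationFrame(() => loop(model));
  }
}
```

Measuring `elapsed` in the loop itself is the quickest way to answer the CPU question empirically: if typical frame times stay under the budget on the slowest target device, option 1 is viable without a server.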

Is there another solution that I’m missing? So far I’ve just been trying everything to see what works, which is quite time-consuming. If you have experience deploying deep learning models, please let me know what you would suggest.

submitted by /u/Juggling_Rick

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.