
[D] Deploying Deep Learning in Real-Time Video Chat

I’m working on a video chat application that modifies each frame in real time with deep learning. I’m fairly new to backend web development and model deployment, so I’ve been struggling to find the most efficient approach. The options I’ve come up with are:

  1. Run inference in the browser with TensorFlow.js, and stream video with WebRTC. As long as TensorFlow.js is fast enough, this seems to be the simplest solution. But I’m not sure TensorFlow.js running on a CPU will be able to keep up in real time.
  2. Run inference on a server. This would give me control over the hardware the model runs on, so performance would be more consistent. However, it wouldn’t be P2P, which might add latency and make it harder to scale.
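A minimal sketch of the browser-side pipeline from option 1, with the model call stubbed out as a hypothetical `stylize(frame)` function (in practice this would wrap a TensorFlow.js `model.predict` on a tensor built from the video element). The loop also times each frame, which is the quickest way to answer the "is CPU inference real-time?" question empirically:

```javascript
// Given a measured per-frame inference time (ms), return the highest
// common frame rate (30, 24, or 15 fps) the pipeline can sustain.
function sustainableFps(inferenceMs) {
  for (const fps of [30, 24, 15]) {
    if (inferenceMs <= 1000 / fps) return fps;
  }
  return 0; // slower than 15 fps: not usable for real-time video
}

// Per-frame loop: grab the current video frame, run the (hypothetical)
// model, draw the result, then schedule the next frame.
async function processLoop(video, canvas, stylize) {
  const ctx = canvas.getContext("2d");
  const start = performance.now();
  const output = await stylize(video); // model inference: the expensive step
  ctx.drawImage(output, 0, 0);
  const elapsed = performance.now() - start;
  console.log(`frame: ${elapsed.toFixed(1)} ms, ` +
              `sustainable fps: ${sustainableFps(elapsed)}`);
  requestAnimationFrame(() => processLoop(video, canvas, stylize));
}
```

Running a loop like this on a few target devices before committing to either architecture gives a concrete per-frame budget (33 ms for 30 fps) to decide between the two options.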

Is there another solution I’m missing? So far I’ve just been trying everything to see what works, which is quite time-consuming. If you have experience deploying deep learning models, please let me know what you would suggest.

submitted by /u/Juggling_Rick