[D] ML Inference optimization, runtimes, compilers

I’m doing a study on inference latency. What are the different ways to optimize a model for this? Let’s say the goal is to get inference latency as low as possible. I’ve heard of ONNX Runtime (apparently used by Microsoft in production) and compilers such as Intel nGraph, TVM, and Intel OpenVINO. Are these kinds of tools used in production, or do most companies just serve models with PyTorch and TensorFlow in inference mode? If anyone here has experience from unique deployments, I’d love to hear about it!
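
One common starting point, offered as a minimal sketch rather than a definitive answer: export the PyTorch model to ONNX and benchmark it under ONNX Runtime against eager PyTorch inference. The model (torchvision's resnet18), input shape, and run counts below are illustrative assumptions, not a production recipe.

```python
# Minimal latency comparison: eager PyTorch vs. ONNX Runtime on CPU.
# Assumes torch, torchvision (>= 0.13 for weights=None), and onnxruntime
# are installed; resnet18 stands in for whatever model you actually serve.
import time

import torch
import torchvision.models as models
import onnxruntime as ort

model = models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

# Export to ONNX; opset 13 is a widely supported choice.
torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=13,
                  input_names=["input"], output_names=["output"])

session = ort.InferenceSession("resnet18.onnx",
                               providers=["CPUExecutionProvider"])

def bench(fn, runs=100):
    """Average wall-clock milliseconds per call after one warm-up run."""
    fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000

with torch.no_grad():
    eager_ms = bench(lambda: model(dummy))
ort_ms = bench(lambda: session.run(None, {"input": dummy.numpy()}))

print(f"PyTorch eager: {eager_ms:.2f} ms/inference")
print(f"ONNX Runtime:  {ort_ms:.2f} ms/inference")
```

The same ONNX file can then be fed to TVM or OpenVINO's model converter for further compilation, which is typically where the hardware-specific gains the post asks about come from.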

submitted by /u/dilledalle
