I’m building an open source project that combines TensorFlow Serving, ONNX Runtime, and Kubernetes to automate deploying models as autoscaling web APIs on AWS (GitHub). It supports TensorFlow, Keras, PyTorch, Scikit-learn, XGBoost, and other frameworks.
I started working on this when I realized that while there’s been a lot of recent innovation on machine learning libraries like TensorFlow and PyTorch, actually building and shipping production applications is hard. My colleagues and I see a lot of data scientists and developers without DevOps backgrounds struggling to build model serving infrastructure with tools like Docker, Kubernetes, Flask, TensorFlow Serving, ONNX Runtime, and various AWS services. So we decided to combine these tools in an effort to improve the developer experience. It’s available for anyone to download and self-host on their AWS account for free.
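To give a sense of the Kubernetes plumbing a project like this automates, here is a rough, hand-written sketch of what deploying a model server as an autoscaling web API involves. All names and the container image are hypothetical; this is generic Kubernetes configuration, not the project's actual format:

```yaml
# Hypothetical Deployment running a model-serving container,
# paired with a HorizontalPodAutoscaler that scales replicas on CPU usage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-api                  # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: model-api
  template:
    metadata:
      labels:
        app: model-api
    spec:
      containers:
        - name: server
          image: example.com/model-server:latest   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: model-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: model-api
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Writing, wiring, and maintaining this kind of boilerplate (plus the Docker images, load balancing, and AWS setup around it) is exactly the work that tends to trip up teams without DevOps experience.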
I’d love to hear from anyone who has experience deploying models to production, especially about the tooling and workflows that work well for you.
submitted by /u/ospillinger