
[P] Deploy models to AWS

Hi everyone,

I posted here ~4 months ago about an open-source ML platform I’m working on. The main feedback I got from the community was to add support for more frameworks besides TensorFlow and to focus on making model deployment as simple as possible.

The latest version connects TF Serving, ONNX Runtime, and Flask to automatically deploy TensorFlow, PyTorch, XGBoost, and other models as web APIs. It also uses Docker and Kubernetes behind the scenes to autoscale endpoints, run rolling updates, and support CPU and GPU inference.
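For context on the kind of serving layer this automates, here's a rough sketch of a hand-rolled Flask + ONNX Runtime endpoint; this is not Cortex's actual interface, and the model path, request shape, and route name are just placeholders:

```python
# Rough sketch of the Flask + ONNX Runtime serving pattern the platform automates.
# "model.onnx" and the request/response shapes are placeholders, not Cortex's API.
import numpy as np
import onnxruntime as ort
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the exported model once at startup.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"instances": [[0.1, 0.2, ...], ...]}.
    instances = np.array(request.json["instances"], dtype=np.float32)
    outputs = session.run(None, {input_name: instances})
    return jsonify({"predictions": outputs[0].tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The platform takes care of this serving layer, plus the Docker and Kubernetes plumbing around it (autoscaling, rolling updates, CPU/GPU inference), so you don't write it by hand.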

Your feedback was really useful last time, so I’d love to hear your thoughts on this version.

https://github.com/cortexlabs/cortex

submitted by /u/ospillinger