[P] Deploy models to AWS
Hi everyone,
I posted here ~4 months ago about an open source ML platform I’m working on. The main feedback I got from the community was to add support for more frameworks besides TensorFlow and to focus on making model deployment as simple as possible.
The latest version connects TF Serving, ONNX Runtime, and Flask to automatically deploy TensorFlow, PyTorch, XGBoost, and other models as web APIs. It also uses Docker and Kubernetes behind the scenes to autoscale endpoints, run rolling updates, and support CPU and GPU inference.
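To make the "models as web APIs" idea concrete, here is a minimal sketch of the kind of prediction endpoint such a platform generates behind the scenes. This is purely illustrative and not the platform's actual code: it uses only the Python standard library, and the `predict` stub stands in for where a real deployment would call a TF Serving or ONNX Runtime session.

```python
# Hypothetical sketch of an auto-generated prediction API (stdlib only).
# In a real deployment, predict() would invoke TF Serving or an ONNX
# Runtime InferenceSession instead of this stub.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Stub model: returns the mean of the input features as a score."""
    return {"score": sum(features) / len(features)}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"features": [1.0, 2.0, 3.0]}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))

        # Run inference and send the result back as JSON
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # The platform would containerize this and put it behind a
    # Kubernetes service with autoscaling; here we just serve locally.
    HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

The point of the platform, as I understand the post, is that users never write this layer themselves: the serving, containerization, and scaling are handled automatically.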
Your feedback was really useful last time, so I’d love to hear your thoughts on this version.
submitted by /u/ospillinger