[P] MAX: Open Deep Learning models on Docker containers
Hello!
I work for an open-source team at IBM. For a year now we have been working on a project called the Model Asset eXchange (MAX). The goal of the project is to standardize DL model deployment and consumption: making it easier to integrate DL models into web apps and services, or to deploy them on any cloud platform. So far we have around 25 models in the project. Most of the underlying models are SOTA open-source models from various sources and model zoos (TensorFlow, PyTorch, Google Research, IBM Research, etc.). What this project adds is a standardized REST API interface to every model, containerization, and inference-time optimizations such as loading the graph just once and reusing it for every API call (a minimal sketch of that pattern is below).

Each model has its own GitHub repo, and for convenience we have also hosted the Docker containers on a public endpoint so people can try them out. Where possible we have also extended deployment channels to other avenues, such as Node-RED (npm), CodePen, and demo web apps. I would love your feedback/suggestions and, of course, welcome any issues/pull requests on the underlying GitHub repos as well!
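For anyone curious what the "load the graph once" optimization looks like in practice, here is a minimal sketch of the pattern; this is not the actual MAX wrapper code, just a hypothetical Flask app that loads a frozen TensorFlow 1.x graph at startup and reuses the same session for every request. The tensor names, file path, and route are illustrative placeholders.

```python
# Minimal sketch of the "load once, infer per request" pattern (not the MAX source).
import tensorflow as tf  # TF 1.x style API assumed
from flask import Flask, request, jsonify

app = Flask(__name__)

class ModelWrapper:
    """Loads the frozen graph once at startup; every request reuses the same session."""
    def __init__(self, graph_path="model/frozen_graph.pb"):  # placeholder path
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(graph_path, "rb") as f:
            graph_def.ParseFromString(f.read())
        self.graph = tf.Graph()
        with self.graph.as_default():
            tf.import_graph_def(graph_def, name="")
        self.sess = tf.Session(graph=self.graph)

    def predict(self, inputs):
        # "input:0" / "output:0" are placeholder tensor names
        return self.sess.run("output:0", feed_dict={"input:0": inputs})

model = ModelWrapper()  # constructed exactly once, when the container starts

@app.route("/model/predict", methods=["POST"])
def predict():
    inputs = request.get_json()["inputs"]  # pre/post-processing omitted for brevity
    outputs = model.predict(inputs)
    return jsonify({"status": "ok", "predictions": outputs.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```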
Project link: https://developer.ibm.com/exchanges/models/
A model with all deployment options: https://developer.ibm.com/exchanges/models/all/max-object-detector/
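As a usage example, this is roughly how you would query the Object Detector once its container is running locally. The image name, port, endpoint, form field, and response keys follow the conventions in the MAX READMEs as I recall them, so treat them as assumptions and check the model's repo for the exact values.

```python
# Rough usage sketch for the MAX Object Detector REST API.
# Assumes the container was started with something like:
#   docker run -it -p 5000:5000 codait/max-object-detector
# (image name/tag per the model's README; verify against the repo)
import requests

MODEL_ENDPOINT = "http://localhost:5000/model/predict"  # endpoint per the Swagger docs the container serves

with open("dog.jpg", "rb") as f:
    resp = requests.post(MODEL_ENDPOINT, files={"image": ("dog.jpg", f, "image/jpeg")})

resp.raise_for_status()
for pred in resp.json().get("predictions", []):
    # expected keys: label, probability, detection_box (again, verify against the repo)
    print(pred["label"], pred["probability"], pred["detection_box"])
```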
Note: each model's GitHub repo is linked separately via the "Get this Model" button on its page.