I tried to deploy my Keras model to a server with Flask, and I tracked the memory usage locally.
I found that if I run it on a plain Linux server, the Python process takes only about 200 MB of RAM,
but if I deploy it with Docker, it needs about 800 MB of RAM.
Any suggestions? Should I deploy it with Docker or on a plain Linux server?
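For reference, here is a minimal sketch of the kind of setup described above: a Flask endpoint serving a Keras model, with a memory readout via psutil so the same number can be compared on the bare host and inside the Docker container. The model path "model.h5", the "/predict" route, and the JSON input format are assumptions for illustration, not details from the original post.

```python
import os

import numpy as np
import psutil
from flask import Flask, jsonify, request
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("model.h5")  # hypothetical saved model path


def rss_mb() -> float:
    """Resident memory of this process in MB (compare host vs. container)."""
    return psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2


@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"inputs": [[...feature vector...]]}.
    data = np.array(request.get_json()["inputs"], dtype=np.float32)
    preds = model.predict(data)
    return jsonify({"predictions": preds.tolist(), "rss_mb": rss_mb()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Logging the resident set size from inside the process, rather than reading the container's total usage, helps separate what Python itself allocates from any overhead introduced by the container's base image or memory limits.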
submitted by /u/Uysim