[D] Hosting multiple large models online
I want to host some of my models as APIs for a side project, and I'm wondering if anyone has done this on typical web hosts and how well it would go. I'm looking to host some BERT models, for example, which take a few seconds per query even with GPU inference, so is this doable on a web host's CPUs?
The TalkToTransformer website seems to be hosting GPT-2 models and handles queries quite well; does anyone know how that is running?
Obviously I'd like to keep costs low, otherwise I would just rent a server with a GPU.
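For context, the kind of endpoint I have in mind is roughly this stdlib-only sketch (the `classify` function is a stub standing in for real BERT inference, which is the part I'm worried would be too slow on shared CPU hosting):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for actual model inference. On a real deployment this
# would load a (likely quantized) BERT model once at startup and run it
# on CPU for each request -- the expensive part of the question.
def classify(text: str) -> dict:
    return {"label": "positive" if "good" in text.lower() else "negative"}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the model, return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(classify(payload.get("text", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port: int = 8901) -> HTTPServer:
    # Run the server on a daemon thread so the caller keeps control.
    server = HTTPServer(("127.0.0.1", port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A client would then POST `{"text": "..."}` to the port and get a JSON prediction back; the open question is whether the real `classify` call stays fast enough on web-host CPUs.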
submitted by /u/mukaj