Join our meetup, learn, connect, share, and get to know your Toronto AI community.
I want to host some of my models as APIs for a side project. Has anyone done this on typical web hosts, and how well does it go? For example, I'm looking to host some BERT models, which take a few seconds per query even with GPU inference, so is that doable on web-host CPUs?
The TalkToTransformer website seems to be hosting GPT-2 models and handles queries quite well. Anyone know how that is set up?
Obviously I'd like to keep costs low; otherwise I'd just rent a server with a GPU.
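For what it's worth, the basic pattern on any host is "load the model once at startup, then serve inference behind an HTTP endpoint." Here's a minimal stdlib-only sketch of that pattern; the `run_model` stub and `serve` helper are placeholders I made up (in practice you'd swap the stub for, say, a Hugging Face pipeline loaded at module level — loading it per request would dominate your latency):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_model(text: str) -> dict:
    """Stand-in for real inference. Replace the body with your model call,
    e.g. a transformers pipeline created once at import time, not per request."""
    # Hypothetical real version:
    #   classifier = pipeline("text-classification")  # defined once, globally
    #   return classifier(text)[0]
    return {"chars": len(text)}


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run inference, return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(run_model(payload["text"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging


def serve(port: int = 8000) -> HTTPServer:
    """Start the server on a background thread and return it (port 0 picks a free port)."""
    server = HTTPServer(("127.0.0.1", port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

On CPU-only boxes, dynamic quantization (`torch.quantization.quantize_dynamic`) and smaller distilled models like DistilBERT are the usual ways to pull a few-second BERT query down to something tolerable, though whether a shared web host's CPU keeps up is the open question.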
submitted by /u/mukaj