I ran an experiment comparing the latency and cost of serving real-time inference with GPT-2 on AWS using GPUs versus CPUs. I also added some context on how latency maps to real-world product requirements:
https://towardsdatascience.com/how-much-difference-do-gpus-make-in-model-serving-c40b885ac096?
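The linked experiment measures inference latency across hardware. As a minimal sketch of that kind of benchmark loop (the `predict` function here is a hypothetical stand-in for an actual GPT-2 forward pass, not code from the article):

```python
import time
import statistics

def predict(text):
    # Placeholder for a real model call (e.g., a GPT-2 forward pass);
    # simulated work keeps the benchmark loop runnable on its own.
    return sum(ord(c) for c in text) % 50257  # 50257 = GPT-2 vocab size

def benchmark(fn, prompt, runs=100):
    """Time `runs` calls to fn(prompt) and report latency percentiles in ms."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(prompt)
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
    }

stats = benchmark(predict, "Hello, Toronto AI!")
print(stats)
```

Reporting tail latency (p95) alongside the median matters for product requirements, since a user-facing service is judged by its slowest common responses, not its average.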
submitted by /u/calebkaiser