So I’ve used Serpent, mss, PIL, and a custom buffer transfer from Windows into numpy, and no matter what, capture tops out at around 60 fps. So we have roughly 60 fps capture running in its own thread. Then I have YOLOv3-tiny, which, when reading from an on-disk video, can process a frame in 10–20 ms. Cool. Also about 60 fps.
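For reference, here’s a minimal sketch of the kind of threaded capture loop described above, using mss to grab the screen into a numpy array on a background thread. The monitor choice and variable names are assumptions for illustration, not the OP’s actual code:

```python
# Sketch: screen capture on its own thread so grabbing frames never blocks inference.
import threading
import numpy as np
import mss

latest_frame = None
lock = threading.Lock()

def capture_loop():
    global latest_frame
    with mss.mss() as sct:
        monitor = sct.monitors[1]        # primary monitor; adjust as needed
        while True:
            shot = sct.grab(monitor)     # raw BGRA screenshot
            frame = np.asarray(shot)     # into a numpy array
            with lock:
                latest_frame = frame

threading.Thread(target=capture_loop, daemon=True).start()
```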
I think I’m losing an additional ~15 ms when I do the expand, and I think it’s really the resize. You have an image as a numpy array, you expand it (which I think just adds a batch dimension), then you resize it to the model’s input size. Those two lines are killing me.
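My guess at what those two lines look like (the cv2 call and the 416×416 target size are assumptions). The dimension expansion itself is essentially free; the interpolation in the resize is what costs time:

```python
import cv2
import numpy as np

def preprocess(frame, size=(416, 416)):
    # Resizing to the network's input resolution is the expensive step.
    resized = cv2.resize(frame, size, interpolation=cv2.INTER_LINEAR)
    # expand_dims just adds a batch axis (it returns a view, essentially free).
    return np.expand_dims(resized, axis=0)
```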
For people doing things like real-time gameplay, how are you handling your input pipeline?
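For context, one common arrangement (a sketch, not necessarily the OP’s setup) is a bounded queue between the capture thread and the inference loop, dropping stale frames so preprocessing never backs up capture. `grab_frame` and `run_yolo` are hypothetical placeholders:

```python
import queue
import threading

frames = queue.Queue(maxsize=1)

def producer():
    while True:
        frame = grab_frame()             # e.g. the mss loop above
        if frames.full():
            try:
                frames.get_nowait()      # drop the stale frame
            except queue.Empty:
                pass
        frames.put(frame)

def consumer():
    while True:
        frame = frames.get()
        detections = run_yolo(preprocess(frame))
        # ... act on detections ...

threading.Thread(target=producer, daemon=True).start()
consumer()
```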
Edit: also, I’ve read that PyTorch is faster (I’m dying from reimplementing this thing so many times, though). Would anyone agree?
submitted by /u/halfassadmin