[D] This might be better suited for r/learnmachinelearning or Stack Overflow, but I feel I'm a step past that. Any tips on increasing numpy-to-tensor performance?
So I've tried serpent, mss, PIL, and a custom buffer transfer from Windows into numpy, and no matter what, capture tops out at around 60 fps. Fine, so I have roughly 60 fps capture running in its own thread. Then I have YOLOv3-tiny, which, when reading from an on-disk video, can process a frame in 10-20 ms. Cool, that's also about 60 fps.
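For context, this is roughly what I mean by the capture thread (a minimal sketch, not my exact code; the capture region and the drop-stale-frames queue are placeholder choices):

```python
# Minimal sketch of a threaded mss capture loop feeding a small queue.
# REGION and the queue size are assumptions, not values from my real setup.
import queue
import threading

import mss
import numpy as np

REGION = {"top": 0, "left": 0, "width": 1280, "height": 720}  # assumed capture area
frames = queue.Queue(maxsize=2)  # tiny queue so stale frames get dropped, not buffered

def capture_loop(stop_event: threading.Event) -> None:
    with mss.mss() as sct:
        while not stop_event.is_set():
            shot = sct.grab(REGION)              # BGRA screenshot
            frame = np.asarray(shot)[:, :, :3]   # wrap as numpy, drop the alpha channel
            try:
                frames.put_nowait(frame)
            except queue.Full:
                pass                             # drop the frame rather than fall behind

stop = threading.Event()
threading.Thread(target=capture_loop, args=(stop,), daemon=True).start()
```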
I think I'm losing an additional ~15 ms on the expand step, and I suspect the resize as well. You have an image that is a numpy array, you expand it (which I think just adds a dimension), then you resize it to the model's input size. Those two lines are killing me.
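The two lines I mean look roughly like this (a sketch assuming a 416x416 YOLOv3-tiny input and an OpenCV resize; the exact calls in my pipeline may differ). The reordering here, resize while the frame is still small uint8 and leave the float conversion for last, is the kind of thing I'm wondering about:

```python
# Sketch of the preprocessing step; 416x416 and cv2.INTER_LINEAR are assumptions.
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """frame: HxWx3 uint8 BGR image from the capture thread."""
    # Resize while the data is still uint8 (3 bytes/pixel), not float32.
    small = cv2.resize(frame, (416, 416), interpolation=cv2.INTER_LINEAR)
    # expand_dims just adds an axis and returns a view, so it should be nearly free;
    # the real cost is usually the resize plus the uint8 -> float32 copy.
    batched = np.expand_dims(small, axis=0)      # shape (1, 416, 416, 3)
    return batched.astype(np.float32) / 255.0    # normalize last
```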
For people doing things like real-time gameplay, how are you handling your input pipeline?
Edit: also, I've read PyTorch is faster (though I'm dying from reimplementing this thing so many times). Would anyone agree?
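If I did go the PyTorch route, my understanding is the numpy-to-tensor hop itself can be close to free, since torch.from_numpy shares the array's memory, and the float conversion/normalization can happen on the GPU. A hedged sketch of what I mean (shapes and device are assumptions):

```python
# Sketch of moving a captured frame into PyTorch; the frame here is a stand-in.
import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

frame = np.zeros((416, 416, 3), dtype=np.uint8)  # placeholder for a captured frame

tensor = (
    torch.from_numpy(frame)  # zero-copy wrap of the numpy buffer
    .to(device)              # single host -> device copy of the small uint8 data
    .permute(2, 0, 1)        # HWC -> CHW on the GPU
    .unsqueeze(0)            # add the batch dimension
    .float()
    .div_(255.0)             # normalize on the GPU instead of the CPU
)
```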
submitted by /u/halfassadmin