[N] XNNPACK: High-performance floating-point inference for mobile
https://github.com/google/XNNPACK
Recently found out about this. It works well and is easy to integrate (unlike Arm Compute Library). Wondering when PyTorch will switch to it, since NNPACK is definitely slower.
submitted by /u/Nimitz14