[D] Feasibility of running an ML model on phone hardware?
I’ve trained a TensorFlow model that takes my RTX 2080 several seconds per action (plus 20–30 seconds to initialize the model). I’ve been looking into turning this into an iOS/Android app running on TensorFlow Lite, but aside from the technical challenge of converting the model to TensorFlow Lite and everything else, I’m wondering whether this is feasible on phone hardware at all: even on a reasonably modern phone with a built-in GPU, would inference likely be too slow for practical use?

Can anyone who has built an iOS/Android app with TensorFlow Lite, where the phone does the computation, comment on performance and other practical considerations? The only other option, having requests served by my own server(s) on AWS for example, would turn into a major expense if the app saw significant use.
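For concreteness, this is roughly the conversion path and quick sanity benchmark I have in mind. It’s a minimal sketch, not tested against my actual model: `saved_model_dir` is a placeholder for wherever the TF2 SavedModel was exported, and the float32 input is an assumption about how it was trained.

```python
import time

import numpy as np
import tensorflow as tf

# Convert the trained model to TensorFlow Lite.
# "saved_model_dir" is a placeholder path for the exported TF2 SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
# Dynamic-range quantization: shrinks the model and usually speeds up
# on-device inference, at some cost in accuracy.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Rough desktop benchmark with the TFLite interpreter. On-phone numbers
# will differ, but this catches "seconds per inference" problems early.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
# Assumes a fixed input shape and float32 inputs (dynamic-range
# quantization keeps float inputs/outputs).
dummy = np.random.random_sample(inp["shape"]).astype(np.float32)

interpreter.set_tensor(inp["index"], dummy)
start = time.perf_counter()
interpreter.invoke()
print(f"Single inference: {time.perf_counter() - start:.3f}s")
```

A desktop TFLite timing won’t map directly to phone hardware, but my thinking is that if it’s already multiple seconds there, a mobile GPU delegate is unlikely to make it practical.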