[D] Feasibility of running an ML model on phone hardware?

I’ve trained a TensorFlow model that takes several seconds per action on my RTX 2080 (plus 20-30 seconds to initialize the model). I’ve been looking into turning this into an iOS/Android app running on TensorFlow Lite, but apart from the technical challenge of converting the model to TensorFlow Lite and everything else, I’m wondering about the feasibility of running it on phone hardware: even on a reasonably modern phone with a built-in GPU, would this still likely be too slow for practical purposes? Can anyone who has built an iOS/Android app with TensorFlow Lite, where the phone is responsible for the computation, comment on performance and other practical considerations? The only other option, having requests served by my own server(s) on AWS for example, would turn into a major expense if the app saw significant use.
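
For anyone at the conversion step the poster mentions, here is a minimal sketch, assuming TensorFlow 2.x and a model exported in SavedModel format; the `saved_model_dir` and `model.tflite` paths are placeholders, and the timing loop is only a rough desktop sanity check, not a substitute for on-device measurement:

```python
import time

import numpy as np
import tensorflow as tf

# Convert a trained model (exported with tf.saved_model.save or
# model.save) to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Optional: default post-training optimizations (weight quantization),
# which usually shrink the model and speed up on-device inference at
# some cost in accuracy.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Rough sanity check of per-inference latency before touching a phone:
# run the TFLite interpreter on dummy input.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)

start = time.perf_counter()
interpreter.invoke()
print(f"one inference: {time.perf_counter() - start:.3f}s")
print("output shape:", interpreter.get_tensor(out["index"]).shape)
```

On the phone side, TensorFlow Lite also provides GPU delegates for both iOS and Android, though whether a model that needs seconds per action on an RTX 2080 becomes practical on-device will depend heavily on its size and architecture.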

submitted by /u/hanyuqn

