
[D] Feasibility of running an ML model on phone hardware?

I've trained a TensorFlow model that takes my RTX 2080 several seconds per action (plus 20-30 seconds to initialize the model). I've been looking into turning this into an iOS/Android app running on TensorFlow Lite, but apart from the technical challenge of converting the model to TensorFlow Lite and everything else, I'm wondering about the feasibility of running it on phone hardware: even on a reasonably modern phone with a built-in GPU, would this likely be too slow for practical purposes? Can anyone who has built an iOS/Android app with TensorFlow Lite, where the phone handles the computation, comment on performance and other practical considerations? The only other option, having requests served by my own server(s) on AWS for example, would turn into a major expense if the app saw significant use.
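For reference, here is a minimal sketch of what the conversion step looks like with the standard `tf.lite.TFLiteConverter` API, assuming the model was exported as a SavedModel (the paths `saved_model_dir` and `model.tflite` are placeholders, and the quantization flag is optional):

```python
import numpy as np
import tensorflow as tf

# Convert a SavedModel export to TensorFlow Lite.
# "saved_model_dir" is a placeholder for wherever the model was exported.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Optional: default optimizations enable post-training quantization,
# which typically shrinks the model and speeds up on-device CPU inference.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Sanity-check the converted model with the TFLite interpreter on the desktop
# before shipping it; this also gives a rough feel for per-inference latency.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor matching the model's declared input shape.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
```

Desktop interpreter timings are only a rough proxy; on-device performance depends on the phone's delegate (GPU/NNAPI/Core ML), so benchmarking on the actual target hardware is the real test.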

submitted by /u/hanyuqn