A few days ago I read an article introducing ShuffleNet V2.
Out of curiosity, I trained ShuffleNet V2 on the MNIST dataset and got 99.14% accuracy.
(code here: https://github.com/allen108108/Model_on_MNIST/blob/master/MNIST%20-%20ShuffleNetV2.ipynb)
The result looks good, but I got higher accuracy (almost 99.5%) when I trained a typical CNN model on the same dataset
(code here: https://github.com/allen108108/Model_on_MNIST/blob/master/MNIST%20-%20CNN.ipynb)
Is that normal? Did I make a mistake somewhere?
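For context, here is a minimal sketch (assuming TensorFlow/Keras; this is not the exact notebook linked above, just a hypothetical baseline) of the kind of small CNN that commonly reaches roughly 99.4–99.5% test accuracy on MNIST after a few epochs:

```python
# Hypothetical MNIST baseline CNN (sketch only, not the author's notebook).
# Small conv stack + dropout; typically ~99.4-99.5% test accuracy on MNIST.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn():
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
# Sanity check: a batch of 4 dummy images maps to 4 probability vectors over 10 classes.
probs = model.predict(np.zeros((4, 28, 28, 1), dtype="float32"), verbose=0)
print(probs.shape)
```

A plain CNN like this beating ShuffleNet V2 on MNIST is not surprising: ShuffleNet V2 is designed for efficiency on larger inputs (e.g. 224×224 ImageNet images), so its aggressive downsampling and channel shuffling give little benefit on tiny 28×28 digits.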
PS: the ShuffleNet V2 implementation I used: https://github.com/opconty/keras-shufflenetV2
submitted by /u/allen108108