[D] Best way to convert ordinary images into MNIST format, for testing a CNN trained on the MNIST dataset?
I have just trained my first CNN on the MNIST dataset, the well-known handwritten-digit dataset. However, instead of using its test images, I want to use my own 28×28 test images.
The rationale is that I want to build a handwriting recognition program, so I need a way to convert an ordinary image file into the format MNIST uses (a 28×28 grayscale array, often flattened into a one-dimensional vector) so the CNN can read it.
What is the best way to accomplish this task?
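To frame the question, here is a rough sketch of the preprocessing I have in mind, assuming the network was trained on standard MNIST (28×28 grayscale, white digit on a black background, pixel values scaled to [0, 1]); the file name and the Keras-style input shape are just placeholders:

```python
import numpy as np
from PIL import Image

def image_to_mnist(path):
    """Convert an ordinary image file into an MNIST-style array."""
    img = Image.open(path).convert("L")          # load and convert to grayscale
    img = img.resize((28, 28), Image.LANCZOS)    # match the MNIST resolution
    arr = np.asarray(img, dtype=np.float32) / 255.0  # scale pixels to [0, 1]
    arr = 1.0 - arr                              # invert: MNIST digits are white on black
    return arr.reshape(1, 28, 28, 1)             # batch of one, single channel

x = image_to_mnist("my_digit.png")               # placeholder file name
# prediction = model.predict(x)                  # with a Keras-style model
```

Is something along these lines reasonable, or is there a better-established pipeline (centering the digit, matching the MNIST stroke thickness, etc.) that people use for this?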