
[D] I think Neural Nets can help r/estoration

The idea of r/estoration is to restore old images and remove the little specks, tears, discolorations, etc. I think this process could be largely automated for your average image with neural networks; recovering data from an image has already been done (denoising autoencoders and super resolution, for example). The catch is that one of the most valuable things in deep learning (IMO) is the dataset: plenty of autoencoder/GAN architectures are out there, but they aren't worth anything if they can't be trained. Hopefully someone, or multiple people (which would obviously make things faster), reading this is motivated to build a dataset.

So I am just hoping to spread the word. If anyone has the patience to make a dataset, or even the resources to train a model and host a website people can use to restore their images, plenty of people will be grateful. I really think there is something to be done here.
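To make the dataset idea concrete, here is a minimal sketch of how such paired data could be organized and loaded for training. Everything here is an assumption on my part, not something from the post: two parallel folders (`damaged/` and `restored/`) with matching filenames, images resized to a fixed size and normalized to [0, 1].

```python
# Hypothetical loader for paired restoration data.
# Assumed layout (not specified in the post):
#   damaged/  - the original scans with specks, tears, discoloration
#   restored/ - the manually restored versions, same filenames
import os
import numpy as np
from PIL import Image

def load_pairs(damaged_dir="damaged", restored_dir="restored", size=(256, 256)):
    xs, ys = [], []
    for name in sorted(os.listdir(damaged_dir)):
        dmg = Image.open(os.path.join(damaged_dir, name)).convert("RGB").resize(size)
        res = Image.open(os.path.join(restored_dir, name)).convert("RGB").resize(size)
        xs.append(np.asarray(dmg, dtype=np.float32) / 255.0)
        ys.append(np.asarray(res, dtype=np.float32) / 255.0)
    # x: damaged inputs, y: restored targets
    return np.stack(xs), np.stack(ys)
```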

Some people will probably ask, "can't you do it yourself?" In theory I can; in practice I don't have the patience to gather 500+ images and edit them, and I have an R9 390, so I am a bit limited on the hardware side (I use PlaidML + Keras on Windows 10, for those wondering). Plus, if I spread the word, hopefully someone with far more resources and knowledge than me can do something 10x better than I could.

Speaking of knowledge, I have thought about the architecture of the network. To start, an autoencoder would be easier (maybe a GAN later): a fully convolutional one, so that it can accept most image sizes, with strides of 2 instead of pooling, that outputs a 3-channel image which is then added to the input image to produce the final output (to reduce blurriness), and maybe DSSIM as the loss. I will just leave that out there in case someone wants a rough "guide" instead of using a random autoencoder; a sketch of what that could look like is below.
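Here is a minimal sketch of that idea in tf.keras (the post uses PlaidML + Keras; the same structure carries over). The layer widths, activations, and optimizer are my own guesses, and DSSIM is implemented here as (1 - SSIM) / 2 using TensorFlow's built-in SSIM; images are assumed to be normalized to [0, 1].

```python
# Sketch of the fully convolutional residual autoencoder described above.
# Note: with two stride-2 downsamplings, image height/width should be
# divisible by 4 so the decoder output lines up with the input for the Add.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_restoration_net():
    inp = layers.Input(shape=(None, None, 3))

    # Encoder: stride-2 convolutions instead of pooling.
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)

    # Decoder: transposed convolutions back up to the input resolution.
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)

    # Predict a 3-channel residual and add it to the input image,
    # so the network only has to learn the corrections.
    residual = layers.Conv2D(3, 3, padding="same", activation="tanh")(x)
    out = layers.Add()([inp, residual])
    return Model(inp, out)

def dssim_loss(y_true, y_pred):
    # DSSIM = (1 - SSIM) / 2, with images in [0, 1].
    return (1.0 - tf.image.ssim(y_true, y_pred, max_val=1.0)) / 2.0

model = build_restoration_net()
model.compile(optimizer="adam", loss=dssim_loss)
```

With the paired loader above, training would just be `x, y = load_pairs()` followed by `model.fit(x, y, ...)`; swapping the decoder for a GAN generator/discriminator setup later would keep the same residual output idea.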

submitted by /u/fuckEAandTheirGame