[D] Has anybody tried to do an Audio DeepDream that learns on datasets of everyday sounds rather than music?
I think the core reason DeepDream doesn't work well on music is that it's trained directly on music itself.
That's like training a painter on paintings without ever letting him see any real-world objects beforehand (or, more to my taste, like training a game level designer on other game designs without letting him spend years in the real world picking up useful knowledge about the objects in it).
Could someone train a convolutional (or possibly recurrent) network on a large dataset of bird calls, trucks, and the many other sounds we hear daily (making it learn which category each sound belongs to), and then have it "imagine" on top of an existing track, or instead adapt videoify.py (I know it's getting repetitive) to progressively build a song? Something like the rough sketch below is what I have in mind.
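Here's a minimal PyTorch sketch of the idea, with plenty of assumptions: SoundNet is just a hypothetical stand-in for a classifier pretrained on everyday sounds, and the spectrogram is a random placeholder where a real track's log-mel spectrogram would go. The "dreaming" part is just the standard image DeepDream trick, gradient ascent on an intermediate layer's activations:

```python
import torch
import torch.nn as nn

# Stand-in for a classifier pretrained on everyday sounds (bird calls,
# trucks, ...); in practice you'd load real pretrained weights here.
class SoundNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SoundNet().eval()

# Start from the log-mel spectrogram of an existing track
# (shape: batch x 1 x mel_bins x frames); random placeholder here.
spec = torch.randn(1, 1, 64, 256)
spec.requires_grad_(True)

optimizer = torch.optim.Adam([spec], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    acts = model.features(spec)
    # Gradient ascent on the input: maximize the activations of a
    # chosen conv layer, exactly as image DeepDream does.
    loss = -acts.norm()
    loss.backward()
    optimizer.step()

# spec now holds the "dreamed" spectrogram; you'd still need to invert
# it back to audio, e.g. with Griffin-Lim (torchaudio.transforms.GriffinLim).
```

Whether the result sounds like anything interesting probably hinges on the inversion step and on which layer you maximize, the same way layer choice changes what image DeepDream hallucinates.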
submitted by /u/ad48hp