
[D] Jurgen Schmidhuber really had GANs in 1990

He did not call it a GAN; he called it curiosity. It's actually famous work, with many citations in the papers on intrinsic motivation and exploration, although I bet many GAN people don't know this yet.

I learned about it through his inaugural tweet on their miraculous year. I knew LSTM, but I did not know that he and Sepp Hochreiter did all those other things 30 years ago.

The blog sums it up in Section 5, "Artificial Curiosity Through Adversarial Generative Neural Networks (1990)":

The first NN is called the controller C. C (probabilistically) generates outputs that may influence an environment. The second NN is called the world model M. It predicts the environmental reactions to C’s outputs. Using gradient descent, M minimises its error, thus becoming a better predictor. But in a zero sum game, C tries to find outputs that maximise the error of M. M’s loss is the gain of C.

That is, C is motivated to invent novel outputs or experiments that yield data that M still finds surprising, until the data becomes familiar and eventually boring. Compare more recent summaries and extensions of this principle, e.g., [AC09].
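To make the quoted loop concrete, here is a toy sketch of the two-player dynamics, under assumptions of my own: a 1-D environment, a linear world model, and a deterministic controller parameter. None of these names or choices come from the 1990 report; it only illustrates "M minimises its error, C maximises it".

```python
import numpy as np

# Illustrative toy, not the original 1990 setup:
# M is a linear predictor, C a single scalar output.

def environment(action):
    # The environmental reaction that M has to learn to predict.
    return np.sin(3.0 * action)

w, b = 0.0, 0.0        # world model M: y_hat = w * action + b
action = 0.1           # controller C's current output

lr_m, lr_c = 0.1, 0.05
for step in range(200):
    y = environment(action)
    err = (w * action + b) - y
    # M minimises its squared prediction error via gradient descent...
    w -= lr_m * 2.0 * err * action
    b -= lr_m * 2.0 * err
    # ...while C does gradient *ascent* on the same error:
    # M's loss is C's gain (zero-sum). Finite difference for brevity.
    eps = 1e-4
    def m_sq_err(a, w=w, b=b):
        return ((w * a + b) - environment(a)) ** 2
    grad_c = (m_sq_err(action + eps) - m_sq_err(action - eps)) / (2 * eps)
    # Clip keeps the toy stable; a real agent is constrained by its environment.
    action = float(np.clip(action + lr_c * grad_c, -2.0, 2.0))
```

As M's predictions improve, the error at C's current output shrinks (the data gets "boring"), so the ascent step pushes C toward outputs M still gets wrong.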

GANs are an application of Adversarial Curiosity [AC90] where the environment simply returns whether C’s current output is in a given set [AC19].
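The same loop with that set-membership environment can be sketched as follows; the target set, the logistic model, and all names here are my own illustrative assumptions, and no convergence is claimed for this tiny version.

```python
import numpy as np

# Toy version of the specialisation just quoted: the environment only
# reports whether C's output lies in a given set (here the interval [1, 2]).

rng = np.random.default_rng(1)

def in_target_set(x):
    return 1.0 if 1.0 <= x <= 2.0 else 0.0   # the "real data" set

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.0, 0.0     # M: logistic membership predictor (discriminator role)
mu = -1.0           # C: mean of a noisy generator (generator role)

lr = 0.1
for step in range(300):
    x = mu + 0.1 * rng.standard_normal()     # C generates probabilistically
    label = in_target_set(x)
    p = sigmoid(w * x + b)
    # M minimises cross-entropy on the membership signal
    g = p - label
    w -= lr * g * x
    b -= lr * g
    # C ascends M's squared error at its mean (finite difference)
    eps = 1e-3
    def m_err(m, w=w, b=b):
        return (sigmoid(w * m + b) - in_target_set(m)) ** 2
    grad = (m_err(mu + eps) - m_err(mu - eps)) / (2 * eps)
    mu = float(np.clip(mu + lr * grad, -3.0, 3.0))
```

Squint and the roles line up with a GAN: M plays the discriminator on the membership signal, and C plays the generator that profits from M's mistakes.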

So I read the referenced papers. AC19 is a kind of modern guide to the old report AC90, where the adversarial part first appeared in the section "Implementing Dynamic Curiosity and Boredom" and the generative part in the section "Explicit Random Actions versus Imported Randomness", which is like GANs versus conditional GANs. AC09 is a survey from 2009 and sums it up: maximise reward for prediction error.

I know that Ian Goodfellow says he is the inventor of GANs, but he must have been a little boy when Jurgen did this in 1990. It is also funny that Yann LeCun described GANs as "the coolest idea in machine learning in the last twenty years," although Jurgen had it thirty years ago.

No, it is NOT the same as predictability minimisation; that is yet another adversarial game he invented, in 1991, described in section 7 of his explosive blog post, which contains additional jaw-droppers.

submitted by /u/siddarth2947