
[D] Meta-Generative Adversarial Networks for AGI

So I have this idea for creating AGI based on some things I've been reading from Jürgen Schmidhuber.


Let's say you have a neural network (the parent network) which can design arbitrary child networks and learn optimized design patterns for a given task. Of course, your parent network won't be generalized to any type of network design, just relatively specific tasks. This is partly why we don't have "real" AI or AGI: the tasks are still relatively narrow.
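To make the parent/child idea concrete, here's a minimal sketch of a "parent" hypernetwork that emits the weights of a small "child" network conditioned on a task embedding. All names, dimensions, and the linear parent are my own illustrative assumptions, not an established design; in practice the parent would itself be trained, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

TASK_DIM, IN_DIM, HID, OUT_DIM = 4, 3, 8, 2
N_CHILD_PARAMS = IN_DIM * HID + HID * OUT_DIM  # child weight count

# Parent network: here just a single linear map from a task embedding
# to the flattened weights of the child network.
parent_W = rng.normal(scale=0.1, size=(TASK_DIM, N_CHILD_PARAMS))

def design_child(task_embedding):
    """Parent 'designs' a child: produce its weight matrices."""
    flat = task_embedding @ parent_W
    W1 = flat[: IN_DIM * HID].reshape(IN_DIM, HID)
    W2 = flat[IN_DIM * HID:].reshape(HID, OUT_DIM)
    return W1, W2

def child_forward(x, W1, W2):
    """Run the generated child network (a tiny ReLU MLP) on input x."""
    return np.maximum(x @ W1, 0) @ W2

task = rng.normal(size=TASK_DIM)
W1, W2 = design_child(task)
y = child_forward(rng.normal(size=IN_DIM), W1, W2)
print(y.shape)  # (2,)
```

The point is only the plumbing: one network's output *is* another network's parameters, so gradient signal on the child's performance can, in principle, flow back into the parent's design choices.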

Skimming the literature on meta-learning, it looks like researchers have been able to get some generalization by training their meta-networks on multiple tasks. But data, and the need to identify tasks, might limit scalability and high levels of generalization. So I propose a potentially more elegant way.


This is where Jürgen Schmidhuber's PowerPlay would come in. The PowerPlay algorithm is split into a solver and a problem generator. The generator generates novel problems which the solver has to try to solve. Novel problems are problems which are unsolvable by the current solver, and each is just a bit more complicated than the most complicated solvable problem. The solver has to be able to solve all the problems the generator previously created, plus the new one.
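The control loop described above can be sketched in a toy form. This is a deliberate simplification: here a "problem" is just a difficulty number and the "solver" is a single capability value, standing in for trained networks, so it only shows the loop structure (propose slightly harder problem, accept it only if the whole repertoire stays solvable), not PowerPlay itself.

```python
def powerplay(steps=5):
    solver_capability = 0.0  # stand-in for the solver network's skill
    repertoire = []          # all problems the solver must keep solving

    for _ in range(steps):
        # Generator: propose a novel problem, just beyond current ability.
        new_problem = solver_capability + 1.0

        # Solver: a candidate update that solves the new problem...
        candidate = new_problem

        # ...accepted only if every previous problem is still solvable.
        if all(candidate >= p for p in repertoire):
            solver_capability = candidate
            repertoire.append(new_problem)

    return repertoire

print(powerplay())  # [1.0, 2.0, 3.0, 4.0, 5.0]
```

The "still solves everything previous" check is the key invariant: it forces monotonic growth of the solver's repertoire rather than catastrophic forgetting.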


Both the problem solver and the generator have parent networks which continually learn to design more sophisticated solvers and generators, until you have much more general problem solvers, or rather a neural network that can design general problem solvers.

It's kind of similar to how GANs work for deepfakes and image problems: the networks try to outsmart each other in a feedback loop. But instead of just doing a face swap, this setup can generate general-purpose neural networks, or at least ones that are a lot more general than what we currently have.
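The shared structure behind the GAN analogy is just two components updated in alternation, each one's objective defined against the other. In a GAN these are generator and discriminator; in the proposal above, problem generator and solver. The toy below shows only that alternating-update skeleton, with made-up scalar "skills" in place of real losses and gradients.

```python
def adversarial_loop(rounds=3):
    gen_skill, solver_skill = 0.0, 0.0
    history = []
    for _ in range(rounds):
        gen_skill = solver_skill + 1.0  # generator: stay ahead of the solver
        solver_skill = gen_skill        # solver: catch up to the generator
        history.append((gen_skill, solver_skill))
    return history

print(adversarial_loop())  # [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
```

In both cases neither side has a fixed target: the curriculum is produced by the opponent, which is what (in principle) lets difficulty scale open-endedly instead of being capped by a fixed dataset.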

Of course, for this to work, it also assumes Jürgen Schmidhuber's idea that intelligence is actually far simpler than we think, and that it could be expressed in a relatively small function once we fully understand it. If so, the meta-solver would be able to derive this function and encapsulate it in its children.

submitted by /u/cryptonewsguy