[D] PowerPlay + Meta-learning a potential path to AGI?
So this is obviously hypothetical and entirely speculative on my part as an ML hobbyist, so I'm sure I'm missing something and will get downvoted for my dumb hypothetical.
But here goes.
Let's say you have a neural network (the parent network) that can design arbitrary child networks and learn optimized design patterns for a given task. Of course, your parent network won't be generalized for any type of network design, just for relatively specific tasks. That narrowness is part of why we don't have "real" AI or AGI: the tasks are still relatively narrow.
Skimming the meta-learning literature, it looks like researchers have gotten SOME generalization by training their meta-networks on multiple tasks. But data and identifying tasks might be a limitation for scalability and high levels of generalization. So I propose a potentially more elegant way.
This is where Jürgen Schmidhuber's PowerPlay would come in. The PowerPlay algorithm is split into a solver and a problem generator. The generator creates novel problems for the solver to try to solve, where a novel problem is one the current solver cannot yet solve, but only slightly harder than the hardest problem it has already solved. The solver must remain able to solve all previous problems the generator created, plus the new one.
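To make the loop concrete, here's a minimal toy sketch of the PowerPlay cycle described above. Everything here (the `Solver` class, its `level`, the difficulty-as-an-integer stand-in) is invented for illustration; it is not Schmidhuber's actual implementation, just the generate → verify-novel → improve → verify-no-forgetting cycle in miniature:

```python
# Toy sketch of the PowerPlay loop. Problem "difficulty" is just an
# integer, and the solver's capability is a level it can solve up to.
# All names here are hypothetical stand-ins, not the real algorithm.

class Solver:
    def __init__(self, level=0):
        self.level = level  # stand-in for the solver's learned skill

    def solve(self, problem):
        # A problem is "solved" if it's within the solver's capability.
        return problem <= self.level

    def improve(self):
        # Stand-in for training the solver until the new problem is solved.
        self.level += 1


def generate_novel_problem(archive):
    # Novel = just a bit harder than the hardest solved problem.
    return max(archive, default=0) + 1


def powerplay(steps=5):
    solver = Solver()
    archive = []  # every problem solved so far
    for _ in range(steps):
        problem = generate_novel_problem(archive)
        assert not solver.solve(problem)               # must be novel
        solver.improve()                               # train on it
        assert solver.solve(problem)                   # now solvable
        assert all(solver.solve(p) for p in archive)   # no forgetting
        archive.append(problem)
    return archive


print(powerplay())  # → [1, 2, 3, 4, 5]
```

The key invariant is the last assertion: each improvement must preserve every previously solved problem, which is what keeps the curriculum monotone.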
Both the Solver and the Generator would have parent networks, meta-networks that find their optimal design patterns and continually learn to design more sophisticated Solvers and Generators, until you have much more general problem solvers, or rather a neural network that can design general problem solvers.
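To show what the meta-level might look like, here's a hypothetical sketch where the "parent network" is stood in for by plain random search over child designs, scored by how many archived problems the resulting child solves. Every name and the scoring function are assumptions for illustration only:

```python
# Hypothetical meta-level sketch: a parent "designer" searches over
# child architectures and keeps the design that solves the most
# archived problems. Random search stands in for a learned parent
# network; the design space and score are invented for illustration.

import random

def random_design(rng):
    # Stand-in for sampling a child architecture.
    return {"depth": rng.randint(1, 8), "width": rng.choice([32, 64, 128])}

def score(design, archive):
    # Stand-in for "train a child with this design, count how many
    # archived problems it solves". Here: bigger capacity wins.
    capacity = design["depth"] * design["width"]
    return sum(1 for p in archive if p <= capacity)

def meta_search(archive, candidates=20, seed=0):
    rng = random.Random(seed)
    designs = [random_design(rng) for _ in range(candidates)]
    return max(designs, key=lambda d: score(d, archive))

best = meta_search(archive=[10, 50, 200])
print(best["depth"], best["width"])
```

A learned parent network would replace the random sampling with something that improves its proposals over time, but the outer loop (propose design, evaluate child on the archive, keep the best) stays the same shape.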
Step 2: ???
Step 3: profit!