How do you encode a whole neural net into a single vector from which it can be recreated? I am trying to think about the problem of simulating one neural net on another. Since a UTM can simulate any TM, there should be a way to translate this concept to differentiable programming. Adversarial reprogramming might be one angle of attack, but the injected task must encode both the weights of the simulated neural net and an algorithm for how to use those weights. This is a problem because there is just too much data to feed in. The best-case scenario would be a universal neural net (UNN) that can simulate every network that is “smaller” than it (by the description length of the simulated net). Also, I don’t think a perfect simulation is possible – the UNN can probably compute the hosted NN only as an approximation of the real one.

Note: The edibles are too strong. Anyway, does anybody have an idea how to force any NN to compute “(+ 1 1)”?
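A minimal sketch of the two concrete sub-questions, under my own assumptions (a hypothetical tiny 2→16→1 tanh MLP, plain full-batch gradient descent – not the adversarial-reprogramming route): the net's weights are flattened into one vector and recreated from it, and ordinary training is used to make the net compute a + b, so that feeding it (1, 1) yields roughly 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP: 2 -> 16 -> 1 with a tanh hidden layer.
shapes = [(2, 16), (16,), (16, 1), (1,)]

def flatten(params):
    # Encode the whole net as one 1-D vector.
    return np.concatenate([p.ravel() for p in params])

def unflatten(vec):
    # Recreate the net from its vector encoding.
    params, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        params.append(vec[i:i + n].reshape(s))
        i += n
    return params

def forward(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

params = [rng.normal(0, 0.5, (2, 16)), np.zeros(16),
          rng.normal(0, 0.5, (16, 1)), np.zeros(1)]

# Round trip: flatten then unflatten gives back an identical net.
assert all(np.array_equal(a, b)
           for a, b in zip(params, unflatten(flatten(params))))

# Train with full-batch gradient descent to compute a + b.
X = rng.uniform(0, 2, size=(256, 2))
y = X.sum(axis=1, keepdims=True)
lr = 0.1
for _ in range(5000):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y            # gradient of 0.5*MSE w.r.t. output
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    params = [W1 - lr * X.T @ dh / len(X), b1 - lr * dh.mean(0),
              W2 - lr * h.T @ err / len(X), b2 - lr * err.mean(0)]

out = float(forward(params, np.array([[1.0, 1.0]])))
print(round(out, 1))  # should be close to 2.0
```

This is of course the easy direction – the net is trained to add, not reprogrammed to. The interesting open question in the post is whether a fixed UNN could consume the flattened vector as an input and execute the hosted net, rather than having the weights baked in.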
submitted by /u/AsIAm