
[D] Relation between the learned parameters of two neural networks trained on the same dataset

I was wondering if there is any work that studies the relation between the learned weights of two neural networks.

For example, suppose we have a simple regression task and we train an MLP with one hidden layer of 20 neurons. If we then train another MLP with 15 neurons in its hidden layer, what would the relation between the weight matrices of these two networks be?
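To make that comparison concrete, here is a minimal sketch of the experiment. The synthetic dataset, the use of scikit-learn's MLPRegressor, and linear CKA as the similarity measure are all illustrative choices of mine rather than anything from the post; CKA is one way to compare the two hidden layers even though their weight matrices have different shapes (30x20 vs. 30x15 here):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

def linear_cka(A, B):
    """Linear Centered Kernel Alignment between two activation matrices
    of shape (n_samples, width); the two widths may differ."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    hsic = np.linalg.norm(B.T @ A, "fro") ** 2
    return hsic / (np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro"))

# Synthetic regression task (illustrative stand-in for the poster's dataset).
X, y = make_regression(n_samples=2000, n_features=30, noise=0.1, random_state=0)

# The two networks from the question: one hidden layer, 20 vs. 15 neurons.
big = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X, y)
small = MLPRegressor(hidden_layer_sizes=(15,), max_iter=3000, random_state=1).fit(X, y)

def hidden_activations(model, X):
    # First (and only) hidden layer: relu(X @ W1 + b1),
    # using sklearn's default relu activation.
    return np.maximum(0.0, X @ model.coefs_[0] + model.intercepts_[0])

# The weight matrices (30x20 vs. 30x15) can't be compared entry-wise,
# but the representations they induce on the same inputs can.
score = linear_cka(hidden_activations(big, X), hidden_activations(small, X))
print(f"linear CKA between hidden layers: {score:.3f}")
```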

I found some related work in the neural network compression literature that starts with the bigger model and uses matrix pruning together with factorization and/or decomposition to reach a smaller model. But I’m not sure whether the obtained parameters will be close to the weights that a neural network (with the same parameter count as the smaller model) would learn if trained from scratch. I mean, the fact that we can use pruning methods and still get good accuracy doesn’t necessarily mean that this captures the true relation between the bigger model and the smaller one. What do you think?
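As a point of reference, the factorization step those compression papers rely on can be sketched in a few lines. This is an illustrative truncated-SVD compression of the bigger model's first-layer weights, reusing `big` and `small` from the snippet above; the target rank and the reconstruction-error check are assumptions of mine, not taken from any particular paper:

```python
import numpy as np

# Reuses `big` and `small` from the snippet above.
W1 = big.coefs_[0]                         # shape (30, 20)
U, s, Vt = np.linalg.svd(W1, full_matrices=False)

rank = 15                                  # match the smaller model's width
W1_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# How much of the bigger model's first layer survives at rank 15?
rel_err = np.linalg.norm(W1 - W1_lowrank) / np.linalg.norm(W1)
print(f"relative reconstruction error at rank {rank}: {rel_err:.3f}")

# Note: directly comparing these factors to small.coefs_[0] is ill-posed
# without first aligning neurons, because hidden units can be permuted
# (and, under relu, rescaled) without changing the network's function.
```

The closing comment in the sketch is the crux: because hidden units can be permuted (and rescaled under relu) without changing the function the network computes, even a perfect low-rank reconstruction would not by itself show that the factors match what the 15-neuron network learns from scratch, which is exactly the poster's caveat.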

submitted by /u/nodet07