[D] Machine Learning – WAYR (What Are You Reading) – Week 77
This is a place to share machine learning research papers, journals, and articles that you're reading this week. If it relates to what you're researching, by all means elaborate and give us your insight; otherwise it can just be an interesting paper you've read.
Please try to provide some insight from your own understanding, and please don't post things that are already in the wiki.
Preferably, link the arXiv abstract page (not the PDF; you can easily reach the PDF from the abstract page, but not the other way around) or any other pertinent links.
Previous weeks:
Most upvoted papers two weeks ago:
/u/cafedude: https://arxiv.org/pdf/1911.13299.pdf
/u/nivter: On Mutual Information Maximization for Representation Learning: https://arxiv.org/abs/1907.13625
The authors ran experiments showing that maximizing mutual information (MI) between two representations is not directly tied to learning good representations. They did so by maximizing MI while adversarially training the model to perform badly on downstream linear classification. One key takeaway for me: encoders that learn good representations tend to discard unwanted information, which makes them hard to invert (the Jacobian of the output with respect to the input has a high condition number).
Besides that, there are no rules, have fun.
submitted by /u/ML_WAYR_bot