[Discussion] Is MINE (Mutual Information Neural Estimation) suitable for reducing mutual information?

Hello, I have an old but confusing question about Belghazi et al., "Mutual Information Neural Estimation", ICML 2018.

In the paper, a lower bound on mutual information is estimated by a neural-network-parameterized function (what they call the 'statistics network'), and various experiments are conducted, including an information bottleneck setup, which is a case of 'reducing' I(X; Z).
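For concreteness, here is a minimal sketch of the Donsker–Varadhan bound that MINE maximizes, assuming the statistics network's outputs on joint and shuffled (marginal) sample pairs are already available as plain lists of floats; `dv_lower_bound` is my own illustrative name, not from the paper:

```python
import math


def dv_lower_bound(t_joint, t_marginal):
    """Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)].

    t_joint: statistics-network outputs T(x, z) on samples from p(x, z).
    t_marginal: outputs on pairs with z shuffled, i.e. samples from p(x)p(z).
    """
    e_joint = sum(t_joint) / len(t_joint)
    # log-mean-exp over the marginal scores, stabilized by the max score
    m = max(t_marginal)
    log_e_marginal = m + math.log(
        sum(math.exp(t - m) for t in t_marginal) / len(t_marginal)
    )
    return e_joint - log_e_marginal
```

MINE trains the statistics network to push this quantity up; the maximized value is the MI estimate.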

I'm particularly interested in reducing mutual information, so I started trying to reproduce their results, but I'm quite stuck.

Unfortunately, the paper includes few details about the IB implementation, so if you have any experience using MINE to reduce mutual information, it would be a great help if you could share your approach.

The paper is well written with a clear theoretical background, but I'm not sure how lowering the 'approximated lower bound' helps reduce the actual mutual information. For these kinds of lower-bound mutual information models: do you think they are also practically useful for reducing MI?
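To make the concern concrete, here is a toy illustration (all names and numbers are mine, not from the paper): since the Donsker–Varadhan estimate only bounds MI from below, a small bound value by itself does not certify small MI. With a constant statistics network the bound is exactly zero even when X and Z are identical:

```python
import math


def dv_bound(t_joint, t_marginal):
    # E_joint[T] - log E_marginal[exp(T)]; small toy values, no stabilization
    return sum(t_joint) / len(t_joint) - math.log(
        sum(math.exp(t) for t in t_marginal) / len(t_marginal)
    )


# X = Z exactly over 4 symbols, so the true MI is maximal (log 4).
xs = [0, 1, 2, 3]
joint_pairs = [(x, x) for x in xs]
marginal_pairs = [(x, z) for x in xs for z in xs]  # product of marginals

# A constant statistics network T(x, z) = 0 gives a bound of exactly 0,
# despite the large true MI:
const_bound = dv_bound(
    [0.0 for _ in joint_pairs], [0.0 for _ in marginal_pairs]
)

# whereas a T that scores matching pairs higher recovers a positive bound:
T = lambda x, z: 2.0 if x == z else 0.0
good_bound = dv_bound(
    [T(x, z) for x, z in joint_pairs],
    [T(x, z) for x, z in marginal_pairs],
)
```

So my understanding (please correct me) is that the estimate only tracks the true MI while the statistics network is kept near its maximizer, which would require alternating, GAN-style updates between the encoder and the statistics network when minimizing.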

submitted by /u/pky3436



Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, vr, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.