I recently read a new NLP paper, CNM: An Interpretable Complex-valued Network for Matching (https://arxiv.org/abs/1904.05298). In summary, the paper proposes a framework for NLP explainability based on complex-valued vector spaces, borrowing mathematical machinery from quantum mechanics. It even won an award at this year's NAACL. Reading the paper, however, I couldn't help but feel a bit misled. The applications of quantum mechanics seemed stretched, and many of the ideas proposed had little evidence to back them up. In fact, I was so bothered by it that I wrote a blog post about it. There's also a great paper I read that discusses the state of research in the field more broadly: https://arxiv.org/abs/1807.03341.
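For anyone unfamiliar with the quantum-mechanics framing, here's a toy sketch of the core idea (this is my own illustrative example, not the paper's actual model, which uses density matrices and learned projections): represent text as unit-norm complex-valued vectors and score a match with the squared magnitude of the Hermitian inner product, i.e. a quantum-style state overlap.

```python
import numpy as np

def normalize(z):
    """Scale a complex vector to unit norm (a 'quantum state')."""
    return z / np.linalg.norm(z)

def overlap(u, v):
    """Squared magnitude of the Hermitian inner product, |<u|v>|^2.
    np.vdot conjugates its first argument, matching <u|v>."""
    return np.abs(np.vdot(u, v)) ** 2

# Hypothetical 4-dimensional complex embeddings of two similar sentences.
u = normalize(np.array([1 + 1.0j, 0.5j, 0.2, 1 - 0.5j]))
v = normalize(np.array([1 + 0.9j, 0.4j, 0.1, 1 - 0.6j]))

score = overlap(u, v)  # lies in [0, 1]; 1 means identical states
print(score)
```

The complex phase is what's supposed to carry extra interpretive structure beyond ordinary cosine similarity, and that's precisely the claim I felt the paper didn't support with enough evidence.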
I wanted to hear people’s thoughts on the matter. Do you think we’re critical enough as a community? Given the recent successes of machine learning/deep learning research, do you feel people are hesitant to speak out or to ask more questions about why things work?
submitted by /u/ShandarTheDestroyer