Any recent papers pointing to these ideas?
The activation function itself can be learned instead of being fixed to ReLU, tanh, etc.
https://arxiv.org/pdf/1412.6830.pdf (Agostinelli et al., "Learning Activation Functions to Improve Deep Neural Networks", ICLR 2015).
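For concreteness, the linked paper learns an adaptive piecewise linear (APL) unit, h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s), where the a_s and b_s are per-neuron parameters trained by backprop alongside the rest of the network. A minimal PyTorch sketch of that idea (the class name, default S, and initialization below are my own choices, not taken from the paper):

```python
import torch
import torch.nn as nn

class APLUnit(nn.Module):
    """Adaptive piecewise linear activation in the spirit of
    arXiv:1412.6830: h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s),
    with learnable per-feature hinge slopes a_s and positions b_s.
    S (number of hinges) is a hyperparameter.
    """
    def __init__(self, num_features, S=2):
        super().__init__()
        # One (a, b) pair per hinge per feature; init is a guess here.
        self.a = nn.Parameter(torch.zeros(S, num_features))
        self.b = nn.Parameter(0.1 * torch.randn(S, num_features))

    def forward(self, x):
        # x: (batch, num_features); broadcasting applies each hinge per feature
        out = torch.relu(x)
        for s in range(self.a.shape[0]):
            out = out + self.a[s] * torch.relu(-x + self.b[s])
        return out

# usage: act = APLUnit(64); y = act(torch.randn(8, 64))
```

Since the whole activation stays piecewise linear in x, gradients with respect to a_s and b_s are cheap, and SGD shapes the nonlinearity per neuron during training.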
And what about aggregate (reduction) functions like max-pool and average-pool? Can they be learned instead of being fixed?
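One simple way to make the reduction learnable (an illustration, not from the paper above) is generalized-mean pooling: pool(x) = (mean(x^p))^(1/p), where the exponent p is a trainable parameter. p = 1 recovers average pooling and p -> infinity approaches max pooling, so gradient descent can pick a point on that spectrum. A hedged sketch, assuming non-negative (e.g., post-ReLU) feature maps; the class name and defaults are hypothetical:

```python
import torch
import torch.nn as nn

class LearnablePool2d(nn.Module):
    """Generalized-mean pooling with a learnable exponent p.
    Interpolates between average pooling (p = 1) and max pooling
    (p -> inf). Inputs are clamped to eps to keep the fractional
    powers well-defined on non-negative features.
    """
    def __init__(self, kernel_size, p_init=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p_init))  # learned by backprop
        self.pool = nn.AvgPool2d(kernel_size)
        self.eps = eps

    def forward(self, x):
        # (mean over each window of x^p) ** (1/p)
        return self.pool(x.clamp(min=self.eps).pow(self.p)).pow(1.0 / self.p)

# usage: pool = LearnablePool2d(2); y = pool(torch.relu(torch.randn(8, 16, 32, 32)))
```

Because p receives gradients like any other weight, each layer can settle on its own mix of averaging and max-like selection rather than having the reduction fixed by hand.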
submitted by /u/tsauri