
Category: Reddit MachineLearning

[D] Why aren’t there more research papers related to active learning for deep computer vision problems?

So perhaps this is a misguided question, but to my (limited) understanding the biggest hurdle in applying deep computer vision models from research (e.g., classification or object detection) to a new problem is data collection. If you have access to a video stream, the problem becomes: which images are the best ones to annotate? To me this sounds like it falls under the active learning umbrella; however, I’ve only seen a very limited set of papers [1, 2, 3, 4] applied to this. The experiments in these papers also aren’t that convincing, because they don’t reflect reality (images in a video are not i.i.d.).

Am I missing something? Is there perhaps a better way of selecting which images to annotate that isn’t related to active learning?
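
For concreteness, here is a minimal sketch of one standard acquisition strategy from this literature, entropy-based uncertainty sampling, with a crude temporal-spacing guard as a nod to the fact that video frames are not i.i.d. The names (`predict_fn`, `min_gap`) are hypothetical, and this is only an illustration of the general idea, not the method of any of the papers cited below.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = more uncertain)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def select_frames_to_annotate(frames, predict_fn, budget=10, min_gap=30):
    """Pick the `budget` most uncertain frames for annotation.

    frames:     sequence of images from the video stream
    predict_fn: model returning class probabilities for one frame (hypothetical)
    min_gap:    minimum temporal distance between picks, a crude guard
                against selecting near-duplicate consecutive frames
    """
    scores = np.array([entropy(predict_fn(f)) for f in frames])
    picked = []
    for idx in np.argsort(scores)[::-1]:          # most uncertain first
        if all(abs(idx - p) >= min_gap for p in picked):
            picked.append(idx)
        if len(picked) == budget:
            break
    return sorted(picked)
```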

[1] Deep Active Learning for Object Detection

[2] Cost-Effective Active Learning for Deep Image Classification

[3] The power of ensembles for active learning in image classification

[4] Reducing Class Imbalance during Active Learning for Named Entity Annotation

submitted by /u/CartPole

[D] How much progress have we really made in the past decade?

I’m an undergrad about to graduate, debating whether to do graduate school in some form of AI. What I fear is that every bit of progress made in the past decade was either hype, or due to faster computation, or due to using ReLU rather than sigmoid.

IMO, AI can be broken down into three parts: the algorithm for inference, knowledge representation, and computational speed. Of these three, have we really made any progress in representation and algorithms in the past decade? Besides getting better GPUs, everything else was already discovered decades ago. Can anyone change my mind? Are we going to have AI winter 2.0 soon?

submitted by /u/uoftsuxalot

[D] New biologically discovered activation function in ANNs

Last week a paper was published in Science describing a new (Gaussian-ish) activation function found only in human cortical neurons (specifically human cortical L2/3 neurons), never before seen in other species. Just curious whether anyone has tried implementing the new function in an artificial neural network yet. The authors claim it allows a single neuron to compute what would otherwise take multiple layers of typical monotonic activation functions. In theory, if that’s true, you could learn more complex functions with fewer parameters.
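
In case anyone wants to experiment: the post doesn’t give the exact functional form from the paper, so as a rough sketch assume a plain Gaussian bump (assuming PyTorch). The toy below shows the claimed parameter efficiency in miniature: a single unit with a non-monotonic, bump-shaped activation computes XOR, which no single unit with a monotonic activation can do.

```python
import torch
import torch.nn as nn

class GaussianActivation(nn.Module):
    """Bump-shaped activation: peaks at mu, falls off on both sides.
    A stand-in for the paper's function, whose exact form isn't given here."""
    def __init__(self, mu=1.0, sigma=0.5):
        super().__init__()
        self.mu, self.sigma = mu, sigma

    def forward(self, x):
        return torch.exp(-((x - self.mu) ** 2) / (2 * self.sigma ** 2))

# A single unit with a bump-shaped activation separates XOR:
# w1*x1 + w2*x2 hits the peak mu=1 only when exactly one input is 1.
unit = nn.Sequential(nn.Linear(2, 1), GaussianActivation())
with torch.no_grad():
    unit[0].weight.fill_(1.0)
    unit[0].bias.fill_(0.0)

x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
print(unit(x).round())   # ~[0, 1, 1, 0]: XOR from one unit
```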

submitted by /u/scottyler89

[R] AdderNet: Do We Really Need Multiplications in Deep Learning?

Compared with cheap addition operations, multiplication is of much higher computational complexity. The widely used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values. In this paper, we present adder networks (AdderNets) that trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the ℓ1-norm distance between filters and input features as the output response. The influence of this new similarity measure on the optimization of the neural network has been thoroughly analyzed. To achieve better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron’s gradient. As a result, the proposed AdderNets achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy with ResNet-50 on the ImageNet dataset without any multiplication in the convolution layers.

arXiv: https://arxiv.org/abs/1912.13200v2
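
A minimal sketch of the forward pass described above, assuming PyTorch: each output response is the negative ℓ1 distance between a filter and an input patch, so the layer uses only additions and subtractions. The paper’s full-precision gradient and adaptive learning rate strategy are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdderConv2d(nn.Module):
    """Adder-layer forward pass: negative l1 distance between each filter
    and each input patch, in place of the usual multiply-accumulate."""
    def __init__(self, in_ch, out_ch, k, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k))
        self.k, self.stride, self.padding = k, stride, padding

    def forward(self, x):
        n, c, h, w = x.shape
        # im2col: (N, C*k*k, L) where L is the number of spatial patches
        patches = F.unfold(x, self.k, stride=self.stride, padding=self.padding)
        w_flat = self.weight.view(self.weight.size(0), -1)     # (O, C*k*k)
        # -sum |patch - filter| over the patch dimension -> (N, O, L)
        dist = -(patches.unsqueeze(1) - w_flat[None, :, :, None]).abs().sum(dim=2)
        h_out = (h + 2 * self.padding - self.k) // self.stride + 1
        w_out = (w + 2 * self.padding - self.k) // self.stride + 1
        return dist.view(n, -1, h_out, w_out)

y = AdderConv2d(3, 8, 3, padding=1)(torch.randn(1, 3, 32, 32))
print(y.shape)  # torch.Size([1, 8, 32, 32])
```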

submitted by /u/aiismorethanml