
Category: Reddit MachineLearning

[D] “Multitemporal synapse” – I don’t even know what to think

http://standoutpublishing.com/Blog/archives/64-Introducing-Multitemporal-Synapses.html

http://standoutpublishing.com/g/The-Stability-Plasticity-Problem.html

I came across this patented idea/site/thing while looking for information on catastrophic forgetting, and now find myself perplexed on a couple different levels.

First, a bold claim from the site:

“Now, however, with the advent of multi-temporal synapses, the limitation has been eliminated for artificial neural networks as well. This problem has been solved.” Here “problem” refers to plasticity vs. stability, a.k.a. catastrophic forgetting.

Perplexed point 1) What is this technique, and how useful is it? I’ve read through the information on the site (without checking the cited references) and glanced at Netlab, but nothing seems to contain an actual application of this idea.

Perplexed point 2) This blog post is 9 years old; if it were worthwhile, wouldn’t it have become widespread by now?

Perplexed point 3) Since *something* is patented (I’m not sure what exactly), what does that mean legally for trying to understand or use aspects of this technique to tackle catastrophic forgetting?
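For what it’s worth, one common way to get multiple timescales into a weight (not necessarily what this patent describes — its actual mechanism is unclear to me) is a fast/slow decomposition: a plastic component that tracks the current task plus a stable component that consolidates slowly. A purely illustrative toy sketch:

```python
class DualTimescaleWeight:
    """Toy weight with a fast (plastic) and a slow (stable) component.

    A generic fast/slow-weight illustration -- NOT the patented
    'multitemporal synapse', whose actual mechanism is unclear.
    """

    def __init__(self, fast_lr=0.5, slow_lr=0.01):
        self.fast = 0.0  # adapts quickly to the current task, forgets quickly
        self.slow = 0.0  # consolidates slowly, so old tasks decay slowly
        self.fast_lr = fast_lr
        self.slow_lr = slow_lr

    def update(self, target):
        # Fast component chases the current task's target value...
        self.fast += self.fast_lr * (target - self.fast)
        # ...while the slow component drifts toward the fast one at a
        # much lower rate, instead of being overwritten outright.
        self.slow += self.slow_lr * (self.fast - self.slow)


w = DualTimescaleWeight()
for _ in range(200):
    w.update(1.0)   # task A: drive the weight toward +1
for _ in range(5):
    w.update(-1.0)  # brief task B: fast flips sign, slow barely moves
```

After the brief task B phase, `w.fast` has swung toward −1 while `w.slow` still sits well above zero — the slow component is what retains task A.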

submitted by /u/AbitofAsum
[link] [comments]

[D] GPU benchmarks for deep learning tasks

There is a benchmark of desktop and laptop GPU cards for deep learning: AI Benchmark. You can run these tests yourself, see https://pypi.org/project/ai-benchmark/.

More detailed results here: http://ai-benchmark.com/ranking_cpus_and_gpus_detailed.html (TensorFlow training and inference times for: MobileNet-V2, Inception-V3, Inception-V4, Inc-ResNet-V2, ResNet-V2-50, ResNet-V2-152, VGG-16, SRCNN 9-5-5, VGG-19 Super-Res, ResNet-SRGAN, ResNet-DPED, U-Net, Nvidia-SPADE, ICNet, PSPNet, DeepLab, Pixel-RNN, LSTM, GNMT).

I found other useful benchmarks and tests:

Take note that some GPUs are good for games but not for deep learning (for games, a 1660 Ti would be good enough and much, much cheaper; see this and that). For general benchmarks, I recommend UserBenchmark (my Lenovo Y740 with an Nvidia RTX 2080 Max-Q is here).

For comparison of different cards between frameworks, see Performance in: Keras or PyTorch as your first deep learning framework (June 2018), based on Comparing Deep Learning Frameworks: A Rosetta Stone Approach.

Do you know any other good rankings, benchmarks, and tests? (There is MLPerf, but I guess that, due to the complexity of the procedure, the amount of data is very small.)
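For a quick DIY sanity check before trusting published numbers, the core of any such benchmark is just timing a warmed-up op. The sketch below uses a NumPy matmul as a stand-in workload — ai-benchmark itself runs TensorFlow models, so this is not its internals, just the general idea:

```python
import time

import numpy as np


def time_op(fn, warmup=2, repeats=5):
    """Median wall-clock time of fn(), after warm-up runs (cache/JIT effects)."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]


# Stand-in workload: a dense float32 matmul, the dominant op in most
# deep learning benchmarks (sizes kept tiny here, for illustration).
a = np.random.rand(256, 256).astype(np.float32)
b = np.random.rand(256, 256).astype(np.float32)
matmul_time = time_op(lambda: a @ b)
print(f"256x256 float32 matmul: {matmul_time * 1e3:.3f} ms")
```

Warm-up runs and taking the median (rather than the first run, or the mean) make small benchmarks like this much less noisy.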

submitted by /u/pmigdal
[link] [comments]

[P] Ideas for implementing an original supervised machine learning technique?

I’m taking a machine learning class and I have a project whose task is to implement an original supervised learning algorithm. It doesn’t have to be something new from scratch, and it doesn’t need to be overcomplicated, because it’s a one-week project. It can be a combination of two learning algorithms that can accurately classify a labeled data set, such as using genetic algorithms with artificial neural networks. It can also use part of an existing algorithm, as long as I add something substantial to it. The problem is I can’t think of a simple idea that hasn’t already been proposed in a published research paper.

To put things into perspective, the learning algorithms I’m familiar with are:

  • Decision Trees
  • KNN
  • ANN
  • SVM
  • Linear and Logistic Regression
  • Genetic Algorithm
  • Clustering
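One cheap combination from that list — just an example sketch, not a claim of originality — is a genetic algorithm that evolves boolean feature subsets for a k-NN classifier. A self-contained NumPy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)


def knn_accuracy(Xtr, ytr, Xte, yte, mask, k=3):
    """Test accuracy of k-NN restricted to the features selected by `mask`."""
    if not mask.any():
        return 0.0
    A, B = Xtr[:, mask], Xte[:, mask]
    d = ((B[:, None, :] - A[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    idx = np.argsort(d, axis=1)[:, :k]                  # k nearest neighbours
    preds = (ytr[idx].mean(axis=1) > 0.5).astype(int)   # majority vote (binary)
    return (preds == yte).mean()


def ga_select_features(Xtr, ytr, Xte, yte, pop=20, gens=15, mut=0.1):
    """Tiny genetic algorithm evolving boolean feature masks for k-NN."""
    n = Xtr.shape[1]
    population = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        fitness = np.array([knn_accuracy(Xtr, ytr, Xte, yte, m) for m in population])
        parents = population[np.argsort(fitness)[::-1][: pop // 2]]  # truncation selection
        children = parents.copy()
        children ^= rng.random(children.shape) < mut                 # bit-flip mutation
        population = np.vstack([parents, children])
    fitness = np.array([knn_accuracy(Xtr, ytr, Xte, yte, m) for m in population])
    return population[np.argmax(fitness)], fitness.max()


# Demo: 2 informative features, 8 pure-noise features.
X = rng.random((120, 10))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
Xtr, ytr, Xte, yte = X[:80], y[:80], X[80:], y[80:]
mask, acc = ga_select_features(Xtr, ytr, Xte, yte)
```

The "substantial addition" angle for an assignment like this could be the fitness function (e.g. penalizing mask size) rather than the GA itself.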

submitted by /u/PatientLookout
[link] [comments]

[D] Who to follow in NeurIPS2019 on Twitter

John Guerra has conducted this type of analysis at various academic conferences over the years.

Here is a list of the 783 Twitter accounts most followed by members of the NeurIPS2019 community (the community being the 679 accounts that tweeted with #NeurIPS2019 at least 3 times between 2019-12-02 and 2019-12-20).

https://johnguerra.co/viz/influentials/NeurIPS2019/
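The "at least 3 tweets" filtering step is simple to reproduce; a minimal sketch, where the `(author, text)` tuples stand in for whatever the real analysis pulled from the Twitter API:

```python
from collections import Counter


def active_accounts(tweets, hashtag="#NeurIPS2019", min_tweets=3):
    """Accounts that used `hashtag` in at least `min_tweets` tweets.

    `tweets` is an iterable of (author, text) pairs -- a hypothetical
    stand-in for the actual Twitter API results.
    """
    counts = Counter(
        author for author, text in tweets if hashtag.lower() in text.lower()
    )
    return {author for author, n in counts.items() if n >= min_tweets}
```

The second step of the analysis (ranking who those accounts follow) would then just be another `Counter` over their follow lists.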

submitted by /u/milaworld
[link] [comments]

[D] PhD in Machine Learning vs PhD in Statistics?

I currently hold a Master’s in Statistics and I know I want to go back and get a PhD, but I’m not quite sure which is the better option. I want to eventually do research with AI/ML type stuff, but I thought a PhD in Statistics would be more “respected” because it’s more theoretical? Also, because Statistics is more general (I think), would it perhaps keep more doors open?

Apologies if my question seems naive, I really don’t know very much so feel free to give your most brutally honest opinions!

submitted by /u/amlewa
[link] [comments]

[R] Peer to Peer Unsupervised Representation Learning

I have produced a prototype for an unsupervised representation learning model which trains over a p2p network and uses a blockchain to record the value of individual nodes in the network.
https://github.com/unconst/BitTensor

This project is open-source and ongoing. I wanted to share with reddit to see if anyone was interested in collaboration.
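The post doesn’t describe how BitTensor’s chain is actually structured, so purely as a generic illustration of "a blockchain recording the value of individual nodes": a toy append-only hash chain crediting each node with a contribution score.

```python
import hashlib
import json


def make_block(prev_hash, node_id, score):
    """Append-only record crediting `node_id` with contribution `score`."""
    payload = {"prev": prev_hash, "node": node_id, "score": score}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {**payload, "hash": digest}


def verify_chain(chain):
    """Check every block's hash and its link to the previous block."""
    prev = "genesis"
    for block in chain:
        body = {k: block[k] for k in ("prev", "node", "score")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True


chain = []
prev = "genesis"
for node, score in [("node-a", 0.7), ("node-b", 0.3)]:
    block = make_block(prev, node, score)
    chain.append(block)
    prev = block["hash"]
```

Tampering with any recorded score breaks the stored hash (and every link after it), which is the property that makes such a ledger auditable by peers.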

submitted by /u/unconst
[link] [comments]