

Category: Reddit MachineLearning

[Project] Help for University Project: Semi-Supervised ML

Hello everyone,

I’m studying ICT at university and am currently working on a project involving transport data: the dataset my colleagues and I gathered combines territorial information (demographic and economic), points of interest (bars, restaurants, universities, …), and mobility data (users’ trips with origin and destination). The goal of the project is to develop an ML algorithm that classifies each user’s trip purpose from all these inputs (e.g., a work trip, an entertainment trip, going out to eat, etc.).

My question is whether it would be a good idea to use a semi-supervised algorithm that labels the unlabelled data (since we have no ground truth for the mobility data), using some rules to establish the obvious labels first. If not, are there better methods?
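One standard way to combine rule-derived seed labels with a learned classifier is classic self-training: fit a model on the rule-labelled trips, then iteratively adopt its most confident predictions on the unlabelled trips as pseudo-labels (scikit-learn ships this idea as `SelfTrainingClassifier`). Below is a minimal numpy sketch, using nearest-centroid confidence purely for illustration; the function and threshold are illustrative, not a recommendation for the final model:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, n_iters=10, threshold=0.9):
    """Toy self-training: fit class centroids on the labelled pool, then
    repeatedly adopt the most confident unlabelled points as pseudo-labels.
    Confidence is a softmax over negative distances to the centroids."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    unlab = X_unlab.copy()
    classes = np.unique(y_lab)
    for _ in range(n_iters):
        if len(unlab) == 0:
            break
        # Class centroids from the currently labelled pool.
        centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
        # Distance of each unlabelled point to each centroid.
        d = np.linalg.norm(unlab[:, None, :] - centroids[None, :, :], axis=2)
        probs = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf = probs.max(axis=1)
        take = conf >= threshold
        if not take.any():
            break
        # Adopt the confident points as pseudo-labelled training data.
        X_lab = np.vstack([X_lab, unlab[take]])
        y_lab = np.concatenate([y_lab, classes[probs[take].argmax(axis=1)]])
        unlab = unlab[~take]
    return X_lab, y_lab
```

One caveat worth stating in the project report: the rules bias the pseudo-labels, so holding out a subset of rule-labelled trips to sanity-check the final classifier is a good idea.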

submitted by /u/riki4284
[link] [comments]

[Discussion] Platform for sharing AI models.

GitHub does not look like a great place for sharing trained models. I think there should be a separate open platform specifically for sharing deep-learning work. Some features I would like are:

  • Common repository structure. (On GitHub, every author uses a different structure, and understanding them is time-consuming.)
  • Independently verified inference stats. (Currently, very few models provide stats, and even those that do use different hardware, which makes comparisons difficult.)
  • Model versioning.
  • One-click/one-command inference. (I don’t want to spend 3 hours just to find out the model doesn’t work.)
  • Containerization. Dockerfiles should be provided with models.
  • Easy production deployment. Models should be easily integrable with tools like TensorFlow Serving and DeepDetect.
  • Interface. Just like an app store, it should have tabs/tags for Vision, NLP, etc.

Think about the possibility of a website hosting hundreds of thousands of ready-to-deploy pre-trained AI models with the features above.
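To make the containerization point concrete, here is a minimal sketch of a Dockerfile that could ship alongside a model to give one-command inference; the file names (weights.pt, infer.py) and layout are hypothetical, not any existing platform's convention:

```dockerfile
# Hypothetical Dockerfile shipped in a model repository.
FROM python:3.8-slim

WORKDIR /model

# Pin dependencies so inference is reproducible on any machine.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Trained weights and a small inference script (names are illustrative).
COPY weights.pt infer.py ./

# One-command inference: docker run <image> path/to/input.jpg
ENTRYPOINT ["python", "infer.py"]
```

With something like this in every repository, "3 hours to find out the model doesn't work" becomes one `docker build` and one `docker run`.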

submitted by /u/dmangla3
[link] [comments]

[D] Which GPU should I buy?

Hey,

I’m currently writing my Bachelor’s thesis about GANs and need some GPU compute power, but I really don’t like the feel of Google Colab and would like to have something local.

Which GPU should I get? My datasets aren’t that large; I’ve already looked at the RTX 2060 Super and the RTX 2070.

submitted by /u/der_iraner
[link] [comments]

[P] Composing Bach Chorales Using Deep Learning

This is a 30-minute talk from GOTO Copenhagen 2019 by Feynman Liang, creator of BachBot. The full talk abstract is below, for a read before diving in:

Can musical creativity, something believed to be deeply human, be codified into an algorithm?

While most music theorists are hesitant to claim a “correct” algorithm for composing music like Bach, recent advances in machine learning and computational musicology may help us reach an answer.

In this talk, we describe BachBot: an artificial intelligence which uses deep learning and long short-term memory (LSTM) to compose music in the style of Bach. We train BachBot on all known Bach chorale harmonisations and carry out the largest musical Turing test to date. Our results show that the average listener can distinguish BachBot from real Bach only 5% better than random guessing, suggesting that, thanks to BachBot, algorithmic composition of Bach chorales is now more a closed problem than an open one.

What will the audience learn from this talk? How we trained AI to compose music most people cannot tell apart from Bach’s own chorales, and the (not so) surprising discoveries along the way.

submitted by /u/goto-con
[link] [comments]

[R] Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks (NeurIPS2019 Spotlight)

Abstract

We propose a novel memory cell for recurrent neural networks that dynamically maintains information across long windows of time using relatively few resources. The Legendre Memory Unit (LMU) is mathematically derived to orthogonalize its continuous-time history—doing so by solving d coupled ordinary differential equations (ODEs), whose phase space linearly maps onto sliding windows of time via the Legendre polynomials up to degree d−1.

Backpropagation across LMUs outperforms equivalently-sized LSTMs on a chaotic time-series prediction task, improves memory capacity by two orders of magnitude, and significantly reduces training and inference times. LMUs can efficiently handle temporal dependencies spanning 100,000 time-steps, converge rapidly, and use few internal state-variables to learn complex functions spanning long windows of time—exceeding state-of-the-art performance among RNNs on permuted sequential MNIST.

These results are due to the network’s disposition to learn scale-invariant features independently of step size. Backpropagation through the ODE solver allows each layer to adapt its internal time-step, enabling the network to learn task-relevant time-scales. We demonstrate that LMU memory cells can be implemented using m recurrently-connected Poisson spiking neurons, O(m) time and memory, with error scaling as O(d/√m). We discuss implementations of LMUs on analog and digital neuromorphic hardware.

https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks
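For concreteness, the continuous-time system the abstract describes can be written down directly. Below is a minimal numpy sketch of the LMU's (A, B) state-space matrices from the paper, simulated with forward Euler; the paper uses more careful discretizations (e.g., zero-order hold), so treat this as illustrative only:

```python
import numpy as np

def lmu_matrices(d):
    """State-space (A, B) of the Legendre Memory Unit (Voelker et al., 2019):
    theta * m'(t) = A m(t) + B u(t), whose state m encodes a sliding window
    of u via Legendre polynomials up to degree d-1."""
    A = np.zeros((d, d))
    B = np.zeros(d)
    for i in range(d):
        B[i] = (2 * i + 1) * (-1) ** i
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1))
    return A, B

def lmu_memory(u, theta, d, dt=1.0):
    """Forward-Euler simulation of the LMU memory over an input signal u."""
    A, B = lmu_matrices(d)
    m = np.zeros(d)
    states = []
    for x in u:
        m = m + (dt / theta) * (A @ m + B * x)
        states.append(m.copy())
    return np.array(states)
```

A quick sanity check on the dynamics: for a constant input u the memory converges to the steady state m* = -A⁻¹Bu, since the fixed point of the update satisfies A m* + B u = 0.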

submitted by /u/wei_jok
[link] [comments]

[P] For NLP Researchers, Implementation of Text Preprocessing Package, PreNLP

Do very simple text preprocessing (a.k.a. the dirty work) with the PreNLP package!

I work on NLP and am implementing a package that handles the repetitive but necessary chores of NLP preprocessing. Let me know on the issue tracker what you’d like to see; I’ll implement it in this package.

Here are some examples of preprocessing text:

>>> from prenlp.data.normalization import *
>>> url_normalize('Visit this link for more details: https://github.com/', repl='[URL]')
Visit this link for more details: [URL]
>>> tag_normalize('Use HTML with the desired attributes: <img src="cat.jpg" height="100" />', repl='[TAG]')
Use HTML with the desired attributes: [TAG]
>>> emoji_normalize('Hello 🤩, I love you 💓 !', repl='[EMOJI]')
Hello [EMOJI], I love you [EMOJI] !
>>> email_normalize('Contact me at lyeoni.g@gmail.com', repl='[EMAIL]')
Contact me at [EMAIL]
>>> tel_normalize('Call +82 10-1234-5678', repl='[TEL]')
Call [TEL]

LINK: https://github.com/lyeoni/prenlp
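Under the hood, normalizers like these are essentially regex substitutions. Here is a sketch of how url_normalize and email_normalize might look; the patterns are illustrative guesses, not prenlp's actual implementation:

```python
import re

# Illustrative patterns only; prenlp's own regexes may differ.
_URL = re.compile(r'https?://\S+')
_EMAIL = re.compile(r'[\w.+-]+@[\w-]+\.[\w.-]+')

def url_normalize(text, repl='[URL]'):
    """Replace every URL in `text` with the placeholder `repl`."""
    return _URL.sub(repl, text)

def email_normalize(text, repl='[EMAIL]'):
    """Replace every e-mail address in `text` with the placeholder `repl`."""
    return _EMAIL.sub(repl, text)
```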

submitted by /u/lyeoni
[link] [comments]

[D] I’m a Reinforcement Learning researcher and I’m leaving academia.

I’m a Ph.D. student studying RL, graduating soon and joining a top company as a software engineer. I never wanted to become a professor, but I liked doing research. I find RL very interesting and still have ideas I’d like to work on, but the recent trends in academic RL have discouraged me from continuing research in the field. I’ve felt this way for a while, and the recent RL tutorial at NeurIPS reminded me of it again.

Most of the post-2014 papers introduced in that talk were from DeepMind, UC Berkeley, or MSR (which makes sense, since the speaker is from MSR). I understand why the speaker included those papers: they are heavily cited and frequently discussed in their respective communities. And while those groups publish many good papers and employ many amazing researchers, I think there are other good papers in top conferences that deserve continued discussion. Experimental domains, benchmarks, and specific fields (or research directions) can end up selected with a bias because of such trends. I wonder whether it has always been this way, or whether this is something new that frustrates other people too.

submitted by /u/clairinf
[link] [comments]

[D] Yoshua Bengio talks about what’s next for deep learning

Tomorrow at NeurIPS, Yoshua Bengio will propose ways for deep learning to handle “reasoning, planning, capturing causality and obtaining systematic generalization.” He spoke to IEEE Spectrum about many of the same topics.

https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/yoshua-bengio-revered-architect-of-ai-has-some-ideas-about-what-to-build-next

submitted by /u/newsbeagle
[link] [comments]