
Author: torontoai

[P] Haskell for managing experiments

While doing machine learning experiments, I have grown more and more tired of the prevalent ad hoc way of managing code: thousands of files containing parameters for trained models and predictions from different models, with no certainty about which code even generated them. A disaster for reproducibility. I have tried workflow managers like Luigi, but they don't precisely track which source code was used to generate a model. I also generally take issue with how traditional Git version control fails to properly handle the machine learning workflow.

Proposed solutions to these kinds of problems often seem to lack flexibility and force you into ways of doing things with which you don't necessarily agree. The worst offenders are companies selling you amazing cloud platform subscriptions that will do everything for you.

Frustrated with the current state of things, I have decided to take matters into my own hands. Building a deeply customizable tool that lets developers define how they want to do things appears to be a perfect job for Haskell, so I have started writing iterbuild. The idea is that you write your Python models and data manipulation functions as you always do in Python (R could be supported in the future), and then you reference and compose them inside Haskell. The Haskell program then generates Python glue code that composes the pieces as specified. It turns out that adding this new layer lets you precisely control how to handle things like version control, model management, and intermediate data caching inside Haskell, while hiding the implementation details (Haskell excels at that). Don't expect the project to be actually usable for real ML experiments right now; it is still more of an idea than a reality. I have written about my ideas for iterbuild in the project's readme, and I look forward to seeing whether anyone has their own thoughts on the matter.
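To make the underlying idea concrete, here is a hypothetical sketch (not iterbuild's actual API, and in plain Python rather than Haskell): compose named pipeline steps into one runnable pipeline, and fingerprint the exact code of those steps so any saved artifact can be traced back to the code that produced it.

```python
import hashlib

def compose(*steps):
    # Chain named steps into a single pipeline function.
    def pipeline(x):
        for _name, fn in steps:
            x = fn(x)
        return x
    return pipeline

def fingerprint(*steps):
    # Hash step names plus their compiled bytecode, so a model file
    # can be tagged with the exact code that generated it.
    h = hashlib.sha256()
    for name, fn in steps:
        h.update(name.encode())
        h.update(fn.__code__.co_code)
    return h.hexdigest()[:12]

steps = [
    ("clean", lambda xs: [v for v in xs if v is not None]),
    ("scale", lambda xs: [2 * v for v in xs]),
]
run = compose(*steps)
print(run([1, None, 3]))  # [2, 6]
print(fingerprint(*steps))
```

A tag like `model_{fingerprint}.pkl` would then tie each artifact to the code version that produced it, which is the kind of bookkeeping the post wants the Haskell layer to automate.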

submitted by /u/adri12_

[P] Need help for a DL Spoiler Classification Project using Transfer Learning

Hello everyone,

I have started working on building a classifier to detect spoilers. I wanted to train my model first on a public general dataset, and then fine-tune the model by training it further on a movie/book-specific dataset.

A lot of text classification models use a one-hot bag-of-words encoding as the input layer, followed by a feed-forward neural network (FFNN), and I was considering doing the same. However, the number of features will differ between the two datasets, since not all words are common to both. I'm kinda stuck at this point; any help would be appreciated. Apologies if the question is too basic. Thanks.
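One common workaround for mismatched vocabularies (a suggestion, not something from the post) is the hashing trick: map every token, from either dataset, into a fixed-size vector, so the input layer never changes shape between pretraining and fine-tuning. A minimal pure-Python sketch, assuming a hypothetical dimensionality of 1024:

```python
import hashlib

def hash_features(tokens, dim=1024):
    # Feature hashing: each token is hashed to one of `dim` buckets,
    # so any vocabulary yields a vector of the same fixed length.
    vec = [0.0] * dim
    for tok in tokens:
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    return vec

movie = hash_features("the twist ending is a spoiler".split())
book = hash_features("chapter three reveals the murderer".split())
assert len(movie) == len(book) == 1024  # same input size for both datasets
```

Alternatively, you could fix the vocabulary once from the general dataset and reuse it when fine-tuning, mapping unseen words to a single out-of-vocabulary token.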

submitted by /u/amoldl

[D] IJCAI 2020: Changes in Rules for Resubmissions

We got unlucky with one of three reviewers for AAAI 2020 and were rejected; we feel our paper was turned down on the strength of a single reviewer who, it seems, did not take the time to read it. We wanted to resubmit to IJCAI, and I just now saw that the rules for resubmissions have changed drastically…

IJCAI 19: Authors of declined AAAI-19 submissions […]. At the discretion of the authors, the resubmission can be accompanied by all three of the following, as a single file […] (No mention of what happens if none of this is provided.)

IJCAI 20: Authors of papers that have been rejected from AAAI 2020, ECAI 2020, AAMAS 2020, ICCV 2020, or ICAPS 2020 […]. Such resubmissions must contain, in a single PDF file […] A paper rejected from these conferences and omitting to declare resubmission will be directly rejected without further review.

Does anyone know what's going on here? We did not change anything about the method (we improved some of the explanations and added just one small table to the results).

Will the reviewers now see the previous reviews? Will it be given to the same reviewers?

Why is IJCAI 19 not on the list? Is it now a thing that you have to wait a year to resubmit a paper?

submitted by /u/schludy

[D] How to contact professors for research internships?

For the summer, instead of doing an internship at a company, I would like to spend my time doing research with a professor at a different university. Does anyone have experience with this? How do you contact professors? So far all my efforts have been unsuccessful (nobody actually replies to my emails).

For context, I am an undergraduate student with one ML internship so far.

submitted by /u/andwhata

[D] Looking for Deep learning project ideas.

I'm currently doing a master's in mathematics, and I've been assigned neural networks as my project topic. I'm looking for theoretical math project ideas involving neural networks with minimal coding. I'm a beginner and haven't worked on any deep learning project myself.

The project's goal is to demonstrate how theorems and algorithms are used in real-life situations.

I have googled for ideas but haven't found any theory-based project focusing on the math.

submitted by /u/reuelrds

[P] What did I learn – keep track of your reading notes

https://whatdidilearn.org/

https://github.com/PieterBlomme/whatdidilearn

As a software developer getting up to speed with machine learning, I've been working on a better way to keep track of the papers I read, my notes, paper topics, scores on benchmarks, etc.

I built a small research database server to do that. I figured it might be useful for others too, and it might be cool to see what others are reading and what they are picking up from it.

Your personal library is publicly accessible via a link, so it can be used to share with others. Tags are also public; comments and benchmarks can be set to private. If you'd like more privacy, you can fork it from GitHub and run it locally.

If you find this interesting and/or would like to see some other functionality, do let me know (or make a pull request on GitHub if you want to help out).

submitted by /u/scruffyfluffyhusky

[D] I’m starting a free YouTube course called “Deep Learning (for Audio) with Python”

I'm starting a new YouTube video series called “Deep Learning (for Audio) with Python”. In these videos, I introduce the mathematical concepts underlying neural nets. I discuss the theory and intuition behind different types of neural networks (e.g., multilayer perceptrons, CNNs, RNNs, GANs). I also code different neural nets using Python/TensorFlow.
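As a taste of the from-scratch computations such a series typically covers (an illustration, not code from the course), here is a minimal multilayer perceptron forward pass in plain Python, without TensorFlow:

```python
import math

def sigmoid(z):
    # Squash any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def dense(x, weights, biases):
    # One fully connected layer: weighted sum plus bias, then sigmoid.
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# Two-layer perceptron on a toy input: 2 inputs -> 2 hidden units -> 1 output.
hidden = dense([1.0, 2.0], [[0.5, -0.5], [0.25, 0.25]], [0.0, 0.1])
output = dense(hidden, [[1.0, -1.0]], [0.0])
print(output)
```

Frameworks like TensorFlow implement the same layer computation, plus automatic differentiation for training the weights.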

In the series, I'll propose numerous sample applications. Even though the applications will mainly be audio/music related, the tutorials will also provide a solid deep learning background to people not interested in audio.

Here's the first video of the series: https://www.youtube.com/watch?v=fMqL5vckiU0&list=PL-wATfeyAMNrtbkCNsLcpoAyBBRJZVlnf. You'll already find two other videos on my YouTube channel. I aim to post two videos per week.

I’d love to get your feedback!

submitted by /u/diabulusInMusica


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.