
Category: Reddit MachineLearning

[D] Any non-Python deep learning framework available that is in active development?

The thing about Python is that it’s ugly, forces you to code in a certain way and, most importantly, is slow as a snail compared to other languages, even scripting ones like Lua. Are there any non-Python deep learning frameworks around that are in active development? I mean, there’s Torch (which is Lua, so good), but from what I’ve seen it’s been abandoned in favor of PyTorch, which is Python (bleh).

Preferably something for either Lua or .NET/Java – maybe not as fast as C/C++ would be, but faster than Python (blergh), and still allowing for high productivity without worrying about random crashes because of some corrupted pointer.

submitted by /u/Darkhog

[P] Animal detection robot! Help!

Hi everyone! I’m doing my university final-year project on a robot that deters foxes (and maybe cats 😉 ) from people’s gardens! The general plan is to use an artificial neural network to detect foxes in a camera feed. I’m guessing I need night-vision data, because they tend to show up at night… Does anyone know where I can get training data? Would splitting a video down into frames work as training data? (Probably a stupid question, but how would I even do this!)
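Splitting a video into frames does work as training data — tools like OpenCV or ffmpeg can dump frames to disk. Since consecutive frames are nearly identical, you typically keep only a sampled subset rather than every frame. A minimal sketch of the sampling logic (pure Python; the function name and rates are made up for illustration):

```python
def sample_frame_indices(total_frames, fps, keep_per_second):
    """Pick evenly spaced frame indices so roughly `keep_per_second`
    frames are kept per second of video, instead of all of them."""
    stride = max(1, round(fps / keep_per_second))
    return list(range(0, total_frames, stride))

# e.g. a 4-second clip at 25 fps, keeping ~5 frames per second:
indices = sample_frame_indices(total_frames=100, fps=25, keep_per_second=5)
print(len(indices))  # 20 frames instead of 100
```

With OpenCV installed, looping over `cv2.VideoCapture(path).read()` gives you the frames themselves, and `cv2.imwrite` can save only the sampled indices as images for labeling.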

The initial idea was to have this running on a Raspberry Pi, but I’m not sure if this would even be possible because of the limited processing power.

Also, I’d need a way of testing this! But I’m not allowed to use real animals (not that I could find and use a fox anyway!). Would toy stuffed animals work? I’m guessing that wouldn’t work if I use night-vision images to train my model… I’m not sure.

Does anyone have any experience in real-time computer vision and advice on how to proceed?

Any help would be appreciated 🙂

submitted by /u/CarrotCakePls

[P] An infographic on how to structure deep learning projects

Particularly useful for students and self-learners who want a smoother start in the field.
I wish I had known this when I started out.
I made the infographic from the code structure suggested by the CS230 course taught by Andrew Ng.

https://deeps.site/blog/2019/12/07/dl-project-structure/

https://preview.redd.it/pgg4q3o6sv341.png?width=2000&format=png&auto=webp&s=4190c46db30b9db518edd4abdc95bb01321d686a

submitted by /u/deep_ak

[P] See RNN: Kernel-, Gate-, Channel-wise Visualization of Gradients, Weights, and Activations


DL is more than shooting in the dark and seeing what sticks; to this end, I present the first comprehensive RNN visualization API for Keras & TensorFlow layers, See RNN:

  • Per-gate, per-kernel, per-channel, and per-direction visuals
  • Gradients, weights, and activations visuals
  • Applicable to CNNs & other meaningfully-related layers

Why use it? Introspection is powerful for debugging, regularizing, and understanding NNs. For example, how can you tell whether your RNN is learning long-term dependencies? Monitor gradients: if a non-zero gradient flows through every timestep, then every timestep contributes to updating the weights, so the RNN doesn’t ignore parts of sequences and is forced to learn from them. Or use it just because the visuals are rather pretty.
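The timestep check described above can be sketched framework-agnostically. Here the gradient of the loss w.r.t. an RNN’s hidden states is a toy (timesteps × units) list of floats rather than anything produced by See RNN, and the function name is made up for illustration:

```python
def vanished_timesteps(grads, tol=1e-7):
    """Return indices of timesteps whose gradient L2 norm is below `tol`
    -- timesteps the network is effectively not learning from."""
    return [
        t for t, g in enumerate(grads)
        if sum(x * x for x in g) ** 0.5 < tol
    ]

# Toy gradients for 4 timesteps of a 2-unit RNN; timestep 1 has vanished.
grads = [[0.3, -0.1], [0.0, 0.0], [0.05, 0.2], [-0.4, 0.1]]
print(vanished_timesteps(grads))  # [1]
```

If this list is empty across training, every timestep is contributing to the weight updates; a long run of vanished timesteps early in the sequence is the classic vanishing-gradient signature.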

Numerous examples are explored, with image results, in the link. Functionalities are fully documented and are compatible with TF 1.15.0 & Keras 2.2.5 and below, and TF 2.0.0+ & Keras 2.3.0+. Quickstart sandbox code is included.

Feedback is welcome.

________________________________________________________________________________________________________

https://preview.redd.it/lm7o59f1lv341.png?width=1170&format=png&auto=webp&s=cb7bd53f51b931dc514b5431c9a1afe687bb37f8

https://preview.redd.it/g7p8es25lv341.png?width=1065&format=png&auto=webp&s=0ef3580189c633ccd0a8ca1f0bc640b54be7392b

https://preview.redd.it/yjmciimnhv341.png?width=1025&format=png&auto=webp&s=69f37b3a66e3821015ec2120ca897f8e97af2223

https://preview.redd.it/52q9u4clhv341.png?width=1060&format=png&auto=webp&s=0e4a75bd6ca167e74b5837f981072ff4187ef82f

submitted by /u/OverLordGoldDragon

[D] arXiv Machine Learning Classification Guide

If you are submitting machine learning preprints to arXiv, we now have a helpful categorization guide that we’d appreciate you read before submitting your next preprint: https://blogs.cornell.edu/arxiv/2019/12/05/arxiv-machine-learning-classification-guide/

We often see up to 250 new ML papers each day, and choosing the right category upfront really helps keep the moderation process manageable!

Please let me know if you have any questions about the moderation process.

submitted by /u/seraschka

[D] (“Soft”) Life compartmentalization and practical aspects of working in ML

Some background on myself: I just finished my PhD doing ML research, and I’ll soon begin a postdoc. My apologies if this is not the right subreddit, but I feel many of you may be dealing with the same issues as me, and I think we could benefit from a discussion of this.

I’ve noticed that essentially all the time, I feel I should be working. This is clearly no way to live my life, so I’m trying to identify the factors behind it. One factor is that I often stay up late trying to get code to work so I can train it while I sleep. So I go to bed thinking about work, and I wake up thinking about work, because the first thing I do is check the results. Since I also work during the day, I never stop thinking about how things are going. This mentality makes me very moody: if I’m getting bad results, I feel like what I’m working on is doomed. In short, I identify my life situation and self-worth with simulation results that are relatively arbitrary.

Does anyone else feel the same way? How might we fix this? I think part of a solution is to work on more ‘conceptual’ problems in which things like hyperparameter tuning carry less weight, requiring less training time. But even though I work on pretty conceptual stuff anyway (no classification problems, which often become quite industrialized), small changes in the model do make a difference in quantitative results, which must be there for publication.

I’ve toyed with the idea of a hard cutoff for work, e.g., 8 or 9 pm. But I often find that when I have to train during the day, I feel like I lose a day and just piddle that day away, waiting for the results to finish so I know what to think about or do next.

submitted by /u/Obesogen

Datasets of fully semantically equivalent sentences [Discussion]

Hello all,

I am not sure if this is the correct sub to ask this; let me know if not. I’ve posted this in r/MLQuestions as well, but since this is both a question and an attempt to collect datasets of a certain type for research, I’m posting it here too:

I am looking for a short-text classification dataset for de-duping semantically equivalent sentences. Most text classification datasets I can find online classify text into a relatively small number of topics but don’t have classes of fully semantically equivalent sentences. For example, I want something with a class containing samples like “where is the cake?”, “where can I find the cake?”, “what is the location of the cake?”, etc. Instead, I find datasets where these sentences are labeled “cake” alongside other sentences like “do you like cake?”, “what is your favorite cake?”, etc. I can’t find a short-text dataset in which the samples in each class are fully semantically equivalent rather than just sharing a general topic. I imagine such a dataset should have at least thousands of classes, if not more, to be reasonable, since there are so many semantically unique English sentences.
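Absent such a dataset, one common workaround is to mine equivalence groups yourself by thresholding similarity between sentence embeddings. A toy sketch, with hand-made 3-dimensional vectors standing in for real sentence embeddings (in practice you’d use a trained sentence encoder; the vectors and threshold here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": the first two sentences are paraphrases, the third is off-topic.
sentences = {
    "where is the cake?":         [0.9, 0.1, 0.0],
    "where can I find the cake?": [0.8, 0.2, 0.1],
    "do you like cake?":          [0.1, 0.2, 0.9],
}

# Greedy grouping: a sentence joins the first group whose representative
# it matches above the threshold, otherwise it starts a new group.
THRESHOLD = 0.9
groups = []  # list of (representative_vector, member_sentences)
for text, vec in sentences.items():
    for rep, members in groups:
        if cosine(vec, rep) >= THRESHOLD:
            members.append(text)
            break
    else:
        groups.append((vec, [text]))

print([members for _, members in groups])
# → [['where is the cake?', 'where can I find the cake?'], ['do you like cake?']]
```

The resulting groups are noisy (high cosine similarity does not guarantee full semantic equivalence), so they usually need human verification before serving as class labels.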

All I have found so far can be summarized by what is in this 3 year old repo:

https://github.com/brmson/dataset-sts

Does anyone know of any other such datasets?

Thank you!

submitted by /u/LartTheLuser

Call for Abstracts: Machine Learning + Healthcare Symposium in Bermuda

——————————–

SAIL: Symposium on Artificial Intelligence for Learning Health Systems (SAIL)

Location: Hamilton, Bermuda

Dates: April 27-29th, 2020

Website: https://sail.health/

Submission deadline: December 20th, 2019

——————————–

We are excited to announce the Symposium on Artificial Intelligence for Learning Health Systems (SAIL), a new annual international research symposium exploring the integration of artificial intelligence (AI) techniques into clinical medicine. SAIL, which will be held in Hamilton, Bermuda on April 27-29, 2020, will provide a forum for clinicians, machine learning researchers, and clinical informaticians to discuss the approaches and challenges involved in using these techniques in the healthcare domain.

SAIL will feature invited presentations to expose AI practitioners to the clinical workflow and administrative challenges that commonly prevent real-world adoption. Panels will convene seasoned leaders who have overseen the implementation, adoption, and regulation of real clinical AI systems in practice. Tutorials will provide hands-on exposure to open-source tools for integrating apps with hospital IT systems. Finally, we solicit abstracts for podium or poster presentations designed to generate fruitful discussion (and debate!) among conference attendees from diverse backgrounds (clinicians, clinical informaticians, computer scientists, payors, and regulators).

We invite submissions for abstracts, which will be selected for podium and poster presentations. Abstracts may contain either: 1) new and unpublished work, 2) highlights of recently published work or 3) overarching research theses.

Research themes include: integrating AI into clinical workflows, deploying machine learning systems at scale, and methods for evaluation and monitoring of clinical ML systems. Topics of particular interest include fairness, privacy, generalizability across institutions over time, real-time prediction, and regulatory compliance. Descriptions of novel methods for real-world evidence, causal inference, and precision medicine are also welcome. We highly encourage work that involves interdisciplinary collaboration across AI researchers, clinicians, and informaticians.

The abstract submission deadline is December 20, 2019. Abstracts have a 500-word limit, excluding references, figures, and figure captions. Student discounts and travel support are available. See more details at https://sail.health/call_for_papers.html!

Organizers: Harvard Medical School, MIT, Johns Hopkins University, Columbia University, Duke University, Penn Medicine

Sponsors: New England Journal of Medicine, United Health Group

submitted by /u/samfin55