
Category: Reddit MachineLearning

[Discussion] Millions of different NN models at the same time?

Hi folks,

Does anyone know of applications or use cases where one has to deal with hundreds, thousands, or even millions of queries to different neural network models? Maybe cloud service providers with user-customised neural networks, or some AR applications… And if so, how do people handle this situation nowadays? Wouldn’t the storage requirements blow up like crazy? And what about latency issues (off-chip loading)?
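For illustration only (this is an assumption about one common serving pattern, not something from the post): setups that serve many per-user models often keep only the most recently used models in memory and lazily load the rest from slower storage, which is exactly where the storage and off-chip-loading trade-off shows up. A minimal sketch:

```python
# Illustrative sketch: lazy loading with an LRU cache over per-user model weights.
from collections import OrderedDict

class ModelCache:
    def __init__(self, load_fn, capacity=128):
        self.load_fn = load_fn        # e.g. reads weights from disk or object storage
        self.capacity = capacity
        self.cache = OrderedDict()    # model_id -> loaded model, in LRU order

    def get(self, model_id):
        if model_id in self.cache:
            self.cache.move_to_end(model_id)   # cache hit: mark as recently used
            return self.cache[model_id]
        model = self.load_fn(model_id)         # cache miss: pay the off-chip load here
        self.cache[model_id] = model
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used model
        return model

# Usage with a stand-in loader:
cache = ModelCache(load_fn=lambda mid: f"weights for {mid}", capacity=2)
print(cache.get("user_42"), cache.get("user_7"), cache.get("user_42"))
```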

submitted by /u/swiedema

[R] Perception and Detection in Autonomous Navigation

I’ve dropped a resource. It contains explanations of numerous papers on autonomous navigation (ICCV and CVPR 2019). I attended ICCV this year, so it also contains content from ICCV 2019 (talks and poster sessions). This video will help you get started in this amazing field and should be useful to those working on related projects. Mainly, it covers:

– Different viewpoints used for perception in autonomous navigation

– Depth Perception (Monodepth, Monodepth 2)

– Pseudo-LiDAR (Pseudo-LiDAR++); a minimal sketch of the core idea follows this list

– 3D Object detection (Frustum PointNets, StereoRCNN)

– Monocular and Stereo depth estimation and 3D object detection methods
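For readers new to the area, here is a minimal sketch of the pseudo-LiDAR idea mentioned above: back-projecting a predicted depth map into a 3D point cloud using the camera intrinsics. The depth map and intrinsic values below are illustrative assumptions, not numbers from any of the papers.

```python
# Minimal sketch of pseudo-LiDAR: turn an (H, W) depth map into an (H*W, 3) point cloud.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) through a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # pinhole back-projection along the image x-axis
    y = (v - cy) * z / fy   # and along the image y-axis
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example with a dummy depth map and made-up intrinsics.
depth = np.random.uniform(1.0, 50.0, size=(375, 1242)).astype(np.float32)
points = depth_to_point_cloud(depth, fx=721.5, fy=721.5, cx=621.0, cy=187.5)
print(points.shape)  # (465750, 3)
```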

https://www.youtube.com/watch?v=hyEzJF3WcTI

I hope this video helps. Please feel free to give any feedback. Thanks!

submitted by /u/vector_machines

[D] Faster-RCNN: Why two different bbox regressors?

Can anyone explain why Faster-RCNN has bbox regressors at two different stages?

– Region Proposal Network (RPN): anchor deltas

– Fast RCNN: bbox predictor

Aren’t they doing the same thing? What’s the idea behind having a bbox predictor at the final stage when the RPN has already computed an anchor (which, along with its delta, can be treated as the bbox)?
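For reference, here is a minimal sketch of the standard box-delta parameterization that both stages apply, the RPN to anchors and the Fast R-CNN head to the resulting proposals. The boxes and deltas below are illustrative assumptions, and the helper is a simplified stand-in for a framework's own decoding code.

```python
# Minimal sketch: the same delta decoding is applied twice, first to anchors (RPN),
# then to the resulting proposals (Fast R-CNN head).
import numpy as np

def apply_deltas(boxes, deltas):
    """boxes: (N, 4) as (x1, y1, x2, y2); deltas: (N, 4) as (tx, ty, tw, th)."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    cx = boxes[:, 0] + 0.5 * w
    cy = boxes[:, 1] + 0.5 * h

    cx = cx + deltas[:, 0] * w        # shift the box centre
    cy = cy + deltas[:, 1] * h
    w = w * np.exp(deltas[:, 2])      # rescale the box size
    h = h * np.exp(deltas[:, 3])

    return np.stack([cx - 0.5 * w, cy - 0.5 * h,
                     cx + 0.5 * w, cy + 0.5 * h], axis=1)

anchor = np.array([[100.0, 100.0, 200.0, 200.0]])
proposal = apply_deltas(anchor, np.array([[0.1, -0.05, 0.2, 0.0]]))       # RPN refinement
final_box = apply_deltas(proposal, np.array([[0.02, 0.01, -0.1, 0.05]]))  # head refinement
print(proposal, final_box)
```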

I’m most likely missing something. Any clarification would be appreciated!

Thanks for reading!

submitted by /u/raghavgoyal14

[D] How to prepare for ML jobs as math/stat major?

Hi all,

I am very interested in ML jobs, but all I learned in class is math/stat theory. I used Python a lot in Econ research, though mostly for data cleaning. I have tried some Kaggle notebooks and getting-started competitions, so I know a bit of the general procedure. I have also watched some of Andrew Ng's videos.

I feel like I now have a general idea of how ML works and can understand the code, but how can I become good at ML and find a job?

Another question: would a master's in Econ be useful for finding ML jobs? I am one third of the way into the program but not sure if I should finish it. The program lets me choose business analytics classes; do you think it is worth it, or should I try to find a job with my math undergrad directly?

Thanks a lot

submitted by /u/kevinljc

[P] Image + Text input classification

Hi, I’m trying to build a network that will run in real-world production, taking inspiration from this article:

Classifying e-commerce products based on images and text

The author is trying to predict a product’s label given 1 image input and 1 product-name text input.

My dataset has 6 attributes (5 image inputs and 1 text input) plus 1 class label (the output). So I want to create a model that takes 5 product image inputs and 1 description text input and predicts that product’s category.

[figure: model_architecture]

My questions are;

  1. For the image part, I thought that instead of merging the 5 images into 1 and passing it to a CNN feature extractor, creating 1 CNN feature extractor (the blue box) and applying it 5 times to the 5 images with shared weights would help. Am I right? (A minimal sketch of this shared-extractor setup follows at the end of the post.)
  2. The author uses a pre-trained VGG-16 for image feature extraction, and he wrote that article in 2014. Should I change that extractor or not? I have taken a look at state-of-the-art classification architectures and saw that EfficientNets have pretty good results. Alternatively, even if it’s not a SOTA architecture, I have used Darknet-53 for a different task. How should I choose my extractor? Should I try all of them and find which one is better?
  3. I said 5 images + 1 text, but actually there are up to 5 images per product: users can upload 1 to 5 images, so products in my training set have anywhere from 1 to 5 images. Would feeding the network 3 images + 2 zero matrices work when a product has only 3 images?
  4. I wrote “RNN” in the figure, but I have no idea what to do for the text feature extraction part. The author uses a bag-of-words model. Should I go with that, or do you know of a better, SOTA approach for text feature extraction? I took Andrew Ng’s deep learning courses and saw something like the following for sentiment classification:

[figure: rnn_for_extraction]

How would using something like this (without the softmax) for text feature extraction affect things? Should I even do this?
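To make question 1 concrete, here is a minimal sketch of the shared-weight setup in Keras: one CNN extractor object reused across the 5 image inputs, concatenated with a simple text branch. The layer sizes, vocabulary size, number of classes, and the build_image_branch helper are illustrative assumptions, not the article’s architecture.

```python
# Minimal sketch: one shared CNN applied to 5 image inputs, fused with a text branch.
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 10  # assumption: number of product categories

def build_image_branch(input_shape=(224, 224, 3)):
    # A single feature extractor; reusing this Model object shares its weights.
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return Model(inp, x, name="shared_image_extractor")

shared_cnn = build_image_branch()

# Five image inputs, all passed through the same extractor (shared weights).
image_inputs = [layers.Input(shape=(224, 224, 3), name=f"image_{i}") for i in range(5)]
image_features = [shared_cnn(img) for img in image_inputs]

# Text branch: embedding plus simple pooling over a tokenized description.
text_input = layers.Input(shape=(None,), dtype="int32", name="description_tokens")
t = layers.Embedding(input_dim=20000, output_dim=64)(text_input)
t = layers.GlobalAveragePooling1D()(t)

# Fuse all branches and classify.
merged = layers.Concatenate()(image_features + [t])
out = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(NUM_CLASSES, activation="softmax")(out)

model = Model(inputs=image_inputs + [text_input], outputs=out)
model.summary()
```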

submitted by /u/cansozbir

[D] Is there software that I can “train” to recognize my handwriting and convert to text? (Or actually decent OCR software out there?)

I recently got an app that allows me to write on a designated notepad, take a picture, convert it to a PDF, and then export it. This is super cool, but it would be even better if I could convert the handwritten PDFs into text. I’ve read a bit about OCR, but I’m wondering if there is some software out there I could train to recognize specifically my handwriting. I imagine it would be a lot less error prone because it would be personalized, but I’m not sure if such a thing exists.

I did work in an AI lab and understand the basics of how training these programs works, so I know it would technically be possible, but does this software exist somewhere already? Could I feasibly train some rudimentary program with open-source software available on GitHub or the like? Or is there just some pretty good program out there already that’s decent enough to recognize fairly poor, half-cursive handwriting?

Thanks!

submitted by /u/shockingly-immoral

[D] How should I prepare for a top PhD program as an undergrad?

Hello everyone,

I am currently a senior in high school, and I want to pursue a PhD in machine learning. I plan on majoring in math and statistics at a T50 state school.

However, I understand that the top programs (MIT, CMU, UC Berkeley) are extremely hard to get into, even more so for machine learning. My question is: what should I try to do over the next three years to boost my chances as a candidate? I’ve already taken linear algebra at the local community college and am wrapping up multivariable calculus, so hopefully I can begin learning more machine learning in the coming weeks.

Here are some other questions I have:

I heard published research was huge for the top programs, but what should I aim for there? Would a paper or two on arXiv be enough, or would I have to get a paper published at a major conference like NIPS or ICML? Would co-authoring be enough, or would I need some first-authorships under my belt?

I’ve read that references are huge as well. However, the state school I’m going to isn’t known for its machine learning program. There are courses on machine learning, but I doubt the profs are world-renowned experts. Obviously I’ll use any resource the college has to offer, but should I aim to build connections with more renowned researchers in the field? How would I go about doing this? Beyond just getting a reference, I really want to work with an expert so that I have someone to contact when I have a question about the field.

As for my summers, would they be better spent on internships, research, or a mix of the two?

Lastly, I feel like going to a T50 school is hurting my chances the most here. Is my only hope transferring to a more prestigious school at the cost of an extra year in college, or is it possible to compensate for the lack of prestige through research experience? If I managed to get, say, a first-author paper published at a top conference like ICML, would that compensate for the lack of prestige of my undergraduate school? Would that even be possible given that I’ve only had a basic introduction to machine learning through a course at my magnet school?

Cheers!

submitted by /u/MaximumOverkeks