
Category: Reddit MachineLearning

[D] What is the state of the art for disentanglement?

I just found out about disentangled variational autoencoders and they seem pretty exciting to me.

I was wondering if there was something similar with GANs and so I searched and found this https://arxiv.org/abs/1906.06034 “InfoGAN-CR: Disentangling Generative Adversarial Networks with Contrastive Regularizers”.

What should I search for in order to find papers that have the best performance, or is there a new technique for disentanglement (or something along those lines)?

Thanks.
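For readers who also just discovered them: the best-known "disentangled VAE" is the β-VAE, which upweights the KL term of the usual VAE objective to pressure the latent dimensions toward independence. A minimal numpy sketch of the per-example loss (the function name and the squared-error reconstruction term are my own choices, not from any particular implementation):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Per-example beta-VAE objective: reconstruction error plus a
    beta-weighted KL divergence between the diagonal-Gaussian posterior
    N(mu, exp(logvar)) and the standard-normal prior. beta > 1 is what
    pushes the latent dimensions toward disentanglement."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)  # squared-error reconstruction
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
    return recon + beta * kl

# Toy check: a posterior equal to the prior has zero KL, so the loss
# reduces to the reconstruction term alone (zero for a perfect recon).
x = np.ones((2, 3))
loss = beta_vae_loss(x, x, mu=np.zeros((2, 4)), logvar=np.zeros((2, 4)))
```

Search terms that tend to surface the benchmark papers include β-VAE, FactorVAE, β-TCVAE, and the disentanglement metrics introduced alongside them.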

submitted by /u/runvnc

[D] Has anyone used context to improve object detection and image classification?

We do a lot of image classification using the Tensorflow Object Detection API.

Our images often appear in groups, e.g. a cluster of fish swimming by a camera. Often our model will recognize some of the fish – but not all of them. This is obviously a mistake a human would never make.

I am researching whether there are any examples of object detectors/image classifiers using context to improve results; e.g., knowing that there are three fish swimming through would increase the model’s propensity to find a fourth nearby.

Another way to potentially attack this problem would be to identify clusters of objects, then re-examine only the cluster to identify the number of objects within it. The complication is that some of the objects we are examining appear in clusters, while others do not.

So to summarize, my two questions are:

  • Is there research on, and how would you suggest I go about improving an object detector by using contextual features within an image?
  • In situations where there are clusters of objects, would you recommend recognizing those clusters as individual images and then subsequently processing them to identify the number of objects within the cluster? Are there examples / is there research on this?
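One simple way to act on the first question, as a post-processing heuristic rather than anything from the Object Detection API itself, is to rescore tentative detections based on their proximity to confident ones. A rough numpy sketch (the thresholds, radius, and function name are all made up for illustration):

```python
import numpy as np

def context_boost(boxes, scores, hi_thresh=0.6, radius=50.0, boost=0.15):
    """Rescore detections using spatial context: a low-confidence box whose
    center lies within `radius` pixels of a confident detection gets its
    score raised by `boost`. boxes: (N, 4) as [x1, y1, x2, y2]."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    confident = scores >= hi_thresh
    out = scores.copy()
    for i in range(len(boxes)):
        if confident[i]:
            continue
        # distance from this tentative box to every confident detection
        d = np.linalg.norm(centers[confident] - centers[i], axis=1)
        if d.size and d.min() < radius:
            out[i] = min(1.0, scores[i] + boost)
    return out
```

In the fish example, a second pass like this would let a 0.35-confidence detection sitting right next to a 0.9-confidence fish survive a 0.5 score cutoff, while an isolated 0.35 detection would still be discarded.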

Thanks in advance – obviously doing my own research as well, but keen to hear if the community has any thoughts/examples!

submitted by /u/nitrodolphin

[Project] faces4coco dataset released: Face bounding box annotations for the MSCOCO Images dataset

Github: https://github.com/ACI-Institute/faces4coco

Over half of the 120,000 images in the 2017 COCO (Common Objects in Context) dataset contain people, and while COCO’s bounding box annotations include some 90 different classes, there is only one class for people. Those bounding boxes encompass the entire body of the person (head, body, and extremities), but being able to detect and isolate specific parts is useful and has many applications in machine learning. Detecting faces in particular is useful, so we’ve created a dataset that adds face annotations to COCO.
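A typical first step with a dataset like this is to associate each face box with the person box that contains it. A minimal sketch assuming COCO’s [x, y, width, height] box convention (the helper names are mine, not from the repo):

```python
def box_contained(inner, outer, tol=0.0):
    """True if `inner` [x, y, w, h] lies within `outer` [x, y, w, h],
    using the COCO-style [x, y, width, height] box convention."""
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return (ix >= ox - tol and iy >= oy - tol and
            ix + iw <= ox + ow + tol and iy + ih <= oy + oh + tol)

def match_faces_to_people(face_boxes, person_boxes):
    """Pair each face annotation with the first person box that contains
    it; faces with no containing person box map to None."""
    matches = {}
    for fi, face in enumerate(face_boxes):
        matches[fi] = next(
            (pi for pi, person in enumerate(person_boxes)
             if box_contained(face, person)), None)
    return matches
```

In crowded scenes a face may fall inside several overlapping person boxes, so a real pipeline would likely pick the smallest containing box rather than the first, but the containment test itself is the core of the association.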

submitted by /u/frankcarey

[Discussion] Comparing UK ML-Neuro Labs: DeepMind, Google Brain, Microsoft Cambridge, etc.

I’m a master’s student in the UK hoping to get 1-2 years of research experience at the intersection of machine learning and neuroscience (so not so much image recognition, speech recognition, etc.)

The two academic options that immediately came to mind were the Gatsby Computational Neuroscience Unit at UCL and Cambridge’s Computational and Biological Learning group. I’m not familiar with industry labs and would like your help in comparing DeepMind, Google Brain (UK site), Microsoft Research Cambridge, etc. Which of these offers the most relevant research in terms of machine learning applied to neuroscience, given that I am hoping to enter academia in computational neuroscience? A lab that publishes well is also a nice plus, as always. Thanks.

submitted by /u/Napoleon-1804

[D] Did I find anything useful?

Hello everyone!

I found an algorithm that allows me to solve the optimal-partition problem quite (very) quickly. For any type of score (the Gini index of the target, for example) I can find the partition of the input that minimizes it, even on really big datasets. I remind you that in the case of decision trees this optimal partition is not always reached, because the algorithm is greedy.
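For readers unfamiliar with the score being discussed: the quantity a greedy decision-tree split reduces one split at a time, and which an exact partitioning algorithm would minimize globally, is the size-weighted Gini impurity of the resulting groups. A minimal sketch:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a collection of class labels:
    1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def weighted_gini(partition):
    """Size-weighted Gini impurity of a partition (a list of label
    groups). A greedy tree lowers this one binary split at a time; an
    exact method would minimize it over all admissible partitions."""
    total = sum(len(g) for g in partition)
    return sum(len(g) / total * gini(g) for g in partition)
```

For example, a partition that separates the classes perfectly, such as `[[0, 0], [1, 1]]`, scores 0, while one that mixes them evenly scores 0.5.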

Since I’m not a data scientist, but rather a combinatorial-optimization specialist, I’d like to know whether this is really a discovery or whether it has already been done, for example in decision-tree implementations.

submitted by /u/partitionist

[D] Should we start charging for interviews as candidates?

I’ve been interviewing for a few months now with various companies. I have a PhD in ML and 5 years of experience in academia and industry, and I have been actively publishing in prestigious ML and CV conferences and journals for the past 5 years.

During my very frustrating job hunt, I managed to get interviews with 20+ companies, startups, and academic/industrial collaborative institutions. Apart from a couple of the interviews where I quickly discovered the position wouldn’t be a good fit for my future, all the interviews went really well, and I communicated well with the recruiting management and the recruiters. I travelled across the country, paid for accommodation, paid for interview attire (we wear T-shirts to work), took many days of unpaid leave from my current employer, and as such lost lots of money out of my own pocket trying to get a job; not to mention the many hours I had to spend preparing for the interviews, showing up to them, waiting for the recruiter or hiring committee to not show up and then reschedule the whole thing, and so on.

I also spent priceless time doing coding interviews and more time-consuming take-home challenges. I paid out of my own pocket for Google Cloud to run the training for the coding challenges. More often than not I felt that doing the interview was actually offering the company a “FREE CONSULTATION GIG”, rather than them wanting to know more about my background and my resume, gauge my experience and abilities, and see if I’d be a good cultural fit for their team.

All that only to hear first-hand from the hiring manager or my connections (friends who work at the company in question) that an internal candidate got prioritized, or that they can’t offer me work-permit sponsorship, or that the hiring manager wasn’t too sure they really wanted to hire someone at the moment, or to get hit by a country-wide hiring freeze because of elections (Canada).

My search still continues for the right job, but I’ve been thinking of starting to send bills to companies that outright waste candidates’ time and money, and don’t offer any compensation for accommodation and travel.

What do you think? Are we selling ourselves too cheaply these days or is this a norm and I should get used to getting treated like this when I’m interviewing?

submitted by /u/EnterOblivionS

[D] Go champion Lee Se-dol beaten by DeepMind retires after declaring AI invincible

https://en.yna.co.kr/view/AEN20191127004800315

Announced today in South Korea, and it’s made me think about the sort of impact that these things will have on people in the coming days. There’s definitely a great deal of good that can be achieved, with innovation/growth and so many opportunities in general for the companies and people involved in this work.

But at the same time, it is kind of sad to see some of the human element get left behind. I’m sure Lee Se-dol could have played for many more years if he wanted to, continuing to contribute greatly to the professional Go scene as a player.

This is something that I wonder, then, whether people working at companies like Google / DeepMind should be thinking about. I’m sure the growing profit margins and the money flowing in from all our work are more than satisfactory for the company leadership / investors not to have any issues with all this. As the engineers responsible for actually building everything, though, is there any kind of ethical consideration on our parts that we need to recognize? I don’t know myself, to be honest. But I am curious as to what you all think here in the r/machinelearning community.

submitted by /u/ilikepancakez

[R] SuperGlue: Learning Feature Matching with Graph Neural Networks

Arxiv: https://arxiv.org/abs/1911.11763

Abstract: This paper introduces SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points. Assignments are estimated by solving a differentiable optimal transport problem, whose costs are predicted by a graph neural network. We introduce a flexible context aggregation mechanism based on attention, enabling SuperGlue to reason about the underlying 3D scene and feature assignments jointly. Compared to traditional, hand-designed heuristics, our technique learns priors over geometric transformations and regularities of the 3D world through end-to-end training from image pairs. SuperGlue outperforms other learned approaches and achieves state-of-the-art results on the task of pose estimation in challenging real-world indoor and outdoor environments. The proposed method performs matching in real-time on a modern GPU and can be readily integrated into modern SfM or SLAM systems.
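The "differentiable optimal transport problem" in the abstract can be illustrated with the classic Sinkhorn algorithm. Below is a minimal dense numpy sketch of the idea; SuperGlue itself works in log-space and adds a "dustbin" row and column so points can remain unmatched, and the function here is my own simplification:

```python
import numpy as np

def sinkhorn(cost, n_iters=100, eps=0.1):
    """Entropy-regularized optimal transport via Sinkhorn iterations:
    alternately normalize the rows and columns of exp(-cost / eps) so the
    result converges to a doubly-stochastic "soft assignment" matrix.
    Lower eps gives a harder, more permutation-like assignment."""
    K = np.exp(-cost / eps)
    for _ in range(n_iters):
        K = K / K.sum(axis=1, keepdims=True)  # make every row sum to 1
        K = K / K.sum(axis=0, keepdims=True)  # make every column sum to 1
    return K

# Toy 2x2 cost that favors the diagonal pairing: the soft assignment
# concentrates almost all of its mass on matches (0, 0) and (1, 1).
P = sinkhorn(np.array([[0.0, 1.0], [1.0, 0.0]]))
```

Because every step is just elementwise exponentials and normalizations, the assignment is differentiable with respect to the costs, which is what lets the graph neural network that predicts those costs be trained end-to-end.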

submitted by /u/youali

[R] Study on 299 data sets shows that non-linear SVMs/ANNs outperform linear SVMs/ANNs in only 20 per cent of cases

When do non-linear versions of algorithms such as SVMs or neural networks outperform linear methods at a statistically significant level? We researched this question by running experiments on 299 data sets in OpenML. The results show that non-linear methods are better at a significant level in only around 20 per cent of cases. We also investigated the question more deeply, looking at the types of data sets for which this happens in terms of the number of instances and features, and building meta-learning models.
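The kind of per-data-set tally behind a "20 per cent" figure can be sketched as follows; the accuracies here are invented, and the paper itself uses proper per-data-set significance tests rather than the fixed margin used below:

```python
import numpy as np

def nonlinear_win_fraction(acc_linear, acc_nonlinear, margin=0.0):
    """Crude per-data-set comparison in the spirit of the study: count how
    often the non-linear model beats the linear one by more than `margin`,
    and return that fraction of the data sets. (Only an illustrative
    tally; the real study tests significance per data set.)"""
    acc_linear = np.asarray(acc_linear, dtype=float)
    acc_nonlinear = np.asarray(acc_nonlinear, dtype=float)
    wins = np.sum(acc_nonlinear > acc_linear + margin)
    return wins / len(acc_linear)

# Hypothetical accuracies on five data sets: the non-linear model clearly
# wins on only one of them, i.e. a fraction of 0.2.
frac = nonlinear_win_fraction([0.80, 0.91, 0.75, 0.88, 0.60],
                              [0.81, 0.90, 0.76, 0.95, 0.61],
                              margin=0.02)
```

The deeper point of the paper survives even this crude framing: unless the per-data-set wins are both frequent and large, defaulting to the non-linear model is not obviously justified.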

Reference:

Benjamin Strang, Peter van der Putten, Jan N. van Rijn and Frank Hutter. Don’t Rule Out Simple Models Prematurely: a Large Scale Benchmark Comparing Linear and Non-linear Classifiers in OpenML. In: Seventeenth International Symposium on Intelligent Data Analysis (IDA), 2018

submitted by /u/pppeer