
Category: Reddit MachineLearning

[D] Deciding between ML Masters in Germany (Saarland vs. Tuebingen)

So I have recently been admitted to a relatively new Masters in Data Science and Artificial Intelligence at Saarland University, starting in April of next year. I have read great things about the CS departments at Saarland, and the presence of the Max Planck Institute for Informatics + the German Research Center for Artificial Intelligence is really attractive for someone interested in developing an ML career, especially for someone interested in the Computer Vision + ML area like me, since the machine learning laboratory there is very focused on this.

Besides applying to Saarland, I was planning to apply to Tuebingen’s Masters in Machine Learning when applications open next year. However, since Saarland admitted me this early, I will have to make a decision before I hear about the results of Tuebingen.

If I accept the offer at Saarland, I would look to work as a research assistant in one of these institutes (ideally with Prof. Bernt Schiele in the Computer Vision and Machine Learning lab), and after graduating I would search for Machine Learning engineer jobs or, if I really enjoy my experience with research, maybe pursue a PhD.

The thing is that, at the moment, I see my future more in an industry environment than in academia. Because of this, what attracts me to Tuebingen is the presence of Amazon, Bosch, the Max Planck Institute for Intelligent Systems, and all of the attention it has been gaining recently from the “Cyber Valley” consortium (https://cyber-valley.de/), all of which make Tuebingen look like a place where I could get an internship at one of these companies and then more easily transition into industry. In fact, some professors that previously worked

Both universities seem to be really strong in ML research, with amazing professors and opportunities in their respective laboratories. Despite this, I can’t help but feel that if I choose Saarland, I may be missing out on industry internships such as the ones I mentioned. Am I overthinking the weight of the nature of internships during a Masters? (ML research in laboratories vs. ML industry)

It is also important for me to mention that I am from Venezuela, and while it may sound easy for some people to just defer the admission to the next semester, wait for an answer from Tuebingen, and make a better-informed decision, for me that is a risk, since the country can get worse economically and politically at any moment. Going to Saarland means starting in April; going to Tuebingen (if I were to get accepted) means starting in October.

I don’t know if this is the right place to ask, but since my doubts concern the quality of both ML programs and the ML opportunities available at each university, I thought I’d raise my concerns here.

submitted by /u/TomIsOK

[Research] Efficient Learning on Point Clouds with Basis Point Sets

Hi all,

We have just released the code for our ICCV 2019 paper on “Efficient Learning on Point Clouds with Basis Point Sets”:

https://github.com/sergeyprokudin/bps

We present basis point sets (BPS), a simple and efficient method for encoding 3D point clouds into fixed-length representations.

The method is based on the following simple idea: select k fixed points in space and compute vectors from these basis points to the nearest points in a point cloud, then use these vectors (or simply their norms) as features:

[Figure: Basis point set encoding for point clouds.]

The basis points are kept fixed for all the point clouds in the dataset, providing a fixed-length vector representation of every point cloud. This representation can then be used as input to arbitrary machine learning methods; in particular, it can be fed to off-the-shelf neural networks.
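To make the encoding concrete, here is a minimal NumPy/SciPy sketch of the idea described above. It is not the authors' implementation (see the repository linked above for the official code); the function name, shapes, and random basis-point sampling are our own illustrative assumptions.

```python
# A minimal sketch of the BPS idea (illustrative only; the official code lives in the
# repository linked above). Basis points are sampled once and reused for every cloud.
import numpy as np
from scipy.spatial import cKDTree

def bps_encode(point_cloud, basis_points, return_deltas=True):
    """point_cloud:  (n, 3) array of 3D points.
       basis_points: (k, 3) array of fixed basis points shared across the dataset."""
    tree = cKDTree(point_cloud)
    dists, idx = tree.query(basis_points)       # nearest cloud point for each basis point
    if return_deltas:
        return point_cloud[idx] - basis_points  # (k, 3) directional features
    return dists                                # (k,) norm-only features

rng = np.random.default_rng(0)
basis = rng.uniform(-1.0, 1.0, size=(512, 3))   # kept fixed for the whole dataset
cloud = rng.uniform(-1.0, 1.0, size=(2048, 3))  # one (normalized) point cloud
features = bps_encode(cloud, basis)             # fixed-length encoding, shape (512, 3)
```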

Below is an example of a simple model that uses BPS features as input for the task of mesh registration over a noisy scan (you can check the resulting alignments here):

[Figure: Mesh registration network using BPS features as input. Efficient 3D-convolutional models are also available.]

Below are the key differences between BPS and standard occupancy voxels or truncated signed distance fields (TSDF):

  • continuous global vectors instead of simple binary flags or local distances in the cells;
  • a smaller number of cells required to represent a shape accurately;
  • the BPS cell arrangement can differ from a standard rectangular grid, allowing different types of convolutions;
  • a significant improvement in performance: simply substituting occupancy voxels with BPS directional vectors results in a +9% accuracy improvement for a VoxNet-like 3D-convolutional network on the ModelNet40 classification challenge.

Check our ICCV 2019 paper for more details.

submitted by /u/___sergey

[D] Monocular depth perception of autonomous vehicles

The most thorough open source project I’ve found on monocular depth perception is Andrew Ng and Saxena’s. They have published multiple research papers on the topic, though they get just under 70% accuracy at best. I would imagine autonomous cars get close to 100% accuracy at depth perception. How do they do it? Any research papers on this?

submitted by /u/Mjjjokes

[P] Machine Learning for error correction – What sort of model should I use?

Hello!

I currently work on a software product that interfaces with a hardware device. The hardware device takes a set of 5 parameters. When the software is installed, we are able to generate these 5 parameters for the customer’s specific hardware to a “close enough” degree of accuracy using basic physics and math calculations for the device. However, I have noticed from my logging that in most cases the customers have to adjust these parameters by up to ±5% to reach an optimal value.

Instead of just logging these parameter changes, could I feed them into an ML model of some kind that would learn which of the values I generate need to be adjusted, and by how much?
My ML experience so far is mainly predictive models using Naive Bayes, a few genetic algorithms, and LSTM models for algo trading. However, I am not sure how I should approach this problem, so I am interested in any insight.
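One hedged sketch of that framing (not specific to your device; the file names and the choice of a random-forest regressor are illustrative assumptions): treat the physics-generated parameters as inputs and the customers' logged adjustments as regression targets, so the model learns the correction rather than the raw values.

```python
# Illustrative sketch: learn the customers' correction on top of the physics-based values.
# "generated_params.npy" / "tuned_params.npy" are hypothetical logs, shape (n_installs, 5).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X = np.load("generated_params.npy")          # the 5 generated parameters per install
y = np.load("tuned_params.npy")              # the customers' final tuned parameters

# Predict the delta (tuned - generated) instead of the absolute values.
X_train, X_test, d_train, d_test = train_test_split(X, y - X, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, d_train)

corrected = X_test + model.predict(X_test)   # generated values plus learned adjustment
mae = np.abs(corrected - (X_test + d_test)).mean()
print(f"mean absolute error after correction: {mae:.4f}")
```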

Thanks!

submitted by /u/MopicBrett

[P] Deep Danbooru (girl image tag estimation) source code is available on github now

Previous post: https://www.reddit.com/r/MachineLearning/comments/akbc11/p_tag_estimation_for_animestyle_girl_image/

Hi, as promised, I have now released my code on GitHub.

Here are the main changes since the last post:

  • ML library is changed from CNTK to Tensorflow 2.x
  • Optimizer is changed from Adam to SGD with momentum (see the sketch after this list)
  • Dataset is updated (20191108)
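For readers unfamiliar with the optimizer change listed above, the hypothetical Tensorflow 2 snippet below shows what swapping Adam for SGD with momentum looks like in Keras; it is not taken from the DeepDanbooru code, and the toy model and hyperparameter values are illustrative assumptions.

```python
# Hypothetical Tensorflow 2 illustration of the Adam -> SGD-with-momentum swap;
# not the actual DeepDanbooru training code.
import tensorflow as tf

# before (as described): optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)

# Toy multi-label tagger standing in for the real network.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(299, 299, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1000, activation="sigmoid"),  # one sigmoid output per tag
])
model.compile(optimizer=optimizer, loss="binary_crossentropy")
```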

Repository URL: https://github.com/KichangKim/DeepDanbooru

It contains a pre-trained model as well, so you can test your images immediately.

The legacy CNTK web demo is still available, but its URL has changed to http://kanotype.iptime.org:8004/deepdanbooru/ (only the port is different). The current web demo uses the latest Tensorflow 2 version. I’ll keep the CNTK demo running for a while, but I’ll shut it down eventually.

Have fun.

submitted by /u/KichangKim

True accuracy calculation in positive-unlabeled setting [D]

Consider the setting where only the positive class is labeled (e.g. we have a pattern where matches are positive, but failure to match does not mean the case is negative). Such a data set is much easier to create (often domain experts can provide rules for what qualifies as positive, but cannot formulate a rule to rule out the positive class), but accuracy metrics calculated on the positive-unlabeled data set are inaccurate.

There are papers showing how to calculate the true accuracy (i.e. the one that would be calculated on a data set containing both negative and positive examples) [1], but surprisingly they seem to be ignored in more applied papers – is there a reason? Overall, such a method would solve a huge problem, especially in the medical domain, where the rarity of the positive class requires a huge sample size for validation.

The major challenge in these works is estimating the positive class prior, and they propose various algorithms for that (e.g. AlphaMax). Is there any reason not to simply manually review a sample of the positive and unlabeled sets and count the number of positive cases?
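As a rough illustration of how a prior estimate plugs in (a hedged sketch under the standard SCAR assumption that the labeled positives are a random sample of all positives; this is not the procedure from [1]): given an estimate of the positive class prior, the recall measured on the labeled positives and the fraction of predicted positives are enough to back out precision and accuracy.

```python
# Hedged sketch: correcting precision/accuracy once the class prior alpha = P(y=1) is
# estimated (e.g. via AlphaMax, or by manually labeling a random sample as asked above).
# Assumes SCAR, so recall on the labeled positives approximates true recall.
def corrected_metrics(alpha, recall, pred_pos_rate):
    """alpha:         estimated P(y=1) over the evaluation data
       recall:        P(y_hat=1 | y=1), measured on the labeled positives
       pred_pos_rate: P(y_hat=1), the fraction of examples flagged positive"""
    precision = alpha * recall / pred_pos_rate                    # Bayes' rule
    accuracy = 1.0 - alpha - pred_pos_rate + 2.0 * alpha * recall
    return precision, accuracy

# Example: 10% positives, 80% recall, classifier flags 15% of all examples.
print(corrected_metrics(alpha=0.10, recall=0.80, pred_pos_rate=0.15))
# -> (0.533..., 0.91): the precision and accuracy a fully labeled test set would show.
```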

  1. Jain, S., White, M. & Radivojac, P. Recovering True Classifier Performance in Positive-Unlabeled Learning. AAAI 2017. https://www.ccs.neu.edu/home/radivojac/papers/jain_aaai_2017.pdf
  2. Jain, S., White, M., Trosset, M. W. & Radivojac, P. Nonparametric Semi-supervised Learning of Class Proportions. arXiv:1601.01944 [cs, stat] (2016).

submitted by /u/CacheMeUp

[P] AI Beatmaker that Creates Original Drum Beats

Hi Machine Learning!

I have made a web app that creates original drum beats using Artificial Intelligence.

I trained a GAN on 2,000 drum MIDI samples; I used a low learning rate and trained for approximately 2 months continuously on a spare PC.

What do you think? How can I improve the app?

Link: https://beatz.ai 😊

Please let me know if you have any suggestions for improvements 🙂

Thanks

Cheers
Pete

submitted by /u/itsmybirthday19

[D] Societal problems/topics that can be reasonably tackled during a Computer Vision/ML PhD while generating interesting research from a theory standpoint.

Long-time reader, but first-time poster: If this post should go into r/cscareerquestions, or be deleted, just let me know.

I have the opportunity to start a PhD in Computer Vision/ML with a potential focus on SLAM. I am looking for topics related to society that can guide and especially motivate the research. A counterexample would be face recognition, which is technically fascinating but IMHO has significant negative potential from a political perspective. A possible example would be “helping the blind”, where computer vision should generally be beneficial; however, I currently assume that most challenges in this area are more user-interface- and product-design-related.

I am asking this question here, and not at, say, r/AskReddit, because I was hoping to get a better sense of how to strike a balance between a topic that is relevant from a research perspective and one that is relevant to society. I do also understand that this might be a topic to discuss with my supervisor, and I will; nevertheless, I am very interested in your opinions 🙂

Thank you all in advance for your answers, and in retrospect for the great content on this sub!

submitted by /u/grad-student-descent