Category: Reddit MachineLearning

[R] Symmetry-Based Disentangled Representation Learning requires Interaction with Environments

NeurIPS 2019 paper that looks interesting:

Symmetry-Based Disentangled Representation Learning cannot only be based on static observations: agents should interact with the environment to discover its symmetries.

I’m not familiar with this line of research, but it seems like this could have significant implications for how models are trained, as many current benchmark datasets are static. I’d be interested in hearing thoughts from those more familiar with the method.

submitted by /u/rtk25

[P] Visualizing Tensor Operations with Factor Graphs

Hey everyone,

I’ve written a new blog post on an awesome visualization tool that I recently came across: factor graphs. I initially encountered them in the context of message passing on graphical models, but soon realized that they are useful in more general contexts.

This is the first post in a series that covers the basics and mainly focuses on understanding how factor graphs work as a visualization tool, along with a cool example of a visual proof using them. In future posts, I plan to cover algorithms like message passing and belief propagation using this visualization framework.

I made the animations using manim, a math animation tool created by the amazing 3blue1brown. I built a small library, manimnx, on top of manim to help interface it with the graph package networkx. You can find the code for the animations in this GitHub repo.
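
To make the factor-graph picture concrete, here is a minimal sketch (my own toy example, not taken from the blog post) that encodes matrix multiplication as a factor graph with networkx: tensors become one kind of node, indices the other, and the shared index is the edge being summed over.

```python
import networkx as nx
import numpy as np

# Factor-graph view of C[i,k] = sum_j A[i,j] * B[j,k]:
# tensors A, B are factor nodes; indices i, j, k are variable nodes;
# the shared variable j is the index being contracted away.
G = nx.Graph()
G.add_nodes_from(["A", "B"], kind="tensor")
G.add_nodes_from(["i", "j", "k"], kind="index")
G.add_edges_from([("A", "i"), ("A", "j"), ("B", "j"), ("B", "k")])

# The contraction the graph encodes, written as a plain einsum:
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))
C = np.einsum("ij,jk->ik", A, B)
```

The graph itself is just bookkeeping, but it makes the structure of the contraction visible: every index that connects two tensors is summed out, and dangling indices (i and k here) survive into the result.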

Feedback is welcome!

submitted by /u/MindSustenance

[R] Meta-Learning Deep Energy-Based Memory Models

Interesting research from DeepMind:

“Our new work on memory uses a neural network’s weights as fast and compressive associative storage. Reading from the memory is performed by approximate minimization of the energy modeled by the network.”

“Unlike classical associative memory models such as Hopfield networks, we are not limited in the expressivity of our energy model, and make use of the deep architectures with fully-connected, convolutional and recurrent layers.”

“For this to work, stored patterns must be local minima of the energy. We use recent advances in gradient-based meta-learning to write into the memory such that this requirement approximately holds.”
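
The reading procedure described in the quotes can be illustrated with a toy energy model (my own, far simpler than the paper's): stored patterns sit at low-energy points of a smooth energy over inputs, and a read is gradient descent on that energy starting from a corrupted query.

```python
import numpy as np

# Toy energy-based read (illustrative only, not DeepMind's model).
# Energy: E(x) = -T * log sum_i exp(-||x - p_i||^2 / T), whose minima
# lie near the stored patterns p_i. Reading = gradient descent on E.
rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(5, 32))  # 5 stored patterns

def energy_grad(x, temp=4.0):
    d2 = ((x - patterns) ** 2).sum(axis=1)
    w = np.exp(-(d2 - d2.min()) / temp)           # softmax weights
    w /= w.sum()
    return 2.0 * (x - w @ patterns)               # grad of E above

query = patterns[0] + 0.4 * rng.normal(size=32)   # corrupted pattern 0
x = query.copy()
for _ in range(200):
    x -= 0.1 * energy_grad(x)                     # descend to a minimum
```

After the descent, x should have snapped back to the stored pattern the query was corrupted from — the associative-memory behavior the quotes describe, here hand-built rather than meta-learned into network weights.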

submitted by /u/Quantum_Network

[Project] pgANN: Fast Approximate Nearest Neighbor (ANN) searches with a PostgreSQL database.

We are open-sourcing pgANN – an ANN (approximate nearest neighbor) approach with a PostgreSQL backend. The key differentiators between pgANN and the rest (FAISS, Annoy, NearPy, etc.) are:

  1. it enables “online” learning, i.e. it doesn’t require retraining on every CRUD operation, and
  2. it works with extremely large datasets, since the index isn’t held in RAM like the others

We use it internally to QA images and find that it consistently provides sub-second query performance with a few million rows of vectors on a 32 GB, 8-vCPU Ubuntu box, and it can reasonably be expected to scale up with normal pgsql scaling techniques. We invite the community to give this a try and share feedback.
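
For readers new to ANN, the general bucket-then-rank idea behind database-backed approximate search can be sketched in a few lines of NumPy (this illustrates the concept only; it is not pgANN's actual API or algorithm): rows are bucketed by a coarse key, and exact distances are computed only within the query's bucket. New rows simply append to a bucket, which is why no retraining is needed on inserts.

```python
import numpy as np

# Coarse key: sign pattern of 8 random projections (an LSH-style hash).
rng = np.random.default_rng(42)
data = rng.normal(size=(10_000, 16))       # stand-in for stored vectors
planes = rng.normal(size=(16, 8))

def bucket_key(v):
    return tuple((v @ planes) > 0)

# "Index": bucket -> list of row ids. A CRUD insert is just an append.
index = {}
for row_id, v in enumerate(data):
    index.setdefault(bucket_key(v), []).append(row_id)

def ann_search(q, k=5):
    # Rank only the candidates in the query's bucket by exact distance.
    cand = np.array(index.get(bucket_key(q), []), dtype=int)
    if cand.size == 0:
        return []
    d = np.linalg.norm(data[cand] - q, axis=1)
    return cand[np.argsort(d)[:k]].tolist()

hits = ann_search(data[123])               # query near a stored row
```

In pgANN the storage and candidate filtering live in PostgreSQL rather than an in-memory dict, which is what lets the dataset exceed RAM.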

submitted by /u/bluzkluz

[D] Possible privacy attack method on data shrunk by autoencoder?

Hello, this is my first paper preprint:

Privacy-preserving Federated Bayesian Learning of a Generative Model for Imbalanced Classification of Clinical Data

Though it is not focused solely on deep learning, or on federated learning on an edge device, I made a new framework for learning a global model in a horizontally distributed setting, especially in the clinical field.

AFAIK, it is the first attempt to apply Approximate Bayesian Computation (ABC) to federated learning.
(If not, please let me know!)

Without complicated perturbation techniques (e.g. differential privacy, homomorphic encryption, hashing, etc.), the proposed method can preserve privacy.

As I argue in the paper, as long as each local site does not reveal the trained weights and structure of its autoencoder, the shrunk data CANNOT be recovered by the central server. Recovery also remains impossible even if some local sites conspire against another site to disclose its information.

  • But this is my hypothesis and expectation, so I want to hear feedback or opinions on it.
    Is it really possible to leak data shrunk by a local autoencoder?

Plus, a global model can be learned at the central server with minimal information: merely a distance between the local data (perturbed via the autoencoder) and the generated data (of the same dimension as the perturbed data).
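
For intuition, here is a toy single-site sketch (my own simplification, not the paper's algorithm) of the ABC-style exchange: the server proposes parameters, the local site returns only a scalar distance between summaries of its data and simulated data, and the server keeps proposals whose distance falls below a threshold.

```python
import numpy as np

# Hypothetical names throughout; the raw local data never leaves the site —
# the server only ever observes one scalar distance per proposal.
rng = np.random.default_rng(0)
local_data = rng.normal(loc=2.0, scale=1.0, size=500)   # stays on-site

def local_distance(theta):
    """Run at the local site: compare a summary of local data against
    data simulated from the server's proposed parameter theta."""
    simulated = rng.normal(loc=theta, scale=1.0, size=500)
    return abs(local_data.mean() - simulated.mean())

# Server side: ABC rejection sampling over proposals from a flat prior.
accepted = [t for t in rng.uniform(-5, 5, size=2000)
            if local_distance(t) < 0.1]
estimate = float(np.mean(accepted))   # posterior mean, near the true 2.0
```

The federated version repeats this exchange across sites, and the paper's autoencoder plays the role of the summary function here; the point of the sketch is only that a distance is a very low-bandwidth, low-leakage message.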

Welcome and thank you in advance for any feedback and questions!

submitted by /u/vaseline555

[D] How do you keep up with latest advances in ML which are not directly relevant to your work?

In recent years I have heard of key advances made in NLP by transformer models and BERT. Then there was the paper on neural ODEs from the University of Toronto.

I have been wanting to dig deeper into the details and understand the key ideas behind these hot topics. They are not directly relevant to my work (which is focused mainly on images/videos), but I still feel it is important as an ML engineer to keep myself up-to-date with key developments in diverse areas. However, because they do not directly relate to my work, I find it hard to find the time to get a deeper understanding of these papers.

Has anyone found themselves in a similar situation? How do you deal with it?

submitted by /u/nivter

[D] Design methodology for ML engineering.

I am currently preparing for ML Engineer interviews and wanted to learn a design methodology to crack the design rounds. I looked for standard resources but couldn’t find any, so I decided to come up with one of my own, based on my experience and material I have read online. Below is an overview of the approach I have in mind. Please let me know what you think of it.

  1. Understand the use case.
    1. Figure out what can be solved deterministically.
    2. Figure out what needs to be solved using data + machine learning.
  2. Pose the problem (1.b) as a math problem.
    1. Come up with an objective that you want to optimize.
  3. Select the right dataset.
    1. What data is needed? How are labels obtained or derived?
    2. How do you deal with bias and skewed classes?
  4. Feature engineering.
    1. Think about what information you need to solve the problem optimally.
    2. What aspects of the data could you use to capture the information in 4.a?
    3. Transform the data into features in a form better suited to the model’s learning.
  5. Model selection.
    1. What model is best suited to the problem (2.a) at hand?
    2. Are there any off-the-shelf models (used directly or via transfer learning) that would help?
  6. Training.
    1. How will you train?
    2. How will you handle skewed classes and other problems with the dataset?
    3. Validation: what metrics and experiments will you use to validate the model’s learning?
  7. Productionizing.
    1. How will the solution be deployed?
    2. Performance monitoring, feedback loop/retraining.
    3. Application scaling and model maintenance.
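
The checklist above (roughly steps 3–6) can be mirrored in a short scikit-learn skeleton. Everything here, including the synthetic skewed dataset and the choice of F1 as the validation metric, is illustrative rather than prescriptive.

```python
# Skeleton mirroring the checklist: dataset -> features -> model ->
# training with class imbalance handled -> validation metric.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Step 3: a dataset with skewed classes (90% / 10%), synthetic here.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                          random_state=0)

# Steps 4-6: feature transform + model in one pipeline; class_weight
# addresses the imbalance called out in 6.b.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(class_weight="balanced")),
]).fit(X_tr, y_tr)

# Step 6.c: on skewed classes, F1 is more informative than accuracy.
score = f1_score(y_te, model.predict(X_te))
```

In an interview, each of these four code sections is a place to discuss trade-offs (labeling strategy, feature choices, model family, metric choice) rather than a fixed recipe.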

submitted by /u/kireeti_

[D] Does this sound like I have a reasonable chance at an interview/employee referral?

So I’m premed, but I’m also doing undergrad research in machine learning. Unfortunately, my university is very sparse in terms of machine learning research (there really isn’t any going on).

I found an MD (physician) who happens to have a research role at a Big N company. I checked his LinkedIn, and he’s only been there for 2 years. Because he’s a doctor, I would assume he’s a bit further removed from most standard administrative and workplace stuff than the average Big N employee (am I correct to assume this?). I guess you could consider him to be more of an “adjunct” there – though I really have no idea how it works for a doctor at a tech company.

Anyway, I emailed him asking: “If I do really well on the MCAT, and do [list of tasks to learn/practice ML], will you put me past the HR resume filter for an interview?”

This was his first response: “Sounds reasonable to me. Internships are pretty competitive for some of the teams at [Big N] and some but not all look for students in graduate studies. As long as you enjoy your studies and are passionate about what you are doing, you will be on the right path.”

Me: “It’s my understanding that someone at Big N reaches out to HR, and from there I’m put in touch with a recruiter who starts the interview process. If I pass the interview, HR then confirms with the initial employee/team that there’s an open spot for an intern in the group. If I meet the goals I’ve outlined, would you or someone else on your team be comfortable with reaching out to HR to start the process?”

Him: “i’m not sure if internal referrals change the process for intern selection, which generally has its own application process. feel free to reach out again as you get closer to applying and we can see”

So I’ve met one component of the “goals I’ve outlined” (did really well on the MCAT), and I’ll easily be able to accomplish the rest. Do his responses sound like he was just being polite/he very likely won’t be able to put me past the resume filter? I’m not sure if I should actually consider this an “in” or not hahaha

submitted by /u/alinkawayfrom


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.