
Category: Reddit MachineLearning

[D] ML Inference optimization, runtimes, compilers

I’m doing a study on inference latency. What are the different ways of optimizing a model for this? Let’s say the goal is to get your inference latency as low as possible. I’ve heard of ONNX Runtime (apparently used by Microsoft in production) and compilers such as Intel nGraph, TVM, Intel OpenVINO and so on. Are these kinds of tools used in production, or do most companies just use PyTorch and TF inference mode? If anyone here has experience from unique deployments, I’d love to hear about it!
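One common path is exporting the model to ONNX and serving it with ONNX Runtime. A minimal sketch, assuming a torchvision model as a stand-in and CPU-only execution (the model, input shape, and file name are placeholders):

```python
# Hypothetical example: export a PyTorch model to ONNX and run it with
# ONNX Runtime; model, input shape, and file name are placeholders.
import torch
import torchvision.models as models
import onnxruntime as ort

model = models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)

# Trace the model and serialize it as an ONNX graph.
torch.onnx.export(model, dummy, "resnet18.onnx",
                  input_names=["input"], output_names=["output"])

# Run the graph with ONNX Runtime instead of PyTorch eager mode.
session = ort.InferenceSession("resnet18.onnx",
                               providers=["CPUExecutionProvider"])
(output,) = session.run(None, {"input": dummy.numpy()})
print(output.shape)  # (1, 1000)
```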

submitted by /u/dilledalle

[D] Are small transformers better than small LSTMs?

Transformers are currently beating the state of the art on different NLP tasks.

Some examples are:

  • Machine translation: Transformer Big + BT
  • Named entity recognition: BERT large
  • Natural language inference: RoBERTa

Something I noticed is that in all of these papers, the models are massive, with maybe 20 layers and hundreds of millions of parameters.

Of course, using larger models is a general trend in NLP, but it raises the question of whether small transformers are any good. I recently had to train a sequence-to-sequence model from scratch, and I was unable to get better results with a transformer than with LSTMs.
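For a sense of scale, “small” versions of both families are easy to match in parameter count in PyTorch; the sizes below are illustrative placeholders, not the ones from the post:

```python
# Rough parameter-count comparison of a small transformer encoder and a
# small LSTM; all sizes here are made-up placeholders.
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, dim_feedforward=512),
    num_layers=2,
)
lstm = nn.LSTM(input_size=256, hidden_size=256,
               num_layers=2, bidirectional=True)

print(f"transformer: {n_params(transformer):,} parameters")
print(f"lstm:        {n_params(lstm):,} parameters")
```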

I am wondering if someone here has had similar experiences or knows of any papers on this topic.

submitted by /u/djridu

[D] Looking for suggestions for biomedical datasets similar to the Wisconsin Breast cancer database

I am looking for biomedical databases similar to the Wisconsin breast cancer database (available at https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(original)). This database has 9 features (each taking integer values from 1 to 10) and two classes: benign and malignant. The defining characteristic of this dataset is that higher feature values generally indicate a higher chance of abnormality (malignancy). I am looking for other biomedical datasets whose features have this property (not necessarily integer-valued, real-valued is fine too; preferably also with a low number of features, fewer than 30 or so).
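For anyone wanting to check that monotonicity property on a candidate dataset, one quick sketch, with the UCI file layout and the 2/4 class coding assumed from the dataset’s documentation:

```python
# Sketch: correlate each feature with malignancy; consistently positive
# values support "higher value => higher chance of malignancy".
import pandas as pd

url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "breast-cancer-wisconsin/breast-cancer-wisconsin.data")
cols = ["id"] + [f"f{i}" for i in range(1, 10)] + ["class"]
df = pd.read_csv(url, names=cols, na_values="?").dropna()

target = (df["class"] == 4).astype(int)  # 2 = benign, 4 = malignant
print(df[[f"f{i}" for i in range(1, 10)]].corrwith(target))
```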

submitted by /u/daffodils123

[D] What is the current state of the art in unsupervised document/information retrieval for NLP tasks?

Hello everybody,

Are there any good unsupervised methods for retrieving the top-k documents from a corpus based on a rather short query?

I did a bit of googling but couldn’t find anything that isn’t TF-IDF-based.

Maybe it would be possible to compute similarities between the documents and the query by utilising contextual embeddings (such as those from BERT), together with some sort of scoring function to rank them.
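A minimal sketch of that idea, assuming the Hugging Face transformers library, bert-base-uncased as the encoder, mean pooling, and cosine similarity as the scoring function (all of these are assumed choices, not recommendations):

```python
# Hypothetical dense retrieval: mean-pooled BERT embeddings + cosine scores.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # zero out padding
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean pooling

docs = ["first placeholder document", "second placeholder document"]
scores = F.cosine_similarity(embed(["short query"]), embed(docs))
top_k = scores.topk(k=min(2, len(docs)))
print([docs[i] for i in top_k.indices])
```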

Anyway, thank you in advance for your answers.

submitted by /u/Slowai

[D] Retrain your models, the Adam optimizer in PyTorch was fixed in version 1.3

I have noticed a small discrepancy between theory and the implementation of AdamW, and of Adam in general. The epsilon in the denominator of the Adam update, theta_t = theta_{t-1} - lr * m_hat_t / (sqrt(v_hat_t) + eps), should not be scaled by the bias correction (Algorithm 2, L9-12). Only the running averages of the gradient (m) and of the squared gradient (v) should be scaled by their corresponding bias corrections.

In the current implementation, the epsilon is effectively scaled by the square root of bias_correction2. I have plotted this ratio as a function of step, given beta2 = 0.999 and eps = 1e-8. In the early steps of optimization, the ratio slightly deviates from theory (denoted by the horizontal red line).
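Based on that description, a toy reproduction of the effect (this mimics the reported behaviour, not the actual PyTorch source): with v = 0, the denominator reduces to the eps term, which isolates the erroneous scaling.

```python
# Toy comparison of the pre-1.3 and fixed Adam denominators, following
# the description above; not the actual PyTorch implementation.
import math

beta2, eps = 0.999, 1e-8

def denom_pre_1_3(v, step):
    # eps is added before dividing by sqrt(bias_correction2), so it is
    # effectively inflated by 1/sqrt(bias_correction2) in early steps
    bc2 = 1 - beta2 ** step
    return (math.sqrt(v) + eps) / math.sqrt(bc2)

def denom_fixed(v, step):
    # bias-correct v first, then add eps, matching the theory
    bc2 = 1 - beta2 ** step
    return math.sqrt(v / bc2) + eps

for step in (1, 10, 100, 1000):
    ratio = denom_pre_1_3(0.0, step) / denom_fixed(0.0, step)
    print(f"step {step:4d}: old/fixed denominator ratio = {ratio:.2f}")
```

The ratio starts around 31.6 at step 1 and decays toward 1, consistent with the early-step deviation described above.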

See more here: https://github.com/pytorch/pytorch/pull/22628

submitted by /u/Deepblue129

[D] What’s a hypothesis that you would really like to see tested, but will never get around to testing yourself, and hope that someone else will?

My wishlist:

-I really want to see doc2vec but with contextualized vectors (BERT, ELMo, etc.) instead of word2vec. I think it’ll be a slam dunk. I don’t think I’ll ever get around to testing this. If anyone wants to do it, I’ll be happy to give some guidance if it’s needed.

-I would really like to see word2vec or GloVe tested with a context limited to other words within the same sentence as the target word, or perhaps with the context extended to any word in the same paragraph (a minimal sketch of the sentence-limited variant follows below). I was sort of planning on doing this, but lost some motivation with the rise of contextualized vectors. I think it would give some great insight though.
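On the second idea: gensim’s word2vec already treats each training example as an independent context, so feeding one sentence per example keeps the window from crossing sentence boundaries. A minimal sketch, assuming the gensim 4.x API and a toy pre-tokenized corpus (sentence splitting is left out):

```python
# Sketch: one sentence per training example limits word2vec's context
# window to words within the same sentence.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["transformers", "dominate", "many", "nlp", "benchmarks"],
]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)
print(model.wv.most_similar("cat", topn=3))
```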

submitted by /u/BatmantoshReturns


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.