Category: Reddit MachineLearning

[P] Implementation of VoVNet(CVPRW’19)

Hi,

I implemented VoVNet, an efficient backbone network presented at the CVPR Workshop on CEFRL.

My implementation provides ImageNet classification and object detection in Detectron.

Highlights

  • 2x faster than DenseNet on ImageNet classification
  • more accurate than ResNet, especially on small object detection

ImageNet classification : https://github.com/stigma0617/VoVNet.pytorch

Detectron : https://github.com/stigma0617/maskrcnn-benchmark-vovnet/tree/vovnet

submitted by /u/stigma0617

Angle prediction problem [P]

Hello people, I would like to develop an algorithm to predict angles for my project. I am collecting values from two gyro sensors, in the form of angles: one angle is the input, and the other is the one to be predicted; the second sensor is only used to check whether the predicted angle is right or wrong. What we have tried so far: we checked the correlation, which came out to be 91% as expected, and we found that the angle values can be derived from a lookup table, but these mappings change when the derivative (rate of change) of the input changes. This is my problem: I am not able to come up with an apt algorithm to solve it, and the solution also needs to not be memory hungry 😅, which is another constraint. We thought of fuzzy logic, but it is difficult to form proper membership functions. Please, people of Reddit, help me!
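Since the lookup mapping changes with the rate of change of the input, one low-memory option is to make the table two-dimensional, indexed by both the angle and its derivative, and interpolate between entries. A minimal sketch, assuming a small hand-built grid (the grid ranges and table values below are made up purely for illustration):

```python
import numpy as np

# hypothetical 2-D lookup grid: rows indexed by input angle,
# columns by its rate of change (both in degrees; values are toy data)
angles = np.linspace(-90.0, 90.0, 19)   # input angle grid
rates = np.linspace(-10.0, 10.0, 5)     # rate-of-change grid
# table[i, j] = output angle for (angles[i], rates[j])
table = angles[:, None] * 0.9 + rates[None, :] * 0.5

def predict(angle, rate):
    """Bilinear interpolation into the (angle, rate) table."""
    i = int(np.clip(np.searchsorted(angles, angle) - 1, 0, len(angles) - 2))
    j = int(np.clip(np.searchsorted(rates, rate) - 1, 0, len(rates) - 2))
    ta = (angle - angles[i]) / (angles[i + 1] - angles[i])
    tr = (rate - rates[j]) / (rates[j + 1] - rates[j])
    top = table[i, j] * (1 - tr) + table[i, j + 1] * tr
    bot = table[i + 1, j] * (1 - tr) + table[i + 1, j + 1] * tr
    return top * (1 - ta) + bot * ta
```

Memory cost is just the 19 x 5 table. Incidentally, a fuzzy controller with triangular membership functions over the same two inputs would behave much like this interpolation, so this may be a simpler route to the same effect.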

submitted by /u/Shinigaami7

[D] Text character visual similarity algo?

Hey, for an application I’m making I want to find the similarity between how two letters look.

For example, o and O have a high similarity but o and K do not.

Can someone guide me on what sort of techniques I would use (not necessarily ML, but this sounds like a DL task) to find a similarity between the look of two characters? It could be any character in any language, which is why I can't just do it manually.

My proposed algorithm is as follows:

1. Accept 2 letters as arguments.
2. Generate same-sized images with same-sized characters placed in them.
3. Compute the similarity between the two images somehow.
4. Return the result.

How would I do step 3? I'm currently looking into HOG features; any guidance or other information is appreciated.

EDIT: My idea right now is to extract key features using algorithms like SURF, ORB, etc. (finding the best one), and then compare the two images using cosine similarity between their feature vectors. Would this work?
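As a sketch of step 3 with the simplest possible similarity (cosine between raw pixel vectors), using hand-drawn toy bitmaps in place of real rendered glyphs; a real pipeline would rasterize each character with a font library such as Pillow:

```python
import numpy as np

# toy 7x7 binary bitmaps standing in for rendered glyphs
GLYPHS = {
    "O": np.array([[0,0,1,1,1,0,0],
                   [0,1,1,1,1,1,0],
                   [1,1,1,1,1,1,1],
                   [1,1,1,1,1,1,1],
                   [1,1,1,1,1,1,1],
                   [0,1,1,1,1,1,0],
                   [0,0,1,1,1,0,0]], dtype=float),
    "o": np.array([[0,0,0,0,0,0,0],
                   [0,0,0,0,0,0,0],
                   [0,0,1,1,1,0,0],
                   [0,0,1,1,1,0,0],
                   [0,0,1,1,1,0,0],
                   [0,0,0,0,0,0,0],
                   [0,0,0,0,0,0,0]], dtype=float),
    "K": np.array([[1,0,0,0,1,0,0],
                   [1,0,0,1,0,0,0],
                   [1,0,1,0,0,0,0],
                   [1,1,0,0,0,0,0],
                   [1,0,1,0,0,0,0],
                   [1,0,0,1,0,0,0],
                   [1,0,0,0,1,0,0]], dtype=float),
}

def similarity(a, b):
    # step 3: cosine similarity between the flattened pixel vectors
    va, vb = GLYPHS[a].ravel(), GLYPHS[b].ravel()
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

With these toy bitmaps, `similarity("o", "O")` comes out well above `similarity("o", "K")`, matching the intuition; descriptor-based features (ORB, HOG) would simply replace the raw pixel vectors in the same cosine comparison.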

submitted by /u/bigDATAbig

[D] Machine learning papers that look at the 2nd gradient / gradient of the gradient?

I feel that the 2nd gradient (the gradient of the gradient, i.e. the 2nd derivative) may be interesting to look at, but I can't seem to find the right search query, and I haven't found any papers on this.

But I feel that somebody must have looked into this. Has research on this never been performed, or is there a specific phrase to query?
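For what it's worth, common search phrases for this quantity are "Hessian", "second-order optimization", "Hessian-vector product", and "double backpropagation" (penalizing the gradient of the loss). A minimal, library-free sketch of the 2nd derivative via plain finite differences (the function here is illustrative, not from any paper):

```python
def second_derivative(f, x, h=1e-4):
    # central finite difference for f''(x):
    # (f(x + h) - 2 f(x) + f(x - h)) / h^2
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

# example: f(x) = x**3 has f''(x) = 6x, so f''(2) = 12
print(second_derivative(lambda x: x ** 3, 2.0))
```

In autodiff frameworks the same quantity is usually obtained by differentiating twice, e.g. taking the gradient of a gradient norm, which is what "double backprop" refers to.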

submitted by /u/BatmantoshReturns

[Discussion] Methods to use alternative form of reconstruction objective for VAE than pixelwise error

I am currently working on a project that involves improving the perceptual reconstruction quality of a VAE. Since the basic VAE objective uses pixelwise error for the reconstruction term, the generated images have a characteristic blurriness that makes them perceptually unreal. I did keyword searches on Google Scholar and ResearchGate, but was not able to find works that replace this pixelwise metric with something more appropriate for images.

The closest I got was the paper titled "Autoencoding beyond pixels using a learned similarity metric" (https://arxiv.org/pdf/1512.09300.pdf). This is a great piece of work, and I find the idea of combining a GAN discriminator with the VAE superb.

In my search, I also found the flow-based papers such as GLOW and RealNVP. But these use reversible operations, because of which the posterior probability can be easily calculated, since it is a deterministic function of the prior probability. I am actually looking for variational-inference generative models that simply use a different form of reconstruction objective for better perceptual results.
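As a concrete sketch of what a non-pixelwise reconstruction term looks like: keep the usual closed-form KL part of the ELBO, but measure reconstruction error in a feature space. The feature map below is plain average pooling, purely a stand-in; in the paper linked above it is an intermediate layer of the learned discriminator. All function names here are illustrative:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) per sample
    return 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0, axis=-1)

def features(img, k=2):
    # stand-in feature extractor phi: k x k average pooling; in practice
    # a fixed pretrained net or a GAN discriminator layer would go here
    h, w = img.shape
    img = img[: h - h % k, : w - w % k]
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def perceptual_vae_loss(x, x_hat, mu, log_var, beta=1.0):
    # reconstruction measured between feature maps, not raw pixels
    rec = np.mean((features(x) - features(x_hat)) ** 2)
    kl = np.mean(kl_to_standard_normal(mu, log_var))
    return rec + beta * kl
```

Only the `rec` term changes relative to the vanilla VAE objective; the variational machinery (encoder, reparameterization, KL) stays exactly as it is.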

I kindly request fellow redditors to please share any works you are aware of. It would be a great help. Thank you.

Best regards,

akanimax

submitted by /u/akanimax

[D] Rosalind Picard: Affective Computing | Artificial Intelligence Podcast

Rosalind Picard is a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of two companies, Affectiva and Empatica. Over two decades ago she launched the field of affective computing with her book of the same name. The book described the importance of emotion in artificial and natural intelligence, and the vital role emotional communication plays in relationships between people in general and in human-robot interaction. I really enjoyed talking with Roz about so many topics, including emotion, ethics, privacy, wearable computing, her recent work in epilepsy, and even love and meaning.

Video: https://www.youtube.com/watch?v=kq0VO1FqE6I

https://i.redd.it/rkc34eetwx431.png

Outline:

0:00 – Introduction

1:00 – Affective computing

2:45 – Clippy

5:03 – Diversity in computer science

5:55 – Emotion in AI

8:40 – Privacy

18:10 – Forming a connection with AI systems

30:31 – Emotion

39:05 – Measuring signals from the brain and the body

50:20 – Future AI systems

53:50 – Faith and science

56:35 – Meaning of life

submitted by /u/UltraMarathonMan

[N] Hindsight Experience Replay (HER) with SAC/DDPG/DQN support + Evolution Strategy bridge | Stable Baselines v2.6.0

Stable Baselines 2.6.0 was just released. It comes with a bunch of new features and improvements:

– a performance-tested Hindsight Experience Replay (HER) re-implementation with SAC, DDPG, and DQN support (only a custom DDPG was supported in the original OpenAI Baselines)

– you can now mix Reinforcement Learning (RL) and Evolution Strategies (ES) in a few lines of code, thanks to the new get/load parameters methods (see the example below with A2C + CMA-ES)

– a guide was added to the documentation on dealing with NaNs and Infs: https://stable-baselines.readthedocs.io/en/master/guide/checking_nan.html
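Not from the library itself, but to show the shape of the RL + ES mix: a generic, self-contained (1 + lambda)-style ES loop over a flat parameter vector, with a toy quadratic standing in for an episode-return evaluation. In Stable Baselines you would pull the parameters out with get_parameters() and push perturbed copies back with load_parameters(); the linked gist has the real A2C + CMA-ES version.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(params):
    # stand-in for a policy rollout: in the RL + ES mix this would load
    # `params` into the model and return the mean episode reward
    return -float(np.sum((params - 3.0) ** 2))

# simple (1 + lambda) evolution strategy over a flat parameter vector
params = np.zeros(5)
best = evaluate(params)
for _ in range(200):
    candidates = params + 0.5 * rng.standard_normal((10, 5))
    scores = [evaluate(c) for c in candidates]
    k = int(np.argmax(scores))
    if scores[k] > best:
        best, params = scores[k], candidates[k]
```

After the loop, `params` has climbed toward the optimum of the toy objective; swapping `evaluate` for rollouts of a pre-trained, then perturbed, agent gives the hybrid scheme.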

Gist (for an example of mixing ES and RL): https://gist.github.com/araffin/404ef9625a4a78d42396c5292e465337

Colab Notebook (for testing HER): https://colab.research.google.com/drive/1VDD0uLi8wjUXIqAdLKiK15XaEe0z2FOc#scrollTo=qPg7pyvK_Emi

Documentation: https://stable-baselines.readthedocs.io/en/master/modules/her.html

Full changelog: https://github.com/hill-a/stable-baselines/releases

submitted by /u/araffin2


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, vr, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.