
Category: Reddit MachineLearning

[D] Rohit Prasad: Amazon Alexa and Conversational AI


Rohit Prasad is the vice president and head scientist of Amazon Alexa and one of its original creators.

Video: https://www.youtube.com/watch?v=Ad89JYS-uZM
Audio: https://lexfridman.com/rohit-prasad

Outline:
(click on the timestamp to jump to that part of the video)

0:00 – Introduction
4:34 – Her
6:31 – Human-like aspects of smart assistants
8:39 – Test of intelligence
13:04 – Alexa prize
21:35 – What does it take to win the Alexa prize?
27:24 – Embodiment and the essence of Alexa
34:35 – Personality
36:23 – Personalization
38:49 – Alexa’s backstory from her perspective
40:35 – Trust in Human-AI relations
44:00 – Privacy
47:45 – Is Alexa listening?
53:51 – How Alexa started
54:51 – Solving far-field speech recognition and intent understanding
1:11:51 – Alexa main categories of skills
1:13:19 – Conversation intent modeling
1:17:47 – Alexa memory and long-term learning
1:22:50 – Making Alexa sound more natural
1:27:16 – Open problems for Alexa and conversational AI
1:29:26 – Emotion recognition from audio and video
1:30:53 – Deep learning and reasoning
1:36:26 – Future of Alexa
1:41:47 – The big picture of conversational AI

https://preview.redd.it/j0ut1ljfbm441.png?width=1280&format=png&auto=webp&s=e13fd282f47c36cb752e272ab60fca78c48cc787

submitted by /u/UltraMarathonMan

[P] I created artificial life simulation using neural networks and genetic algorithm.


https://preview.redd.it/s9132dyqll441.png?width=1280&format=png&auto=webp&s=b8012705b448f3519b05d42aab2c78ae12622a33

Those are my creatures; each has its own neural network, and they eat and reproduce. New generations mutate and behave differently. The entire map is 5000x5000 px and starts with 160 creatures and 300 pieces of food.

https://www.youtube.com/watch?v=VwoHyswI7S0
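The post doesn't include code, but the setup it describes (a per-creature neural network whose weights act as the genome, with mutation on reproduction) can be sketched roughly like this. The class name, network sizes, and mutation parameters below are all illustrative assumptions, not the author's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class Creature:
    """A creature whose genome is the flattened weight set of a tiny
    single-hidden-layer feed-forward network."""
    def __init__(self, genome=None, n_in=4, n_hidden=6, n_out=2):
        self.shape = (n_in, n_hidden, n_out)
        size = n_in * n_hidden + n_hidden * n_out
        self.genome = genome if genome is not None else rng.normal(0, 1, size)

    def act(self, senses):
        # Decode the genome into two weight matrices and run a forward pass.
        n_in, n_h, n_out = self.shape
        w1 = self.genome[: n_in * n_h].reshape(n_in, n_h)
        w2 = self.genome[n_in * n_h :].reshape(n_h, n_out)
        hidden = np.tanh(senses @ w1)
        return np.tanh(hidden @ w2)  # e.g. a (dx, dy) movement decision

    def reproduce(self, mutation_rate=0.05, mutation_scale=0.3):
        # Copy the genome and perturb a random subset of weights.
        child = self.genome.copy()
        mask = rng.random(child.size) < mutation_rate
        child[mask] += rng.normal(0, mutation_scale, mask.sum())
        return Creature(child)

parent = Creature()
child = parent.reproduce()
move = child.act(np.array([0.1, -0.2, 0.5, 0.0]))
```

Selection then happens implicitly: creatures that find food survive long enough to call `reproduce`, so behaviorally useful weight patterns accumulate across generations.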

submitted by /u/ArdArt

[D] Better or worse to include relational data in training/output?

I am solving for a list of 2D features [x,y].

Do you think it’s better to solve for just the outputs, or should I help guide it by reinforcing with relational data, e.g. the vector between two points:

param1 [x1,y1],
param2 [x2,y2],
Vector1 [(x1-x2), (y1-y2)],

=> [x1, y1, x2, y2, xVector1, yVector1]
vs
[x1, y1, x2, y2]

My intuition says it’s better to include the “hints” provided by the vectors. But the cautionary voice in my head says “don’t try to impose on the network; if it’s valuable, it will eventually discover it itself.” Am I being a naive beginner in trying to help it with additional clues, or am I being paranoid in thinking the training will just magically find the optimal solution on its own?

I’ve trained a model on just the points and it’s doing OK-ish with my current dataset, and I’m increasing the size of my dataset right now. But I’m also seeing some “obvious” mistakes, like points one pixel apart from one another. If it were solving for both, I feel the error rate would more accurately reflect whatever is causing the training to miss those edge cases.
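Concretely, the augmented target layout the post compares against can be built from the plain one with a small helper. The function name here is hypothetical; the arithmetic is just the `(x1 - x2, y1 - y2)` vector from the post appended to each target row:

```python
import numpy as np

def with_relational_targets(points):
    """Augment a batch of point-pair targets [x1, y1, x2, y2] with the
    relative vector (x1 - x2, y1 - y2) as two extra supervised outputs."""
    points = np.asarray(points, dtype=float)
    x1, y1, x2, y2 = points.T
    vec = np.stack([x1 - x2, y1 - y2], axis=1)
    return np.concatenate([points, vec], axis=1)

targets = with_relational_targets([[10, 20, 4, 8]])
# -> [[10., 20., 4., 8., 6., 12.]]
```

One trade-off worth noting: at inference time the predicted vector and the predicted endpoints are separate outputs and may disagree, so the extra outputs act as an auxiliary consistency signal during training rather than something you would use directly.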

submitted by /u/im_thatoneguy

[R] Deep Structured Implicit Functions

Abstract: The goal of this project is to learn a 3D shape representation that enables accurate surface reconstruction, compact storage, efficient computation, consistency for similar shapes, generalization across diverse shape categories, and inference from depth camera observations. Towards this end, we introduce Deep Structured Implicit Functions (DSIF), a 3D shape representation that decomposes space into a structured set of local deep implicit functions. We provide networks that infer the space decomposition and local deep implicit functions from a 3D mesh or posed depth image. During experiments, we find that it provides 10.3 points higher surface reconstruction accuracy (F-Score) than the state-of-the-art (OccNet), while requiring fewer than 1 percent of the network parameters. Experiments on posed depth image completion and generalization to unseen classes show 15.8 and 17.8 point improvements over the state-of-the-art, while producing a structured 3D representation for each input with consistency across diverse shape collections.

Arxiv: https://arxiv.org/abs/1912.06126

PDF: https://arxiv.org/pdf/1912.06126

Video: https://youtu.be/HCBtG0-EZ2s
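To make the core idea of the abstract concrete, here is a toy numpy reading of a structured implicit function: global shape value as a sum of local implicit functions, each gated by a Gaussian around its own center. This is only a schematic illustration of the decomposition, with made-up element functions; it is not the DSIF networks or the paper's code:

```python
import numpy as np

def structured_implicit(points, centers, scales, local_fns):
    """Evaluate a toy structured implicit function at query points:
    a sum of local implicit functions, each weighted by an isotropic
    Gaussian centered on its own region of space."""
    total = np.zeros(len(points))
    for c, s, f in zip(centers, scales, local_fns):
        diff = points - c
        weight = np.exp(-0.5 * np.sum(diff**2, axis=1) / s**2)
        total += weight * f(diff)
    return total  # the surface is the iso-level set of this field

# Two local "elements", each a simple sphere-like signed-distance function.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
scales = [0.5, 0.5]
local_fns = [lambda d: np.linalg.norm(d, axis=1) - 0.4] * 2
values = structured_implicit(np.array([[0.0, 0.0, 0.0]]), centers, scales, local_fns)
```

In the paper, the decomposition (centers, scales) and the local functions are inferred by networks from a mesh or posed depth image; here they are fixed by hand purely to show the evaluation structure.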

submitted by /u/aviously1

[P] Thinking about RL or a Genetic Algorithm for a “free will” experiment on the browser

I’ve recently read a short story by Ted Chiang where scientists invent a machine that can predict with 100% accuracy the action a person will take, even when taking into account that the person knows the machine is trying to outsmart them, thus proving that free will is an illusion.

I’m thinking about a simple browser page with two buttons and a text prompt (e.g. “You will press the red button”). The user is prompted to choose one button to press. The model would try to serve a layout (button size, position, text font, text size, colors) that would influence the choice, and when the user triggers a mouse event, the correct button would flash ahead of time.

Thinking about a genetic algorithm, the genome would be the x, y positions of all UI elements, their sizes, colors, etc.
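A minimal sketch of that genome and the GA loop might look like the following. Everything here is an assumption for illustration: the genome length, the layout attributes packed into it, and the fitness function (which in the real experiment would be how often the served layout led to a correctly predicted click):

```python
import random

# Hypothetical layout genome: x, y, width, height, hue for each of the
# two buttons, flattened into floats in [0, 1].
GENOME_LEN = 10

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1, sigma=0.05):
    # Perturb a random subset of genes with Gaussian noise, clamped to [0, 1].
    return [min(1.0, max(0.0, g + random.gauss(0, sigma)))
            if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(population, fitness, keep=0.2):
    """One generation: rank layouts by fitness (e.g. prediction accuracy
    of the "you will press the red button" guess), keep the top fraction,
    and refill the population with mutated crossovers of the survivors."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: max(2, int(keep * len(population)))]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(len(population) - len(survivors))]
    return survivors + children
```

Each genome would be decoded into CSS (positions, sizes, colors) before being served, and a genome's fitness accumulates over the users who saw that layout.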

I don’t think the idea is THAT pretentious, since there are lots of experiments showing that we subconsciously decide lots of simple things “ahead of time” [1]. And I don’t even think this has anything to do with “philosophical” free will.

I didn’t find anything similar on the internet using genetic algorithms. Do you guys think this could be a cool experiment?

[1] https://www.youtube.com/watch?v=lmI7NnMqwLQ&vl=en

submitted by /u/thiago_lira

[P] I Implemented The Improved StyleGAN (StyleGAN2) in Tensorflow 2.0 – Then Trained It Overnight.


Like the title says, I implemented all of the improvements to StyleGAN in Tensorflow 2.0, including:

  • Convolution Modulation/Demodulation
  • Non-growing skip block Generator and ResNet Discriminator
  • Lazy Regularization
  • Path Length Regularization
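Of these, convolution modulation/demodulation is the headline change in StyleGAN2. A plain-numpy sketch of what that step does to the kernel weights is below; this is just the weight arithmetic for intuition, with a made-up kernel and style vector, not the linked TensorFlow 2.0 implementation:

```python
import numpy as np

def modulate_demodulate(weights, style, eps=1e-8):
    """StyleGAN2-style weight modulation/demodulation (schematic).
    weights: (kh, kw, in_ch, out_ch) conv kernel
    style:   (in_ch,) per-input-channel scale predicted from the latent
    """
    # Modulate: scale each input channel of the kernel by the style.
    w = weights * style[None, None, :, None]
    # Demodulate: normalize each output filter back to unit L2 norm,
    # which replaces the explicit AdaIN normalization of StyleGAN v1.
    norm = np.sqrt(np.sum(w**2, axis=(0, 1, 2), keepdims=True) + eps)
    return w / norm

kernel = np.random.default_rng(1).normal(size=(3, 3, 8, 16))
style = np.random.default_rng(2).normal(size=8) + 1.0
w = modulate_demodulate(kernel, style)
# After demodulation, each of the 16 output filters has ~unit L2 norm.
```

Because the style is baked into the weights rather than applied to the activations, the characteristic "droplet" artifacts of the original StyleGAN disappear; see the linked repository for the real grouped-convolution version.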

Here are some mixed style samples:

Mixed Style Samples from StyleGAN2 on Landscapes, trained in Tensorflow 2.0

Here is the code:

https://github.com/manicman1999/StyleGAN2-Tensorflow-2.0

Enjoy!

submitted by /u/manicman1999

[R] Improving predictive CLV by using non-transactional data

Traditional approaches to predicting CLV (Customer Lifetime Value), like EP/NBD, continue to be effective many years after their initial formulation. In fact, attempts to improve on traditional methods using advanced ML techniques have been minimally effective. We explored combining an ML approach with the use of additional (non-transactional) data and have found early results to be quite promising. We’ve summarized our results in a blog post[1] and a more detailed paper[2].

  1. https://eng.amperity.com/posts/2019/12/predictive-clv
  2. https://amperity.com/assets/downloads/research/predictive-customer-lifetime-value.pdf

submitted by /u/derekslager