Category: Reddit MachineLearning

[P] I created an AI playing Spout and it’s sometimes better than me.

There is a game on Google Play called Spout:

https://play.google.com/store/apps/details?id=com.slipcorp.spout&hl=en_US

I created a copy of it in Java and then made a neural network that can steer an agent. Next, I programmed a genetic algorithm that handles the selection and replication job.
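
For anyone curious what that selection-and-replication step can look like, here is a minimal Java sketch; it assumes a flat weight array as the genome and a fitness score filled in by the simulator, and names like Agent and nextGeneration are my own, not necessarily the author's:

    import java.util.*;

    // Sketch of one genetic-algorithm generation step (assumed design):
    // keep the fittest agents, clone them, and mutate each child's weights.
    class Agent {
        final double[] weights; // the flat network weights are the genome
        double fitness;         // game score, filled in by the simulator
        Agent(double[] weights) { this.weights = weights; }
    }

    class Evolver {
        static final Random RNG = new Random();

        static List<Agent> nextGeneration(List<Agent> population, int eliteCount,
                                          double mutationRate, double mutationScale) {
            // Sort by fitness, best first, and keep the elites unchanged.
            population.sort(Comparator.comparingDouble((Agent a) -> a.fitness).reversed());
            List<Agent> elites = population.subList(0, eliteCount);
            List<Agent> next = new ArrayList<>(elites);

            // Refill the population with mutated clones of random elites.
            while (next.size() < population.size()) {
                Agent parent = elites.get(RNG.nextInt(elites.size()));
                double[] genes = parent.weights.clone();
                for (int i = 0; i < genes.length; i++) {
                    if (RNG.nextDouble() < mutationRate) {
                        genes[i] += RNG.nextGaussian() * mutationScale; // small nudge
                    }
                }
                next.add(new Agent(genes));
            }
            return next;
        }
    }

Copying the elites over unchanged (elitism) guarantees the best score never regresses between generations.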

This is the training view:

https://preview.redd.it/qbxyvz1fbm441.png?width=1015&format=png&auto=webp&s=271eaaac626e640310919f160841a323461f29ad

After training for a few hundred generations, I click “Save Best Agent’s Neural Network” and save that net to a file. Next, I load that net into another program in which I can play Spout while the AI plays at the same time.
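
Saving and reloading the best network can be as simple as dumping the flat weight array to a file. A minimal sketch, assuming that representation (the actual file format used here is unknown):

    import java.io.*;

    // Sketch: persist the best agent's weights as a count followed by raw doubles.
    class NetIO {
        static void save(double[] weights, File f) throws IOException {
            try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
                out.writeInt(weights.length);
                for (double w : weights) out.writeDouble(w);
            }
        }

        // The play-along program reads the weights back in the same order.
        static double[] load(File f) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
                double[] weights = new double[in.readInt()];
                for (int i = 0; i < weights.length; i++) weights[i] = in.readDouble();
                return weights;
            }
        }
    }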

Competition:

I am on the left; the AI plays on the right.

As you can see, it has the potential to beat my score, and it does beat me from time to time.

submitted by /u/ArdArt

[D] Rohit Prasad: Amazon Alexa and Conversational AI

Rohit Prasad is the vice president and head scientist of Amazon Alexa and one of its original creators.

Video: https://www.youtube.com/watch?v=Ad89JYS-uZM
Audio: https://lexfridman.com/rohit-prasad

Outline:
(click on the timestamp to jump to that part of the video)

0:00 – Introduction
4:34 – Her
6:31 – Human-like aspects of smart assistants
8:39 – Test of intelligence
13:04 – Alexa prize
21:35 – What does it take to win the Alexa prize?
27:24 – Embodiment and the essence of Alexa
34:35 – Personality
36:23 – Personalization
38:49 – Alexa’s backstory from her perspective
40:35 – Trust in Human-AI relations
44:00 – Privacy
47:45 – Is Alexa listening?
53:51 – How Alexa started
54:51 – Solving far-field speech recognition and intent understanding
1:11:51 – Alexa main categories of skills
1:13:19 – Conversation intent modeling
1:17:47 – Alexa memory and long-term learning
1:22:50 – Making Alexa sound more natural
1:27:16 – Open problems for Alexa and conversational AI
1:29:26 – Emotion recognition from audio and video
1:30:53 – Deep learning and reasoning
1:36:26 – Future of Alexa
1:41:47 – The big picture of conversational AI

https://preview.redd.it/j0ut1ljfbm441.png?width=1280&format=png&auto=webp&s=e13fd282f47c36cb752e272ab60fca78c48cc787

submitted by /u/UltraMarathonMan

[P] I created an artificial life simulation using neural networks and a genetic algorithm.

https://preview.redd.it/s9132dyqll441.png?width=1280&format=png&auto=webp&s=b8012705b448f3519b05d42aab2c78ae12622a33

These are my creatures; each one has its own neural network, and they eat and reproduce. New generations mutate and behave differently. The entire map is 5000x5000 px and starts with 160 creatures and 300 pieces of food.
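
As a rough illustration of how one creature update could work, here is a hedged Java sketch; the sensor inputs and the meaning of the outputs are assumptions, not the author's actual design:

    // Sketch of one simulation tick for a creature: sense the nearest food,
    // ask the creature's own network what to do, then pay the cost of moving.
    interface NeuralNet {
        double[] forward(double[] inputs);
    }

    class Creature {
        static final double MAX_TURN = 0.3, MAX_SPEED = 4.0, MOVE_COST = 0.001;

        double x, y, heading;
        double energy = 1.0;
        NeuralNet brain; // every creature carries its own network

        void tick(double foodX, double foodY) {
            double dx = foodX - x, dy = foodY - y;
            double dist = Math.max(1e-9, Math.hypot(dx, dy));
            // Inputs: unit direction to the nearest food, distance scaled
            // by the 5000 px map, and current energy.
            double[] out = brain.forward(new double[] {
                dx / dist, dy / dist, dist / 5000.0, energy });
            heading += out[0] * MAX_TURN;                   // out[0]: turn
            double speed = Math.max(0, out[1]) * MAX_SPEED; // out[1]: throttle
            x += Math.cos(heading) * speed;
            y += Math.sin(heading) * speed;
            energy -= MOVE_COST * speed; // moving costs energy; eating restores it
        }
    }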

https://www.youtube.com/watch?v=VwoHyswI7S0

submitted by /u/ArdArt

[D] Better or worse to include relational data in training/output?

I am solving for a list of 2D features [x,y].

Do you think it’s better to solve for just the outputs, or should I help guide it by reinforcing with relational data, e.g. the vector between two points (a code sketch of both encodings follows the listing):

param1 [x1,y1],
param2 [x2,y2],
Vector1 [(x1-x2), (y1-y2)],

=> [x1, y1, x2, y2, xVector1, yVector1]
vs
[x1, y1, x2, y2]
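
For concreteness, here is a minimal Java sketch of the two encodings (helper names are hypothetical):

    // The two candidate target encodings side by side.
    class Targets {
        // Augmented: append the vector between the points as a 'hint'.
        static double[] withVectorHint(double x1, double y1, double x2, double y2) {
            return new double[] { x1, y1, x2, y2, x1 - x2, y1 - y2 };
        }

        // Plain: just the raw point coordinates.
        static double[] plain(double x1, double y1, double x2, double y2) {
            return new double[] { x1, y1, x2, y2 };
        }
    }

Worth noting: the vector components are a deterministic function of the points, so the six-value target adds no new information; it effectively adds an auxiliary consistency term to the loss, which is one way to frame whether the “hint” helps.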

My intuition says it’s better to include the ‘hints’ provided by the vectors. But the cautionary voice in my head says, “Don’t try to impose on the network; if it’s valuable, it will eventually discover it itself.” Am I being a naive beginner in trying to help it with additional clues, or am I being paranoid in thinking the training will just magically find the optimal solution on its own?

I’ve trained a model on just the points and it’s doing OK-ish with my current dataset, which I’m now increasing in size. But I’m also seeing some “obvious” mistakes, like points one pixel apart from one another. If the model were solving for both, I feel the error rate would more accurately reflect whatever is causing the training to miss those edge cases.

submitted by /u/im_thatoneguy

[R] Deep Structured Implicit Functions

Abstract: The goal of this project is to learn a 3D shape representation that enables accurate surface reconstruction, compact storage, efficient computation, consistency for similar shapes, generalization across diverse shape categories, and inference from depth camera observations. Towards this end, we introduce Deep Structured Implicit Functions (DSIF), a 3D shape representation that decomposes space into a structured set of local deep implicit functions. We provide networks that infer the space decomposition and local deep implicit functions from a 3D mesh or posed depth image. During experiments, we find that it provides 10.3 points higher surface reconstruction accuracy (F-Score) than the state-of-the-art (OccNet), while requiring fewer than 1 percent of the network parameters. Experiments on posed depth image completion and generalization to unseen classes show 15.8 and 17.8 point improvements over the state-of-the-art, while producing a structured 3D representation for each input with consistency across diverse shape collections.

Arxiv: https://arxiv.org/abs/1912.06126

PDF: https://arxiv.org/pdf/1912.06126

Video: https://youtu.be/HCBtG0-EZ2s

submitted by /u/aviously1

[P] Thinking about RL or a Genetic Algorithm for a “free will” experiment on the browser

I’ve recently read a short story by Ted Chiang in which scientists invent a machine that can predict with 100% accuracy the action a person will take, even accounting for the fact that the person knows the machine is trying to outsmart them, thus proving that free will is an illusion.

I’m thinking about a simple browser page with two buttons and a text prompt (e.g. “You will press the red button”). The user would be prompted to choose one button to press. The model would try to serve a layout (button size, position, text font, text size, colors) that influences the choice, and when the user triggers a mouse event, the correct button would flash ahead of time.

For a genetic algorithm, the genome would be the x,y positions of all UI elements, their sizes, colors, etc.
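
A minimal Java sketch of what such a genome could look like; every field name here is an assumption, just to make the encoding concrete:

    // One candidate layout as a flat genome: everything the page can vary,
    // flattened into plain numbers so it can be mutated and crossed over.
    class LayoutGenome {
        double redX, redY, redW, redH;      // red button rectangle
        double blueX, blueY, blueW, blueH;  // blue button rectangle
        double textX, textY, textSize;      // prompt position and font size
        int redColor, blueColor, textColor; // packed 0xRRGGBB colors
        double fitness; // fraction of visitors who pressed the predicted button
    }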

I don’t think the idea is THAT pretentious, since there are lots of experiments showing that we subconsciously decide lots of simple things “ahead of time” [1]. And I don’t even think this has anything to do with “philosophical” free will.

I didn’t find anything similar on the internet using genetic algorithms. Do you guys think this could be a cool experiment?

[1] https://www.youtube.com/watch?v=lmI7NnMqwLQ&vl=en

submitted by /u/thiago_lira

Plug yourself into AI and don't miss a beat

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.