There is a game on Google Play called Spout:
I created a copy of it in Java and then built a neural network that can steer an agent. Next, I programmed a genetic algorithm that handles the selection and replication job.
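The selection-and-replication step can be sketched roughly like this: tournament selection over flat weight vectors, then cloning with Gaussian mutation. This is a minimal sketch, not the original project's code; the class name, population layout, and the MUTATION_RATE / MUTATION_SIZE constants are all illustrative assumptions.

```java
import java.util.Arrays;
import java.util.Random;

// Minimal sketch of a genetic algorithm's "choosing and replicating" step.
// Each individual is a flat array of neural-network weights.
public class GeneticStep {
    static final Random RNG = new Random(42);
    static final double MUTATION_RATE = 0.05;  // chance each weight is perturbed
    static final double MUTATION_SIZE = 0.3;   // std-dev of each perturbation

    // Pick the fitter of two randomly chosen individuals (tournament of size 2).
    static int tournament(double[] fitness) {
        int a = RNG.nextInt(fitness.length);
        int b = RNG.nextInt(fitness.length);
        return fitness[a] >= fitness[b] ? a : b;
    }

    // Copy a parent's weights and randomly perturb a few of them.
    static double[] replicate(double[] parent) {
        double[] child = Arrays.copyOf(parent, parent.length);
        for (int i = 0; i < child.length; i++) {
            if (RNG.nextDouble() < MUTATION_RATE) {
                child[i] += RNG.nextGaussian() * MUTATION_SIZE;
            }
        }
        return child;
    }

    // Build the next generation: every slot is a mutated copy of a tournament winner.
    static double[][] nextGeneration(double[][] population, double[] fitness) {
        double[][] next = new double[population.length][];
        for (int i = 0; i < next.length; i++) {
            next[i] = replicate(population[tournament(fitness)]);
        }
        return next;
    }
}
```

Fitness here would be whatever the game rewards, e.g. distance climbed before crashing.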
This is the training view:
After training for a few hundred generations, I click “Save Best Agent’s Neural Network” and save that net to a file. Then I load that net into another program in which I can play Spout while the AI plays at the same time.
As you can see, it has the potential to beat my score, and it does so from time to time.
Rohit Prasad is the vice president and head scientist of Amazon Alexa and one of its original creators.
These are my creatures; each has its own neural network. They eat and reproduce, and new generations mutate and behave differently. The entire map is 5000x5000 px and starts with 160 creatures and 300 pieces of food.
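The kind of tiny per-creature network involved could look like the sketch below: one hidden layer with tanh activations, mapping sensor readings to bounded steering outputs. The layer sizes, the weight initialization, and the idea that inputs are food-distance sensors are all my assumptions, not the original code.

```java
import java.util.Random;

// Sketch of a tiny feed-forward network a creature could carry:
// sensors in, steering signals out, one tanh hidden layer.
public class TinyNet {
    final double[][] w1; // input -> hidden weights
    final double[][] w2; // hidden -> output weights

    TinyNet(int inputs, int hidden, int outputs, long seed) {
        Random rng = new Random(seed);
        w1 = new double[hidden][inputs];
        w2 = new double[outputs][hidden];
        for (double[] row : w1)
            for (int i = 0; i < row.length; i++) row[i] = rng.nextGaussian() * 0.5;
        for (double[] row : w2)
            for (int i = 0; i < row.length; i++) row[i] = rng.nextGaussian() * 0.5;
    }

    // Forward pass: e.g. distances/angles to nearby food in, movement out.
    double[] forward(double[] sensors) {
        double[] hidden = new double[w1.length];
        for (int h = 0; h < hidden.length; h++) {
            double sum = 0;
            for (int i = 0; i < sensors.length; i++) sum += w1[h][i] * sensors[i];
            hidden[h] = Math.tanh(sum);
        }
        double[] out = new double[w2.length];
        for (int o = 0; o < out.length; o++) {
            double sum = 0;
            for (int h = 0; h < hidden.length; h++) sum += w2[o][h] * hidden[h];
            out[o] = Math.tanh(sum); // bounded steering outputs in (-1, 1)
        }
        return out;
    }
}
```

Reproduction would then copy a parent's w1/w2 with small random perturbations, which is what makes offspring behave differently.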
I am solving for a list of 2D features [x,y].
Do you think it’s better to solve for just the raw outputs, or should I help guide the network by also reinforcing with relational data, e.g. the vector between 2 points:
Vector1 = [(x1-x2), (y1-y2)]
so that instead of the plain targets
[x1, y1, x2, y2]
the network solves for
[x1, y1, x2, y2, xVector1, yVector1]
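Concretely, the augmentation being asked about is just appending the pairwise difference to the raw coordinates; a sketch (class and method names are mine):

```java
// Build the augmented target [x1, y1, x2, y2, xVector1, yVector1]
// from two raw points, where Vector1 = (x1 - x2, y1 - y2).
public class PairFeatures {
    static double[] augment(double x1, double y1, double x2, double y2) {
        return new double[] { x1, y1, x2, y2, x1 - x2, y1 - y2 };
    }
}
```

With targets like this, the loss penalizes errors in the relative offsets as well as in the absolute positions, which would directly punish predictions like two points landing 1 pixel apart.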
My intuition says it’s better to include the ‘hints’ provided by the vectors. But the cautionary voice in my head says, “don’t try to impose on the network; if it’s valuable, it will eventually discover it itself.” Am I being a naive beginner in trying to help it with additional clues, or am I being paranoid in thinking the training will just magically find the optimal solution on its own?
I’ve trained a model on just the points and it’s doing OK-ish with my current dataset, which I’m now enlarging. But I’m also seeing some “obvious” mistakes, like points 1 pixel apart from one another. If the model were solving for both, I feel the error rate would more accurately reflect whatever is causing the training to miss those edge cases.
Abstract: The goal of this project is to learn a 3D shape representation that enables accurate surface reconstruction, compact storage, efficient computation, consistency for similar shapes, generalization across diverse shape categories, and inference from depth camera observations. Towards this end, we introduce Deep Structured Implicit Functions (DSIF), a 3D shape representation that decomposes space into a structured set of local deep implicit functions. We provide networks that infer the space decomposition and local deep implicit functions from a 3D mesh or posed depth image. During experiments, we find that it provides 10.3 points higher surface reconstruction accuracy (F-Score) than the state-of-the-art (OccNet), while requiring fewer than 1 percent of the network parameters. Experiments on posed depth image completion and generalization to unseen classes show 15.8 and 17.8 point improvements over the state-of-the-art, while producing a structured 3D representation for each input with consistency across diverse shape collections.
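The decomposition idea can be caricatured as follows: the global implicit value at a query point is a sum of local functions, each active only near its own center. In DSIF each local function is a small neural network; in this sketch each one is replaced by a scaled isotropic Gaussian, and every constant is illustrative, so this shows only the structure, not the paper's model.

```java
// Caricature of a structured implicit function: a sum of local elements,
// each a scaled Gaussian around its own center (the paper uses small deep
// networks per element instead of Gaussians).
public class LocalImplicit {
    final double[][] centers; // one 3D center per local element
    final double[] scales;    // per-element amplitude
    final double sigma;       // spatial extent of each element

    LocalImplicit(double[][] centers, double[] scales, double sigma) {
        this.centers = centers;
        this.scales = scales;
        this.sigma = sigma;
    }

    // Evaluate the global implicit function at point p = (x, y, z).
    // A surface would be extracted as a level set of this value.
    double value(double[] p) {
        double sum = 0;
        for (int k = 0; k < centers.length; k++) {
            double d2 = 0;
            for (int i = 0; i < 3; i++) {
                double d = p[i] - centers[k][i];
                d2 += d * d;
            }
            sum += scales[k] * Math.exp(-d2 / (2 * sigma * sigma));
        }
        return sum;
    }
}
```

The compactness claim follows from this structure: only the element parameters (centers, scales, small per-element networks) need to be stored, rather than one large global network.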
I’ve recently read a short story by Ted Chiang where scientists invent a machine that can predict with 100% accuracy the action a person will take, even when taking into account that the person knows the machine is trying to outsmart them, thus proving that free will is an illusion.
I’m thinking about a simple browser page with 2 buttons and a text label (e.g. “You will press the red button”). The user will be prompted to choose one button to press. The model would try to serve a layout (button size, position, text font, text size, colors) that influences the choice, and when the user triggers a mouse event, the correct button would flash ahead of time.
Thinking about a genetic algorithm, the genome would be the x, y positions of all UI elements, their sizes, colors, etc.
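Such a genome could be encoded as one flat array, for example x, y, width, height and an RGB color per button, with a mutation operator over it. This is a sketch of one possible encoding, not a worked-out design; the seven-gene layout and all ranges are assumptions.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of a UI-layout genome: a flat array of normalized values,
// [x, y, w, h, r, g, b] per button, plus Gaussian mutation.
public class LayoutGenome {
    static final int GENES_PER_BUTTON = 7; // x, y, w, h, r, g, b
    static final Random RNG = new Random(1);

    static double[] randomGenome(int buttons) {
        double[] g = new double[buttons * GENES_PER_BUTTON];
        for (int i = 0; i < g.length; i++) g[i] = RNG.nextDouble(); // all genes in [0, 1)
        return g;
    }

    // Mutate: add small Gaussian noise, clamped back into [0, 1].
    static double[] mutate(double[] genome, double rate, double size) {
        double[] child = Arrays.copyOf(genome, genome.length);
        for (int i = 0; i < child.length; i++) {
            if (RNG.nextDouble() < rate) {
                child[i] = Math.min(1.0, Math.max(0.0, child[i] + RNG.nextGaussian() * size));
            }
        }
        return child;
    }
}
```

Fitness would be the fraction of visitors who pressed the button the layout was trying to push them toward, which makes every page view one evaluation.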
I don’t think the idea is THAT pretentious, since there are lots of experiments showing that we subconsciously decide lots of simple things “ahead of time”. And I don’t even think this has anything to do with the “philosophical” free will.
I didn’t find anything similar on the internet using genetic algorithms. Do you guys think this could be a cool experiment?
Is it possible to keep an image unchanged while positioning the camera (possibly by attaching it to a robo-hand) so that the image is misclassified by a trained classifier? The orientation and position of the camera that cause misclassification could be learned by gradient descent.
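One complication: the physical capture step is not differentiable, so true gradient descent through the camera is hard; a black-box alternative is hill climbing on the 6-DoF pose, i.e. perturb the pose, re-photograph, and keep the perturbation if the classifier's confidence in the true class drops. The sketch below assumes the score function stands in for "capture an image at this pose, run the classifier, return the true-class probability"; everything else is illustrative.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.function.Function;

// Black-box search over camera pose (x, y, z, roll, pitch, yaw):
// minimize the classifier's confidence in the true class by keeping
// only those random pose perturbations that lower it.
public class PoseAttack {
    static final Random RNG = new Random(0);

    static double[] hillClimb(double[] pose, Function<double[], Double> trueClassProb,
                              double stepSize, int iterations) {
        double[] best = Arrays.copyOf(pose, pose.length);
        double bestScore = trueClassProb.apply(best);
        for (int it = 0; it < iterations; it++) {
            double[] candidate = Arrays.copyOf(best, best.length);
            for (int i = 0; i < candidate.length; i++) {
                candidate[i] += RNG.nextGaussian() * stepSize;
            }
            double score = trueClassProb.apply(candidate);
            if (score < bestScore) { // lower true-class confidence = more adversarial
                bestScore = score;
                best = candidate;
            }
        }
        return best;
    }
}
```

If a differentiable renderer approximates the camera well enough, the same objective could instead be optimized by actual gradient descent in simulation and then transferred to the physical setup.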