[P] [R] Introducing SafeLife: Safety Benchmarks for Reinforcement Learning Based on the Game of Life

The Partnership on AI (PAI) is today releasing SafeLife, a novel reinforcement learning environment that tests the safety of reinforcement learning agents and the algorithms that train them. SafeLife version 1.0 focuses on the problem of avoiding negative side effects: how can we train an RL agent to do what we want it to do, but nothing more? The environment has simple rules but rich, complex dynamics, and it generally gives the agent a great deal of power to make sweeping changes on its way to completing its goals. A safe agent will change only what is necessary; an unsafe agent will often make a big mess of things and not know how to clean it up.
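
SafeLife's boards evolve according to rules based on Conway's Game of Life: a live cell survives if it has two or three live neighbors, and a dead cell comes alive if it has exactly three. As a rough, self-contained illustration of both the dynamics and the side-effect idea, here is a minimal sketch; it is not SafeLife's actual implementation, and the function names `life_step` and `side_effect_score` are hypothetical. It steps a Life board forward and scores an agent's modification by rolling out the modified and unmodified boards and counting the cells where their futures diverge.

```python
import numpy as np

def life_step(board: np.ndarray) -> np.ndarray:
    """One Conway Game of Life step on a toroidal (wrap-around) 0/1 grid."""
    # Count each cell's 8 neighbors by summing shifted copies of the board.
    neighbors = sum(
        np.roll(np.roll(board, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return (((board == 1) & ((neighbors == 2) | (neighbors == 3)))
            | ((board == 0) & (neighbors == 3))).astype(board.dtype)

def side_effect_score(board: np.ndarray, modified: np.ndarray,
                      horizon: int = 20) -> int:
    """Naive side-effect measure: roll both boards forward and count the
    cells where the resulting futures differ."""
    a, b = board.copy(), modified.copy()
    for _ in range(horizon):
        a, b = life_step(a), life_step(b)
    return int((a != b).sum())
```

The point of the sketch is just that in a Life-like world a single toggled cell can cascade over time, so "change only what is necessary" is a meaningfully hard constraint; SafeLife's actual side-effect metrics are covered in the paper linked below.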

SafeLife is part of a broader PAI initiative to develop benchmarks for safety, fairness, and other ethical objectives for machine learning systems. Since so much of machine learning is driven, shaped, and measured by benchmarks (and the datasets and environments they are based on), we believe it is essential that those benchmarks incorporate safety and ethics goals widely, and we're working to make that happen.

If you want to try SafeLife for yourself, you can download the code, play some of the puzzle levels, or get involved in the open-source project. If you'd like to see how to create an AI that plays SafeLife, additional details about the environment and our initial agent training can be found in our paper.
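
For orientation, a basic interaction loop might look like the following. This is only a sketch assuming SafeLife exposes a Gym-style environment; the import path and class name (`safelife.safelife_env.SafeLifeEnv`) are assumptions, so check the project's README for the actual entry points.

```python
# Sketch only: assumes SafeLife provides a Gym-style environment.
# The import path and class name below are assumptions; consult the
# SafeLife repository for the real API.
from safelife.safelife_env import SafeLifeEnv  # hypothetical path

env = SafeLifeEnv()                    # default level generation (assumed)
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # random agent as a placeholder policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```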

submitted by /u/pde