
Category: Uncategorised

TORONTO AI MEETUP – TRADEREV

TORONTO AI MEETUP – ML solution architecture at TradeRev

150 John St, Toronto

Topic
——————–
Amit Jain, TradeRev’s R&D Lead, will give background on the company’s product and solution while discussing its microservice architecture, serverless scalable ML solutions, and the continuous integration and deployment practices that keep TradeRev on the cutting edge of innovation.

Agenda:
——————–
6:00p – arrive & socialize
6:30p – talk begins
7:00p – Q&A
7:20p – AI news
7:25p – 20 second open mic rounds
7:30p – wrap up

About TradeRev
——————–
TradeRev is an auto tech company that changed the way car dealers buy and sell vehicles through its revolutionary app, and it is constantly pushing the boundaries of its technology.

Discord & Slack
——————–
Join us on Discord: https://discord.gg/z3RMPUP
Join us on Slack.


Location: TradeRev, 150 John Street, Toronto (43.6505, -79.3914)

MEETUP: MACHINE LEARNING – CONTEXT BASED PERSONALIZATION

Machine Learning – Context Based Personalization

Thursday, Aug 22, 2019, 6:00 PM

220 Adelaide St W, Toronto, ON

10 manifolds Attending

Join us for an evening at Paytm Labs to learn about Context Based Personalization. Speaker: Charumitra Pujari, VP of ML at Paytm Labs. Discord: https://discord.gg/z3RMPUP. About Paytm Labs: Paytm Labs Inc., located in Toronto, began as a research and development division of Paytm. Since 2014, the company has applied big data, artificial intelligence …

Check out this Meetup →

CVPR 2019 Challenges on Domain Adaptation in Autonomous Driving

We all dream of a future in which autonomous cars can drive us to every corner
of the world. Numerous researchers and companies are working day and night to
chase this dream by overcoming scientific and technological barriers. One of the
greatest challenges we still face is developing machine learning models that can
be trained in a local environment and also perform well in new, unseen
situations. For example, self-driving cars may utilize perception models to
recognize drivable areas from images. Companies in Silicon Valley can build and
perfect such a model using large local datasets from the Bay Area for training.
However, if the same model were deployed in a snowy area such as Boston, it
would likely perform miserably, because it has never seen snow before. Boston,
during winter, and Silicon Valley, during any time of the year, can be labeled
as separate domains for perception models, since they present clear differences
in climate and challenges in perception. In other cases, domains may be much
closer in nature, such as a city street and a nearby highway. The process of
transferring knowledge and models between different domains in machine learning
is called domain adaptation.
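To make the idea concrete, here is a minimal sketch of one classic approach to domain adaptation, domain-adversarial training (DANN): a shared feature extractor is trained so that a task head can label source-domain images while a domain classifier cannot tell source features from target features. This is an illustrative technique under assumed module names and sizes, not the method used by any particular challenge entry.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Illustrative networks: feature extractor, task head (labeled source data only),
# and a domain classifier that the features are trained to fool.
features = nn.Sequential(
    nn.Conv2d(3, 16, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten()
)
task_head = nn.Linear(16, 10)    # e.g. 10 object categories
domain_head = nn.Linear(16, 2)   # source domain vs. target domain

def dann_loss(x_src, y_src, x_tgt, lam=0.1):
    ce = nn.CrossEntropyLoss()
    f_src, f_tgt = features(x_src), features(x_tgt)
    task_loss = ce(task_head(f_src), y_src)  # supervision exists only in the source domain
    # Gradient reversal pushes the features toward domain invariance.
    f_all = torch.cat([GradReverse.apply(f_src, lam), GradReverse.apply(f_tgt, lam)])
    d_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    return task_loss + ce(domain_head(f_all), d_labels)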

A large number of papers on domain adaptation of perception models have appeared
in top publishing venues for machine learning and computer vision. However, most
of these works focus on image classification and semantic segmentation. Hardly
any attention has been paid to instance-level tasks, such as object detection
and tracking, even though localization of nearby objects is arguably more
important for autonomous driving. To foster the study of domain adaptation of
perception models, Berkeley DeepDrive and Didi Chuxing are co-hosting two
competitions in the CVPR 2019 Workshop on Autonomous Driving. The challenges
will focus on domain adaptation of object detection and tracking, based on the
BDD100K dataset from Berkeley DeepDrive and the D2-City dataset from Didi
Chuxing. BDD100K covers US scenes, while D2-City was collected on China’s
streets. The competitions ask participants to transfer object detectors from
BDD100K to D2-City and object trackers from D2-City to BDD100K. More
information about the challenges can be found on the BDD100K and D2-City
websites.

Following our introduction of the BDD100K dataset, we have been busy working to
provide more temporal annotations. Above is an example of object tracking
annotation, created by our open-source annotation platform Scalabel. Some of the tracking labels are
used in the domain adaptation challenge for object tracking. More data will be
released this summer. Of course, we also have object tracking at night.

Announcing the BAIR Open Research Commons

The University of California Berkeley Artificial Intelligence Research (BAIR)
Lab is pleased to announce the BAIR Open Research Commons, a new industrial
affiliate program launched to accelerate cutting-edge AI research. AI research
is advancing rapidly in both university and corporate research settings, with
many collaborations already underway, driven by individual
researcher-to-researcher relationships. The BAIR Commons is designed to enhance
and streamline such collaborative cutting-edge research by students, faculty,
and corporate research scholars.

The Commons agreement has been framed with the goal of promoting open research
in AI: all on-campus effort, data, and results in the Commons program will be
non-exclusive with open publication and open-source code release expected.
Fostering an environment for excellence in graduate student research is the
primary motivation of the new program: Berkeley students will lead the design of
projects in the Commons, and the program of research must be approved by their
home departments before a project commences. Students are expected to benefit
from collaboration with leading researchers in industrial research labs, as well
as the availability of partner resources useful to investigate certain open
questions in state-of-the-art AI research. The University will benefit from
membership fees paid by partners to participate in the program. The Commons
agreement provides for collaborative joint projects between the partners and
Berkeley, with intellectual property shared jointly and equally by the parties.

The agreement also provides for joint research “lablets”, which will be embedded
collaborative open research spaces inside BAIR’s 27,000 sq. ft. research
facility opening this summer in the Berkeley Way West facility on the Berkeley
campus. More than a dozen faculty and 120 students will be assigned space in
the new lab, with an equal number of visiting positions allocated for
researchers from other BAIR labs and for visiting industrial partners.

Initial alliance participants include Amazon, Facebook, Google, Samsung, and
Wave Computing. Funding for over twenty joint projects has been committed in the
initial launch of the program, which will support both BAIR facilities and
research efforts. Over 30 faculty and 200 graduate students and postdocs at
Berkeley are affiliated with BAIR. For more information about BAIR or the
Commons program, please contact bair-admin@berkeley.edu.



BAIR will occupy the top floor of Berkeley Way West.

Manipulation By Feel

Guiding our fingers while typing, enabling us to nimbly strike a matchstick, and
inserting a key in a keyhole all rely on our sense of touch. It has been shown
that the sense of touch is very important for dexterous manipulation in humans.
Similarly, for many robotic manipulation tasks, vision alone may not be
sufficient: it is often difficult to resolve subtle details such as the exact
position of an edge, shear forces, or surface textures at points of contact,
and robotic arms and fingers can block the line of sight between a camera and
its quarry. Augmenting robots with this crucial sense, however, remains a
challenging task.

Our goal is to provide a framework for learning how to perform tactile servoing,
which means precisely relocating an object based on tactile information. To
provide our robot with tactile feedback, we utilize a custom-built tactile
sensor based on similar principles as the GelSight sensor developed at MIT. The
sensor is composed of a deformable, elastomer-based gel, backlit by three
colored LEDs, and provides high-resolution RGB images of contact at the gel
surface. Compared to other sensors, this tactile sensor naturally provides
geometric information in the form of rich visual data from which attributes
such as force can be inferred. Previous work has leveraged this kind of tactile
sensor for tasks such as learning how to grasp, improving success rates when
grasping a variety of objects.
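As a rough sketch of how such readings might be consumed downstream, the snippet below embeds the sensor’s RGB contact images with a small convolutional encoder whose output could feed a servoing controller. The architecture, input size, and embedding width are illustrative assumptions, not the authors’ model.

import torch
import torch.nn as nn

# Toy encoder: raw RGB gel image in, compact contact embedding out.
tactile_encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 32),
)

frame = torch.randn(1, 3, 64, 64)   # stand-in for one tactile frame
embedding = tactile_encoder(frame)  # shape (1, 32); input to the controller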

Continue reading

Assessing Generalization in Deep Reinforcement Learning



TL;DR

We present a benchmark for studying generalization in deep reinforcement
learning (RL). Systematic empirical evaluation shows that vanilla deep RL
algorithms generalize better than specialized deep RL algorithms designed
specifically for generalization. In other words, simply training on varied
environments is so far the most effective strategy for generalization. The code
can be found at https://github.com/sunblaze-ucb/rl-generalization and the
full paper is at https://arxiv.org/abs/1810.12282.
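As a concrete illustration of "simply training on varied environments," the sketch below randomizes an environment’s physical parameters at the start of every episode so the policy cannot overfit to a single dynamics setting. The CartPole attribute names and the 4-tuple step() return are assumptions about an older gym release, and the parameter ranges are invented.

import random
import gym

env = gym.make("CartPole-v1")

def reset_with_random_dynamics(env):
    """Reset, then perturb the pole physics so every episode differs."""
    obs = env.reset()
    core = env.unwrapped
    core.length = random.uniform(0.25, 0.75)    # half-pole length (default 0.5)
    core.masspole = random.uniform(0.05, 0.2)   # pole mass (default 0.1)
    core.total_mass = core.masspole + core.masscart
    core.polemass_length = core.masspole * core.length
    return obs

for _ in range(3):
    obs, done = reset_with_random_dynamics(env), False
    while not done:
        # A random policy stands in for whatever learner is being trained.
        obs, reward, done, info = env.step(env.action_space.sample())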

Continue reading

Controlling False Discoveries in Large-Scale Experimentation: Challenges and Solutions

“Scientific research has changed the world. Now it needs to change itself.”

– The Economist, 2013

There has been growing concern about the validity of scientific findings. A multitude of journals, papers, and reports have recognized the ever-shrinking number of replicable scientific studies. In 2016, one of the giants of scientific publishing, Nature, surveyed about 1,500 researchers across many different disciplines, asking where they stand on the status of reproducibility in their area of research. One of the many takeaways from this survey’s worrisome results is the following: 90% of the respondents agreed that there is a reproducibility crisis, and the top overall answer for boosting reproducibility was a “better understanding of statistics”. Indeed, many factors contributing to the explosion of irreproducible research stem from neglecting the fact that statistics is no longer as static as it was in the first half of the 20th century, when statistical hypothesis testing came into prominence as a theoretically rigorous proposal for making valid discoveries with high confidence.
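As background on the statistical machinery involved, here is a small sketch of the Benjamini-Hochberg procedure, a standard method for controlling the false discovery rate across many simultaneous hypothesis tests; the p-values are invented for illustration, and this classical procedure is offered as context rather than as the method the post goes on to develop.

import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha, then reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        rejected[order[: k + 1]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals))  # rejects only the two smallest p-values here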

Continue reading

Learning Preferences by Looking at the World

It would be great if we could all have household robots do our chores for us.
Chores are tasks that we want done to make our houses cater more to our
preferences; they are a way in which we want our house to be different from
the way it currently is. However, most “different” states are not very
desirable:

Surely our robot wouldn’t be so dumb as to go around breaking stuff when we ask
it to clean our house? Unfortunately, AI systems trained with reinforcement
learning only optimize features specified in the reward function and are
indifferent to anything we might’ve inadvertently left out. Generally, it is
easy to get the reward wrong by forgetting to include preferences for things
that should stay the same, since we are so used to having these preferences
satisfied, and there are so many of them. Consider the room below, and imagine
that we want a robot waiter that serves people at the dining table efficiently.
We might implement this using a reward function that provides 1 reward whenever
the robot serves a dish, and use discounting so that the robot is incentivized
to be efficient. What could go wrong with such a reward function? How would we
need to modify the reward function to take this into account? Take a minute to
think about it.
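As a toy numerical illustration of this failure mode, the snippet below compares discounted returns for two hypothetical routes the waiter robot might take; the +1-per-dish reward and the use of discounting come from the setup above, while the routes, the breakable object, and all numbers are invented.

GAMMA = 0.95

def discounted_return(rewards, gamma=GAMMA):
    return sum(r * gamma**t for t, r in enumerate(rewards))

# Careful route around a breakable object: one dish served every 4 steps.
safe_route = [0, 0, 0, 1] * 5
# Shortcut through the object: one dish every 3 steps. The reward function
# never mentions the object, so breaking it costs nothing.
reckless_route = [0, 0, 1] * 5

print(discounted_return(safe_route))      # ~2.97
print(discounted_return(reckless_route))  # ~3.40; the robot prefers the shortcut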

Continue reading

Soft Actor Critic—Deep Reinforcement Learning with Real-World Robots

We are announcing the release of our state-of-the-art off-policy model-free
reinforcement learning algorithm, soft actor-critic (SAC). This algorithm has
been developed jointly at UC Berkeley and Google, and we have been using
it internally for our robotics experiments. Soft actor-critic is, to our
knowledge, one of the most efficient model-free algorithms available today,
making it especially well-suited for real-world robotic learning. In this post,
we will benchmark SAC against state-of-the-art model-free RL algorithms and
showcase a spectrum of real-world robot examples, ranging from manipulation to
locomotion. We also release our implementation of SAC, which is particularly
designed for real-world robotic systems.
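For reference, the objective that soft actor-critic maximizes augments expected return with a policy-entropy bonus; in the notation of the SAC paper,

J(\pi) = \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]

where \alpha is a temperature parameter that trades off reward against entropy and \rho_\pi is the state-action distribution induced by the policy.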

Continue reading

Scaling Multi-Agent Reinforcement Learning

An earlier version of this post is on the RISELab blog. It is posted here
with the permission of the authors.

We just rolled out general support for multi-agent reinforcement learning in
Ray RLlib 0.6.0. This blog post is a brief tutorial on multi-agent RL and
how we designed for it in RLlib.
Our goal is to enable multi-agent RL across a
range of use cases, from leveraging existing single-agent algorithms to training
with custom algorithms at large scale.
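For a flavor of the interface, here is a minimal sketch of RLlib’s dict-based multi-agent environment API, in which observations, rewards, and dones are keyed by agent ID and a special "__all__" key ends the episode; the import path and details may vary across Ray versions, so treat this as an approximation of the documented interface rather than version-exact code.

from gym.spaces import Discrete
from ray.rllib.env.multi_agent_env import MultiAgentEnv

class TwoAgentEnv(MultiAgentEnv):
    """Toy env: each agent sees the step counter and is rewarded for action 1."""
    observation_space = Discrete(11)
    action_space = Discrete(2)

    def reset(self):
        self.t = 0
        return {"agent_0": 0, "agent_1": 0}   # one observation per agent

    def step(self, action_dict):
        self.t += 1
        obs = {agent: self.t for agent in action_dict}
        rewards = {agent: float(a == 1) for agent, a in action_dict.items()}
        dones = {"__all__": self.t >= 10}     # "__all__" terminates the episode
        return obs, rewards, dones, {}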

Continue reading