Following the Rubik's Cube video and paper, it seems OpenAI is doing their usual business: brute-force search/learning with $$$ of GPU hours, a viral video and PR push, and calling it AGI "progress".
I've seen prior work that does better on sample efficiency and/or interpretability:
DeLaN (Deep Lagrangian Networks): models robot dynamics as a learnable Lagrangian, able to derive and separate the robot's rigid-body dynamics (e.g. Coriolis forces) from external forces such as friction. https://arxiv.org/abs/1907.04489 https://openreview.net/forum?id=BklHpjCqKm
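Roughly the structure, in my own simplified notation (not necessarily the paper's exact formulation): the network learns a positive-definite mass matrix $H(q)$ and a potential $V(q)$, and the Euler-Lagrange equation supplies the remaining dynamics terms analytically:

$$\mathcal{L}(q,\dot q) = \tfrac{1}{2}\,\dot q^\top H(q)\,\dot q - V(q)$$

$$\tau = H(q)\,\ddot q + \dot H(q)\,\dot q - \tfrac{1}{2}\Big(\frac{\partial}{\partial q}\big(\dot q^\top H(q)\,\dot q\big)\Big)^{\!\top} + \frac{\partial V}{\partial q}$$

The middle terms are the Coriolis/centrifugal forces and the last is gravity; whatever measured torque is left over can be attributed to unmodeled external forces like friction, which is how the separation falls out.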
TossingBot: residual dynamics-model learning, able to adapt near-online and generalize to throwing objects of arbitrary weight at targets under perturbations. https://arxiv.org/abs/1903.11239 https://ai.googleblog.com/2019/03/unifying-physics-and-deep-learning-with.html
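The "residual" idea is simple enough to sketch. This is my own toy illustration, not TossingBot's code: an ideal-physics controller supplies a coarse release velocity, and a learned model (here a trivial stand-in regressor) predicts only a small correction on top of it.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def ballistic_release_speed(distance, angle):
    """Ideal projectile physics (no drag): v = sqrt(g*d / sin(2*angle))."""
    return np.sqrt(G * distance / np.sin(2.0 * angle))

class ResidualModel:
    """Stand-in for the learned residual network (hypothetical).
    In the paper this is a conv net over visual features; here it is a
    trivial linear regressor just to show the interface."""
    def __init__(self, w=0.0, b=0.0):
        self.w, self.b = w, b
    def predict(self, grasp_features):
        return self.w * np.mean(grasp_features) + self.b

def throw_release_speed(distance, angle, residual_model, grasp_features):
    # The physics prior supplies a coarse, generalizable estimate; the
    # learned model only corrects for object-specific effects
    # (mass, drag, grasp).
    v_physics = ballistic_release_speed(distance, angle)
    delta_v = residual_model.predict(grasp_features)
    return v_physics + delta_v

# Toy usage: throw to a target 1.5 m away at a 45-degree release angle.
model = ResidualModel(w=0.1, b=-0.05)
v = throw_release_speed(1.5, np.pi / 4, model, grasp_features=[0.2, 0.4])
print(f"release speed: {v:.2f} m/s")
```

Because the physics prior already generalizes across distances and angles, the network only has to learn object-specific corrections, which is where the sample efficiency comes from.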
One may argue they did prior-less learning, but their sample efficiency is a big problem (the adaptive randomized environment is controlled data augmentation; see the sketch below). They are also going in the opposite direction of Yoshua Bengio's composable, disentangled latents approach.
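For reference, the "adaptive randomized environment" is the automatic domain randomization (ADR) in the Rubik's Cube paper. Paraphrased in my own words, with made-up thresholds and step size, the range-update step looks roughly like this:

```python
def adr_update(bounds, param, boundary, perf,
               t_high=0.8, t_low=0.4, step=0.05):
    """One ADR-style range update (my paraphrase, not OpenAI's code).

    bounds   -- dict mapping parameter name -> [low, high] randomization range
    boundary -- which edge ('low' or 'high') was pinned during evaluation
    perf     -- average success rate measured with `param` pinned at that edge
    """
    lo, hi = bounds[param]
    if perf >= t_high:
        # Agent copes with the boundary value: widen the range (harder envs).
        if boundary == 'high':
            hi += step
        else:
            lo -= step
    elif perf <= t_low:
        # Boundary is too hard: pull the range back in.
        if boundary == 'high':
            hi -= step
        else:
            lo += step
    bounds[param] = [min(lo, hi), max(lo, hi)]
    return bounds

# Toy usage: the cube-size range widens after a strong evaluation.
bounds = {"cube_size": [0.95, 1.05]}
print(adr_update(bounds, "cube_size", "high", perf=0.9))
```

Each expansion is a deliberate choice of what data to generate next, which is why calling it controlled data augmentation seems fair.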
submitted by /u/tsauri