
[D] I’m a Reinforcement Learning researcher and I’m leaving academia.

I’m a Ph.D. student studying RL. I’m graduating soon and joining a top company as a software engineer. I have never wanted to become a professor, but I liked doing research. I found RL very interesting and still have some ideas I’d like to work on, but the recent trends in academic RL have discouraged me from continuing to do research in the field.

I’ve felt this way for a while, and the recent RL tutorial at NeurIPS reminded me of it again. Many of the post-2014 papers introduced in the talk were from either DeepMind, UC Berkeley, or MSR (which makes sense, since the speaker is from MSR). I understand why the speaker included those papers: they have been cited many times and are frequently discussed in their respective communities. But while there are many good papers from those groups, and many of their authors are certainly amazing researchers, I think there are other good papers published in top conferences that deserve to be discussed on an ongoing basis too. Things like experimental domains, benchmarks, and specific subfields (or research directions) can end up being selected with a bias because of such trends. I wonder whether it has always been this way or whether this is something new, and whether it frustrates other people too.

submitted by /u/clairinf