Assume we want to learn k tasks jointly and the data for all tasks are available up front. We could either train a model with parallel multi-task learning (e.g., each batch is a mixture of samples from the k tasks) or present the tasks sequentially (e.g., switch to a different task every 5k steps). The latter resembles continual learning, except that the set of tasks is fixed and no new tasks will arrive. Which training paradigm yields better results? Is there any paper that gives a theoretical analysis or makes an empirical comparison? (A sketch of the two schedules follows below.)
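
To make the comparison concrete, here is a minimal Python sketch of the two training schedules described above. Everything in it is hypothetical: `task_datasets` (a list of k lists of batches), `model`, and its `train_step` method are illustrative placeholders, not anything from the post or a specific library.

```python
import random

def parallel_multitask(task_datasets, model, num_steps):
    """Parallel multi-task learning: every training batch mixes
    samples drawn from all k tasks."""
    for _ in range(num_steps):
        # Draw one sub-batch per task and merge them into one batch.
        mixed_batch = [random.choice(ds) for ds in task_datasets]
        model.train_step(mixed_batch)

def sequential_tasks(task_datasets, model, num_steps, switch_every=5000):
    """Sequential presentation: train on one task at a time,
    switching to the next task every `switch_every` steps."""
    k = len(task_datasets)
    for step in range(num_steps):
        task_id = (step // switch_every) % k  # round-robin over tasks
        batch = random.choice(task_datasets[task_id])
        model.train_step(batch)
```

Under this framing, the question is whether `parallel_multitask` or `sequential_tasks` reaches better final performance given the same total step budget.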
submitted by /u/vernunftig