Learn About Our Meetup

5000+ Members

Join our meetup, learn, connect, share, and get to know your Toronto AI community.

Browse through the latest deep learning, AI, and machine learning postings from Indeed for the GTA.

If you are looking to sponsor space, be a speaker, or volunteer, feel free to give us a shout.

[D] Best practices for Multi-task Learning where labels exist for only a subset of the total tasks (partial coverage)

Say, for example, my goal is a multi-task learning workflow with a single network that predicts foreground/background segmentation, specific object segmentation, and bounding box regression.

Suppose that each image only has labels for one or two of the tasks (e.g. one input image has labels for bounding boxes and fg/bg segmentation but not for specific object segmentation, while another has specific objects and bounding boxes but no fg/bg). Also suppose that training data is scarce, so I would like to use every image even without full label coverage (especially because the shared feature-extraction layers probably benefit from the mutual information across tasks).

Is there an area of research, or a set of best practices, for training such a network end-to-end in this setting? Something along the lines of fine-tuning, or turning training 'off' for the branches of the network whose labels are missing for a given input image?
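One common way to implement the "turn training off" idea is loss masking: compute every head's loss as usual, but zero out the terms for tasks with no labels on that example, so no gradient flows into the unlabelled heads while the shared feature extractor still trains on every image. A minimal NumPy sketch of the loss-combination step, assuming the per-task losses have already been computed by the respective heads (the task ordering and function name are illustrative, not from the post):

```python
import numpy as np

def masked_multitask_loss(task_losses, label_mask):
    """Combine per-task losses, ignoring tasks unlabelled for this example.

    task_losses: per-task loss values, e.g. [fg/bg, object seg, bbox].
    label_mask:  1.0 where that task has labels for this example, else 0.0.

    Masked tasks contribute exactly zero, so their branch receives no
    gradient, while labelled tasks still update the shared backbone.
    """
    task_losses = np.asarray(task_losses, dtype=float)
    label_mask = np.asarray(label_mask, dtype=float)
    masked = task_losses * label_mask
    # Normalize by the number of labelled tasks so examples with fewer
    # labels are not implicitly down-weighted relative to fully labelled ones.
    return masked.sum() / max(label_mask.sum(), 1.0)

# Example: image has fg/bg and bbox labels but no object-segmentation label,
# so the combined loss averages only the two labelled tasks.
loss = masked_multitask_loss([0.8, 1.2, 0.4], [1.0, 0.0, 1.0])
print(loss)
```

In an autodiff framework the same multiply-by-mask pattern works on the loss tensors directly; because the masked terms are identically zero, backpropagation leaves the unlabelled branches untouched for that example.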

submitted by /u/Toast119

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.