
[D] Would it be possible to train a multi-class model on multiple datasets, where not all datasets are tagged with the same set of classes, but in the end obtain a model that predicts all classes jointly?

Let’s say I have a multi-class problem with classes {A, B, C, Other}, where Other is a catch-all for all examples that are not in A, B, or C.

The data comes in multiple datasets, D1 and D2. Let’s say D1 has been labeled {A, B, Other-or-C} and D2 has been labeled {A, C, Other-or-B}. In practice this situation can be produced from an original dataset D containing all classes by relabeling all C’s as Other in D1 and all B’s as Other in D2.
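For concreteness, here is one way these partial labels could be represented (a sketch in PyTorch; the fixed class order and the `candidate_mask` helper are my own assumptions, not anything established): an exact label marks one class, while a merged label like Other-or-C marks every class it could stand for.

```python
import torch

# Assumed fixed class order for the joint 4-way model.
CLASSES = ["A", "B", "C", "Other"]
IDX = {c: i for i, c in enumerate(CLASSES)}

def candidate_mask(label: str) -> torch.Tensor:
    """Map an exact or merged label to a 0/1 candidate-set mask over CLASSES."""
    mask = torch.zeros(len(CLASSES))
    if label == "Other-or-C":          # D1's merged label: true class is C or Other
        mask[IDX["C"]] = mask[IDX["Other"]] = 1.0
    elif label == "Other-or-B":        # D2's merged label: true class is B or Other
        mask[IDX["B"]] = mask[IDX["Other"]] = 1.0
    else:                              # exact label: A, B, C, or Other
        mask[IDX[label]] = 1.0
    return mask
```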

How can I modify the final layer of the network to accommodate this situation? In the end I want to train a model that predicts {A, B, C, Other}.
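One approach that seems to fit (a hedged sketch of standard marginal-likelihood training over candidate label sets, not a confirmed answer): keep a single shared 4-way softmax head unchanged and only modify the loss, minimizing the negative log of the total probability mass on each example's candidate set. An Other-or-C example then contributes -log(p_C + p_Other), and an exactly labeled example reduces to ordinary cross-entropy.

```python
import torch
import torch.nn.functional as F

def partial_label_loss(logits: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """
    logits: (batch, 4) raw scores from the single shared 4-way head.
    masks:  (batch, 4) binary candidate-set masks (at least one 1 per row).
    Per-example loss: -log(sum of softmax probabilities over the candidate set).
    """
    log_probs = F.log_softmax(logits, dim=-1)                  # (batch, 4)
    # log of the probability mass on the candidates: logsumexp over
    # log-probs, with non-candidate entries pushed to -inf
    masked = log_probs.masked_fill(masks == 0, float("-inf"))
    return -torch.logsumexp(masked, dim=-1).mean()

# Example: a D1 item labeled "Other-or-C" gets loss -log(p_C + p_Other);
# an item labeled "A" gets the usual cross-entropy term -log(p_A).
```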

The significance of the problem lies in reducing tagging effort. When D1 has 10,000 examples and D2 has 500, it is much easier to train jointly on D1 and D2 as they are than to tag D1 with all the tags in D2 and D2 with all the tags in D1. Some tags might be common to D1 and D2.
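Under such a loss, joint training would need no relabeling: the two datasets can simply be concatenated, with each example carrying its own mask. A toy sketch (reusing `candidate_mask` from above; the sizes are scaled down, and oversampling D2 is just one optional way to offset the 10,000-vs-500 imbalance):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Hypothetical stand-ins for the two datasets: (feature_tensor, label_str) pairs.
D1 = [(torch.randn(16), "Other-or-C") for _ in range(100)]  # labeled {A, B, Other-or-C}
D2 = [(torch.randn(16), "Other-or-B") for _ in range(5)]    # labeled {A, C, Other-or-B}

xs = torch.stack([x for x, _ in D1 + D2])
ms = torch.stack([candidate_mask(y) for _, y in D1 + D2])

# Sample D2 items more often so the smaller dataset is not drowned out.
weights = torch.tensor([1.0] * len(D1) + [len(D1) / len(D2)] * len(D2))
loader = DataLoader(TensorDataset(xs, ms),
                    sampler=WeightedRandomSampler(weights, num_samples=len(weights)),
                    batch_size=32)
```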

This looks to me like a multi-task learning problem with partially overlapping tasks.

submitted by /u/visarga