
[Discussion] Any datasets for multi-modal learning between time series and text?

Multi-modal learning has traditionally focused on image/video vs. text (e.g. image captioning, video description), but does anyone know of good datasets for learning between text and time series? Examples of time series: stock charts, power plant / wearable sensor readings, music, etc.

I am looking for natural language human comments on these types of data. Examples I can think of are:
– stock charts <-> analyst notes
– power plant sensor data <-> operator notes
– wearable sensor data <-> coach notes or commentary
– music data <-> critics or teacher notes

I wonder if there are real-world datasets of these types.
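
To make the pairing I have in mind concrete, here is a minimal sketch in Python/PyTorch (the file names and array layout are just assumptions for illustration): each example would be one time-series window aligned by index with one free-text human comment.

```python
import numpy as np
from torch.utils.data import Dataset

class SeriesTextPairs(Dataset):
    """One time-series window paired with one human-written comment."""

    def __init__(self, series_path, notes_path):
        # series: array of shape (num_examples, sequence_length, num_channels)
        self.series = np.load(series_path)
        # notes: one free-text comment per line, aligned by index with the series
        with open(notes_path, encoding="utf-8") as f:
            self.notes = [line.strip() for line in f]
        assert len(self.series) == len(self.notes)

    def __len__(self):
        return len(self.notes)

    def __getitem__(self, idx):
        # (time series, natural-language comment about it)
        return self.series[idx], self.notes[idx]

# e.g. SeriesTextPairs("stock_windows.npy", "analyst_notes.txt")  # hypothetical files
```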

Thanks.

submitted by /u/mistycheney