[Discussion] Any datasets for multi-modal learning between time series and text?
Multi-modal learning has traditionally focused on image/video vs. text (e.g. image captioning, video description), but does anyone know of good datasets that pair text with time series? Examples of time series: stock charts, power plant or wearable sensor readings, music, etc.
I am looking for natural-language human commentary on these types of data. Examples I can think of:
– stock charts <-> analyst notes
– power plant sensor data <-> operator notes
– wearable sensor data <-> coach notes or commentary
– music data <-> critic or teacher notes
Are there real-world datasets of these kinds?
Thanks.
submitted by /u/mistycheney