
[D] LSTM Autoencoder Separating By Sequence Length Instead of Class Features

Hello,

I’ve built an LSTM autoencoder in Keras, similar to this tutorial: https://blog.keras.io/building-autoencoders-in-keras.html. I use padding with a mask to handle sequences of different lengths. However, when I look at a t-SNE plot of the encoded features, my points cluster more by sequence length than by class features. Within some of the larger clusters there is sub-clustering by interesting features, but the top-level clusters are all organized by sequence length. Does anyone have suggestions for minimizing the effect of sequence length on the encoded features?
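For concreteness, here is a minimal sketch of the kind of setup described above, following the masked sequence-to-sequence autoencoder pattern from the linked Keras tutorial. This is an assumption of what the architecture might look like, not the original code; the layer sizes, variable names, and data shapes are all illustrative.

```python
# A minimal sketch of the setup described above, NOT the original code:
# layer sizes, variable names, and data shapes are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.manifold import TSNE

max_len, n_features, latent_dim = 50, 8, 32  # assumed dimensions

inputs = keras.Input(shape=(max_len, n_features))
# Treat all-zero timesteps as padding so the encoder skips them.
masked = layers.Masking(mask_value=0.0)(inputs)
# Encoder: the final hidden state is the fixed-length code.
encoded = layers.LSTM(latent_dim)(masked)
# Decoder: repeat the code across timesteps and reconstruct the sequence.
repeated = layers.RepeatVector(max_len)(encoded)
decoded = layers.LSTM(n_features, return_sequences=True)(repeated)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Note: the mask stops at the encoder (an LSTM with return_sequences=False
# emits no mask), so the MSE above is also computed on the padded timesteps,
# which is one plausible way length can leak into the code.

# Dummy padded data: random sequences zero-padded beyond each true length.
x = np.random.rand(100, max_len, n_features).astype("float32")
lengths = np.random.randint(5, max_len + 1, size=100)
for i, L in enumerate(lengths):
    x[i, L:] = 0.0

autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)

# Reduce the learned codes to 2-D for inspection, as in the t-SNE plot above.
embedding = TSNE(n_components=2).fit_transform(encoder.predict(x))
```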

Thanks a Billion!

submitted by /u/DataSciencePenguin