[D] LSTM Autoencoder Separating By Sequence Length Instead of Class Features
Hello,
I’ve built an LSTM autoencoder in Keras, similar to this tutorial: https://blog.keras.io/building-autoencoders-in-keras.html . I zero-pad my sequences and use a masking layer so the model can handle sequences of different lengths. However, when I look at a t-SNE plot of the encoded features, the points cluster more by sequence length than by the features I care about. Within some of the larger clusters there is sub-clustering by interesting features, but the top-level clusters are all by sequence length. Does anyone have suggestions for minimizing the effect of sequence length on the encoded features?
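For reference, the architecture looks roughly like this minimal sketch (simplified; the layer sizes, variable names, and mask value here are placeholders, not my actual values):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

max_len, n_features, latent_dim = 50, 8, 16  # placeholder sizes

inputs = keras.Input(shape=(max_len, n_features))
masked = layers.Masking(mask_value=0.0)(inputs)    # skip zero-padded timesteps
encoded = layers.LSTM(latent_dim)(masked)          # fixed-size code per sequence
repeated = layers.RepeatVector(max_len)(encoded)   # expand code back to a sequence
decoded = layers.LSTM(latent_dim, return_sequences=True)(repeated)
outputs = layers.TimeDistributed(layers.Dense(n_features))(decoded)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, encoded)  # this encoder's output feeds the t-SNE
autoencoder.compile(optimizer="adam", loss="mse")
```

The t-SNE plot mentioned above is computed on `encoder.predict(...)` over the padded sequences, so the clusters are in the latent space, not the raw data.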
Thanks a Billion!