[D] Intuition behind embedding dimension and LSTM output space dimension?
So I followed an example of building an LSTM network for sentiment analysis, trained it on my own dataset, and the performance is pretty good. However, I want to understand the logic behind choosing the embedding dimension and the LSTM output dimension. How would one go about choosing an optimal size for each? And what effect would reducing either dimension have?
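For reference, here is a minimal sketch of the kind of model I mean (vocabulary size and layer sizes are just placeholder values, not tuned choices):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size = 10000    # placeholder: size of the tokenizer's vocabulary
embedding_dim = 128   # the embedding dimension I'm asking about
lstm_units = 64       # the LSTM output space dimension I'm asking about

model = Sequential([
    # Maps each integer token id to a dense vector of length embedding_dim
    Embedding(input_dim=vocab_size, output_dim=embedding_dim),
    # Processes the sequence of embeddings; returns a vector of length lstm_units
    LSTM(lstm_units),
    # Binary sentiment output
    Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
```

So concretely: how should one pick `embedding_dim` and `lstm_units` here, and what happens as they shrink?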
I am quite new to this, and any help would be great! I am using Keras in Python.