[P] See RNN: Kernel-, Gate-, Channel-wise Visualization of Gradients, Weights, and Activations
DL should be more than shooting in the dark to see what sticks; to that end, I present See RNN, the first comprehensive RNN visualization API for Keras & TensorFlow layers:
Why use it? Introspection is powerful for debugging, regularizing, and understanding NNs. For example, how can you tell whether your RNN is learning long-term dependencies? Monitor its gradients: if a non-zero gradient flows through every timestep, then every timestep contributes to the weight updates, so the RNN isn't ignoring parts of its input sequences and is forced to learn from them. Or use it simply because the visuals are rather pretty.
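To make the gradient-monitoring idea concrete, here is a minimal NumPy sketch (not the See RNN API; all names are illustrative) of backpropagation through time for a vanilla tanh RNN. It collects the per-timestep gradient norm of the loss w.r.t. each hidden state; if the norm at early timesteps has collapsed to ~0, those timesteps no longer influence the weight updates and the network is effectively ignoring the start of the sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H = 8, 4                                # timesteps, hidden units
W = rng.normal(scale=0.5, size=(H, H))     # recurrent weight matrix

# Pretend-forward pass: hidden states of a tanh RNN (random stand-ins).
h = [np.tanh(rng.normal(size=H)) for _ in range(T)]

# Backprop through time: for tanh, dL/dh_{t-1} = W^T @ (dL/dh_t * (1 - h_t^2)).
grad = np.ones(H)                          # arbitrary upstream gradient at t = T
norms = []
for t in reversed(range(T)):
    norms.append(float(np.linalg.norm(grad)))
    grad = W.T @ (grad * (1 - h[t] ** 2))

norms = norms[::-1]                        # index 0 = earliest timestep
# Near-zero norms at low indices signal vanishing gradients: early inputs
# barely contribute to learning.
print(["%.3f" % n for n in norms])
```

See RNN automates this kind of inspection for real Keras/TF models and renders the per-timestep, per-gate, and per-channel statistics as plots instead of raw numbers.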
Numerous examples, with image results, are explored at the link. All functionality is fully documented and compatible with TF 1.15.0 & Keras 2.2.5 and below, as well as TF 2.0.0+ & Keras 2.3.0+. Quickstart sandbox code is included.
Feedback is welcome.