[D] How do you standardize/scale data for multi-input neural networks?
Books teach us to standardize or scale features (when they have non-comparable scales) before feeding them to a neural network (or to other algorithms such as linear or logistic regression).
When you have a multi-input neural network (for example, some “dense” inputs for tabular/structured data and some convolutional layers for unstructured data such as images, whose outputs you concatenate at some point), how do you standardize/scale the data? You have totally different input data and different NN layers, and at some point you merge them (e.g. https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models). How do you handle data normalization in this case?
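To make the setup concrete, here is a minimal NumPy sketch (the array names, shapes, and scaling choices are my own illustration, not from the linked Keras guide) of one common approach: scale each input independently before it enters its own branch. The tabular features are standardized per column, the images are rescaled to [0, 1], and only the branch *outputs* are concatenated inside the network, so the raw inputs never need a shared scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs for the two branches:
X_tab = rng.normal(loc=50.0, scale=10.0, size=(100, 5))              # tabular features
X_img = rng.integers(0, 256, size=(100, 8, 8, 1)).astype("float64")  # 8x8 grayscale pixels

# Branch 1: standardize the tabular data per feature
# (statistics would be fitted on the training set only).
tab_mean = X_tab.mean(axis=0)
tab_std = X_tab.std(axis=0)
X_tab_scaled = (X_tab - tab_mean) / tab_std

# Branch 2: rescale pixel values to [0, 1] -- a different,
# per-input scheme that suits image data.
X_img_scaled = X_img / 255.0

# Each branch now sees well-conditioned inputs; the Concatenate layer
# later merges the *learned representations*, not the raw inputs.
print(X_tab_scaled.mean(axis=0).round(6))
print(X_img_scaled.min(), X_img_scaled.max())
```

The two scaled arrays would then be passed as the separate inputs of the functional-API model (e.g. `model.fit([X_tab_scaled, X_img_scaled], y)`).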
submitted by /u/ekerazha