[D] How do you standardize/scale data for multi-input neural networks?

Books teach us that we have to standardize or scale data (when features have non-comparable scales) before feeding it to a neural network (or to other algorithms such as linear or logistic regression).

When you have a multi-input neural network (for example, some "dense" inputs for tabular/structured data and some convolutional layers for unstructured data such as images, whose outputs you concatenate at some point), how do you standardize/scale the data? You have totally different input data and different NN layers, and at some point you merge them (e.g. https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models). How do you handle data normalization in this case?
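For what it's worth, a common approach is to normalize each input branch independently, before it enters the network: fit a StandardScaler on the training split of the tabular features, and rescale image pixels to a fixed range like [0, 1]. Below is a minimal, hypothetical sketch of the setup the question describes, using the Keras functional API; all shapes, layer sizes, input names, and the dummy data are assumptions for illustration, not a prescribed recipe.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from tensorflow import keras
    from tensorflow.keras import layers

    # Hypothetical dummy data: 100 samples, 10 tabular features, 64x64 RGB images.
    X_tab = np.random.randn(100, 10) * 50 + 3                # non-comparable scales
    X_img = np.random.randint(0, 256, (100, 64, 64, 3)).astype("float32")
    y = np.random.randint(0, 2, (100, 1))

    # Normalize each branch independently, before the network sees it.
    scaler = StandardScaler().fit(X_tab)                     # fit on training data only
    X_tab_s = scaler.transform(X_tab)                        # zero mean, unit variance
    X_img_s = X_img / 255.0                                  # pixels rescaled to [0, 1]

    # Tabular branch: dense layers over the standardized features.
    tab_in = keras.Input(shape=(10,), name="tabular")
    t = layers.Dense(32, activation="relu")(tab_in)

    # Image branch: a small conv stack over the rescaled images.
    img_in = keras.Input(shape=(64, 64, 3), name="image")
    i = layers.Conv2D(16, 3, activation="relu")(img_in)
    i = layers.GlobalAveragePooling2D()(i)

    # Merge the two branches and classify.
    merged = layers.concatenate([t, i])
    out = layers.Dense(1, activation="sigmoid")(merged)

    model = keras.Model(inputs=[tab_in, img_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit({"tabular": X_tab_s, "image": X_img_s}, y, epochs=2, verbose=0)

The point of the sketch is that each modality gets its own appropriate normalization (z-scoring for the tabular features, fixed-range scaling for the pixels); since the concatenation happens only after both branches have produced learned activations, the raw inputs never need to be put on a common scale with each other.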

submitted by /u/ekerazha