Hi,
As far as I’m aware, vocabulary is not normalized at all across the research literature. I tend to use depth for the number of consecutive convolutional layers, and width for the number of filters used at each convolutional layer. Though it gets tricky when considering the tensor shape, where height and width describe the feature map and depth corresponds to the number of filters.
Which convention do you use, and which seems more widespread? Thank you.
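To make the ambiguity concrete, here is a rough PyTorch-style sketch of what I mean (the layer and filter counts are arbitrary, just for illustration):

    import torch
    import torch.nn as nn

    # "Depth" of the network as I use it: 3 consecutive convolutional layers.
    # "Width" of each layer: its number of filters (out_channels).
    model = nn.Sequential(
        nn.Conv2d(in_channels=3,  out_channels=32, kernel_size=3, padding=1),  # width 32
        nn.ReLU(),
        nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1),  # width 64
        nn.ReLU(),
        nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1),  # width 64
    )

    x = torch.randn(1, 3, 224, 224)  # tensor shape: (batch, depth/channels, height, width)
    y = model(x)
    print(y.shape)  # torch.Size([1, 64, 224, 224]) -- here "depth" means channels,
                    # while "height"/"width" describe the spatial size of the feature map.

So the same words end up naming two different things: depth/width of the architecture versus depth/height/width of the activation tensor.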
submitted by /u/Towram