[D] Convolutional neural network width and depth vocabulary convention.
As far as I’m aware, the vocabulary isn’t standardized across the literature. I tend to use depth for the number of consecutive convolutional layers, and width for the number of filters used at each convolutional layer. It gets tricky, though, when describing tensor shapes, where height and width refer to the spatial dimensions of the feature map, and depth corresponds to the number of filters (channels).
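To make the clash concrete, here's a minimal sketch (the specific sizes are made up for illustration) showing how "width" and "depth" mean one thing at the network level and another at the tensor level, assuming the common NCHW layout:

```python
# Hypothetical sizes, just to illustrate the terminology clash.
depth = 5    # network "depth": number of consecutive conv layers
width = 64   # network "width": number of filters per conv layer

# A single activation tensor, however, is usually described as
# (batch, channels, height, width) in the NCHW convention:
# here "width" is a spatial dimension of the feature map, and the
# channel axis (sometimes called "depth") equals the number of
# filters of the layer that produced it.
feature_map_shape = (1, width, 32, 32)  # (N, C, H, W)
print(feature_map_shape)  # (1, 64, 32, 32)
```

So the same words refer to the architecture in one sentence and to a tensor axis in the next, which is exactly where the confusion comes from.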
Which convention do you use, and which one seems more widespread? Thank you.