
[D] skip-GANomaly and general issues with reproduction of papers


Hi all, I’m currently working through an implementation of skip-GANomaly, a paper on anomaly detection using an adversarially trained auto-encoder. In a nutshell, they train an auto-encoder over normal examples. Abnormal examples at test time will have poorer reconstruction, and thus higher loss, as they come from a different image distribution than normal examples.
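To make the scoring idea concrete, here is a minimal numpy sketch of reconstruction-error anomaly scoring. The reconstructions are simulated stand-ins (an AE trained on normal data reconstructs in-distribution images closely and out-of-distribution images poorly); the score function itself is just per-example MSE, not the paper's exact loss:

```python
import numpy as np

def anomaly_score(x, x_recon):
    # Per-example reconstruction error: mean squared error over (C, H, W).
    return ((x - x_recon) ** 2).mean(axis=(1, 2, 3))

rng = np.random.default_rng(0)
x = rng.uniform(size=(8, 3, 32, 32))  # a batch of test images

# Stand-in reconstructions: small residual for in-distribution images,
# large residual for out-of-distribution ones (simulating a trained AE).
recon_normal = x + rng.normal(0.0, 0.01, x.shape)
recon_abnormal = x + rng.normal(0.0, 0.5, x.shape)

s_norm = anomaly_score(x, recon_normal)
s_abn = anomaly_score(x, recon_abnormal)
print(s_norm.mean() < s_abn.mean())  # higher score => flagged anomalous
```

At test time, a threshold on this score separates normal from abnormal examples.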

However, this paper comes with no code, and it gives no clear definition of the network architecture. The only description of the architecture is the figure shown below:

Problematic Figure of skip-GANomaly

As there is no explanation, I interpret the initial convolutional feature maps as coming from a single convolution taking 3 -> 64 channels; each following arrow applies LReLU, Conv, and BN, so a nonlinearity would only be applied after that first convolution.
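That reading can be sketched in numpy, with 1x1 convolutions (plain channel-mixing matrices) standing in for the paper's unspecified kernel sizes and strides; the channel counts (3 -> 64 -> 128) follow the figure, everything else is an assumption:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def conv1x1(x, w):
    # x: (C_in, H, W); w: (C_out, C_in) -- a 1x1 conv is a per-pixel matmul.
    return np.tensordot(w, x, axes=([1], [0]))

def batch_norm(x, eps=1e-5):
    # Per-channel normalization over spatial dims (inference-style sketch).
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

rng = np.random.default_rng(0)
img = rng.normal(size=(3, 8, 8))

# Initial convolution: 3 -> 64 channels, with no activation in this reading.
f0 = conv1x1(img, rng.normal(size=(64, 3)))

# One "arrow" = LReLU -> Conv -> BN (64 -> 128 channels).
f1 = batch_norm(conv1x1(leaky_relu(f0), rng.normal(size=(128, 64))))
print(f0.shape, f1.shape)  # (64, 8, 8) (128, 8, 8)
```

Under this interpretation, `f0` (the pre-activation 64-channel maps) is what the skip connection would carry to the decoder.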

However, if the initial convolutional feature maps are concatenated over the skip connection, as shown in the figure, reconstruction could be perfect without any of the rest of the network: a single convolution going from 64 + 64 -> 3 channels would suffice, since it could simply learn the inverse of the initial convolution that produced those feature maps from the image. I could be misinterpreting this, so if anyone could enlighten me, that’d be great. I’d imagine the feature maps resulting from the first convolution could not be concatenated, unless "feature maps" are defined as the output after batch norm and activation; but if BN and LReLU are included in the initial convolution, that is not described anywhere in the paper.
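The shortcut concern is easy to verify numerically. Again modelling the initial convolution as a 1x1 conv (a stand-in, since the paper does not specify the kernel size), it is a linear map on channels, so a single decoder convolution can learn its pseudo-inverse and recover the image exactly from the skip connection alone:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(3, 8, 8))

# Initial convolution as a 1x1 conv: 3 -> 64 channels (no nonlinearity applied
# before the skip connection, per the interpretation in question).
W = rng.normal(size=(64, 3))
f0 = np.tensordot(W, img, axes=([1], [0]))  # the maps carried by the skip

# A single decoder convolution can learn the pseudo-inverse of W and
# reconstruct the image perfectly, ignoring the rest of the network.
W_inv = np.linalg.pinv(W)                   # (3, 64); pinv(W) @ W = I here
recon = np.tensordot(W_inv, f0, axes=([1], [0]))

print(np.allclose(recon, img))  # True
```

Since `W` has full column rank (3), `pinv(W) @ W` is the identity, so the reconstruction loss can be driven to zero without the encoder-decoder doing anything useful. An LReLU/BN applied before the skip would break this exact-inverse argument, which is why the layer ordering matters.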

In general, how common is it for papers to be published without code, or with too little architectural and training detail to reproduce the work? I’m a student, and this is the first time I’ve diverged from large SOTA papers with thousands of citations; I was surprised by how much ambiguity this paper contains.

submitted by /u/good_rice