
[D] Is it possible to hack AutoEncoder?

Re-Ha! (which means Reddit Hi!)

As I wrote in the title, is it possible to hack an AutoEncoder?

What the heck is ‘hacking an AutoEncoder’, then?

Let me give you a simple scenario.

[Scenario]

Suppose Jane extracts a latent representation, L, of her private data, X, with three features

(body weight, height, and a binary variable indicating whether she ate pizza or not) daily.

X -> Encoder -> L -> Decoder -> X’ (reconstructed input ~ X)

(X: 3 dim., L: 1 dim., X’: 3 dim.)

She made a simple ML system that tracks the three features every day,

retrains the AutoEncoder, and uploads it to her private server.
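To make the scenario concrete, here is a toy version of Jane's pipeline. This is a minimal numpy sketch; the linear 3 -> 1 -> 3 architecture and all numbers are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily records: [weight (kg), height (cm), ate pizza (0/1)].
X = np.column_stack([
    rng.normal(60.0, 1.0, 100),
    rng.normal(165.0, 1.0, 100),
    rng.integers(0, 2, 100).astype(float),
])

# Standardize so no single feature dominates the reconstruction loss.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

# A tiny linear autoencoder, 3 -> 1 -> 3, trained by plain gradient descent
# on the mean squared reconstruction error.
W_enc = rng.normal(0.0, 0.1, (3, 1))  # encoder weights
W_dec = rng.normal(0.0, 0.1, (1, 3))  # decoder weights
lr = 0.05
for _ in range(5000):
    L = Xn @ W_enc           # latent codes, shape (100, 1)
    X_hat = L @ W_dec        # reconstructions, shape (100, 3)
    err = X_hat - Xn
    grad_dec = (L.T @ err) / len(Xn)
    grad_enc = (Xn.T @ (err @ W_dec.T)) / len(Xn)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

L = Xn @ W_enc  # this one number per day is what sits on the server
```

In this toy setup, L (one scalar per day) is the only thing stored on the server that Chris can steal.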

Then, suppose Chris (friend-zoned by Jane a month ago) succeeds in stealing L
by installing a backdoor program on Jane’s server.

But he knows neither the structure of the Decoder network, nor its trained weights, nor the reconstructed input X’.

All he has is the latent representation L (continually updated) and the dimension of the original input X.

[Question]

In this situation, is it possible for Chris to retrieve the original input X?

I think it is of course impossible, but what would be the mathematical concept supporting that impossibility?
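One concept that seems relevant here is non-injectivity (rank–nullity): a map from 3 dimensions down to 1 cannot be one-to-one. Even in the friendliest case for Chris, where the encoder is linear and he somehow knows it exactly (which he doesn't), infinitely many inputs produce the exact same latent code. A hypothetical sketch, not Jane's actual encoder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose, hypothetically, Chris even knew a linear encoder W: R^3 -> R^1.
W = rng.normal(size=(1, 3))

x_true = np.array([60.0, 165.0, 1.0])  # Jane's real record
L = W @ x_true                          # the stolen latent code

# The set of all x with W @ x == L is a 2-dimensional affine subspace:
# x_true plus anything in the null space of W. An orthonormal basis of
# that null space comes from the right-singular vectors of W.
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[1:]  # rows 1 and 2 span null(W), since W has rank 1

# Any point in that subspace reproduces the same latent code exactly,
# while being an arbitrarily different "private record".
x_other = x_true + 10.0 * null_basis[0] - 5.0 * null_basis[1]
assert np.allclose(W @ x_other, L)
```

So from L alone the original X is underdetermined; at best, Chris could learn something statistical about X over time, not recover it exactly.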

Or, is there any possible method to reconstruct/approximate the original input?

Thank you in advance!

submitted by /u/vaseline555