
[D] Possible privacy attack method on data shrunk by autoencoder?

http://arxiv.org/abs/1910.08489

Hello, this is my first paper preprint:

Privacy-preserving Federated Bayesian Learning of a Generative Model for Imbalanced Classification of Clinical Data

Although it is not strictly focused on deep learning or on federated learning for edge devices, I propose a new framework for learning a global model in a horizontally distributed setting, especially in the clinical field.

AFAIK, it is the first attempt to apply Approximate Bayesian Computation (ABC) to federated learning.
(If not, please let me know!)

The proposed method can preserve privacy without complicated perturbation techniques such as differential privacy, homomorphic encryption, or hashing.

As I argue in the paper, as long as each local site does not reveal the trained weights and the structure of its autoencoder, the compressed ("shrunk") data CANNOT be recovered by the central server. Recovery is not possible even if some local sites collude against another site to disclose its information.

  • But this is my hypothesis and expectation, so I would like to hear feedback or opinions on it:
    is it really impossible to leak information from data compressed by a local autoencoder? (A minimal sketch of the setting I have in mind follows below.)
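
To make the question concrete, here is a minimal sketch of what I mean by "shrunk data": each site trains its own autoencoder privately and shares only the latent codes, never the weights or the architecture. The architecture, dimensions, and training loop below are placeholders of my own, not the exact model from the paper.

```python
# Minimal sketch (assumed architecture and dimensions, not the paper's exact model):
# each local site trains its own autoencoder privately and shares ONLY the latent
# codes of its data with the central server -- never the weights or the structure.
import torch
import torch.nn as nn

class LocalAutoencoder(nn.Module):
    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, input_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_and_compress(x_local: torch.Tensor, latent_dim: int = 8, epochs: int = 100):
    """Train the autoencoder on local data and return only the latent codes."""
    model = LocalAutoencoder(x_local.shape[1], latent_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_local), x_local)
        loss.backward()
        opt.step()
    with torch.no_grad():
        z_local = model.encoder(x_local)  # compressed ("shrunk") data
    # Only z_local leaves the site; the model (weights + structure) stays local,
    # so the server has no decoder with which to invert the compression.
    return z_local
```

Any attack would have to work from the latent codes alone (possibly pooled across colluding sites), which is exactly the scenario I am asking about.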

In addition, a global model can be learned at the central server with minimal information: only a distance between the local data (compressed via the autoencoder) and the generated data (of the same dimension as the compressed data) is needed.
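
For readers unfamiliar with ABC, here is a rough sketch of how such a distance-only update can work. This is plain rejection ABC with a toy Gaussian generator and a Euclidean distance on latent means; the prior, generative model, distance, and acceptance rule in the paper are different and more involved, so treat every name below as a placeholder.

```python
# Toy rejection-ABC at the server: it only ever touches latent codes and a distance,
# never raw local records. All modelling choices below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 2

def sample_prior():
    """Placeholder prior over the generative model's parameter (a latent mean)."""
    return rng.normal(0.0, 3.0, size=LATENT_DIM)

def generate(theta, n):
    """Placeholder generative model: an isotropic Gaussian in the latent space."""
    return rng.normal(theta, 1.0, size=(n, LATENT_DIM))

def distance(z_local, z_gen):
    """Distance between compressed local data and generated data of the same dimension."""
    return float(np.linalg.norm(z_local.mean(axis=0) - z_gen.mean(axis=0)))

def abc_rejection(z_local, n_accept=100, eps=1.0):
    """Keep parameters whose generated data fall within eps of the local latent codes."""
    accepted = []
    while len(accepted) < n_accept:
        theta = sample_prior()
        z_gen = generate(theta, len(z_local))
        if distance(z_local, z_gen) < eps:
            accepted.append(theta)
    return np.stack(accepted)

# Stand-in for latent codes received from one local site.
z_local = rng.normal(2.0, 1.0, size=(200, LATENT_DIM))
posterior = abc_rejection(z_local)
print(posterior.mean(axis=0))  # approximate posterior mean, should be near [2, 2]
```

For brevity the distance is computed at the server here; in the federated setting the same comparison can be made against the perturbed local data at each site, with only the resulting distance being communicated.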

Any feedback and questions are welcome, and thank you in advance!

submitted by /u/vaseline555