Learned Model Compression for Efficient and Privacy-Preserving Federated Learning
  • Yiming Chen,
  • Lusine Abrahamyan,
  • Hichem Sahli,
  • Nikos Deligiannis
Corresponding Author: Yiming Chen ([email protected])


Abstract

Federated learning enables the collaborative training of deep learning models among multiple clients while safeguarding data privacy, security, and legal compliance by keeping the training data local. Although the training data remain on the client side rather than being shared with the server or other clients, recent work has shown that the data can still be reconstructed from the local updates or gradients. Various defense techniques have been proposed to address this information leakage, including adding noise to the gradients, performing gradient compression (such as sparsification), and feature perturbation. However, these methods either impede model convergence, impose restrictions on the model architecture, or entail substantial communication costs; balancing model performance, communication cost, and privacy preservation remains a challenging trade-off. To tackle information leakage during collaborative training, we introduce an adaptive autoencoder-based method that compresses, and thereby perturbs, the model parameters. Each client trains an autoencoder for a few local iterations to learn a representation of its local model parameters, and then shares the compressed model parameters with the server instead of the true ones. The lossy compression performed by the autoencoder serves as an effective protection against information leakage from the updates. Moreover, because the perturbation is intrinsically tied to the autoencoder's input, the parameters of different layers are perturbed accordingly. Our approach also reduces the communication rate by a factor of 4.1 compared to federated averaging. We empirically validate our method on two widely used models in the federated learning setting, considering three datasets, and compare it against several well-established defense frameworks. The results indicate that our approach attains model performance nearly identical to that of training with unmodified local updates, while effectively preventing information leakage and reducing communication costs compared to other methods, including noisy gradients, gradient sparsification, and PRECODE.
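To make the mechanism concrete, below is a minimal sketch of the core idea as described in the abstract: each client fits a small autoencoder to its flattened local parameters for a few iterations and transmits the lossy reconstruction in place of the true update. This is not the authors' implementation; all names (flatten_params, ParamAutoencoder, compress_update), layer sizes, and the training budget are illustrative assumptions, with the latent size set to roughly match the 4.1x rate reduction quoted above.

# A minimal sketch, assuming PyTorch; everything here is illustrative,
# not the authors' released code.
import torch
import torch.nn as nn

def flatten_params(model: nn.Module) -> torch.Tensor:
    # Concatenate all model parameters into a single 1-D vector.
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

class ParamAutoencoder(nn.Module):
    # Lossy compressor for a flattened parameter vector; the latent size
    # is chosen to mirror the ~4.1x rate reduction quoted in the abstract.
    def __init__(self, dim: int, ratio: float = 4.1):
        super().__init__()
        latent = max(1, int(dim / ratio))
        self.encoder = nn.Linear(dim, latent)
        self.decoder = nn.Linear(latent, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def compress_update(local_model: nn.Module, steps: int = 20) -> torch.Tensor:
    # Fit the autoencoder for a few local iterations, then return the
    # lossy reconstruction that would be shared with the server instead
    # of the true parameters; the reconstruction error acts as a
    # perturbation tied to the parameters themselves.
    x = flatten_params(local_model).unsqueeze(0)  # shape (1, dim)
    ae = ParamAutoencoder(x.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(x), x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ae(x).squeeze(0)

In a full round, the server would aggregate these vectors (e.g., via federated averaging) and reshape them back into per-layer tensors. Transmitting the latent code together with a shared decoder, rather than the full reconstruction, is one plausible way to realize the communication saving; the exact protocol is specified in the paper itself.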
Submitted to TechRxiv: 11 Mar 2024
Published in TechRxiv: 18 Mar 2024