Efficient Secure Aggregation for Federated Learning

  • Varun Madathil, Yale University; Melissa Chase, Microsoft

Federated Learning (FL) trains a global model by having each selected device push only its model update to a central server, keeping raw data local. However, those updates can still leak sensitive information unless the server learns only their sum. A naïve approach is to run a generic secure multiparty summation protocol, but off‑the‑shelf protocols require several rounds of interaction and even direct client‑to‑client communication. Both requirements are often infeasible in FL, where mobile devices are intermittently online, can drop out at any moment, and cannot be expected to interact with one another.
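
To make the "server learns only the sum" constraint concrete, here is a minimal sketch of the pairwise‑masking idea behind classic multi‑round secure‑aggregation protocols. This is not Tacita; the modulus, seed table, and function names are all hypothetical, chosen only for illustration:

    # Toy sketch of pairwise-masking secure aggregation (not Tacita).
    # Every identifier here is hypothetical.
    import random

    MOD = 2 ** 32  # updates are vectors over Z_{2^32}

    def masked_update(client_id, update, all_ids, pair_seeds):
        """Mask an update so that, summed over all clients, masks cancel."""
        masked = list(update)
        for peer in all_ids:
            if peer == client_id:
                continue
            # Both endpoints derive the same mask stream from a shared seed;
            # the lower-id party adds it, the higher-id party subtracts it.
            rng = random.Random(pair_seeds[frozenset({client_id, peer})])
            sign = 1 if client_id < peer else -1
            masked = [(m + sign * rng.randrange(MOD)) % MOD for m in masked]
        return masked

    updates = {1: [5, 7], 2: [1, 2], 3: [10, 0]}
    pair_seeds = {frozenset(p): random.randrange(2 ** 62)
                  for p in [(1, 2), (1, 3), (2, 3)]}

    # The server sees only masked uploads, each individually uniform ...
    uploads = [masked_update(cid, u, updates.keys(), pair_seeds)
               for cid, u in updates.items()]
    # ... yet the pairwise masks cancel, revealing exactly the sum.
    print([sum(col) % MOD for col in zip(*uploads)])  # [16, 9]

Note how even this toy exposes the FL pain points above: agreeing on pair_seeds requires client‑to‑client interaction, and if a client drops out before uploading, the masks its peers added for it never cancel.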

In this talk, I will review the secure‑aggregation problem in the context of FL and explain why naïve solutions fail by focusing on constraints unique to the FL setting. I will then present Tacita, a single‑server protocol that satisfies these FL‑specific constraints while retaining provable security. Tacita uses an external committee (needed to prevent residual leakage) to aid in secure aggregation and offers:

  • One‑shot execution: every client and every committee member sends exactly one message.
  • Constant‑size communication per client, independent of both the size of the round’s client cohort and the committee size.
  • Robust aggregation despite client or committee‑member dropout.

These properties are enabled by two new primitives: (i) succinct multi‑key linearly homomorphic threshold signatures (MKLHTS) for verifiable input soundness with a single aggregate signature, and (ii) a homomorphic variant of Silent Threshold Encryption (CRYPTO ’24).
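
The primitives themselves are the subject of the talk, but the one‑shot, committee‑aided flow can be previewed with plain additive secret sharing. The sketch below is a hypothetical stand‑in (names, committee size, and modulus are invented), not Tacita's construction:

    # Toy stand-in for committee-aided aggregation (not Tacita's STE/MKLHTS).
    import random

    MOD = 2 ** 32
    COMMITTEE = 3  # hypothetical committee size

    def additive_shares(update):
        """Split a vector into COMMITTEE shares that sum to it mod MOD."""
        rand = [[random.randrange(MOD) for _ in update]
                for _ in range(COMMITTEE - 1)]
        last = [(u - sum(col)) % MOD for u, col in zip(update, zip(*rand))]
        return rand + [last]

    updates = [[5, 7], [1, 2], [10, 0]]

    # Clients: one upload each, routing share j to committee member j.
    inboxes = [[] for _ in range(COMMITTEE)]
    for u in updates:
        for j, s in enumerate(additive_shares(u)):
            inboxes[j].append(s)

    # Committee: each member sends exactly one message, its partial sum.
    partials = [[sum(col) % MOD for col in zip(*box)] for box in inboxes]

    # Server: adding the partial sums reveals only the aggregate update.
    print([sum(col) % MOD for col in zip(*partials)])  # [16, 9]

Unlike this toy, where each client's upload grows with the committee and every member must respond, the abstract's claims are that Tacita keeps each client's message constant‑size and lets aggregation finish even when some committee members drop out.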

Series: Cryptography Talk Series