Project Overview

Project Code: CIT 26

Project name:

Byzantine Attack Strategies in Federated Learning

TUM Department:

CIT - Electrical and Computer Engineering

TUM Chair / Institute:

Chair of Communications Engineering

Research area:

Trustworthy Distributed Learning

Student background:

Computer Engineering, Computer Science/Informatics, Electrical Engineering, Mathematics

Further disciplines:

Participation also possible online only:

Planned project location:

Technical University of Munich
Theresienstraße 90, Building N4
80333 München

Project Supervisor - Contact Details


Title:

Given name:

Sena

Family name:

Ergisi

E-mail:

sena.ergisi@tum.de

Phone:

+4917684491111

Additional Project Supervisor - Contact Details


Title:

Dr.

Given name:

Rawad

Family name:

Bitar

E-mail:

rawad.bitar@tum.de

Phone:

+49 (89) 289 - 25011

Additional Project Supervisor - Contact Details


Title:

Given name:

Family name:

E-mail:

Phone:

Project Description


Project description:


Federated Learning (FL) is an emerging distributed learning paradigm that enables a central server to train machine learning models on private data owned by multiple clients, without the clients ever sharing their raw data. Despite its appeal, FL remains vulnerable, particularly to Byzantine adversaries who intentionally try to disrupt the learning process. Robust FL was introduced to mitigate the effect of such Byzantine attacks. The goal is to design a mechanism that can estimate which clients are sending honest computations and which are acting maliciously.
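As a minimal, self-contained sketch (toy setup and numbers chosen purely for illustration, not taken from the project), the difference between plain averaging and a robust aggregation rule such as the coordinate-wise median can be seen when a single Byzantine client submits an arbitrary update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Honest clients send gradient-like updates close to the true direction (~1.0 per coordinate).
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 5))
# A single Byzantine client sends an arbitrarily large malicious update.
byzantine = np.full((1, 5), -100.0)
updates = np.vstack([honest, byzantine])

mean_agg = updates.mean(axis=0)          # plain FedAvg: dragged toward the attacker
median_agg = np.median(updates, axis=0)  # robust aggregation: stays near the honest updates

print(mean_agg)    # far from the honest direction (roughly -9.1 per coordinate)
print(median_agg)  # close to 1.0 per coordinate
```

A single outlier suffices to move the mean arbitrarily far, while the median ignores it as long as honest clients form a majority, which is precisely why robust aggregators replace plain averaging in Byzantine-robust FL.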

Since the data remain with the clients, a key challenge for the central server is to identify which computations are reliable, usually by inspecting how the clients' updates align with the direction of the global model. This challenge is further exacerbated when the clients' data are drastically different, a setting known as heterogeneous data distribution. In this setting, even the honest clients' computations differ significantly, making the distinction between reliable and malicious computations very challenging.

Previous studies [1] show that Byzantine adversaries can jam the training process by simply imitating the computation of one honest client of their choice. Other state-of-the-art attacks [2, 3] rely on colluding adversaries who send identical or similar updates to skew the majority consensus. In a recent work [4], we developed a robust FL scheme that penalizes over-proportional similarity among updates, rendering the colluding power of adversaries ineffective. This opens a new frontier: what is the most effective attack strategy available to adversaries once simple collusion is ruled out?
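The colluding attacks of [2, 3] can be sketched in a few lines. Below is a hedged, ALIE-style toy example (the setup, parameter names, and the value of z are my own choices for illustration, not taken from the cited papers): the adversaries agree on a single plausible-looking update, shifted from the honest statistics by a small factor z, and all submit it jointly.

```python
import numpy as np

rng = np.random.default_rng(1)

n_honest, n_byz, dim = 7, 3, 5
honest = rng.normal(loc=1.0, scale=0.2, size=(n_honest, dim))

# ALIE-style collusion: all adversaries send the SAME update, shifted from the
# empirical mean by a small multiple z of the per-coordinate standard deviation,
# so each individual update still looks statistically plausible.
z = 1.0
mu = honest.mean(axis=0)
sigma = honest.std(axis=0)
malicious = np.tile(mu - z * sigma, (n_byz, 1))

updates = np.vstack([honest, malicious])
median_agg = np.median(updates, axis=0)
# The identical colluding updates drag the coordinate-wise median weakly below
# the honest median, biasing training a little in every round.
print(np.median(honest, axis=0), median_agg)
```

This is exactly the colluding power that identical or near-identical updates buy the adversaries, and it is what a defense penalizing over-proportional similarity among updates, as in [4], takes away.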

This project examines the strength of existing attack strategies and the vulnerabilities of current defense mechanisms in FL. The primary focus is on designing novel, sophisticated adversarial strategies and evaluating their performance, thereby deepening our understanding of the limits of robustness in FL and providing insights for developing more resilient defense mechanisms. It is an opportunity to engage with one of the most exciting challenges in machine learning security: by pushing the limits of robustness in FL, the student will gain hands-on experience exploring the interplay between attacks and defenses, contributing to the design of more resilient machine learning systems.

[1] Karimireddy, S. P., He, L., and Jaggi, M. (2022). Byzantine robust learning on heterogeneous datasets via bucketing. In International Conference on Learning Representations.

[2] Baruch, M., Baruch, G., and Goldberg, Y. (2019). A little is enough: Circumventing defenses for distributed learning. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 8-14 December 2019, Vancouver, BC, Canada.

[3] Xie, C., Koyejo, O., and Gupta, I. (2019). Fall of empires: Breaking Byzantine-tolerant SGD by inner product manipulation. In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv, Israel, July 22-25, 2019, page 83.

[4] Ergisi, S., Maßny, L., and Bitar, R. (2025). ProDiGy: Proximity- and Dissimilarity-Based Byzantine-Robust Federated Learning. To be presented at FLTA 2025.

Working hours per week planned:

40

Prerequisites


Required study level minimum (at time of TUM PREP project start):

2 years of bachelor studies completed

Subject related:

- Strong background in statistics and probability theory
- Programming with Python

Other:

  • No keywords