Jan. 19, 2026
By Sophia Haoran Chen, Master of Applied Computing student
Kaiyu Li, assistant professor, Physics and Computer Science
(study co-authors: Na Ta and Yong Wang)
Collaboration can drive progress, but typically only when participants are treated fairly. When individual contributions are not properly recognized, motivation erodes and collaborative efforts begin to break down. This dynamic, long observed in the social world, has begun to surface in modern AI systems.
In many sensitive domains, such as medical diagnosis and financial risk management, institutions want to build more powerful AI models together while keeping data private. To address this, scientists developed "federated learning," a technique that enables parties to collaboratively train an AI model without direct data sharing. However, our findings suggest that this ideal does not always hold in practice.
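For readers who want a concrete picture of how this works, the sketch below illustrates the standard federated averaging pattern that underlies many federated learning systems: each client trains on its own private data and shares only model weights, which a server then averages. The function names and the simple linear model here are hypothetical illustrations, not taken from our paper or from any particular system.

import numpy as np

def local_update(global_weights, local_data, lr=0.1, steps=5):
    # Each client starts from the shared global model and trains on its
    # own private data; only the resulting weights leave the client.
    X, y = local_data
    w = global_weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, client_datasets):
    # The server never sees raw data; it only averages the clients'
    # returned weights, weighted by local dataset size.
    updates = [local_update(global_weights, d) for d in client_datasets]
    sizes = np.array([len(d[1]) for d in client_datasets], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)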

Photo: Kaiyu Li (left) and Sophia Haoran Chen
Our research shows that, in certain federated learning scenarios, participants can submit a "disguised" model update that appears reasonable and passes basic checks, yet contributes nothing to genuine model improvement. In some cases, such behavior can still be credited with roughly 15 to 20% of the contribution score under commonly used evaluation methods. More critically, these behaviors do not necessarily degrade overall model performance in the short term, which makes them difficult to detect.
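To make the idea of a "disguised" update concrete, here is a simplified, hypothetical example of one free-riding strategy discussed in the broader literature: returning the current global weights with a small amount of noise, so the update has a plausible shape and magnitude while involving no real training. This is an illustration only, not the specific attack construction from our paper.

import numpy as np

def free_rider_update(global_weights, noise_scale=1e-3, rng=None):
    # No local training happens: the client simply perturbs the global
    # weights so the submitted "update" looks plausible to basic checks.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, noise_scale, size=global_weights.shape)
    return global_weights + noise

A check that looks only at the update's shape, magnitude or short-term effect on accuracy can miss this kind of behavior, which is why contribution evaluation needs signals beyond raw performance.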
Over time, this hidden imbalance can gradually erode the fairness of collaborative systems, placing genuinely contributing participants at a disadvantage. Our study aims to expose this risk, to prompt researchers and system designers to rethink how contribution evaluation is designed, and to emphasize that performance alone is an insufficient measure of honest participation.
These issues are particularly relevant in real-world settings such as health care and finance, where collaboration relies on both strong privacy guarantees and mutual trust. Ensuring a more transparent and reliable collaborative AI environment is not only a technical challenge but also a practical requirement for sustainable privacy-preserving AI deployment.
Our paper, Breaking Robustness: Free-rider Attacks Against Contribution Evaluation in Federated Learning, was accepted at the 2025 IEEE International Conference on Collaborative Computing and received the Best Paper Award.
We will continue working on how federated learning systems can identify and mitigate strategic or opportunistic behavior. Rather than assuming participants are honest by default, we are particularly interested in defense-oriented mechanisms that explicitly account for such behavior. By strengthening the robustness and transparency of contribution evaluation, we hope to support the development of collaborative learning systems that are both trustworthy and practical for real-world use.