SEAMS 2026
Mon 13 - Tue 14 April 2026, Rio de Janeiro, Brazil
co-located with ICSE 2026

Federated Learning (FL) is increasingly adopted as an alternative to centralized Machine Learning (ML) techniques, as it allows clients to preserve the privacy of their data. However, FL systems pose new challenges in terms of adaptation, as design choices are conditioned by client characteristics and network conditions, thus necessitating adaptive strategies tailored to such a heterogeneous operational environment. Previous work introduced a set of architectural patterns to support practitioners at design time, but their effectiveness has only been investigated when statically activated throughout the FL process. This work presents a novel FL framework, namely FLiP, where a subset of the aforementioned patterns is dynamically and adaptively toggled in response to evolving performance metrics and boundary conditions. We empirically evaluate FLiP across multiple federation configurations and two learning tasks, considering both static and dynamic conditions. Results indicate that dynamically toggling architectural patterns can be beneficial under specific conditions, with cases leading to an improvement of up to 10% in learning accuracy, at the cost of negligible overhead at deployment time.
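The core idea of toggling a pattern in response to evolving performance metrics can be illustrated with a minimal sketch. This is not the authors' FLiP implementation; the `PatternController` class, its thresholds, and the stagnation heuristic are all hypothetical, meant only to show how a controller might flip a pattern on or off between federated rounds when the monitored metric stops improving.

```python
# Illustrative sketch (NOT the FLiP implementation): a controller that
# toggles a hypothetical architectural pattern between federated rounds
# when the monitored metric (e.g. global model accuracy) stagnates.

class PatternController:
    """Toggle a pattern after `patience` rounds of insufficient gain."""

    def __init__(self, min_gain=0.005, patience=2):
        self.min_gain = min_gain      # minimum per-round metric gain
        self.patience = patience      # stagnant rounds tolerated before toggling
        self.stagnant_rounds = 0
        self.last_metric = None
        self.pattern_active = False   # e.g. a client-selector pattern

    def observe(self, metric):
        """Record the latest round's metric; return whether the pattern is active."""
        if self.last_metric is not None and metric - self.last_metric < self.min_gain:
            self.stagnant_rounds += 1
        else:
            self.stagnant_rounds = 0
        self.last_metric = metric
        if self.stagnant_rounds >= self.patience:
            self.pattern_active = not self.pattern_active  # toggle the pattern
            self.stagnant_rounds = 0
        return self.pattern_active


controller = PatternController()
accuracies = [0.50, 0.60, 0.602, 0.603, 0.61]  # simulated per-round accuracy
states = [controller.observe(a) for a in accuracies]
print(states)  # → [False, False, False, True, True]
```

In this toy run, two consecutive rounds with gains below the threshold trigger a toggle; a real adaptive strategy would also weigh boundary conditions such as client availability or network state before switching.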