AI-based systems, especially those based on machine learning, have become central to modern societies. At the same time, users and legislators are becoming increasingly aware of privacy issues: users are reluctant to share their sensitive information, and new laws have been enacted to regulate how private data is handled (e.g., the GDPR).
Federated Learning (FL) has been proposed as a way to develop better AI systems without compromising users’ privacy or the legitimate interests of private companies. Although still in its infancy, FL has already shown significant theoretical and practical results, making it one of the hottest topics in the machine learning community.
Given its considerable potential for protecting users’ privacy while making the most of available data, we propose WAFL (Workshop on Advancements in Federated Learning Technologies) at ECML-PKDD 2025.
This workshop aims to focus the attention of the ECML-PKDD research community on the open questions and challenges in this thriving research area. Given the broad range of competencies in the ECML-PKDD community, the workshop welcomes both foundational contributions and contributions that expand the scope of these techniques, such as improvements in the interpretability and fairness of the learned models.
WORKSHOP SCHEDULE (Tentative)
Opening
Speakers: Mirko Polato and Roberto Esposito (University of Torino)
Keynote
Speaker: Lydia Y. Chen (University of Neuchâtel and Delft University of Technology). Title to be announced.
Technical Presentations
Coffee Break
Keynote
Speaker: Michael Kamp (Ruhr-University Bochum). Title to be announced.
Technical Presentations
Closing
TOPICS AND THEMES
The WAFL workshop will center on improving and studying the Federated Learning setting. It welcomes applied and theoretical contributions, as well as contributions on specific settings and benchmarking tools.
The topics include (but are not limited to):
Algorithmic and theoretical advances in FL
Federated Learning with non-iid data distributions
Security and privacy of FL systems (e.g., differential privacy, adversarial attacks, poisoning attacks, inference attacks, data anonymization, model distillation, secure multi-party computation ...)
Other non-functional properties of FL (e.g., fairness, interpretability/explainability, personalization ...)
FL variants and Decentralized Federated Learning (e.g., vertical FL, split-learning, gossip learning, ...)
Applications of FL (e.g., FL for healthcare, FL on edge devices, advertising, social networks, blockchain, web search ...)
Tools and resources (e.g., benchmark datasets, software libraries, ...)
SUBMISSION GUIDELINES
We invite submissions of original research on all aspects of Federated Learning (see the non-exhaustive list of topics above). Each accepted paper (short or long) will be included in the workshop proceedings, published in Springer Communications in Computer and Information Science. All papers will be presented in the talk sessions. Authors of short and long papers will have the option to opt in to or out of publication in the proceedings.
We accept the following types of submissions:
Short Papers (6 pages + references): Work-in-progress, position papers, or open problems. Accepted short papers will be included in the Springer Workshop Proceedings of ECML-PKDD 2025. Short papers must follow the ECML-PKDD 2025 formatting guidelines (see here).
Long Papers (12 pages + references): Novel, original research not published elsewhere. Accepted long papers will be included in the Springer Workshop Proceedings of ECML-PKDD 2025. Long papers must follow the ECML-PKDD 2025 formatting guidelines (see here).
Non-archival Submissions: Papers recently accepted or under review at other venues. These submissions have no formatting restrictions but must be accompanied by a cover letter explaining their relevance to the workshop. Non-archival submissions will not be included in the Springer Workshop Proceedings and will undergo a different selection process by the workshop organizers.
Short and long papers must be anonymized on a best-effort basis. We strongly encourage making code and data available anonymously (e.g., via an Anonymous GitHub repository or a Dropbox folder). The authors may have a (non-anonymous) pre-print published online, but it should not be cited in the submitted paper, to preserve anonymity. Reviewers will be asked not to search for it.
Submissions will be evaluated by at least two reviewers on the basis of relevance, technical quality, potential impact, and clarity. The reviewing process is double-blind (reviewers and area chairs are not aware of the identities of the authors; reviewers can see each other’s names). Papers must not include identifying information of the authors (names, affiliations, etc.), self-references, or links (e.g., GitHub, YouTube) that reveal the authors’ identities (e.g., references to own work should be given neutrally like other references, not mentioning ‘our previous work’ or similar). However, we recognize there are limits to what is feasible with respect to anonymization. For example, if you use data from your own organization and it is relevant to the paper to name this organization, you may do so.
Important dates are reported here.
ORGANIZERS
Mirko Polato
Assistant Professor
Department of Computer Science
University of Torino
Torino, Italy
Roberto Esposito
Associate Professor
Department of Computer Science
University of Torino
Torino, Italy