In this tutorial, the authors of TensorFlow Federated introduce the key concepts behind federated learning, an approach to machine learning in which a shared global model is trained across many participating clients that keep their training data local. Because no data is collected at a central location, yet every participant still benefits from the collective knowledge of the network, federated learning lets you build intelligent applications that leverage insights from data that would be too costly, sensitive, or impractical to collect centrally.
We’ll demonstrate how you can develop hands-on familiarity with federated learning using TensorFlow Federated (TFF), a new open-source framework in the TensorFlow ecosystem. We will introduce the key concepts behind TensorFlow and TFF, demonstrate by example how to set up a federated learning experiment and run it in a simulator, show what the code looks like under the hood and how to extend it, and briefly discuss options for future deployment to real devices.
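To give a flavor of what such an experiment computes, here is a minimal, framework-free sketch of the federated averaging idea that the talk demonstrates with TFF. All function names and data below are illustrative assumptions, not the TFF API: each simulated client takes one gradient step on its own data, and the server averages the resulting models weighted by client dataset size.

```python
# Illustrative sketch of federated averaging; names are hypothetical, not TFF API.

def local_update(weights, client_data, lr=0.1):
    """One gradient-descent step on a 1-D least-squares model (y ~ weights * x),
    computed locally on a client's own (x, y) examples."""
    grad = sum(2 * (weights * x - y) * x for x, y in client_data) / len(client_data)
    return weights - lr * grad

def federated_round(weights, clients):
    """Each client trains locally; the server averages the resulting models,
    weighted by how many examples each client holds."""
    total = sum(len(data) for data in clients)
    updates = [(local_update(weights, data), len(data)) for data in clients]
    return sum(w * n for w, n in updates) / total

# Three clients whose raw data never leaves them; only model weights are shared.
# Every client's data lies on the line y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)], [(1.5, 3.0), (2.5, 5.0)]]
weights = 0.0
for _ in range(50):
    weights = federated_round(weights, clients)
# weights converges toward 2.0, the slope underlying every client's data.
```

A real TFF simulation wraps the same round structure (broadcast, local training, weighted aggregation) behind library-provided abstractions, so the loop above is only meant to make the data flow concrete.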
The talk caters to audiences with a range of backgrounds:
Machine learning developers and practitioners, who would like to experiment with running their existing machine learning models and data in a federated setting, will learn how to do so using the Federated Learning API, the included simulation runtime, and the sample federated datasets.
Researchers, who would like to experiment with new federated learning algorithms, extend those included with the framework, or develop custom federated computations such as statistical analyses over sensitive data, will learn how to do so using the Federated Core API, a strongly-typed functional programming environment that makes it easy to mix TensorFlow code with federated communication abstractions.
Systems engineers and researchers, who would like to adapt TensorFlow Federated to new types of environments, will learn how they can benefit from the abstract, platform-independent representation of all computations expressed in TFF: at its core, TFF is designed to provide a smooth migration path for all TFF code from a simulation environment to a possible future deployment on real devices in production.
Peter Kairouz is a researcher interested in machine learning, security, and privacy. At Google, he is a Research Scientist working on decentralized and privacy-preserving machine learning algorithms. Prior to Google, his doctoral and postdoctoral research focused largely on building decentralized technologies for anonymous broadcasting over complex networks, understanding the fundamental trade-off between data privacy and utility, and leveraging state-of-the-art deep generative models for data-driven privacy.