We provide support for the development and deployment of privacy-preserving techniques for generating models from data, such as federated learning.
Sharing data in an industrial environment can be challenging, and many trust issues have to be overcome. Companies are often reluctant or unwilling to give their data to other parties, even if they have no immediate use for it themselves. We offer help with the development and deployment of technologies that allow models to be generated from data without the data itself being shared or its privacy compromised. One such example is federated learning, in which several companies jointly train models without exchanging their data: in each round, only the model parameters are transferred across the network, never the underlying data. This also makes federated learning well suited for online learning on edge devices, where it reduces communication overhead in decentralized settings, so it can be beneficial even in scenarios where privacy is not a concern.
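The round-based exchange described above can be sketched with federated averaging (FedAvg), the canonical aggregation scheme: each client trains locally on its own data, and a server averages the returned parameters. The toy 1-D linear model, the function names, and the synthetic data below are illustrative assumptions, not a specific framework's API.

```python
# Minimal sketch of federated averaging (FedAvg) on a toy model y = w * x.
# Each "client" keeps its data local; only the parameter w is exchanged.
import random

random.seed(0)

def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: gradient descent on mean squared error.
    Only the updated parameter leaves the client, never the data."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w, clients):
    """One communication round: clients train locally, the server averages
    the returned parameters, weighted by local dataset size."""
    updates = [(local_update(w, data), len(data)) for data in clients]
    total = sum(n for _, n in updates)
    return sum(wi * n for wi, n in updates) / total

# Three synthetic clients whose local data follow the same relationship y = 2 * x.
clients = [
    [(x, 2 * x + random.gauss(0, 0.01))
     for x in (random.uniform(-1, 1) for _ in range(50))]
    for _ in range(3)
]

w = 0.0
for _ in range(20):
    w = federated_round(w, clients)

print(round(w, 2))  # converges close to the true slope 2.0
```

In a real deployment the averaged quantity would be a full parameter vector and the clients would not share a random seed, but the communication pattern is the same: model parameters travel, data does not.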
SPECIAL ACCESS CONDITIONS
The quality of an ML model is typically upper bounded by the quality and quantity of the data it was trained on. This has implications, for example, for intelligent industrial production, where obtaining data is often expensive in terms of time and materials. Here federated learning offers a solution, allowing companies to train a common model without having to physically share their data. Over the past years we have set up a federated learning framework for intelligent production, and we are currently focusing on federated anomaly detection and federated learning on edge devices. We are interested in exploring novel use cases across a wide range of applications; in the context of production technology, examples include end-of-line inspection and predictive maintenance.
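To make the federated anomaly detection idea concrete, one simple variant has each site contribute only aggregate statistics (count, sum, sum of squares) rather than raw measurements, from which a server derives a global z-score detector. The scenario, thresholds, and all names below are hypothetical illustrations, not our framework's actual interface.

```python
# Hedged sketch: federated anomaly detection via a shared z-score model.
# Sites exchange only sufficient statistics, never raw sensor readings.
import math

def local_stats(values):
    """Computed on-site: sufficient statistics for mean and variance."""
    return len(values), sum(values), sum(v * v for v in values)

def global_model(stats):
    """Server-side aggregation into a global mean and standard deviation."""
    n = sum(s[0] for s in stats)
    mean = sum(s[1] for s in stats) / n
    var = sum(s[2] for s in stats) / n - mean * mean
    return mean, math.sqrt(var)

def is_anomaly(x, mean, std, z=3.0):
    """Flag a reading whose z-score exceeds the threshold."""
    return abs(x - mean) > z * std

# Three sites with similar sensor readings around 10.0 (synthetic example).
sites = [[10.0, 10.2, 9.9, 10.1], [9.8, 10.0, 10.3], [10.1, 9.9, 10.0]]
mean, std = global_model([local_stats(s) for s in sites])

print(is_anomaly(10.05, mean, std))  # normal reading -> False
print(is_anomaly(15.0, mean, std))   # far outlier -> True
```

More realistic deployments would replace the z-score with a learned model (e.g. a federated autoencoder), but the division of labor is the same: local computation at each site, aggregation at the server.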