Abstract
In federated learning, multiple devices each compute part of a common machine learning model using their own private data. These partial models (or their parameters) are then exchanged with a central server that builds an aggregated model. This sharing process may leak information about the data used to train them. The problem intensifies as the machine learning model becomes simpler, indicating a higher risk for single-hidden-layer feedforward neural networks such as extreme learning machines. In this paper, we establish a mechanism to disguise the input data of a system of linear equations while guaranteeing that the modifications do not alter its solutions, and we propose two approaches for applying these techniques to federated learning. Our findings show that extreme learning machines can be used in federated learning with an extra security layer, making them attractive in learning schemes with limited computational resources.
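The abstract does not detail the paper's masking mechanism, but the core idea, disguising a linear system without altering its solution, can be sketched by left-multiplying both sides by a random invertible matrix. This is a minimal illustration under that assumption, not the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Original (private) linear system A x = b, as solved e.g. when
# fitting the output weights of an extreme learning machine.
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

# Disguise the system: a random dense matrix M is invertible with
# probability 1, so (M A) x = (M b) has exactly the same solution,
# while M A and M b reveal neither A nor b individually.
M = rng.standard_normal((n, n))
x_masked = np.linalg.solve(M @ A, M @ b)

print(np.allclose(x, x_masked))  # the solution is unchanged
```

A party receiving only `M @ A` and `M @ b` can still recover `x`, but cannot reconstruct the original data `A` and `b` without knowing `M`.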
| Original language | English |
|---|---|
| Article number | 102693 |
| Journal | Journal of Computational Science |
| Volume | 92 |
| DOIs | |
| Publication status | Published - Dec 2025 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- Extreme learning machines
- Linear equation solving
- Private computation
- Private federated learning