
Songnam Hong


Online federated learning (OFL) is a promising framework for learning a sequence of global functions from distributed sequential data at local devices. In this framework, we first introduce a {\em single} kernel-based OFL method (termed S-KOFL) by suitably combining the random-feature (RF) approximation, online gradient descent (OGD), and federated averaging (FedAvg). However, it is nontrivial to develop a communication-efficient method with multiple kernels. One can construct a multi-kernel method (termed vM-KOFL) by following the extension principle of the centralized counterpart, but this vanilla method is impractical because its communication overhead grows linearly with the size of the kernel dictionary. Moreover, this problem cannot be addressed by existing communication-efficient techniques in federated learning, such as quantization or sparsification. Our major contribution is to propose a novel randomized algorithm (named eM-KOFL), which enjoys the advantages of multiple kernels while incurring a communication overhead similar to that of S-KOFL. We prove theoretically that eM-KOFL yields the same asymptotic performance as vM-KOFL, i.e., both methods achieve an optimal sublinear regret bound. By efficiently mimicking the key principle of eM-KOFL, we present pM-KOFL. Via numerical tests on real datasets, we demonstrate that pM-KOFL yields the same performance as vM-KOFL and eM-KOFL on various online learning tasks while having the same communication overhead as S-KOFL. These results suggest the practicality of the proposed pM-KOFL.
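To make the building blocks concrete, below is a minimal sketch of one single-kernel OFL round in the spirit of S-KOFL: a random-feature (RF) approximation of a Gaussian kernel, one online gradient descent (OGD) step per client on a freshly arrived sample, and federated averaging (FedAvg) at the server. All names (rf_map, client_ogd_step, fedavg), the squared loss, and the hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 100                            # input dimension, number of random features
W = rng.normal(scale=1.0, size=(D, d))   # RF frequencies for a Gaussian kernel
b = rng.uniform(0, 2 * np.pi, size=D)    # RF phases

def rf_map(x):
    """Random-feature approximation of a Gaussian kernel (illustrative)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def client_ogd_step(theta, x, y, lr=0.1):
    """One OGD step on the squared loss for a single streaming sample (x, y)."""
    z = rf_map(x)
    grad = 2.0 * (z @ theta - y) * z
    return theta - lr * grad

def fedavg(thetas):
    """Server aggregates the clients' updated local models by simple averaging."""
    return np.mean(thetas, axis=0)

# Simulated online rounds: each client observes one new sample per round,
# performs a local OGD update, and the server averages the updated models.
K, T = 4, 200                            # number of clients, number of rounds
theta_global = np.zeros(D)
true_fn = lambda x: np.sin(x.sum())      # unknown target function (simulation only)

for t in range(T):
    local = []
    for k in range(K):
        x, noise = rng.normal(size=d), 0.01 * rng.normal()
        y = true_fn(x) + noise
        local.append(client_ogd_step(theta_global.copy(), x, y))
    theta_global = fedavg(local)         # per-client upload cost is O(D)

x_test = rng.normal(size=d)
print("prediction:", rf_map(x_test) @ theta_global, "target:", true_fn(x_test))
```

Note that each client uploads only its D-dimensional RF weight vector per round; a naive multi-kernel extension in the style of vM-KOFL would upload one such vector per kernel in the dictionary, which is the linear growth in communication overhead that the paper's randomized eM-KOFL and pM-KOFL are designed to avoid.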