mtlgmm: Unsupervised Multi-Task and Transfer Learning on Gaussian Mixture Models
Unsupervised learning has been widely used in many real-world applications. One of the simplest and most important
unsupervised learning models is the Gaussian mixture model
(GMM). In this work, we study the multi-task learning problem
on GMMs, which aims to leverage potentially similar GMM
parameter structures among tasks to obtain improved learning
performance compared to single-task learning. We propose a
multi-task GMM learning procedure based on the
Expectation-Maximization (EM) algorithm that not only
effectively utilizes unknown similarities among related tasks
but is also robust against a fraction of outlier tasks from
arbitrary sources. The proposed procedure is shown to achieve
the minimax optimal rate of convergence for both the parameter
estimation error and the excess mis-clustering error, in a wide
range of regimes. Moreover, we generalize our approach to
tackle the problem of transfer learning for GMMs, where similar
theoretical results are derived. Finally, we demonstrate the
effectiveness of our methods through simulations and a real
data analysis. To the best of our knowledge, this is the first
work studying multi-task and transfer learning on GMMs with
theoretical guarantees. This package implements the algorithms
proposed in Tian, Y., Weng, H., & Feng, Y. (2022)
<arXiv:2209.15224>.
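
The multi-task procedure builds on the classical EM iteration for a
single task. As a point of reference, below is a minimal single-task
sketch in R: EM for a univariate two-component Gaussian mixture with
known unit variances. This illustrates only the EM building block, not
the package's multi-task algorithm; all names are local to the example.

    set.seed(1)

    # Simulate data from a two-component univariate Gaussian mixture:
    # z ~ Bernoulli(0.6), x | z = 1 ~ N(2, 1), x | z = 0 ~ N(-2, 1)
    n <- 500
    z <- rbinom(n, 1, 0.6)
    x <- rnorm(n, mean = ifelse(z == 1, 2, -2), sd = 1)

    # EM for a two-component GMM with known unit variances
    em_gmm <- function(x, n_iter = 200, tol = 1e-8) {
      w <- 0.5                 # mixing weight of component 2
      mu <- c(min(x), max(x))  # crude but effective initialization
      for (iter in seq_len(n_iter)) {
        # E-step: posterior probability each point came from component 2
        d1 <- (1 - w) * dnorm(x, mu[1], 1)
        d2 <- w * dnorm(x, mu[2], 1)
        gamma <- d2 / (d1 + d2)
        # M-step: re-estimate mixing weight and component means
        w_new <- mean(gamma)
        mu_new <- c(sum((1 - gamma) * x) / sum(1 - gamma),
                    sum(gamma * x) / sum(gamma))
        converged <- max(abs(c(mu_new - mu, w_new - w))) < tol
        w <- w_new
        mu <- mu_new
        if (converged) break
      }
      list(w = w, mu = mu, responsibilities = gamma)
    }

    fit <- em_gmm(x)
    fit$mu  # component mean estimates, close to c(-2, 2)
    fit$w   # mixing-weight estimate, close to 0.6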
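
For the package itself, a hypothetical call sequence might look as
follows. The function names mtlgmm() and tlgmm(), the list-of-matrices
input format, and the argument names are assumptions made for
illustration, not a confirmed API; please consult the package manual
for the actual interface.

    # Hypothetical usage sketch (names and arguments are assumptions):
    # library(mtlgmm)
    #
    # # one data matrix per task (10 tasks, 200 observations, 5 features)
    # x_list <- lapply(1:10, function(k) matrix(rnorm(200 * 5), 200, 5))
    #
    # fit_mtl <- mtlgmm(x = x_list)   # multi-task fit across all tasks
    # fit_tl <- tlgmm(x = x_list[[1]], fitted_bar = fit_mtl)
    #                                 # transfer-learning fit on a target task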