Abstract
Dictionary learning is a classic representation learning method that has been widely applied in signal processing and data analytics. In this paper, we investigate a family of ℓp-norm (p > 2, p ∈ N) maximization approaches to the complete dictionary learning problem from both theoretical and algorithmic perspectives. Specifically, we prove that the global maximizers of these formulations are very close to the true dictionary with high probability, even in the presence of Gaussian noise. Based on the generalized power method (GPM), we then develop an efficient algorithm for the ℓp-based formulations. We further establish its efficacy: over the sphere constraint, the population GPM iterates first quickly enter the neighborhood of a global maximizer and then converge linearly within that region. Extensive experiments demonstrate that the ℓp-based approaches achieve higher computational efficiency and better robustness than conventional approaches, with p = 3 performing best.
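The GPM update the abstract refers to can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm or experimental setup: the identity dictionary, the Bernoulli-Gaussian sparse codes, the problem sizes, and the sparsity level `theta` are all assumptions made here for the sake of a self-contained example.

```python
import numpy as np

# Sketch: generalized power method (GPM) for l_p-norm maximization
# (p = 3) over the unit sphere, on synthetic data Y = D X with a
# complete dictionary D (taken to be the identity here, so recovering
# a dictionary column means recovering a standard basis vector).
rng = np.random.default_rng(0)
n, m, theta, p = 10, 5000, 0.3, 3  # illustrative sizes and sparsity

# Bernoulli(theta)-Gaussian sparse codes; with D = I, the data is X.
X = rng.standard_normal((n, m)) * (rng.random((n, m)) < theta)
Y = X

def objective(q, Y, p):
    """f(q) = sum_i |q^T y_i|^p, the l_p objective over columns of Y."""
    return np.sum(np.abs(q @ Y) ** p)

def gpm(Y, p, iters=200, seed=1):
    """GPM iteration: q <- grad f(q) / ||grad f(q)||."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(Y.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(iters):
        c = Y.T @ q  # correlations q^T y_i
        # grad f(q) = p * sum_i sign(c_i) |c_i|^(p-1) y_i (drop the
        # constant p, which normalization removes anyway)
        g = Y @ (np.sign(c) * np.abs(c) ** (p - 1))
        q = g / np.linalg.norm(g)
    return q

q_hat = gpm(Y, p)
```

Because f(q) = Σᵢ |qᵀyᵢ|ᵖ is convex, each GPM step maximizes a linear lower bound of f over the sphere, so the objective value never decreases along the iterates; near a dictionary column the linear convergence rate stated in the abstract applies. On this toy instance, the returned unit vector `q_hat` aligns closely with one column of the (identity) dictionary, up to sign.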
| Original language | English |
| --- | --- |
| Pages | 280-289 |
| Number of pages | 10 |
| Publication status | Published - Aug 2020 |
| Event | 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020 - Virtual, Online; Duration: 3 Aug 2020 → 6 Aug 2020 |
Conference
| Conference | 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020 |
| --- | --- |
| City | Virtual, Online |
| Period | 3/08/20 → 6/08/20 |
Keywords
- Artificial intelligence
- Computational efficiency
- Data Analytics
- Data handling
- Gaussian noise (electronic)
- Signal processing
- Learning systems
ASJC Scopus subject areas
- Artificial Intelligence