Report, 2013

Toward Fast Transform Learning

Abstract

Dictionary learning is a matrix factorization problem. It aims at finding a dictionary of atoms that best represents an image according to a given objective. A usual objective is to represent an image, or a class of images, sparsely; this has led to many state-of-the-art algorithms in image processing. In practice, all dictionary learning algorithms alternate between estimating the dictionary and computing a sparse representation of the images in that dictionary. However, the numerical complexity of dictionary learning restricts its use to atoms with a small support, since computations with the learned dictionaries require too many resources to be deployed in large-scale applications. To alleviate these issues, this paper introduces a new strategy to learn dictionaries composed of atoms obtained by translating the composition of $K$ convolutions with $S$-sparse kernels of known support. The dictionary update step associated with this strategy is a non-convex optimization problem, which the present paper studies. We first reformulate the problem to reduce the number of its irrelevant stationary points. A Gauss-Seidel type algorithm, referred to as the Alternative Least Square Algorithm, is introduced to solve it. The search space of the considered optimization problem is of dimension $KS$, which is typically smaller than the size of the target atom and much smaller than the size of the image. Moreover, the complexity of the algorithm is linear in the size of the image, allowing larger atoms to be learned (as opposed to small patches). The conducted experiments show that, when $K$ is large (say $K=10$), we are able to approximate with very good accuracy many atoms, such as apodized modified discrete cosines, curvelets, sinc functions or cosines. We also argue empirically that, surprisingly, the algorithm generally converges to a global minimum for large values of $K$ and $S$.
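Since the atom is linear in each kernel once the other $K-1$ kernels are fixed, each step of the alternating scheme reduces to an ordinary least squares problem in the $S$ free coefficients. The following 1-D NumPy sketch illustrates this idea; it is not the authors' implementation, and the function names, the fixed-length truncation of the convolutions, the random initialization, and the dilated supports in the example are assumptions made for illustration.

```python
# Minimal 1-D sketch of the alternating least squares idea (not the authors'
# code): approximate a target atom u by composing K convolutions with kernels
# that are S-sparse on known supports, updating one kernel at a time.
import numpy as np

def compose(kernels, L):
    """Convolve the kernels together, keeping only the first L samples."""
    out = np.zeros(L)
    out[0] = 1.0                       # Dirac: neutral element for convolution
    for h in kernels:
        out = np.convolve(out, h)[:L]  # fixed-length truncation (an assumption)
    return out

def als_sparse_composition(u, supports, n_iter=50, seed=0):
    """supports[k] lists the S known support indices of kernel k."""
    rng = np.random.default_rng(seed)
    L = len(u)
    kernels = [np.zeros(L) for _ in supports]
    for h, sup in zip(kernels, supports):
        h[sup] = rng.standard_normal(len(sup))      # random init (an assumption)
    for _ in range(n_iter):
        for k in range(len(kernels)):
            # With the other kernels fixed, the atom is linear in kernel k:
            # build the L x S matrix mapping its S coefficients to the atom.
            rest = compose(kernels[:k] + kernels[k + 1:], L)
            cols = []
            for s in supports[k]:
                e = np.zeros(L)
                e[s] = 1.0
                cols.append(np.convolve(rest, e)[:L])   # shifted copy of rest
            A = np.stack(cols, axis=1)
            coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
            kernels[k][:] = 0.0
            kernels[k][supports[k]] = coeffs            # S-dimensional update
    return kernels, compose(kernels, L)

# Example: approximate an apodized cosine with K = 4 kernels of support size
# S = 4 on dilated supports (one possible choice, not taken from the paper).
L = 64
u = np.cos(2 * np.pi * np.arange(L) / 16) * np.hanning(L)
supports = [np.arange(0, 4 * 2 ** k, 2 ** k) for k in range(4)]
kernels, approx = als_sparse_composition(u, supports)
print(np.linalg.norm(u - approx) / np.linalg.norm(u))   # relative error
```

Note that the search space here is $KS = 16$ coefficients, far fewer than the 64 samples of the target atom, which is the point of the construction.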
Main file: FTL.pdf (2.88 MB). Origin: files produced by the author(s).

Dates and versions

hal-00862903, version 1 (17-09-2013)
hal-00862903, version 2 (28-11-2013)
hal-00862903, version 3 (16-07-2014)
hal-00862903, version 4 (03-03-2016)

Identifiers

  • HAL Id: hal-00862903, version 2

Cite

Olivier Chabiron, François Malgouyres, Jean-Yves Tourneret, Nicolas Dobigeon. Toward Fast Transform Learning. 2013. ⟨hal-00862903v2⟩
