A Statistical Learning Theory Approach of Bloat - INRIA (Institut National de Recherche en Informatique et en Automatique)
Conference paper, 2005

A Statistical Learning Theory Approach of Bloat

Abstract

Code bloat, the excessive increase of code size, is an important issue in Genetic Programming (GP). This paper proposes a theoretical analysis of code bloat in the framework of symbolic regression in GP, from the viewpoint of Statistical Learning Theory, a well-grounded mathematical toolbox for Machine Learning. Two kinds of bloat must be distinguished in that context, depending on whether the target function lies in the search space or not. Important mathematical results are then proved using classical results from Statistical Learning. Namely, the Vapnik-Chervonenkis dimension of programs is computed, and further results from Statistical Learning make it possible to prove that a parsimonious fitness ensures Universal Consistency (the solution minimizing the empirical error converges to the best possible error as the number of samples goes to infinity). However, it is proved that the standard method of choosing a maximal program size depending on the number of samples may still result in programs whose size grows without bound as their accuracy increases; a more sophisticated modification of the fitness is proposed that theoretically avoids unnecessary bloat while still preserving Universal Consistency.
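To make the idea of a parsimonious fitness concrete, here is a minimal sketch (an illustrative assumption, not the paper's exact construction): the fitness is the empirical error on the sample plus a penalty proportional to program size, so that among programs with equal error the smaller one is preferred. The program representation, `penalty` coefficient, and all names below are hypothetical.

```python
# Hypothetical sketch of parsimony pressure for symbolic regression.
# A "program" is represented as (callable, size_in_nodes); this is an
# illustrative encoding, not the authors' formalism.

def empirical_error(program, samples):
    """Mean squared error of the program on the sample set."""
    f, _size = program
    return sum((f(x) - y) ** 2 for x, y in samples) / len(samples)

def parsimonious_fitness(program, samples, penalty=0.01):
    """Empirical error plus a size penalty (parsimony pressure).

    The penalty term's role is to break ties toward smaller programs,
    discouraging bloat among behaviourally equivalent candidates."""
    _f, size = program
    return empirical_error(program, samples) + penalty * size

# Target function: y = x^2, sampled on a grid.
samples = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

# Two candidates with identical behaviour but different sizes.
small = (lambda x: x * x, 3)            # x*x        : 3 nodes
bloated = (lambda x: x * x + 0 * x, 7)  # x*x + 0*x  : 7 nodes

# Both have zero empirical error, but the parsimonious fitness
# ranks the smaller program strictly better.
print(parsimonious_fitness(small, samples) <
      parsimonious_fitness(bloated, samples))  # True
```

Note that the abstract's final result concerns a more sophisticated fitness modification than this fixed linear penalty; the sketch only illustrates the basic mechanism of penalizing size alongside empirical error.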
Main file: antibloatGecco2005_long_version.pdf (104.63 KB)

Dates and versions

inria-00000549 , version 1 (02-11-2005)

Identifiers

  • HAL Id : inria-00000549 , version 1

Cite

Olivier Teytaud, Sylvain Gelly, Nicolas Bredeche, Marc Schoenauer. A Statistical Learning Theory Approach of Bloat. Genetic and Evolutionary Computation Conference, Jun 2005, Washington D.C., USA. ⟨inria-00000549⟩
173 views
183 downloads
