
Paper #418

Title:
Worst-case bounds for the logarithmic loss of predictors
Authors:
Nicolò Cesa-Bianchi and Gábor Lugosi
Date:
October 1999
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
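For orientation, the quantities named in the abstract can be made precise with standard definitions; these are textbook formulations, not reproduced from the paper, and the outcome alphabet $\mathcal{X}$ is assumed finite. Writing $x^n = (x_1, \dots, x_n)$ and taking the predictor $p$ and each member $f$ of the class $\mathcal{F}$ to be probability distributions over length-$n$ sequences, the cumulative logarithmic loss of $p$ on $x^n$ is $-\ln p(x^n)$, and the worst-case and minimax regrets are

$$
R_n(p,\mathcal{F}) \;=\; \sup_{x^n \in \mathcal{X}^n} \Bigl( -\ln p(x^n) \;-\; \inf_{f \in \mathcal{F}} \bigl( -\ln f(x^n) \bigr) \Bigr),
\qquad
R_n^{*}(\mathcal{F}) \;=\; \inf_{p}\, R_n(p,\mathcal{F}).
$$

Shtarkov's theorem, which the abstract invokes, gives the closed form $R_n^{*}(\mathcal{F}) = \ln \sum_{x^n \in \mathcal{X}^n} \sup_{f \in \mathcal{F}} f(x^n)$, attained by the normalized maximum-likelihood predictor $p^{*}(x^n) \propto \sup_{f \in \mathcal{F}} f(x^n)$.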
Keywords:
Universal prediction, universal coding, empirical processes, on-line learning, metric entropy
JEL codes:
C1, C13
Research area:
Statistics, Econometrics and Quantitative Methods
Published in:
Machine Learning, 43(3), pp. 247-264, 2001
