Head of project at USI: Marco Zaffalon
Starting date: 1 October 2006
Duration: 24 months
We focus on a generalized Bayesian statistical setup that models prior beliefs by sets of prior densities. We make the realistic assumption that the observations used to update prior to posterior beliefs may not be perfect; in other words, they are the result of a so-called imperfect observational process. Our main concern is the following: what happens to statistical learning when we start in conditions of prior near-ignorance about a domain and the available sample is the result of an imperfect observational process?

Our past research has given a first important and surprising answer to this question: under these conditions learning cannot take place, because prior near-ignorance and imperfect observations appear to be incompatible. This result was derived in a multinomial statistical setup that models prior near-ignorance by a set of Dirichlet prior densities, for the special case of an imperfect observational process in which observations may be confounded with each other with given probabilities. We have since extended the setup to any set of prior densities modelling prior near-ignorance, and to very general imperfect observational processes. This has led us to generalize our previous results by deriving a necessary condition that establishes precise limits to the possibility of learning under prior near-ignorance. Since prior near-ignorance is a very relevant assumption in statistical inference, our results seem to point to a serious question at the foundations of statistics.

We propose to address this question along two major research lines. In the first, we will consider the case of quasi-perfect observational processes, that is, processes in which the probability of observation errors is small. This case is particularly interesting for applications, where such a small probability is typically neglected in favor of models of perfect observations.
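For concreteness, modelling prior near-ignorance by a set of Dirichlet prior densities, as described above, corresponds to the well-known imprecise Dirichlet model (IDM). With perfect observations, the IDM's posterior expectation of a chance lies in a closed-form interval. The sketch below is an illustration of that standard formula, not code from the project; the function name and the prior-strength default `s=2.0` are our choices:

```python
def idm_interval(counts, i, s=2.0):
    """Lower/upper posterior expectation of the chance theta_i under the
    imprecise Dirichlet model: the prior is the set of Dirichlet(s*t)
    densities, with t ranging over the interior of the simplex.
    With perfect observations the posterior bounds are
    [n_i / (n + s), (n_i + s) / (n + s)], where n is the total count."""
    n = sum(counts)
    return counts[i] / (n + s), (counts[i] + s) / (n + s)

# After observing 7 successes and 3 failures, the posterior probability of
# "success" has narrowed around the observed frequency 0.7:
lo, hi = idm_interval([7, 3], 0, s=2.0)  # -> (0.5833..., 0.75)
```

The interval's width, s/(n+s), shrinks as data accumulate: this is what "learning takes place" means here, and it shows that with perfect observations prior near-ignorance and learning are compatible.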
But our theoretical results show that there is a dramatic difference between models with perfect and imperfect observations: learning is prevented even when the probability of errors is small. Here common practice clashes with our theoretical results. It is therefore important to establish precise conditions under which the traditional approaches can be applied, and to evaluate the loss incurred when they are applied outside those conditions. It is also important to develop new methods for dealing with quasi-perfect observations, as alternatives to simply neglecting the probability of errors.

In the second line, we will consider the general problem of learning from imperfect observations under prior near-ignorance. In this general framework, we will first extend our previous results to obtain necessary and sufficient conditions for learning under prior near-ignorance. We will then study minimal strengthenings of prior near-ignorance that allow learning to take place, so as to provide alternatives to the assumptions currently used where prior near-ignorance is concerned.
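The contrast between perfect and quasi-perfect observations can be seen numerically in a minimal binary example (our illustration, not the project's derivation): one observation is reported through a channel that flips the true category with a small probability eps, the prior is Dirichlet(s*t), and the posterior is then a two-component mixture of updated Dirichlets. Sweeping t over the simplex, as near-ignorance requires, gives the range of posterior means:

```python
import numpy as np

def posterior_mean_theta0(t0, eps, s=2.0):
    """Posterior mean of theta_0 after one report of category 0, where the
    report flips the true category with probability eps, under a
    Dirichlet(s*t0, s*(1-t0)) prior. The posterior is a mixture of the
    two Dirichlets obtained by adding the count to either category."""
    like = np.array([1.0 - eps, eps])        # p(report=0 | true category i)
    w = like * np.array([t0, 1.0 - t0])      # mixture weights, unnormalized
    w /= w.sum()
    means = np.array([(s * t0 + 1.0) / (s + 1.0),  # count assigned to cat. 0
                      (s * t0) / (s + 1.0)])       # count assigned to cat. 1
    return float(w @ means)

grid = np.linspace(1e-6, 1.0 - 1e-6, 2001)   # t free in (0,1): near-ignorance
for eps in (0.0, 0.01):
    vals = [posterior_mean_theta0(t0, eps) for t0 in grid]
    print(f"eps={eps}: posterior mean ranges over "
          f"[{min(vals):.4f}, {max(vals):.4f}]")
```

With eps = 0 the posterior mean ranges roughly over [1/(s+1), 1]: the observation moves every prior in the set, so something is learned. With eps = 0.01, small as it is, the range is essentially [0, 1]: the set of posteriors stays vacuous, illustrating why neglecting a small error probability is not innocuous here.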