A generative probabilistic model for relational data consists of a family of probability distributions for relational structures over domains of different sizes. In most existing statistical relational learning (SRL) frameworks, these models are not projective: projectivity means that the marginal of the distribution for size-n structures on induced substructures of size k < n is equal to the given distribution for size-k structures. Projectivity is very beneficial in that it directly enables lifted inference and statistically consistent learning from sub-sampled relational structures. In earlier work, some simple fragments of SRL languages have been identified that represent projective models. However, no complete characterization of, and representation framework for, projective models has been given. In this paper we fill this gap: exploiting representation theorems for infinite exchangeable arrays, we introduce a class of directed graphical latent variable models that precisely correspond to the class of projective relational models. As a by-product we also obtain a characterization of when a given distribution over size-k structures is the statistical frequency distribution of size-k substructures in much larger size-n structures. These results shed new light on the old open problem of how to apply Halpern et al.'s "random worlds approach" for probabilistic inference to general relational signatures.
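To make the projectivity condition concrete, here is a minimal, self-contained sketch (our illustration, not code or notation from the paper): it verifies numerically that the Erdős–Rényi random graph model G(n, p), a textbook example of a projective family, satisfies the marginalization condition for n = 3 and k = 2. The helper name er_prob and the value p = 0.3 are our own choices for the example.

# Sketch (assumption: our illustration, not from the paper): the Erdos-Renyi
# model G(n, p) is projective -- marginalizing the distribution over 3-node
# graphs onto the subgraph induced by two fixed nodes recovers the 2-node
# distribution G(2, p).
from itertools import combinations, product

p = 0.3
nodes_n, nodes_k = [0, 1, 2], [0, 1]

def er_prob(edges_present, all_pairs):
    """Probability of one specific graph under G(|V|, p)."""
    prob = 1.0
    for pair in all_pairs:
        prob *= p if pair in edges_present else (1 - p)
    return prob

pairs_n = list(combinations(nodes_n, 2))
pairs_k = list(combinations(nodes_k, 2))

# Marginal, over all size-3 graphs, of the induced subgraph on {0, 1}.
marginal = {}
for presence in product([False, True], repeat=len(pairs_n)):
    edges = {pr for pr, present in zip(pairs_n, presence) if present}
    induced = frozenset(e for e in edges if set(e) <= set(nodes_k))
    marginal[induced] = marginal.get(induced, 0.0) + er_prob(edges, pairs_n)

# The size-2 distribution, computed directly.
direct = {}
for presence in product([False, True], repeat=len(pairs_k)):
    edges = frozenset(pr for pr, present in zip(pairs_k, presence) if present)
    direct[edges] = er_prob(edges, pairs_k)

for graph in direct:
    assert abs(marginal[graph] - direct[graph]) < 1e-12
print("Marginals of G(3, p) match G(2, p): the family is projective.")

Running the same check for a typical SRL model (e.g., a Markov logic network with its domain-size-dependent normalization) would make the assertion fail, which is exactly the non-projectivity the abstract refers to.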
Title of host publication: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20)
Publisher: International Joint Conferences on Artificial Intelligence
Publication status: Published - 2020