Relational Information Gain

Marco Lippi, Manfred Jaeger, Paolo Frasconi, Andrea Passerini

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

We introduce relational information gain, a refinement scoring function measuring the informativeness of newly introduced variables. The gain can be interpreted as a conditional entropy in a well-defined sense and can be approximated efficiently. In conjunction with simple greedy general-to-specific search algorithms such as FOIL, it yields an efficient algorithm that is competitive in terms of predictive accuracy and compactness of the learned theory. In conjunction with the decision tree learner TILDE, it offers a beneficial alternative to lookahead, achieving similar performance while significantly reducing the number of evaluated literals.
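For background on the scoring functions the abstract refers to, the sketch below implements the classic FOIL gain (Quinlan's original literal-scoring heuristic), not the paper's relational information gain. The function name and the example counts are illustrative assumptions.

```python
import math

def foil_gain(p0: int, n0: int, p1: int, n1: int, t: int) -> float:
    """Classic FOIL gain for adding a literal to a clause.

    p0, n0: positive/negative bindings covered before adding the literal
    p1, n1: positive/negative bindings covered after adding it
    t:      positive bindings covered both before and after
    """
    if p0 == 0 or p1 == 0:
        return 0.0
    # Information needed to signal a positive binding, before and after.
    info_before = -math.log2(p0 / (p0 + n0))
    info_after = -math.log2(p1 / (p1 + n1))
    # Gain = retained positives times the reduction in required information.
    return t * (info_before - info_after)

# Hypothetical example: a literal that filters out most negatives.
# Before: 50 pos / 50 neg; after: 40 pos / 5 neg; 40 positives retained.
gain = foil_gain(50, 50, 40, 5, 40)
```

A literal that sharpens the positive/negative ratio while keeping many positive bindings scores highly; the paper's contribution addresses the case where a new literal introduces fresh variables and this kind of coverage-based score is uninformative.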
Original language: English
Journal: Machine Learning
Volume: 83
Issue number: 2
Pages (from-to): 219-239
Number of pages: 21
ISSN: 0885-6125
DOI: 10.1007/s10994-010-5194-7
Publication status: Published - 2011

Cite this

Lippi, M., Jaeger, M., Frasconi, P., & Passerini, A. (2011). Relational Information Gain. Machine Learning, 83(2), 219-239. https://doi.org/10.1007/s10994-010-5194-7