Learning Aggregation Functions

Giovanni Pellegrini, Alessandro Tibo, Paolo Frasconi, Andrea Passerini, Manfred Jaeger

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review


Learning on sets is increasingly gaining attention in the machine learning community, due to its widespread applicability. Typically, representations over sets are computed using fixed aggregation functions such as sum or maximum. However, recent results have shown that universal function representation by sum- (or max-) decomposition requires either highly discontinuous (and thus poorly learnable) mappings or a latent dimension equal to the maximum number of elements in the set. To mitigate this problem, we introduce LAF (Learning Aggregation Function), a learnable aggregator for sets of arbitrary cardinality. LAF can approximate several widely used aggregators (such as average, sum, and maximum) as well as more complex functions (e.g., variance and skewness). We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures such as DeepSets and library-based architectures like Principal Neighbourhood Aggregation, and that it can be effectively combined with attention-based architectures.
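The abstract describes LAF as a single learnable aggregator that can interpolate between sum, mean, and max. A minimal NumPy sketch of the idea, assuming a ratio-of-power-sums parameterization built from terms of the form L_{a,b}(x) = (Σᵢ xᵢᵃ)ᵇ; the function names `L` and `laf` and the specific parameter tuples below are illustrative, not the authors' exact formulation:

```python
import numpy as np

def L(x, a, b):
    # Building block L_{a,b}(x) = (sum_i x_i^a)^b over a set of
    # positive scalars x; a and b would be learned in practice.
    return np.sum(x ** a) ** b

def laf(x, params):
    # A ratio of two affine combinations of L terms. With suitable
    # (learnable) parameters this recovers standard aggregators:
    #   sum:  num = L_{1,1},         den = L_{0,0} = 1
    #   mean: num = L_{1,1},         den = L_{0,1} = |x|
    #   max:  num ~= L_{r,1/r} (large r), den = 1
    (alpha, a, b), (beta, c, d), (gamma, e, f), (delta, g, h) = params
    num = alpha * L(x, a, b) + beta * L(x, c, d)
    den = gamma * L(x, e, f) + delta * L(x, g, h)
    return num / den

x = np.array([0.2, 0.5, 0.3])
# Fixing parameters to recover the sum (num = L_{1,1}, den = 1):
sum_params = ((1.0, 1.0, 1.0), (0.0, 1.0, 1.0),
              (1.0, 0.0, 0.0), (0.0, 1.0, 1.0))
print(laf(x, sum_params))  # ~1.0, the sum of x
```

Because the same parametric form covers sum, mean, and (approximately) max, gradient descent can move continuously between these aggregators instead of committing to one fixed choice up front.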
Original language: English
Title of host publication: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21)
Publisher: International Joint Conferences on Artificial Intelligence
Publication date: 2021
ISBN (Electronic): 978-0-9992411-9-6
Publication status: Published - 2021
Event: International Joint Conference on Artificial Intelligence 2021 - Montreal, Canada
Duration: 19 Aug 2021 – 27 Aug 2021


Conference: International Joint Conference on Artificial Intelligence 2021

