arxiv:2408.03404

Set2Seq Transformer: Temporal and Position-Aware Set Representations for Sequential Multiple-Instance Learning

Published on Aug 6, 2024

Abstract

AI-generated summary

The Set2Seq Transformer jointly models permutation-invariant set structure and temporal dependencies by learning temporal and position-aware representations of sets within a sequence in an end-to-end multimodal manner.

In many real-world applications, modeling both the internal structure of sets and their temporal relationships is essential for capturing complex underlying patterns. Sequential multiple-instance learning aims to address this challenge by learning permutation-invariant representations of sets distributed across discrete timesteps. However, existing methods either focus on learning set representations at a static level, ignoring temporal dynamics, or treat sequences as ordered lists of individual elements, lacking explicit mechanisms for representing sets. Crucially, effective modeling of such sequences of sets often requires encoding both the positional ordering across timesteps and their absolute temporal values to jointly capture relative progression and temporal context. In this work, we propose Set2Seq Transformer, a novel architecture that jointly models permutation-invariant set structure and temporal dependencies by learning temporal and position-aware representations of sets within a sequence in an end-to-end multimodal manner. We evaluate our Set2Seq Transformer on two tasks that require modeling set structure alongside temporal and positional patterns, but differ significantly in domain, modality, and objective. First, we consider a fine art analysis task, modeling artists' oeuvres for predicting artistic success using a novel dataset, WikiArt-Seq2Rank. Second, we utilize our Set2Seq Transformer for short-term wildfire danger forecasting. Through extensive experimentation, we show that our Set2Seq Transformer consistently improves over traditional static multiple-instance learning methods by effectively learning permutation-invariant set, temporal, and position-aware representations across diverse domains, modalities, and tasks. We release all code and datasets at https://github.com/thefth/set2seq-transformer.
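
The abstract describes two modeling stages: a permutation-invariant encoder that summarizes each set of instances, and a sequence model over the resulting set embeddings that combines relative positional order with absolute temporal values. The sketch below is a minimal, illustrative PyTorch rendering of that idea, not the authors' released implementation (see the linked repository for that); the attention-pooling set encoder, the learned positional embeddings, the linear projection of timestamps, and all module names, shapes, and hyperparameters are assumptions made for illustration.

```python
# Hedged sketch of a sequence-of-sets model in the spirit of the abstract above.
# NOT the authors' implementation; all names and hyperparameters are illustrative.

import torch
import torch.nn as nn


class SetPooling(nn.Module):
    """Permutation-invariant attention pooling over a set of instance embeddings."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one attention score per instance

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, dim) -> (batch, dim); order of instances does not matter
        weights = torch.softmax(self.score(x), dim=1)
        return (weights * x).sum(dim=1)


class Set2SeqSketch(nn.Module):
    """Encode each set permutation-invariantly, then model the sequence of sets."""

    def __init__(self, dim: int = 128, max_len: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.pool = SetPooling(dim)
        self.pos_emb = nn.Embedding(max_len, dim)   # relative position within the sequence
        self.time_proj = nn.Linear(1, dim)          # absolute temporal value (e.g. a normalized year)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, 1)               # task-specific output (e.g. a ranking or danger score)

    def forward(self, sets: torch.Tensor, timestamps: torch.Tensor) -> torch.Tensor:
        # sets: (batch, seq_len, set_size, dim); timestamps: (batch, seq_len) as floats
        b, t, n, d = sets.shape
        set_repr = self.pool(sets.view(b * t, n, d)).view(b, t, d)    # one embedding per timestep
        positions = torch.arange(t, device=sets.device).unsqueeze(0)  # 0 .. t-1
        h = set_repr + self.pos_emb(positions) + self.time_proj(timestamps.unsqueeze(-1))
        h = self.temporal(h)                                          # order- and time-aware encoding
        return self.head(h[:, -1])                                    # predict from the final timestep


# Illustrative usage with random tensors; all shapes are assumptions.
model = Set2SeqSketch()
sets = torch.randn(2, 10, 5, 128)     # 2 sequences, 10 timesteps, 5 instances per set
timestamps = torch.rand(2, 10)        # e.g. normalized years
print(model(sets, timestamps).shape)  # torch.Size([2, 1])
```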
