---
size_categories:
- 100K<n<1M
task_categories:
- video-classification
title: 'Trokens: Semantic-Aware Relational Trajectory Tokens Dataset'
tags:
- computer-vision
- action-recognition
- few-shot-learning
- video-understanding
- point-tracking
viewer: false
license: cc-by-nc-4.0
---

# Trokens: Semantic-Aware Relational Trajectory Tokens for Few-shot Action Recognition

This repository contains the preprocessed data for "Trokens: Semantic-Aware Relational Trajectory Tokens for Few-Shot Action Recognition" (ICCV 2025).

[**Paper**](https://arxiv.org/abs/2508.03695) | [**Project Page**](https://trokens-iccv25.github.io/) | [**Code**](https://github.com/pulkitkumar95/trokens)

## Dataset Overview

This dataset provides semantic-aware relational trajectory tokens (Trokens) extracted from multiple action recognition datasets for few-shot action recognition. It includes semantically meaningful point trajectories extracted using CoTracker3 and DINOv2 features, along with few-shot episode split information.

## Dataset Structure

The dataset contains two main components:

### 1. Point Tracking Data (`cotracker3_bip_fr_32/`)

Each dataset is packaged as a zip file. To extract all of them, run the following (the quotes matter: they let `unzip` expand the wildcard itself rather than the shell):

```bash
cd cotracker3_bip_fr_32
unzip '*.zip'
```

Semantic point trajectories extracted using CoTracker3 with bipartite clustering on DINOv2 features:

```
cotracker3_bip_fr_32/
└── {dataset_name}/
    └── feat_dump/
        └── {video_name}.pkl
```

Each pickle file contains:
- **`pred_tracks`**: Tracked point coordinates across frames, shape [T, N, 2]
- **`pred_visibility`**: Visibility mask for each point, shape [T, N]
- **`obj_ids`**: Object/cluster IDs for each point, shape [N]
- **`point_queries`**: Original query point indices, shape [N]

Each file also contains **`vid_info`**, metadata about the video the points were extracted from:
- **`fps`**: FPS at which the video was processed for point tracking.
- **`height`**: Height of the video.
- **`width`**: Width of the video.

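Since `pred_tracks` stores pixel coordinates, `vid_info` is handy for normalizing them to a resolution-independent range. A minimal sketch, using synthetic arrays in place of a real pickle file (the shapes and width/height values below are made up for illustration):

```python
import numpy as np

# Synthetic stand-ins for one file's contents: T frames, N tracked points
T, N = 32, 4
rng = np.random.default_rng(0)
vid_info = {"fps": 10, "height": 360, "width": 640}  # hypothetical values
pred_tracks = rng.random((T, N, 2)) * [vid_info["width"], vid_info["height"]]

# Divide x by width and y by height so coordinates lie in [0, 1]
norm_tracks = pred_tracks / np.array([vid_info["width"], vid_info["height"]])

print(norm_tracks.shape)  # (32, 4, 2)
```

Normalized coordinates make trajectories comparable across videos with different resolutions.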
### 2. Few-shot Split Information (`few_shot_info/`)

Data splits for few-shot learning evaluation across multiple datasets.

## Point Extraction Details

Code for extraction can be found on the GitHub repo [here](https://github.com/pulkitkumar95/trokens/tree/main/point_tracking). Some details are provided below.

### Semantic Point Tracking
- **Method**: CoTracker3 with semantic clustering on DINOv2 features
- **Clustering**: Bipartite clustering for semantic entity detection
- **Parameters**:
  - Clustering method: `bipartite`
  - Number of frames for clustering: 32
  - Points filtered based on spatial proximity to remove redundancy

### Video Processing
- **Frame Rate**:
  - Most datasets: 10 fps
  - Something Something V2 (SSV2): 12 fps (original video fps)
- **Point Filtering**: Redundant points removed based on spatial proximity
- **GPU Acceleration**: CUDA support for efficient processing

### Key Features
- Robust point tracking across video frames using CoTracker3
- Semantic point extraction through clustering on DINOv2 features
- Point filtering to remove redundant tracks
- Support for different clustering strategies and parameters

## Supported Datasets

Point tracking data is available for the few-shot splits of multiple action recognition datasets:
- **Something Something V2 (SSV2)**
- **Kinetics**
- **UCF-101**
- **HMDB-51**
- **FineGym**

## Usage

### Loading Point Tracking Data

```python
import pickle

# Load point tracking data for a video (note the feat_dump/ subdirectory)
with open('cotracker3_bip_fr_32/{dataset_name}/feat_dump/{video_name}.pkl', 'rb') as f:
    data = pickle.load(f)

pred_tracks = data['pred_tracks']          # [T, N, 2] - point coordinates
pred_visibility = data['pred_visibility']  # [T, N] - visibility mask
obj_ids = data['obj_ids']                  # [N] - cluster/object IDs
point_queries = data['point_queries']      # [N] - query point indices
vid_info = data['vid_info']                # fps, height, width of the source video
```
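Once loaded, `obj_ids` groups points into semantic clusters. A minimal sketch of averaging the trajectories within each cluster, using synthetic arrays standing in for real pickle contents:

```python
import numpy as np

# Synthetic stand-ins mimicking the pickle fields
T, N = 16, 6
rng = np.random.default_rng(0)
pred_tracks = rng.random((T, N, 2))     # [T, N, 2] point coordinates
obj_ids = np.array([0, 0, 1, 1, 1, 2])  # [N] cluster ID per point

# Mean trajectory of all points belonging to each semantic cluster
cluster_means = {
    int(c): pred_tracks[:, obj_ids == c].mean(axis=1)  # [T, 2] per cluster
    for c in np.unique(obj_ids)
}

print(sorted(cluster_means))   # [0, 1, 2]
print(cluster_means[0].shape)  # (16, 2)
```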

### Loading Few-shot Splits

```python
# Load few-shot episode information
# (Structure depends on the specific dataset format)
```

## Applications

This dataset is designed for:
- **Few-shot Action Recognition**: Training models with limited labeled examples
- **Video Understanding**: Learning from semantic-aware relational trajectory tokens (Trokens)
- **Point Tracking Research**: Semantic point trajectory analysis
- **Action Recognition**: General video classification tasks

## Technical Details

### Dependencies
- PyTorch
- NumPy
- Pandas
- Einops
- CoTracker3 (for point tracking)
- DINOv2 (for feature extraction)

### Point Extraction Pipeline
1. **Feature Extraction**: DINOv2 features computed for video frames
2. **Semantic Clustering**: Bipartite clustering to identify semantic entities
3. **Point Sampling**: Points sampled from cluster centers
4. **Trajectory Tracking**: CoTracker3 used to track points across frames
5. **Post-processing**: Redundant points filtered based on spatial proximity

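The proximity filtering in step 5 can be illustrated with a greedy sketch. The actual threshold, metric, and algorithm used in the repo are not documented here, so `filter_redundant_points` and its `min_dist` value are assumptions for illustration only:

```python
import numpy as np

def filter_redundant_points(points, min_dist=8.0):
    """Greedily keep points that are at least min_dist apart.

    Illustrative stand-in for the proximity-filtering step;
    min_dist=8.0 is an assumed threshold, not the repo's value.
    """
    kept_idx = []
    for i, p in enumerate(points):
        # Keep a point only if it is far enough from all points kept so far
        if all(np.linalg.norm(p - points[j]) >= min_dist for j in kept_idx):
            kept_idx.append(i)
    return np.array(kept_idx)

points = np.array([[0.0, 0.0], [3.0, 4.0], [50.0, 50.0], [52.0, 50.0]])
keep = filter_redundant_points(points)
print(keep)  # [0 2]
```

Here the second point (5 px from the first) and the fourth point (2 px from the third) are dropped as redundant.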
## Citation

If you use this dataset in your research, please cite our papers:

```bibtex
@inproceedings{kumar2025trokens,
  title={Trokens: Semantic-Aware Relational Trajectory Tokens for Few-Shot Action Recognition},
  author={Kumar, Pulkit and Huang, Shuaiyi and Walmer, Matthew and Rambhatla, Sai Saketh and Shrivastava, Abhinav},
  booktitle={International Conference on Computer Vision},
  year={2025}
}

@inproceedings{kumar2024trajectory,
  title={Trajectory-aligned Space-time Tokens for Few-shot Action Recognition},
  author={Kumar, Pulkit and Padmanabhan, Namitha and Luo, Luke and Rambhatla, Sai Saketh and Shrivastava, Abhinav},
  booktitle={European Conference on Computer Vision},
  pages={474--493},
  year={2024},
  organization={Springer}
}
```

## Authors

[**Pulkit Kumar***](https://www.cs.umd.edu/~pulkit/)<sup>1</sup> · [**Shuaiyi Huang***](https://shuaiyihuang.github.io/)<sup>1</sup> · [**Matthew Walmer**](https://www.cs.umd.edu/~mwalmer/)<sup>1</sup> · [**Sai Saketh Rambhatla**](https://rssaketh.github.io)<sup>1,2</sup> · [**Abhinav Shrivastava**](http://www.cs.umd.edu/~abhinav/)<sup>1</sup>

<sup>1</sup>University of Maryland, College Park&nbsp;&nbsp;&nbsp;&nbsp;<sup>2</sup>GenAI, Meta<br>
<sup>*Equal contribution</sup>

## License

This dataset is licensed under the [CC-BY-NC-4.0 License](https://creativecommons.org/licenses/by-nc/4.0/).

## Acknowledgments

This dataset is built upon:
- [CoTracker](https://github.com/facebookresearch/co-tracker): For robust point tracking
- [TATs](https://github.com/pulkitkumar95/tats): Trajectory-aligned Space-time Tokens for Few-shot Action Recognition
- [DINOv2](https://github.com/facebookresearch/dinov2): For semantic feature extraction

We thank the authors for making their code publicly available.