paper_title stringclasses 3 values | paper_id stringclasses 3 values | conference stringclasses 1 value | review_id stringclasses 10 values | weakness_content stringlengths 37 1.36k | perspective stringclasses 7 values | rebuttal_content stringlengths 83 4.59k | rebuttal_label stringclasses 4 values |
|---|---|---|---|---|---|---|---|
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | UQfBBoocAY | Although the paper is generally well-structured, the title mentions `low-resource` languages. However, the two tasks leveraged are primarily on high-resource languages, rather than low-resourced language. I would suggest to the authors to include more tasks - there are many low-resource language datasets (for instance on African languages MasakhaNEWS, Masakhaner (1.0 and 2.0 - which have been cited by the way but not used), MasakhaPOS; Indic languages: https://github.com/AI4Bharat/indicnlp_catalog; etc) and tasks. | Experiments | Thank you for recommending these excellent datasets for our evaluation.
We agree that diversifying our dataset to include African and Indic languages will significantly strengthen our paper's scope and alignment with its title. To address this, we have initiated experiments with MasakhaNEWS and plan to conduct further research with MasakhaNER and IndicNLP Catalog datasets shortly.
**MasakhaNEWS** is a benchmark dataset for news topic classification covering 16 languages widely spoken in Africa, where African languages are severely under-represented in NLP research due to lack of datasets covering several NLP tasks. The task involves categorizing news articles into different categories like sports, business, entertainment, and politics. We choose English, Hausa, Kiswahili, French and Yorùbá in our preliminary experiments. We sample 1433 instances for training and 411 for evaluation sets for each language.
Table 2 in our revised paper presents the preliminary results of experiments focused on MasakhaNEWS. When employing Federated Prompt Tuning in comparison to the monolingual and centralized counterparts, in general, a significant gain in accuracy is observed when adopting our Federated approach compared with the monolingual baseline.
| Method | eng | fra | hau | swa | yor | Avg |
|-------------------------------|-------|-------|-------|-------|-------|------|
| PE_Monolingual | 79.08 | 84.91 | 75.18 | 76.64 | 52.8 | 73.7 |
| PE_Centralized | 79.81 | 87.10 | 80.78 | 84.18 | 64.48 | 79.3 |
| PE_FL (Ours) | 82.99 | 89.81 | 65.96 | 86.16 | 57.20 | 76.4 | | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | YhvDQa0GKX | The proposed is a very trivial combination of federated learning and prompt tuning, which both are established methodology in their own realm. There is no novelty, such as modification or adjustment to the method that may have give a better results. In other words, people with an objective to do federated learning for privacy purpose can easily come up with prompt tuning as a solution to reduce costs. | Novelty | We appreciate the opportunity to address the concerns raised by the reviewer and would like to defend our proposal, emphasizing its novelty and significance. In summary, we would like to clarify that our paper introduces federated prompt tuning as a solution to help address the **linguistic and geographic boundaries** hindering the application of LLMs to **various regions and lower-resource languages**.
We would like to further clarify from the following two aspects:
1. We emphasize the unique background and pressing need of our work, as we noticed the review may **overlook the part of multilingual and low-resource language in the initial review**.
First, our work primarily focuses on the under-representation of multilingual and low-resource languages in large language models. As natural language processing technologies advance, not all languages have been treated equally by developers and researchers. There are around 7,000 languages spoken in the world, and approximately 400 languages have more than 1 million speakers. However, there is scarce coverage of multilingual datasets. This is especially true for low-resource languages, where data scarcity is a major bottleneck. Furthermore, the under-indexing of certain languages is also driven by access to compute resources. Mobile data, compute, and other computational resources may often be expensive or unavailable in regions that are home to **under-represented languages**. Unless we address this disproportionate representation head-on, we risk perpetuating this divide and further widening the gap in language access to new technologies. One pressing example is **biomedical data**. Due to its global scale, this digital content is accessible in a variety of languages, yet most existing NLP tools remain English-centric. This situation highlights the need for effective strategies: how can we exploit abundant labeled data from resource-rich languages to make predictions in resource-lean languages?
Also, we wanted to highlight the urgency and timeliness of the problem. The problem is very timely compared to other application scenarios. It was not even considered a year ago. Previously, due to the smaller size of language models, the demand for data was not as high, and different kinds and sources of data were treated equally. Currently, the progress of LLMs, their usability, the amount of attention they receive, and the increased regulation on data, compound and lead to the urgency of this problem, where we are among **the first batch to attempt to break both lingual and physical barriers**. | DWC |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | YhvDQa0GKX | Though it may have implicitly inferred by the concept of FL, the paper did not mention why and how federated learning helps with privacy and in which case one should use FL for their application. | Writing | We thank the reviewer for the insightful comments and concerns regarding privacy! We appreciate the opportunity to clarify this aspect of our work. It's important to note that multilingual finetuning here is not an approach for preserving privacy but rather a problem we aim to solve.
1. Our approach inherently supports data privacy, specifically by complying with international data privacy regulations. This compliance minimizes the need for cross-border data transmission, ensuring legal compliance and facilitating collaboration among entities with limited local computing resources, as detailed in Section 3.2.
2. We directly address privacy concerns by reducing the volume of transmitted data, thereby limiting potential privacy breaches. As demonstrated in Section 5.4, transmitting fewer parameters significantly reduces the risk of privacy leakage, aligning our methodology with the privacy focus highlighted in the abstract.
3. We would like to provide evidence for alleviating memorization from the following aspects:
(1) By freezing the core language model parameters, we prevent the model from altering its foundational understanding of language. Consequently, the prompt encoder reduces the risk of memorizing specific lexical cues and spurious correlations, as discussed in Section 5.1 [1].
(2) Components of Federated Learning play an essential role in reducing unintended memorization [2]. Specifically, clustering data according to users—a key design element in FL—significantly reduces such memorization. Additionally, using the Federated Averaging method for training further decreases the risk.
4. Regarding privacy protection, we acknowledge that we did not add extra privacy protection techniques to defend against potential privacy attacks, such as gradient inversion. Therefore, we appreciate the reviewer's valid points and have revised our paper to clarify our contribution to privacy and removed related claims about the capability of privacy protection to avoid confusion. However, various methods like secure aggregation (SA) and differential privacy (DP) can be applied in conjunction with our pipeline to further enhance privacy protection.
[1] Lester, Brian, Rami Al-Rfou, and Noah Constant. "The Power of Scale for Parameter-Efficient Prompt Tuning." EMNLP 2021.
[2] Thakkar et al. "Understanding Unintended Memorization in Language Models Under Federated Learning." PrivateNLP 2021. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | YhvDQa0GKX | There are better parameter-efficient finetuning methods, such as LORA/QLora, that the authors should conduct experiments on and do comparision with prompt tuning. | Experiments | Thank you for your valuable suggestions! Following the reviewer's constructive feedback, we have implemented experiments with LoRA (r=8, lora_alpha=16, lora_dropout=0.1) and summarized the results in the table below.
Table 4 in our revised paper presents the preliminary results of experiments on the NC task. Bold scores indicate the best performance between Prompt Tuning and LoRA in each column.
| Method | en | es | fr | de | ru | Avg |
|---------------------------------|------|------|------|------|------|------|
| Monolingual | 92.4 | 84.7 | 79.5 | 88.3 | 89.0 | 86.8 |
| Centralized | 93.9 | 86.7 | 82.9 | 89.5 | 88.6 | 88.3 |
| FL (IID) | 94.1 | 86.9 | 82.7 | 89.4 | 88.8 | 88.4 |
| FL (Non-IID) | 92.4 | 86.3 | 81.2 | 88.9 | 84.7 | 86.7 |
| PE_Monolingual | 82.9 | 59.7 | 47.3 | 71.4 | 60.0 | 64.3 |
| PE_Centralized | 89.1 | 76.2 | 67.4 | 78.8 | 75.9 | 77.5 |
| PE_FL (IID) (Ours) | 91.2 | 82.2 | 76.5 | 86.4 | 81.6 | 83.6 |
| PE_FL (Prompt Tuning) (Non-IID) (Ours) | 87.8 | **79.2** | 73.7 | **83.1** | 79.5 | **80.7** |
| PE_FL (LoRA) (Non-IID) (Ours) | **89.3** | 76.0 | **75.4** | 75.8 | **83.2** | 79.9 |
We also conducted a comparison of parameter efficiency and communication overhead in the NC task:
| Method | # Trainable Params | Communication Cost |
|--------------------------|--------------------|--------------------|
| Federated Full Finetuning| 278,655,764 | 108GB |
| Federated Prompt Tuning (Ours) | 1,202,708 | 478.93MB |
| Federated LoRA (Ours) | 1,491,476 | 593.92MB |
Additionally, we have included Figure 7 in Section 5.4 of our revised paper to clearly illustrate the comparison between Prompt Tuning and LoRA.
The results demonstrate that Federated LoRA and Federated Prompt Tuning achieve comparable performance, with Federated Prompt Tuning showing a slight advantage. In terms of data transmission and communication cost, Prompt Tuning requires only about 80% of the resources compared to LoRA, and leads to less privacy leakage.
Furthermore, as the pretrained model scales up, the performance of Prompt Tuning rapidly improves, approaching or even surpassing full finetuning. This indicates its significant potential. Prompt Tuning also enhances the overall model's generalization capabilities. The prompt encoder acts as a mechanism to extract linguistic-specific patterns on the client and general linguistic patterns on the server, showcasing advantages that adapters cannot match.
This justifies our use of Federated Prompt Tuning in our research, considering its efficiency in terms of parameters and communication, as well as its capability for generalization and adaptation to low-resource languages, which are crucial and undoubtedly worth it.
Regarding QLoRA, we did not consider quantization in our study since our focus is solely on updating the parameters of prompt encoders on clients and the server, keeping the pre-trained model frozen. QLoRA involves quantizing the pre-trained models, which falls outside the scope of our discussion and does not contribute to reducing communication costs, a key bottleneck in the federated setting. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | YhvDQa0GKX | The results show prompt tuning are much worse than full-federated tuning, thus casting doubt if the cost-saving is worth it. | Evaluation | Thank you for your valuable suggestions! Following the reviewer's constructive feedback, we have implemented experiments with LoRA (r=8, lora_alpha=16, lora_dropout=0.1) and summarized the results in the table below.
Table 4 in our revised paper presents the preliminary results of experiments on the NC task. Bold scores indicate the best performance between Prompt Tuning and LoRA in each column.
| Method | en | es | fr | de | ru | Avg |
|---------------------------------|------|------|------|------|------|------|
| Monolingual | 92.4 | 84.7 | 79.5 | 88.3 | 89.0 | 86.8 |
| Centralized | 93.9 | 86.7 | 82.9 | 89.5 | 88.6 | 88.3 |
| FL (IID) | 94.1 | 86.9 | 82.7 | 89.4 | 88.8 | 88.4 |
| FL (Non-IID) | 92.4 | 86.3 | 81.2 | 88.9 | 84.7 | 86.7 |
| PE_Monolingual | 82.9 | 59.7 | 47.3 | 71.4 | 60.0 | 64.3 |
| PE_Centralized | 89.1 | 76.2 | 67.4 | 78.8 | 75.9 | 77.5 |
| PE_FL (IID) (Ours) | 91.2 | 82.2 | 76.5 | 86.4 | 81.6 | 83.6 |
| PE_FL (Prompt Tuning) (Non-IID) (Ours) | 87.8 | **79.2** | 73.7 | **83.1** | 79.5 | **80.7** |
| PE_FL (LoRA) (Non-IID) (Ours) | **89.3** | 76.0 | **75.4** | 75.8 | **83.2** | 79.9 |
We also conducted a comparison of parameter efficiency and communication overhead in the NC task:
| Method | # Trainable Params | Communication Cost |
|--------------------------|--------------------|--------------------|
| Federated Full Finetuning| 278,655,764 | 108GB |
| Federated Prompt Tuning (Ours) | 1,202,708 | 478.93MB |
| Federated LoRA (Ours) | 1,491,476 | 593.92MB |
Additionally, we have included Figure 7 in Section 5.4 of our revised paper to clearly illustrate the comparison between Prompt Tuning and LoRA.
The results demonstrate that Federated LoRA and Federated Prompt Tuning achieve comparable performance, with Federated Prompt Tuning showing a slight advantage. In terms of data transmission and communication cost, Prompt Tuning requires only about 80% of the resources compared to LoRA, and leads to less privacy leakage.
Furthermore, as the pretrained model scales up, the performance of Prompt Tuning rapidly improves, approaching or even surpassing full finetuning. This indicates its significant potential. Prompt Tuning also enhances the overall model's generalization capabilities. The prompt encoder acts as a mechanism to extract linguistic-specific patterns on the client and general linguistic patterns on the server, showcasing advantages that adapters cannot match.
This justifies our use of Federated Prompt Tuning in our research, considering its efficiency in terms of parameters and communication, as well as its capability for generalization and adaptation to low-resource languages, which are crucial and undoubtedly worth it.
Regarding QLoRA, we did not consider quantization in our study since our focus is solely on updating the parameters of prompt encoders on clients and the server, keeping the pre-trained model frozen. QLoRA involves quantizing the pre-trained models, which falls outside the scope of our discussion and does not contribute to reducing communication costs, a key bottleneck in the federated setting. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | YhvDQa0GKX | Other generative and knowledge-based tasks, such as QA, translations and summarizations should be performed. | Experiments | We appreciate the feedback. Our current paradigm is general-purpose and can be easily adapted to other generative and knowledge-based tasks. In response, we have expanded our evaluations to encompass a broader range of scenarios, addressing the concern of limited task selection. This rebuttal is part of a series, and we will provide additional results during the discussion period.
* Low-resource Dataset
Our setting and results: **MasakhaNEWS**, covering 16 languages widely spoken in Africa, where African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. The task involves categorizing news articles into different categories like sports, business, entertainment, and politics. We chose English, Hausa, Kiswahili, French and Yorùbá in our preliminary experiments. We sample 1433 instances for training and 411 for evaluation sets for each language.
Table 2 in our revised paper presents the preliminary results of experiments focused on MasakhaNEWS. When employing Federated Prompt Tuning in comparison to the monolingual and centralized counterparts, in general, a significant gain in accuracy is observed when adopting our Federated approach compared with the monolingual baseline.
| Method | eng | fra | hau | swa | yor | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| PE_Monolingual | 79.08 | 84.91 | 75.18 | 76.64 | 52.8 | 73.7 |
| PE_Centralized | 79.81 | 87.10 | 80.78 | 84.18 | 64.48 | 79.3 |
| PE_FL (Ours) | 82.99 | 89.81 | 65.96 | 86.16 | 57.20 | 76.4 |
* Question Answering
Dataset and our setting: **MultiLingual Question Answering (MLQA)** is a benchmark dataset for cross-lingual question-answering performance, covering English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. We sample 1433 instances for training and 411 for evaluation sets for each language.
* Machine Translation
Dataset and our setting: **UN Corpus** is a Machine Translation dataset of official records from the UN proceedings over the years 1990 to 2014, covering six languages: English, French, Spanish, Russian, Chinese, and Arabic. we sample 10k in each direction for training and 5k each for evaluation sets. We cover three machine translation directions: En → Fr, Ar → Es, Ru → Zh, and sample 10k in each direction for training and 5k each for evaluation sets.
Once we get the result, we’ll update our response and our paper. | SRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | YhvDQa0GKX | Citation format is incorrect; \citep{} should be used to produce something like (Abc, et al., 2023) and not Abc, et al., 2023 everywhere. | Presentation | Thanks for pointing it out. We have corrected the citations for all references in our revised paper. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | YhvDQa0GKX | Many grammatical errors exist, such as in the phrase "Throughout the fine-tuning...". | Writing | We appreciate your feedback on the grammatical errors. We have revised the grammar to avoid any confusion in our updated version. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | poor presentation: the citations are not separable enough from the main text, e.g., without any parenthesis, rendering the submission unreadable. Against the tradition and ease of reading, abbreviations are not defined in advance, e.g., NLI, PFL, PLM. | Presentation | We apologize for any confusion caused by the current citation format. We have corrected the citations for all references in our revised paper.
We realize the oversight in not defining certain abbreviations, such as NLI (Natural Language Inference), PFL (Prompt Federated Learning), and PLM (Pre-trained Language Models), at their first occurrence in the text. We appreciate you highlighting this point. In the revised paper, we have ensured that all abbreviations are defined upon their first use. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | claims unverifiable: no code release. | Reproducibility | We provide an anonymized version of the code repository, accessible through this link: https://anonymous.4open.science/r/Breaking_Physical_and_Linguistic_Borders-F1C5. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | conflating existing metrics with innovation: language distance is not a new concept. | Novelty | Thank you for your insightful comments on our paper.
We acknowledge and agree with your review that the concept of language distance is not novel, having been explored in various contexts previously. However, we emphasize that our work introduces this concept within a unique and specific scenario: multilingual federated tuning. Our novel application provides a fresh perspective on language distance as a metric that illustrates the **transferability** relationships in multilingual NLP. This enables us to analyze the performance of federated prompt tuning and local monolingual transfer learning for low-resource languages.
To avoid overemphasizing the novelty of the language distance concept itself, we've amended the language in the abstract from "introduce language distance as a new concept" to "present a new notion of language distance" in our revised paper. This modification more accurately reflects our contribution.
We refer to [1] and [2] for our language distance measurement. Furthermore, we are open to learning about any additional references or works related to language distance that the reviewers may be aware of. If there are specific references that we have overlooked or that could further strengthen our work, we welcome their inclusion to enhance the comprehensiveness and depth of our research.
[1] Malaviya et al., Learning Language Representations for Typology Prediction. *EMNLP 2017*.
[2] Littell et al., URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. *EACL 2017*. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | conceptual weakness: the contrived baseline was bound to give the proposed approach an edge due to lack of federated learning. | Experiments | We appreciate the opportunity to clarify this aspect of our work.
In previous cases, data transmission was always one-directional. Existing approaches focus on solving this locally, for example, through local transfer with monolingual data.
In our paper, we approach it from a collaborative perspective, which we call federated prompt tuning in our paper. By training LLMs collaboratively across multiple participants without sharing raw data, the accuracy, robustness, and generalizability of LLMs can be enhanced by leveraging collective knowledge and exposing models to a wider range of linguistic patterns.
As you mentioned, there exists very little research from such a collaborative perspective for low-resource languages. **Our findings open up new avenues for exploration and have the potential to inspire future research in this area.**
What we would like to demonstrate is not simply the performance boost. It’s the data efficiency (Section 5.2) and transferability for different language similarities (Section 5.3) of our paradigm’s superiority on low-resource languages.
-----
We would like to clarify that the prompts in our paper are NOT the same as classifier model input, and they are suited for all decoders-style LLMs. To further clarify the prompt tuning procedure and the prompt construction, we've added more details in Section 3 and Appendix B, and C in the revised version.
Instead of selecting discrete text prompts in a manual or automated fashion, in our paradigm, we utilize **virtual prompt** embeddings that can be optimized via gradient descent. Specifically, each prompt encoder, whether global or local, takes a series of virtual tokens, which are updated during tuning to better aid the model.
Figure 2 in our revised paper shows how our prompt tuning works on both clients and the server. Specifically, a textual prompt tailored for a specific task and input text is passed to the model. The task-specific virtual tokens are retrieved based on the textual prompt. With the input text tokenized, the discrete word token embeddings are retrieved. Then virtual token embeddings are inserted among discrete token embeddings and passed together into the pretrained models. Therefore, **the prompt is adaptable across various pre-trained model architectures, including decoder-style, encoder-style, or encoder-decoder style.**
----
We appreciate the opportunity to clarify this aspect of our work.
- Our approach inherently supports data privacy. Specifically, it complies with international data privacy regulations by minimizing the need for cross-border data transmission. This not only ensures legal compliance but also facilitates collaboration among entities with limited local computing resources, as detailed in section 3.2.
- We directly address privacy concerns by reducing the volume of transmitted data, thereby limiting potential privacy breaches. As demonstrated in section 5.4, transmitting fewer parameters significantly reduces the risk of privacy leakage, aligning our methodology with the privacy focus highlighted in the abstract. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | conceptual weakness: what the paper refers to as prompts are just classifier model input, which are different from decoders-style LLM prompts as commonly acknowledged. | Theory | We would like to clarify that the prompts in our paper are NOT the same as classifier model input, and they are suited for all decoders-style LLMs. To further clarify the prompt tuning procedure and the prompt construction, we've added more details in Section 3 and Appendix B, and C in the revised version.
Instead of selecting discrete text prompts in a manual or automated fashion, in our paradigm, we utilize **virtual prompt** embeddings that can be optimized via gradient descent. Specifically, each prompt encoder, whether global or local, takes a series of virtual tokens, which are updated during tuning to better aid the model.
Figure 2 in our revised paper shows how our prompt tuning works on both clients and the server. Specifically, a textual prompt tailored for a specific task and input text is passed to the model. The task-specific virtual tokens are retrieved based on the textual prompt. With the input text tokenized, the discrete word token embeddings are retrieved. Then virtual token embeddings are inserted among discrete token embeddings and passed together into the pretrained models. Therefore, **the prompt is adaptable across various pre-trained model architectures, including decoder-style, encoder-style, or encoder-decoder style.** | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | conceptual weakness: the approach has absolutely nothing to do with privacy which the abstract and the main body consistently bolsters. | Theory | We appreciate the opportunity to clarify this aspect of our work.
- Our approach inherently supports data privacy. Specifically, it complies with international data privacy regulations by minimizing the need for cross-border data transmission. This not only ensures legal compliance but also facilitates collaboration among entities with limited local computing resources, as detailed in section 3.2.
- We directly address privacy concerns by reducing the volume of transmitted data, thereby limiting potential privacy breaches. As demonstrated in section 5.4, transmitting fewer parameters significantly reduces the risk of privacy leakage, aligning our methodology with the privacy focus highlighted in the abstract. | DWC |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | evaluation weakness: only two tasks (new classification and XNLI) was used in evaluation. | Evaluation | we would like to highlight additional evaluation results that we have been conducting to substantiate our claims further. These additional evaluations encompass a broader range of tasks and scenarios, which we believe address the concern of limited task selection. This response is the first in a series of comprehensive rebuttals; we will provide additional experimental results during the discussion period.
**MasakhaNEWS** is a benchmark dataset for news topic classification covering 16 languages widely spoken in Africa, where African languages are severely under-represented in NLP research due to a lack of datasets covering several NLP tasks. The task involves categorizing news articles into different categories like sports, business, entertainment, and politics. We chose English, Hausa, Kiswahili, French and Yorùbá in our preliminary experiments. We sample 1433 instances for training and 411 for evaluation sets for each language.
Table 2 in our revised paper presents the preliminary results of experiments focused on MasakhaNEWS. When employing Federated Prompt Tuning in comparison to the monolingual and centralized counterparts, in general, a significant gain in accuracy is observed when adopting our Federated approach compared with the monolingual baseline.
| Method | eng | fra | hau | swa | yor | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| PE_Monolingual | 79.08 | 84.91 | 75.18 | 76.64 | 52.8 | 73.7 |
| PE_Centralized | 79.81 | 87.10 | 80.78 | 84.18 | 64.48 | 79.3 |
| PE_FL (Ours) | 82.99 | 89.81 | 65.96 | 86.16 | 57.20 | 76.4 |
The tasks we’re currently working on are as follows:
- Question Answering
Dataset and our setting: **MultiLingual Question Answering (MLQA)** is a benchmark dataset for cross-lingual question-answering performance, covering English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. % We sample 1433 instances for training and 411 for evaluation sets for each language.
- Machine Translation
Dataset and our setting: **UN Corpus** is a Machine Translation dataset of official records from the UN proceedings over the years 1990 to 2014, covering six languages: English, French, Spanish, Russian, Chinese, and Arabic. we sample 10k in each direction for training and 5k each for evaluation sets. We cover three machine translation directions: En → Fr, Ar → Es, Ru → Zh, and sample 10k in each direction for training and 5k each for evaluation sets.
Once we get the result, we’ll update our response and our paper. | SRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | yJ6uMWYzMY | In section 5.4.1, regarding the statement: "In both the NC and XNLI tasks, despite the total number of parameters exceeding 278 million, the trainable parameters are only around 1.2 million, accounting for less than 0.5% of the total." — Could the authors clarify which part of the model is being fine-tuned? | Reproducibility | Yes, we clarify that we only update the prompt encoders. This includes the parameters of the local prompt encoders $h_k$ on Client k, and the parameters of the global encoder $h_g$ on the server in the revised paper (referred to as $h_{global}$ in the original paper). During this process, we keep the pre-trained language models frozen at all times.
Therefore, the trainable parameters are solely those within the prompt encoders, in contrast to the total number of parameters involved in full fine-tuning.
To further clarify and avoid any confusion, we have revised some details in Section 3. Additionally, we have attached a figure (Figure 2) in the revised version of the paper, illustrating the architecture of our prompt encoder and the tuning process. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | DwcYUFIxnh | In terms of novelty, the proposed idea is not new, and it is only a further investigation of the multilingual setting. | Novelty | ## W1:
> In terms of novelty, the proposed idea is not new, and it is only a further investigation of the multilingual setting.
We would like to kindly defend our proposal.
To further clarify the significance and originality of our work, we've added our motivation with Multilingual NLP Background in Appendix A and the Contribution paragraph in Section 1 in our updated paper.
In the paper, we introduced federated prompt tuning as a solution to help address the linguistic and geographic boundaries hindering the application of LLMs to various regions and lower-resource languages. Here we would like to provide some clarification about the motivation and significance of our research in the following two aspects.
### Multilingual NLP and Low-resource Languages
As natural language processing technologies advance, not all languages have been treated equally by developers and researchers. There are around 7,000 languages spoken in the world, and approximately 400 languages have more than 1 million speakers. However, there is scarce coverage of multilingual datasets. This is especially true for low-resource languages, where data scarcity is a major bottleneck. Furthermore, the under-indexing of certain languages is also driven by access to compute resources. Mobile data, compute, and other computational resources may often be expensive or unavailable in regions that are home to under-represented languages. Unless we address this disproportionate representation head-on, we risk perpetuating this divide and further widening the gap in language access to new technologies [1].
One pressing example is biomedical data. Due to its global scale, this digital content is accessible in a variety of languages, yet most existing NLP tools remain English-centric [2].
This situation highlights the need for effective strategies: how can we exploit abundant labeled data from resource-rich languages to make predictions in resource-lean languages?
### Timeliness of breaking physical and linguistic barriers in the LLM era
We wanted to highlight the urgency of the problem. The problem is very timely compared to other "application scenarios." It was not even considered a year ago. Previously, due to the smaller size of language models, the demand for data was not as high, and different kinds and sources of data were treated equally. Currently, the progress of LLMs, their usability, the amount of attention they receive, and the increased regulation on data, compound and lead to the urgency of this problem, where we are among the first batch to attempt to break both lingual and physical barriers.
In previous cases, data transmission was always one-directional. Existing approaches focus on solving this locally, for example, through cross-lingual transfer, as well as data augmentation and preference training to address these bottlenecks [3, 4].
In our paper, we approach it from a collaborative perspective. By training LLMs collaboratively across multiple participants without sharing raw data, the accuracy, robustness, and generalizability of LLMs can be enhanced by leveraging collective knowledge and exposing models to a wider range of linguistic patterns.
There exists very little research from such a collaborative perspective for low-resource languages. With data and computing power being very important yet limited for LLMs, we've never needed such a lightweight collaborative paradigm more urgently than we do right now.
So we introduce the concept of "federated" as a simple and established progression to our problem, which not only contributes a timely and practical solution to a rapidly evolving field, but also vividly depicts the key innovation of our paradigm: knowledge sharing and aggregation (double direction) without data transmission.
Additionally, from the federated learning perspective, as far as we know, we are the first paper to investigate the data efficiency and transferability brought by federated learning, and we believe this sheds some light on how federated learning can benefit LLM on training generalizability and stability, beyond simply mitigating compliance risks.
[1] Joshi, Pratik, et al. The state and fate of linguistic diversity and inclusion in the NLP world. *ACL 2020*.
[2] Bérard et al., A Multilingual Neural Machine Translation Model for Biomedical Data *NLP-COVID19 2020*.
[3] Lauscher et al., From Zero to Hero: On the Limitations of Zero-Shot Language Transfer with Multilingual Transformers, *EMNLP 2020*.
[4] Xia et al., Generalized Data Augmentation for Low-Resource Translation *ACL 2019*. | DWC |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | DwcYUFIxnh | Lack of clarity. The paper does not provide enough information about how the prompts are constructed or look like and hyperparameters for all settings. I suggest adding the information to the paper or appendix. | Reproducibility | ## Q2 & W2:
> Lack of clarity. The paper does not provide enough information about how the prompts are constructed or look like and hyperparameters for all settings. I suggest adding the information to the paper or appendix.
> How did you tune the training and parameter averaging?
To further clarify the prompt tuning procedure and the hyperparameters, we've added more details in Seciton 2 and Appendix B, C in the revised version.
### Virtual Prompt
We have also included a detailed figure 2 in our revised paper to more clearly show how the prompts are constructed and tuned.
In general, instead of selecting discrete text prompts in a manual or automated fashion, in our Multilingual Federated Prompt Tuning paradigm, we utilize **virtual prompt embeddings that can be optimized via gradient descent**. The primary objective of each **prompt encoder **is to generate an effective prompt embedding for each client based on **task specific virtual tokens**, to guide the PLM in producing the desired outputs.
Figure 2 in our revised paper shows the how our prompt tuning works on both clients and server. Specifically, a textual prompt tailored for a specific task and input text are passed to the model. Then task specific virtual tokens are retrieved based on the textual prompt. With the input text tokenized, the discrete word token embeddings are retrieved. Then virtual token embeddings are inserted among discrete token embeddings and passed together into the PLM.
-----
### Federated Prompt Averaging
In every communication round $t$, Federated Prompt Averaging includes the following steps.
**Initialization**:
The server initializes the global prompt encoder $h_g^{t}$. Each client initializes its local prompt encoder $h_0^{t}, h_1^{t}, \ldots h_k^{t}$.
**Client Selection:**
We select a fraction $C$ of the total $K$ clients for training. This subset size is $m = \max(C \times K, 1)$. The subset we choose is denoted as $S$.
**Local Encoder Tuning:**
Each client $k$ in $S$ fetches the current global prompt encoder $h_g^{t}$, and assembles it with the PLM. During the local training on the local data $\mathcal{D}_k$, The PLM's parameters stay fixed while local prompt encoder parameters $h_k^{t}$ are tuned.
**Aggregation**:
The server aggregates updates from all clients using weighted average. The global prompt encoder $h_g^{t+1}$ is updated based on the received parameters $h_k^{t}$ from clients for the next round of federated prompt tuning:
$$h\_g^{t+1}=\sum\_{k=1}^K \frac{\left|\mathcal{D}\_k\right|}{\sum\_{k=1}^K\left|\mathcal{D\}_k\right|} h\_k^{t}
$$
------
### Hyper-parameters
For all of the experiments, we report results using the 1e-3 learning rate, and we use early stopping (5 epochs of no improvement).
For FL experiments, we adjust the parameter $\alpha$ that controls the mixture of languages in the dataset. An $\alpha$ value of 1.0 signifies a uniform mixture of all languages, while values closer to 0 indicate a dominant representation of individual languages or a more separated mixture.
When we use Prompt Tuning to optimize the parameter efficiency, the prompt tuning init text is *Predict the category given the following news article* for all the News Classification tasks. By providing the string of words, we initialize virtual token embeddings from existing embedding weights. This string is tokenized and tiled or truncated to match the number of virtual tokens, which is 1 in our experiments. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | DwcYUFIxnh | Do you have any findings on why multilingual centralized learning is far worse than federated learning in Table 2? | Evaluation | ## Q1
> Do you have any findings on why multilingual centralized learning is far worse than federated learning in Table 2?
Yes. This phenomenon has also been observed in previous works on Federated Learning [1]. Here are some possible reasons (Section 5.1, Page 7):
Firstly, **Federated Learning** has a **weight averaging** effect via the aggregation of clients’ models, which could increase the generalization of the global model, further leading to higher performance [2]. Additionally, by freezing the core language model parameters and **only learning the prompt representations**, **prompt tuning** reduces the model’s ability to overfit a dataset by memorizing specific lexical cues and spurious correlations [3].
These reasons demonstrate the superiority of our federated prompt tuning over the traditional centralized finetuning paradigm, offering a more robust and generalizable approach for low-resource languages.
[1] Rehman, Yasar Abbas Ur, et al. "Federated self-supervised learning for video understanding." *ECCV* 2022.
[2] Izmailov, Pavel, et al. "Averaging weights leads to wider optima and better generalization." *UAI 2018* .
[3] Lester, Brian, Rami Al-Rfou, and Noah Constant. "The power of scale for parameter-efficient prompt tuning." *EMNLP 2021* . | DWC |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | DwcYUFIxnh | Figure number is missing on Page 2 — "As depicted in Figure , " | Presentation | ## Suggestions:
> Figure number is missing on Page 2
> "As depicted in Figure , "
> Missing Figure/Table
> "This translates to over 99% reduction in the communication overhead shown in 3"
> Typo
> "Finetuning accuracy across different lanugages on the NC task."
>
We appreciate your detailed suggestions on the typos and missing figure/table numbers! We have fixed them all in our updated version. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | DwcYUFIxnh | Missing Figure/Table — "This translates to over 99% reduction in the communication overhead shown in 3" | Presentation | ## Suggestions:
> Figure number is missing on Page 2
> "As depicted in Figure , "
> Missing Figure/Table
> "This translates to over 99% reduction in the communication overhead shown in 3"
> Typo
> "Finetuning accuracy across different lanugages on the NC task."
>
We appreciate your detailed suggestions on the typos and missing figure/table numbers! We have fixed them all in our updated version. | CRP |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | zzqn5G9fjn | ICLR-2024 | DwcYUFIxnh | Typo — "Finetuning accuracy across different lanugages on the NC task." | Writing | ## Suggestions:
> Figure number is missing on Page 2
> "As depicted in Figure , "
> Missing Figure/Table
> "This translates to over 99% reduction in the communication overhead shown in 3"
> Typo
> "Finetuning accuracy across different lanugages on the NC task."
>
We appreciate your detailed suggestions on the typos and missing figure/table numbers! We have fixed them all in our updated version. | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | c4bD4kpXHW | While the paper’s studies show that certain designs (e.g. cross-attention) seem to confer multi-modal generalization, there are still some key questions that can be more thoroughly studied to uncover the reasons why this is the case. | Experiments | In response to the Reviewer’s concerns (and the related comment by Reviewer a4Su), we have now performed additional experiments that focus on how model scale and complexity can influence multimodal generalization. While the original manuscript was focused on understanding how a class of base neural architectures would fare on multimodal generalization, we agree that it is important to understand how choice of hyperparameters, such as number of attention heads and layer depth, can influence generalization.
As such, we have performed a systematic experiment on a standard encoder-only transformer (with the same architecture as BERT; Devlin et al., 2019). We manipulated the number of layers (1, 2, 3, 4 layers), and number of attention heads (1, 4, 8 heads) and assessed the corresponding generalization performance across all splits. (Note that 4 transformer encoder layers and 8 attention heads match the BERT-small architecture.) Indeed, we found that increasing encoder depth significantly improved distractor and systematic generalization. Increasing attention heads also improved distractor and systematic generalization, though to a lesser extent. Nevertheless, neither of these modifications influenced productive generalization. Our conclusion from this is that some tasks (e.g., systematic generalization) require more abstractions; a single layer of attention that handles multimodal inputs is insufficient. Cross-attention mechanisms offer a more targeted and efficient solution (I.e., fewer number of parameters and faster to train) that explicitly integrate visual stimuli (keys and values) with the instructions (queries). However, simply adding encoder/attention layers and parameters can suffice for some types of generalization.
We have now included discussion of some of these experiments and insights into the manuscript, with a few key revisions copied below. The new results are depicted in Figure 8 (Appendix). Below is the caption for Figure 8; please see the updated PDF submission for the actual figure: “Figure 8: Evaluating generalization splits on BERT-like single-stream transformer models with varying layers and attention heads. We manipulate a generic encoder-only transformer based on the BERT architecture, evaluating the influence of the number of encoder layers (1, 2, 3, and 4 layers), and the number of attention heads per encoder layer (1, 4, and 8 heads). Overall, increasing layers improves generalization across distractor and systematic generalization, but not productive generalization. Increasing attention heads also marginally improves distractor and systematic generalization, but to a lesser extent than adding depth. A) Evaluation on distractor generalization across all model parameters. B) The effect of adding additional encoder layers on distractor generalization performance (averaged across all attention head configurations). C) The effect of adding attention heads on distractor generalization performance (averaged across all layer depth configurations. D-F) Evaluation on systematicity for depth 1 tasks (identical to generalization split in Fig. 4a). G-I) Evaluation on systematicity for depth 3 tasks (identical to generalization split in Fig. 4d). J-L) Evaluation on productivity split (identical to generalization split in Fig. 5a).”
Updates to Contributions Section (1.2): “2. A comprehensive evaluation of commonly-used base neural models (RNNs, GRUs, Transformers, Perceivers) on distractor, systematic, and productive generalization splits. We find that for distractor and systematic generalization, including a cross-attention mechanism across input modalities is important. However, all models fail on the productivity split. In addition, we include experiments demonstrating the impact of transformer depth and attention heads on all generalization splits in an encoder-only Transformer model." | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | c4bD4kpXHW | Important discussions such as why the (cross-attention) transformers might fail at productive generalization is lacking. | Evaluation | This is a challenging question to tackle. Our ongoing hypothesis is that productive generalization is a fundamentally distinct type of generalization relative to systematic compositional generalization. We have now included a brief discussion in the Results section of Productive Compositional Generalization (Section 3.3) that addresses these challenges: “One possible explanation for the disparity between systematic and productive generalization in neural models is that systematicity requires the ability to exchange semantics (or tokens) from a known syntactic structure (e.g., a tree of certain depth). In contrast, productive generalization requires generalizing to an entirely new syntactic structure (e.g., a task tree of different size or depth). This requires understanding the syntax -- how to piece together syntactic structures on-the-fly -- requiring another level of abstraction. To our knowledge, there is nothing in the current set of mechanisms in the Transformer that would enable this. Thus, productive compositional generalization remains a difficult capability for purely neural models to achieve.” | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | c4bD4kpXHW | What is the key architectural difference between dual stream transformer and transformers with cross attn that can explain their generalization performance? Is it only the lack of a cross attention between the different modalities? | Theory | The short answer is yes. When comparing the Dual Stream Transformer with the models with cross attention, indeed, the only distinction is the lack of an attention mechanism to explicitly integrate outputs from the two input streams.
The longer answer, which became clear after performing the new experiments (that scaled up to the BERT-small architecture), is that at minimum, a second attention layer is required to systematically abstract token information from each of the inputs. While applying a large self-attention matrix twice to simultaneously presented visual and language instructions is computationally inefficient (our cross-attention models show that it is unnecessary), it provides the necessary base structure to allow for good systematic generalization. | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | c4bD4kpXHW | Possible typo: “Finally, we included a Perceiver-like model (Jaegle et al., 2021), an architecture designed to generically process multimodal inputs (Fig. 2f).”: (Fig. 2f) > (Fig. 2e). | Writing | We thank the Reviewer for spotting this error. The manuscript has now been updated. | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | WaCqOkvd4I | I'm concerned about the strength of the baselines used in the paper (see my related questions below). While the primary contribution of the paper is the dataset, it is also important to establish strong baselines for this new dataset and to ensure that the conclusions from the empirical results are valid. The appendix states that only a *single Transformer layer* with a *single attention head* was used. This is almost certainly not an optimal depth and number of attention heads. Relatedly, it looks like some models are potentially underfit, according to the figures. With >5M training examples and a relatively simple input space, I would have expected a reasonably sized Transformer model to achieve low training loss and reasonable IID generalization. If these models could have been applied to similar tasks such as gSCAN (even using symbolic tokens to represent the scene context), where they could be compared with comparable baselines from prior work, this would have helped establish that these are indeed reasonably strong baselines that have been well tuned. | Experiments | We thank the Reviewer for their thorough and thoughtful feedback. Below, we have worked to address some of the weaknesses the Reviewer raised, particularly the strength of the baselines. We have included new experiments to directly address these concerns, taking into consideration this Reviewer’s suggestion. Below, we also address all questions directly.
However, before addressing the Reviewer’s specific questions, we first wanted to address the two weaknesses the Reviewer raised. The first weakness was the lack of evaluation of deeper networks with more attention heads. We have now directly addressed this by performing additional experiments on a range of transformer layer depths (1, 2, 3, 4 layers) and attention heads (1, 4, and 8 heads), up to the size of BERT-small (4 layers, 8 heads; 12 model experiments in total). We have now included figures that detail the performance across this parameter sweep, with the primary conclusion that adding layers indeed aids with distractor and systematic generalization, but not productive generalization. We provide additional details below, and have incorporated the results for these new experiments in Figure 8 (in the Appendix), and have referenced these results in the manuscript's main text. In addition, we wanted to address another comment the Reviewer raised: “*I would have expected a reasonably sized Transformer model to achieve low training loss and reasonable IID generalization.*”
We thank the Reviewer for their suggestion on expanding our evaluations. The Reviewer was correct in their intuition – deeper models help standard single stream transformers achieve improved IID generalization across all generalization splits (Fig. 8). We have now included the following text to the main manuscript (Results section 3.2) to reference these results: "(Note, however, that increasing depth (encoder layers) to Transformers improves IID generalization on these splits; Fig. 8)." | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | WaCqOkvd4I | The qualitative difference between gCOG and datasets from prior work such as gSCAN was not very clearly described. For example, one of the key claims seemed to be gCOG "employs generic feature sets that are not tied to any specific modality". However, it seems like it is a useful property for a multimodal dataset to have a clear relation to real-world multimodal tasks. Indeed, the authors provide interpretations of their tasks in the form of natural language instructions and visual scenes (e.g. in Figure 1), and these are very useful for understanding the task. Representing this dataset using familiar modalities (e.g. vision, natural language) could enable future work to study different research questions, e.g. the impact of pre-training. The ability to alternatively represent the task input as a sequence of tokens is also reasonable for studying certain research questions, but this also seems possible for datasets from prior work. For example, I understand that gSCAN includes both symbolic descriptions as well as visual renderings. Anyways, I think clarifying the motivation for this dataset (e.g. increasing diversity of available benchmarks, focusing on different generalization challenges, etc.) separately from how inputs are represented for the experiments in this paper (e.g. token sequence vs. images and natural language) would be useful. | Novelty | The second weakness was the lack of clear distinction between our presented task, gCOG, and prior tasks such as the gSCAN task. We thank the Reviewer for this comment, and have now emphasized the primary distinctions between the two tasks. In brief, the two tasks require different neural network architectures. In terms of transformers, gSCAN is a generation task (requiring a decoder architecture) and gCOG is a classification task, which requires only an encoder Transformer. While models evaluated on gSCAN can incorporate encoder models to the architecture, as illustrated in one of the articles the Reviewer cites (Qiu et al., 2021), generating navigation instructions autoregressively adds complexity to studying compositional productivity, since autoregressive models are susceptible to exposure bias (Wang & Sennrich, 2020). Additionally, gCOG includes a distractor generalization split; our dataloaders are organized such that distractor generalization splits can interact with either systematic and/or productivity splits. We have now emphasized these differences in the manuscript, and have included citations to the studies mentioned by the Reviewer in the Related work sections.
Related work revisions:
“Our approach to constructing arbitrarily complex compositions of simple tasks is similar to (Ruis et al., 2020). However, it differs in three key ways. First, we focus on question-answer tasks (which require encoder-only architectures), rather than sequence-to-sequence learning tasks (which require decoder architectures, e.g., Qiu et al., 2021). Sequence decoding tasks introduce the added complexity of requiring autoregressive responses, which are susceptible to fundamental statistical challenges, such as exposure bias (Wang & Senn, 2021). Second, gCOG includes a distractor generalization split, in addition to systematic and productive compositional generalization splits. Finally, we methodically characterize different forms of generalization using simpler underlying abstractions (i.e., without the explicit use of image pixels).”
We thank the reviewer for bringing to our attention Furer et al. (2021), and Shaw et al, (2021). References to these two studies are now included in the preceding paragraph of the above quoted passage. | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | WaCqOkvd4I | Appendix A.2.1 - Maybe reference Tables 8 and 9 where you discuss different positional embeddings. | Presentation | *Position embeddings - Since you are representing 10x10 grids as 1D sequences, 1D relative positions may not capture this structure well. On the other hand, absolute position embeddings seem potentially problematic in the case of the SSTrfmr model, since they will not be consistently assigned to the same grid position if the text sequence is first and has varying length. Mitigating this may be important to provide for a fairer comparison with the SSTrfmr model.*
We agree that a downside of the SSTfmr is that there is ambiguity in how different modalities interact with fixed or 1d positional encodings. This was part of the motivation for comparing that model with the DSTfmr and the CrossAttn models, where each modality would have its own set of positional encodings, and then lower-level features would then be integrated with either a shared MLP or cross-attention mechanisms.
However, the additional experiments we performed in response to the Reviewer’s question 1 should demonstrate that adding more layers ameliorates the poor performance in the SSTfmr. Nevertheless, we agree that this is a potential issue for other problems that may arise, and could potentially be solved by either using separate positional encodings (for different modalities), and/or learnable positional encodings. However, we believe that this is beyond the scope of the current project, since adding additional encoder layers suffices for the systematic generalization split of this task. | DWC |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | WaCqOkvd4I | Consider discussing [3] in related work. [3] demonstrated the importance of cross-modal attention for gSCAN, and similarly studied the relative difficulty of various aspects of generalization, including distractors. | Novelty | We have additionally included discussion of Qiu et al., 2021 in the results section reporting that distractor generalization is improved using cross-modal attention: “While all models performed IID generalization well, only models that contained cross-attention mechanisms (CrossAttn and Perceiver models) exhibited excellent OOD distractor generalization (Fig. 3d). A related result was also reported in Qiu et al., (2021) using cross-modal self-attention.” | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | umBGrmnYm6 | **Pre-trained models** The paper focuses on models trained from scratch rather than pre-trained. This could be a strength and a weakness. On the one hand, it allows for isolating the contribution of the architectural choices from other factors of optimization, and training data. On the other hand, it has been observed that by training models at large enough scales enables the emergence of generalization capabilities, which we don’t see in smaller scales. I think it will be critical to also analyze the performance of pretrained models on the benchmark, in order to strengthen the paper. | Experiments | Weakness 1: Lack of evaluation using pre-trained models. We agree with the Reviewer that there is utility in assessing how a pretrained model performs on a new task to the literature. However, when trying to address this question regarding our specific task, we realized that most models would not be able to perform this task out-of-the-box without any fine-tuning, since tokens in the current task set up have no intrinsic meaning. Since we could not directly test mainstream pretrained transformer models (without devising a specific way of first aligning domains through finetuning), we addressed a related question raised by the Reviewer: How would mainstream architectures (such as BERT-like models; Devlin et al., 2019) fare on our benchmark? Moreover, much of the motivation for this manuscript was to focus on questions such as: What architectural components are essential for distractor, systematic, and productive generalization? To this end, we performed additional experiments to evaluate how standard architectures – such as BERT-like architectures -- generalize appropriately. These experiments focused on investigating how increasing encoder-layer depth, and increasing attention heads in each encoder layer, influenced generalization performance. These results have now been included as Figure 8 in the Appendix. Nevertheless, we agree that evaluating the out-of-the-box performance of pretrained models on a pixel and natural language variant of this task is important for future work to explore (once a sensible fine-tuning procedure to align the two is agreed upon). | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | umBGrmnYm6 | **COG task**: It will be useful to discuss the COG task (rather than just mentioning it) before describing the new gCOG one, so that it will be clearer to the reader what are new contributions of the new benchmark compared to COG and the degree of their importance. In the overview diagram I would also recommend showing a sample also from COG to make the differences clearer. | Presentation | *COG task: It will be useful to discuss the COG task (rather than just mentioning it) before describing the new gCOG one, so that it will be clearer to the reader what are new contributions of the new benchmark compared to COG and the degree of their importance. In the overview diagram I would also recommend showing a sample also from COG to make the differences clearer.*
We thank the Reviewer for the suggestion. We have now clarified the explicit differences between COG and gCOG in the Experimental Design section (2.1): “gCOG is a configurable question-answer dataset, originally inspired from COG (Yang et al., 2018), that programmatically composes task instructions, and then generates synthetic stimuli to satisfy those instructions on-the-fly (Fig. 1). The primary modifications in gCOG are 1) differences in the set of task operators, 2) the ability to use categorical tokens to allow for generic testing of multimodal reasoning, and 3) the ability to allow for arbitrarily long task trees to assess productive compositional generalization, in addition to distractor and systematic generalization (e.g., see Appendix Fig. 7). Importantly, the original COG task did not allow for tasks with more than a single conditional statement, e.g., a task tree of depth 3, making it ill-suited to evaluate productive compositional generalization... We additionally provide functionality in the dataset that allows the choice to load samples using either a categorical task encoding, or a task encoding with image pixels and natural language task instructions."
Due to copyright and the license of the original COG paper, we were unable to include a figure from the original COG task/paper (Springer publishing). However, we have included sample questions/queries from the COG task in the Appendix: “A few example queries from the original COG task include: `What is the color of the latest triangle? Point to the latest red object. If a square exists, then point to the current x, otherwise point to the last b.’” | CRP |
On the generalization capacity of neural networks during generic multimodal reasoning | zyBJodMrn5 | ICLR-2024 | umBGrmnYm6 | **Figures**: Would be good to increase the size of the plots in Figure 3b. It will also be good the increase the distance and visual separation between the sub-figures in each figure throughout the paper. | Presentation | *Figures: Would be good to increase the size of the plots in Figure 3b. It will also be good the increase the distance and visual separation between the sub-figures in each figure throughout the paper.*
We have now increased the size of the plots in Figure 3b, splitting panel 3b in to 3b and 3c. We have also worked to increase the amount of visual separation between panels in Figure 2, but note for some figures this was difficult due to space constraints. | CRP |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | KFdqHGoMzN | Even though the authors explained in the paper, I didn't like the fact that the proposed GXMFGs have no baseline competitors to compare against. While I agree that one could argue on the contrary that the ability to work with sparse graphs is precisely the unique advantage of GXMGFs, I think that the authors should at least spend some efforts to discuss (if empirical comparison with LPGMFG is indeed unsuitable) how GXMFGs would compare with LPGMFG and GMFG in practice. | Experiments | Thank you for bringing up the important topic of an empirical comparison with existing approaches. As you mention, the ability of GXMFGs to work with sparse realistic graphs can be seen as a major conceptual advantage over existing approaches such as GMFGs and LPGMFGs. We agree that there should be an empirical comparison complementing the discussion on conceptual differences in the first version of the paper.
Thus, we have added an empirical comparison of GXMFGs and LPGMFGs on the eight real world networks and three tasks from the first paper version. Due to space constraints, we mention this new comparison in the main text and provide detailed results in the appendix, see Table 3 and the corresponding discussion at the end of the Appendix of the updated paper. The empirical comparison does not include GMFGs because they can be seen as a subclass of LPGMFGs and therefore will not yield better results than the LPGMFG framework.
Since LPGMFGs are not able to depict finite degree agents in the limiting model, all agents in the LPGMFG simulation follow the policy learned for infinite degree agents. The overall result of Table 3 is that our hybrid graphex learning approach clearly outperforms LPGMFGs across all networks and tasks. On some networks and tasks, the improvement is considerable but relatively moderate, for example on Prosper RS (error of 2.80 for LPGMFG vs. 1.58 in GXMFG). On other problems, our approach yields results that are many times better than those of the LPGMFG framework, such as Flickr SIS (16.90 vs. 3.57), Brightkite RS (14.69 vs. 3.37), and Hyves SIR (39.94 vs. 10.06). Thus, the empirical results provide strong evidence that the conceptual advantages of GXMFGs also yield a remarkably better empirical performance compared to previous methods such as LPGMFGs. For more details, please see the updated paper. | CRP |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | KFdqHGoMzN | In Figure 3a, it looks like the curves are diverging rather than converging as k increases? Are the curves coloured correctly? | Presentation | The colors in Figure 3a are correct and all curves converge as the graph size $\nu$ increases. For higher $k$ the curves tend to converge slower than for low $k$ which might seem counter-intuitive. The reason for the different convergence speed is that if we sample finite graphs from the power law graphex, with high probability they will have far more nodes with degree $k=2$ than with degree $k=6$. The relatively low number of high degree nodes reflects the power law nature of the network, where many nodes have low degrees and relatively few nodes have high degrees. As a consequence, a larger graph size is required to obtain a sufficiently large, representative subset of nodes with $k=6$ that yields a good mean field estimate. In contrast, relatively small sampled graphs already have numerous vertices with $k=2$ such that the mean field convergence is observed earlier for this subset of nodes. | DWC |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | Q5LsF6DBEl | Providing an intuitive explanation for assumptions 1(b) and 1(c) would greatly enhance the paper's overall readability and accessibility. | Writing | Thank you for the valuable suggestion! To increase the accessibility of Assumptions 1 b) and c), we have added a more detailed explanation for the respective assumptions in the updated paper draft. The intuitive interpretation of Assumption 1 b) is that it describes the behavior of $\xi_W$ at infinity. More specifically, for $\sigma \in (0,1)$ Assumption 1 b) states that $\xi_W (\alpha)$ is approximately a power function $\alpha^{- \sigma}$ for large $\alpha$ which aligns with our goal to generate power law graphs. On the other hand, Assumption 1 c) is a technical assumption from the graph theory literature and has no obvious intuitive explanation to the best of our knowledge. Nevertheless, it is especially fulfilled by all separable graphexes which are characterized by the accessible property $W (\alpha, \beta) = \xi_W (\alpha) \xi_W (\beta) / \bar{\xi}_W $. Combining these two findings, one can see that Assumption 1 is especially satisfied by the separable power law graphex used in our paper. We hope that the added intuition and explanation increases both readability and accessibility. | CRP |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | Q5LsF6DBEl | While the paper assumes finite state and action spaces, it may be beneficial to explore whether the proposed approach can be extended to scenarios with infinite action spaces. | Theory | In our opinion, it is worthwhile to extend the GXMFG approach to continuous state and action spaces (and also continuous time) to increase the generality of the learning method. Since the extension to a continuous setting will require different and adapted mathematical and algorithmic approaches, it is outside the scope of our paper. We have mentioned the promising research direction of defining a continuous version of GXMFGs in the conclusion of the updated draft. Thanks for the idea! | DRF |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | Q5LsF6DBEl | Including the code for the simulations would enhance reproducibility. | Reproducibility | We have uploaded the code and will add a link in the final, deanonymized version of the paper. | CRP |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | gBU7uwifxA | The model is quite abstract at some places. For the theoretical results, they are mostly about the analysis of the game and I am not sure how relevant they are for this conference (although they are certainly interesting for a certain community). It might have been more interesting to focus more on the learning algorithm. | Theory | A: The analysis of the game provides the key insights into complex agent systems that are necessary to eventually provide the equilibrium learning algorithm. Only through a thorough understanding of the core periphery structure and its implications it is possible to state a principled equilibrium learning approach. Therefore, we do believe that the understanding of these complex agent networks and the resulting learning algorithm are relevant for this conference. Nevertheless, we agree that there are various open challenges and hope that our GXMFG learning approach provides a useful framework for future research. | DWC |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | gBU7uwifxA | Assumption 2 as used for instance in Lemma 1 does not seem to make much sense (unless I missed something): What is \( \boldsymbol{\pi} \)? We do not know in advance the equilibrium policy and even if we did, we would still need to define the set of admissible deviations for the Nash equilibrium. Could you please clarify? | Theory | A: We completely agree with the reviewer: the policy $\boldsymbol \pi$ should not be part of Assumption 2 and (of course) we do not assume the equilibrium policy to be known in advance; the set of admissible deviation policies from the Nash equilibrium is not restricted. Instead, we have added the Lipschitz condition (up to a finite number of discontinuities) on $\boldsymbol \pi$ to the respective theoretical results, such as Theorems 1-4. Thank you for spotting the mistake in Assumption 2. We have corrected it in the updated paper version. | CRP |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | gBU7uwifxA | Algorithm 1, line 14: Could you please explain or recall what is \( Q^{k, \mu^{\tau_{\mathrm{max}}}} \)? | Writing | A: In Algorithm 1, $Q^{k, \mu^{\tau_{\max}}}$ is defined similar to $Q_{i,t}^{\pi, \mu}$, except that we substitute the reward function $r$ by $r'_k$ and use the transition kernel $P'_k$ instead of $P$. We have added the definition in the updated paper. | CRP |
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach | zwU9scoU4A | ICLR-2024 | gBU7uwifxA | Some typos: Should the state space be either \( \mathcal{X} \) or \( X \) (see section 3 for instance)? Does \( \mathbb{G}^\infty_{\alpha,t} \) depend on \( \boldsymbol{\mu} \) or not (see bottom of page 4)? Etc. | Writing | A: The state space should be $\mathcal{X}$. We used $X$ in the beginning of Section 3 to denote an arbitrary finite set. Thanks for pointing out the ambiguous notation; we have corrected it in the updated paper version.
A: In our framework, the neighborhood distribution $\mathbb{G}^\infty_{\alpha, t}$ always depends on the mean field $\boldsymbol \mu$. For notational convenience, we sometimes drop the dependence on $\boldsymbol \mu$ in the notations. The lack of any comment on dropping the dependence in the notation led to understandable confusion. We have added an explanation to the updated draft, thanks. | CRP |
README.md exists but content is empty.
- Downloads last month
- 4