Dataset Viewer (auto-converted to Parquet)

Columns:
- paper_id: string, lengths 10 to 19
- venue: string, 15 classes
- focused_review: string, lengths 176 to 10.5k
- point: string, lengths 42 to 623
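The dataset's repository path is not shown in this preview, so the loading sketch below uses a placeholder name ("org/peer-review-points") that would need to be replaced with the actual Hub path; it assumes the standard Hugging Face `datasets` library.

```python
# Minimal loading sketch; "org/peer-review-points" is a placeholder, not the real dataset path.
from datasets import load_dataset

ds = load_dataset("org/peer-review-points", split="train")

# Each row carries the four columns listed above.
row = ds[0]
print(row["paper_id"], row["venue"])
print(row["point"])
```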
ACL_2017_433_review
ACL_2017
- The annotation quality seems to be rather poor. They performed double annotation of 100 sentences and their inter-annotator agreement is just 75.72% in terms of LAS. This makes it hard to assess how reliable the estimate of the LAS of their model is, and the LAS of their model is in fact slightly higher than the inte...
- It would be helpful if you provided glosses in Figure 2.
ACL_2017_67_review
ACL_2017
The main weaknesses for me are evaluation and overall presentation/writing. - The list of baselines is hard to understand. Some methods are really old and it doesn't seem justified to show them here (e.g., Mpttern). - Memb is apparently the previous state-of-the-art, but there is no mention to any reference. - While it...
- Memb is apparently the previous state-of-the-art, but there is no mention to any reference.
ICLR_2023_1833
ICLR_2023
. Strengths first: The paper is one of the first to give an empirical study of quantization of MoE networks. It would be a good manual/starting point for practitioners in the field. Weaknessess: Thoroughness: Despite having good results and having investigated several quantization options, one would still have question...
2) why not to consider finer grouping for quantization instead of per-tensor and per-channel?
NIPS_2022_1402
NIPS_2022
1. The representation could be further improved. For example, there are both “unseen classes” and “unseen-classes” in the paper, this should be unified. 2. It would be better to study the impact of the ratio of unseen classes. For example, how the performance varies with different ratios of unseen classes unlabeled exa...
2. It would be better to study the impact of the ratio of unseen classes. For example, how the performance varies with different ratios of unseen classes unlabeled examples.
ACL_2017_371_review
ACL_2017
- The description is hard to follow. Proof-reading by an English native speaker would benefit the understanding - The evaluation of the approach has several weaknesses - General discussion - In Equation 1 and 2 the authors mention a phrase representation give a fix-length word embedding vector. But this is not used in ...
- Why are you using GRU for the Pyramid and LSTM for the sequential part? Is the combination of two architectures a reason for your improvements?
NIPS_2018_700
NIPS_2018
Weakness: The major quality problem of this paper is clarity. In terms of clarity, there are several confusing places in the paper, especially in equation 9, 10, 11, 12. 1) What is s_{i,j} in these equations? In definition 1, the author mentions that s_{i,j} denotes edge weights in the graph, but what are their values ...
6) In line 135, the author says "Initially the network only has a few active vertices, due to sparsity." How is "active vertices" defined here?
NIPS_2021_1731
NIPS_2021
I am not quite convinced by the motivation of the proposed method as a discrete analogue of the continuous Beltrami flow. The “structural assumptions on the diffusivity” a seem to not be satisfied scaled dot product attention in BLEND. What is the point of all the theoretical motivation if the actual construction viola...
4. I read the paper in detail. The fact that their theory does not seem to be applicable to the used model, is not honestly mentioned in the limitations. To the contrary, the vagueness of unspecified 'structural assumptions', that are only given in the appendix, makes this theoretical limitation hard to find. I think t...
NIPS_2019_263
NIPS_2019
--- Weaknesses of the evaluation in general: * 4th loss (active fooling): The concatenation of 4 images into one and the choice of only one pair of classes makes me doubt whether the motivation aligns well with the implementation, so 1) the presentation should be clearer or 2) it should be more clearly shown that it do...
* How hard is it to find examples that illustrate the loss principles clearly like those presented in the paper and the supplement? Weaknesses of the proposed FSR metric specifically:
NIPS_2016_450
NIPS_2016
. First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, ...
* Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy?
ICLR_2023_2698
ICLR_2023
1) The proposed method can be viewed as a direct combination of GCN and normalizing flow, with the ultimate transformed distribution, which is Gaussian in conventional NF, replaced by Gaussian mixture distribution, encouraging the latent representation to be more clustered. Technically, there is no enough new stuffs he...
1) The proposed method can be viewed as a direct combination of GCN and normalizing flow, with the ultimate transformed distribution, which is Gaussian in conventional NF, replaced by Gaussian mixture distribution, encouraging the latent representation to be more clustered. Technically, there is no enough new stuffs he...
ICLR_2022_2470
ICLR_2022
Weakness: The idea is a bit simple -- which in of itself is not a true weakness. ResNet as an idea is not complicated at all. I find it disheartening that the paper did not really tell readers how to construct a white paper in section 3 (if I simply missed it, please let me know). However, the code in the supplementary...
1). Only projection head (CNN layers) are affected but not classification head (FCN layer);
ICLR_2023_1418
ICLR_2023
Weakness: 1. Regarding the whole framework, which part is vital for using CLIP to guide weakly supervised learning? I think the discussion is necessary (but I didn’t find clear answer in the discussion) and help this paper to be distinguished from the other related work. 2. The knowledge bank is based on classes appear...
1. Regarding the whole framework, which part is vital for using CLIP to guide weakly supervised learning? I think the discussion is necessary (but I didn’t find clear answer in the discussion) and help this paper to be distinguished from the other related work.
NIPS_2020_420
NIPS_2020
**Exposition** - I think the paper contains interesting ideas with good empirical results. However, the exposition of the method is not easy to follow and require significant revision. Here are a couple of examples that were unclear. - L6: “coherent HOI.” What does it mean to have “coherent HOI”? What are the incoheren...
- The analogy between HOI analysis and Harmonic analysis is interesting at first glance, but the link is quite weak. In the problem contexts, there is only two “basis” (human and object) to form an HOI. The decomposition/integration steps introduced in this paper also do not have a close connection with the Fourier ana...
NIPS_2020_1016
NIPS_2020
1. The PFQ algorithm introduced many hyperparameters, and I am curious how the authors chose the parameters \epsilon and \alpha. The authors simply claimed these parameters are determined from the four-stage manual PFQ from Figure 1, and then claim that FracTrain is insensitive to hyperparameters. First, the precision ...
3. Dynamic precision control during training might only show meaningful performance gains on bit-serial accelerators. However, most existing ML accelerators tend to use bit-parallel fixed-point numbers, this might restrict the implications of the proposed methodology.
ICLR_2021_2674
ICLR_2021
Though the training procedure is novel, a part of the algorithm is not well-justified to follow the physics and optics nature of this problem. A few key challenges in depth from defocus are missing, and the results lack a full analysis. See details below: - the authors leverage multiple datasets, including building the...
- fig 8 shows images with different focusing distance, but it only shows 1m and 5m, which both exist in the training data. How about focusing distance other than those appeared in training? does it generalize well?
NIPS_2021_28
NIPS_2021
The paper is overall interesting, well-written and makes a valuable contribution. I do, however, have some comments for the authors to consider (which in my mind, are potential limitations of the study): - Comparison of the proposed unsupervised method with the supervised baseline is not suggestive because of the absen...
- The authors should also consider defining content and style more broadly as it relates to their specific neural application (e.g., as in Gabbay &Hosehn (2018)) where style is instance-specific(?) and content includes information that can be transferred among groups. More specifically, since their model is not sequent...
NIPS_2022_2572
NIPS_2022
1. The analysis of vit quantification could be explained in depth: (a) this paper argues that `a direct quantization method leads to the information distortion’ in Line 45. The approach proposed in this paper does not improve this phenomenon either (e.g. 1.2268 in Fig1(b) v.s. 1.3672 in Fig5(b) for Block.3. The varianc...
1. The analysis of vit quantification could be explained in depth: (a) this paper argues that `a direct quantization method leads to the information distortion’ in Line 45. The approach proposed in this paper does not improve this phenomenon either (e.g. 1.2268 in Fig1(b) v.s. 1.3672 in Fig5(b) for Block.3. The varianc...
NIPS_2018_76
NIPS_2018
- A main weakness of this work is its technical novelty with respect to spatial transformer networks (STN) and also the missing comparison to the same. The proposed X-transformation seems quite similar to STN, but applied locally in a neighborhood. There are also existing works that propose to apply STN in a local pixe...
- A main weakness of this work is its technical novelty with respect to spatial transformer networks (STN) and also the missing comparison to the same. The proposed X-transformation seems quite similar to STN, but applied locally in a neighborhood. There are also existing works that propose to apply STN in a local pixe...
NIPS_2016_386
NIPS_2016
, however. For of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which ...
* L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes?
ICLR_2022_1824
ICLR_2022
. However, I struggle to see the novelty in the author’s approach: spikes and local connections alone have been tried many times (Tab.3 and also [1]). Training the output layer (rather than the whole network) with an RL-based rule is somewhat new, but I find this approach unreasonable for the following reasons: The las...
8. Eq. 12 is confusing. Where does the reward come from at each trial? Is one of the r_i taken from Eq. 11? Explaining the network model in Sec. 4.2 with equations would greatly improve clarity. [1] https://www.sciencedirect.com/science/article/pii/S0893608019301741 [2] https://www.frontiersin.org/articles/10.3389/fnin...
NIPS_2022_2635
NIPS_2022
Weakness: The writing of this paper is roughly good but could be further improved. For example, there are a few typos and mistakes in grammar: 1. Row 236 in Page 4, “…show its superiority.”: I think this sentence should be polished. 2. Row 495 in Supp. Page 15: “Hard” should be “hard”. 3. Row 757 in Supp. Page 29: “…tr...
4. Row 821 in Supp. Page 31: “Fig.7” should be “Fig.12”. Last but not least, each theorem and corollary appearing in the main paper should be attached to its corresponding proof link to make it easy for the reader to follow. The primary concerns are motivation, methodology soundness, and experiment persuasion. I believ...
ACL_2017_818_review
ACL_2017
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technica...
248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them?
mhCNUP4Udw
ICLR_2025
1 The motivation for incorporating vision modality into MPNNs for link prediction should be better clarified and discussed. Why is this design effective? Any theoretical evidence? Maybe a dedicated section for this discussion could be valuable. 2 The counterpart methods used for experimental comparison seem not SOTA en...
3 Minor Issues: Ln 32 on Page 1, ‘Empiically’ should be ‘Empirically’
NIPS_2022_1667
NIPS_2022
1. The proposed invariant learning module (Sec. 4.2) focuses on mask selection and raw-level features. The former framework (Line 167-174, Sec. 4) seems not limited to raw-level selection. There is also a discussion about representation learning in the appendix. I think the feature selection, presented in Section 4.2, ...
1. The proposed invariant learning module (Sec. 4.2) focuses on mask selection and raw-level features. The former framework (Line 167-174, Sec. 4) seems not limited to raw-level selection. There is also a discussion about representation learning in the appendix. I think the feature selection, presented in Section 4.2, ...
NIPS_2018_591
NIPS_2018
Weakness: - some details are missing. For example, how to design the rewards is not fully understandable. - some model settings are arbitrarily set and are not well tested. For example, what is the sensitivity of the model performance w.r.t. the number of layers used in GCN for both the generator and discriminator?
- some details are missing. For example, how to design the rewards is not fully understandable.
ICLR_2023_4599
ICLR_2023
Lack of clarity. The paper lacks important information to reproduce the results: Overall, the paper lacks a clear high-level explanation of the proposed method. In particular, I think Fig. 2 is very hard to parse and fails to communicate the intuition or high-level idea of the proposed method. The section b) of Fig. 2 ...
1). Second, there are other parameters that can affect the performance (e.g., L and L_max) of the proposed approach.
NIPS_2017_434
NIPS_2017
--- This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance: 1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not ab...
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
ARR_2022_143_review
ARR_2022
1. [ Double edge point] It's an incremental improvement to K-NN based MT approach, little novelty but large engineering and execution effort, backed by good experimental design. This weakness is a little nitpicking esp when I personally execution (replicable) beats idea (novelty); but if there's no code release is prod...
1. [ Double edge point] It's an incremental improvement to K-NN based MT approach, little novelty but large engineering and execution effort, backed by good experimental design. This weakness is a little nitpicking esp when I personally execution (replicable) beats idea (novelty); but if there's no code release is prod...
xCFdAN5DY3
ICLR_2025
1. The paper falls short of establishing a compelling case for Prithvi WxC as a foundation model for weather or climate. The practical significance and advantages of this approach remain inadequately demonstrated: a.) While foundation models typically excel at zero-shot performance and data-efficient fine-tuning across...
- The selling point for ML-based emulators of climate model parametrizations is often their computational cheapness. Thus, the runtime of Prithvi WxC should be discussed. Given the large parameter count of Prithvi WxC it might be important to note its runtime as a limitation for these kinds of applications.
NIPS_2022_2182
NIPS_2022
Weakness: 1. Contribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing. 2. There is a gap between the proposed metric and method...
3. The novelty of the idea is not enough. In addition to the limitations pointed out above, both new metric and method are relatively straightforward.
ICLR_2021_394
ICLR_2021
which lead me to recommend against acceptance. In no particular order: The crucial "unseen is forbidden" hypothesis is vague and seems to be a bit of a strawman. 2) The framing of the paper seems to oversell the method in a way that makes the contribution less clear. 3) The writing is not very clear. 4) The experiments...
2) The framing of the paper seems to oversell the method in a way that makes the contribution less clear.
NIPS_2020_309
NIPS_2020
1. The motivation is conceptually described, and an example could help reader understand how the hierarchical structure benefits the document representation. 2. A standalone literature review part could be better. 3. The model description could be improved, e.g., the generative process is in detail but presenting such ...
3. The model description could be improved, e.g., the generative process is in detail but presenting such process in separate steps should be better for understanding, too many symbols and a notation table could be better.
ARR_2022_18_review
ARR_2022
1. The exposition becomes very dense at times leading to reduced clarity of explanation. This could be improved. 2. No details on the. multi-task learning mentioned in Section 4.4 are available. 3. When generating paraphrases for the training data, it is unclear how different the paraphrases are from the original sente...
3. When generating paraphrases for the training data, it is unclear how different the paraphrases are from the original sentences. This crucially impacts the subsequent steps because the model will greatly rely on the quality of these paraphrases. If the difference between the paraphrases and the original sentence is n...
ARR_2022_252_review
ARR_2022
- The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant that other data augmentation methods, it still presents a significant cost overhead....
- Identifying rationales is not a simple problem, specifically for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see, perhaps another color or a bigger font could help in highlighting the human identified r...
jhdVt7rC8k
EMNLP_2023
I don’t find significant flaws in this paper. There are some minor suggestions: 1. The VideoQA benchmarks in the paper are all choice-based. It would be better to choose some generation-based VideoQA datasets like ActivityNet-QA to increase the diversity. 2. I believe the Flipped-QA is a general framework for various g...
2. I believe the Flipped-QA is a general framework for various generative VideoQA models. However, the authors only apply this framework to LLM-based models. It would be better to further verify the effectiveness and universality to non-LLM-based models like HiTeA and InternVideo.
NIPS_2022_1440
NIPS_2022
• The writing could be improved. It took me quite a lot of effort to go back and forth to understand the main idea and the theoretical analysis of the paper. • Using neural networks as surrogate models will certainly help to improve the model's accuracy, however, I'm wondering how the hyper-parameters of these NN surro...
• The writing could be improved. It took me quite a lot of effort to go back and forth to understand the main idea and the theoretical analysis of the paper.
4vPVBh3fhz
ICLR_2024
1. Theorem 3.2 lacks a detailed proof procedure, although the authors provide an interesting discussion on the confusion matrix in section 3.3. Please let me know where the proof is if I missed. 2. All experiments are conducted on small-scale datasets where the number of classes is small, but it is always desired to in...
3. The proposed method primarily builds upon a combination of existing methods (i.e., Clopper-Pearson intervals [1], Gaussian elimination [2]) and it doesn't present significant theoretical novelty. I am willing to improve my score, if the authors can well address these concerns. [1] Charles J Clopper and Egon S Pearso...
ICLR_2023_516
ICLR_2023
Weakness: Although the motivation is innovative in constructing the pretraining model for the UI modeling field, the overall pretraining pipeline may lack appropriate innovation and some aspects are similar to Flamingo, e.g., the vision-language model architecture, the evaluation method on the multi-task and few-shot l...
① Can the text input is concatenated by the four text elements of an object?
NIPS_2021_1251
NIPS_2021
- Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision maker does care about the noise; rather the objec...
- The MV objective is nice for the proposed UCB-style algorithm and theoretical work, but for evaluation VaR and CVaR also are important considerations Writing:
ICLR_2023_1511
ICLR_2023
Weakness_ - The paper could do better to first motivate the "Why" (why do we care about what we are going to be presented). - Similarly, it is lacking a "So What" on the bounds provided, which are often just left there as final statements, without an analysis that explains whether 1) they are (likely to be) tight and 2...
- The paper could do better to first motivate the "Why" (why do we care about what we are going to be presented).
ARR_2022_130_review
ARR_2022
1. Section 4 (Models) misses some details about the proposed model. For example, what is the exact inference procedure over $O_p$ (personally I prefer equations over textual descriptions) 2. Evaluation metrics: This subsection is difficult to read and not rigorous. Comments & questions: - Abstract: The sentence in line...
- Abstract: The sentence in lines 12-17 ("After multi-span re-annotation, MultiSpanQA consists of over a total of 6,0000 multi-span questions in the basic version, and over 19,000 examples with unanswerable questions, and questions with single-, and multi-span answers in the expanded version") is cumbersome and can be ...
hkWHdI8ss5
ICLR_2024
1. Spending 1 hour to optimize a coarse mesh from a domain-specific model for furniture is not necessary. For domain-specific single-image 3D reconstruction, there are many existing fast and robust models—for example, Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction. 2. No technical n...
3. The domain-specific model is trained on Pix3D. And the experiments are conducted on Pix3D. Such comparisons to those zero-shot single-image 3D reconstruction models are even more unfair.
ICLR_2021_2824
ICLR_2021
Weakness: While the authors claimed that they challenged the hypothesis by Kang et al. that the learning of feature representation and classifier should be completely decoupled in long-tail classification, from my perspective this paper is a nature extension of Kang et al. Similar to Kang et al., this paper further dem...
3) shows that the proposed approach does not outperform or is even worse than Decouple [Kang et al.] for the overall performance. Also, Table 5 shows the trade-off between head and tail categories. But similar trade-off has not been fully investigated for the baselines; for example, by changing the hyper-parameters in ...
ICLR_2021_2926
ICLR_2021
and suggestions: 1. It is not clear to me if the warm-up phase makes a difference in performance on larger, more realistic datasets like Clothing1M. More careful analysis of how the warm-up phase affects the sample separation in SSL versus a fully supervised setting would have been useful, including experiments on CIFA...
2. Additional experiments on realistic noisy datasets like WebVision would have provided more support for C2D.
NIPS_2019_854
NIPS_2019
weakness I found in the paper is that the experimental results for Atari games are not significant enough. Here are my questions: - In the proposed E2W algorithm, what is the intuition behind the very specific choice of $\lambda_t$ for encouraging exploration? What if the exploration parameter $\epsilon$ is not include...
- In the proposed E2W algorithm, what is the intuition behind the very specific choice of $\lambda_t$ for encouraging exploration? What if the exploration parameter $\epsilon$ is not included? Also, why is $\sum_a N(s, a)$ (but not $N(s, a)$) used for $\lambda_s$ in Equation (7)?
tj4a1JY03u
ICLR_2024
1. The conclusions are a bit obvious - that higher resolution inputs and more specialized training data improve LLaVA's OCR performance. 2. The most important contribution of the paper is the collected dataset. It succeeds in showing the data improves LLaVA's OCR capabilities, but does not demonstrate it is superior to...
3. The evaluation is limited, mostly relying on 4 OCR QA datasets. As the authors admit in Fig 4(5), this evaluation may be unreliable. More scenarios like the LLaVA benchmark would be expected, especially in ablation studies.
NIPS_2016_386
NIPS_2016
, however. For of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which ...
* L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0?
NIPS_2019_1276
NIPS_2019
* Really only one real takeaway/useful experiment from the paper, which is that disentangling is sample efficient for this strange set of upstream tasks. * I have a lot of problems with these abstract visual reasoning tasks. They seem a bit unintuitive and overly difficult (I have a lot of trouble solving them). Having...
* I have a lot of problems with these abstract visual reasoning tasks. They seem a bit unintuitive and overly difficult (I have a lot of trouble solving them). Having multiple rows and having multiple and different factors changing between each frame is very confusing and it seems like it would be hard to interpret how...
MMrqu8SD6y
EMNLP_2023
- Weak supervision could be better evaluated - eg, how realistic are the evaluated tweets? The prompt requires "all of the structured elements for perspectives to be present in the generated tweets", which doesn't see the most realistic. The generation of authors is also not realistic ("[author] embeddings are initiali...
- Weak supervision could be better evaluated - eg, how realistic are the evaluated tweets? The prompt requires "all of the structured elements for perspectives to be present in the generated tweets", which doesn't see the most realistic. The generation of authors is also not realistic ("[author] embeddings are initiali...
KUpUO7aSSg
ICLR_2025
1. The experiment appears to be somewhat limited. While the proposed method is tailored for agricultural settings, I would recommend the authors to transfer it to natural environments, such as cityscapes, to compare its effectiveness. 2. The method proposed in this paper does not seem to specifically address issues pre...
3. There is a lack of essential visualization of intermediate processes and comparisons.
9TpgFnRJ1y
ICLR_2025
1. Similar to other generator-based explanation frameworks, the transparency of the explanation process itself is limited due to the black-box nature of the neural-network-implemented generator. 2. The flexibility of the proposed method is another concern, as the delivered explanations appear to be model-specific. 3. T...
3. The expected counterfactual violates $\mathcal{P}_2$ stated in Definition 1.
NIPS_2020_1776
NIPS_2020
1. I am concerned about the importance of this result. Since [15] says that perturbed gradient descent is able to find second-order stationary with almost-dimension free (with polylog factors of dimension) polynomial iteration complexity, it is not surprising to me that the decentralized algorithm with occasionally add...
1. I am concerned about the importance of this result. Since [15] says that perturbed gradient descent is able to find second-order stationary with almost-dimension free (with polylog factors of dimension) polynomial iteration complexity, it is not surprising to me that the decentralized algorithm with occasionally add...
NIPS_2017_217
NIPS_2017
- The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. - "Embedding" is an overloaded word for a scalar value that represents object ID. - The model of [31] is used in a post-processing stage to refine the detection. Ideally, the proposed model should be end-to-end witho...
- Keypoint detection results should be included in the experiments section.
NIPS_2017_351
NIPS_2017
1. The approach mentions attention over 3 modalities – image, question and answer. However, it is not clear what attention over answers mean because most of the answers are single words and even if they are multiword, they are treated as single word. The paper does not present any visualizations for attention over an...
3. Since ternary potential seems to be the main factor in the performance improvement of the proposed model, I would like the authors to compare the proposed model with existing models where answers are also used as inputs such as Revisiting Visual Question Answering Baselines (Jabri et al., ECCV16).
ICLR_2022_1794
ICLR_2022
1 Medical imaging are often obtained in 3D volumes, not only limited to 2D images. So experiments should include the 3D volume data as well for the general community, rather than all on 2D images. And the lesion detection is another important task for the medical community, which has not been studied in this work. 2 Mo...
1 For the grid search of learning rate, is it done on the validation set? Minor problems:
ACL_2017_433_review
ACL_2017
- The annotation quality seems to be rather poor. They performed double annotation of 100 sentences and their inter-annotator agreement is just 75.72% in terms of LAS. This makes it hard to assess how reliable the estimate of the LAS of their model is, and the LAS of their model is in fact slightly higher than the inte...
- Table A2: There seem to be a lot of discourse relations (almost as many as dobj relations) in your treebank. Is this just an artifact of the colloquial language or did you use "discourse" for things that are not considered "discourse" in other languages in UD?
FEpAUnS7f7
ICLR_2025
**Originality** **[Minor]** The idea to use ML tools to assist users in interpreting privacy policies is not new—in this sense the contribution of this study is marginal. Still, there is certainly value in evaluating this idea using the most recent large language models, and there is certainly value in conducting a stu...
- L393: What about racial, economic diversity in the sample? How well might these results generalize to other groups, especially marginalized groups?
ICLR_2021_1505
ICLR_2021
1. The novelty is very low. Stage-wise and progressive training have been proposed for such a long time, they have been used everywhere. The way the authors use them don’t really exhibit anything novel to me. 2. The resolution of the outputs (128x128) is lower than prior works (e.g. DVD-GAN has 256x256 outputs). Since ...
3. Output quality is reasonable, but still far from realistic. Recent GAN works have shown amazing quality in synthesized results, and the bar has become much higher than a few years ago. In that aspect, I feel there’s still much room for improvement for the result quality. Overall, given the limited novelty, low resol...
ICLR_2021_491
ICLR_2021
I am very concerned about the experiment sections. To my understanding, Figure 2/Section 4.1 are factually incorrect. In particular, it appears that the soft-labels technique does essentially the same, or better than, CRM, across all fronts. In detail, a) In Figure 2(a), the leftmost softlabel point is equal to or bett...
1), soft labels is essentially on top of CRM and Cross entropy (for iNaturalist19, it looks like a higher beta value would be directly on top, it's unclear why the authors did not extend the curve further) These results, at first blush, seem fairly impressive. For the leftmost plots, I am concerned that the authors are...
ACL_2017_543_review
ACL_2017
- Experimental results show only incremental improvement over baseline, and the choice of evaluation makes it hard to verify one of the central arguments: that visual features improve performance when processing rare/unseen words. - Some details about the baseline are missing, which makes it difficult to interpret the ...
- The simple/traditional experiment for unseen characters is a nice idea, but is presented as an afterthought. I would have liked to see more eval in this direction, i.e. on classifying unseen words - Maybe add translations to Figure 6, for people who do not speak Chinese?
t1nZzR7ico
ICLR_2025
1. The presentation in the experiment section is not upto par with ICLR. The figures and text should be arranged properly. 2. The idea is similar to treating VLM and LLM as two agents helping to jailbreak the T2I diffusion model. How is approach different from [1]. 3. 1. VioT dataset: 20 images in each of the 4 catergo...
3.1. VioT dataset:20 images in each of the 4 catergoreis were provided. However I feel the number of images is small to text the validity of the approach.
ACL_2017_636_review
ACL_2017
- Only applied to English NER--this is a big concern since the title of the paper seems to reference sequence-tagging directly. - Section 4.1 could be clearer. For example, I presume there is padding to make sure the output resolution after each block is the same as the input resolution. Might be good to mention this. ...
- I think an ablation study of number of layers vs perf might be interesting. RESPONSE TO AUTHOR REBUTTAL: Thank you very much for a thoughtful response. Given that the authors have agreed to make the content be more specific to NER as opposed to sequence-tagging, I have revised my score upward.
NIPS_2022_2772
NIPS_2022
• The paper is hard to follow, and more intuitive explanations on the mathematical derivations are needed. Figure captions are lacking, and require additional explanations and legends (e.g., explain the colors in Fig. 2). Fig. 1 and 2 did not contribute much to my understanding, and I had to read the text few times ins...
• The paper is hard to follow, and more intuitive explanations on the mathematical derivations are needed. Figure captions are lacking, and require additional explanations and legends (e.g., explain the colors in Fig. 2). Fig. 1 and 2 did not contribute much to my understanding, and I had to read the text few times ins...
ICLR_2023_1463
ICLR_2023
Weakness: There is quite a bit of redundancy in the writing. e.g. the point that disentangled representations are better than entangled representations is made unnecessarily too many times. e.g. the first 6 lines of first paragraph in section 4.1 are not really needed in my opinion as that has already been said thrice ...
2) how sensitive are the empirical results to hyperparameter choices. This second point is especially crucial since wrong choices can conceivably wipe out whatever improvement is gained from this method. I will be willing to reconsider my rating if this particular issue is resolved.
NIPS_2021_671
NIPS_2021
and my questions about this paper: 1. The experiment of this paper is not sufficient. Firstly, there is no comparison with other data poison methods, especially with [1], which is very similar to the proposed one. 2. This work utilizes existing attack methods on a surrogate model. It is similar to use the transferabili...
2. This work utilizes existing attack methods on a surrogate model. It is similar to use the transferability of adversarial examples directly. The author needs to further claim the novelty and contribution of the proposed method.
NIPS_2022_139
NIPS_2022
1. The rates for the smooth case depend on the dimension d (not the rank as in the Lipschitz case). While the authors show tight lower bounds up to factors that depend on |w*|, this lower bound is probably obtained by taking rank=d and therefore the upper bound may not have tight dependence on the rank. It is important...
1. Text in table 1 is too small and hard to read 2. Algorithm 1: gradient symbol is missing in line 4 References: [AFKT21] Private Stochastic Convex Optimization: Optimal Rates in ℓ1 Geometry [BGN21] Non-Euclidean Differentially Private Stochastic Convex Optimization [LT18] Private selection from private candidates
ICLR_2021_309
ICLR_2021
I don’t have any serious complaints. The contribution is a tad narrow, but it makes progress on some tricky and difficult questions. The experiments also only produce corroborating evidence of CAD’s status as implicating causal variables, and we already know by construction that there is a causal aspect to these pertur...
2: I think you should be able to render (Wright et al., 1934; Figure 1) more naturally by using the [bracketed arguments] in \citep. ...Not sure how this plays with hyperref. P.
NIPS_2017_645
NIPS_2017
- The main paper is dense. This is despite the commendable efforts by the authors to make their contributions as readable as possible. I believe it is due to NIPS page limit restrictions; the same set of ideas presented at their natural length would make for a more easily digestible paper. - The authors do not quite di...
- The authors do not quite discuss computational aspects in detail (other than a short discussion in the appendix), but it is unclear whether their proposed methods can be made practically useful for high dimensions. As stated, their algorithm requires solving several LPs in high dimensions, each involving a parameter ...
NIPS_2019_1397
NIPS_2019
weakness of the manuscript. Clarity: The manuscript is well-written in general. It does a good job in explaining many results and subtle points (e.g., blessing of dimensionality). On the other hand, I think there is still room for improvement in the structure of the manuscript. The methodology seems fully explainable b...
9. The equation below Line 502: I think the '+' sign after \nu_j should be a '-' sign. In the definition of B under Line 503, there should be a '-' sign before \sum_{j=1}^m, and the '-' sign after \nu_j should be a '+' sign. In Line 504, we should have \nu_{X_i|Z} = - B/(2A). Minor comments:
NIPS_2018_606
NIPS_2018
Although the adjoint sensitivity method is an existing method, exposing this method to machine learning and computational statistics communities where, as far as I am aware it is not widely known about, is a worthwile contribution of this submission its own right. Given the ever increasing importance of AD in both comm...
* Does the ResNet in the experiments in section 7.1 share parameters between the residual blocks? If not a potentially further interesting baseline for the would be to compare to a deeper ResNet with parameter sharing as this would seem to be equivalent to an ODE net with a fixed time-step Euler integrator.
8VK9XXgFHp
EMNLP_2023
1. Poor figures (minor). Figures in this paper are not clear. I can not obtain the effectiveness of capturing fine-grained cross-entity interaction among candidates in comparison in Figure 1. 2. Poor motivation (major). The cross-encoder architecture is not "ignoring cross-entity comparison". It also "attends to all ca...
2. Poor motivation (major). The cross-encoder architecture is not "ignoring cross-entity comparison". It also "attends to all candidates at once" to obtain the final matching scores. Of course, it may be not so fine-grained.
NIPS_2017_53
NIPS_2017
Weakness 1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a billinear layer to combine representations, it should menti...
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
NIPS_2022_69
NIPS_2022
1. This work uses an antiquated GNN model and method, it seriously impacts the performance of this framework. The baseline algorithms/methods are also antiquated. 2. The experimental results did not show that this work model obviously outperforms other variant comparison algorithms/models. 3. The innovations of network...
1. This work uses an antiquated GNN model and method, it seriously impacts the performance of this framework. The baseline algorithms/methods are also antiquated.
NIPS_2020_1645
NIPS_2020
It's not clear that this is a global method and calling it this causes some confusion. While each explanation is given by the output of a single learned model rather solved for independently with its own optimization problem (which makes it more "global" in a sense), the explanations are still fundamentally local becau...
- Figure 1: It's unclear how the proposed method produces this type of explanation (which says "mutagens contain the NO2 group"). This seems like it requires "additional ad-hoc post-analysis ... to extract the shared motifs to explain a set of instances" [Line 48]. Perhaps this analysis is easier with the proposed meth...
NIPS_2021_422
NIPS_2021
Experimental results leave some questions open, i.e.: - One experiment to estimates the quality of uncertainty estimates measures how often the true feature importance lies within a 95% credible interval. However, the experiments uses pseudo feature importance because no true feature importance is available. The correc...
- One experiment to estimates the quality of uncertainty estimates measures how often the true feature importance lies within a 95% credible interval. However, the experiments uses pseudo feature importance because no true feature importance is available. The correctness of the pseudo feature importance relies on Prop ...
NIPS_2018_641
NIPS_2018
weakness. First, the main result, Corollary 10, is not very strong. It is asymptotic, and requires the iterates to lie in a "good" set of regular parameters; the condition on the iterates was not checked. Corollary 10 only requires a lower bound on the regularization parameter; however, if the parameter is set too larg...
4. ln. 182--184: Non-convexity may not be an issue for the SGD to converge, if the function Z has some good properties.
NIPS_2021_725
NIPS_2021
Comparing the occupational statistics computed by GPT2 vs those by the United States is very interesting and informative. However, the presentation on the methodology and the subsequent discussion is confusing to me. Particularly from section 3.4, I am not sure what “adj.” in equation (1) means and why “adj. Pred” is a...
2) the authors did not compare any models other than GPT2. Several sections of the paper read confusing to me. There is a missing citation / reference in Line 99, section 3.1. The notation \hat{D}(c) from Line 165, section 3.4 is unreferenced. The authors made great effort to acknowledge the limitations of their work.
Jszf4et48m
ICLR_2025
1. **Presentation of this work requires thorough improvements.**. - The authors should use `\citep` for most cases in the manuscript, which places the authors' names and the year in parentheses. The current use of `\cite` makes the manuscript cluttered and difficult to read. - The manuscripts contain many incoherent pa...
- What is the meaning of "$:\sigma_t^2=\alpha_t-\alpha_t^2$" in Equation (2)?
NIPS_2022_789
NIPS_2022
(-) It would be nice to show and discuss failure cases, or situations when the proposed approach does not outperform the others. Minor comments: table X, figure Y, section Z, etc. --> Table X, Figure Y, Section Z, etc. Eq. 3: x t --> x ( t ) Fix punctuation at the end of Eqs. 6 and 9 L71: R n --> R m L76: utilize L77: ...
6: there are two lines in red that should be in green SuppMat, L502: ϵ θ --> z θ SuppMat,L507: (4) --> Table 4 SuppMat, L509: (1) --> Algorithm 1
NuMemgzPYT
EMNLP_2023
1. Minor - I think a much more comprehensive and data-intensive analysis would improve this paper significantly but since it is a short paper this isn't a strong negative against what has been done by the authors. 2. I am unsure about the technical novelty of the approach - the paper appears to be simply doing prompt e...
1. Minor - I think a much more comprehensive and data-intensive analysis would improve this paper significantly but since it is a short paper this isn't a strong negative against what has been done by the authors.
ICLR_2021_2821
ICLR_2021
weakness: 1: AdpCLR_pre looks intuitive since it uses a pre-trained self-supervised model (simCLR); therefore, we can get a high-quality similarity measure in the pair of image embedding. While in the AdpCLR_full author mentioned that no pre-trained model is used then only we can ensure that P_same is the correct pair ...
3: The experimental settings are not mentioned properly; result reproducibility is critical using the provided information. The author does not provide the code.
End of preview.
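A single paper_id can appear in several rows, one per extracted review point (e.g. ACL_2017_433_review and NIPS_2016_386 above), so grouping points per paper or filtering by venue are typical first steps. A short sketch, assuming the `ds` object from the loading example above:

```python
# Sketch: collect all review points per paper, assuming `ds` from the loading example.
from collections import defaultdict

points_by_paper = defaultdict(list)
for row in ds:
    points_by_paper[row["paper_id"]].append(row["point"])

# Example: keep only ACL_2017 rows using the venue column.
acl_rows = ds.filter(lambda r: r["venue"] == "ACL_2017")
print(len(acl_rows), "rows from ACL_2017")
```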