FlyAOC: Agentic Ontology Curation Benchmark
FlyAOC is a benchmark for evaluating AI agents on end-to-end ontology curation from scientific literature. Given a Drosophila melanogaster gene symbol, systems search a corpus of full-text papers and produce structured annotations for gene function, expression, and synonyms.
This anonymous review package contains the benchmark inputs and labels, not model prediction dumps.
Files
| File | Description |
|---|---|
| `corpus.jsonl` | 16,898 full-text PMC-OA articles converted from BioC JSON. |
| `pmc_license_manifest.jsonl` | Per-record provenance and license metadata extracted from the BioC-PMC source files. |
| `benchmark.jsonl` | The 100 benchmark genes, with FlyBase IDs, symbols, Gene Snapshot summaries, PMCID retrieval sets, and canonical verified labels for all three tasks. |
| `ground_truth_hidden.jsonl` | Hidden-term benchmark variant labels. |
| `hidden_go_terms.json` | The GO terms hidden for the specificity-gap setting. |
| `ontologies/go-basic.obo` | Gene Ontology source used for Task 1 term lookup and semantic evaluation. |
| `ontologies/fly_anatomy.obo` | FlyBase anatomy ontology source used for Task 2 anatomy lookup and semantic evaluation. |
| `ontologies/fly_development.obo` | FlyBase developmental stage ontology source used for Task 2 stage lookup and semantic evaluation. |
| `croissant.json` | Croissant metadata with core and minimal Responsible AI fields. |
Data Schema
Each corpus.jsonl record contains:
- `pmcid`: PubMed Central identifier.
- `title`: article title.
- `abstract`: article abstract.
- `sections`: mapping from section type to paragraphs, using section keys such as `INTRO`, `METHODS`, `RESULTS`, `DISCUSS`, and `CONCL`.
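Under this schema, `corpus.jsonl` can be read with plain JSON Lines parsing. A minimal sketch (the sample record below is invented for illustration; only the field names come from the schema above):

```python
import json

def iter_corpus(path):
    """Yield corpus records from a JSON Lines file, one object per line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def section_text(record, key):
    """Join the paragraphs of one section (e.g. "RESULTS") into a single string."""
    return "\n\n".join(record.get("sections", {}).get(key) or [])

# Invented record shaped like the schema above:
record = {
    "pmcid": "PMC0000000",
    "title": "Example title",
    "abstract": "Example abstract.",
    "sections": {"INTRO": ["First paragraph.", "Second paragraph."],
                 "RESULTS": ["A result."]},
}
print(section_text(record, "INTRO"))
```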
Each benchmark.jsonl record contains one gene:
- `gene_id`, `gene_symbol`, `summary`, `pmcids`
- `task1_function`: Gene Ontology annotations with GO ID, qualifier, aspect, evidence reference, and corpus-grounding fields.
- `task2_expression`: expression annotations with anatomy/stage ontology IDs, assay metadata, evidence reference, and corpus-grounding fields.
- `task3_synonyms`: full-name and symbol synonyms with corpus-grounding fields.
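The `task3_synonyms` structure can be used to pull out the synonyms that are grounded in the provided corpus. A minimal sketch (the record below is invented; only the field names follow the schema above):

```python
def in_corpus_synonyms(record):
    """Collect full-name and symbol synonyms whose in_corpus flag is true."""
    syn = record.get("task3_synonyms") or {}
    hits = []
    for key in ("fullname_synonyms", "symbol_synonyms"):
        for entry in syn.get(key) or []:
            if entry.get("in_corpus"):
                hits.append(entry["synonym"])
    return hits

# Invented benchmark record (IDs and names are placeholders):
record = {
    "gene_id": "FBgn0000000",
    "gene_symbol": "ex",
    "task3_synonyms": {
        "current_fullname": "example fullname",
        "fullname_synonyms": [{"synonym": "an old fullname", "in_corpus": True}],
        "symbol_synonyms": [{"synonym": "ex-1", "in_corpus": False}],
    },
}
print(in_corpus_synonyms(record))
```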
Intended Use
FlyAOC is intended for evaluating systems that retrieve and synthesize structured biological annotations from a large literature corpus. The primary use case is benchmark evaluation of curation agents under controlled retrieval budgets. The dataset is not intended to train production biomedical systems without additional validation by domain experts.
Provenance and Annotation
The literature corpus was retrieved from the PubMed Central Open Access subset via the BioC-PMC API. Benchmark labels are derived from FlyBase release FB2025_04 and then annotated with corpus-grounding labels that indicate whether the supporting source is present in the provided corpus. The included ontology files define the controlled vocabularies used by the benchmark tools and semantic evaluation. The hidden-term variant removes selected GO terms from ontology search to test whether systems can describe missing concepts when no suitable ontology term is available.
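The ontology files are plain-text OBO. As a sketch, an id-to-name index for term lookup can be built with the standard library alone (this reads only `[Term]` stanzas and ignores obsolete flags and every other tag; it is not a full OBO parser):

```python
def obo_names(lines):
    """Map term IDs to names from OBO-formatted lines ([Term] stanzas only)."""
    names, term_id, in_term = {}, None, False
    for line in lines:
        line = line.strip()
        if line.startswith("["):  # new stanza header: [Term], [Typedef], ...
            in_term = (line == "[Term]")
            term_id = None
        elif in_term and line.startswith("id: "):
            term_id = line[len("id: "):]
        elif in_term and line.startswith("name: ") and term_id:
            names[term_id] = line[len("name: "):]
    return names

# Usage: obo_names(open("ontologies/go-basic.obo", encoding="utf-8"))
```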
License and Access
This package has mixed provenance and should not be treated as having a single blanket license.
- Literature records come from the PubMed Central Open Access subset. Article licenses vary by paper; see `pmc_license_manifest.jsonl` for per-record license metadata.
- FlyBase-derived benchmark labels and FlyBase ontology files are based on FlyBase data released under CC-BY 4.0.
- Gene Ontology files are released under CC-BY 4.0.
- Users are responsible for following the terms associated with each source record. Users with stricter licensing requirements may use the PMCID manifest to re-fetch source articles from PMC directly.
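As a sketch of that re-fetch workflow, the manifest can be filtered to records whose license meets a given policy before retrieval. The field names used here ("pmcid", "license") are assumptions, not a documented manifest schema; adjust them to the actual keys in `pmc_license_manifest.jsonl`:

```python
import json

def pmcids_with_license(lines, allowed):
    """Yield PMCIDs from JSON Lines manifest records whose license is in `allowed`.
    Field names "pmcid" and "license" are assumed, not guaranteed by the manifest."""
    for line in lines:
        rec = json.loads(line)
        if rec.get("license") in allowed:
            yield rec.get("pmcid")

# Usage: pmcids_with_license(open("pmc_license_manifest.jsonl"), {"CC BY"})
```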
Responsible AI Notes
Limitations
The benchmark covers 100 well-studied Drosophila genes and open-access literature available through PMC-OA. It does not represent all genes, all organisms, non-English literature, paywalled papers, unpublished curation evidence, or all valid biological annotations.
Biases
The corpus reflects publication and open-access biases in the scientific record. Well-studied genes, English-language publications, and journals indexed in PMC-OA are overrepresented. FlyBase labels reflect expert curation priorities and may lag newer literature.
Sensitive Information
The dataset contains scientific articles and biological database annotations. It is not designed to contain human-subject records, demographic attributes, or private personal information. Some source articles may include author names, affiliations, and acknowledgments as part of the public scholarly record.
Social Impact
The benchmark may help improve tools that assist biological database curation and scientific literature review. Misuse risks include over-trusting automated annotations or deploying systems without expert review. FlyAOC should be used as an evaluation resource, not as a substitute for professional biological curation.
Synthetic Data
The corpus and benchmark labels are not synthetic. Model-generated predictions are not included in this dataset package.
Loading
```python
from datasets import load_dataset

corpus = load_dataset("json", data_files="corpus.jsonl")["train"]
benchmark = load_dataset("json", data_files="benchmark.jsonl")["train"]
```

For review, the intended hosted dataset path is:

```python
from datasets import load_dataset

corpus = load_dataset("anonymous-042/flyaoc", data_files="corpus.jsonl")["train"]
benchmark = load_dataset("anonymous-042/flyaoc", data_files="benchmark.jsonl")["train"]
```