Dataset Card for Localized Narratives

Dataset Summary

Localized Narratives is a new form of multimodal image annotation connecting vision and language. Annotators describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, every single word in the description can be localized. This dense visual grounding takes the form of a mouse trace segment per word and is unique to this data. 849k images were annotated with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which are publicly available.

As of now, only the OpenImages subset is available, but feel free to contribute the other subsets of Localized Narratives!

OpenImages_captions is similar to the OpenImages subset, except that captions are grouped per image (an image can have multiple captions). For this subset, timed_caption, traces and voice_recording are not available.

Supported Tasks and Leaderboards

[More Information Needed]

Languages

[More Information Needed]

Dataset Structure

Data Instances

Each instance has the following structure:

{
  dataset_id: 'mscoco_val2017',
  image_id: '137576',
  annotator_id: 93,
  caption: 'In this image there are group of cows standing and eating th...',
  timed_caption: [{'utterance': 'In this', 'start_time': 0.0, 'end_time': 0.4}, ...],
  traces: [[{'x': 0.2086, 'y': -0.0533, 't': 0.022}, ...], ...],
  voice_recording: 'coco_val/coco_val_137576_93.ogg'
}
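Since timed_caption and traces share a common clock (seconds since the start of the recording), each utterance can be matched to the trace points recorded while it was spoken. A minimal sketch, using a hand-written instance that follows the schema above (the values are illustrative, not taken from the dataset):

```python
# Illustrative instance following the schema above (values are made up).
instance = {
    "timed_caption": [
        {"utterance": "In this", "start_time": 0.0, "end_time": 0.4},
        {"utterance": "image", "start_time": 0.4, "end_time": 0.9},
    ],
    "traces": [
        [
            {"x": 0.21, "y": 0.05, "t": 0.02},
            {"x": 0.25, "y": 0.10, "t": 0.35},
            {"x": 0.40, "y": 0.30, "t": 0.60},
        ]
    ],
}

def points_for_utterance(instance, idx):
    """Return all trace points recorded while utterance `idx` was spoken."""
    seg = instance["timed_caption"][idx]
    start, end = seg["start_time"], seg["end_time"]
    return [
        point
        for trace in instance["traces"]
        for point in trace
        if start <= point["t"] < end
    ]

# Trace points hovered while the annotator said "In this".
print(points_for_utterance(instance, 0))
```

This per-word grouping is the "mouse trace segment per word" grounding described in the summary.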

Data Fields

Each instance represents one Localized Narrative annotation on one image by one annotator and has the following fields:

  • dataset_id: String identifying the dataset and split the image belongs to, e.g. mscoco_val2017.
  • image_id: String identifier of the image, as specified by each dataset.
  • annotator_id: Integer uniquely identifying each annotator.
  • caption: Image caption as a string of characters.
  • timed_caption: List of timed utterances, i.e. {utterance, start_time, end_time}, where utterance is a word (or group of words) and (start_time, end_time) is the time interval during which it was spoken, with respect to the start of the recording.
  • traces: List of trace segments, one for each interval between the mouse pointer entering the image and leaving it. Each trace segment is represented as a list of timed points, i.e. {x, y, t}, where x and y are the normalized image coordinates (with origin at the top-left corner of the image) and t is the time in seconds since the start of the recording. Please note that the coordinates can go slightly beyond the image, i.e. < 0 or > 1, because the mouse traces were recorded including a small band around the image.
  • voice_recording: Relative URL path (with respect to https://storage.googleapis.com/localized-narratives/voice-recordings) where the voice recording (in OGG format) for that particular image can be found.
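Because trace coordinates are normalized and can fall slightly outside [0, 1], a common preprocessing step is to convert them to pixel coordinates and clamp them to the image bounds. A small sketch (the image size is an assumption for illustration):

```python
def trace_point_to_pixels(point, width, height):
    """Convert a normalized {x, y, t} trace point to integer pixel
    coordinates, clamping values that fall slightly outside the image."""
    x = min(max(point["x"], 0.0), 1.0)
    y = min(max(point["y"], 0.0), 1.0)
    # Origin is the top-left corner of the image.
    return (round(x * (width - 1)), round(y * (height - 1)))

# A point just outside the left edge (x < 0) is clamped to column 0.
print(trace_point_to_pixels({"x": -0.05, "y": 0.5, "t": 1.2}, 640, 480))
```

Whether to clamp or keep the out-of-image band depends on the task; for drawing traces on the image, clamping avoids out-of-bounds pixel indices.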

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

[More Information Needed]

Contributions

Thanks to @VictorSanh for adding this dataset.