With Red Hen Lab’s Rapid Annotator we aim to enable researchers worldwide to annotate large amounts of data in a very short period of time, with the least effort possible and minimal training.
This project is maintained by rrrokhtar.
Red Hen’s Rapid Annotator provides a platform for users to annotate large amounts of data in a short span of time with the least possible effort. It supports annotating images, videos, audio, and text, including in collaborative settings. Rapid Annotator also lets the experimenter visualize the progress of each annotator separately, and annotators can notify the experimenter when their annotation is finished, making the annotation workflow more efficient.
I have been continuing the work on Red Hen Lab’s Rapid Annotator done by Gulshan Kumar and Vaibhav Gupta during GSoC 2020, 2019, and 2018.
Text fields next to the selected label
No text fields next to the selected label
Allowing experiment owners to import their annotation levels directly, without the overhead workaround of making a level global, importing it, and then making it private again.
Allowing the annotation levels of an experiment to be shared with a selected set of users from the settings page.
Elan experiment: a new category of experiments (an extension of the video experiment) added to Rapid Annotator that allows an unlimited number of annotations for each level. New analogies (for the elan experiment):
The structure of each annotation entry is as follows (description of database changes). Note: an updated image of the database schema is added in the docs folder. ElanAnnotation:
Review of what this replaces: previously, each selected label was recorded as a single record in the AnnotationInfo table. For elan experiments, instead of keeping a single record per entry (entries are unbounded, not limited to the count of labels inside each level), it is better to use a JSON column. So far, data is a JSON column containing the following:
{
"levelId01": [{"startTime": 0.01, "endTime": 0.036, "text": "Anything can be here"}]
}
It is an object whose keys are the level/tier ids of the experiment; each tier id maps to an array containing that tier’s annotations. The sample view shown in the image is represented as follows:
{
"34": [
{
"text": "label 1",
"endTime": 6.5,
"startTime": 0
},
{
"text": "label 3",
"endTime": 19.25,
"startTime": 10.0625
}
],
"35": [
{
"text": "dsa",
"endTime": 15,
"startTime": 2.4375
}
]
}
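As a minimal sketch of how this JSON structure can be manipulated on the backend, the helper below appends one annotation entry to a tier’s list. The function name `add_annotation` is hypothetical (not from the Rapid Annotator codebase); only the `{tierId: [{startTime, endTime, text}]}` shape comes from the schema above.

```python
from typing import Dict, List

# Hypothetical helper (not actual Rapid Annotator code) illustrating the
# shape of the "data" JSON column: tier ids map to lists of annotations.
def add_annotation(data: Dict[str, List[dict]], tier_id: str,
                   start_time: float, end_time: float, text: str) -> Dict[str, List[dict]]:
    """Append one annotation entry to a tier's list, creating the list if needed."""
    if end_time <= start_time:
        raise ValueError("endTime must be after startTime")
    entry = {"startTime": start_time, "endTime": end_time, "text": text}
    data.setdefault(tier_id, []).append(entry)
    return data

# Rebuilding the sample document shown above:
data = {}
add_annotation(data, "34", 0, 6.5, "label 1")
add_annotation(data, "34", 10.0625, 19.25, "label 3")
add_annotation(data, "35", 2.4375, 15, "dsa")
```

Because annotations per tier are unbounded, appending to a list under the tier id avoids the one-row-per-label design of AnnotationInfo.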
Results are exported in two ways:
When creating a new experiment, you will find a new type.
On the view-results page, you will find a new column that allows you to download a .eaf file for each annotation of the selected user and clicked file.
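The actual exporter in Rapid Annotator is not shown here, but a skeletal .eaf (ELAN Annotation Format, an XML format) document can be sketched from the JSON structure above with the standard library. This is an assumption-laden sketch: the real export may set different attributes, and only the TIME_ORDER/TIER/ALIGNABLE_ANNOTATION skeleton of EAF is modeled.

```python
import xml.etree.ElementTree as ET

def export_eaf(data: dict) -> str:
    """Sketch: build a skeletal .eaf XML string from {tierId: [annotations]}.

    Times in the JSON are in seconds; EAF TIME_VALUEs are in milliseconds.
    Not the actual Rapid Annotator exporter.
    """
    root = ET.Element("ANNOTATION_DOCUMENT", {"AUTHOR": "", "VERSION": "3.0"})
    time_order = ET.SubElement(root, "TIME_ORDER")
    slot_count = ann_count = 0
    tiers = []
    for tier_id, entries in data.items():
        tier = ET.Element("TIER", {"TIER_ID": str(tier_id),
                                   "LINGUISTIC_TYPE_REF": "default-lt"})
        for entry in entries:
            ts1, ts2 = f"ts{slot_count + 1}", f"ts{slot_count + 2}"
            slot_count += 2
            for slot_id, seconds in ((ts1, entry["startTime"]), (ts2, entry["endTime"])):
                ET.SubElement(time_order, "TIME_SLOT",
                              {"TIME_SLOT_ID": slot_id,
                               "TIME_VALUE": str(int(seconds * 1000))})
            ann_count += 1
            ann = ET.SubElement(tier, "ANNOTATION")
            alignable = ET.SubElement(ann, "ALIGNABLE_ANNOTATION",
                                      {"ANNOTATION_ID": f"a{ann_count}",
                                       "TIME_SLOT_REF1": ts1,
                                       "TIME_SLOT_REF2": ts2})
            ET.SubElement(alignable, "ANNOTATION_VALUE").text = entry["text"]
        tiers.append(tier)
    # EAF expects all TIME_SLOTs declared before the TIERs, hence two passes.
    for tier in tiers:
        root.append(tier)
    return ET.tostring(root, encoding="unicode")
```

The two-pass construction keeps the single TIME_ORDER block ahead of the tiers, which is how EAF files lay out time slots.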
New experiment interface (ELAN-like)
Key bindings
Add/Edit annotation form
Add/Edit annotation form Demo
Timeline
Timeline Demo
Functions
on YouTube
The annotation tiers became accessible through number shortcut keys (i.e., the first tier can be selected by pressing ‘1’, the second by pressing ‘2’, and so on).
Added each label’s shortcut key below the label.
Fixed the labels’ data list after adding the number shortcut keys for tier selection (the label options were not being updated).
That PR contains two main additions:
| After | Before |
| --- | --- |