| Added the ability to fetch labeling metrics for a specific task on a given dataset | - To access the metrics for a task, click the ellipsis icon at the end of that task's row on the Tasks page, then select the "View Task metrics" option.
- This gives labeling task managers a convenient way to gauge task progress and evaluate outcomes, such as monitoring label view counts, within the broader dataset context.
|
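For readers who prefer programmatic access, a request along the following lines could fetch the same metrics. This is a minimal hypothetical sketch: the endpoint path, auth header format, and response fields are illustrative assumptions, not the documented API contract.

```typescript
// Hypothetical sketch of fetching metrics for a labeling task over REST.
// The endpoint path, auth header format, and response fields are assumptions.
interface TaskMetrics {
  inputsLabeled: number;  // assumed field: inputs labeled so far
  labelViewCount: number; // assumed field: label view count for the task
}

async function fetchTaskMetrics(
  baseUrl: string,
  taskId: string,
  apiKey: string,
): Promise<TaskMetrics> {
  const res = await fetch(`${baseUrl}/tasks/${taskId}/metrics`, {
    headers: { Authorization: `Key ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Metrics request failed: ${res.status}`);
  }
  return (await res.json()) as TaskMetrics;
}
```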
| Improved the partitioned worker labeling strategy | - In the task creation screen, when a user selects `Worker Strategy = Partitioned`, we now hide the Review Strategy dropdown, set `task.review.strategy = CONSENSUS`, and set `task.review.consensus_strategy_info.approval_threshold = 1` (see the sketch after this entry).
- Users can now conduct task consensus reviews with an approval threshold of 1.
- We also optimized the assignment logic for partitioned tasks so that each input is assigned to only one labeler at a time, making the labeling process more efficient and organized.
|
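A minimal sketch of how these defaults might be applied to a task payload is shown below. The nested field names mirror the entry above; the type shapes, helper name, and alternative strategy values are assumptions.

```typescript
// Sketch of the defaults applied when Worker Strategy = Partitioned.
// The nested field names mirror the entry above; the type shapes, helper
// name, and the alternative strategy values are assumptions.
type WorkerStrategy = 'FULL' | 'PARTITIONED';

interface TaskPayload {
  worker: { strategy: WorkerStrategy };
  review: {
    strategy: string; // 'CONSENSUS' per the entry above
    consensus_strategy_info?: { approval_threshold: number };
  };
}

function applyPartitionedDefaults(task: TaskPayload): TaskPayload {
  if (task.worker.strategy !== 'PARTITIONED') return task;
  // With the Review Strategy dropdown hidden, these values are fixed:
  return {
    ...task,
    review: {
      strategy: 'CONSENSUS',
      consensus_strategy_info: { approval_threshold: 1 },
    },
  };
}
```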
| Enhanced submit button functionality for improved user experience | In labeling mode, submitting inputs in rapid succession, or submitting over a slow network, could cause problems. We've therefore made the following improvements to the "Submit" button (sketched after this entry):
- When clicked, the button is immediately disabled, accompanied by a visual change in color.
- The button remains disabled while the initial labels are still loading and while the labeled inputs are still being submitted. In the latter case, the button label dynamically changes to “Submitting.”
- The button is re-enabled promptly after the submitted labels have been processed and the page is fully prepared for the user's next action.
|
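A React-style sketch of these button states, assuming the parent component tracks `labelsLoading` and `submitting` flags; the component and prop names are illustrative, not the actual implementation.

```tsx
import React from 'react';

interface SubmitButtonProps {
  labelsLoading: boolean; // initial labels still loading
  submitting: boolean;    // labeled inputs still being submitted
  onSubmit: () => void;
}

// Illustrative sketch: the button is disabled while labels load and while a
// submission is in flight, its color changes via the class name, and its
// label switches to "Submitting" during submission.
export function SubmitButton({ labelsLoading, submitting, onSubmit }: SubmitButtonProps) {
  const disabled = labelsLoading || submitting;
  return (
    <button
      disabled={disabled}
      className={disabled ? 'btn-muted' : 'btn-primary'}
      onClick={onSubmit}
    >
      {submitting ? 'Submitting' : 'Submit'}
    </button>
  );
}
```

Because the disabled state is driven by the in-flight flag, a second click during submission has no effect, which also relates to the double-click duplication fix in the next entry.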
| Fixed an issue where double-clicking the "Submit" button resulted in duplicated annotations | - Previously, in a visual detection task with bounding box labeling, double-clicking the submit button duplicated each bounding box and could cause inputs to be skipped. We fixed the issue.
|
| Fixed an issue that triggered an error during label review | - When using AI Assist for labeling, a reviewer could initially accept labels meeting the specified threshold after submission for review, but an error occurred after approving a certain number of inputs. We fixed the issue.
|
| Improved input carousel navigation and canvas display when approving labels | - Previously, when approving labels, the input carousel would advance, updating the thumbnail carousel and loading the annotations for the current input. However, the main image canvas kept displaying the previous image instead of the current one. We fixed the issue and added safety logic to the "Approve" button to ensure a smoother and more accurate approval process.
|
| Fixed an issue where reviewer changes did not seem to persist | - Previously, reviewer edits or changes did not appear to persist reliably. We've introduced a loading overlay that temporarily blocks other user actions while the app is loading or submitting, preventing unintended calls and race conditions (see the sketch after this entry). This ensures that reviewer changes are now consistently applied and retained.
|
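A minimal sketch of such an overlay guard, assuming hypothetical `showOverlay`/`hideOverlay` helpers that toggle a blocking element; a re-entrancy flag ignores triggers that arrive while a request is in flight.

```typescript
// Sketch of the loading-overlay guard described above. The overlay element
// ID, class name, and helper names are illustrative assumptions.
function showOverlay(): void {
  document.getElementById('loading-overlay')?.classList.add('visible');
}

function hideOverlay(): void {
  document.getElementById('loading-overlay')?.classList.remove('visible');
}

let inFlight = false;

// Runs an async action behind the overlay; re-entrant triggers that arrive
// while a request is in flight are ignored, preventing duplicate calls
// and race conditions.
async function withOverlay<T>(action: () => Promise<T>): Promise<T | undefined> {
  if (inFlight) return undefined;
  inFlight = true;
  showOverlay();
  try {
    return await action();
  } finally {
    hideOverlay();
    inFlight = false;
  }
}
```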
| Fixed an issue with labeling out-of-bounds bounding boxes | - Previously, drawing a bounding box beyond the defined bounds on any input triggered an error when saving the annotation. We fixed the issue.
|
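One common way to prevent this class of error is to clamp box coordinates to the image bounds before saving. The sketch below assumes normalized [0, 1] coordinates and illustrative field names; the source does not specify how the fix was implemented.

```typescript
// Sketch: clamp a bounding box to the image bounds before saving.
// Assumes normalized [0, 1] coordinates; field names are illustrative.
interface BBox {
  top: number;
  left: number;
  bottom: number;
  right: number;
}

const clamp01 = (v: number): number => Math.min(1, Math.max(0, v));

function clampToBounds(box: BBox): BBox {
  return {
    top: clamp01(box.top),
    left: clamp01(box.left),
    bottom: clamp01(box.bottom),
    right: clamp01(box.right),
  };
}
```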
| Fixed an issue with not pulling new batches of inputs consistently | This problem occurred across labeling task creation scenarios, whether the task was a full task for a single labeler or a partitioned task.
- Previously, after labeling 5-10 inputs, the labeler unexpectedly returned to the first input instead of fetching new inputs from the stack, leading to repetitive labeling of the same set. Also, after closing the labeler screen, the “LABEL” button on the “Tasks” page could still appear but was no longer clickable. We fixed the issue.
|
| Fixed issues with annotation of bounding boxes | - Previously, an issue prevented the successful annotation of bounding boxes when using concepts containing capital letters or dashes. We fixed the issue.
- We fixed an issue where it was not possible to delete AI-assisted bounding box labels that remained unaccepted or rejected.
- Previously, users could not change the concept of a bounding box within a labeling task. This applied both to manually created bounding boxes and to those generated from accepted or rejected AI-assist predictions. We fixed the issue.
- We fixed an issue where it was not possible to adjust a bounding box touching the edge of the image.
- We fixed an issue where the confidence threshold for filtering predictions in AI-assist bounding box labeling did not work as intended.
|
| Fixed an issue where it was not possible to edit and add concepts to an existing labeling task | - Previously, if you created a labeling task, began annotating inputs, and later edited the task to include additional concepts, the newly added concepts did not appear in the concepts list when you returned to label inputs. We fixed the issue.
|
| Fixed an issue with adjusting the threshold of AI-assist suggestions | - Previously, moving the threshold beyond the score of an accepted AI-assist suggestion hid the annotation, both in the concept list and on the image canvas, even though the created annotations were still processed successfully upon input submission. We fixed the issue (see the sketch after this entry).
|
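The expected behavior can be sketched as a filter that hides only pending suggestions below the threshold while always keeping accepted annotations visible. Status values and field names here are illustrative assumptions.

```typescript
// Sketch of threshold filtering that exempts accepted suggestions.
// Status values and field names are illustrative assumptions.
type SuggestionStatus = 'pending' | 'accepted' | 'rejected';

interface Suggestion {
  concept: string;
  score: number; // model confidence in [0, 1]
  status: SuggestionStatus;
}

function visibleSuggestions(all: Suggestion[], threshold: number): Suggestion[] {
  return all.filter(
    // Accepted annotations stay visible regardless of the threshold;
    // only pending suggestions are filtered by score.
    (s) => s.status === 'accepted' || (s.status === 'pending' && s.score >= threshold),
  );
}
```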
| Fixed an issue that caused the app to crash when clicking the "X" icon on a region | - This error occurred after a specific sequence of actions: creating an app with Universal as the base workflow, adding images to a dataset, creating concepts, creating a labeling task, and attempting to make a region negative by clicking the "X" button on the first image. We fixed the issue.
|
| Fixed an issue where a collaborator could not assign an app owner as a labeler when creating a task | - Previously, if an app collaborator attempted to create a task, they could not add the app owner as a worker to the task. We fixed the issue.
|
| Fixed an issue where adding a collaborator to a labeling task was not accurately reflected | - When creating a labeling task, the option to assign a collaborator worked correctly, and confirmations such as email notifications were sent appropriately. However, when the task was subsequently edited, the collaborator option incorrectly displayed "No". We fixed the issue.
|
| Fixed an issue where deleting a collaborator completely broke the labeling tasks screen | - Previously, removing a collaborator disrupted the functionality of the labeling tasks screen. We fixed the issue.
|