# core.services.model.evaluator.utils.coco_utils
Functions:

Name | Description
---|---
`load_json` | Load and return the contents of a JSON file.
`save_json` | Save data to a JSON file with an indentation of 4 spaces.
`adjust_image_ids` | Adjust image IDs in COCO data. If the image IDs start at 0, they are incremented by 1.
`renumber_annotation_ids` | Renumber annotation IDs sequentially starting from 1.
`fix_coco_ids` | Fix image and annotation IDs in a COCO file. Images whose IDs start at 0 are adjusted and annotation IDs are renumbered sequentially.
`create_image_id_mapping` | Create a mapping between image IDs from the ground truth and prediction data based on the 'file_name' field.
`fix_image_ids` | Fix the 'image_id' fields in prediction data using the provided mapping.
`match_image_ids` | Match image IDs between the ground truth and prediction files.
`compute_tp_fp_fn` | Compute the number of True Positives (TP), False Positives (FP), and False Negatives (FN) per category using a COCOeval object.
`calculate_metrics` | Calculate precision, recall, and F1-score.
`evaluate_category` | Evaluate predictions for a given category using COCO metrics.
## load_json(file_path)

Load and return the contents of a JSON file.

Parameters:

Name | Type | Description | Default
---|---|---|---
`file_path` | `str` | Path to the JSON file. | *required*

Returns:

Name | Type | Description
---|---|---
`dict` | `dict` | Parsed JSON data.
## save_json(data, file_path)

Save data to a JSON file with an indentation of 4 spaces.

Parameters:

Name | Type | Description | Default
---|---|---|---
`data` | `dict` | Data to serialize. | *required*
`file_path` | `str` | Path to the output JSON file. | *required*
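Taken together, these two helpers form a simple JSON round trip. A minimal usage sketch, assuming the module is importable under the path shown in the page title:

```python
from core.services.model.evaluator.utils.coco_utils import load_json, save_json

# Write a small COCO-style payload to disk, then read it back.
payload = {"images": [{"id": 1, "file_name": "img_001.jpg"}], "annotations": []}
save_json(payload, "example.json")           # serialized with an indent of 4
assert load_json("example.json") == payload  # parsed back into the same dict
```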
## adjust_image_ids(coco_data)

Adjust image IDs in COCO data. If the image IDs start at 0, they are incremented by 1.

Parameters:

Name | Type | Description | Default
---|---|---|---
`coco_data` | `dict` | COCO data containing the "images" and "annotations" keys. | *required*
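To make the shift concrete, the sketch below re-implements the documented behaviour on a toy payload; it is an illustration, not the module's actual code:

```python
def adjust_image_ids_sketch(coco_data: dict) -> dict:
    """Illustrative re-implementation: shift 0-based image IDs to 1-based."""
    if any(img["id"] == 0 for img in coco_data["images"]):
        for img in coco_data["images"]:
            img["id"] += 1
        # Presumably the annotations' 'image_id' references are shifted too,
        # so they keep pointing at the same images.
        for ann in coco_data["annotations"]:
            ann["image_id"] += 1
    return coco_data

data = {
    "images": [{"id": 0, "file_name": "a.jpg"}],
    "annotations": [{"id": 1, "image_id": 0}],
}
adjust_image_ids_sketch(data)
assert data["images"][0]["id"] == 1 and data["annotations"][0]["image_id"] == 1
```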
## renumber_annotation_ids(coco_data)

Renumber annotation IDs sequentially starting from 1.

Parameters:

Name | Type | Description | Default
---|---|---|---
`coco_data` | `dict` | COCO data containing the "annotations" key. | *required*
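A sketch of the documented renumbering, again as an independent re-implementation for illustration:

```python
def renumber_annotation_ids_sketch(coco_data: dict) -> dict:
    """Illustrative re-implementation: make annotation IDs 1, 2, 3, ..."""
    for new_id, ann in enumerate(coco_data["annotations"], start=1):
        ann["id"] = new_id
    return coco_data

data = {"annotations": [{"id": 17}, {"id": 3}, {"id": 42}]}
renumber_annotation_ids_sketch(data)
assert [a["id"] for a in data["annotations"]] == [1, 2, 3]
```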
## fix_coco_ids(coco_path)

Fix image and annotation IDs in a COCO file. Images whose IDs start at 0 are adjusted and annotation IDs are renumbered sequentially. The fixed file is saved with the suffix '_fixed'.

Parameters:

Name | Type | Description | Default
---|---|---|---
`coco_path` | `str` | Path to the original COCO file. | *required*

Returns:

Name | Type | Description
---|---|---
`str` | `str` | Path to the fixed COCO file.
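A typical call; the exact output filename shown in the comment is an assumption based on the documented '_fixed' suffix:

```python
from core.services.model.evaluator.utils.coco_utils import fix_coco_ids, load_json

# Rewrites the IDs and saves a sibling file; the original file is untouched.
fixed_path = fix_coco_ids("annotations/instances_val.json")
# e.g. "annotations/instances_val_fixed.json" (assumed naming)

fixed = load_json(fixed_path)
assert all(img["id"] >= 1 for img in fixed["images"])
```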
## create_image_id_mapping(gt_images, pred_images)

Create a mapping between image IDs from the ground truth and prediction data based on the 'file_name' field.

Parameters:

Name | Type | Description | Default
---|---|---|---
`gt_images` | `list` | List of ground truth images (each a dict with 'file_name' and 'id'). | *required*
`pred_images` | `list` | List of predicted images (each a dict with 'file_name' and 'id'). | *required*

Returns:

Name | Type | Description
---|---|---
`dict` | `dict` | Mapping {predicted_image_id: ground_truth_image_id}.
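A minimal sketch of the documented contract:

```python
from core.services.model.evaluator.utils.coco_utils import create_image_id_mapping

gt_images = [{"file_name": "a.jpg", "id": 1}, {"file_name": "b.jpg", "id": 2}]
pred_images = [{"file_name": "b.jpg", "id": 10}, {"file_name": "a.jpg", "id": 11}]

mapping = create_image_id_mapping(gt_images, pred_images)
# Keys are prediction image IDs, values the matching ground-truth IDs.
assert mapping == {10: 2, 11: 1}
```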
## fix_image_ids(pred_data, id_mapping)

Fix the 'image_id' fields in prediction data using the provided mapping (see the end-to-end example under `match_image_ids` below).
## match_image_ids(ground_truth_file, prediction_file, corrected_prediction_file)

Match image IDs between the ground truth and prediction files. Loads the ground truth and prediction JSON files, creates a mapping based on 'file_name', fixes the prediction image IDs using the mapping, and saves the corrected predictions.

Parameters:

Name | Type | Description | Default
---|---|---|---
`ground_truth_file` | `str` | Path to the COCO ground truth file. | *required*
`prediction_file` | `str` | Path to the COCO predictions file. | *required*
`corrected_prediction_file` | `str` | Output path for the corrected predictions file. | *required*
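Putting `create_image_id_mapping` and `fix_image_ids` together at the file level, a typical invocation is sketched below; all paths are illustrative:

```python
from core.services.model.evaluator.utils.coco_utils import match_image_ids

# Align prediction image IDs with the ground truth before running COCOeval.
match_image_ids(
    ground_truth_file="annotations/instances_val.json",
    prediction_file="predictions/raw_predictions.json",
    corrected_prediction_file="predictions/predictions_matched.json",
)
```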
## compute_tp_fp_fn(coco_eval)

Compute the number of True Positives (TP), False Positives (FP), and False Negatives (FN) per category using a COCOeval object.

Parameters:

Name | Type | Description | Default
---|---|---|---
`coco_eval` | `COCOeval` | COCOeval object after evaluation. | *required*

Returns:

Name | Type | Description
---|---|---
`dict` | `dict` | Dictionary mapping each category ID to a dict with keys "TP", "FP", and "FN".
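The COCOeval object must have been run through `evaluate()` (and typically `accumulate()`) first. A minimal sketch with pycocotools, using illustrative paths:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

from core.services.model.evaluator.utils.coco_utils import compute_tp_fp_fn

coco_gt = COCO("annotations/instances_val.json")
coco_dt = coco_gt.loadRes("predictions/predictions_matched.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # per-image, per-category matching
coco_eval.accumulate()  # aggregate the match results

counts = compute_tp_fp_fn(coco_eval)
# e.g. {1: {"TP": 40, "FP": 5, "FN": 3}, ...}
```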
## calculate_metrics(tp, fp, fn)

Calculate precision, recall, and F1-score.

Parameters:

Name | Type | Description | Default
---|---|---|---
`tp` | `int` | Number of True Positives. | *required*
`fp` | `int` | Number of False Positives. | *required*
`fn` | `int` | Number of False Negatives. | *required*

Returns:

Name | Type | Description
---|---|---
`tuple` | `tuple[float, float, float]` | Precision, recall, and F1-score.
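These presumably follow the standard definitions, precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 = 2 · P · R / (P + R); a worked example under that assumption:

```python
from core.services.model.evaluator.utils.coco_utils import calculate_metrics

precision, recall, f1 = calculate_metrics(tp=40, fp=10, fn=20)
# precision = 40 / (40 + 10) = 0.8
# recall    = 40 / (40 + 20) ≈ 0.667
# f1        = 2 * 0.8 * 0.667 / (0.8 + 0.667) ≈ 0.727
```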
## evaluate_category(coco_gt, coco_pred, cat_name, inference_type)

Evaluate predictions for a given category using COCO metrics. Runs evaluation for the specified category and area 'all', then computes additional metrics (TP, FP, FN, precision, recall, F1-score).

Parameters:

Name | Type | Description | Default
---|---|---|---
`coco_gt` | `COCO` | COCO object for the ground truth. | *required*
`coco_pred` | `COCO` | COCO object for the predictions. | *required*
`cat_name` | `str` | Category name. | *required*
`inference_type` | `InferenceType` | Type of inference (classification, detection, or segmentation). | *required*

Returns:

Name | Type | Description
---|---|---
`dict` | `dict` | Dictionary containing evaluation metrics for the category.
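A usage sketch; `InferenceType` is defined elsewhere in the package, and both its import path and the member name used here are assumptions for illustration:

```python
from pycocotools.coco import COCO

from core.services.model.evaluator.utils.coco_utils import evaluate_category
# InferenceType's import path is not shown on this page; the line below is
# an assumption for illustration:
# from core.services.model.evaluator.utils import InferenceType

coco_gt = COCO("annotations/instances_val.json")
coco_pred = coco_gt.loadRes("predictions/predictions_matched.json")

metrics = evaluate_category(
    coco_gt,
    coco_pred,
    cat_name="person",
    inference_type=InferenceType.DETECTION,  # assumed enum member name
)
print(metrics)  # e.g. {"TP": ..., "FP": ..., "FN": ..., "precision": ...}
```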