
core.services.model.evaluator.utils.coco_utils

Functions:

| Name | Description |
| --- | --- |
| `load_json` | Load and return the contents of a JSON file. |
| `save_json` | Save data to a JSON file with an indentation of 4 spaces. |
| `adjust_image_ids` | Adjust image IDs in COCO data. If the image IDs start at 0, they are incremented by 1. |
| `renumber_annotation_ids` | Renumber annotation IDs sequentially starting from 1. |
| `fix_coco_ids` | Fix image and annotation IDs in a COCO file. Images whose IDs start at 0 are adjusted and annotation IDs are renumbered sequentially. |
| `create_image_id_mapping` | Create a mapping between image IDs from the ground truth and prediction data based on the 'file_name' field. |
| `fix_image_ids` | Fix the 'image_id' fields in prediction data using the provided mapping. |
| `match_image_ids` | Match image IDs between the ground truth and prediction files. |
| `compute_tp_fp_fn` | Compute the number of True Positives (TP), False Positives (FP), and False Negatives (FN) per category using a COCOeval object. |
| `calculate_metrics` | Calculate precision, recall, and F1-score. |
| `evaluate_category` | Evaluate predictions for a given category using COCO metrics. |

load_json(file_path)

Load and return the contents of a JSON file.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `file_path` | `str` | Path to the JSON file. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `dict` | `dict` | Parsed JSON data. |

save_json(data, file_path)

Save data to a JSON file with an indentation of 4 spaces.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `dict` | Data to save. | required |
| `file_path` | `str` | Output path for the JSON file. | required |
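
The rendered page does not include the function bodies. As a reference, here is a minimal sketch of what `load_json` and `save_json` plausibly look like given the behavior documented above; the actual implementations may differ.

```python
import json


def load_json(file_path: str) -> dict:
    """Load and return the contents of a JSON file."""
    with open(file_path, "r", encoding="utf-8") as f:
        return json.load(f)


def save_json(data: dict, file_path: str) -> None:
    """Save data to a JSON file with an indentation of 4 spaces."""
    with open(file_path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=4)
```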

adjust_image_ids(coco_data)

Adjust image IDs in COCO data. If the image IDs start at 0, they are incremented by 1.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `coco_data` | `dict` | COCO data containing the "images" and "annotations" keys. | required |
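
A sketch consistent with the documented behavior. One assumption: the +1 shift is also propagated to each annotation's `image_id`, which would explain why the function requires the "annotations" key.

```python
def adjust_image_ids(coco_data: dict) -> None:
    """If image IDs start at 0, shift them to start at 1 (COCO convention)."""
    if not coco_data["images"]:
        return
    if min(img["id"] for img in coco_data["images"]) == 0:
        for img in coco_data["images"]:
            img["id"] += 1
        # Assumption: annotation references are shifted to stay consistent.
        for ann in coco_data["annotations"]:
            ann["image_id"] += 1
```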

renumber_annotation_ids(coco_data)

Renumber annotation IDs sequentially starting from 1.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `coco_data` | `dict` | COCO data containing the "annotations" key. | required |
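
The renumbering itself is simple; a sketch, assuming annotations are renumbered in the order they appear in the list:

```python
def renumber_annotation_ids(coco_data: dict) -> None:
    """Renumber annotation IDs sequentially starting from 1."""
    for new_id, ann in enumerate(coco_data["annotations"], start=1):
        ann["id"] = new_id
```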

fix_coco_ids(coco_path)

Fix image and annotation IDs in a COCO file. Images whose IDs start at 0 are adjusted and annotation IDs are renumbered sequentially. The fixed file is saved with the suffix '_fixed'.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `coco_path` | `str` | Path to the original COCO file. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `str` | `str` | Path to the fixed COCO file. |
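
Putting the two helpers together, a plausible sketch of `fix_coco_ids`; the exact placement of the '_fixed' suffix (here, before the file extension) is an assumption.

```python
import os


def fix_coco_ids(coco_path: str) -> str:
    """Fix image and annotation IDs and save a '_fixed' copy of the file."""
    coco_data = load_json(coco_path)
    adjust_image_ids(coco_data)
    renumber_annotation_ids(coco_data)
    root, ext = os.path.splitext(coco_path)
    fixed_path = f"{root}_fixed{ext}"  # assumed suffix placement
    save_json(coco_data, fixed_path)
    return fixed_path
```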

create_image_id_mapping(gt_images, pred_images)

Create a mapping between image IDs from the ground truth and prediction data based on the 'file_name' field.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `gt_images` | `list` | List of ground truth images (each a dict with 'file_name' and 'id'). | required |
| `pred_images` | `list` | List of predicted images (each a dict with 'file_name' and 'id'). | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `dict` | `dict` | Mapping {predicted_image_id: ground_truth_image_id}. |
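
A sketch of the mapping construction. How images whose 'file_name' has no ground truth counterpart are handled is not documented; here they are simply skipped.

```python
def create_image_id_mapping(gt_images: list, pred_images: list) -> dict:
    """Map predicted image IDs to ground truth image IDs via 'file_name'."""
    gt_id_by_name = {img["file_name"]: img["id"] for img in gt_images}
    return {
        img["id"]: gt_id_by_name[img["file_name"]]
        for img in pred_images
        if img["file_name"] in gt_id_by_name  # skip unmatched files (assumed)
    }
```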

fix_image_ids(pred_data, id_mapping)

Fix the 'image_id' fields in prediction data using the provided mapping.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `pred_data` | `dict` | Prediction COCO data containing "images" and "annotations". | required |
| `id_mapping` | `dict` | Mapping between predicted and ground truth image IDs. | required |
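
A sketch of the remapping. Whether unmapped IDs are kept unchanged (as here) or dropped is an assumption.

```python
def fix_image_ids(pred_data: dict, id_mapping: dict) -> None:
    """Rewrite image IDs in predictions using a {pred_id: gt_id} mapping."""
    for img in pred_data["images"]:
        img["id"] = id_mapping.get(img["id"], img["id"])
    for ann in pred_data["annotations"]:
        ann["image_id"] = id_mapping.get(ann["image_id"], ann["image_id"])
```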

match_image_ids(ground_truth_file, prediction_file, corrected_prediction_file)

Match image IDs between the ground truth and prediction files. Loads the ground truth and prediction JSON files, creates a mapping based on 'file_name', fixes the prediction image IDs using the mapping, and saves the corrected predictions.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `ground_truth_file` | `str` | Path to the COCO ground truth file. | required |
| `prediction_file` | `str` | Path to the COCO predictions file. | required |
| `corrected_prediction_file` | `str` | Output path for the corrected predictions file. | required |
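
The description above spells out the whole pipeline, so a sketch is straightforward, reusing the helpers documented earlier:

```python
def match_image_ids(
    ground_truth_file: str,
    prediction_file: str,
    corrected_prediction_file: str,
) -> None:
    """Align prediction image IDs with the ground truth and save the result."""
    gt_data = load_json(ground_truth_file)
    pred_data = load_json(prediction_file)
    id_mapping = create_image_id_mapping(gt_data["images"], pred_data["images"])
    fix_image_ids(pred_data, id_mapping)
    save_json(pred_data, corrected_prediction_file)
```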

compute_tp_fp_fn(coco_eval)

Compute the number of True Positives (TP), False Positives (FP), and False Negatives (FN) per category using a COCOeval object.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `coco_eval` | `COCOeval` | COCOeval object after evaluation. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `dict` | `dict` | Dictionary mapping each category ID to a dict with keys "TP", "FP", and "FN". |
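
The docs do not say how the counts are extracted. One plausible approach, sketched below, reads the per-image match arrays that pycocotools stores in `COCOeval.evalImgs` after `evaluate()` has run. The choice of IoU threshold (index 0, i.e. 0.50 with default parameters) and the restriction to the 'all' area range are assumptions.

```python
import numpy as np


def compute_tp_fp_fn(coco_eval) -> dict:
    """Count TP/FP/FN per category from an evaluated COCOeval object."""
    iou_idx = 0  # first IoU threshold (0.50 by default) -- an assumption
    area_all = coco_eval.params.areaRng[0]  # area range 'all'
    counts = {cat_id: {"TP": 0, "FP": 0, "FN": 0}
              for cat_id in coco_eval.params.catIds}
    for eval_img in coco_eval.evalImgs:
        if eval_img is None or eval_img["aRng"] != area_all:
            continue
        c = counts[eval_img["category_id"]]
        dt_matches = eval_img["dtMatches"][iou_idx]  # matched gt id per detection (0 = none)
        dt_ignore = eval_img["dtIgnore"][iou_idx].astype(bool)
        gt_matches = eval_img["gtMatches"][iou_idx]  # matched dt id per gt (0 = missed)
        gt_ignore = np.asarray(eval_img["gtIgnore"]).astype(bool)
        c["TP"] += int(np.sum((dt_matches > 0) & ~dt_ignore))
        c["FP"] += int(np.sum((dt_matches == 0) & ~dt_ignore))
        c["FN"] += int(np.sum((gt_matches == 0) & ~gt_ignore))
    return counts
```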

calculate_metrics(tp, fp, fn)

Calculate precision, recall, and F1-score.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `tp` | `int` | Number of True Positives. | required |
| `fp` | `int` | Number of False Positives. | required |
| `fn` | `int` | Number of False Negatives. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `tuple` | `tuple[float, float, float]` | Precision, recall, and F1-score. |
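
These are the standard definitions: precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 = 2 · precision · recall / (precision + recall). A sketch, with a zero-division guard added as an assumption:

```python
def calculate_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1-score from raw counts."""
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```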

evaluate_category(coco_gt, coco_pred, cat_name, inference_type)

Evaluate predictions for a given category using COCO metrics. Runs evaluation for the specified category and area 'all', then computes additional metrics (TP, FP, FN, precision, recall, F1-score).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `coco_gt` | `COCO` | COCO object for the ground truth. | required |
| `coco_pred` | `COCO` | COCO object for the predictions. | required |
| `cat_name` | `str` | Category name. | required |
| `inference_type` | `InferenceType` | Type of inference (classification, detection, or segmentation). | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `dict` | `dict` | Dictionary containing evaluation metrics for the category. |
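
A sketch of how the documented steps could be wired together with pycocotools, reusing `compute_tp_fp_fn` and `calculate_metrics` from above. The mapping from `inference_type` to an `iouType` and the exact keys of the returned dictionary are assumptions; only the overall flow comes from the docs.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval


def evaluate_category(coco_gt: COCO, coco_pred: COCO, cat_name: str,
                      inference_type) -> dict:
    """Run COCO evaluation for one category and derive extra metrics."""
    # Assumption: segmentation uses mask IoU, everything else box IoU.
    iou_type = "segm" if "segmentation" in str(inference_type).lower() else "bbox"
    cat_ids = coco_gt.getCatIds(catNms=[cat_name])
    coco_eval = COCOeval(coco_gt, coco_pred, iouType=iou_type)
    coco_eval.params.catIds = cat_ids  # restrict evaluation to this category
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()
    counts = compute_tp_fp_fn(coco_eval)[cat_ids[0]]
    precision, recall, f1 = calculate_metrics(
        counts["TP"], counts["FP"], counts["FN"])
    return {
        "category": cat_name,
        "AP": coco_eval.stats[0],  # AP @ IoU=0.50:0.95, area=all
        **counts,
        "precision": precision,
        "recall": recall,
        "f1_score": f1,  # hypothetical key names
    }
```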