core.services.model.evaluator.model_evaluator

Classes:

| Name | Description |
| --- | --- |
| ModelEvaluator | Evaluates model predictions and logs metrics into a Picsellia experiment. |

ModelEvaluator(experiment, inference_type)

Evaluates model predictions and logs metrics into a Picsellia experiment.

Supports classification, detection (rectangle), OCR, and segmentation (polygon) evaluations, including COCO-style and sklearn metrics.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| experiment | Experiment | The experiment where results will be logged. | required |
| inference_type | InferenceType | Type of inference (classification, detection, etc.). | required |
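A minimal construction sketch. The client setup and `InferenceType.OBJECT_DETECTION` are standard Picsellia SDK usage; the import prefix for ModelEvaluator is an assumption and depends on how this package is installed.

```python
from picsellia import Client
from picsellia.types.enums import InferenceType

# Import prefix is an assumption; adjust to your installation of this package.
from core.services.model.evaluator.model_evaluator import ModelEvaluator

# Standard Picsellia SDK setup: authenticate and fetch the target experiment.
client = Client(api_token="YOUR_API_TOKEN", organization_name="my-org")
experiment = client.get_experiment_by_id("YOUR_EXPERIMENT_ID")

# One evaluator is bound to one experiment and one inference type.
evaluator = ModelEvaluator(
    experiment=experiment,
    inference_type=InferenceType.OBJECT_DETECTION,
)
```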

Methods:

| Name | Description |
| --- | --- |
| evaluate | Add and compute evaluation metrics from a list of predictions. |
| add_evaluation | Add a single prediction to the experiment as an evaluation. |
| compute_coco_metrics | Compute COCO metrics and log them into the experiment. |
| compute_classification_metrics | Compute sklearn classification metrics (accuracy, precision, recall, F1). |

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| experiment | Experiment | The experiment receiving logged metrics. |
| inference_type | InferenceType | The type of inference being evaluated. |
| experiment_logger | BaseLogger | Logger used to push metrics into the experiment. |

experiment = experiment (instance attribute)

inference_type = inference_type (instance attribute)

experiment_logger = BaseLogger(experiment=experiment, metric_mapping=MetricMapping()) (instance attribute)

evaluate(picsellia_predictions)

Add and compute evaluation metrics from a list of predictions.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| picsellia_predictions | list | List of PicselliaPrediction objects. | required |
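A sketch of the batch path, assuming an upstream inference step has already produced the prediction objects; `run_inference` below is a hypothetical stand-in for that step.

```python
# `run_inference` is a hypothetical helper standing in for your model's
# inference step; it is assumed to return a list of PicselliaPrediction
# objects (e.g. PicselliaRectanglePrediction for detection).
predictions = run_inference(model, dataset_version.list_assets())

# Attach every prediction to the experiment and compute metrics in one call.
evaluator.evaluate(picsellia_predictions=predictions)
```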

add_evaluation(evaluation)

Add a single prediction to the experiment as an evaluation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| evaluation | PicselliaClassificationPrediction \| PicselliaRectanglePrediction \| PicselliaPolygonPrediction \| PicselliaOCRPrediction | A prediction (classification, rectangle, OCR, or polygon). | required |
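When predictions arrive one at a time (for example, inside a streaming inference loop), they can be attached individually instead of via the batch call above; the loop assumes the same hypothetical predictions.

```python
# Equivalent to evaluate() for attaching predictions, but one at a time;
# note that metrics are not computed here.
for prediction in predictions:
    evaluator.add_evaluation(prediction)
```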

compute_coco_metrics(assets, output_dir, training_labelmap)

Compute COCO metrics and log them into the experiment.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| assets | list \| MultiAsset | Assets to evaluate. | required |
| output_dir | str | Directory to save metrics. | required |
| training_labelmap | dict | Label ID-to-name mapping. | required |
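A sketch for the detection or segmentation case. Calling `list_assets()` on a DatasetVersion is standard Picsellia SDK usage; the labelmap shape shown (string indices mapping to label names) is an assumption, so match whatever labelmap was used during training.

```python
# Assumes `dataset_version` is a picsellia DatasetVersion attached to the
# experiment; list_assets() returns a MultiAsset, which this method accepts.
assets = dataset_version.list_assets()

# String-index labelmap keys are an assumption; mirror the training labelmap.
evaluator.compute_coco_metrics(
    assets=assets,
    output_dir="./evaluation/coco_metrics",
    training_labelmap={"0": "cat", "1": "dog"},
)
```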

compute_classification_metrics(assets, output_dir, training_labelmap)

Compute sklearn classification metrics (accuracy, precision, recall, F1).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| assets | list \| MultiAsset | Assets to evaluate. | required |
| output_dir | str | Directory to save metrics. | required |
| training_labelmap | dict | Label ID-to-name mapping. | required |
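The classification counterpart, under the same assumptions about the asset source and labelmap shape; it only makes sense for an evaluator constructed with `InferenceType.CLASSIFICATION`.

```python
# Same asset source as above; only meaningful for an evaluator created
# with InferenceType.CLASSIFICATION.
evaluator.compute_classification_metrics(
    assets=dataset_version.list_assets(),
    output_dir="./evaluation/classification_metrics",
    training_labelmap={"0": "negative", "1": "positive"},
)
```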