steps.base.model.evaluator

evaluator

Functions:

| Name | Description |
| --- | --- |
| `evaluate_model` | Perform evaluation of model predictions against ground truth annotations using Picsellia's ModelEvaluator. |

```
evaluate_model(picsellia_predictions, inference_type, assets, output_dir)
```

Perform evaluation of model predictions against ground truth annotations using Picsellia's ModelEvaluator.

This step supports multiple inference types, including classification, object detection, segmentation, and OCR. It compares the provided predictions against the assets' ground-truth annotations, computes evaluation metrics, and stores the results in the specified output directory.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `picsellia_predictions` | `List[Union[PicselliaClassificationPrediction, PicselliaRectanglePrediction, PicselliaPolygonPrediction, PicselliaOCRPrediction]]` | List of model predictions corresponding to a supported inference type. | required |
| `inference_type` | `InferenceType` | The type of inference performed (e.g., CLASSIFICATION, DETECTION, SEGMENTATION, OCR). | required |
| `assets` | `Union[List[Asset], MultiAsset]` | The ground truth dataset assets to evaluate against. | required |
| `output_dir` | `str` | Path to the directory where evaluation results will be saved (e.g., confusion matrix, metrics report). | required |
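
A minimal usage sketch follows. It only illustrates the argument shapes documented above; it is not a definitive integration. The import path for the step is taken from this page's module path, `InferenceType` is assumed to come from the Picsellia SDK's `picsellia.types.enums`, and `run_inference`, `model`, and `dataset_version` are hypothetical stand-ins for objects produced earlier in the pipeline.

```python
# Minimal usage sketch, under the assumptions named above.
from picsellia.types.enums import InferenceType  # Picsellia SDK enum

from steps.base.model.evaluator import evaluate_model

# Hypothetical upstream inference step; for object detection it would yield
# PicselliaRectanglePrediction instances.
predictions = run_inference(model, dataset_version)

# Ground-truth assets to evaluate against; DatasetVersion.list_assets()
# returns a MultiAsset, which evaluate_model accepts directly.
assets = dataset_version.list_assets()

evaluate_model(
    picsellia_predictions=predictions,
    inference_type=InferenceType.OBJECT_DETECTION,  # pick the member matching your task
    assets=assets,
    output_dir="evaluation_results",  # confusion matrix, metrics report, etc.
)
```

Note that if `evaluate_model` is registered as a pipeline step (e.g., via a step decorator), it would normally be invoked by the pipeline runner rather than called directly as above.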