steps.clip.model.loader

Functions:

| Name | Description |
| --- | --- |
| load_model | Load a CLIP model using the Picsellia model interface. |

load_model(pretrained_weights_name, trained_weights_name=None, config_name=None, exported_weights_name=None, repo_id='openai/clip-vit-large-patch14-336')

Load a CLIP model using the Picsellia model interface.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| pretrained_weights_name | str | Name of the pretrained weights artifact. | required |
| trained_weights_name | str \| None | Optional name of the trained weights artifact. | None |
| config_name | str \| None | Optional name of the model config file. | None |
| exported_weights_name | str \| None | Optional name of the exported weights used for evaluation or inference. | None |
| repo_id | str | HuggingFace repo ID used for loading the processor (default is OpenAI's ViT-L/14-336). | 'openai/clip-vit-large-patch14-336' |

Returns:

| Type | Description |
| --- | --- |
| CLIPModel | A loaded instance of CLIPModel, ready for inference. |