steps.clip.model.loader¶
Functions:
| Name | Description |
|---|---|
| load_model | Load a CLIP model using the Picsellia model interface. |
load_model(pretrained_weights_name, trained_weights_name=None, config_name=None, exported_weights_name=None, repo_id='openai/clip-vit-large-patch14-336')¶
Load a CLIP model using the Picsellia model interface.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| pretrained_weights_name | str | Name of the pretrained weights artifact. | required |
| trained_weights_name | str \| None | Optional name of the trained weights. | None |
| config_name | str \| None | Optional name of the model config file. | None |
| exported_weights_name | str \| None | Optional name of exported weights for evaluation or inference. | None |
| repo_id | str | HuggingFace repo ID used for loading the processor (default is OpenAI's ViT-L/14-336). | 'openai/clip-vit-large-patch14-336' |
Returns:
| Type | Description |
|---|---|
| CLIPModel | A loaded instance of CLIPModel, ready for inference. |
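
A minimal usage sketch, assuming load_model can be imported directly from steps.clip.model.loader and that a Picsellia context with the corresponding model artifacts is already active; the artifact name below is a placeholder for illustration, not a value defined by this module.

```python
from steps.clip.model.loader import load_model

# Load the pretrained CLIP weights registered on the active Picsellia model
# version; "pretrained-weights" is a placeholder artifact name.
clip_model = load_model(
    pretrained_weights_name="pretrained-weights",
    trained_weights_name=None,  # pass a trained weights artifact name here if one exists
    repo_id="openai/clip-vit-large-patch14-336",  # HuggingFace repo used to load the processor
)

# The returned CLIPModel is ready for inference.
```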