Upload Your First Model
Upload a custom model to the Clarifai platform
The Clarifai platform allows you to upload custom models for a wide range of use cases. With just a few simple steps, you can get your models up and running and leverage the platform’s powerful capabilities.
Let’s walk through how to upload a simple custom model that appends the phrase Hello World to any input text.
You can test the already uploaded model here.
To learn more about how to upload different types of models, check out this comprehensive guide.
Step 1: Perform Prerequisites
Install Clarifai Package
Install the latest version of the clarifai Python SDK. This also installs the Clarifai Command Line Interface (CLI), which we'll use for uploading the model.
- Bash
pip install --upgrade clarifai
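Optionally, you can confirm the package installed correctly by checking its version from Python. The sketch below uses only the standard library's importlib.metadata module, not any Clarifai-specific API:
- Python
from importlib.metadata import version

# Print the installed clarifai package version to confirm the install succeeded
print(version("clarifai"))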
Set a PAT Key
You need to set the CLARIFAI_PAT (Personal Access Token) as an environment variable. You can generate the PAT key in your personal settings page by navigating to the Security section.
This token is essential for authenticating your connection to the Clarifai platform.
- Unix-Like Systems
export CLARIFAI_PAT=YOUR_PERSONAL_ACCESS_TOKEN_HERE
- Windows
set CLARIFAI_PAT=YOUR_PERSONAL_ACCESS_TOKEN_HERE
On Windows, the Clarifai Python SDK expects a HOME environment variable, which isn’t set by default. To ensure compatibility with file paths used by the SDK, set HOME to the value of your USERPROFILE. You can set it in your Command Prompt this way: set HOME=%USERPROFILE%.
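Since both the SDK and the CLI read the token from the environment, a quick sanity check like the one below can confirm that CLARIFAI_PAT is visible to Python before you continue:
- Python
import os

# Read the token the same way the Clarifai SDK does: from the environment
pat = os.environ.get("CLARIFAI_PAT")
if not pat:
    raise RuntimeError("CLARIFAI_PAT is not set; export it before continuing.")
print(f"Found CLARIFAI_PAT ({len(pat)} characters)")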
Step 2: Create Files
Create a project directory and organize your files as shown below to meet the requirements for uploading models to the Clarifai platform.
your_model_directory/
├── 1/
│ └── model.py
├── requirements.txt
└── config.yaml
- your_model_directory/ – The root directory containing all files related to your custom model.
- 1/ – A subdirectory that holds the model file (note that the folder must be named 1).
- model.py – Contains the code that defines your model, including its inference logic.
- requirements.txt – Lists the Python dependencies required to run your model.
- config.yaml – Contains metadata and configuration settings, such as compute requirements, needed for uploading the model to Clarifai.
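If you prefer to scaffold this structure from a script rather than by hand, a minimal sketch like the following creates the expected layout using only the Python standard library (the directory name your_model_directory is just a placeholder):
- Python
from pathlib import Path

# Create the root directory and the required "1" subdirectory
root = Path("your_model_directory")
(root / "1").mkdir(parents=True, exist_ok=True)

# Create empty placeholder files, to be filled in with the snippets below
for name in ("1/model.py", "requirements.txt", "config.yaml"):
    (root / name).touch()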
Add the following snippets to each of the respective files.
model.py
- Python
from clarifai.runners.models.model_class import ModelClass
from clarifai.runners.utils.data_types import Text


class MyFirstModel(ModelClass):
    """A custom model that adds 'Hello World' to the end of a text."""

    @ModelClass.method
    def predict(self, text1: Text = "") -> Text:
        """
        This is the method that will be called when the model is run.
        It takes in an input and returns an output.
        """
        output_text = text1.text + " Hello World"
        return Text(output_text)
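Because this model's predict method has no external dependencies, you may be able to exercise its logic locally before uploading. The sketch below is only an informal sanity check, not an official testing workflow: it assumes the @ModelClass.method decorator leaves the method directly callable and that you run the script from inside the 1/ directory so model.py is importable.
- Python
# Assumption: run from inside the 1/ directory so model.py can be imported directly
from model import MyFirstModel
from clarifai.runners.utils.data_types import Text

my_model = MyFirstModel()
# The returned Text object should contain "Quick local check Hello World"
print(my_model.predict(Text("Quick local check")))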
requirements.txt
- Text
clarifai>=11.3.0
config.yaml
In the model section of the config.yaml file, specify your model ID, Clarifai user ID, and Clarifai app ID. These define where your model will be uploaded on the Clarifai platform.
- YAML
model:
  id: "my-first-model"
  user_id: "YOUR_USER_ID_HERE"
  app_id: "YOUR_APP_ID_HERE"
  model_type_id: "text-to-text"

build_info:
  python_version: "3.11"

inference_compute_info:
  cpu_limit: "1"
  cpu_memory: "5Gi"
  num_accelerators: 0
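Before uploading, you can optionally check that the YAML parses and contains the destination IDs you expect. This sketch assumes PyYAML is available in your environment (install it separately if it is not):
- Python
import yaml

# Load the config and print the IDs that determine where the model will be uploaded
with open("your_model_directory/config.yaml") as f:
    config = yaml.safe_load(f)

print(config["model"]["user_id"], config["model"]["app_id"], config["model"]["id"])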
Step 3: Upload the Model
Once your custom model is ready, upload it to the Clarifai platform by navigating to the directory containing the model and running the following command:
- CLI
clarifai model upload
Step 4: Predict With Model
Once your model is successfully uploaded to Clarifai, you can start making predictions with it.
- Python
import os
from clarifai.client import Model
# Set your Personal Access Token (PAT)
os.environ["CLARIFAI_PAT"] = "YOUR_PAT_HERE"
# Initialize with model URL
model = Model(url="https://clarifai.com/alfrick/docs-demos/models/my-first-model")
response = model.predict("Yes, I uploaded it!")
print(response)
Output Example
Text(text='Yes, I uploaded it! Hello World', url=None)
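If you would rather not hard-code your PAT in the script, you can point the Model client at your own copy of the model and reuse the token exported in Step 1. The URL below is a placeholder built from the user ID and app ID in your config.yaml, and passing the token via the pat argument is shown as one option; adjust it to your setup:
- Python
import os
from clarifai.client import Model

# Reuse the CLARIFAI_PAT environment variable instead of hard-coding the token
model = Model(
    url="https://clarifai.com/YOUR_USER_ID_HERE/YOUR_APP_ID_HERE/models/my-first-model",
    pat=os.environ["CLARIFAI_PAT"],
)

print(model.predict("Testing my first model"))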
Congratulations!
You've successfully uploaded your first model to the Clarifai platform and run inference with it!
In this example, we used the default deployment setting (Clarifai Shared). To learn how to leverage our Compute Orchestration capabilities for scalable and cost-efficient inference across various use cases, click here.