Test Models Locally
Learn how to test your custom models locally
Before uploading a custom model to the Clarifai platform, always test it locally. Doing so ensures smooth performance, verifies dependency compatibility, and streamlines the deployment process.
This step helps you detect problems like setup file errors, typos, code misconfigurations, or incorrect model implementations — saving you time and avoiding upload failures.
You should ensure your local environment has sufficient memory and compute resources to handle model loading and execution during the testing process.
Prerequisites
Build a Model
You can either build a custom model from scratch or leverage pre-trained models from external repositories like Hugging Face.
If you're developing your own model, our step-by-step guide provides detailed instructions to get started. You can also explore this examples repository to learn how to build models compatible with the Clarifai platform.
Note that to test your model, you need to implement a test method in the model.py file. This method should internally call the model's other methods to perform validation. When you run the test-locally CLI command shown below, it automatically invokes the test method to carry out the testing process.
Below is a sample model.py file with an example implementation of the test method:
- Python
from typing import Iterator

from clarifai.runners.models.model_class import ModelClass
from clarifai.runners.utils.data_types import Text


class MyModel(ModelClass):
  """A custom runner that adds "Hello World" to the end of the text."""

  def load_model(self):
    """Load the model here."""

  @ModelClass.method
  def predict(self, text1: Text = Text("")) -> Text:
    output_text = text1.text + "Hello World"
    return Text(output_text)

  @ModelClass.method
  def generate(self, text1: Text = Text("")) -> Iterator[Text]:
    """Example yielding a whole batch of streamed stuff back."""
    for i in range(10):  # fake something iterating, generating 10 times.
      output_text = text1.text + f"Generate Hello World {i}"
      yield Text(output_text)

  def test(self):
    res = self.predict(Text("test"))
    assert res.text == "testHello World"

    res = self.generate(Text("test"))
    for i, r in enumerate(res):
      assert r.text == f"testGenerate Hello World {i}"
Install Clarifai CLI
Install the latest version of the Clarifai CLI (Command Line Interface) tool. We'll use this tool to test models in the local development environment.
- Bash
pip install --upgrade clarifai
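To verify that the CLI is installed and available on your PATH, you can print its help text, which lists the available commands:
- Bash
clarifai --help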
Set up Docker or a Virtual Environment
Set up either a Docker container (recommended) or a Python virtual environment for testing the model locally. This ensures proper dependency management and prevents conflicts in your project.
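If you plan to use container mode, it's worth confirming up front that Docker is installed and the daemon is reachable; both of the commands below are standard Docker checks:
- Bash
docker --version
docker info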
These are the key CLI flags available for local testing and running your models:
- --mode — Specify how to run the model: env for a virtual environment or container for a Docker container. Defaults to env.
- -p or --port — The port on which to host the gRPC server for running the model locally. Defaults to 8000.
- --keep_env — Retain the virtual environment after testing the model locally (applicable for env mode). Defaults to False.
- --keep_image — Retain the Docker image built after testing the model locally (applicable for container mode). Defaults to False.
- --skip_dockerfile — Skip generating a dockerfile so that you can manually edit an already created dockerfile.
You can specify the path to the directory containing the custom model you want to test. For example, if your model's files are stored in ./examples/models/clarifai_llama, use the following command:
- Bash
clarifai model test-locally ./examples/models/clarifai_llama --mode container
If you don't specify a path, the current directory is used by default. In that case, simply navigate to the directory and run:
- Bash
clarifai model test-locally --mode container
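The flags listed above can be combined. For example, a reasonable workflow during iterative development is to test in a virtual environment and keep that environment around so later runs skip the setup step:
- Bash
clarifai model test-locally --mode env --keep_env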
Test by Running Locally
The test-locally command allows you to test your model with a single CLI invocation. It runs the model locally and sends a sample request to verify that the model responds successfully. The results of the request are displayed directly in the console.
Here is how to test a model in a Docker Container:
- Bash
clarifai model test-locally --mode container
Here is how to test a model in a virtual environment:
- Bash
clarifai model test-locally --mode env
Test by Starting a gRPC Server
The run-locally command starts a local gRPC server at https://localhost:{port}/ for running the model. Once the server is running, you can perform inference on the model via the Clarifai Python SDK.
Here is how to test a model in a Docker Container:
- Bash
clarifai model run-locally --mode container --port 8000
Here is how to test a model in a virtual environment:
- Bash
clarifai model run-locally --mode env --port 8000
Make Inference Requests
Once the model is running locally, you need to configure the CLARIFAI_API_BASE environment variable to point to the localhost address and port where the gRPC server is running.
- Unix-Like Systems
export CLARIFAI_API_BASE="localhost:add-port-here"
- Windows
set CLARIFAI_API_BASE="localhost:add-port-here"
You can then make inference requests using the model.
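For example, below is a minimal sketch of a request made with the Clarifai Python SDK against the sample model.py shown earlier. The model URL is a placeholder (substitute your own user, app, and model IDs), and the exact predict signature may vary with your SDK version:
- Python
from clarifai.client import Model

# Assumes CLARIFAI_API_BASE was exported as shown above (e.g. "localhost:8000")
# and that the local gRPC server is running on that port.
# Placeholder URL: substitute your own user, app, and model IDs.
model = Model(url="https://clarifai.com/your-user-id/your-app-id/models/your-model-id")

# Calls the predict method defined in the sample model.py above.
response = model.predict(text1="test")
print(response)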