Test Models Locally
Learn how to test your custom models locally
To successfully upload a custom model to the Clarifai platform — whether you've built it from scratch or sourced it from an external repository like Hugging Face — it's crucial to test it locally in a Docker or virtual environment first.
This step helps identify and resolve potential issues such as setup file errors, typos, code misconfigurations, or incorrect model implementations before uploading.
By doing so, you can ensure the model runs seamlessly and that all dependencies are properly configured, minimizing the risk of upload failures and ensuring optimal performance.
Prerequisites
- Set up the latest version of the Clarifai CLI (command line interface) tool. We'll use it to test models in the local development environment; an installation example follows this list.
- Set up either a Docker container (recommended) or a Python virtual environment as your local development environment. This ensures proper dependency management and prevents conflicts in your project.
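If you haven't installed the CLI yet, it ships with the Clarifai Python package. As a minimal sketch, assuming a working Python and pip installation:
- Bash
pip install --upgrade clarifai
clarifai --help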
These are the key CLI flags available for local testing and running your models:
- --mode: Specify how to run the model: env for a virtual environment or container for a Docker container. Defaults to env.
- -p or --port: The port on which to host the gRPC server for running the model locally. Defaults to 8000.
- --keep_env: Retain the virtual environment after testing the model locally (applicable in env mode). Defaults to False.
- --keep_image: Retain the Docker image built after testing the model locally (applicable in container mode). Defaults to False.
- --skip_dockerfile: Skip generating a Dockerfile so that you can manually edit an already created Dockerfile.
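For instance, to test a model in a virtual environment and keep that environment around for faster repeat runs, you could combine the flags above (a sketch, assuming --keep_env acts as a boolean switch):
- Bash
clarifai model test-locally --mode env --keep_env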
You can specify the path to the directory containing the custom model you want to test. For example, if your model's files are stored in ./examples/models/clarifai_llama, use the following command:
clarifai model test-locally ./examples/models/clarifai_llama --mode container
If you don’t specify a path, the current directory is used by default. In that case, simply navigate to the directory and run:
clarifai model test-locally --mode container
Test by Running Locally
The test-locally method allows you to test your model with a single CLI command. It runs the model locally and sends a sample request to verify that the model responds successfully. The results of the request are displayed directly in the console.
Here is how to test a model in a Docker container:
- Bash
clarifai model test-locally --mode container
Here is how to test a model in a virtual environment:
- Bash
clarifai model test-locally --mode env
Test by Starting a gRPC Server
The run-locally method starts a local gRPC server at localhost:{port} for running the model. Once the server is running, you can perform inference on the model via the Clarifai Python SDK.
Here is how to test a model in a Docker container:
- Bash
clarifai model run-locally --mode container --port 8000
Here is how to test a model in a virtual environment:
- Bash
clarifai model run-locally --mode env --port 8000
Make Inference Requests
Once the model is running locally, you need to configure the CLARIFAI_API_BASE environment variable to point to the localhost address and port where the gRPC server is running.
- Bash
export CLARIFAI_API_BASE="localhost:add-port-here"
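Alternatively, you can set the variable from Python before creating the client. A minimal sketch, assuming the server is listening on port 8000 as in the earlier examples:
- Python
import os

# Point the SDK at the local gRPC server instead of the hosted Clarifai API.
# Port 8000 matches the run-locally examples above; adjust if you used another.
os.environ["CLARIFAI_API_BASE"] = "localhost:8000"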
You can then make different types of inference requests using the model — unary-unary, unary-stream, or stream-stream predict calls.
Here is an example of a unary-unary prediction call:
- Python
from clarifai.client.model import Model
# Dummy values are fine for model_id, user_id, and app_id: with
# CLARIFAI_API_BASE pointing at localhost, requests go to the local
# server and these IDs are never validated against the platform.
model = Model(model_id='model_id', user_id='user_id', app_id='app_id')

image_url = "https://samples.clarifai.com/metro-north.jpg"

# Unary-unary: one request in, one response back
model_prediction = model.predict_by_url(image_url)
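The unary-stream case works the same way. Here is a minimal sketch, reusing the model object above and assuming a text-capable model and an SDK version that provides Model.generate_by_bytes for streaming responses:
- Python
# Unary-stream: one request in, a stream of partial responses back.
# generate_by_bytes and the "text" input type are assumptions based on
# Clarifai's streaming examples; adjust to your model's input modality.
prompt = "Write a short poem about trains."
stream_response = model.generate_by_bytes(prompt.encode(), input_type="text")
for response in stream_response:
    print(response)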