Local Runners
Run models locally for development, debugging, or compute tasks
Local Runners are a powerful feature that lets you securely expose your locally running models or servers via a public URL, allowing you to quickly develop, test, and share any models running on your own hardware.
Instead of running solely in the cloud, you can run your models anywhere that supports Python and has an internet connection — whether it's your workstation or on-premise server.
With Local Runners, you can connect your own models to Clarifai's compute plane. This seamless integration enables you to leverage the Clarifai cloud API, workflows, and other platform capabilities with your custom models.
Your model can securely receive and process requests from anywhere, just as it would in a production cloud deployment.
Note: A runner is the actual running instance of your model. It is a unique process that pulls tasks (such as prediction requests) from a queue and executes them using the model’s logic.
Prerequisites
Before you can start developing and testing your models locally with Local Runners, there are a couple of things you'll need.
Install Clarifai CLI
Install the latest version of the Clarifai CLI (version 11.6.3 or higher). It includes built-in support for Local Runners.
- Bash
pip install --upgrade clarifai
Note: You'll need Python 3.10 or higher installed to successfully run the Local Runners.
Get a PAT Key and User ID
To authenticate your connection with the Clarifai platform, you'll need a Personal Access Token (PAT) key. You can generate the PAT key in your personal settings page by navigating to the Security section.
Additionally, visit the Account section to find your User ID, which is also required for setup.
Quick Start
Once you've completed the prerequisites above, run the following commands and follow the prompts in your terminal to quickly get started with Local Runners.
Log in to Clarifai
Connect your environment to the Clarifai platform and create a context profile.
- CLI
clarifai login
Set up a Model
Generate a sample toy model with the necessary files.
- CLI
clarifai model init
Start Your Local Runner
Next, you'll connect your model to a public URL using Local Runners. The CLI will guide you through a series of confirmations for key objects on the Clarifai platform, such as compute clusters, nodepools, and deployments — which are described below.
Just review each prompt and confirm to proceed.
- CLI
clarifai model local-runner
Once your runner launches successfully, your model will be running and accessible via a public URL. You can then open a new terminal, copy the sample code provided in the output, and test your model!
Use Cases for Local Runners
- Streamlined model development — Local Runners make it easy to build and test new models directly within your local environment. You can spin up a runner on your machine, route API calls through our public cloud endpoint, and watch requests hit your model in real time. This allows you to debug, set breakpoints, return results, and validate outputs.
- Leverage your own compute resources — If you have powerful hardware, you can take advantage of that local compute without relying on Clarifai's autoscaling infrastructure. Your model remains accessible through our API with full authentication, even though it runs locally.
- Locally connect agents — Because Local Runners execute on your chosen hardware, they can interact with local file systems, make OS-level calls, or access private data stores. With our MCP (Model Context Protocol) model type, you can give your cloud-hosted agents or any MCP-enabled clients authenticated access to your locally controlled information, regardless of their deployment location.
- Run models anywhere — Whether on a local development machine, an on-premises server, or a private cloud cluster, Local Runners seamlessly connect your models to our platform. This enables you to keep sensitive data and custom-built models securely within your own environment.
Step 1: Build a Model
Start by building the model you want to run using Local Runners.
You can either create a custom model from scratch or leverage pre-trained models from external sources like Hugging Face.
If you're building your own model, follow our comprehensive step-by-step guide to get started.
You can also explore our examples repository to see models built for compatibility with the Clarifai platform.
You can automatically generate a default model by running the `clarifai model init` command in the terminal from your current directory. After the model's files are created, you can modify them as needed or go with the default options.
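To give a sense of what the generated files contain: a runner-compatible model is a Python class that subclasses `ModelClass` and exposes callable methods via the `@ModelClass.method` decorator. The following is a minimal sketch mirroring the toy model's shape; the exact template generated by your CLI version may differ, and the `MyModel` name and method bodies are illustrative.
- Python
# model.py: a minimal sketch of a runner-compatible model, mirroring the
# shape of the toy model that `clarifai model init` generates. The exact
# template varies by CLI version; the method bodies here are illustrative.
from typing import Iterator

from clarifai.runners.models.model_class import ModelClass

class MyModel(ModelClass):
    def load_model(self):
        # Runs once when the runner starts; load weights or clients here.
        pass

    @ModelClass.method
    def predict(self, text1: str = "") -> str:
        # Replace with your real inference logic.
        return f"Echo: {text1}"

    @ModelClass.method
    def generate(self, text1: str = "") -> Iterator[str]:
        # Streaming variant: yield partial results as they are produced.
        for token in f"Echo: {text1}".split():
            yield token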
Step 2: Create a Context (Optional)
Running the local development runner relies on certain environment variables defined in your current context. The context refers to the active environment settings that determine how your commands interact with the Clarifai platform.
Note: You can create this context using the provided default values when you run the `clarifai login` and `clarifai model local-runner` commands.
Any configurations you create locally — such as the compute cluster and app — will also be created on the Clarifai platform, making them reusable whenever you test your model with the local development runner.
Click here to learn how to create and manage various aspects of your Clarifai context, including switching contexts and editing your configuration information.
These are the environment variables required to create a runner:
| Variable | Description |
|---|---|
| `CLARIFAI_PAT` | Personal Access Token (PAT) for authentication |
| `CLARIFAI_USER_ID` (`user_id`) | User ID of the account that owns the model |
| `CLARIFAI_APP_ID` (`app_id`) | App ID containing the model |
| `CLARIFAI_MODEL_ID` (`model_id`) | ID of the model to be run locally |
| `CLARIFAI_COMPUTE_CLUSTER_ID` (`compute_cluster_id`) | Compute cluster where the Local Runner will reside. Note that the `user_id` of the compute cluster must match the `user_id` of the model. |
| `CLARIFAI_NODEPOOL_ID` (`nodepool_id`) | Nodepool within the compute cluster |
| `CLARIFAI_DEPLOYMENT` (`deployment_id`) | Deployment that places the model into the cluster and nodepool |
| `CLARIFAI_RUNNER_ID` (`runner_id`) | Auto-generated unique runner ID, created by the API and stored in the context |
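These values are normally written to your context for you by `clarifai login` and the local-runner setup, but if you prefer to manage them as environment variables, a quick sanity check like the sketch below can catch missing ones before you start the runner (the variable list assumes the table above; adjust as needed):
- Python
import os

# Hypothetical helper: verify the core variables the runner needs are set.
# In practice, `clarifai login` stores these in your context for you.
required = [
    "CLARIFAI_PAT",
    "CLARIFAI_USER_ID",
    "CLARIFAI_APP_ID",
    "CLARIFAI_MODEL_ID",
]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("Local Runner environment looks complete.")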
Step 3: Run Your Model
We'll use this method to run and test the model we created in Step 1.
To run your model with the local development runner, navigate to the directory where your custom model is located.
Then, follow these steps.
Log in to Clarifai
Run the following command to log in to the Clarifai platform and establish a connection.
- CLI
clarifai login
After running the `login` command, you'll be prompted to enter the following details to authenticate your connection:
- CLI
context name (default: "default"):
user id:
personal access token value (default: "ENVVAR" to get our env var rather than config):
- Context name — You can provide a custom name for your Clarifai configuration context, or simply press Enter to use the default name, "default". This helps you manage different configurations if needed.
- User ID — Enter your Clarifai user ID.
- PAT — Enter your Clarifai PAT. Note that if you press Enter and you have set the `CLARIFAI_PAT` environment variable, it will use that token automatically.
Start Your Local Runner
Next, start a local development runner.
- CLI
clarifai model local-runner
Or:
- CLI
clarifai model local-runner [OPTIONS] [MODEL_PATH]
`MODEL_PATH` is an optional path to the model directory. If omitted, the current directory is used by default.
If the runner doesn't detect the necessary context configurations in your environment, it will prompt you to create them using default values.
This ensures that all essential components required for Local Runners are properly set up or included in your configuration context, including:
- A compute cluster and nodepool configured for Local Runners.
Note: This cluster is created exclusively for Local Runners. It is not designed to support other tasks and lacks features like autoscaling to handle variable traffic demands, among other cloud-specific capabilities. You also cannot use other types of clusters for Local Runners — only the special cluster created for this purpose is supported.
- An app with a model and model version representing the local runner.
- A deployment that places the model version into the designated nodepool.
Example Output
clarifai login
context name (default: "default"):
user id: alfrick
personal access token value (default: "ENVVAR" to get our env var rather than config): c02f72c***************
clarifai model local-runner
[INFO] 14:00:01.177806 Checking setup for local development runner... | thread=3524
[INFO] 14:00:01.179203 Current context: default | thread=3524
[INFO] 14:00:01.189128 Current user_id: alfrick | thread=3524
[INFO] 14:00:01.189128 Current compute_cluster_id: local-dev-compute-cluster | thread=3524
[INFO] 14:00:02.394761 Failed to get compute cluster with ID local-dev-compute-cluster: code: CONN_DOES_NOT_EXIST
description: "Resource does not exist"
details: "ComputeCluster with ID \'local-dev-compute-cluster\' not found. Check your request fields."
req_id: "sdk-python-11.4.1-c8a4b5bab7a84ca7a91cd10fbc620c0f"
| thread=3524
Compute cluster not found. Do you want to create a new compute cluster alfrick/local-dev-compute-cluster? (y/n): y
[INFO] 14:00:20.755481
Compute Cluster created
code: SUCCESS
description: "Ok"
req_id: "sdk-python-11.4.1-31b3b07f08f04cbb81a50edfba30454f"
| thread=3524
[INFO] 14:00:20.768179 Current nodepool_id: local-dev-nodepool | thread=3524
[INFO] 14:00:21.948659 Failed to get nodepool with ID local-dev-nodepool: code: CONN_DOES_NOT_EXIST
description: "Resource does not exist"
details: "Nodepool not found. Check your request fields."
req_id: "sdk-python-11.4.1-2e792b00b7d64521942e09f990db7e84"
| thread=3524
Nodepool not found. Do you want to create a new nodepool alfrick/local-dev-compute-cluster/local-dev-nodepool? (y/n): y
[INFO] 14:00:25.698797
Nodepool created
code: SUCCESS
description: "Ok"
req_id: "sdk-python-11.4.1-65d15f07e7534d44aa79a0d1a8607350"
| thread=3524
[INFO] 14:00:25.720799 Current app_id: local-dev-runner-app | thread=3524
[INFO] 14:00:26.010459 Failed to get app with ID local-dev-runner-app: code: CONN_DOES_NOT_EXIST
description: "Resource does not exist"
details: "app identified by path /users/alfrick/apps/local-dev-runner-app not found"
req_id: "sdk-python-11.4.1-b3c6fcd7b4ad4c2e91f742dc31705a5b"
| thread=3524
App not found. Do you want to create a new app alfrick/local-dev-runner-app? (y/n): y
[INFO] 14:00:31.354436
App created
code: SUCCESS
description: "Ok"
req_id: "sdk-python-11.4.1-425accc65e1b451f9f6909d9dac25ecb"
| thread=3524
[INFO] 14:00:31.363481 Current model_id: local-dev-model | thread=3524
[INFO] 14:00:33.396312 Failed to get model with ID local-dev-model: code: MODEL_DOES_NOT_EXIST
description: "Model does not exist"
details: "Model \'local-dev-model\' does not exist."
req_id: "sdk-python-11.4.1-024501ac37044ed4a4ed5a397f860cf8"
| thread=3524
Model not found. Do you want to create a new model alfrick/local-dev-runner-app/models/local-dev-model? (y/n): y
[INFO] 14:00:43.972640
Model created
code: SUCCESS
description: "Ok"
req_id: "sdk-python-11.4.1-052d3566144440bc83d3e5b66826a396"
| thread=3524
[INFO] 14:00:45.317992 No model versions found. Creating a new version for local dev runner. | thread=3524
[INFO] 14:00:45.695709
Model Version created
code: SUCCESS
description: "Ok"
req_id: "sdk-python-11.4.1-cc9130981bea437895364a1a0debb079"
| thread=3524
[INFO] 14:00:45.698444 Created model version 5aaae29b6551422ea9f78c84e5f19205 | thread=3524
[INFO] 14:00:45.698444 Current model version 5aaae29b6551422ea9f78c84e5f19205 | thread=3524
[INFO] 14:00:45.698444 Create the local dev runner tying this
alfrick/local-dev-runner-app/models/local-dev-model model (version: 5aaae29b6551422ea9f78c84e5f19205) to the
alfrick/local-dev-compute-cluster/local-dev-nodepool nodepool. | thread=3524
[INFO] 14:00:46.922695
Runner created
code: SUCCESS
description: "Ok"
req_id: "sdk-python-11.4.1-c1176211bd5e4cceadb35040f8d8d2a7"
with id: 9dffc801b4904095a987b5c2b8508edf | thread=3524
[INFO] 14:00:46.936701 Current runner_id: 9dffc801b4904095a987b5c2b8508edf | thread=3524
[INFO] 14:00:47.239289 Current deployment_id: local-dev-deployment | thread=3524
[INFO] 14:00:47.241289 Current model section of config.yaml: {'id': 'my-uploaded-model', 'user_id': 'alfrick', 'app_id': 'docs-demos', 'model_type_id': 'text-to-text'} | thread=3524
Do you want to backup config.yaml to config.yaml.bk then update the config.yaml with the new model information? (y/n): y
[INFO] 14:00:51.122777
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
# About to start up the local dev runner in this terminal...
# Here is a code snippet to call this model once it start from another terminal:
| thread=3524
[INFO] 14:00:51.123778
# Clarifai Model Client Script
# Example usage:
import os
from clarifai.client import Model
from clarifai.runners.utils import data_types
model = Model.from_current_context()
# Example model prediction from different model methods:
response = model.predict(text1='""')
print(response)
response = model.generate(text1='""')
for res in response:
    print(res)
| thread=3524
[INFO] 14:00:51.123778 Now starting the local dev runner... | thread=3524
Note: If the `config.yaml` file does not contain model information that matches the `user_id`, `app_id`, and `model_id` defined in your current context, it will be automatically updated to include the new model details. This ensures that the model started by the local development runner is the same one you intend to call via the API. If needed, you can back up the existing `config.yaml` file as `config.yaml.bk`.
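For reference, based on the log output above, the model section the CLI checks (and rewrites if needed) looks roughly like this; all IDs shown are placeholders that will be replaced with the values from your context:
- YAML
# Hypothetical model section of config.yaml; the CLI rewrites these
# fields to match the user_id, app_id, and model_id in your context.
model:
  id: local-dev-model
  user_id: your-user-id
  app_id: local-dev-runner-app
  model_type_id: text-to-text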
You can view the active runners associated with your model on its individual page in the Clarifai platform. For a centralized view and easier management of all active runners, use the Runners dashboard.
Step 4: Test with Snippet
Once the local development runner starts in your terminal, an example client code snippet is automatically generated based on the model's signature to help you test it.
Example Code Snippet
# Clarifai Model Client Script
# Example usage:
import os
from clarifai.client import Model
from clarifai.runners.utils import data_types
model = Model.from_current_context()
# Example model prediction from different model methods:
response = model.predict(text1='""')
print(response)
response = model.generate(text1='""')
for res in response:
    print(res)
If you run the generated snippet in a separate terminal, but within the same directory, you’ll receive the model’s response output.
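Note that `Model.from_current_context()` works because it reads the IDs stored in your local Clarifai context. To call the same runner from a different machine or directory, you can construct the client from the model's URL instead. This is a sketch under the assumption that your runner uses the IDs shown in the example output above; substitute your own user ID, app ID, and model ID:
- Python
import os
from clarifai.client import Model

# Hypothetical IDs; substitute the user_id, app_id, and model_id
# printed by your local runner.
model = Model(
    url="https://clarifai.com/alfrick/local-dev-runner-app/models/local-dev-model",
    pat=os.environ["CLARIFAI_PAT"],
)
response = model.predict(text1="Hello from anywhere!")
print(response)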
After you're done testing, simply close the terminal running the local development runner to shut it down.