Clarifai CLI

Learn how to use the Clarifai Command Line Interface (CLI)


Clarifai’s Command Line Interface (CLI) is a powerful, user-friendly tool designed to simplify and enhance your experience with our AI platform. By offering a streamlined way to execute tasks directly from the terminal, the CLI eliminates the need for extensive coding or constant reliance on the web interface.

The Clarifai CLI supports a broad range of functionalities, from making model predictions to leveraging advanced Compute Orchestration capabilities, making it an essential tool for a wide range of use cases.

Bundled within our Python SDK package, the CLI empowers both technical and non-technical users to efficiently manage tasks and boost productivity on the Clarifai platform.

Installation

To begin, install the latest version of the clarifai Python package.

pip install --upgrade clarifai

Basics

The CLI tool is designed to help users manage various aspects of their compute resources, deployments, and models through a series of intuitive commands and aliases.

Usage: clarifai [OPTIONS] COMMAND [ARGS]...

  Clarifai CLI

Options:
  --help  Show this message and exit.

Commands:
  cc              Alias for 'computecluster'
  computecluster  Manage Compute Clusters: create, delete, list
  deployment      Manage Deployments: create, delete, list
  dpl             Alias for 'deployment'
  login           Login command to set PAT and other configurations.
  model           Manage models: upload, test locally, run_locally, predict
  nodepool        Manage Nodepools: create, delete, list
  np              Alias for 'nodepool'

The --help option is particularly useful to quickly understand the available functionalities and how to use them.

clarifai COMMAND --help

For example:

clarifai model --help

Produces this output:

Usage: clarifai model [OPTIONS] COMMAND [ARGS]...

  Manage models: upload, test locally, run_locally, predict

Options:
  --help  Show this message and exit.

Commands:
  predict       Predict using the given model
  run-locally   Run the model locally and start a gRPC server to serve...
  test-locally  Test model locally.
  upload        Upload a model to Clarifai.

tip

You can learn how to use the run-locally, test-locally, and upload commands here.

Login

To use the Clarifai CLI, you must first log in using a Personal Access Token (PAT). This requires creating a YAML login configuration file to securely store your credentials.

user_id: "YOUR_USER_ID_HERE"
pat: "YOUR_PAT_HERE"

Once the configuration file is set up, you can authenticate your CLI session with Clarifai using the stored credentials. This ensures seamless access to the CLI's features and functionalities.

clarifai login --config <add-config-filepath-here>
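For example, you could create the configuration file directly from the shell like this (config.yaml is a hypothetical file name; any path works as long as you pass it to --config):

```shell
# Write the login configuration to a file (hypothetical name: config.yaml)
cat > config.yaml <<'EOF'
user_id: "YOUR_USER_ID_HERE"
pat: "YOUR_PAT_HERE"
EOF

# Then authenticate the CLI session, as shown above:
# clarifai login --config config.yaml
```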

Compute Orchestration

The Clarifai CLI simplifies Compute Orchestration tasks. With the CLI, you can easily manage the infrastructure required for deploying and scaling machine learning models, even without extensive technical expertise.

You can learn how to use the CLI for Compute Orchestration here.

Model Operations

You can perform model predictions using the Clarifai CLI in the following ways:

  • By specifying user_id, app_id, and model_id
  • By providing the model URL
  • By using a YAML configuration file
CLI Predict Options
Usage: clarifai model predict [OPTIONS]

  Predict using the given model

Options:
  --config PATH                   Path to the model predict config file.
  --model_id TEXT                 Model ID of the model used to predict.
  --user_id TEXT                  User ID of the model used to predict.
  --app_id TEXT                   App ID of the model used to predict.
  --model_url TEXT                Model URL of the model used to predict.
  --file_path TEXT                File path of file for the model to predict
  --url TEXT                      URL to the file for the model to predict
  --bytes TEXT                    Bytes to the file for the model to predict
  --input_id TEXT                 Existing input id in the app for the model
                                  to predict
  --input_type TEXT               Type of input
  -cc_id, --compute_cluster_id TEXT
                                  Compute Cluster ID to use for the model
  -np_id, --nodepool_id TEXT      Nodepool ID to use for the model
  -dpl_id, --deployment_id TEXT   Deployment ID to use for the model
  --inference_params TEXT         Inference parameters to override
  --output_config TEXT            Output config to override
  --help                          Show this message and exit.

Predict by IDs

You can pass the input inline with the --bytes argument, along with the user_id, app_id, and model_id of the model.

clarifai model predict --model_id claude-v2 --user_id anthropic --app_id completion --bytes "Human: Write a tweet on future of AI\nAssistant:" --input_type text

Output Example
status {
  code: SUCCESS
  description: "Ok"
  req_id: "sdk-python-11.0.5-ee8ff730ab144888b6062a6b5ece6d1b"
}
outputs {
  id: "dddb6616ba594f41abf77dd8f35b351d"
  status {
    code: SUCCESS
    description: "Ok"
  }
  created_at {
    seconds: 1737623976
    nanos: 884184949
  }
  model {
    id: "claude-v2"
    name: "claude-v2"
    created_at {
      seconds: 1689360584
      nanos: 718730000
    }
    app_id: "completion"
    model_version {
      id: "f39db57922eb48188cf41a26660aaf74"
      created_at {
        seconds: 1706762592
        nanos: 463521000
      }
      status {
        code: MODEL_TRAINED
        description: "Model is trained and ready"
      }
      completed_at {
        seconds: 1706762763
        nanos: 246861000
      }
      visibility {
        gettable: PUBLIC
      }
      app_id: "completion"
      user_id: "anthropic"
      metadata {
      }
    }
    user_id: "anthropic"
    model_type_id: "text-to-text"
    visibility {
      gettable: PUBLIC
    }
    modified_at {
      seconds: 1729160329
      nanos: 739032000
    }
    workflow_recommended {
    }
    image {
      url: "https://data.clarifai.com/large/users/anthropic/apps/completion/inputs/image/b9d666a9e16a31c8bbbf6da89cceb804"
      hosted {
        prefix: "https://data.clarifai.com"
        suffix: "users/anthropic/apps/completion/inputs/image/b9d666a9e16a31c8bbbf6da89cceb804"
        sizes: "small"
        sizes: "large"
        crossorigin: "use-credentials"
      }
    }
    license_type: CLOSED_SOURCE
    source: WRAPPED
    creator: "anthropic"
  }
  input {
    id: "13cf01b7817e4b38a0c7d140a3ce0755"
    data {
      text {
        raw: "Human: Write a tweet on future of AI\\nAssistant:"
        url: "https://samples.clarifai.com/placeholder.gif"
      }
    }
  }
  data {
    text {
      raw: " Here\'s a draft 280 character tweet on the future of AI:\n\nThe future of AI holds tremendous promise. As algorithms continue improving, AI will transform industries from healthcare to transportation. But we must ensure AI develops safely and ethically, prioritizing human wellbeing over profits or progress at any cost. Together, through foresight and care, we can build an AI-powered world that benefits all."
      text_info {
        encoding: "UnknownTextEnc"
      }
    }
  }
}
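Note that how the \n in the --bytes value is interpreted depends on your shell: in bash, a backslash-n inside double quotes is passed through as a literal backslash followed by n. If you want the prompt to contain a real newline, ANSI-C quoting is one option; a quick sketch:

```shell
# In bash, "\n" inside double quotes stays a literal backslash + n.
# ANSI-C quoting ($'...') turns \n into an actual newline:
PROMPT=$'Human: Write a tweet on future of AI\nAssistant:'
printf '%s\n' "$PROMPT"   # prints the prompt on two lines
```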

You can also use the --file_path argument, which specifies the local path to the file that contains the instructions for the model to generate predictions.

clarifai model predict --model_id claude-v2 --user_id anthropic --app_id completion --file_path <add-predict-filepath-here> --input_type text

You can also use the --url argument, which specifies the URL of the file that contains the instructions for the model to generate predictions.

clarifai model predict --model_id llama2-7b-chat --user_id meta --app_id Llama-2 --url https://samples.clarifai.com/featured-models/llama2-prompt3.txt --input_type text

Predict by Model URL

You can make predictions by using the --model_url argument, which specifies the URL of the model to be used for generating predictions.

clarifai model predict --model_url https://clarifai.com/anthropic/completion/models/claude-v2 --bytes "Human: Write a tweet on future of AI\nAssistant:" --input_type text
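The two styles are interchangeable because a model URL encodes the same identifiers, in the form https://clarifai.com/{user_id}/{app_id}/models/{model_id}. A quick illustration of the mapping, in plain shell with no Clarifai call:

```shell
# A model URL embeds the same IDs used by --user_id, --app_id, and --model_id:
#   https://clarifai.com/<user_id>/<app_id>/models/<model_id>
MODEL_URL="https://clarifai.com/anthropic/completion/models/claude-v2"
IFS=/ read -r _ _ _ USER_ID APP_ID _ MODEL_ID <<< "$MODEL_URL"
echo "$USER_ID $APP_ID $MODEL_ID"   # anthropic completion claude-v2
```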

Predict by a YAML file

You can provide the instructions for generating predictions in a YAML configuration file.

Here is an example:

model_url: "https://clarifai.com/anthropic/completion/models/claude-v2"
bytes: "Human: Write a tweet on future of AI\nAssistant:"
input_type: "text"
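For instance, you could save that configuration from the shell like this (predict_config.yaml is a hypothetical file name):

```shell
# Save the predict configuration to a file (hypothetical name: predict_config.yaml)
cat > predict_config.yaml <<'EOF'
model_url: "https://clarifai.com/anthropic/completion/models/claude-v2"
bytes: "Human: Write a tweet on future of AI\nAssistant:"
input_type: "text"
EOF
```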

Then, you need to specify the path to that file.

clarifai model predict --config <add-config-filepath-here>

Specify Prediction Parameters

You can specify prediction parameters to influence the output of some models. These settings allow you to control the model's behavior during prediction, influencing attributes such as creativity, coherence, and diversity in the results.

You can get a description of the prediction parameters here.

Here is how you can specify various inference parameters:

clarifai model predict --model_url https://clarifai.com/openai/whisper/models/whisper-large-v2 --url https://s3.amazonaws.com/samples.clarifai.com/featured-models/record_out+(3).wav --input_type audio --inference_params "{\"task\":\"transcribe\"}"
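The escaped double quotes above keep the JSON intact when the whole argument is wrapped in double quotes; in most shells, wrapping the JSON in single quotes achieves the same result with less escaping. Both forms produce the identical string:

```shell
# Both quoting styles yield the same JSON argument for --inference_params:
ESCAPED="{\"task\":\"transcribe\"}"
SINGLE='{"task":"transcribe"}'
[ "$ESCAPED" = "$SINGLE" ] && echo "identical"
```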

Here is how you can specify output configuration parameters:

clarifai model predict --model_url https://clarifai.com/clarifai/main/models/general-image-recognition --url https://samples.clarifai.com/dog2.jpeg --input_type image --output_config "{\"max_concepts\":3}"