Quick Start
Get started quickly with Clarifai in a few simple steps
Clarifai provides an intuitive interface and a robust API designed to get you up and running quickly. In just a few steps, or a few lines of code, you can bring your AI projects to life.
Try Out Our Community Models
We offer a diverse collection of Community models that you can browse, test, and integrate into your projects.
Step 1: Find a Model
You can easily find a model to use by heading to the homepage and exploring the Trending AI models section, which showcases popular and ready-to-use options.
After finding the model, click the TEST IN PLAYGROUND button in the bottom left corner of its information card.
For this example, we'll use the Llama-3.2-3B-Instruct model.
Alternatively, you can select the Playground option in the top navigation bar.
Step 2: Run Your Inference in Playground
You'll be taken to the Playground interface, which is a pre-authenticated testing environment that allows you to quickly interact with Clarifai's AI models without additional setup or authentication.
In the chat interface at the bottom of the Playground, enter your desired prompt to generate text with the selected model. Note that if the model supports image inputs as prompts, you can also upload images directly into the interface.
Alternatively, in the upper-left section of the Playground, you can choose the model you'd like to use for inference.
Then, click the arrow icon to submit your request.
The results will be streamed directly in the interface, allowing you to see the output in real time.
- For this example, we're using the default settings for deployment (Clarifai Shared), inference parameters, and others. You can customize these settings as needed for more advanced use cases; a sketch of overriding inference parameters through the SDK is shown below.
- You can toggle the button in the upper-left section of the Playground to display ready-to-use API code snippets in various programming languages. Simply copy and use them in your project.
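When you move from the Playground to the API, those same inference parameters can be overridden per request. The snippet below is a minimal sketch using the Python SDK; it assumes your installed clarifai package accepts an inference_params dictionary (as recent releases do), and the parameter names and values shown (temperature, max_tokens) are illustrative.
from clarifai.client.model import Model

model_url = "https://clarifai.com/meta/Llama-3/models/Llama-3_2-3B-Instruct"

# Sketch: override inference parameters per request.
# The values below are illustrative; tune them for your use case.
prediction = Model(url=model_url, pat="YOUR_PAT_HERE").predict_by_bytes(
    b"What is the future of AI?",
    input_type="text",
    inference_params={"temperature": 0.7, "max_tokens": 256},
)
print(prediction.outputs[0].data.text.raw)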
Call Your First Model With Our API
You can access the Clarifai API effortlessly using your preferred method:
- SDKs – Quick integration with official client libraries.
- CLI (Command Line Interface) – Manage tasks directly from the command line.
- HTTP Requests – Use any programming language with REST API calls.
- gRPC Clients – High-performance support for popular languages.
Step 1: Get a PAT Key
You need a PAT (Personal Access Token) to authenticate your connection to the Clarifai platform. You can generate one on your personal settings page under the Security section.
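To keep the token out of your source code, you can store it in an environment variable and read it at runtime. The sketch below assumes a variable named CLARIFAI_PAT, which is the name most Clarifai examples use; adjust it if you prefer a different convention.
import os

# Assumes you exported the token first, e.g.: export CLARIFAI_PAT="YOUR_PAT_HERE"
pat = os.environ.get("CLARIFAI_PAT")
if not pat:
    raise RuntimeError("Set the CLARIFAI_PAT environment variable before running this script.")
print("PAT loaded (first 4 characters):", pat[:4] + "...")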
Step 2: Install Your Preferred Client
- Python SDK
- CLI
- Python (gRPC)
# Python SDK (the CLI ships with the same package)
pip install --upgrade clarifai
# CLI
pip install --upgrade clarifai
# Python (gRPC)
python -m pip install clarifai-grpc
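As an optional sanity check that the installation worked, you can try importing both packages; this assumes you installed the SDK and the gRPC client as shown above.
# Optional sanity check: both imports should succeed after installation.
import clarifai        # Python SDK (also provides the clarifai CLI)
import clarifai_grpc   # low-level gRPC client

print("Clarifai packages imported successfully")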
Step 3: Send an API Request
For this example, let's use the Llama-3.2-3B-Instruct model to generate text based on a given prompt.
- Python SDK
- CLI
- cURL
- JavaScript (REST)
- Python (gRPC)
from clarifai.client.model import Model

prompt = "What is the future of AI?"
model_url = "https://clarifai.com/meta/Llama-3/models/Llama-3_2-3B-Instruct"

# Run the prediction and print the generated text
model_prediction = Model(url=model_url, pat="YOUR_PAT_HERE").predict_by_bytes(prompt.encode(), input_type="text")
print(model_prediction.outputs[0].data.text.raw)
# Log in to the Clarifai platform with the CLI first: https://docs.clarifai.com/getting-started/api-overview/cli
clarifai model predict --model_url https://clarifai.com/meta/Llama-3/models/Llama-3_2-3B-Instruct --bytes "What is the future of AI?" --input_type text
curl -X POST "https://api.clarifai.com/v2/users/meta/apps/Llama-3/models/Llama-3_2-3B-Instruct/versions/52528868e11d431fa0450f00b22af18c/outputs" \
-H "Authorization: Key YOUR_PAT_HERE" \
-H "Content-Type: application/json" \
-d '{
"inputs": [
{
"data": {
"text": {
"raw": "What is the future of AI?"
}
}
}
]
}'
<!--index.html file-->
<script>
////////////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, user and app ID, model details, and the raw
// text we want as a prompt. Change these strings to run your own example.
///////////////////////////////////////////////////////////////////////////////////////////////////
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = 'YOUR_PAT_HERE';
// Specify the correct user_id/app_id pairings
// Since you're making inferences outside your app's scope
const USER_ID = 'meta';
const APP_ID = 'Llama-3';
// Change these to whatever model and text you want to use
const MODEL_ID = 'Llama-3_2-3B-Instruct';
const MODEL_VERSION_ID = '52528868e11d431fa0450f00b22af18c';
const RAW_TEXT = 'What is the future of AI?';
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
const raw = JSON.stringify({
"user_app_id": {
"user_id": USER_ID,
"app_id": APP_ID
},
"inputs": [
{
"data": {
"text": {
"raw": RAW_TEXT
}
}
}
]
});
const requestOptions = {
method: 'POST',
headers: {
'Accept': 'application/json',
'Authorization': 'Key ' + PAT
},
body: raw
};
// NOTE: MODEL_VERSION_ID is optional, you can also call prediction with the MODEL_ID only
// https://api.clarifai.com/v2/models/{YOUR_MODEL_ID}/outputs
// this will default to the latest version_id
fetch("https://api.clarifai.com/v2/models/" + MODEL_ID + "/versions/" + MODEL_VERSION_ID + "/outputs", requestOptions)
.then((response) => {
return response.json();
})
.then((data) => {
if(data.status.code != 10000) console.log(data.status);
else console.log(data['outputs'][0]['data']['text']['raw']);
}).catch(error => console.log('error', error));
</script>
######################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the raw
# text we want as a prompt. Change these strings to run your own example.
######################################################################################################
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = 'meta'
APP_ID = 'Llama-3'
# Change these to whatever model and text you want to use
MODEL_ID = 'Llama-3_2-3B-Instruct'
MODEL_VERSION_ID = '52528868e11d431fa0450f00b22af18c'
RAW_TEXT = 'What is the future of AI?'
############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
metadata = (('authorization', 'Key ' + PAT),)
userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)
post_model_outputs_response = stub.PostModelOutputs(
service_pb2.PostModelOutputsRequest(
user_app_id=userDataObject, # The userDataObject is created in the overview and is required when using a PAT
model_id=MODEL_ID,
version_id=MODEL_VERSION_ID, # This is optional. Defaults to the latest model version
inputs=[
resources_pb2.Input(
data=resources_pb2.Data(
text=resources_pb2.Text(
raw=RAW_TEXT
)
)
)
]
),
metadata=metadata
)
if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
print(post_model_outputs_response.status)
raise Exception(f"Post model outputs failed, status: {post_model_outputs_response.status.description}")
# Since we have one input, one output will exist here
output = post_model_outputs_response.outputs[0]
print("Completion:\n")
print(output.data.text.raw)
Output Example
The future of AI is a topic of much debate and speculation. While it's difficult to predict exactly what the future holds, here are some potential trends and developments that could shape the future of AI:
1. **Increased Integration with Other Technologies**: AI is likely to become even more integrated with other technologies such as blockchain, the Internet of Things (IoT), and 5G networks. This could lead to new applications and use cases that we cannot yet imagine.
2. **Advances in Explainability and Transparency**: As AI becomes more pervasive, there will be a growing need for explainability and transparency. This could involve the development of new techniques for interpreting and understanding AI decision-making processes.
3. **Rise of Edge AI**: Edge AI refers to the processing of AI tasks at the edge of the network, rather than in a centralized cloud. This could enable faster and more efficient AI processing, particularly in applications such as autonomous vehicles and smart cities.
4. **Increased Focus on Ethics and Fairness**: As AI becomes more influential, there will be a growing need to address issues of ethics and fairness. This could involve the development of new AI systems that are designed to be more transparent, explainable, and equitable.
5. **Potential for AI to Disrupt Traditional Industries**: AI has the potential to disrupt traditional industries such as healthcare, finance, and education. This could lead to significant job displacement and the need for workers to adapt to new roles and responsibilities.
6. **Advances in Natural Language Processing (NLP)**: NLP is a key area of research in AI, and it's likely that we'll see significant advances in this area over the coming years. This could enable AI systems to better understand and respond to human language, leading to more natural and intuitive interfaces.
7. **Increased Use of Reinforcement Learning**: Reinforcement learning is a type of machine learning that involves training AI systems through trial and error. This could enable AI systems to learn more efficiently and effectively, particularly in applications such as robotics and game playing.
8. **Potential for AI to Enhance Human Capabilities**: AI has the potential to enhance human capabilities, particularly in areas such as cognitive abilities and physical abilities. This could lead to significant improvements in productivity and quality of life.
Some potential timelines for these developments include:
* **2025**: Edge AI becomes more widespread, enabling faster and more efficient AI processing.
* **2030**: AI systems become more integrated with other technologies such as blockchain and IoT.
* **2035**: Advances in NLP lead
Congratulations! You've just gotten started with the Clarifai platform.