
Text

Make predictions on text inputs


To get predictions for a given text input, you need to supply the text along with the specific model you want to receive predictions from. You can supply the text via a publicly accessible URL, a local text file, or as raw text.

The file size of each text input should be less than 20MB.
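
If you are sending a local file, a quick size check before uploading can help you catch inputs that exceed this limit. Below is a minimal sketch using Python's standard library; the file path is just a placeholder.

import os

TEXT_FILE_LOCATION = 'YOUR_TEXT_FILE_LOCATION_HERE'  # placeholder path
MAX_TEXT_SIZE_BYTES = 20 * 1024 * 1024  # 20MB limit noted above

# Check the file size before sending it for prediction
size_bytes = os.path.getsize(TEXT_FILE_LOCATION)
if size_bytes >= MAX_TEXT_SIZE_BYTES:
    raise ValueError(f"Text input is {size_bytes} bytes; it must be less than 20MB")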

Specify the model you want to use for prediction with the MODEL_ID parameter.

tip

Most of our models now have new versions that support inference hyperparameters like temperature, top_k, etc. You can learn how to configure them here.
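
For model versions that support them, inference parameters are passed in the model.model_version.output_info.params field of the request using a protobuf Struct, the same pattern used in the text-to-speech example later on this page. Below is a minimal sketch; the parameter names and supported values depend on the specific model, so treat them as placeholders and check the model's description page.

from google.protobuf.struct_pb2 import Struct

params = Struct()
params.update(
    {
        "temperature": 0.7,  # assumed parameter name; check the model's description page
        "top_k": 40,         # assumed parameter name; check the model's description page
    }
)

# Later, attach the params to the PostModelOutputsRequest:
# model=resources_pb2.Model(
#     model_version=resources_pb2.ModelVersion(
#         output_info=resources_pb2.OutputInfo(params=params)
#     )
# )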

info

The initialization code used in the following examples is outlined in detail on the client installation page.

Text Classification

Input: Text

Output: Concepts

Text classification is the process of categorizing text documents into predefined categories based on their content. This task is typically accomplished using machine learning models trained on labeled datasets, where each document is associated with a specific category.

These models learn patterns and features in the text data during training, enabling them to classify new, unseen documents into the relevant categories effectively.

Predict via URL

Below is an example of how you would use the multilingual-uncased-sentiment model to make predictions on a passage of text hosted on the web.

######################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the URL of
# the text we want as an input. Change these strings to run your own example.
######################################################################################################

# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = 'nlptownres'
APP_ID = 'text-classification'
# Change these to whatever model and text URL you want to use
MODEL_ID = 'multilingual-uncased-sentiment'
MODEL_VERSION_ID = '29d5fef0229a4936a607380d7ef775dd'
TEXT_FILE_URL = 'https://samples.clarifai.com/negative_sentence_12.txt'

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,  # The userDataObject is created in the overview and is required when using a PAT
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,  # This is optional. Defaults to the latest model version
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        url=TEXT_FILE_URL
                    )
                )
            )
        ]
    ),
    metadata=metadata
)
if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception("Post model outputs failed, status: " + post_model_outputs_response.status.description)

# Since we have one input, one output will exist here
output = post_model_outputs_response.outputs[0]

print("Predicted concepts:")
for concept in output.data.concepts:
    print("%s %.2f" % (concept.name, concept.value))

# Uncomment this line to print the raw output
#print(output)
Text Output Example
Predicted concepts:
3-stars 0.25
2-stars 0.23
1-star 0.20
4-stars 0.17
5-stars 0.14
Raw Output Example
id: "479d7bb7e0e4415da8a265ceb10d4c7c"
status {
code: SUCCESS
description: "Ok"
}
created_at {
seconds: 1701798270
nanos: 281180166
}
model {
id: "multilingual-uncased-sentiment"
name: "multilingual-uncased-sentiment"
created_at {
seconds: 1656469244
nanos: 44961000
}
app_id: "text-classification"
model_version {
id: "29d5fef0229a4936a607380d7ef775dd"
created_at {
seconds: 1656469244
nanos: 60443000
}
status {
code: MODEL_TRAINED
description: "Model is trained and ready"
}
visibility {
gettable: PUBLIC
}
app_id: "text-classification"
user_id: "nlptownres"
metadata {
fields {
key: "Model version logs zipped"
value {
string_value: "https://s3.amazonaws.com/clarifai-temp/prod/29d5fef0229a4936a607380d7ef775dd.zip"
}
}
}
}
user_id: "nlptownres"
model_type_id: "text-classifier"
visibility {
gettable: PUBLIC
}
modified_at {
seconds: 1661364520
nanos: 417454000
}
task: "text-classification"
workflow_recommended {
}
}
input {
id: "b188010be5b84bafa330092327f4f4c0"
data {
text {
url: "https://samples.clarifai.com/negative_sentence_12.txt"
}
}
}
data {
concepts {
id: "3-stars"
name: "3-stars"
value: 0.2539905309677124
app_id: "text-classification"
}
concepts {
id: "2-stars"
name: "2-stars"
value: 0.23382391035556793
app_id: "text-classification"
}
concepts {
id: "1-star"
name: "1-star"
value: 0.20093071460723877
app_id: "text-classification"
}
concepts {
id: "4-stars"
name: "4-stars"
value: 0.17351166903972626
app_id: "text-classification"
}
concepts {
id: "5-stars"
name: "5-stars"
value: 0.13774323463439941
app_id: "text-classification"
}
}
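
If you only need the highest-scoring label rather than the full list, you can take the maximum over output.data.concepts from the response above. A small sketch, assuming output is the first entry of post_model_outputs_response.outputs as in the example:

# Pick the concept with the highest prediction value
top_concept = max(output.data.concepts, key=lambda c: c.value)
print("Top concept: %s (%.2f)" % (top_concept.name, top_concept.value))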

Predict via Local Files

Below is an example of how you would provide text inputs via local text files and receive predictions from the multilingual-uncased-sentiment model.

#######################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the location
# of the text we want as an input. Change these strings to run your own example.
#######################################################################################################

# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = 'nlptownres'
APP_ID = 'text-classification'
# Change these to whatever model and text input you want to use
MODEL_ID = 'multilingual-uncased-sentiment'
MODEL_VERSION_ID = '29d5fef0229a4936a607380d7ef775dd'
TEXT_FILE_LOCATION = 'YOUR_TEXT_FILE_LOCATION_HERE'

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

with open(TEXT_FILE_LOCATION, "rb") as f:
    file_bytes = f.read()

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,  # The userDataObject is created in the overview and is required when using a PAT
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,  # This is optional. Defaults to the latest model version
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        raw=file_bytes
                    )
                )
            )
        ]
    ),
    metadata=metadata
)
if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception("Post model outputs failed, status: " + post_model_outputs_response.status.description)

# Since we have one input, one output will exist here.
output = post_model_outputs_response.outputs[0]

print("Predicted concepts:")
for concept in output.data.concepts:
    print("%s %.2f" % (concept.name, concept.value))

# Uncomment this line to print the raw output
#print(output)
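
Since the inputs field of PostModelOutputsRequest is a list, you can also send several text files in a single call and receive one output per input, subject to the API's per-request input limit. Below is a minimal sketch that assumes the same setup as the example above; the file paths are hypothetical.

TEXT_FILE_LOCATIONS = ['review_1.txt', 'review_2.txt']  # hypothetical paths

# Build one Input per local file
inputs = []
for path in TEXT_FILE_LOCATIONS:
    with open(path, "rb") as f:
        inputs.append(
            resources_pb2.Input(
                data=resources_pb2.Data(text=resources_pb2.Text(raw=f.read()))
            )
        )

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,
        inputs=inputs
    ),
    metadata=metadata
)

# One output per input, returned in the same order
for path, out in zip(TEXT_FILE_LOCATIONS, post_model_outputs_response.outputs):
    print(path, [(c.name, round(c.value, 2)) for c in out.data.concepts])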

Predict via Raw Text

Below is an example of how you would provide raw text inputs and receive predictions from the multilingual-uncased-sentiment model.

#########################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the raw text
# we want as an input. Change these strings to run your own example.
########################################################################################################

# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = 'nlptownres'
APP_ID = 'text-classification'
# Change these to whatever model and raw text you want to use
MODEL_ID = 'multilingual-uncased-sentiment'
MODEL_VERSION_ID = '29d5fef0229a4936a607380d7ef775dd'
RAW_TEXT = 'I love your product very much'

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,  # The userDataObject is created in the overview and is required when using a PAT
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,  # This is optional. Defaults to the latest model version
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        raw=RAW_TEXT
                    )
                )
            )
        ]
    ),
    metadata=metadata
)
if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception("Post model outputs failed, status: " + post_model_outputs_response.status.description)

# Since we have one input, one output will exist here
output = post_model_outputs_response.outputs[0]

print("Predicted concepts:")
for concept in output.data.concepts:
    print("%s %.2f" % (concept.name, concept.value))

# Uncomment this line to print the raw output
#print(output)

Text-to-Image Generation

Input: Text

Output: Images

Text-to-image generation involves creating visual images based on textual descriptions. In this field, machine learning models are trained to establish a meaningful connection between textual descriptions and their corresponding visual representations.

Then, when given a textual input, these models can generate images that accurately reflect the content described in the text.

Below is an example of how you would perform text-to-image generation using the Stable Diffusion XL model.

#################################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the prompt text we want
# to provide as an input. Change these strings to run your own example.
#################################################################################################################

# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = 'stability-ai'
APP_ID = 'stable-diffusion-2'
# Change these to whatever model and prompt text you want to use
MODEL_ID = 'stable-diffusion-xl'
MODEL_VERSION_ID = '0c919cc1edfc455dbc96207753f178d7'
RAW_TEXT = 'A penguin watching the sunset.'
# To use a hosted text file, assign the URL variable
# TEXT_FILE_URL = 'https://samples.clarifai.com/negative_sentence_12.txt'
# Or, to use a local text file, assign the location variable
# TEXT_FILE_LOCATION = 'YOUR_TEXT_FILE_LOCATION_HERE'

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

# To use a local text file, uncomment the following lines
# with open(TEXT_FILE_LOCATION, "rb") as f:
#     file_bytes = f.read()

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,  # The userDataObject is created in the overview and is required when using a PAT
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,  # This is optional. Defaults to the latest model version
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        raw=RAW_TEXT
                        # url=TEXT_FILE_URL
                        # raw=file_bytes
                    )
                )
            )
        ]
    ),
    metadata=metadata
)
if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception("Post model outputs failed, status: " + post_model_outputs_response.status.description)

# Since we have one input, one output will exist here
output = post_model_outputs_response.outputs[0].data.image.base64

image_filename = "gen-image.jpg"
with open(image_filename, 'wb') as f:
    f.write(output)

Here is a generated output example:

generated image output example
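
To quickly verify the saved file, you can open it with any image library. Below is a minimal sketch assuming the Pillow package is installed; it simply reads back the gen-image.jpg written by the example above.

from PIL import Image

# Open the image written by the example above and print its format and dimensions
image = Image.open("gen-image.jpg")
print(image.format, image.size)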

Text-to-Speech Generation

Input: Text

Output: Audio

Text-to-speech (TTS) generation involves converting written text into spoken words. A machine learning model is used to synthesize human-like speech from input text, allowing a computer or device to "speak" the provided content.

Below is an example of how you would perform text-to-speech generation using the Speech-synthesis model.

note

In this example, we've used the params.update() method to fine-tune various inference parameters that allow us to customize the behavior of the Speech-synthesis model. You can check the various inference parameters you can configure on the model's description page on the Clarifai portal.

#################################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the text we want
# to provide as an input. Change these strings to run your own example.
#################################################################################################################

# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = "eleven-labs"
APP_ID = "audio-generation"
# Change these to whatever model and input text you want to use
MODEL_ID = "speech-synthesis"
MODEL_VERSION_ID = "f588d92c044d4487a38c8f3d7a3b0eb2"
RAW_TEXT = "I love your product very much!"
# To use a hosted text file, assign the URL variable
# TEXT_FILE_URL = "https://samples.clarifai.com/negative_sentence_12.txt"
# Or, to use a local text file, assign the location variable
# TEXT_FILE_LOCATION = "YOUR_TEXT_FILE_LOCATION_HERE"

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf.struct_pb2 import Struct

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

params = Struct()
params.update(
    {
        "model_id": "eleven_multilingual_v1",
        "voice_id": "pNInz6obpgDQGcFmaJgB",
        "stability": 0.4,
        "similarity_boost": 0.7,
    }
)

metadata = (("authorization", "Key " + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

# To use a local text file, uncomment the following lines
# with open(TEXT_FILE_LOCATION, "rb") as f:
#     file_bytes = f.read()

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,  # The userDataObject is created in the overview and is required when using a PAT
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,  # This is optional. Defaults to the latest model version
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        raw=RAW_TEXT
                        # url=TEXT_FILE_URL
                        # raw=file_bytes
                    )
                )
            )
        ],
        model=resources_pb2.Model(
            model_version=resources_pb2.ModelVersion(
                output_info=resources_pb2.OutputInfo(params=params)
            )
        ),
    ),
    metadata=metadata,
)
if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception("Post model outputs failed, status: " + post_model_outputs_response.status.description)

# Uncomment this line to print the full Response JSON
# print(post_model_outputs_response)

# Since we have one input, one output will exist here
output = post_model_outputs_response.outputs[0].data.audio.base64

audio_filename = "audio_file.wav"

with open(audio_filename, "wb") as f:
    f.write(output)
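
The returned audio bytes are written to audio_file.wav. Assuming the model returns WAV-encoded audio (check the model's description page for the exact output format), you can inspect the saved file with Python's built-in wave module:

import wave

# Inspect the audio file written by the example above
with wave.open("audio_file.wav", "rb") as wav:
    duration_seconds = wav.getnframes() / wav.getframerate()
    print(f"Channels: {wav.getnchannels()}, sample rate: {wav.getframerate()} Hz, "
          f"duration: {duration_seconds:.2f} s")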

Use Third-Party API Keys

info

The ability to use third-party API keys is currently exclusively available to Enterprise users. Learn more here.

For the third-party models we've wrapped into our platform, like those provided by OpenAI, Anthropic, Cohere, and others, you can also choose to use their API keys in addition to the default Clarifai keys.

This Bring Your Own Key (BYOK) flexibility allows you to integrate your preferred services and APIs into your workflow, enhancing the versatility of our platform.

Here is an example of how to add an OpenAI API key for the DALL-E 3 model for text-to-image tasks.

curl -X POST "https://api.clarifai.com/v2/users/openai/apps/dall-e/models/dall-e-3/versions/dc9dcb6ee67543cebc0b9a025861b868/outputs" \
  -H "Authorization: Key YOUR_PAT_HERE" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "data": {
          "text": {
            "raw": "An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula"
          }
        }
      }
    ],
    "model": {
      "model_version": {
        "output_info": {
          "params": {
            "size": "1024x1024",
            "quality": "hd",
            "api_key": "ADD_THIRD_PARTY_KEY_HERE"
          }
        }
      }
    }
  }'
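
The same third-party key can be supplied from the Python gRPC client by adding it to the params Struct, following the same pattern as the text-to-speech example above. Below is a hedged sketch; the parameter names mirror the cURL example and should be adjusted to the model you're calling.

from google.protobuf.struct_pb2 import Struct
from clarifai_grpc.grpc.api import resources_pb2

params = Struct()
params.update(
    {
        "size": "1024x1024",
        "quality": "hd",
        "api_key": "ADD_THIRD_PARTY_KEY_HERE",  # your own third-party API key
    }
)

model = resources_pb2.Model(
    model_version=resources_pb2.ModelVersion(
        output_info=resources_pb2.OutputInfo(params=params)
    )
)
# Pass `model=model` in the PostModelOutputsRequest, alongside user_app_id,
# model_id, version_id, and inputs, as in the earlier examples.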