Text as Input
Learn how to perform inference with text as input using Clarifai SDKs
Unlock the potential of Clarifai's state-of-the-art text-based AI features to elevate your applications with accuracy and efficiency. Dive into a comprehensive suite of tools designed to simplify the integration of Clarifai's AI capabilities, empowering developers to build text-driven applications across various domains. The robust, developer-friendly SDKs streamline the incorporation of advanced text-based AI models, making it easier than ever to implement powerful natural language processing solutions.
Text Classifier
Empower your applications with text classification models using Clarifai's Predict API for Text. By providing input text to your preferred classification model, you can gain valuable insights into the content's nature. This API offers flexibility, allowing you to provide data through URLs or files for seamless text classification.
The file size of each text input should be less than 20 MB.
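As a quick guard against that limit, you can check a file's size before sending it (a minimal sketch; `within_size_limit` is a hypothetical helper, not part of the SDK):

```python
import os

MAX_TEXT_BYTES = 20 * 1024 * 1024  # 20 MB input limit

def within_size_limit(file_path: str) -> bool:
    """Return True if the file is small enough to send as a text input."""
    return os.path.getsize(file_path) < MAX_TEXT_BYTES
```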
- Python
- Typescript
from clarifai.client.model import Model
# Your PAT (Personal Access Token) can be found in the Account's Security section
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
#USER_ID = "nlptownres"
#APP_ID = "text-classification"
# Text sentiment analysis with three classes: positive, negative, and neutral.
# You can set the model using model URL or model ID.
# Change these to whatever model you want to use
# eg : MODEL_ID = 'sentiment-analysis-twitter-roberta-base'
# You can also set a particular model version by specifying the version ID
# eg: MODEL_VERSION_ID = 'aa7f35c01e0642fda5cf400f543e7c40'
# Model class objects can be initialised by providing the model URL, or by defining the respective user_id, app_id, and model_id
# eg : model = Model(user_id="clarifai", app_id="main", model_id=MODEL_ID)
model_url = "https://clarifai.com/erfan/text-classification/models/sentiment-analysis-twitter-roberta-base"
# The predict API offers the flexibility to generate predictions for data provided via URL, file path, or bytes.
# Example for prediction through Bytes:
# model_prediction = model.predict_by_bytes(input_bytes, input_type="text")
# Example for prediction through URL:
# model_prediction = Model(model_url).predict_by_url(URL, input_type="text")
file_path = "datasets/upload/data/text_files/positive/0_9.txt"
model_prediction = Model(url=model_url, pat="YOUR_PAT").predict_by_filepath(
file_path, input_type="text"
)
# Get the output
for concept in model_prediction.outputs[0].data.concepts:
print(f"concept: {concept.name:<20} confidence: {round(concept.value, 3)}")
Output
concept: LABEL_0 confidence: 0.605
concept: LABEL_1 confidence: 0.306
concept: LABEL_2 confidence: 0.089
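The generic `LABEL_*` names come from the underlying RoBERTa model. For Twitter sentiment models of this family they conventionally map to negative/neutral/positive, but verify the mapping against the model's description on Clarifai before relying on it. A small helper for display (hypothetical, not part of the SDK):

```python
# Assumed mapping -- confirm against the model card before relying on it.
LABEL_NAMES = {"LABEL_0": "negative", "LABEL_1": "neutral", "LABEL_2": "positive"}

def readable_concepts(concepts):
    """Replace generic LABEL_* concept names with human-readable ones."""
    return [(LABEL_NAMES.get(c.name, c.name), round(c.value, 3)) for c in concepts]
```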
import { Model } from "clarifai-nodejs";
import path from "path";
/**
Your PAT (Personal Access Token) can be found in the Account's Security section
Specify the correct userId/appId pairings
Since you're making inferences outside your app's scope
USER_ID = "nlptownres"
APP_ID = "text-classification"
You can set the model using model URL or model ID.
Change these to whatever model you want to use
eg : MODEL_ID = "sentiment-analysis-twitter-roberta-base"
You can also set a particular model version by specifying the version ID
eg: MODEL_VERSION_ID = "aa7f35c01e0642fda5cf400f543e7c40"
Model class objects can be initialised by providing its URL or also by defining respective userId, appId and modelId
eg :
const model = new Model({
authConfig: {
userId: "clarifai",
appId: "main",
pat: process.env.CLARIFAI_PAT,
},
modelId: MODEL_ID,
});
*/
const modelUrl =
"https://clarifai.com/erfan/text-classification/models/sentiment-analysis-twitter-roberta-base";
/**
The predict API gives flexibility to generate predictions for data provided through URL, Filepath and bytes format.
Example for prediction through Bytes:
const modelPrediction = await model.predictByBytes({
inputBytes,
inputType
});
Example for prediction through Filepath:
const modelPrediction = await model.predictByFilepath({
filepath,
inputType
});
*/
const filepath = path.resolve(__dirname, "../../../assets/sample.txt");
const model = new Model({
url: modelUrl,
authConfig: {
pat: process.env.CLARIFAI_PAT,
},
});
const modelPrediction = await model.predictByFilepath({
filepath,
inputType: "text",
});
// Get the output
console.log(
modelPrediction?.[modelPrediction.length - 1]?.data?.conceptsList,
);
Text Generation Using LLM
Empower your applications with dynamic text creation using the robust capabilities of the Clarifai Predict API. This API leverages cutting-edge text generation models to generate textual content dynamically based on user-defined prompts, providing a versatile and powerful tool for various applications.
- Python
- Typescript
from clarifai.client.model import Model
# Your PAT (Personal Access Token) can be found in the Account's Security section
prompt = "What’s the future of AI?"
# You can set the model using model URL or model ID.
model_url = "https://clarifai.com/openai/chat-completion/models/GPT-4"
# Model Predict
model_prediction = Model(url=model_url, pat="YOUR_PAT").predict_by_bytes(prompt.encode(), input_type="text")
print(model_prediction.outputs[0].data.text.raw)
Output
The future of AI is vast and holds immense potential. Here are a few possibilities:
1. Enhanced Personalization: AI will be able to understand and predict user preferences with increasing accuracy. This will allow for highly personalized experiences, from product recommendations to personalized healthcare.
2. Automation: AI will continue to automate routine tasks, freeing up time for individuals to focus on more complex problems. This could be in any field, from manufacturing to customer service.
3. Advanced Data Analysis: AI will be able to analyze and interpret large amounts of data more efficiently. This could lead to significant breakthroughs in fields like climate science, medicine, and economics.
4. AI in Healthcare: AI is expected to revolutionize healthcare, from predicting diseases before symptoms appear, to assisting in surgeries, to personalized treatment plans.
5. Improved AI Ethics: As AI becomes more integral to our lives, there will be an increased focus on ensuring it is used ethically and responsibly. This could lead to advancements in AI that are more transparent, fair, and accountable.
6. General AI: Perhaps the most exciting (and daunting) prospect is the development of Artificial General Intelligence (AGI) - AI systems that possess the ability to understand, learn, adapt, and implement knowledge across a wide array of tasks, much like a human brain.
Remember, while AI holds great promise, it's also important to consider the challenges and implications it brings, such as job displacement due to automation, privacy concerns, and ethical considerations.
import { Model } from "clarifai-nodejs";
// Your PAT (Personal Access Token) can be found in the Account's Security section
const prompt = "What’s the future of AI?";
// You can set the model using model URL or model ID.
const modelUrl = "https://clarifai.com/openai/chat-completion/models/GPT-4";
// Model Predict
const model = new Model({
url: modelUrl,
authConfig: {
pat: process.env.CLARIFAI_PAT,
},
});
const modelPrediction = await model.predictByBytes({
inputBytes: Buffer.from(prompt),
inputType: "text",
});
console.log(modelPrediction?.[0]?.data?.text?.raw);
Set Inference Parameters
When making predictions using LLMs on our platform, some models offer the ability to specify various inference parameters to influence their output. These parameters control the behavior of the model during the generation process, affecting aspects like creativity, coherence, and the diversity of the generated text.
You can learn more about them here.
- Python
from clarifai.client.model import Model
# Your PAT (Personal Access Token) can be found in the Account's Security section
prompt = "What’s the future of AI?"
# You can set inference parameters
prompt_template = '''<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>'''
system_prompt = "You are a helpful assistant"
inference_params = dict(temperature=0.7, max_tokens=200, top_k=50, top_p=0.95, prompt_template=prompt_template, system_prompt=system_prompt)
# You can set the model using model URL or model ID.
model_url = "https://clarifai.com/meta/Llama-3/models/llama-3_1-8b-instruct"
# Model Predict
model_prediction = Model(url=model_url, pat="YOUR_PAT").predict_by_bytes(prompt.encode(), input_type="text", inference_params=inference_params)
print(model_prediction.outputs[0].data.text.raw)
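To build intuition for the temperature parameter: it rescales the model's logits before sampling, so lower values sharpen the output distribution and higher values flatten it. A standalone illustration (not part of the SDK):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities, scaled by a sampling temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Lower temperature concentrates probability on the top logit.
sharp = softmax_with_temperature([2.0, 1.0, 0.5], temperature=0.2)
flat = softmax_with_temperature([2.0, 1.0, 0.5], temperature=2.0)
```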
Text Classifier Using LLM
Dive into the realm of text classification with Clarifai's Predict API, where you can leverage Large Language Models (LLMs) to categorize text based on carefully constructed prompts.
- Python
- Typescript
from clarifai.client.model import Model
prompt = """Classes: [`positive`, `negative`, `neutral`]
Text: Sunny weather makes me happy.
Classify the text into one of the above classes."""
# Model Predict
model_prediction = Model("https://clarifai.com/openai/chat-completion/models/GPT-4").predict_by_bytes(prompt.encode(), input_type="text")
print(model_prediction.outputs[0].data.text.raw)
Output
`positive`
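Because an LLM classifier returns free-form text (here wrapped in backticks), it is worth normalizing the response back to one of your classes before using it downstream. A minimal sketch (the helper and class list are illustrative):

```python
CLASSES = ["positive", "negative", "neutral"]

def normalize_label(raw_text: str, classes=CLASSES, default="unknown"):
    """Strip backticks/whitespace and match the response against known classes."""
    cleaned = raw_text.strip().strip("`").lower()
    for cls in classes:
        if cls in cleaned:
            return cls
    return default
```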
import { Model } from "clarifai-nodejs";
const prompt = `Classes: ['positive', 'negative', 'neutral']
Text: Sunny weather makes me happy.
Classify the text into one of the above classes.`;
// Model Predict
const model = new Model({
url: "https://clarifai.com/openai/chat-completion/models/GPT-4",
authConfig: {
pat: process.env.CLARIFAI_PAT,
},
});
const modelPrediction = await model.predictByBytes({
inputBytes: Buffer.from(prompt),
inputType: "text",
});
console.log(modelPrediction?.[0]?.data?.text?.raw);
Text to Image
Leverage the power of the Predict API to seamlessly transform textual input into vibrant and expressive images. With the Text to Image models, you can effortlessly generate visually compelling content by providing text as input.
- Python
- Typescript
from clarifai.client.model import Model
import numpy as np
import cv2
import matplotlib.pyplot as plt
# Your PAT (Personal Access Token) can be found in the Account's Security section
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
#USER_ID = "stability-ai"
#APP_ID = "stable-diffusion-2"
# You can set the model using model URL or model ID.
# Change these to whatever model you want to use
# eg : MODEL_ID = 'stable-diffusion-xl'
# You can also set a particular model version by specifying the version ID
# eg: MODEL_VERSION_ID = '0c919cc1edfc455dbc96207753f178d7'
# Model class objects can be initialised by providing the model URL, or by defining the respective user_id, app_id, and model_id
# eg : model = Model(user_id="clarifai", app_id="main", model_id=MODEL_ID)
input_text = b"floor plan for 2 bedroom kitchen house"
# The predict API offers the flexibility to generate predictions for data provided via URL, file path, or bytes.
# Example for prediction through URL:
# model_prediction = model.predict_by_url(url, input_type="text")
# Example for prediction through Filepath:
# model_prediction = Model(model_url).predict_by_filepath(filepath, input_type="text")
# Image Generation using Stable Diffusion XL
model_url = "https://clarifai.com/stability-ai/stable-diffusion-2/models/stable-diffusion-xl"
model_prediction = Model(url=model_url, pat="YOUR_PAT").predict_by_bytes(
input_text, input_type="text"
)
# Base64 image to numpy array
im_b = model_prediction.outputs[0].data.image.base64
image_np = np.frombuffer(im_b, np.uint8)
img_np = cv2.imdecode(image_np, cv2.IMREAD_COLOR)
# Display the image
plt.axis("off")
plt.imshow(img_np[..., ::-1])
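The `data.image.base64` field holds the raw encoded image bytes (which is why the snippet above decodes it with `np.frombuffer` and `cv2.imdecode`), so to persist the result rather than display it, you can write those bytes straight to disk without re-encoding. A minimal sketch with a hypothetical helper:

```python
def save_image_bytes(image_bytes: bytes, out_path: str) -> int:
    """Write raw encoded image bytes to disk; returns the number of bytes written."""
    with open(out_path, "wb") as f:
        return f.write(image_bytes)

# e.g. save_image_bytes(model_prediction.outputs[0].data.image.base64, "generated.png")
```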
Output
import { Model } from "clarifai-nodejs";
import fs from "fs";
/**
Your PAT (Personal Access Token) can be found in the Account's Security section
Specify the correct userId/appId pairings
Since you're making inferences outside your app's scope
USER_ID = "stability-ai"
APP_ID = "stable-diffusion-2"
You can set the model using model URL or model ID.
Change these to whatever model you want to use
eg : MODEL_ID = "stable-diffusion-xl"
You can also set a particular model version by specifying the version ID
eg: MODEL_VERSION_ID = "0c919cc1edfc455dbc96207753f178d7"
Model class objects can be initialised by providing its URL or also by defining respective userId, appId and modelId
eg :
const model = new Model({
authConfig: {
userId: "clarifai",
appId: "main",
pat: process.env.CLARIFAI_PAT,
},
modelId: MODEL_ID,
});
*/
const inputText: Buffer = Buffer.from("floor plan for 2 bedroom kitchen house");
/**
The predict API gives flexibility to generate predictions for data provided through URL, Filepath and bytes format.
Example for prediction through Bytes:
const modelPrediction = await model.predictByBytes({
inputBytes,
inputType
});
Example for prediction through Filepath:
const modelPrediction = await model.predictByFilepath({
filepath,
inputType
});
*/
// Image Generation using Stable Diffusion XL
const modelUrl =
"https://clarifai.com/stability-ai/stable-diffusion-2/models/stable-diffusion-xl";
const model = new Model({
url: modelUrl,
authConfig: { pat: process.env.CLARIFAI_PAT },
});
const modelPrediction = await model.predictByBytes({
inputBytes: inputText,
inputType: "text",
});
// Base64 image to numpy array
const outputBase64 = modelPrediction?.[0]?.data?.image?.base64 ?? "";
fs.writeFileSync("image.png", outputBase64, "base64");
Text to Audio
The Text to Audio models, powered by our Predict API, seamlessly transform provided textual content into an audio file using advanced speech synthesis. This capability allows users to effortlessly convert written text into a natural and expressive audio experience.
- Python
- Typescript
from clarifai.client.model import Model
# Your PAT (Personal Access Token) can be found in the Account's Security section
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
#USER_ID = "eleven-labs"
#APP_ID = "audio-generation"
# You can set the model using model URL or model ID.
# Change these to whatever model you want to use
# eg : MODEL_ID = 'speech-synthesis'
# You can also set a particular model version by specifying the version ID
# eg: MODEL_VERSION_ID = 'f588d92c044d4487a38c8f3d7a3b0eb2'
# Model class objects can be initialised by providing the model URL, or by defining the respective user_id, app_id, and model_id
# eg : model = Model(user_id="clarifai", app_id="main", model_id=MODEL_ID)
input_text = "Hello, How are you doing today!"
# The predict API offers the flexibility to generate predictions for data provided via URL, file path, or bytes.
# Example for prediction through URL:
# model_prediction = model.predict_by_url(url, input_type="text")
# Example for prediction through Filepath:
# model_prediction = Model(model_url).predict_by_filepath(filepath, input_type="text")
model_url = "https://clarifai.com/eleven-labs/audio-generation/models/speech-synthesis"
model_prediction = Model(url=model_url, pat="YOUR_PAT").predict_by_bytes(
    input_text.encode(), input_type="text"
)
# Save the audio file
with open("output_audio.wav", mode="wb") as f:
f.write(model_prediction.outputs[0].data.audio.base64)
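If the model returns WAV data (as the `.wav` extension above assumes), a quick sanity check on the saved bytes is to look for the RIFF/WAVE header. A minimal sketch with a hypothetical helper:

```python
def looks_like_wav(audio_bytes: bytes) -> bool:
    """Check for the RIFF....WAVE header that prefixes WAV data."""
    return audio_bytes[:4] == b"RIFF" and audio_bytes[8:12] == b"WAVE"
```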
import { Model } from "clarifai-nodejs";
import fs from "fs";
/**
Your PAT (Personal Access Token) can be found in the Account's Security section
Specify the correct userId/appId pairings
Since you're making inferences outside your app's scope
USER_ID = "eleven-labs"
APP_ID = "audio-generation"
You can set the model using model URL or model ID.
Change these to whatever model you want to use
eg : MODEL_ID = 'speech-synthesis'
You can also set a particular model version by specifying the version ID
eg: MODEL_VERSION_ID = "f588d92c044d4487a38c8f3d7a3b0eb2"
Model class objects can be initialised by providing its URL or also by defining respective userId, appId and modelId
eg :
const model = new Model({
authConfig: {
userId: "clarifai",
appId: "main",
pat: process.env.CLARIFAI_PAT,
},
modelId: MODEL_ID,
});
*/
const inputText = Buffer.from("Hello, How are you doing today!");
/**
The predict API gives flexibility to generate predictions for data provided through URL, Filepath and bytes format.
Example for prediction through Bytes:
const modelPrediction = await model.predictByBytes({
inputBytes,
inputType
});
Example for prediction through Filepath:
const modelPrediction = await model.predictByFilepath({
filepath,
inputType
});
*/
const modelUrl =
"https://clarifai.com/eleven-labs/audio-generation/models/speech-synthesis";
const model = new Model({
url: modelUrl,
authConfig: {
pat: process.env.CLARIFAI_PAT,
},
});
const modelPrediction = await model.predictByBytes({
inputType: "text",
inputBytes: inputText,
});
// Save the audio file
const outputBase64 = modelPrediction?.[0]?.data?.audio?.base64 ?? "";
fs.writeFileSync("audio.wav", outputBase64, "base64");
Text Embedder
The Predict API offers a versatile set of capabilities, including the conversion of text into embedding vectors through the Text Embedder model. This powerful functionality serves various purposes, making it an invaluable tool for applications such as Semantic Similarity Analysis, Content Recommendation Systems, Anomaly Detection, and Document Clustering.
- Python
- Typescript
from clarifai.client.model import Model
# Your PAT (Personal Access Token) can be found in the Account's Security section
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
#USER_ID = "cohere"
#APP_ID = "embed"
# You can set the model using model URL or model ID.
# Change these to whatever model you want to use
# eg : MODEL_ID = 'cohere-embed-english-v3_0'
# You can also set a particular model version by specifying the version ID
# eg: MODEL_VERSION_ID = 'model_version'
# Model class objects can be initialised by providing the model URL, or by defining the respective user_id, app_id, and model_id
# eg : model = Model(user_id="clarifai", app_id="main", model_id=MODEL_ID)
input_text = """In India Green Revolution commenced in the early 1960s that led to an increase in food grain production, especially in Punjab, Haryana, and Uttar Pradesh. Major milestones in this undertaking were the development of high-yielding varieties of wheat. The Green revolution is revolutionary in character due to the introduction of new technology, new ideas, the new application of inputs like HYV seeds, fertilizers, irrigation water, pesticides, etc. As all these were brought suddenly and spread quickly to attain dramatic results thus it is termed as a revolution in green agriculture.
"""
# The predict API offers the flexibility to generate predictions for data provided via URL, file path, or bytes.
# Example for prediction through URL:
# model_prediction = model.predict_by_url(URL ,input_type="text")
# Example for prediction through Filepath:
# model_prediction = Model(model_url).predict_by_filepath(image_filepath, input_type="text")
model_url = "https://clarifai.com/cohere/embed/models/cohere-embed-english-v3_0"
model_prediction = Model(url=model_url, pat="YOUR_PAT").predict_by_bytes(
    input_text.encode(), input_type="text"
)
embeddings = model_prediction.outputs[0].data.embeddings[0].vector
num_dimensions = model_prediction.outputs[0].data.embeddings[0].num_dimensions
print(embeddings[:10])
Output
[-0.02596100978553295,
0.023946398869156837,
-0.07173235714435577,
0.032294824719429016,
0.020313993096351624,
-0.026998838409781456,
0.008684193715453148,
-0.016651064157485962,
-0.012316598556935787,
0.00042328768176957965]
import { Model } from "clarifai-nodejs";
/**
Your PAT (Personal Access Token) can be found in the Account's Security section
Specify the correct userId/appId pairings
Since you're making inferences outside your app's scope
USER_ID = "cohere"
APP_ID = "embed"
You can set the model using model URL or model ID.
Change these to whatever model you want to use
eg : MODEL_ID = 'cohere-embed-english-v3_0'
You can also set a particular model version by specifying the version ID
eg: MODEL_VERSION_ID = "model_version"
Model class objects can be initialised by providing its URL or also by defining respective userId, appId and modelId
eg :
const model = new Model({
authConfig: {
userId: "clarifai",
appId: "main",
pat: process.env.CLARIFAI_PAT,
},
modelId: MODEL_ID,
});
*/
const inputText = Buffer.from(
`In India Green Revolution commenced in the early 1960s that led to an increase in food grain production, especially in Punjab, Haryana, and Uttar Pradesh. Major milestones in this undertaking were the development of high-yielding varieties of wheat. The Green revolution is revolutionary in character due to the introduction of new technology, new ideas, the new application of inputs like HYV seeds, fertilizers, irrigation water, pesticides, etc. As all these were brought suddenly and spread quickly to attain dramatic results thus it is termed as a revolution in green agriculture.`,
);
/**
The predict API gives flexibility to generate predictions for data provided through URL, Filepath and bytes format.
Example for prediction through Bytes:
const modelPrediction = await model.predictByBytes({
inputBytes,
inputType
});
Example for prediction through Filepath:
const modelPrediction = await model.predictByFilepath({
filepath,
inputType
});
*/
const modelUrl =
"https://clarifai.com/cohere/embed/models/cohere-embed-english-v3_0";
const model = new Model({
url: modelUrl,
authConfig: {
pat: process.env.CLARIFAI_PAT,
},
});
const modelPrediction = await model.predictByBytes({
inputBytes: inputText,
inputType: "text",
});
const embeddings =
modelPrediction?.[0]?.data?.embeddingsList?.[0]?.vectorList ?? [];
// const numDimensions =
// modelPrediction?.[0]?.data?.embeddingsList?.[0]?.numDimensions;
console.log(embeddings.slice(0, 10));
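A common next step with these vectors is semantic similarity analysis: comparing two embeddings by cosine similarity. A standalone sketch, independent of the SDK:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors score 1.0; orthogonal vectors score 0.0.
```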