Managing Inputs
Upload, list, download, update, or delete inputs
Managing inputs on the Clarifai platform is a streamlined process that covers uploading, updating, and deleting inputs, as well as performing various data processing tasks. Inputs can encompass a wide range of data types, such as images, videos, text, and more.
Whether your inputs are hosted online via URLs, stored locally as file paths, or represented as bytes, our API supports all these formats, ensuring flexibility and ease of use.
API Upload Limits
When uploading data to the Clarifai platform using methods such as upload_from_bytes() or upload_from_url() (illustrated below), your inputs must meet the following conditions; a manual batching sketch follows these lists. Note that these conditions also apply when uploading inputs for inference.
Images
- Each request can include up to 128 image inputs per batch.
- Each image file must be a maximum of 85 megapixels and less than 20MB in size.
- The total batch size (in bytes) for each request must be less than 128MB.
Videos
- Each request can include only 1 video input.
- If uploading via URL, the video can be up to 300MB or 10 minutes long.
- If uploading via direct file upload, the video must be less than 128MB.
Text Files
- Each request can include up to 128 text files per batch.
- Each text file must be less than 20MB.
- The total batch size (in bytes) must be less than 128MB.
Audio Files
- Each request can include up to 128 audio files per batch.
- Each audio file must be less than 20MB in size (suitable for a 48kHz, 60-second, 16-bit recording).
- The total batch size (in bytes) must be less than 128MB.
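If you are batching uploads yourself, one way to stay within these limits is to chunk your inputs so that no request exceeds 128 items. The following is a minimal sketch of that idea, assuming a hypothetical list of image URLs and using the SDK's get_input_from_url() and upload_inputs() helpers; verify the exact signatures against the SDK reference.
from clarifai.client.input import Inputs

# Hypothetical list of image URLs; replace with your own
image_urls = [f"https://example.com/image_{i}.jpg" for i in range(300)]

input_obj = Inputs(user_id="user_id", app_id="test_app", pat="YOUR_PAT")

BATCH_SIZE = 128  # per-request limit for image inputs
for start in range(0, len(image_urls), BATCH_SIZE):
    # Build the input protos for this batch, then upload them in a single request
    batch = [
        Inputs.get_input_from_url(input_id=f"img_{start + i}", image_url=url)
        for i, url in enumerate(image_urls[start:start + BATCH_SIZE])
    ]
    input_obj.upload_inputs(batch)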
You can bypass these limits by using the upload_from_folder() method of the Dataset class, which efficiently handles larger volumes of inputs by automatically batching them while adhering to the upload restrictions. For example, when uploading images in bulk, the method incrementally processes and uploads them in multiple batches, ensuring that each batch contains at most 128 images and does not exceed 128MB in size.
You can also customize the batch_size variable, which allows for concurrent upload of inputs and annotations. For example, if your folder exceeds 128MB, you can set the variable so that each batch contains an appropriate number of images while staying within the 128MB per-batch limit.
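As a reference, a minimal sketch of the folder-based approach might look like the following. The folder path and dataset_id are placeholders, and the arguments shown for upload_from_folder() (folder_path, input_type, labels) as well as the batch_size parameter should be treated as assumptions to verify against the Dataset class reference.
from clarifai.client.dataset import Dataset

# Placeholder IDs and paths; replace with your own
dataset = Dataset(
    user_id="user_id",
    app_id="test_app",
    dataset_id="demo_dataset",
    pat="YOUR_PAT",
    batch_size=32,  # assumed constructor parameter controlling how many inputs go in each batch
)

# Uploads every image in the folder, automatically splitting the work into
# batches that respect the per-request limits described above
dataset.upload_from_folder(
    folder_path="path/to/images",
    input_type="image",
    labels=False,  # set to True if subfolder names should be applied as concept labels
)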
Upload Image Data
Below is an example of how to upload image data.
- Python
- Typescript
from clarifai.client.input import Inputs
img_url = "https://samples.clarifai.com/metro-north.jpg"
input_obj = Inputs(user_id="user_id", app_id="test_app", pat="YOUR_PAT")
# You can also upload data from bytes or a file path:
# Upload from file
# input_obj.upload_from_file(input_id='demo', image_file='image_filepath')
# Upload from bytes
# input_obj.upload_from_bytes(input_id='demo', image_bytes=image)
input_obj.upload_from_url(input_id="demo", image_url=img_url)
Output
2024-01-15 16:38:49 INFO clarifai.client.input: input.py:669
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "a14eda72951b06cd25561381d70ced74"
import { Input } from "clarifai-nodejs";
const imageUrl = "https://samples.clarifai.com/metro-north.jpg";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
await input.uploadFromUrl({
inputId: "demo",
imageUrl,
});
Upload Text Data
Below is an example of how to upload text data.
- Python
- Typescript
from clarifai.client.input import Inputs
input_text = b"Write a tweet on future of AI"
input_obj = Inputs(user_id="user_id", app_id="test_app", pat="YOUR_PAT")
# You can also upload data from a URL or a file path:
# Upload from file
# input_obj.upload_from_file(input_id='text_data', text_file='text_filepath')
# Upload from url
# input_obj.upload_from_url(input_id='text_data', text_url='text_url')
input_obj.upload_from_bytes(input_id="text_data", text_bytes=input_text)
Output
2024-01-16 14:14:41 INFO clarifai.client.input: input.py:669
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "80d2454a1dea0411e20fb03b2fe0c8b1"
import { Input } from "clarifai-nodejs";
const inputText = "Write a tweet on future of AI";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
input.uploadText({
inputId: "text_data",
rawText: inputText,
});
Write Custom Functions for Data Processing
You can add your own custom functions for data processing with ease.
Below is an example of how to clean text data by removing Unicode characters before uploading it to the Clarifai platform.
- Python
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Remove unicode from text
def remove_unicode_and_upload(input_id, text):
    string_encode = text.encode("ascii", "ignore")
    string_decode = string_encode.decode()
    input_object.upload_text(input_id=input_id, raw_text=string_decode)

remove_unicode_and_upload(input_id='test', text="This is a test \u200c example. ")
Upload Audio Data
Below is an example of how to upload audio data.
- Python
- Typescript
from clarifai.client.input import Inputs
audio_url = "https://s3.amazonaws.com/samples.clarifai.com/GoodMorning.wav"
input_obj = Inputs(user_id="user_id", app_id="test_app", pat="YOUR_PAT")
# You can also upload data from bytes or a file path:
# Upload from file
# input_obj.upload_from_file(input_id='audio_data', audio_file='audio_filepath')
# Upload from bytes
# input_obj.upload_from_bytes(input_id='audio_data', audio_bytes=audio)
input_obj.upload_from_url(
input_id="audio_data",
audio_url=audio_url,
)
Output
2024-01-16 14:18:58 INFO clarifai.client.input: input.py:669
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "c16d3dd066d7ee48d038744daacef6e8"
import { Input } from "clarifai-nodejs";
const audioUrl =
"https://s3.amazonaws.com/samples.clarifai.com/GoodMorning.wav";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
input.uploadFromUrl({
inputId: "audio_data",
audioUrl,
});
Upload Video Data
Below is an example of how to upload video data.
- Python
- Typescript
from clarifai.client.input import Inputs
video_url = "https://samples.clarifai.com/beer.mp4"
input_obj = Inputs(user_id="user_id", app_id="test_app", pat="YOUR_PAT")
# You can also upload data from bytes or a file path:
# Upload from file
# input_obj.upload_from_file(input_id='video_data', video_file='video_filepath')
# Upload from bytes
# input_obj.upload_from_bytes(input_id='video_data', video_bytes=video)
input_obj.upload_from_url(
    input_id="video_data", video_url=video_url
)
Output
2024-01-16 14:25:26 INFO clarifai.client.input: input.py:669
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "00576d040a6254019942ab4eceb306ad"
import { Input } from "clarifai-nodejs";
const videoUrl = "https://samples.clarifai.com/beer.mp4";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
await input.uploadFromUrl({
inputId: "video_data",
videoUrl,
});
Upload Multimodal Data
Below is an example of how to upload a combination of different input types, such as images and text, to the Clarifai platform.
Currently, Clarifai supports specific multimodal input combinations, such as [Image, Text] -> Text. This allows you to process and analyze interconnected data types for advanced use cases.
- Python
- Typescript
from clarifai.client.input import Inputs
input_obj = Inputs(user_id="user_id", app_id="test_app", pat="YOUR_PAT")
# Initialize inputs of different types
prompt = "What time of day is it?"
image_url = "https://samples.clarifai.com/metro-north.jpg"
# Provide the values for the different input types
input_obj.get_multimodal_input(
input_id="multimodal_data", image_url=image_url, raw_text=prompt
)
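# Note: get_multimodal_input() builds and returns the multimodal input proto shown
# in the output below. To store it in your app, you can pass the proto to
# input_obj.upload_inputs([...]); verify the exact call against the SDK reference.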
Output
id: "multimodal_data"
data {
image {
url: "https://samples.clarifai.com/metro-north.jpg"
}
text {
raw: "What time of day is it?"
}
}
import { Input } from "clarifai-nodejs";
const prompt = "What time of day is it?";
const imageUrl = "https://samples.clarifai.com/metro-north.jpg";
const multimodalInput = Input.getMultimodalInput({
inputId: "multimodal_data",
imageUrl,
rawText: prompt,
});
console.log(multimodalInput);
Upload Custom Metadata
When using the Clarifai SDKs, you can enhance your inputs by attaching custom metadata alongside concepts. This feature enables you to include additional contextual information, such as categorization, filtering criteria, or reference data, making it easier to organize and retrieve your inputs later.
Below are examples of how to upload inputs with custom metadata. In these examples, the metadata includes details about the filename and the dataset split (e.g., train, validate, or test) to which the input belongs.
Image With Metadata
- Python
- Typescript
# Import necessary modules
from google.protobuf.struct_pb2 import Struct
from clarifai.client.input import Inputs
# Create an Inputs object with user_id and app_id
input_object = Inputs(user_id="user_id", app_id="app_id", pat="YOUR_PAT")
# Create a Struct object for metadata
metadata = Struct()
# Update metadata with filename and split information
metadata.update({"filename": "XiJinping.jpg", "split": "train"})
# URL of the image to upload
url = "https://samples.clarifai.com/XiJinping.jpg"
# Upload the image from the URL with associated metadata
input_object.upload_from_url(input_id="metadata", image_url=url, metadata=metadata)
Output
2024-04-05 13:03:24 INFO clarifai.client.input: input.py:674
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "951a64b950cccf05c8d274c8acc1f0f6"
INFO:clarifai.client.input:
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "951a64b950cccf05c8d274c8acc1f0f6"
('8557e0f57f464c22b3483de76757fb4f',
status {
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "951a64b950cccf05c8d274c8acc1f0f6"
}
inputs {
id: "metadata"
data {
image {
url: "https://samples.clarifai.com/XiJinping.jpg"
image_info {
format: "UnknownImageFormat"
color_mode: "UnknownColorMode"
}
}
metadata {
fields {
key: "filename"
value {
string_value: "XiJinping.jpg"
}
}
fields {
key: "split"
value {
string_value: "train"
}
}
}
}
created_at {
seconds: 1712322204
nanos: 737881425
}
modified_at {
seconds: 1712322204
nanos: 737881425
}
status {
code: INPUT_DOWNLOAD_PENDING
description: "Download pending"
}
}
inputs_add_job {
id: "8557e0f57f464c22b3483de76757fb4f"
progress {
pending_count: 1
}
created_at {
seconds: 1712322204
nanos: 714751000
}
modified_at {
seconds: 1712322204
nanos: 714751000
}
status {
code: JOB_QUEUED
description: "Job is queued to be ran."
}
})
import { Input } from "clarifai-nodejs";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
const metadata = {
filename: "XiJinping.jpg",
split: "train",
};
const imageUrl = "https://samples.clarifai.com/XiJinping.jpg";
await input.uploadFromUrl({
inputId: "image_with_metadata",
imageUrl,
metadata,
});
Video With Metadata
- Python
- Typescript
from google.protobuf.struct_pb2 import Struct
from clarifai.client.input import Inputs
# Initialize an Inputs object with specified user_id and app_id
input_object = Inputs(user_id="user_id", app_id="app_id", pat="YOUR_PAT")
# Define the URL of the video to upload
video_url = "https://samples.clarifai.com/beer.mp4"
# Create a Struct object to hold metadata
metadata = Struct()
# Update the metadata with filename and split information
metadata.update({"filename": "drinks.jpg", "split": "train"})
# Upload the video from the specified URL with the provided metadata
input_object.upload_from_url(
input_id="video_data_metadata", video_url=video_url, metadata=metadata
)
Output
2024-04-05 13:05:49 INFO clarifai.client.input: input.py:674
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "72c9820d805efb9f3ee7f0508778c1f3"
INFO:clarifai.client.input:
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "72c9820d805efb9f3ee7f0508778c1f3"
('7fdc30b9c2a24f31b6a41b32bd9fea02',
status {
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "72c9820d805efb9f3ee7f0508778c1f3"
}
inputs {
id: "video_data_metadata"
data {
video {
url: "https://samples.clarifai.com/beer.mp4"
video_info {
video_format: "UnknownVideoFormat"
}
}
metadata {
fields {
key: "filename"
value {
string_value: "drinks.jpg"
}
}
fields {
key: "split"
value {
string_value: "train"
}
}
}
}
created_at {
seconds: 1712322349
nanos: 628288634
}
modified_at {
seconds: 1712322349
nanos: 628288634
}
status {
code: INPUT_DOWNLOAD_PENDING
description: "Download pending"
}
}
inputs_add_job {
id: "7fdc30b9c2a24f31b6a41b32bd9fea02"
progress {
pending_count: 1
}
created_at {
seconds: 1712322349
nanos: 602487000
}
modified_at {
seconds: 1712322349
nanos: 602487000
}
status {
code: JOB_QUEUED
description: "Job is queued to be ran."
}
})
import { Input } from "clarifai-nodejs";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
const metadata = {
filename: "beer.mp4",
split: "train",
};
const videoUrl = "https://samples.clarifai.com/beer.mp4";
await input.uploadFromUrl({
inputId: "video_data_metadata",
videoUrl,
metadata,
});
Text With Metadata
- Python
- Typescript
# Import necessary modules
from google.protobuf.struct_pb2 import Struct
from clarifai.client.input import Inputs
# Define the input object with user_id and app_id
input_object = Inputs(user_id="user_id", app_id="app_id", pat="YOUR_PAT")
# Define the input text
input_text = b"Write a tweet on future of AI"
# Create a Struct object for metadata
metadata = Struct()
# Update metadata with filename and split information
metadata.update({"filename": "tweet.txt", "split": "train"})
# Upload the input from bytes with custom metadata
input_object.upload_from_bytes(input_id="text_data_metadata", text_bytes=input_text, metadata=metadata)
Output
2024-04-05 13:07:04 INFO clarifai.client.input: input.py:674
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "835f6c736f032947d1f4067e39c10b72"
INFO:clarifai.client.input:
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "835f6c736f032947d1f4067e39c10b72"
('e3de274f644a4e98a488e7c85f94c0d1',
status {
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "835f6c736f032947d1f4067e39c10b72"
}
inputs {
id: "text_data_metadata"
data {
metadata {
fields {
key: "filename"
value {
string_value: "tweet.txt"
}
}
fields {
key: "split"
value {
string_value: "train"
}
}
}
text {
url: "https://data.clarifai.com/orig/users/8tzpjy1a841y/apps/visual_classifier_eval/inputs/text/c439598b04d8112867eec70097aa00c2"
text_info {
encoding: "UnknownTextEnc"
}
}
}
created_at {
seconds: 1712322424
nanos: 56818659
}
modified_at {
seconds: 1712322424
nanos: 56818659
}
status {
code: INPUT_DOWNLOAD_PENDING
description: "Download pending"
}
}
inputs_add_job {
id: "e3de274f644a4e98a488e7c85f94c0d1"
progress {
pending_count: 1
}
created_at {
seconds: 1712322423
nanos: 941401000
}
modified_at {
seconds: 1712322423
nanos: 941401000
}
status {
code: JOB_QUEUED
description: "Job is queued to be ran."
}
})
import { Input } from "clarifai-nodejs";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
const textBytes = Buffer.from("Write a tweet on future of AI");
const metadata = {
filename: "tweet.txt",
split: "train",
};
await input.uploadFromBytes({
inputId: "text_with_metadata",
textBytes,
metadata,
});
Audio With Metadata
- Python
- Typescript
# Import necessary modules
from clarifai.client.input import Inputs
from google.protobuf.struct_pb2 import Struct
# Define the input object with user_id and app_id
input_object = Inputs(user_id="user_id", app_id="app_id", pat="YOUR_PAT")
# Define the URL of the audio file
audio_url = "https://s3.amazonaws.com/samples.clarifai.com/GoodMorning.wav"
# Create a new Struct to hold metadata
metadata = Struct()
# Update the metadata with filename and split information
metadata.update({"filename": "goodmorning.wav", "split": "test"})
# Upload the input from the specified URL with metadata
input_object.upload_from_url(
input_id="audio_data_metadata", # Specify an ID for the input
audio_url=audio_url, # URL of the audio file
metadata=metadata # Custom metadata associated with the input
)
Output
2024-04-08 06:39:32 INFO clarifai.client.input: input.py:674
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "4c96e4167170c174838c7987101f3478"
INFO:clarifai.client.input:
Inputs Uploaded
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "4c96e4167170c174838c7987101f3478"
('109349aa790a404db39f6324415a47a5',
status {
code: SUCCESS
description: "Ok"
details: "All inputs successfully added"
req_id: "4c96e4167170c174838c7987101f3478"
}
inputs {
id: "audio_data_metadata"
data {
metadata {
fields {
key: "filename"
value {
string_value: "goodmorning.wav"
}
}
fields {
key: "split"
value {
string_value: "test"
}
}
}
audio {
url: "https://s3.amazonaws.com/samples.clarifai.com/GoodMorning.wav"
audio_info {
audio_format: "UnknownAudioFormat"
}
}
}
created_at {
seconds: 1712558372
nanos: 764691920
}
modified_at {
seconds: 1712558372
nanos: 764691920
}
status {
code: INPUT_DOWNLOAD_PENDING
description: "Download pending"
}
}
inputs_add_job {
id: "109349aa790a404db39f6324415a47a5"
progress {
pending_count: 1
}
created_at {
seconds: 1712558372
nanos: 751997000
}
modified_at {
seconds: 1712558372
nanos: 751997000
}
status {
code: JOB_QUEUED
description: "Job is queued to be ran."
}
})
import { Input } from "clarifai-nodejs";
const input = new Input({
authConfig: {
userId: process.env.CLARIFAI_USER_ID,
pat: process.env.CLARIFAI_PAT,
appId: "test_app",
},
});
const metadata = {
filename: "goodmorning.wav",
split: "test",
};
const audioUrl =
"https://s3.amazonaws.com/samples.clarifai.com/GoodMorning.wav";
await input.uploadFromUrl({
inputId: "audio_data_metadata",
audioUrl,
metadata,
});
Upload Inputs with Geospatial Information
When uploading inputs to Clarifai, you can enrich them by including geospatial data, such as longitude and latitude coordinates from the GPS system.
This allows you to associate each input with a specific geographic location. Note that each input can have at most one geospatial point associated with it.
- Python
from clarifai.client.input import Inputs
# URL of the image to upload
image_url = "https://samples.clarifai.com/Ferrari.jpg"
# Provide the Geoinfo to be added to the input
# geo_info=[longitude, latitude]
geo_points = [102,73]
# Create an Inputs object with user_id and app_id
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Upload the image from the URL with associated GeoInfo
input_object.upload_from_url(input_id="geo_info", image_url=image_url, geo_info=geo_points)
Upload Inputs With Annotations
You can upload inputs along with their corresponding annotations, such as bounding boxes or polygons.
Bounding Box Annotations
Below is an example of how to label a new rectangular bounding box for a specific region within an image. The bounding box coordinates should be normalized to the image dimensions, with values scaled to the range of [0, 1.0].
This ensures that the coordinates are independent of the image resolution, making the annotations consistent across different image sizes.
- Python
# Start by uploading the image with a specific input ID as described earlier
# For example, you can upload this image: https://samples.clarifai.com/BarackObama.jpg
# Then, after successfully uploading it, apply the bounding box annotations
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Upload bounding box annotations
bbox_points = [.1, .1, .8, .9] # Coordinates of the bounding box
annotation = input_object.get_bbox_proto(input_id="bbox", label="face", bbox=bbox_points, label_id="id-face", annot_id="demo")
input_object.upload_annotations([annotation])
Polygon Annotations
Below is an example of how to annotate any polygon-shaped region within an image.
A polygon is defined by a list of points, each specified by:
- row — The row position of the point, represented as a value between 0.0 and 1.0, where 0.0 corresponds to the top row and 1.0 corresponds to the bottom.
- col — The column position of the point, represented as a value between 0.0 and 1.0, where 0.0 corresponds to the left column of the image and 1.0 corresponds to the right column.
- Python
# Start by uploading the image with a specific input ID as described earlier
# For example, you can upload this image: https://samples.clarifai.com/airplane.jpeg
# Then, after successfully uploading it, apply the polygon annotations
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Upload polygon annotations
#polygons=[[[x,y],...,[x,y]],...]
polygon_pts = [[.15,.24],[.4,.78],[.77,.62],[.65,.15]]
annotation = input_object.get_mask_proto(input_id="mask", label="airplane", polygons=polygon_pts)
input_object.upload_annotations([annotation])
Concepts Annotations
Below is an example of how to annotate different types of inputs with concepts.
- Python
from clarifai.client.input import Inputs
url = "https://samples.clarifai.com/featured-models/Llama2_Conversational-agent.txt"
# Change this depending on the type of input you want to annotate
concepts = ["mobile","camera"]
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Upload text data with concepts
input_object.upload_from_url(input_id="text1", text_url=url, labels=concepts)
# Upload image data with concepts
#input_object.upload_from_url(input_id="image1", image_url="ADD_URL_HERE", labels=concepts)
# Upload video data with concepts
#input_object.upload_from_url(input_id="video1", video_url="ADD_URL_HERE", labels=concepts)
# Upload audio data with concepts
#input_object.upload_from_url(input_id="audio1", audio_url="ADD_URL_HERE", labels=concepts)
Bulk Delete Input Annotations
Below is an example of how to delete all the annotations associated with given inputs by specifying their input IDs.
The annotation_ids parameter is optional. However, if provided, the number and order of annotation_ids must match the corresponding input_ids.
- Python
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Bulk delete annotations
input_object.delete_annotations(input_ids=["input_id1", "input_id1", "input_id2"], annotation_ids=["annot_id11", "annot_id12", "annot_id21"])
List Inputs
You can retrieve all inputs within your app using the list_inputs() method. This method supports pagination, so you can efficiently organize and display data. You can customize your queries by adjusting the page_no and per_page parameters to fit your specific needs.
- Python
from clarifai.client.user import User
# Create the input object
input_obj = User(user_id="user_id").app(app_id="test_app", pat="YOUR_PAT").inputs()
# list the inputs with pagination
all_inputs = list(input_obj.list_inputs(page_no=1,per_page=3))
print(all_inputs)
Output
[id: "demo1"
data {
image {
url: "https://samples.clarifai.com/metro-north.jpg"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/8tzpjy1a841y/apps/test_app/inputs/image/140c856dc82565d2c4d6ea720fceff78"
sizes: "orig"
sizes: "tiny"
sizes: "small"
sizes: "large"
crossorigin: "use-credentials"
}
image_info {
width: 512
height: 384
format: "JPEG"
color_mode: "YUV"
}
}
}
created_at {
seconds: 1705917660
nanos: 789409000
}
...
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
]
Download Inputs
Below is an example of how to download inputs from your app.
- Python
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Download inputs
input_object.download_inputs(list(input_object.list_inputs()))
Delete Inputs
Below is an example of how to delete inputs from your app by providing the list of inputs to remove.
Be certain that you want to delete a particular input, as the operation cannot be undone.
- Python
from clarifai.client.user import User
input_obj = User(user_id="user_id", pat="YOUR_PAT").app(app_id="test_app").inputs()
# provide the inputs ids as parameters in delete_inputs function
input_obj.delete_inputs(list(input_obj.list_inputs()))
Output
2024-01-16 14:44:28 INFO clarifai.client.input: input.py:732
Inputs Deleted
code: SUCCESS
description: "Ok"
req_id: "4ae26cd15c7da98a1c2d3647b03d2768"
Patch Inputs
You can apply patch operations to an input, allowing for the merging or removal of items. By default, these actions overwrite existing data, but they behave differently when handling lists of objects.
- The merge action replaces a key:value pair with a key:new_value, or appends new values to an existing list. When dealing with dictionaries, it merges entries that share the same id field.
- The remove action replaces a key:value pair with a key:new_value, or removes any items from a list that match the IDs of the provided values.
- The overwrite action fully replaces an existing object with a new one.
Metadata
Here is an example of how to patch the metadata of an input.
- Python
from clarifai.client.input import Inputs
from google.protobuf.struct_pb2 import Struct
# Metadata must be provided as a Struct object, so we create one, add the necessary details, and pass it to the input proto
metadata = Struct()
metadata.update({"split": "test"})
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
new_input = input_object._get_proto(input_id="YOUR_INPUT_ID_HERE", metadata= metadata)
# Update the metadata
input_object.patch_inputs([new_input],action="merge")
# Overwrite the metadata
input_object.patch_inputs([new_input],action='overwrite')
Bounding Box Annotation
Here is an example of how to patch a bounding box annotation on an input.
- Python
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Upload the image with a specific input ID
input_object.upload_from_url(input_id="bbox", image_url="https://samples.clarifai.com/BarackObama.jpg")
# Upload initial bounding box annotations
bbox_points = [.1, .1, .8, .9] # Coordinates of the bounding box
annotation = input_object.get_bbox_proto(input_id="bbox", label="face", bbox=bbox_points, label_id="id-face", annot_id="demo")
input_object.upload_annotations([annotation])
# Update existing bounding box annotations with new coordinates
bbox_points = [.35, .45, .6, .7] # New coordinates of the bounding box
annotation = input_object.get_bbox_proto(input_id="bbox", label="face", bbox=bbox_points, label_id="id-face", annot_id="demo")
input_object.patch_annotations([annotation], action='merge')
# Remove the bounding box annotations
bbox_points = [.3, .3, .6, .7] # Coordinates of the bounding box to be removed
annotation = input_object.get_bbox_proto(input_id="bbox", label="face", bbox=bbox_points, label_id="id-face", annot_id="demo")
input_object.patch_annotations([annotation], action='remove')
Polygon Annotation
Here is an example of how to patch a polygon annotation on an input.
- Python
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# Upload the image with a specific input ID
input_object.upload_from_url(input_id="polygon", image_url="https://samples.clarifai.com/BarackObama.jpg")
# Upload initial polygon annotations
polygon_pts = [[.1,.1],[.1,.9],[.9,.9],[.9,.1]] # Coordinates of the polygon
annotation = input_object.get_mask_proto(input_id="polygon", label="label", polygons=polygon_pts, annot_id="annotation_id")
input_object.upload_annotations([annotation])
# Update existing polygon annotations with new coordinates
polygon_pts = [[.15,.15],[.15,.95],[.95,.95],[.95,.15]] # New coordinates of the polygon
annotation = input_object.get_mask_proto(input_id="polygon", label="label", polygons=polygon_pts, annot_id="annotation_id")
input_object.patch_annotations([annotation],action='merge')
# Remove the polygon annotations
polygon_pts = [[.3,.3],[.3,.7],[.8,.8],[.7,.3]] # Coordinates of the polygon to be removed
annotation = input_object.get_mask_proto(input_id="polygon", label="label", polygons=polygon_pts, annot_id="annotation_id")
input_object.patch_annotations([annotation],action='remove')
Concepts
Below is an example of performing a patch operation on concepts. Currently, only the overwrite action is supported, allowing you to update the label names associated with an input.
- Python
from clarifai.client.input import Inputs
# Initialize the Inputs object with user and app IDs
input_object = Inputs(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE", pat="YOUR_PAT_HERE")
# This example changes the existing concept label "id-face" to "obama_face"
input_object.patch_concepts(
concept_ids=["id-face"], # The ID of the concept you want to update
labels=["obama_face"], # The new label name to overwrite the existing one
values=[],
action='overwrite'
)