Custom Transfer Learning Models

Develop your own custom models using transfer learning


You do not need many images to get started creating a custom model using our world-class transfer learning technology. We recommend starting with 10 images and adding more as needed.

Before you create and train your first model, you need to create an application and select Image/Video as the primary input type. The Base Workflow will be automatically selected for you.

info

The initialization code used in the following examples is outlined in detail on the client installation page.

Add Images With Concepts

tip

This walkthrough assumes that you've selected a Classification Base Workflow. If you choose a Detection Base Workflow instead, this Add Images With Concepts example could throw an error such as "Adding/patching inputs with pre-tagged concepts is not allowed for apps with a detection model in their base workflow. Please use Post or Patch Annotations instead." If you get such an error, first upload the inputs without any concepts attached, and then use the Annotations endpoint to label them.
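For the detection-workflow case, the two-step flow described above can be sketched as follows. This is a hypothetical illustration, not full client code: the helper names are made up, and the dictionaries only mirror the shape of the PostInputs and PostAnnotations JSON payloads (with the gRPC client you would build the equivalent resources_pb2.Input and resources_pb2.Annotation messages instead).

```python
# Hypothetical sketch of the two-step flow for detection-based apps:
# 1) upload the inputs with no concepts attached;
# 2) label them afterwards via the Annotations endpoint.

def build_unlabeled_input(image_url):
    """Payload shape for uploading an input without any concepts."""
    return {"data": {"image": {"url": image_url, "allow_duplicate_url": True}}}

def build_annotation(input_id, concept_id, present=True):
    """Payload shape for labeling an already-uploaded input with one concept."""
    return {
        "input_id": input_id,
        "data": {"concepts": [{"id": concept_id, "value": 1 if present else 0}]},
    }

# Step 1: post this input first (it carries no concepts) ...
inputs = [build_unlabeled_input("https://samples.clarifai.com/puppy.jpeg")]
# Step 2: ... then annotate it, using the input ID returned by the upload.
annotations = [build_annotation("my-input-id", "charlie")]
```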

To get started training your own model, you need to first add images that already contain the concepts you want your model to see.

##############################################################################
# In this section, we set the user authentication, app ID, and the images and
# concepts we want to add. Change these strings to run your own example.
##############################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to add your own images with concepts
IMAGE_URL_1 = 'https://samples.clarifai.com/puppy.jpeg'
IMAGE_URL_2 = 'https://samples.clarifai.com/wedding.jpg'
CONCEPT_ID_1 = 'charlie'
CONCEPT_ID_2 = 'our_wedding'
CONCEPT_ID_3 = 'our_wedding'
CONCEPT_ID_4 = 'charlie'
CONCEPT_ID_5 = 'cat'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_inputs_response = stub.PostInputs(
    service_pb2.PostInputsRequest(
        user_app_id=userDataObject,
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    image=resources_pb2.Image(
                        url=IMAGE_URL_1,
                        allow_duplicate_url=True
                    ),
                    concepts=[
                        resources_pb2.Concept(id=CONCEPT_ID_1, value=1),
                        resources_pb2.Concept(id=CONCEPT_ID_2, value=0),
                    ]
                )
            ),
            resources_pb2.Input(
                data=resources_pb2.Data(
                    image=resources_pb2.Image(
                        url=IMAGE_URL_2,
                        allow_duplicate_url=True
                    ),
                    concepts=[
                        resources_pb2.Concept(id=CONCEPT_ID_3, value=1),
                        resources_pb2.Concept(id=CONCEPT_ID_4, value=0),
                        resources_pb2.Concept(id=CONCEPT_ID_5, value=0),
                    ]
                )
            ),
        ]
    ),
    metadata=metadata
)

if post_inputs_response.status.code != status_code_pb2.SUCCESS:
    print("There was an error with your request!")
    for input_object in post_inputs_response.inputs:
        print("Input " + input_object.id + " status:")
        print(input_object.status)
    print(post_inputs_response.status)
    raise Exception("Post inputs failed, status: " + post_inputs_response.status.description)

print(post_inputs_response)
JSON Output Example
status {
  code: SUCCESS
  description: "Ok"
  req_id: "7ff42b88ef477bb9b9ecab0b61d051ca"
}
inputs {
  id: "7b708ee204284ed0a914dc37a7def8be"
  data {
    image {
      url: "https://samples.clarifai.com/puppy.jpeg"
      image_info {
        format: "UnknownImageFormat"
        color_mode: "UnknownColorMode"
      }
    }
    concepts {
      id: "charlie"
      name: "charlie"
      value: 1.0
      app_id: "test-app"
    }
    concepts {
      id: "our_wedding"
      name: "our_wedding"
      app_id: "test-app"
    }
  }
  created_at {
    seconds: 1646288847
    nanos: 89138802
  }
  modified_at {
    seconds: 1646288847
    nanos: 89138802
  }
  status {
    code: INPUT_DOWNLOAD_PENDING
    description: "Download pending"
  }
}
inputs {
  id: "5571376e9d42447dafb76711669f6731"
  data {
    image {
      url: "https://samples.clarifai.com/wedding.jpg"
      image_info {
        format: "UnknownImageFormat"
        color_mode: "UnknownColorMode"
      }
    }
    concepts {
      id: "our_wedding"
      name: "our_wedding"
      value: 1.0
      app_id: "test-app"
    }
    concepts {
      id: "charlie"
      name: "charlie"
      app_id: "test-app"
    }
    concepts {
      id: "cat"
      name: "cat"
      app_id: "test-app"
    }
  }
  created_at {
    seconds: 1646288847
    nanos: 89138802
  }
  modified_at {
    seconds: 1646288847
    nanos: 89138802
  }
  status {
    code: INPUT_DOWNLOAD_PENDING
    description: "Download pending"
  }
}

Create a Model

After adding images with concepts, you are now ready to create a custom transfer learning model (also called an "embedding-classifier"). You need to provide an ID for the model.

tip

If you want to create another type of model, you can use the model_type_id parameter to specify it. Otherwise, an "embedding-classifier" model is created by default.

Take note of the model ID, as we'll need it for the next steps.

##############################################################################
# In this section, we set the user authentication, app ID, and model ID.
# Change these strings to run your own example.
##############################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change this to create your own model
MODEL_ID = 'my-pets'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_models_response = stub.PostModels(
    service_pb2.PostModelsRequest(
        user_app_id=userDataObject,
        models=[
            resources_pb2.Model(
                id=MODEL_ID
            )
        ]
    ),
    metadata=metadata
)

if post_models_response.status.code != status_code_pb2.SUCCESS:
    print(post_models_response.status)
    raise Exception("Post models failed, status: " + post_models_response.status.description)

JSON Output Example
status {
  code: SUCCESS
  description: "Ok"
  req_id: "c179a31bea659b27214213ee137215f8"
}
model {
  id: "my-pets"
  name: "my-pets"
  created_at {
    seconds: 1693506608
    nanos: 652910264
  }
  app_id: "items-app"
  user_id: "my-user-id"
  model_type_id: "embedding-classifier"
  visibility {
    gettable: PRIVATE
  }
  metadata {
  }
  modified_at {
    seconds: 1693506608
    nanos: 652910264
  }
  presets {
  }
  workflow_recommended {
  }
}

Train the Model

Now that you've added images with concepts and created a model, the next step is to train it. When you train a model, you are telling the system to look at all the images with concepts you've provided and learn from them.

This train operation is asynchronous. It may take a few seconds for your model to be fully trained and ready.

Take note of the model_version ID in the response; we'll need it in the next section when we predict with the model.

########################################################################################
# In this section, we set the user authentication, app ID, model ID, and concept IDs.
# Change these strings to run your own example.
########################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to train your own model
MODEL_ID = 'my-pets'
CONCEPT_ID_1 = 'charlie'
CONCEPT_ID_2 = 'our_wedding'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_versions = stub.PostModelVersions(
    service_pb2.PostModelVersionsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID,
        model_versions=[
            resources_pb2.ModelVersion(
                output_info=resources_pb2.OutputInfo(
                    data=resources_pb2.Data(
                        concepts=[
                            resources_pb2.Concept(id=CONCEPT_ID_1, value=1),  # 1 means true, this concept is present
                            resources_pb2.Concept(id=CONCEPT_ID_2, value=1)
                        ]
                    ),
                )
            )
        ]
    ),
    metadata=metadata
)

if post_model_versions.status.code != status_code_pb2.SUCCESS:
    print(post_model_versions.status)
    raise Exception("Post model versions failed, status: " + post_model_versions.status.description)
JSON Output Example
status {
  code: SUCCESS
  description: "Ok"
  req_id: "c2b73a383ff73d57ce10eb92d4ceeca3"
}
model {
  id: "my-pets"
  name: "my-pets"
  created_at {
    seconds: 1693501169
    nanos: 811818000
  }
  app_id: "items-app"
  model_version {
    id: "adbd648acc8146f788520dad0376411e"
    created_at {
      seconds: 1693558909
      nanos: 61554817
    }
    status {
      code: MODEL_QUEUED_FOR_TRAINING
      description: "Model is currently in queue for training."
    }
    active_concept_count: 2
    visibility {
      gettable: PRIVATE
    }
    app_id: "items-app"
    user_id: "alfrick"
    metadata {
    }
    output_info {
      output_config {
      }
      message: "Show output_info with: GET /models/{model_id}/output_info"
      params {
        fields {
          key: "max_concepts"
          value {
            number_value: 20.0
          }
        }
        fields {
          key: "min_value"
          value {
            number_value: 0.0
          }
        }
        fields {
          key: "select_concepts"
          value {
            list_value {
            }
          }
        }
      }
    }
    input_info {
      params {
      }
      base_embed_model {
        id: "general-image-embedding"
        app_id: "main"
        model_version {
          id: "bb186755eda04f9cbb6fe32e816be104"
        }
        user_id: "clarifai"
        model_type_id: "visual-embedder"
      }
    }
    train_info {
      params {
        fields {
          key: "dataset_id"
          value {
            string_value: ""
          }
        }
        fields {
          key: "dataset_version_id"
          value {
            string_value: ""
          }
        }
        fields {
          key: "enrich_dataset"
          value {
            string_value: "Automatic"
          }
        }
      }
    }
    import_info {
    }
  }
  user_id: "alfrick"
  model_type_id: "embedding-classifier"
  visibility {
    gettable: PRIVATE
  }
  metadata {
  }
  modified_at {
    seconds: 1693501169
    nanos: 811818000
  }
  presets {
  }
  workflow_recommended {
  }
}
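Because training is asynchronous, it can be handy to wait until the new version reports MODEL_TRAINED before predicting. Below is a hypothetical polling sketch: wait_until_trained is a made-up helper, and get_status stands in for whatever call fetches the current status code (for example, reading model_version.status.code from a GetModelVersion response).

```python
import time

def wait_until_trained(get_status, trained_code, timeout_s=300, poll_s=2.0):
    """Poll get_status() until it returns trained_code, or raise on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == trained_code:
            return trained_code
        time.sleep(poll_s)  # wait a bit before asking again
    raise TimeoutError("model was not trained within %s seconds" % timeout_s)

# With the gRPC client, for example (names from the sections above):
# get_status = lambda: stub.GetModelVersion(
#     service_pb2.GetModelVersionRequest(
#         user_app_id=userDataObject, model_id=MODEL_ID, version_id=version_id),
#     metadata=metadata).model_version.status.code
# wait_until_trained(get_status, status_code_pb2.MODEL_TRAINED)
```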

Predict With the Model

Now that we have trained the model, we can start making predictions with it. In our predict call, we specify three items: the model ID, the model version ID (optional; defaults to the latest trained version if omitted), and the input we want a prediction for.

tip

You can repeat the steps above as often as you like. By adding more images with concepts and retraining, you can get the model to predict exactly how you want it to.

####################################################################################
# In this section, we set the user authentication, app ID, model ID, model version,
# and image URL. Change these strings to run your own example.
####################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to make your own predictions
MODEL_ID = 'my-pets'
MODEL_VERSION = '8eb21f63ba9d40c7b84ecfd664ac603d'
IMAGE_URL = 'https://samples.clarifai.com/puppy.jpeg'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID,
        version_id=MODEL_VERSION,  # This is optional. Defaults to the latest model version.
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    image=resources_pb2.Image(
                        url=IMAGE_URL
                    )
                )
            )
        ]
    ),
    metadata=metadata
)

if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception("Post model outputs failed, status: " + post_model_outputs_response.status.description)

# Since we have one input, one output will exist here.
output = post_model_outputs_response.outputs[0]

print("Predicted concepts:")
for concept in output.data.concepts:
    print("%s %.2f" % (concept.name, concept.value))

# Uncomment this line to print the full Response JSON
#print(post_model_outputs_response)
Code Output Example
Predicted concepts:
charlie 1.00
JSON Output Example
status {
  code: SUCCESS
  description: "Ok"
  req_id: "db4cf89c13303aa9889a89f2ae0a91f4"
}
outputs {
  id: "20ed3f59dc5b4b1e9082a7e91ff29f48"
  status {
    code: SUCCESS
    description: "Ok"
  }
  created_at {
    seconds: 1646333543
    nanos: 352417324
  }
  model {
    id: "my-pets"
    name: "my-pets"
    created_at {
      seconds: 1646291711
      nanos: 640607000
    }
    app_id: "test-app"
    output_info {
      output_config {
      }
      message: "Show output_info with: GET /models/{model_id}/output_info"
      params {
        fields {
          key: "max_concepts"
          value {
            number_value: 20.0
          }
        }
        fields {
          key: "min_value"
          value {
            number_value: 0.0
          }
        }
        fields {
          key: "select_concepts"
          value {
            list_value {
            }
          }
        }
      }
    }
    model_version {
      id: "8eb21f63ba9d40c7b84ecfd664ac603d"
      created_at {
        seconds: 1646330065
        nanos: 537080000
      }
      status {
        code: MODEL_TRAINED
        description: "Model is trained and ready"
      }
      total_input_count: 14
      completed_at {
        seconds: 1646330068
        nanos: 100250000
      }
      visibility {
        gettable: PRIVATE
      }
      app_id: "test-app"
      user_id: "ei2leoz3s3iy"
      metadata {
      }
    }
    user_id: "ei2leoz3s3iy"
    input_info {
    }
    train_info {
    }
    model_type_id: "embedding-classifier"
    visibility {
      gettable: PRIVATE
    }
    modified_at {
      seconds: 1646291711
      nanos: 640607000
    }
    import_info {
    }
  }
  input {
    id: "f1ce5584c5e54653b722ac3ef163a077"
    data {
      image {
        url: "https://samples.clarifai.com/puppy.jpeg"
      }
    }
  }
  data {
    concepts {
      id: "charlie"
      name: "charlie"
      value: 0.9998574256896973
      app_id: "test-app"
    }
  }
}
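Each predicted concept carries a confidence value between 0 and 1, so a common follow-up is filtering the response down to the confident concepts only. The helper below is a hypothetical sketch: confident_concepts is a made-up name, and the (name, value) pairs stand in for the output.data.concepts list in the predict response.

```python
def confident_concepts(concepts, threshold=0.9):
    """Return (name, value) pairs at or above the threshold, highest first."""
    kept = [(name, value) for name, value in concepts if value >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

# Values echo the JSON output above; "our_wedding" is filtered out here.
predicted = [("charlie", 0.9998574256896973), ("our_wedding", 0.0312)]
print(confident_concepts(predicted))
```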