Deep Fine-Tuning
Learn how deep fine-tuning works
Fine-tuning is a deep learning technique in which a pre-trained model is further trained on a new dataset or task. The term "fine-tuning" implies making small adjustments to the representations the model has already learned, rather than training from scratch.
Fine-tuning leverages the power of pre-trained models to improve their performance on a new, related task. You take a model that was previously trained on a vast dataset for a general-purpose task and tailor it to a more specific one.
Click here to learn why you might consider deep fine-tuning.
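To make the idea concrete, the sketch below shows the general pattern in PyTorch-style code. It is purely illustrative and not Clarifai-specific: a pre-trained backbone is reused, its weights are frozen, and only a small task-specific head is trained on the new data.
import torch
import torchvision

# Illustrative only: reuse a pre-trained backbone and retrain just the head.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")  # pre-trained weights
for param in model.parameters():
    param.requires_grad = False  # freeze the learned representations
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new head for 10 concepts

# Only the new head's parameters are updated during fine-tuning
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)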
The initialization code used in the following examples is outlined in detail on the client installation page.
Create Models
To create a deep fine-tuned model, you need to specify the type of model using the model_type_id parameter.
You can use the ListModelTypes method to learn more about the available model types and their hyperparameters.
Here are some of the types of deep fine-tuned models you can create:
- Visual classifier (visual-classifier): Create this model to classify images and video frames into a set of concepts.
- Visual detector (visual-detector): Create this model to detect bounding box regions in images or video frames and then classify the detected regions. You can also send the image regions to an image cropper model to create a new cropped image.
- Visual embedder (visual-embedder): Create this model to transform images and video frames into "high level" vector representations that our AI models understand. These embeddings enable visual search and can be used as base models to train other models.
- Visual segmenter (visual-segmenter): Create this model to segment a per-pixel mask in images, locating where things are, and then classify objects, descriptive words, or topics within the masks.
- Visual anomaly heatmap (visual-anomaly-heatmap): Create this model to perform visual anomaly detection with an image-level score and an anomaly heatmap.
- Text classifier (text-classifier): Create this model to classify text into a set of concepts.
- Text generator (text-to-text): Create this model to generate or convert text based on the provided text input. For example, you can create it for prompt completion, translation, or summarization tasks.
Below is an example of how you would create a visual classifier model.
- Python
- JavaScript (REST)
- NodeJS
- Java
- PHP
- cURL
##########################################################################################
# In this section, we set the user authentication, app ID, model ID, and model type ID.
# Change these strings to run your own example.
#########################################################################################
USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to create your own model
MODEL_ID = 'petsID'
MODEL_TYPE_ID = 'visual-classifier'
##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
metadata = (('authorization', 'Key ' + PAT),)
userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)
post_models_response = stub.PostModels(
service_pb2.PostModelsRequest(
user_app_id=userDataObject,
models=[
resources_pb2.Model(
id=MODEL_ID,
model_type_id=MODEL_TYPE_ID
)
]
),
metadata=metadata
)
if post_models_response.status.code != status_code_pb2.SUCCESS:
print(post_models_response.status)
raise Exception("Post models failed, status: " + post_models_response.status.description)
<!--index.html file-->
<script>
///////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and model type ID.
// Change these strings to run your own example.
//////////////////////////////////////////////////////////////////////////////////////////
const USER_ID = 'YOUR_USER_ID_HERE';
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = 'YOUR_PAT_HERE';
const APP_ID = 'YOUR_APP_ID_HERE';
// Change these to create your own model
const MODEL_ID = 'petsID';
const MODEL_TYPE_ID = 'visual-classifier';
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
const raw = JSON.stringify({
"user_app_id": {
"user_id": USER_ID,
"app_id": APP_ID
},
"model": {
"id": MODEL_ID,
"model_type_id": MODEL_TYPE_ID
}
});
const requestOptions = {
method: 'POST',
headers: {
'Accept': 'application/json',
'Authorization': 'Key ' + PAT
},
body: raw
};
fetch("https://api.clarifai.com/v2/models", requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log('error', error));
</script>
//index.js file
////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and model type ID.
// Change these strings to run your own example.
///////////////////////////////////////////////////////////////////////////////////////////
const USER_ID = 'YOUR_USER_ID_HERE';
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = 'YOUR_PAT_HERE';
const APP_ID = 'YOUR_APP_ID_HERE';
// Change these to create your own model
const MODEL_ID = 'petsID';
const MODEL_TYPE_ID = 'visual-classifier';
/////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
/////////////////////////////////////////////////////////////////////////////
const { ClarifaiStub, grpc } = require("clarifai-nodejs-grpc");
const stub = ClarifaiStub.grpc();
// This will be used by every Clarifai endpoint call
const metadata = new grpc.Metadata();
metadata.set("authorization", "Key " + PAT);
stub.PostModels(
{
user_app_id: {
"user_id": USER_ID,
"app_id": APP_ID
},
models: [
{
id: MODEL_ID,
model_type_id: MODEL_TYPE_ID
}
]
},
metadata,
(err, response) => {
if (err) {
throw new Error(err);
}
if (response.status.code !== 10000) {
throw new Error("Post models failed, status: " + response.status.description);
}
}
);
package com.clarifai.example;
import com.clarifai.grpc.api.*;
import com.clarifai.channel.ClarifaiChannel;
import com.clarifai.credentials.ClarifaiCallCredentials;
import com.clarifai.grpc.api.status.StatusCode;
public class ClarifaiExample {
////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and model type ID.
// Change these strings to run your own example.
///////////////////////////////////////////////////////////////////////////////////////////
static final String USER_ID = "YOUR_USER_ID_HERE";
//Your PAT (Personal Access Token) can be found in the portal under Authentication
static final String PAT = "YOUR_PAT_HERE";
static final String APP_ID = "YOUR_APP_ID_HERE";
// Change these to create your own model
static final String MODEL_ID = "petsID";
static final String MODEL_TYPE_ID = "visual-classifier";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
public static void main(String[] args) {
V2Grpc.V2BlockingStub stub = V2Grpc.newBlockingStub(ClarifaiChannel.INSTANCE.getGrpcChannel())
.withCallCredentials(new ClarifaiCallCredentials(PAT));
SingleModelResponse postModelsResponse = stub.postModels(
PostModelsRequest.newBuilder()
.setUserAppId(UserAppIDSet.newBuilder().setUserId(USER_ID).setAppId(APP_ID))
.addModels(
Model.newBuilder()
.setId(MODEL_ID)
.setModelTypeId(MODEL_TYPE_ID)
).build()
);
if (postModelsResponse.getStatus().getCode() != StatusCode.SUCCESS) {
throw new RuntimeException("Post models failed, status: " + postModelsResponse.getStatus());
}
}
}
<?php
require __DIR__ . "/vendor/autoload.php";
/////////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and model type ID.
// Change these strings to run your own example.
/////////////////////////////////////////////////////////////////////////////////////////////////
$USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the Account's Security section
$PAT = "YOUR_PAT_HERE";
$APP_ID = "YOUR_APP_ID_HERE";
// Change these to create your own model
$MODEL_ID = "petsID";
$MODEL_TYPE_ID = "visual-classifier";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
use Clarifai\ClarifaiClient;
use Clarifai\Api\Model;
use Clarifai\Api\PostModelsRequest;
use Clarifai\Api\Status\StatusCode;
use Clarifai\Api\UserAppIDSet;
$client = ClarifaiClient::grpc();
$metadata = ["Authorization" => ["Key " . $PAT]];
$userDataObject = new UserAppIDSet([
"user_id" => $USER_ID,
"app_id" => $APP_ID,
]);
// Let's make a RPC call to the Clarifai platform. It uses the opened gRPC client channel to communicate a
// request and then wait for the response
[$response, $status] = $client->PostModels(
// The request object carries the request along with the request status and other metadata related to the request itself
new PostModelsRequest([
"user_app_id" => $userDataObject,
"models" => [
new Model([
"id" => $MODEL_ID,
"model_type_id" => $MODEL_TYPE_ID,
]),
],
]),
$metadata
)->wait();
// A response is returned and the first thing we do is check the status of it
// A successful response will have a status code of 0; otherwise, there is some error
if ($status->code !== 0) {
throw new Exception("Error: {$status->details}");
}
// In addition to the RPC response status, there is a Clarifai API status that reports if the operation was a success or failure
// (not just that the communication was successful)
if ($response->getStatus()->getCode() != StatusCode::SUCCESS) {
throw new Exception("Failure response: " . $response->getStatus()->getDescription() . " " . $response->getStatus()->getDetails());
}
?>
curl -X POST "https://api.clarifai.com/v2/users/YOUR_USER_ID_HERE/apps/YOUR_APP_ID_HERE/models" \
-H "Authorization: Key YOUR_PAT_HERE" \
-H "Content-Type: application/json" \
-d '{
"model": {
"id": "petsID",
"model_type_id": "visual-classifier"
}
}'
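The other model types listed above are created in exactly the same way; only the model_type_id value changes. For instance, a variation of the Python example above (the model ID is a hypothetical placeholder):
MODEL_ID = 'my-text-classifier'    # hypothetical model ID
MODEL_TYPE_ID = 'text-classifier'  # any model type ID from the list above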
Template Types
You can take advantage of a variety of our pre-configured templates when developing your deep fine-tuned models. Templates give you control over the specific architecture used by your neural network, and they define a set of hyperparameters you can adjust to fine-tune the way your model learns.
Click here to learn more about the template types we offer—alongside their hyperparameters.
Below is an example of how you would use the ListModelTypes endpoint to list the templates and hyperparameters available in a specific model type.
- Python
- JavaScript (REST)
- NodeJS
- Java
- PHP
#####################################################################################
# In this section, we set the user authentication, app ID, and model type ID.
# Change these strings to run your own example.
####################################################################################
USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change this to list the template types of your preferred model
MODEL_TYPE = 'visual-classifier'
##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
metadata = (('authorization', 'Key ' + PAT),)
userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)
response = stub.ListModelTypes(
service_pb2.ListModelTypesRequest(
user_app_id=userDataObject
),
metadata=metadata
)
if response.status.code != status_code_pb2.SUCCESS:
print(response.status)
raise Exception("List models failed, status: " + response.status.description)
for model_type in response.model_types:
if model_type.id == MODEL_TYPE:
for modeltypefield in model_type.model_type_fields:
if modeltypefield.path.split('.')[-1] == "template":
for template in modeltypefield.model_type_enum_options:
print(template)
<!--index.html file-->
<script>
////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, and model type ID.
// Change these strings to run your own example.
////////////////////////////////////////////////////////////////////////////////////////
const USER_ID = 'YOUR_USER_ID_HERE';
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = 'YOUR_PAT_HERE';
const APP_ID = 'YOUR_APP_ID_HERE';
// Change this to list the template types of your preferred model
const MODEL_TYPE = 'visual-classifier';
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
const requestOptions = {
method: 'GET',
headers: {
'Accept': 'application/json',
'Authorization': 'Key ' + PAT
}
};
fetch(`https://api.clarifai.com/v2/users/${USER_ID}/apps/${APP_ID}/models/types?per_page=20&page=1`, requestOptions)
.then(response => response.json())
.then(result => {
console.log('Clarifai API Response:', result);
result.model_types.forEach(modelType => {
if (modelType.id === MODEL_TYPE) {
modelType.model_type_fields.forEach(modelTypeField => {
if (modelTypeField.path.split('.').slice(-1)[0] === 'template') {
modelTypeField.model_type_enum_options.forEach(template => {
console.log('Template:', template);
});
}
});
}
});
})
.catch(error => console.error('Clarifai API Error:', error));
</script>
//index.js file
///////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, and model type ID.
// Change these strings to run your own example.
//////////////////////////////////////////////////////////////////////////////////////
const USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = "YOUR_PAT_HERE";
const APP_ID = "YOUR_APP_ID_HERE";
// Change this to list the template types of your preferred model
const MODEL_TYPE = "visual-classifier";
/////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
/////////////////////////////////////////////////////////////////////////////
const { ClarifaiStub, grpc } = require("clarifai-nodejs-grpc");
const stub = ClarifaiStub.grpc();
// This will be used by every Clarifai endpoint call
const metadata = new grpc.Metadata();
metadata.set("authorization", "Key " + PAT);
stub.ListModelTypes(
{
user_app_id: {
"user_id": USER_ID,
"app_id": APP_ID
},
page: 1,
per_page: 500
},
metadata,
(err, response) => {
if (err) {
throw new Error(err);
}
if (response.status.code !== 10000) {
throw new Error("Received status: " + response.status.description + "\n" + response.status.details);
}
response.model_types.forEach((modelType) => {
if (modelType.id === MODEL_TYPE) {
modelType.model_type_fields.forEach((modelTypeField) => {
if (modelTypeField.path.split('.').pop() === 'template') {
modelTypeField.model_type_enum_options.forEach((template) => {
console.log(template);
});
}
});
}
});
}
);
package com.clarifai.example;
import com.clarifai.grpc.api.*;
import com.clarifai.grpc.api.status.StatusCode;
import com.clarifai.channel.ClarifaiChannel;
import com.clarifai.credentials.ClarifaiCallCredentials;
public class ClarifaiExample {
//////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, and model type ID.
// Change these strings to run your own example.
//////////////////////////////////////////////////////////////////////////////////
static final String USER_ID = "YOUR_USER_ID_HERE";
//Your PAT (Personal Access Token) can be found in the portal under Authentication
static final String PAT = "YOUR_PAT_HERE";
static final String APP_ID = "YOUR_APP_ID_HERE";
// Change this to list the template types of your preferred model
static final String MODEL_TYPE = "visual-classifier";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
public static void main(String[] args) {
V2Grpc.V2BlockingStub stub = V2Grpc.newBlockingStub(ClarifaiChannel.INSTANCE.getGrpcChannel())
.withCallCredentials(new ClarifaiCallCredentials(PAT));
MultiModelTypeResponse listModelTypesResponse = stub.listModelTypes(
ListModelTypesRequest.newBuilder()
.setUserAppId(UserAppIDSet.newBuilder().setUserId(USER_ID).setAppId(APP_ID))
.build()
);
if (listModelTypesResponse.getStatus().getCode() != StatusCode.SUCCESS) {
throw new RuntimeException("List models failed, status: " + listModelTypesResponse.getStatus());
}
for (ModelType modelType : listModelTypesResponse.getModelTypesList()) {
if (modelType.getId().equals(MODEL_TYPE)) {
for (ModelTypeField modelTypeField : modelType.getModelTypeFieldsList()) {
if (modelTypeField.getPath().split("\\.")[modelTypeField.getPath().split("\\.").length - 1]
.equals("template")) {
for (ModelTypeEnumOption template : modelTypeField.getModelTypeEnumOptionsList()) {
System.out.println(template);
}
}
}
}
}
}
}
<?php
require __DIR__ . "/vendor/autoload.php";
///////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, and model type ID.
// Change these strings to run your own example.
/////////////////////////////////////////////////////////////////////////////////////
$USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the Account's Security section
$PAT = "YOUR_PAT_HERE";
$APP_ID = "YOUR_APP_ID_HERE";
// Change this to list the template types of your preferred model
$MODEL_TYPE = "visual-classifier";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
use Clarifai\ClarifaiClient;
use Clarifai\Api\ListModelTypesRequest;
use Clarifai\Api\Status\StatusCode;
use Clarifai\Api\UserAppIDSet;
$client = ClarifaiClient::grpc();
$metadata = ["Authorization" => ["Key " . $PAT]];
$userDataObject = new UserAppIDSet([
"user_id" => $USER_ID,
"app_id" => $APP_ID,
]);
// Let's make a RPC call to the Clarifai platform. It uses the opened gRPC client channel to communicate a
// request and then wait for the response
[$response, $status] = $client->ListModelTypes(
// The request object carries the request along with the request status and other metadata related to the request itself
new ListModelTypesRequest([
"user_app_id" => $userDataObject
]),
$metadata
)->wait();
// A response is returned and the first thing we do is check the status of it
// A successful response will have a status code of 0; otherwise, there is some error
if ($status->code !== 0) {
throw new Exception("Error: {$status->details}");
}
// In addition to the RPC response status, there is a Clarifai API status that reports if the operation was a success or failure
// (not just that the communication was successful)
if ($response->getStatus()->getCode() != StatusCode::SUCCESS) {
throw new Exception("Failure response: " . $response->getStatus()->getDescription() . " " . $response->getStatus()->getDetails());
}
foreach ($response->getModelTypes() as $modelType) {
if ($modelType->getId() === $MODEL_TYPE) {
foreach ($modelType->getModelTypeFields() as $modelTypeField) {
$pathComponents = explode('.', $modelTypeField->getPath());
if (end($pathComponents) === 'template') {
foreach ($modelTypeField->getModelTypeEnumOptions() as $template) {
echo $template->serializeToJsonString() . "\n";
}
}
}
}
}
Raw Output Example
model_type_fields {
path: "train_info.params.num_gpus"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "[internal_only] the number of gpus to train with."
placeholder: "num_gpus"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.logreg"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "Whether to use sigmoid units (logreg=1) or softmax (logreg=0)."
placeholder: "logreg"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: RANGE
default_value {
number_value: 256.0
}
description: "Input image size (minimum side dimension)."
placeholder: "image_size"
internal_only: true
model_type_range_info {
min: 32.0
max: 1024.0
step: 16.0
}
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 128.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
internal_only: true
model_type_range_info {
min: 1.0
max: 128.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.lrate"
field_type: NUMBER
default_value {
number_value: 0.1
}
description: "the learning rate (per minibatch)"
placeholder: "lrate"
internal_only: true
}
model_type_fields {
path: "train_info.params.base_gradient_multiplier"
field_type: NUMBER
default_value {
number_value: 0.001
}
description: "learning rate multipler applied to the pre-initialized backbone model weights"
placeholder: "base_gradient_multiplier"
internal_only: true
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 20.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
internal_only: true
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_items_per_epoch"
field_type: NUMBER
default_value {
number_value: 0.0
}
description: "number of input images that constitute an \"epoch\". Default is the number of images in the dataset."
placeholder: "num_items_per_epoch"
internal_only: true
}
model_type_fields {
path: "train_info.params.embeddings_layer"
field_type: STRING
default_value {
string_value: "mod5B.concat"
}
description: "the embedding layer to use as output from this model."
placeholder: "embeddings_layer"
internal_only: true
}
model_type_fields {
path: "train_info.params.average_horizontal_flips"
field_type: BOOLEAN
default_value {
bool_value: true
}
description: "if true then average the embeddings from the image and a horizontal flip of the image to get the final embedding vectors to output."
placeholder: "average_horizontal_flips"
internal_only: true
}
internal_only: true
id: "classification_basemodel_v1"
description: "A training template that uses Clarifais training implementation. "
model_type_fields {
path: "train_info.params.num_gpus"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "[internal_only] the number of gpus to train with."
placeholder: "num_gpus"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.model_cfg"
field_type: STRING
default_value {
string_value: "resnext"
}
description: "the underlying model configuration to use."
placeholder: "model_cfg"
internal_only: true
}
model_type_fields {
path: "train_info.params.preinit"
field_type: STRING
default_value {
string_value: "general-v1.5"
}
description: "specifies pre-initialized net to use."
placeholder: "preinit"
internal_only: true
}
model_type_fields {
path: "train_info.params.logreg"
field_type: NUMBER
default_value {
number_value: 1.0
}
description: "Whether to use sigmoid units (logreg=1) or softmax (logreg=0)."
placeholder: "logreg"
internal_only: true
}
model_type_fields {
path: "train_info.params.image_size"
field_type: RANGE
default_value {
number_value: 256.0
}
description: "Input image size (minimum side dimension)."
placeholder: "image_size"
internal_only: true
model_type_range_info {
min: 32.0
max: 1024.0
step: 16.0
}
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 64.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
internal_only: true
model_type_range_info {
min: 1.0
max: 128.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.init_epochs"
field_type: RANGE
default_value {
number_value: 25.0
}
description: "number of epochs to run at the initial learning rate."
placeholder: "init_epochs"
internal_only: true
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.step_epochs"
field_type: RANGE
default_value {
number_value: 7.0
}
description: "the number of epochs between learning rate decreases."
placeholder: "step_epochs"
internal_only: true
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 65.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
internal_only: true
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_lrate"
field_type: NUMBER
default_value {
number_value: 7.8125e-05
}
description: "the initial learning rate per item. The overall learning rate (per step) is set to lrate = batch_size * per_item_lrate"
placeholder: "per_item_lrate"
internal_only: true
}
model_type_fields {
path: "train_info.params.num_items_per_epoch"
field_type: NUMBER
default_value {
number_value: 0.0
}
description: "number of input images that constitute an \"epoch\". Default is the number of images in the dataset."
placeholder: "num_items_per_epoch"
internal_only: true
}
model_type_fields {
path: "train_info.params.inference_crop_type"
field_type: STRING
default_value {
string_value: "sorta2"
}
description: "the crop type to use for inference (used when evaluating the model)."
placeholder: "inference_crop_type"
internal_only: true
}
internal_only: true
id: "classification_cifar10_v1"
description: "A runner optimized for cifar10 training. Not to be used in real use cases. "
model_type_fields {
path: "train_info.params.num_gpus"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "the number of gpus to train with."
placeholder: "num_gpus"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: RANGE
default_value {
number_value: 32.0
}
description: "the image size to train on. This is for the minimum dimension."
placeholder: "image_size"
internal_only: true
model_type_range_info {
min: 32.0
max: 1024.0
step: 16.0
}
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 128.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
internal_only: true
model_type_range_info {
min: 1.0
max: 128.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 65.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
internal_only: true
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.inference_crop_type"
field_type: STRING
default_value {
string_value: "sorta2"
}
description: "the crop type to use for inference (used when evaluating the model)."
placeholder: "inference_crop_type"
internal_only: true
}
internal_only: true
id: "Clarifai_InceptionTransferEmbedNorm"
description: "A custom visual classifier template inspired by Inception networks and tuned for speed with\nother optimizations for transfer learning. "
model_type_fields {
path: "train_info.params.logreg"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "Whether to use sigmoid units (logreg=1) or softmax (logreg=0)."
placeholder: "logreg"
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: RANGE
default_value {
number_value: 256.0
}
description: "Input image size (minimum side dimension)."
placeholder: "image_size"
model_type_range_info {
min: 32.0
max: 1024.0
step: 16.0
}
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 128.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
model_type_range_info {
min: 1.0
max: 128.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.lrate"
field_type: NUMBER
default_value {
number_value: 0.1
}
description: "the learning rate (per minibatch)"
placeholder: "lrate"
}
model_type_fields {
path: "train_info.params.base_gradient_multiplier"
field_type: NUMBER
default_value {
number_value: 0.001
}
description: "learning rate multipler applied to the pre-initialized backbone model weights"
placeholder: "base_gradient_multiplier"
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 20.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_items_per_epoch"
field_type: NUMBER
default_value {
number_value: 0.0
}
description: "number of input images that constitute an \"epoch\". Default is the number of images in the dataset."
placeholder: "num_items_per_epoch"
}
model_type_fields {
path: "train_info.params.average_horizontal_flips"
field_type: BOOLEAN
default_value {
bool_value: true
}
description: "if true then average the embeddings from the image and a horizontal flip of the image to get the final embedding vectors to output."
placeholder: "average_horizontal_flips"
}
id: "Clarifai_ResNext"
description: "A custom visual classifier template inspired by ResNext networks. "
model_type_fields {
path: "train_info.params.logreg"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "Whether to use sigmoid units (logreg=1) or softmax (logreg=0)."
placeholder: "logreg"
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: RANGE
default_value {
number_value: 256.0
}
description: "Input image size (minimum side dimension)."
placeholder: "image_size"
model_type_range_info {
min: 32.0
max: 1024.0
step: 16.0
}
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 64.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
model_type_range_info {
min: 1.0
max: 128.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.init_epochs"
field_type: RANGE
default_value {
number_value: 25.0
}
description: "number of epochs to run at the initial learning rate."
placeholder: "init_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.step_epochs"
field_type: RANGE
default_value {
number_value: 7.0
}
description: "the number of epochs between learning rate decreases."
placeholder: "step_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 65.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_lrate"
field_type: NUMBER
default_value {
number_value: 7.8125e-05
}
description: "the initial learning rate per item. The overall learning rate (per step) is set to lrate = batch_size * per_item_lrate"
placeholder: "per_item_lrate"
}
model_type_fields {
path: "train_info.params.num_items_per_epoch"
field_type: NUMBER
default_value {
number_value: 0.0
}
description: "number of input images that constitute an \"epoch\". Default is the number of images in the dataset."
placeholder: "num_items_per_epoch"
}
id: "Clarifai_InceptionV2"
description: "A custom visual classifier template inspired by Inception-V2 networks. "
model_type_fields {
path: "train_info.params.logreg"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "Whether to use sigmoid units (logreg=1) or softmax (logreg=0)."
placeholder: "logreg"
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: RANGE
default_value {
number_value: 256.0
}
description: "Input image size (minimum side dimension)."
placeholder: "image_size"
model_type_range_info {
min: 32.0
max: 1024.0
step: 16.0
}
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 64.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
model_type_range_info {
min: 1.0
max: 128.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.init_epochs"
field_type: RANGE
default_value {
number_value: 25.0
}
description: "number of epochs to run at the initial learning rate."
placeholder: "init_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.step_epochs"
field_type: RANGE
default_value {
number_value: 7.0
}
description: "the number of epochs between learning rate decreases."
placeholder: "step_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 65.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_lrate"
field_type: NUMBER
default_value {
number_value: 7.8125e-05
}
description: "the initial learning rate per item. The overall learning rate (per step) is set to lrate = batch_size * per_item_lrate"
placeholder: "per_item_lrate"
}
model_type_fields {
path: "train_info.params.num_items_per_epoch"
field_type: NUMBER
default_value {
number_value: 0.0
}
description: "number of input images that constitute an \"epoch\". Default is the number of images in the dataset."
placeholder: "num_items_per_epoch"
}
id: "Clarifai_InceptionBatchNorm"
description: "A custom visual classifier template inspired by Inception networks tuned for speed. "
model_type_fields {
path: "train_info.params.logreg"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "Whether to use sigmoid units (logreg=1) or softmax (logreg=0)."
placeholder: "logreg"
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: RANGE
default_value {
number_value: 256.0
}
description: "Input image size (minimum side dimension)."
placeholder: "image_size"
model_type_range_info {
min: 32.0
max: 1024.0
step: 16.0
}
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 64.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
model_type_range_info {
min: 1.0
max: 128.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.init_epochs"
field_type: RANGE
default_value {
number_value: 25.0
}
description: "number of epochs to run at the initial learning rate."
placeholder: "init_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.step_epochs"
field_type: RANGE
default_value {
number_value: 7.0
}
description: "the number of epochs between learning rate decreases."
placeholder: "step_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 65.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_lrate"
field_type: NUMBER
default_value {
number_value: 7.8125e-05
}
description: "the initial learning rate per item. The overall learning rate (per step) is set to lrate = batch_size * per_item_lrate"
placeholder: "per_item_lrate"
}
model_type_fields {
path: "train_info.params.num_items_per_epoch"
field_type: NUMBER
default_value {
number_value: 0.0
}
description: "number of input images that constitute an \"epoch\". Default is the number of images in the dataset."
placeholder: "num_items_per_epoch"
}
id: "MMClassification"
description: "A training template that uses the MMClassification toolkit and a custom configuration "
model_type_fields {
path: "train_info.params.seed"
field_type: NUMBER
default_value {
number_value: -1.0
}
description: "[internal_only] the random seed to init training. If seed < 0, it is not set"
placeholder: "seed"
internal_only: true
}
model_type_fields {
path: "train_info.params.custom_config"
field_type: PYTHON_CODE
default_value {
string_value: "\n_base_ = \'/mmclassification/configs/resnext/resnext101_32x4d_b32x8_imagenet.py\'\nrunner = dict(type=\'EpochBasedRunner\', max_epochs=60)\ndata = dict(\n train=dict(\n data_prefix=\'\',\n ann_file=\'\',\n classes=\'\'),\n val=dict(\n data_prefix=\'\',\n ann_file=\'\',\n classes=\'\'))\n"
}
description: "custom mmclassification config, in python config file format. Note that the \'_base_\' field, if used, should be a config file relative to the parent directory \'/mmclassification/\', e.g. \"_base_ = \'/mmclassification/configs/efficientnet/efficientnet-b8_8xb32-01norm_in1k.py\'\". The \'num_classes\' field must be included somewhere in the config. The \'data\' section should include \'train\' and \'val\' sections, each with \'ann_file\', \'data_prefix\', and \'classes\' fields with empty strings as values. These values will be overwritten to be compatible with Clarifai\'s system, but must be included in the imported config."
placeholder: "custom_config"
}
model_type_fields {
path: "train_info.params.concepts_mutually_exclusive"
field_type: BOOLEAN
default_value {
bool_value: false
}
description: "whether the concepts are mutually exclusive. If true then each input is expected to only be tagged with a single concept."
placeholder: "concepts_mutually_exclusive"
}
model_type_fields {
path: "train_info.params.num_gpus"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "[internal_only] the number of gpus to train with."
placeholder: "num_gpus"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: ARRAY_OF_NUMBERS
default_value {
list_value {
values {
number_value: 320.0
}
}
}
description: "the image size for inference (the training image size is defined in the mmcv config). If a single value, specifies the size of the min side."
placeholder: "image_size"
}
id: "MMClassification_EfficientNet"
description: "A training template that uses the MMClassification toolkit and EfficientNet-B8 configuration "
model_type_fields {
path: "train_info.params.seed"
field_type: NUMBER
default_value {
number_value: -1.0
}
description: "[internal_only] the random seed to init training. If seed < 0, we will not set it."
placeholder: "seed"
internal_only: true
}
model_type_fields {
path: "train_info.params.num_gpus"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "[internal_only] the number of gpus to train with."
placeholder: "num_gpus"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: NUMBER
default_value {
number_value: 336.0
}
description: "the image size for training and inference. EfficientNet works on square images."
placeholder: "image_size"
internal_only: true
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 4.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
internal_only: true
model_type_range_info {
min: 1.0
max: 256.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 30.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
internal_only: true
model_type_range_info {
min: 1.0
max: 200.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_lrate"
field_type: NUMBER
default_value {
number_value: 0.000390625
}
description: "the initial learning rate per item. The overall learning rate (per step) is set to lrate = batch_size * per_item_lrate"
placeholder: "per_item_lrate"
internal_only: true
}
model_type_fields {
path: "train_info.params.weight_decay"
field_type: RANGE
default_value {
number_value: 0.0001
}
description: "the weight decay value"
placeholder: "weight_decay"
internal_only: true
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.momentum"
field_type: RANGE
default_value {
number_value: 0.9
}
description: "the momentum value for the SGD optimizer"
placeholder: "momentum"
internal_only: true
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.pretrained_weights"
field_type: ENUM
default_value {
string_value: "ImageNet-1k"
}
description: "whether to use pretrained weights."
placeholder: "pretrained_weights"
model_type_enum_options {
id: "None"
}
model_type_enum_options {
id: "ImageNet-1k"
}
internal_only: true
}
model_type_fields {
path: "train_info.params.flip_probability"
field_type: RANGE
default_value {
number_value: 0.5
}
description: "the probability an image will be flipped during training"
placeholder: "flip_probability"
internal_only: true
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.flip_direction"
field_type: ENUM
default_value {
string_value: "horizontal"
}
description: "the direction to randomly flip during training."
placeholder: "flip_direction"
model_type_enum_options {
id: "horizontal"
}
model_type_enum_options {
id: "vertical"
}
internal_only: true
}
model_type_fields {
path: "train_info.params.concepts_mutually_exclusive"
field_type: BOOLEAN
default_value {
bool_value: false
}
description: "whether the concepts are mutually exclusive. If true then each input is expected to only be tagged with a single concept."
placeholder: "concepts_mutually_exclusive"
internal_only: true
}
internal_only: true
id: "MMClassification_ResNet_50_RSB_A1"
description: "A training template that uses the MMClassification toolkit and ResNet-50 (rsb-a1) configuration "
model_type_fields {
path: "train_info.params.seed"
field_type: NUMBER
default_value {
number_value: -1.0
}
description: "[internal_only] the random seed to init training. If seed < 0, we will not set it."
placeholder: "seed"
internal_only: true
}
model_type_fields {
path: "train_info.params.num_gpus"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "[internal_only] the number of gpus to train with."
placeholder: "num_gpus"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: NUMBER
default_value {
number_value: 224.0
}
description: "the image size for training and inference. ResNet uses square images."
placeholder: "image_size"
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 64.0
}
description: "the batch size to use during training."
placeholder: "batch_size"
model_type_range_info {
min: 1.0
max: 256.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 60.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
model_type_range_info {
min: 1.0
max: 600.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_lrate"
field_type: NUMBER
default_value {
number_value: 1.953125e-05
}
description: "the initial learning rate per item. The overall learning rate (per step) is set to lrate = batch_size * per_item_lrate"
placeholder: "per_item_lrate"
}
model_type_fields {
path: "train_info.params.weight_decay"
field_type: RANGE
default_value {
number_value: 0.01
}
description: "the weight decay value"
placeholder: "weight_decay"
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_min_lrate"
field_type: NUMBER
default_value {
number_value: 1.5625e-08
}
description: "The minimum learning (per item) at end of training using cosine schedule."
placeholder: "per_item_min_lrate"
}
model_type_fields {
path: "train_info.params.warmup_iters"
field_type: NUMBER
default_value {
number_value: 100.0
}
description: "The number of steps in the warmup phase"
placeholder: "warmup_iters"
}
model_type_fields {
path: "train_info.params.warmup_ratio"
field_type: NUMBER
default_value {
number_value: 0.0001
}
description: " Warmup phase learning rate multiplier"
placeholder: "warmup_ratio"
}
model_type_fields {
path: "train_info.params.pretrained_weights"
field_type: ENUM
default_value {
string_value: "ImageNet-1k"
}
description: "whether to use pretrained weights."
placeholder: "pretrained_weights"
model_type_enum_options {
id: "None"
}
model_type_enum_options {
id: "ImageNet-1k"
}
}
model_type_fields {
path: "train_info.params.flip_probability"
field_type: RANGE
default_value {
number_value: 0.5
}
description: "the probability an image will be flipped during training"
placeholder: "flip_probability"
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.flip_direction"
field_type: ENUM
default_value {
string_value: "horizontal"
}
description: "the direction to randomly flip during training."
placeholder: "flip_direction"
model_type_enum_options {
id: "horizontal"
}
model_type_enum_options {
id: "vertical"
}
}
model_type_fields {
path: "train_info.params.concepts_mutually_exclusive"
field_type: BOOLEAN
default_value {
bool_value: false
}
description: "whether the concepts are mutually exclusive. If true then each input is expected to only be tagged with a single concept."
placeholder: "concepts_mutually_exclusive"
}
recommended: true
id: "MMClassification_ResNet_50"
description: "A training template that uses the MMClassification toolkit and ResNet-50 configuration "
model_type_fields {
path: "train_info.params.seed"
field_type: NUMBER
default_value {
number_value: -1.0
}
description: "[internal_only] the random seed to init training. If seed < 0, we will not set it."
placeholder: "seed"
internal_only: true
}
model_type_fields {
path: "train_info.params.num_gpus"
field_type: RANGE
default_value {
number_value: 1.0
}
description: "[internal_only] the number of gpus to train with."
placeholder: "num_gpus"
internal_only: true
model_type_range_info {
max: 1.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.image_size"
field_type: NUMBER
default_value {
number_value: 224.0
}
description: "the image size for training and inference. ResNet works on square images."
placeholder: "image_size"
internal_only: true
}
model_type_fields {
path: "train_info.params.batch_size"
field_type: RANGE
default_value {
number_value: 64.0
}
description: "the batch size to use per gpu during training."
placeholder: "batch_size"
internal_only: true
model_type_range_info {
min: 1.0
max: 256.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.num_epochs"
field_type: RANGE
default_value {
number_value: 60.0
}
description: "the total number of epochs to train for."
placeholder: "num_epochs"
internal_only: true
model_type_range_info {
min: 1.0
max: 600.0
step: 1.0
}
}
model_type_fields {
path: "train_info.params.per_item_lrate"
field_type: NUMBER
default_value {
number_value: 0.000390625
}
description: "the initial learning rate per item. The overall learning rate (per step) is set to lrate = batch_size * per_item_lrate"
placeholder: "per_item_lrate"
internal_only: true
}
model_type_fields {
path: "train_info.params.learning_rate_steps"
field_type: ARRAY_OF_NUMBERS
default_value {
list_value {
values {
number_value: 30.0
}
values {
number_value: 40.0
}
values {
number_value: 50.0
}
}
}
description: "epoch schedule for stepping down learning rate"
placeholder: "learning_rate_steps"
internal_only: true
}
model_type_fields {
path: "train_info.params.weight_decay"
field_type: RANGE
default_value {
number_value: 0.0001
}
description: "the weight decay value"
placeholder: "weight_decay"
internal_only: true
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.momentum"
field_type: RANGE
default_value {
number_value: 0.9
}
description: "the momentum value for the SGD optimizer"
placeholder: "momentum"
internal_only: true
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.pretrained_weights"
field_type: ENUM
default_value {
string_value: "ImageNet-1k"
}
description: "whether to use pretrained weights."
placeholder: "pretrained_weights"
model_type_enum_options {
id: "None"
}
model_type_enum_options {
id: "ImageNet-1k"
}
internal_only: true
}
model_type_fields {
path: "train_info.params.flip_probability"
field_type: RANGE
default_value {
number_value: 0.5
}
description: "the probability an image will be flipped during training"
placeholder: "flip_probability"
internal_only: true
model_type_range_info {
max: 1.0
}
}
model_type_fields {
path: "train_info.params.flip_direction"
field_type: ENUM
default_value {
string_value: "horizontal"
}
description: "the direction to randomly flip during training."
placeholder: "flip_direction"
model_type_enum_options {
id: "horizontal"
}
model_type_enum_options {
id: "vertical"
}
internal_only: true
}
model_type_fields {
path: "train_info.params.concepts_mutually_exclusive"
field_type: BOOLEAN
default_value {
bool_value: false
}
description: "whether the concepts are mutually exclusive. If true then each input is expected to only be tagged with a single concept."
placeholder: "concepts_mutually_exclusive"
internal_only: true
}
internal_only: true
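Once you have picked a template from a listing like the one above, you pass its id, along with any of its hyperparameters, in the train_info.params of your training request (see Train a Model below). Here is a minimal Python sketch, using the id and default values shown for the recommended MMClassification_ResNet_50_RSB_A1 template above:
from google.protobuf.struct_pb2 import Struct

# Build training params from a listed template; the hyperparameter names and
# default values are taken from that template's model_type_fields above.
params = Struct()
params.update({
    "template": "MMClassification_ResNet_50_RSB_A1",
    "num_epochs": 60,               # default from the template listing
    "per_item_lrate": 1.953125e-05  # default from the template listing
})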
Training Time Estimator
Before you train a deep fine-tuned model, you can use the Training Time Estimator feature to approximate the amount of time the training process could take. This offers transparency in expected training costs.
Instead of providing an estimated input count, an alternative approach is to specify a dataset version ID in the train_info.params of the request. Here is an example: params.update({"template":"MMDetection_FasterRCNN", "dataset_version_id":"dataset-version-1681974758238s"}).
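For reference, here is a sketch of how the params in the Python example below would change under that alternative approach (the dataset version ID is the placeholder from the text above):
from google.protobuf.struct_pb2 import Struct

# Estimate training time from an existing dataset version instead of an
# explicit input count; use a dataset version ID from your own app.
params = Struct()
params.update({
    "template": "MMDetection_FasterRCNN",
    "dataset_version_id": "dataset-version-1681974758238s"
})
# Then build the PostModelVersionsTrainingTimeEstimateRequest as in the
# example below, without setting estimated_input_count.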
- Python
- JavaScript (REST)
- Java
- cURL
###################################################################################################
# In this section, we set the user authentication, app ID, model ID, and estimated input count.
# Change these strings to run your own example.
##################################################################################################
USER_ID = "YOUR_USER_ID_HERE"
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
APP_ID = "YOUR_APP_ID_HERE"
# Change these to get your training time estimate
MODEL_ID = "YOUR_CUSTOM_MODEL_ID_HERE"
ESTIMATED_INPUT_COUNT = 100
##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf.struct_pb2 import Struct
channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
params = Struct()
params.update({
"template": "MMDetection_FasterRCNN"
})
metadata = (("authorization", "Key " + PAT),)
userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)
training_time_estimate_response = stub.PostModelVersionsTrainingTimeEstimate(
service_pb2.PostModelVersionsTrainingTimeEstimateRequest(
user_app_id=userDataObject,
model_id=MODEL_ID,
model_versions=[
resources_pb2.ModelVersion(
train_info=resources_pb2.TrainInfo(params=params)
),
],
estimated_input_count=ESTIMATED_INPUT_COUNT
),
metadata=metadata,
)
if training_time_estimate_response.status.code != status_code_pb2.SUCCESS:
print(training_time_estimate_response.status)
raise Exception("Post model outputs failed, status: " + training_time_estimate_response.status.description)
print(training_time_estimate_response)
<!--index.html file-->
<script>
///////////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and estimated input count.
// Change these strings to run your own example.
//////////////////////////////////////////////////////////////////////////////////////////////////
const USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = "YOUR_PAT_HERE";
const APP_ID = "YOUR_APP_ID_HERE";
// Change these to get your training time estimate
const MODEL_ID = "YOUR_CUSTOM_MODEL_ID_HERE";
const ESTIMATED_INPUT_COUNT = 100;
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
const raw = JSON.stringify({
"user_app_id": {
"user_id": USER_ID,
"app_id": APP_ID
},
"model_versions": [{
"train_info": {
"params": {
"template": "MMDetection_FasterRCNN"
}
},
}],
"estimated_input_count": ESTIMATED_INPUT_COUNT
});
const requestOptions = {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Key " + PAT
},
body: raw
};
fetch(`https://api.clarifai.com/v2/users/${USER_ID}/apps/${APP_ID}/models/${MODEL_ID}/versions/time_estimate/`, requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log("error", error));
</script>
package com.clarifai.example;
import com.clarifai.grpc.api.*;
import com.clarifai.channel.ClarifaiChannel;
import com.clarifai.credentials.ClarifaiCallCredentials;
import com.clarifai.grpc.api.status.StatusCode;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
public class ClarifaiExample {
//////////////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and estimated input count.
// Change these strings to run your own example.
/////////////////////////////////////////////////////////////////////////////////////////////////////
static final String USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the portal under Authentication
static final String PAT = "YOUR_PAT_HERE";
static final String APP_ID = "YOUR_APP_ID_HERE";
// Change these to get your training time estimate
static final String MODEL_ID = "YOUR_CUSTOM_MODEL_ID_HERE";
static final int ESTIMATED_INPUT_COUNT = 100;
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
public static void main(String[] args) {
V2Grpc.V2BlockingStub stub = V2Grpc.newBlockingStub(ClarifaiChannel.INSTANCE.getGrpcChannel())
.withCallCredentials(new ClarifaiCallCredentials(PAT));
Struct.Builder params = Struct.newBuilder()
.putFields("template", Value.newBuilder().setStringValue("MMDetection_FasterRCNN").build());
MultiTrainingTimeEstimateResponse trainingTimeEstimateResponse = stub.postModelVersionsTrainingTimeEstimate(
PostModelVersionsTrainingTimeEstimateRequest.newBuilder()
.setUserAppId(UserAppIDSet.newBuilder().setUserId(USER_ID).setAppId(APP_ID))
.setModelId(MODEL_ID)
.addModelVersions(ModelVersion.newBuilder()
.setTrainInfo(TrainInfo.newBuilder()
.setParams(params)
)
)
.setEstimatedInputCount(ESTIMATED_INPUT_COUNT)
.build()
);
if (trainingTimeEstimateResponse.getStatus().getCode() != StatusCode.SUCCESS) {
throw new RuntimeException("Post model outputs failed, status: " + trainingTimeEstimateResponse.getStatus());
}
System.out.print(trainingTimeEstimateResponse);
}
}
curl -X POST "https://api.clarifai.com/v2/users/YOUR_USER_ID_HERE/apps/YOUR_APP_ID_HERE/models/YOUR_MODEL_ID_HERE/versions/time_estimate/" \
-H "Authorization: Key YOUR_PAT_HERE" \
-H "Content-Type: application/json" \
-d '{
"model_versions": [{
"train_info": {
"params": {
"template": "MMDetection_FasterRCNN"
}
}
}],
"estimated_input_count": 100
}'
Raw Output Example
status {
code: SUCCESS
description: "Ok"
req_id: "f45dfcf36746a567f690744f0b3805a7"
}
training_time_estimates {
seconds: 308
}
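If the call succeeds, the estimate can also be read programmatically from the repeated training_time_estimates field shown in the raw output above. A minimal sketch, reusing the training_time_estimate_response object from the Python example:

# Read the first estimate from the response and convert seconds to minutes
estimate = training_time_estimate_response.training_time_estimates[0]
print("Estimated training time: about %.1f minutes" % (estimate.seconds / 60))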
Train a Model
After creating a model, you can train it. Training is an asynchronous operation: the request returns immediately while training continues in the background (a sketch for polling the training status appears after the examples below).
Training enables the deep fine-tuned model to learn patterns, relationships, and representations from the input data. It allows the model to adjust its parameters based on the provided input data so that it can make accurate predictions.
You can repeat the training operation as often as you like. By adding more annotated input data and retraining, you can refine the model until it predicts exactly how you want it to.
The PostModelVersions
endpoint kicks off training and creates a new model version. You can also add concepts to a model when creating the model version, but only if the model type supports them, as defined in the model type parameters.
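To check which parameters and fields a model type supports, you can call the ListModelTypes method mentioned earlier. Below is a minimal, illustrative Python sketch; it prints only the available type IDs, and printing the full model_type message reveals each type's parameters:

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
metadata = (("authorization", "Key " + "YOUR_PAT_HERE"),)
userDataObject = resources_pb2.UserAppIDSet(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE")

# List the available model types; each entry describes a type's
# hyperparameters and which fields (such as concepts) it accepts
list_model_types_response = stub.ListModelTypes(
    service_pb2.ListModelTypesRequest(user_app_id=userDataObject),
    metadata=metadata,
)
for model_type in list_model_types_response.model_types:
    print(model_type.id)  # print(model_type) shows the full parameter details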
Example
Below is an example of how you would train a visual classifier model.
We use the params.update()
method to set the template and hyperparameters for the visual classifier model. If you train another model type, you'll need to specify the template and hyperparameters associated with that particular type.
- Python
- JavaScript (REST)
- Java
- cURL
########################################################################################
# In this section, we set the user authentication, app ID, model ID, and concept IDs.
# Change these strings to run your own example.
########################################################################################
USER_ID = "YOUR_USER_ID_HERE"
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
APP_ID = "YOUR_APP_ID_HERE"
# Change these to train your own model
MODEL_ID = "petsID"
CONCEPT_ID_1 = "ferrari23"
CONCEPT_ID_2 = "outdoors23"
##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf.struct_pb2 import Struct
channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
params = Struct()
params.update(
{
"template": "MMClassification_ResNet_50_RSB_A1",
"num_epochs": 2
}
)
metadata = (("authorization", "Key " + PAT),)
userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)
post_model_versions = stub.PostModelVersions(
service_pb2.PostModelVersionsRequest(
user_app_id=userDataObject,
model_id=MODEL_ID,
model_versions=[
resources_pb2.ModelVersion(
train_info=resources_pb2.TrainInfo(
params=params,
),
output_info=resources_pb2.OutputInfo(
data=resources_pb2.Data(
concepts=[
resources_pb2.Concept(id=CONCEPT_ID_1),
resources_pb2.Concept(id=CONCEPT_ID_2)
]
),
),
)
],
),
metadata=metadata,
)
if post_model_versions.status.code != status_code_pb2.SUCCESS:
print(post_model_versions.status)
raise Exception("Post models versions failed, status: " + post_model_versions.status.description)
<!--index.html file-->
<script>
//////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and concept IDs.
// Change these strings to run your own example.
/////////////////////////////////////////////////////////////////////////////////////////
const USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = "YOUR_PAT_HERE";
const APP_ID = "YOUR_APP_ID_HERE";
// Change these to train your own model
const MODEL_ID = "petsID";
const CONCEPT_ID_1 = "ferrari23";
const CONCEPT_ID_2 = "outdoors23";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
const raw = JSON.stringify({
"user_app_id": {
"user_id": USER_ID,
"app_id": APP_ID
},
"model_versions": [{
"train_info": {
"params": {
"template": "MMClassification_ResNet_50_RSB_A1",
"num_epochs": 2
}
},
"output_info": {
"data": {
"concepts": [
{
"id": CONCEPT_ID_1
},
{
"id": CONCEPT_ID_2
}
]
}
}
}]
});
const requestOptions = {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Key " + PAT
},
body: raw
};
fetch(`https://api.clarifai.com/v2/models/${MODEL_ID}/versions`, requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log("error", error));
</script>
package com.clarifai.example;
import com.clarifai.grpc.api.*;
import com.clarifai.channel.ClarifaiChannel;
import com.clarifai.credentials.ClarifaiCallCredentials;
import com.clarifai.grpc.api.status.StatusCode;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
public class ClarifaiExample {
//////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, model ID, and concept IDs.
// Change these strings to run your own example.
//////////////////////////////////////////////////////////////////////////////////////////
static final String USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the portal under Authentication
static final String PAT = "YOUR_PAT_HERE";
static final String APP_ID = "YOUR_APP_ID_HERE";
// Change these to train your own model
static final String MODEL_ID = "petsID";
static final String CONCEPT_ID_1 = "ferrari23";
static final String CONCEPT_ID_2 = "outdoors23";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
public static void main(String[] args) {
V2Grpc.V2BlockingStub stub = V2Grpc.newBlockingStub(ClarifaiChannel.INSTANCE.getGrpcChannel())
.withCallCredentials(new ClarifaiCallCredentials(PAT));
Struct.Builder params = Struct.newBuilder()
.putFields("template", Value.newBuilder().setStringValue("MMClassification_ResNet_50_RSB_A1").build())
.putFields("num_epochs", Value.newBuilder().setNumberValue(2).build());
SingleModelResponse postModelVersionsResponse = stub.postModelVersions(
PostModelVersionsRequest.newBuilder()
.setUserAppId(UserAppIDSet.newBuilder().setUserId(USER_ID).setAppId(APP_ID))
.setModelId(MODEL_ID)
.addModelVersions(ModelVersion.newBuilder()
.setTrainInfo(TrainInfo.newBuilder()
.setParams(params)
)
.setOutputInfo(OutputInfo.newBuilder()
.setData(Data.newBuilder()
.addConcepts(Concept.newBuilder()
.setId(CONCEPT_ID_1)
)
.addConcepts(Concept.newBuilder()
.setId(CONCEPT_ID_2)
)
)
)
)
.build()
);
if (postModelVersionsResponse.getStatus().getCode() != StatusCode.SUCCESS) {
throw new RuntimeException("Post model outputs failed, status: " + postModelVersionsResponse.getStatus());
}
}
}
curl -X POST "https://api.clarifai.com/v2/users/YOUR_USER_ID_HERE/apps/YOUR_APP_ID_HERE/models/YOUR_MODEL_ID_HERE/versions" \
-H "Authorization: Key YOUR_PAT_HERE" \
-H "Content-Type: application/json" \
-d '{
"model_versions": [{
"train_info": {
"params": {
"template": "MMClassification_ResNet_50_RSB_A1",
"num_epochs": 2
}
},
"output_info": {
"data": {
"concepts": [
{
"id": "ferrari23"
},
{
"id": "outdoors23"
}
]
}
}
}]
}'
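Since PostModelVersions returns before training completes, you can poll the new version's status to find out when it is done. Below is a minimal, illustrative Python sketch using the GetModelVersion endpoint; the MODEL_VERSION_ID placeholder is the version ID returned by PostModelVersions, and in practice you should also check for training-failure status codes rather than looping indefinitely:

import time

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
metadata = (("authorization", "Key " + "YOUR_PAT_HERE"),)
userDataObject = resources_pb2.UserAppIDSet(user_id="YOUR_USER_ID_HERE", app_id="YOUR_APP_ID_HERE")

MODEL_ID = "petsID"
MODEL_VERSION_ID = "YOUR_MODEL_VERSION_ID_HERE"  # ID returned by PostModelVersions

while True:
    get_model_version_response = stub.GetModelVersion(
        service_pb2.GetModelVersionRequest(
            user_app_id=userDataObject,
            model_id=MODEL_ID,
            version_id=MODEL_VERSION_ID,
        ),
        metadata=metadata,
    )
    status = get_model_version_response.model_version.status
    if status.code == status_code_pb2.MODEL_TRAINED:
        print("Training complete")
        break
    # Still queued or training; also handle failure codes here in practice
    print("Current status:", status.description)
    time.sleep(10)  # wait before polling again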
Incrementally Train a Model
You can update existing deep fine-tuned models with new data without retraining from scratch. After a model version is trained, a checkpoint file is automatically saved, and you can initiate incremental training from that previously trained checkpoint.
Below is an example of how you would perform incremental training from a specific version of a visual detector model.
Incremental model training was introduced in the 10.1 release.
- Python
- JavaScript (REST)
- Java
- cURL
###################################################################################################
# In this section, we set the user authentication, app ID, and details for incremental training.
# Change these strings to run your own example.
###################################################################################################
USER_ID = "YOUR_USER_ID_HERE"
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
APP_ID = "YOUR_APP_ID_HERE"
# Change these to incrementally train your own model
MODEL_ID = "detection-test"
MODEL_VERSION_ID = "5af1bd0fb79d47289ab82d5bb2325c81"
CONCEPT_ID = "face"
##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf.struct_pb2 import Struct
channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
params = Struct()
params.update({
"template": "MMDetection_SSD",
"num_epochs": 1
})
metadata = (("authorization", "Key " + PAT),)
userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)
post_model_versions = stub.PostModelVersions(
service_pb2.PostModelVersionsRequest(
user_app_id=userDataObject,
model_id=MODEL_ID,
model_versions=[
resources_pb2.ModelVersion(
train_info=resources_pb2.TrainInfo(
params=params,
resume_from_model=resources_pb2.Model(
id=MODEL_ID,
model_version=resources_pb2.ModelVersion(id=MODEL_VERSION_ID),
),
),
output_info=resources_pb2.OutputInfo(
data=resources_pb2.Data(
concepts=[resources_pb2.Concept(id=CONCEPT_ID)]
),
),
)
],
),
metadata=metadata,
)
if post_model_versions.status.code != status_code_pb2.SUCCESS:
print(post_model_versions.status)
raise Exception(
"Post models versions failed, status: " + post_model_versions.status.description
)
print(post_model_versions)
<!--index.html file-->
<script>
////////////////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, and details for incremental training.
// Change these strings to run your own example.
////////////////////////////////////////////////////////////////////////////////////////////////////////
const USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the Account's Security section
const PAT = "YOUR_PAT_HERE";
const APP_ID = "YOUR_APP_ID_HERE";
// Change these to incrementally train your own model
const MODEL_ID = "detection-test";
const MODEL_VERSION_ID = "5af1bd0fb79d47289ab82d5bb2325c81";
const CONCEPT_ID = "face";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
const raw = JSON.stringify({
"user_app_id": {
"user_id": USER_ID,
"app_id": APP_ID
},
"model_versions": [{
"train_info": {
"params": {
"template": "MMDetection_SSD",
"num_epochs": 1
},
"resume_from_model": {
"id": MODEL_ID,
"model_version": {
"id": MODEL_VERSION_ID
}
}
},
"output_info": {
"data": {
"concepts": [
{
"id": CONCEPT_ID
}
]
}
}
}]
});
const requestOptions = {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Key " + PAT
},
body: raw
};
fetch(`https://api.clarifai.com/v2/models/${MODEL_ID}/versions`, requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log("error", error));
</script>
package com.clarifai.example;
import com.clarifai.grpc.api.*;
import com.clarifai.channel.ClarifaiChannel;
import com.clarifai.credentials.ClarifaiCallCredentials;
import com.clarifai.grpc.api.status.StatusCode;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
public class ClarifaiExample {
//////////////////////////////////////////////////////////////////////////////////////////////////////
// In this section, we set the user authentication, app ID, and details for incremental training.
// Change these strings to run your own example.
//////////////////////////////////////////////////////////////////////////////////////////////////////
static final String USER_ID = "YOUR_USER_ID_HERE";
// Your PAT (Personal Access Token) can be found in the portal under Authentication
static final String PAT = "YOUR_PAT_HERE";
static final String APP_ID = "YOUR_APP_ID_HERE";
// Change these to incrementally train your own model
static final String MODEL_ID = "detection-test";
static final String MODEL_VERSION_ID = "5af1bd0fb79d47289ab82d5bb2325c81";
static final String CONCEPT_ID = "face";
///////////////////////////////////////////////////////////////////////////////////
// YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
///////////////////////////////////////////////////////////////////////////////////
public static void main(String[] args) {
V2Grpc.V2BlockingStub stub = V2Grpc.newBlockingStub(ClarifaiChannel.INSTANCE.getGrpcChannel())
.withCallCredentials(new ClarifaiCallCredentials(PAT));
Struct.Builder params = Struct.newBuilder()
.putFields("template", Value.newBuilder().setStringValue("MMDetection_SSD").build())
.putFields("num_epochs", Value.newBuilder().setNumberValue(1).build());
SingleModelResponse postModelVersionsResponse = stub.postModelVersions(
PostModelVersionsRequest.newBuilder()
.setUserAppId(UserAppIDSet.newBuilder().setUserId(USER_ID).setAppId(APP_ID))
.setModelId(MODEL_ID)
.addModelVersions(ModelVersion.newBuilder()
.setTrainInfo(TrainInfo.newBuilder()
.setParams(params)
.setResumeFromModel(Model.newBuilder()
.setId(MODEL_ID)
.setModelVersion(ModelVersion.newBuilder()
.setId(MODEL_VERSION_ID)
)
)
)
.setOutputInfo(OutputInfo.newBuilder()
.setData(Data.newBuilder()
.addConcepts(Concept.newBuilder()
.setId(CONCEPT_ID)
)
)
)
)
.build()
);
if (postModelVersionsResponse.getStatus().getCode() != StatusCode.SUCCESS) {
throw new RuntimeException("Post model outputs failed, status: " + postModelVersionsResponse.getStatus());
}
}
}
curl -X POST "https://api.clarifai.com/v2/users/YOUR_USER_ID_HERE/apps/YOUR_APP_ID_HERE/models/detection-test/versions" \
-H "Authorization: Key YOUR_PAT_HERE" \
-H "Content-Type: application/json" \
-d '{
"model_versions": [{
"train_info": {
"params": {
"template": "MMDetection_SSD",
"num_epochs": 1
},
"resume_from_model": {
"id": "detection-test",
"model_version": {
"id": "5af1bd0fb79d47289ab82d5bb2325c81"
}
}
},
"output_info": {
"data": {
"concepts": [
{
"id": "face"
}
]
}
}
}]
}'
Train Using Your Own Template
You can create your own training template and use it to train a deep fine-tuned model.
To do so, create a Python configuration file and pass its contents as a training parameter to the PostModelVersions
endpoint. Here is an example of a training_config.py
file that defines a custom template using the MMDetection open source toolbox for visual detection tasks.
- Python
# Inherit from the YOLOF base configuration that ships with MMDetection
_base_ = '/mmdetection/configs/yolof/yolof_r50_c5_8x8_1x_coco.py'

# Override the detection head; the class count is set to 0 as a placeholder
model = dict(
    bbox_head=dict(num_classes=0))

# Dataset annotation files, image prefixes, and class lists are left
# empty in the template
data = dict(
    train=dict(
        ann_file='',
        img_prefix='',
        classes=''
    ),
    val=dict(
        ann_file='',
        img_prefix='',
        classes=''))

# Replace the base config's optimizer (_delete_=True) with Adam
optimizer = dict(
    _delete_=True,
    type='Adam',
    lr=0.0001,
    weight_decay=0.0001)

# Use a cosine annealing learning rate schedule with a linear warmup
lr_config = dict(
    _delete_=True,
    policy='CosineAnnealing',
    warmup='linear',
    warmup_iters=1000,
    warmup_ratio=0.1,
    min_lr_ratio=1e-5)

# Train for 10 epochs
runner = dict(
    _delete_=True,
    type='EpochBasedRunner',
    max_epochs=10)
Here is how you could use the custom template to train a deep fine-tuned model.
- Python
########################################################################################
# In this section, we set the user authentication, app ID, model ID, and concept ID.
# Change these strings to run your own example.
########################################################################################
USER_ID = "YOUR_USER_ID_HERE"
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
APP_ID = "YOUR_APP_ID_HERE"
# Change these to train your own model
MODEL_ID = "test_config"
CONCEPT_ID_1 = "house"
##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf.struct_pb2 import Struct
channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
params = Struct()
params.update({"template": "MMDetection"})
CONFIG_FILE = "training_config.py"
# Read the custom template file and pass its contents as a training parameter
with open(CONFIG_FILE, "r") as f:
    params.update({"custom_config": f.read()})
metadata = (("authorization", "Key " + PAT),)
userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)
post_model_versions = stub.PostModelVersions(
service_pb2.PostModelVersionsRequest(
user_app_id=userDataObject,
model_id=MODEL_ID,
model_versions=[
resources_pb2.ModelVersion(
train_info=resources_pb2.TrainInfo(
params=params,
),
output_info=resources_pb2.OutputInfo(
data=resources_pb2.Data(
concepts=[
resources_pb2.Concept(id=CONCEPT_ID_1, value=1)
]
),
)
)
],
),
metadata=metadata,
)
if post_model_versions.status.code != status_code_pb2.SUCCESS:
print(post_model_versions.status)
raise Exception(
"Post models versions failed, status: " + post_model_versions.status.description
)
print(post_model_versions)
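On success, the ID of the newly created version can be read from the response printed above. A short sketch, assuming the SingleModelResponse layout returned by PostModelVersions:

# Sketch: extract the new version's ID, e.g. to poll its training status later
new_version_id = post_model_versions.model.model_version.id
print("New model version ID:", new_version_id)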