
Custom Transfer Learning Text Model

Develop your own custom text classifier using transfer learning


The Clarifai API can learn concepts not only from images and videos, but also from text.

In this walkthrough, you'll learn how to create and use a custom text model, train it on your own text data using the power of Clarifai's base text model, and make predictions on new text examples.

You'll also learn how to use our world-class transfer learning technology to create and train text models accurately and quickly.

All of the steps below can also be done in the Clarifai Portal, but here you'll learn how to perform them programmatically via the API, using our gRPC Python client. The examples translate directly to any of our other gRPC clients.

info

This walkthrough assumes you have already created your Clarifai user account and a Personal Access Token (PAT). It also assumes you have set up the gRPC Python client together with the initial code; see the client installation page. A quick setup check is also shown below.

For debugging purposes, you can print any response returned by a method call to the console to inspect its full structure and data.
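
Before moving on, it's worth confirming that the client is set up correctly. The snippet below is a minimal sanity check using the same channel and stub pattern used throughout this walkthrough; if the imports fail, install the client first (for example, with pip install clarifai-grpc), as described on the client installation page.

# Quick sanity check: create the gRPC channel and stub used in every example below
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import service_pb2_grpc

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)
print("Clarifai gRPC stub created:", stub is not None)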

Create a New Application

The first step is manual. In the Clarifai Portal, create a new application with Text/Document selected as the primary input type. The Base Workflow will be automatically selected for you.

Add a Batch of Texts

We'll now add several text inputs that we will later use as training data in our custom model. The idea is that we'll create a model which can differentiate between positive and negative sentences (in a grammatical sense).

We'll mark each input with one of the two concepts: positive or negative.

The texts can be added either directly (as raw text) or from a URL.

##########################################################################
# In this section, we set the user authentication, app ID, and negative
# and positive texts. Change these strings to run your own example.
##########################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Add your own batch of texts
positive_raw_texts = [
    "Marie is a published author.",
    "In three years, everyone will be happy.",
    "Nora Roberts is the most prolific romance writer the world has ever known.",
    "She has written more than 225 books.",
    "If you walk into Knoxville, you'll find a shop named Rala.",
    "There are more than 850 miles of hiking trails in the Great Smoky Mountains.",
    "Harrison Ford is 6'1\".",
    "According to Reader's Digest, in the original script of Return of The Jedi, Han Solo died.",
    "Kate travels to Doolin, Ireland every year for a writers' conference.",
    "Fort Stevens was decommissioned by the United States military in 1947.",
]
negative_text_urls = [
    "https://samples.clarifai.com/negative_sentence_1.txt",
    "https://samples.clarifai.com/negative_sentence_2.txt",
    "https://samples.clarifai.com/negative_sentence_3.txt",
    "https://samples.clarifai.com/negative_sentence_4.txt",
    "https://samples.clarifai.com/negative_sentence_5.txt",
    "https://samples.clarifai.com/negative_sentence_6.txt",
    "https://samples.clarifai.com/negative_sentence_7.txt",
    "https://samples.clarifai.com/negative_sentence_8.txt",
    "https://samples.clarifai.com/negative_sentence_9.txt",
    "https://samples.clarifai.com/negative_sentence_10.txt",
]

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_inputs_response = stub.PostInputs(
    service_pb2.PostInputsRequest(
        user_app_id=userDataObject,
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(raw=raw_text),
                    concepts=[resources_pb2.Concept(id="positive", value=1)]
                )
            )
            for raw_text in positive_raw_texts
        ] + [
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        url=text_url,
                        allow_duplicate_url=True
                    ),
                    concepts=[resources_pb2.Concept(id="negative", value=1)]
                )
            )
            for text_url in negative_text_urls
        ]
    ),
    metadata=metadata
)

if post_inputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_inputs_response.status)
    raise Exception("Failed response, status: " + post_inputs_response.status.description)

# Uncomment this line to see the structure and data of the response
#print(post_inputs_response)
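
If you want to confirm what was just added, the response echoes the created inputs. Here's a small, optional sketch that prints each input's server-assigned ID and initial status, using the post_inputs_response object from above:

# Print the ID and current status of every input we just added
for added_input in post_inputs_response.inputs:
    print(added_input.id, added_input.status.code, added_input.status.description)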

Wait for Inputs to Download

Let's now wait for all the inputs to download.

###############################################################
# In this section, we set the user authentication and app ID.
# Change these strings to run your own example.
###############################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
import time

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

while True:
    list_inputs_response = stub.ListInputs(
        service_pb2.ListInputsRequest(
            user_app_id=userDataObject,
            page=1,
            per_page=100
        ),
        metadata=metadata
    )

    if list_inputs_response.status.code != status_code_pb2.SUCCESS:
        print(list_inputs_response.status)
        raise Exception("Failed response, status: " + list_inputs_response.status.description)

    for the_input in list_inputs_response.inputs:
        input_status_code = the_input.status.code
        if input_status_code == status_code_pb2.INPUT_DOWNLOAD_SUCCESS:
            continue
        elif input_status_code in (status_code_pb2.INPUT_DOWNLOAD_PENDING, status_code_pb2.INPUT_DOWNLOAD_IN_PROGRESS):
            print("Not all inputs have been downloaded yet. Checking again shortly.")
            break
        else:
            error_message = (
                str(input_status_code) + " " +
                the_input.status.description + " " +
                the_input.status.details
            )
            raise Exception(
                f"Expected inputs to download, but got {error_message}. Full response: {list_inputs_response}"
            )
    else:
        # Once all inputs have been successfully downloaded, break the while True loop
        print("All inputs have been successfully downloaded.")
        break
    time.sleep(2)
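
This walkthrough adds only 20 inputs, so a single page of 100 is enough for the check above. With larger datasets you would page through ListInputs instead; here's a minimal sketch, assuming the same stub, userDataObject, and metadata as above:

# Collect all inputs in the app by paging through ListInputs
all_inputs = []
page = 1
while True:
    response = stub.ListInputs(
        service_pb2.ListInputsRequest(user_app_id=userDataObject, page=page, per_page=100),
        metadata=metadata
    )
    if response.status.code != status_code_pb2.SUCCESS:
        raise Exception("Failed response, status: " + response.status.description)
    if not response.inputs:
        break
    all_inputs.extend(response.inputs)
    page += 1

print(f"Total inputs in the app: {len(all_inputs)}")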

Create a Custom Model

Let's create a custom transfer learning model (also called an "embedding-classifier").

################################################################################
# In this section, we set the user authentication, app ID, and the ID of the
# model we want to create. Change these strings to run your own example.
################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change this to create your own custom model
MODEL_ID = 'text-model-1'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_models_response = stub.PostModels(
    service_pb2.PostModelsRequest(
        user_app_id=userDataObject,
        models=[
            resources_pb2.Model(
                id=MODEL_ID
            )
        ]
    ),
    metadata=metadata
)

if post_models_response.status.code != status_code_pb2.SUCCESS:
    print(post_models_response.status)
    raise Exception("Failed response, status: " + post_models_response.status.description)

# Uncomment this line to see the structure and data of the response
#print(post_models_response)
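
The request above relies on the default model type for the app. If you prefer to be explicit, the Model message also accepts a model_type_id; the model created in this walkthrough resolves to the embedding-classifier type, as the evaluation output later in this walkthrough shows. A minimal variant of the request above:

# Same request as above, but with the transfer learning model type set explicitly
post_models_response = stub.PostModels(
    service_pb2.PostModelsRequest(
        user_app_id=userDataObject,
        models=[
            resources_pb2.Model(
                id=MODEL_ID,
                model_type_id="embedding-classifier"
            )
        ]
    ),
    metadata=metadata
)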

Train the Model

Let's train the model using the positive and negative concepts.

All inputs in our application associated with these two concepts will be used as training data. The model will learn from these inputs so that we can later predict on new text examples.

################################################################################
# In this section, we set the user authentication, app ID, and the ID of the
# model we want to train. Change these strings to run your own example.
################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to train your own custom model
MODEL_ID = 'text-model-1'
CONCEPT_ID_1 = 'positive'
CONCEPT_ID_2 = 'negative'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_versions_response = stub.PostModelVersions(
    service_pb2.PostModelVersionsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID,
        model_versions=[
            resources_pb2.ModelVersion(
                output_info=resources_pb2.OutputInfo(
                    data=resources_pb2.Data(
                        concepts=[
                            resources_pb2.Concept(id=CONCEPT_ID_1, value=1),  # 1 means true, this concept is present
                            resources_pb2.Concept(id=CONCEPT_ID_2, value=1)
                        ]
                    ),
                )
            )
        ]
    ),
    metadata=metadata
)

if post_model_versions_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_versions_response.status)
    raise Exception("Failed response, status: " + post_model_versions_response.status.description)

# Uncomment this line to see the structure and data of the response
#print(post_model_versions_response)

Wait for Model Training to Complete

Let's wait for the model training to complete.

Each model training run produces a new model version. Notice that at the bottom of the following code example, we place the model version ID in its own variable.

We'll use it later to specify which model version we want to use (since a model can have multiple versions).

###############################################################################
# In this section, we set the user authentication, app ID, and the ID of the
# model to wait for its training. Change these strings to run your own example.
################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change this to wait for your own model's training
MODEL_ID = 'text-model-1'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
import time

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

while True:
    get_model_response = stub.GetModel(
        service_pb2.GetModelRequest(
            user_app_id=userDataObject,
            model_id=MODEL_ID
        ),
        metadata=metadata
    )

    if get_model_response.status.code != status_code_pb2.SUCCESS:
        print(get_model_response.status)
        raise Exception("Failed response, status: " + get_model_response.status.description)

    version_status_code = get_model_response.model.model_version.status.code
    if version_status_code == status_code_pb2.MODEL_TRAINED:
        print("The model has been successfully trained.")
        break
    elif version_status_code in (status_code_pb2.MODEL_QUEUED_FOR_TRAINING, status_code_pb2.MODEL_TRAINING):
        print("The model hasn't been trained yet. Trying again shortly.")
        time.sleep(2)
    else:
        error_message = (
            str(get_model_response.status.code) + " " +
            get_model_response.status.description + " " +
            get_model_response.status.details
        )
        raise Exception(
            f"Expected model to train, but got {error_message}. Full response: {get_model_response}"
        )

model_version_id = get_model_response.model.model_version.id
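
If you ever need to look up a model's versions after the fact (for example, to find an earlier training run), you can list them. A minimal sketch, assuming the same stub, userDataObject, and metadata as above:

# List the versions of the model along with their training status
list_model_versions_response = stub.ListModelVersions(
    service_pb2.ListModelVersionsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID
    ),
    metadata=metadata
)

for version in list_model_versions_response.model_versions:
    print(version.id, version.status.description)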

Predict on New Inputs

Let's now use the trained custom model to predict new text examples.

###########################################################################
# In this section, we set the user authentication, app ID, model details,
# and new input samples. Change these strings to run your own example.
###########################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to make your own predictions
MODEL_ID = 'text-model-1'
MODEL_VERSION_ID = '49219b5968624221ac488303dde55327'
INPUT_TEXT = 'Butchart Gardens contains over 900 varieties of plants.'
INPUT_URL = 'https://samples.clarifai.com/negative_sentence_12.txt'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID,
        # By default, the latest model version will be used, but it doesn't hurt to set it explicitly
        version_id=MODEL_VERSION_ID,
        inputs=[
            resources_pb2.Input(data=resources_pb2.Data(text=resources_pb2.Text(raw=INPUT_TEXT))),
            resources_pb2.Input(data=resources_pb2.Data(text=resources_pb2.Text(url=INPUT_URL))),
        ]
    ),
    metadata=metadata
)

if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception("Failed response, status: " + post_model_outputs_response.status.description)

for output in post_model_outputs_response.outputs:
    text_object = output.input.data.text
    val = text_object.raw if text_object.raw else text_object.url

    print(f"The following concepts were predicted for the input `{val}`:")
    for concept in output.data.concepts:
        print(f"\t{concept.name}: {concept.value:.2f}")
Text Output Example
The following concepts were predicted for the input `Butchart Gardens contains over 900 varieties of plants.`:
    positive: 0.83
    negative: 0.17
The following concepts were predicted for the input `https://samples.clarifai.com/negative_sentence_12.txt`:
    negative: 1.00
    positive: 0.00
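
As the example output shows, the two concept values for each input sum to 1, so the highest-valued concept is the predicted label. A small, optional sketch built on the same response object:

# For each output, report only the single most likely concept
for output in post_model_outputs_response.outputs:
    top_concept = max(output.data.concepts, key=lambda concept: concept.value)
    print(f"Predicted label: {top_concept.name} ({top_concept.value:.2f})")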

Start Model Evaluation

Let's now test the performance of the model by using model evaluation. Take note of the evaluation_id returned in the response, as you will need it for the next step.

tip

See the Evaluating Models section to learn more.

###########################################################################
# In this section, we set the user authentication, app ID, model ID,
# and model version. Change these strings to run your own example.
###########################################################################

USER_ID = "YOUR_USER_ID_HERE"
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
APP_ID = "YOUR_APP_ID_HERE"
# Change these to make your own evaluations
MODEL_ID = "text-model-1"
MODEL_VERSION_ID = "3ad2c152232e46ebb16ed31f67dc54d8"

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (("authorization", "Key " + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_evaluations = stub.PostEvaluations(
    service_pb2.PostEvaluationsRequest(
        user_app_id=userDataObject,
        eval_metrics=[
            resources_pb2.EvalMetrics(
                model=resources_pb2.Model(
                    app_id=APP_ID,
                    user_id=USER_ID,
                    id=MODEL_ID,
                    model_version=resources_pb2.ModelVersion(id=MODEL_VERSION_ID),
                )
            )
        ],
    ),
    metadata=metadata,
)

if post_model_evaluations.status.code != status_code_pb2.SUCCESS:
    print(post_model_evaluations.status)
    raise Exception("Failed response, status: " + post_model_evaluations.status.description)

print(post_model_evaluations)
Raw Output Example
status {
code: SUCCESS
description: "Ok"
req_id: "cbc9cbf2478ea008c68b74cc07126c05"
}
eval_metrics {
status {
code: MODEL_QUEUED_FOR_EVALUATION
description: "Model is queued for evaluation."
}
user_id: "ei2leoz3s3iy"
app_id: "text-search-app"
id: "e223fa4ac14b4784b223cd31cc545f34"
}
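
Rather than copying the evaluation_id by hand from the printed response, you can read it from the response object directly, since each requested evaluation comes back as an entry in eval_metrics:

# Grab the ID of the evaluation we just queued
EVALUATION_ID = post_model_evaluations.eval_metrics[0].id
print("Evaluation ID:", EVALUATION_ID)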

Wait for Model Evaluation Results

Model evaluation takes some time, depending on the amount of data the model has.

Let's wait for it to complete, and then print all the results it gives us.

###########################################################################################
# In this section, we set the user authentication, app ID, and the model evaluation ID.
# Change these strings to run your own example.
##########################################################################################

USER_ID = "YOUR_USER_ID_HERE"
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
APP_ID = "text-search-app"
# Change these to wait for your own model's evaluation results
EVALUATION_ID = "e223fa4ac14b4784b223cd31cc545f34"

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
import time

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (("authorization", "Key " + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

while True:
    get_evaluation_response = stub.GetEvaluation(
        service_pb2.GetEvaluationRequest(
            user_app_id=userDataObject,
            evaluation_id=EVALUATION_ID,
            fields=resources_pb2.FieldsValue(
                confusion_matrix=True,
                cooccurrence_matrix=True,
                label_counts=True,
                binary_metrics=True,
                test_set=True,
                metrics_by_area=True,
                metrics_by_class=True,
            ),
        ),
        metadata=metadata,
    )

    if get_evaluation_response.status.code != status_code_pb2.SUCCESS:
        print(get_evaluation_response.status)
        raise Exception("Get model version metrics failed: " + get_evaluation_response.status.description)

    print(get_evaluation_response)
    metrics_status_code = get_evaluation_response.eval_metrics.status.code
    if metrics_status_code == status_code_pb2.MODEL_EVALUATED:
        print("The model has been successfully evaluated.")
        break
    elif metrics_status_code in (
        status_code_pb2.MODEL_NOT_EVALUATED,
        status_code_pb2.MODEL_QUEUED_FOR_EVALUATION,
        status_code_pb2.MODEL_EVALUATING,
    ):
        print("The model hasn't been evaluated yet. Trying again shortly.")
        time.sleep(2)
    else:
        error_message = (
            str(get_evaluation_response.status.code) + " " +
            get_evaluation_response.status.description + " " +
            get_evaluation_response.status.details
        )
        raise Exception(
            f"Expected model to evaluate, but got {error_message}. Full response: {get_evaluation_response}"
        )

print("The model metrics response object:")
print(get_evaluation_response)
Raw Output Example
status {
code: SUCCESS
description: "Ok"
req_id: "aa8517cf0d340d6afef484ddae938124"
}
eval_metrics {
status {
code: MODEL_EVALUATED
description: "Model was successfully evaluated."
}
summary {
macro_avg_roc_auc: 1.0
macro_avg_f1_score: 0.8809523582458496
macro_std_f1_score: 0.13677529990673065
macro_avg_precision: 0.9375
macro_avg_recall: 0.875
}
confusion_matrix {
matrix {
predicted: "positive"
actual: "positive"
value: 0.7497637867927551
predicted_concept {
id: "positive"
name: "positive"
value: 0.7497637867927551
app_id: "text-search-app"
}
actual_concept {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
}
matrix {
predicted: "negative"
actual: "positive"
value: 0.2502362132072449
predicted_concept {
id: "negative"
name: "negative"
value: 0.2502362132072449
app_id: "text-search-app"
}
actual_concept {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
}
matrix {
predicted: "positive"
actual: "negative"
value: 3.033356961168465e-07
predicted_concept {
id: "positive"
name: "positive"
value: 3.033356961168465e-07
app_id: "text-search-app"
}
actual_concept {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
}
matrix {
predicted: "negative"
actual: "negative"
value: 0.9999997019767761
predicted_concept {
id: "negative"
name: "negative"
value: 0.9999997019767761
app_id: "text-search-app"
}
actual_concept {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
}
concept_ids: "positive"
concept_ids: "negative"
}
cooccurrence_matrix {
matrix {
row: "positive"
col: "positive"
count: 10
}
matrix {
row: "negative"
col: "negative"
count: 11
}
concept_ids: "positive"
concept_ids: "negative"
}
label_counts {
positive_label_counts {
concept_name: "positive"
count: 10
concept {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
}
positive_label_counts {
concept_name: "negative"
count: 11
concept {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
}
}
binary_metrics {
num_pos: 2
num_neg: 2
num_tot: 4
roc_auc: 1.0
f1: 0.8333333730697632
concept {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
roc_curve {
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 0.0
fpr: 1.0
tpr: 0.0
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 0.75
tpr: 1.0
thresholds: 1.0
thresholds: 0.9900000095367432
thresholds: 0.9800000190734863
thresholds: 0.9700000286102295
thresholds: 0.9599999785423279
thresholds: 0.949999988079071
thresholds: 0.9399999976158142
thresholds: 0.9300000071525574
thresholds: 0.9200000166893005
thresholds: 0.9100000262260437
thresholds: 0.8999999761581421
thresholds: 0.8899999856948853
thresholds: 0.8799999952316284
thresholds: 0.8700000047683716
thresholds: 0.8600000143051147
thresholds: 0.8500000238418579
thresholds: 0.8399999737739563
thresholds: 0.8299999833106995
thresholds: 0.8199999928474426
thresholds: 0.8100000023841858
thresholds: 0.800000011920929
thresholds: 0.7900000214576721
thresholds: 0.7799999713897705
thresholds: 0.7699999809265137
thresholds: 0.7599999904632568
thresholds: 0.75
thresholds: 0.7400000095367432
thresholds: 0.7300000190734863
thresholds: 0.7200000286102295
thresholds: 0.7099999785423279
thresholds: 0.699999988079071
thresholds: 0.6899999976158142
thresholds: 0.6800000071525574
thresholds: 0.6700000166893005
thresholds: 0.6600000262260437
thresholds: 0.6499999761581421
thresholds: 0.6399999856948853
thresholds: 0.6299999952316284
thresholds: 0.6200000047683716
thresholds: 0.6100000143051147
thresholds: 0.6000000238418579
thresholds: 0.5899999737739563
thresholds: 0.5799999833106995
thresholds: 0.5699999928474426
thresholds: 0.5600000023841858
thresholds: 0.550000011920929
thresholds: 0.5400000214576721
thresholds: 0.5299999713897705
thresholds: 0.5199999809265137
thresholds: 0.5099999904632568
thresholds: 0.5
thresholds: 0.49000000953674316
thresholds: 0.47999998927116394
thresholds: 0.4699999988079071
thresholds: 0.46000000834465027
thresholds: 0.44999998807907104
thresholds: 0.4399999976158142
thresholds: 0.4300000071525574
thresholds: 0.41999998688697815
thresholds: 0.4099999964237213
thresholds: 0.4000000059604645
thresholds: 0.38999998569488525
thresholds: 0.3799999952316284
thresholds: 0.3700000047683716
thresholds: 0.36000001430511475
thresholds: 0.3499999940395355
thresholds: 0.3400000035762787
thresholds: 0.33000001311302185
thresholds: 0.3199999928474426
thresholds: 0.3100000023841858
thresholds: 0.30000001192092896
thresholds: 0.28999999165534973
thresholds: 0.2800000011920929
thresholds: 0.27000001072883606
thresholds: 0.25999999046325684
thresholds: 0.25
thresholds: 0.23999999463558197
thresholds: 0.23000000417232513
thresholds: 0.2199999988079071
thresholds: 0.20999999344348907
thresholds: 0.20000000298023224
thresholds: 0.1899999976158142
thresholds: 0.18000000715255737
thresholds: 0.17000000178813934
thresholds: 0.1599999964237213
thresholds: 0.15000000596046448
thresholds: 0.14000000059604645
thresholds: 0.12999999523162842
thresholds: 0.11999999731779099
thresholds: 0.10999999940395355
thresholds: 0.10000000149011612
thresholds: 0.09000000357627869
thresholds: 0.07999999821186066
thresholds: 0.07000000029802322
thresholds: 0.05999999865889549
thresholds: 0.05000000074505806
thresholds: 0.03999999910593033
thresholds: 0.029999999329447746
thresholds: 0.019999999552965164
thresholds: 0.009999999776482582
thresholds: 0.0
}
precision_recall_curve {
recall: 1.0
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.0
precision: 0.5
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
thresholds: 0.0
thresholds: 0.009999999776482582
thresholds: 0.019999999552965164
thresholds: 0.029999999329447746
thresholds: 0.03999999910593033
thresholds: 0.05000000074505806
thresholds: 0.05999999865889549
thresholds: 0.07000000029802322
thresholds: 0.07999999821186066
thresholds: 0.09000000357627869
thresholds: 0.10000000149011612
thresholds: 0.10999999940395355
thresholds: 0.11999999731779099
thresholds: 0.12999999523162842
thresholds: 0.14000000059604645
thresholds: 0.15000000596046448
thresholds: 0.1599999964237213
thresholds: 0.17000000178813934
thresholds: 0.18000000715255737
thresholds: 0.1899999976158142
thresholds: 0.20000000298023224
thresholds: 0.20999999344348907
thresholds: 0.2199999988079071
thresholds: 0.23000000417232513
thresholds: 0.23999999463558197
thresholds: 0.25
thresholds: 0.25999999046325684
thresholds: 0.27000001072883606
thresholds: 0.2800000011920929
thresholds: 0.28999999165534973
thresholds: 0.30000001192092896
thresholds: 0.3100000023841858
thresholds: 0.3199999928474426
thresholds: 0.33000001311302185
thresholds: 0.3400000035762787
thresholds: 0.3499999940395355
thresholds: 0.36000001430511475
thresholds: 0.3700000047683716
thresholds: 0.3799999952316284
thresholds: 0.38999998569488525
thresholds: 0.4000000059604645
thresholds: 0.4099999964237213
thresholds: 0.41999998688697815
thresholds: 0.4300000071525574
thresholds: 0.4399999976158142
thresholds: 0.44999998807907104
thresholds: 0.46000000834465027
thresholds: 0.4699999988079071
thresholds: 0.47999998927116394
thresholds: 0.49000000953674316
thresholds: 0.5
thresholds: 0.5099999904632568
thresholds: 0.5199999809265137
thresholds: 0.5299999713897705
thresholds: 0.5400000214576721
thresholds: 0.550000011920929
thresholds: 0.5600000023841858
thresholds: 0.5699999928474426
thresholds: 0.5799999833106995
thresholds: 0.5899999737739563
thresholds: 0.6000000238418579
thresholds: 0.6100000143051147
thresholds: 0.6200000047683716
thresholds: 0.6299999952316284
thresholds: 0.6399999856948853
thresholds: 0.6499999761581421
thresholds: 0.6600000262260437
thresholds: 0.6700000166893005
thresholds: 0.6800000071525574
thresholds: 0.6899999976158142
thresholds: 0.699999988079071
thresholds: 0.7099999785423279
thresholds: 0.7200000286102295
thresholds: 0.7300000190734863
thresholds: 0.7400000095367432
thresholds: 0.75
thresholds: 0.7599999904632568
thresholds: 0.7699999809265137
thresholds: 0.7799999713897705
thresholds: 0.7900000214576721
thresholds: 0.800000011920929
thresholds: 0.8100000023841858
thresholds: 0.8199999928474426
thresholds: 0.8299999833106995
thresholds: 0.8399999737739563
thresholds: 0.8500000238418579
thresholds: 0.8600000143051147
thresholds: 0.8700000047683716
thresholds: 0.8799999952316284
thresholds: 0.8899999856948853
thresholds: 0.8999999761581421
thresholds: 0.9100000262260437
thresholds: 0.9200000166893005
thresholds: 0.9300000071525574
thresholds: 0.9399999976158142
thresholds: 0.949999988079071
thresholds: 0.9599999785423279
thresholds: 0.9700000286102295
thresholds: 0.9800000190734863
thresholds: 0.9900000095367432
thresholds: 1.0
}
}
binary_metrics {
num_pos: 2
num_neg: 2
num_tot: 4
roc_auc: 1.0
f1: 0.9285714626312256
concept {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
roc_curve {
fpr: 0.0
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 1.0
tpr: 0.5
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
thresholds: 1.0
thresholds: 0.9900000095367432
thresholds: 0.9800000190734863
thresholds: 0.9700000286102295
thresholds: 0.9599999785423279
thresholds: 0.949999988079071
thresholds: 0.9399999976158142
thresholds: 0.9300000071525574
thresholds: 0.9200000166893005
thresholds: 0.9100000262260437
thresholds: 0.8999999761581421
thresholds: 0.8899999856948853
thresholds: 0.8799999952316284
thresholds: 0.8700000047683716
thresholds: 0.8600000143051147
thresholds: 0.8500000238418579
thresholds: 0.8399999737739563
thresholds: 0.8299999833106995
thresholds: 0.8199999928474426
thresholds: 0.8100000023841858
thresholds: 0.800000011920929
thresholds: 0.7900000214576721
thresholds: 0.7799999713897705
thresholds: 0.7699999809265137
thresholds: 0.7599999904632568
thresholds: 0.75
thresholds: 0.7400000095367432
thresholds: 0.7300000190734863
thresholds: 0.7200000286102295
thresholds: 0.7099999785423279
thresholds: 0.699999988079071
thresholds: 0.6899999976158142
thresholds: 0.6800000071525574
thresholds: 0.6700000166893005
thresholds: 0.6600000262260437
thresholds: 0.6499999761581421
thresholds: 0.6399999856948853
thresholds: 0.6299999952316284
thresholds: 0.6200000047683716
thresholds: 0.6100000143051147
thresholds: 0.6000000238418579
thresholds: 0.5899999737739563
thresholds: 0.5799999833106995
thresholds: 0.5699999928474426
thresholds: 0.5600000023841858
thresholds: 0.550000011920929
thresholds: 0.5400000214576721
thresholds: 0.5299999713897705
thresholds: 0.5199999809265137
thresholds: 0.5099999904632568
thresholds: 0.5
thresholds: 0.49000000953674316
thresholds: 0.47999998927116394
thresholds: 0.4699999988079071
thresholds: 0.46000000834465027
thresholds: 0.44999998807907104
thresholds: 0.4399999976158142
thresholds: 0.4300000071525574
thresholds: 0.41999998688697815
thresholds: 0.4099999964237213
thresholds: 0.4000000059604645
thresholds: 0.38999998569488525
thresholds: 0.3799999952316284
thresholds: 0.3700000047683716
thresholds: 0.36000001430511475
thresholds: 0.3499999940395355
thresholds: 0.3400000035762787
thresholds: 0.33000001311302185
thresholds: 0.3199999928474426
thresholds: 0.3100000023841858
thresholds: 0.30000001192092896
thresholds: 0.28999999165534973
thresholds: 0.2800000011920929
thresholds: 0.27000001072883606
thresholds: 0.25999999046325684
thresholds: 0.25
thresholds: 0.23999999463558197
thresholds: 0.23000000417232513
thresholds: 0.2199999988079071
thresholds: 0.20999999344348907
thresholds: 0.20000000298023224
thresholds: 0.1899999976158142
thresholds: 0.18000000715255737
thresholds: 0.17000000178813934
thresholds: 0.1599999964237213
thresholds: 0.15000000596046448
thresholds: 0.14000000059604645
thresholds: 0.12999999523162842
thresholds: 0.11999999731779099
thresholds: 0.10999999940395355
thresholds: 0.10000000149011612
thresholds: 0.09000000357627869
thresholds: 0.07999999821186066
thresholds: 0.07000000029802322
thresholds: 0.05999999865889549
thresholds: 0.05000000074505806
thresholds: 0.03999999910593033
thresholds: 0.029999999329447746
thresholds: 0.019999999552965164
thresholds: 0.009999999776482582
thresholds: 0.0
}
precision_recall_curve {
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 0.5
precision: 0.5
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 1.0
thresholds: 0.0
thresholds: 0.009999999776482582
thresholds: 0.019999999552965164
thresholds: 0.029999999329447746
thresholds: 0.03999999910593033
thresholds: 0.05000000074505806
thresholds: 0.05999999865889549
thresholds: 0.07000000029802322
thresholds: 0.07999999821186066
thresholds: 0.09000000357627869
thresholds: 0.10000000149011612
thresholds: 0.10999999940395355
thresholds: 0.11999999731779099
thresholds: 0.12999999523162842
thresholds: 0.14000000059604645
thresholds: 0.15000000596046448
thresholds: 0.1599999964237213
thresholds: 0.17000000178813934
thresholds: 0.18000000715255737
thresholds: 0.1899999976158142
thresholds: 0.20000000298023224
thresholds: 0.20999999344348907
thresholds: 0.2199999988079071
thresholds: 0.23000000417232513
thresholds: 0.23999999463558197
thresholds: 0.25
thresholds: 0.25999999046325684
thresholds: 0.27000001072883606
thresholds: 0.2800000011920929
thresholds: 0.28999999165534973
thresholds: 0.30000001192092896
thresholds: 0.3100000023841858
thresholds: 0.3199999928474426
thresholds: 0.33000001311302185
thresholds: 0.3400000035762787
thresholds: 0.3499999940395355
thresholds: 0.36000001430511475
thresholds: 0.3700000047683716
thresholds: 0.3799999952316284
thresholds: 0.38999998569488525
thresholds: 0.4000000059604645
thresholds: 0.4099999964237213
thresholds: 0.41999998688697815
thresholds: 0.4300000071525574
thresholds: 0.4399999976158142
thresholds: 0.44999998807907104
thresholds: 0.46000000834465027
thresholds: 0.4699999988079071
thresholds: 0.47999998927116394
thresholds: 0.49000000953674316
thresholds: 0.5
thresholds: 0.5099999904632568
thresholds: 0.5199999809265137
thresholds: 0.5299999713897705
thresholds: 0.5400000214576721
thresholds: 0.550000011920929
thresholds: 0.5600000023841858
thresholds: 0.5699999928474426
thresholds: 0.5799999833106995
thresholds: 0.5899999737739563
thresholds: 0.6000000238418579
thresholds: 0.6100000143051147
thresholds: 0.6200000047683716
thresholds: 0.6299999952316284
thresholds: 0.6399999856948853
thresholds: 0.6499999761581421
thresholds: 0.6600000262260437
thresholds: 0.6700000166893005
thresholds: 0.6800000071525574
thresholds: 0.6899999976158142
thresholds: 0.699999988079071
thresholds: 0.7099999785423279
thresholds: 0.7200000286102295
thresholds: 0.7300000190734863
thresholds: 0.7400000095367432
thresholds: 0.75
thresholds: 0.7599999904632568
thresholds: 0.7699999809265137
thresholds: 0.7799999713897705
thresholds: 0.7900000214576721
thresholds: 0.800000011920929
thresholds: 0.8100000023841858
thresholds: 0.8199999928474426
thresholds: 0.8299999833106995
thresholds: 0.8399999737739563
thresholds: 0.8500000238418579
thresholds: 0.8600000143051147
thresholds: 0.8700000047683716
thresholds: 0.8799999952316284
thresholds: 0.8899999856948853
thresholds: 0.8999999761581421
thresholds: 0.9100000262260437
thresholds: 0.9200000166893005
thresholds: 0.9300000071525574
thresholds: 0.9399999976158142
thresholds: 0.949999988079071
thresholds: 0.9599999785423279
thresholds: 0.9700000286102295
thresholds: 0.9800000190734863
thresholds: 0.9900000095367432
thresholds: 1.0
}
}
test_set {
predicted_concepts {
id: "positive"
name: "positive"
value: 0.9999601244926453
app_id: "text-search-app"
}
predicted_concepts {
id: "negative"
name: "negative"
value: 3.9902104617794976e-05
app_id: "text-search-app"
}
ground_truth_concepts {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
input {
id: "UyZACEDqN6WhAQOO"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/e9dc85e0585a7a6d32da791e6cfa2c52"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/e9dc85e0585a7a6d32da791e6cfa2c52"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 507
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687435506
nanos: 969900000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "negative"
name: "negative"
value: 0.9999999403953552
app_id: "text-search-app"
}
predicted_concepts {
id: "positive"
name: "positive"
value: 6.925120743517255e-08
app_id: "text-search-app"
}
ground_truth_concepts {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
input {
id: "A2SsbMJrHqiAUmnr"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/77fe0c1ff92bcb6d876ec8e551e9268f"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/77fe0c1ff92bcb6d876ec8e551e9268f"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 440
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687436288
nanos: 370201000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "negative"
name: "negative"
value: 0.9999988675117493
app_id: "text-search-app"
}
predicted_concepts {
id: "positive"
name: "positive"
value: 1.1424209560573217e-06
app_id: "text-search-app"
}
ground_truth_concepts {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
input {
id: "NnIBViXRFrXAGKkd"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/65d33e371bbdd16cbcf7a2826ab0bcc1"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/65d33e371bbdd16cbcf7a2826ab0bcc1"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 359
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687435598
nanos: 758239000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "positive"
name: "positive"
value: 0.9999937415122986
app_id: "text-search-app"
}
predicted_concepts {
id: "negative"
name: "negative"
value: 6.254503659874899e-06
app_id: "text-search-app"
}
ground_truth_concepts {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
input {
id: "OK2cJALgpFQYafF2"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/90fd9b5c2f9af536520add1a2da13db8"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/90fd9b5c2f9af536520add1a2da13db8"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 399
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687435419
nanos: 661211000
}
modified_at {
seconds: 1690525794
nanos: 810009000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "positive"
name: "positive"
value: 0.9985570907592773
app_id: "text-search-app"
}
predicted_concepts {
id: "negative"
name: "negative"
value: 0.001442914130166173
app_id: "text-search-app"
}
ground_truth_concepts {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
input {
id: "YViTPPBzt3pTlmgY"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/5bccdbc8c9f8fb9a673d61dc2e32e40b"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/5bccdbc8c9f8fb9a673d61dc2e32e40b"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 413
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687436129
nanos: 772735000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
id: "e223fa4ac14b4784b223cd31cc545f34"
eval_info {
params {
fields {
key: "dataset_id"
value {
string_value: ""
}
}
fields {
key: "dataset_version_id"
value {
string_value: ""
}
}
fields {
key: "use_kfold"
value {
bool_value: true
}
}
}
}
model {
id: "text-model-1"
app_id: "text-search-app"
model_version {
id: "3ad2c152232e46ebb16ed31f67dc54d8"
created_at {
seconds: 1693564041
nanos: 515456000
}
status {
code: MODEL_TRAINED
description: "Model is trained and ready"
}
active_concept_count: 2
metrics {
status {
code: MODEL_EVALUATED
description: "Model was successfully evaluated."
}
summary {
macro_avg_roc_auc: 1.0
macro_avg_f1_score: 0.8809523582458496
macro_std_f1_score: 0.13677529990673065
macro_avg_precision: 0.9375
macro_avg_recall: 0.875
}
}
total_input_count: 21
completed_at {
seconds: 1693564044
nanos: 915680000
}
visibility {
gettable: PRIVATE
}
app_id: "text-search-app"
user_id: "alfrick"
metadata {
}
output_info {
output_config {
}
message: "Show output_info with: GET /models/{model_id}/output_info"
params {
fields {
key: "max_concepts"
value {
number_value: 20.0
}
}
fields {
key: "min_value"
value {
number_value: 0.0
}
}
fields {
key: "select_concepts"
value {
list_value {
}
}
}
}
}
input_info {
base_embed_model {
id: "multilingual-text-embedding"
app_id: "main"
model_version {
id: "9b33adf15280465b857163ddaaacdcb1"
}
user_id: "clarifai"
model_type_id: "text-embedder"
}
}
train_info {
params {
fields {
key: "dataset_id"
value {
string_value: ""
}
}
fields {
key: "dataset_version_id"
value {
string_value: ""
}
}
fields {
key: "enrich_dataset"
value {
string_value: "Automatic"
}
}
}
}
import_info {
}
}
user_id: "alfrick"
model_type_id: "embedding-classifier"
}
user_id: "alfrick"
app_id: "text-search-app"
}

The model has been successfully evaluated.
The model metrics response object:
(The final print statement outputs the same eval_metrics response object shown above.)
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.75
recall: 0.0
precision: 0.5
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
precision: 1.0
thresholds: 0.0
thresholds: 0.009999999776482582
thresholds: 0.019999999552965164
thresholds: 0.029999999329447746
thresholds: 0.03999999910593033
thresholds: 0.05000000074505806
thresholds: 0.05999999865889549
thresholds: 0.07000000029802322
thresholds: 0.07999999821186066
thresholds: 0.09000000357627869
thresholds: 0.10000000149011612
thresholds: 0.10999999940395355
thresholds: 0.11999999731779099
thresholds: 0.12999999523162842
thresholds: 0.14000000059604645
thresholds: 0.15000000596046448
thresholds: 0.1599999964237213
thresholds: 0.17000000178813934
thresholds: 0.18000000715255737
thresholds: 0.1899999976158142
thresholds: 0.20000000298023224
thresholds: 0.20999999344348907
thresholds: 0.2199999988079071
thresholds: 0.23000000417232513
thresholds: 0.23999999463558197
thresholds: 0.25
thresholds: 0.25999999046325684
thresholds: 0.27000001072883606
thresholds: 0.2800000011920929
thresholds: 0.28999999165534973
thresholds: 0.30000001192092896
thresholds: 0.3100000023841858
thresholds: 0.3199999928474426
thresholds: 0.33000001311302185
thresholds: 0.3400000035762787
thresholds: 0.3499999940395355
thresholds: 0.36000001430511475
thresholds: 0.3700000047683716
thresholds: 0.3799999952316284
thresholds: 0.38999998569488525
thresholds: 0.4000000059604645
thresholds: 0.4099999964237213
thresholds: 0.41999998688697815
thresholds: 0.4300000071525574
thresholds: 0.4399999976158142
thresholds: 0.44999998807907104
thresholds: 0.46000000834465027
thresholds: 0.4699999988079071
thresholds: 0.47999998927116394
thresholds: 0.49000000953674316
thresholds: 0.5
thresholds: 0.5099999904632568
thresholds: 0.5199999809265137
thresholds: 0.5299999713897705
thresholds: 0.5400000214576721
thresholds: 0.550000011920929
thresholds: 0.5600000023841858
thresholds: 0.5699999928474426
thresholds: 0.5799999833106995
thresholds: 0.5899999737739563
thresholds: 0.6000000238418579
thresholds: 0.6100000143051147
thresholds: 0.6200000047683716
thresholds: 0.6299999952316284
thresholds: 0.6399999856948853
thresholds: 0.6499999761581421
thresholds: 0.6600000262260437
thresholds: 0.6700000166893005
thresholds: 0.6800000071525574
thresholds: 0.6899999976158142
thresholds: 0.699999988079071
thresholds: 0.7099999785423279
thresholds: 0.7200000286102295
thresholds: 0.7300000190734863
thresholds: 0.7400000095367432
thresholds: 0.75
thresholds: 0.7599999904632568
thresholds: 0.7699999809265137
thresholds: 0.7799999713897705
thresholds: 0.7900000214576721
thresholds: 0.800000011920929
thresholds: 0.8100000023841858
thresholds: 0.8199999928474426
thresholds: 0.8299999833106995
thresholds: 0.8399999737739563
thresholds: 0.8500000238418579
thresholds: 0.8600000143051147
thresholds: 0.8700000047683716
thresholds: 0.8799999952316284
thresholds: 0.8899999856948853
thresholds: 0.8999999761581421
thresholds: 0.9100000262260437
thresholds: 0.9200000166893005
thresholds: 0.9300000071525574
thresholds: 0.9399999976158142
thresholds: 0.949999988079071
thresholds: 0.9599999785423279
thresholds: 0.9700000286102295
thresholds: 0.9800000190734863
thresholds: 0.9900000095367432
thresholds: 1.0
}
}
binary_metrics {
num_pos: 2
num_neg: 2
num_tot: 4
roc_auc: 1.0
f1: 0.9285714626312256
concept {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
roc_curve {
fpr: 0.0
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 0.25
fpr: 1.0
tpr: 0.5
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
tpr: 1.0
thresholds: 1.0
thresholds: 0.9900000095367432
thresholds: 0.9800000190734863
thresholds: 0.9700000286102295
thresholds: 0.9599999785423279
thresholds: 0.949999988079071
thresholds: 0.9399999976158142
thresholds: 0.9300000071525574
thresholds: 0.9200000166893005
thresholds: 0.9100000262260437
thresholds: 0.8999999761581421
thresholds: 0.8899999856948853
thresholds: 0.8799999952316284
thresholds: 0.8700000047683716
thresholds: 0.8600000143051147
thresholds: 0.8500000238418579
thresholds: 0.8399999737739563
thresholds: 0.8299999833106995
thresholds: 0.8199999928474426
thresholds: 0.8100000023841858
thresholds: 0.800000011920929
thresholds: 0.7900000214576721
thresholds: 0.7799999713897705
thresholds: 0.7699999809265137
thresholds: 0.7599999904632568
thresholds: 0.75
thresholds: 0.7400000095367432
thresholds: 0.7300000190734863
thresholds: 0.7200000286102295
thresholds: 0.7099999785423279
thresholds: 0.699999988079071
thresholds: 0.6899999976158142
thresholds: 0.6800000071525574
thresholds: 0.6700000166893005
thresholds: 0.6600000262260437
thresholds: 0.6499999761581421
thresholds: 0.6399999856948853
thresholds: 0.6299999952316284
thresholds: 0.6200000047683716
thresholds: 0.6100000143051147
thresholds: 0.6000000238418579
thresholds: 0.5899999737739563
thresholds: 0.5799999833106995
thresholds: 0.5699999928474426
thresholds: 0.5600000023841858
thresholds: 0.550000011920929
thresholds: 0.5400000214576721
thresholds: 0.5299999713897705
thresholds: 0.5199999809265137
thresholds: 0.5099999904632568
thresholds: 0.5
thresholds: 0.49000000953674316
thresholds: 0.47999998927116394
thresholds: 0.4699999988079071
thresholds: 0.46000000834465027
thresholds: 0.44999998807907104
thresholds: 0.4399999976158142
thresholds: 0.4300000071525574
thresholds: 0.41999998688697815
thresholds: 0.4099999964237213
thresholds: 0.4000000059604645
thresholds: 0.38999998569488525
thresholds: 0.3799999952316284
thresholds: 0.3700000047683716
thresholds: 0.36000001430511475
thresholds: 0.3499999940395355
thresholds: 0.3400000035762787
thresholds: 0.33000001311302185
thresholds: 0.3199999928474426
thresholds: 0.3100000023841858
thresholds: 0.30000001192092896
thresholds: 0.28999999165534973
thresholds: 0.2800000011920929
thresholds: 0.27000001072883606
thresholds: 0.25999999046325684
thresholds: 0.25
thresholds: 0.23999999463558197
thresholds: 0.23000000417232513
thresholds: 0.2199999988079071
thresholds: 0.20999999344348907
thresholds: 0.20000000298023224
thresholds: 0.1899999976158142
thresholds: 0.18000000715255737
thresholds: 0.17000000178813934
thresholds: 0.1599999964237213
thresholds: 0.15000000596046448
thresholds: 0.14000000059604645
thresholds: 0.12999999523162842
thresholds: 0.11999999731779099
thresholds: 0.10999999940395355
thresholds: 0.10000000149011612
thresholds: 0.09000000357627869
thresholds: 0.07999999821186066
thresholds: 0.07000000029802322
thresholds: 0.05999999865889549
thresholds: 0.05000000074505806
thresholds: 0.03999999910593033
thresholds: 0.029999999329447746
thresholds: 0.019999999552965164
thresholds: 0.009999999776482582
thresholds: 0.0
}
precision_recall_curve {
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 1.0
recall: 0.5
precision: 0.5
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 0.875
precision: 1.0
thresholds: 0.0
thresholds: 0.009999999776482582
thresholds: 0.019999999552965164
thresholds: 0.029999999329447746
thresholds: 0.03999999910593033
thresholds: 0.05000000074505806
thresholds: 0.05999999865889549
thresholds: 0.07000000029802322
thresholds: 0.07999999821186066
thresholds: 0.09000000357627869
thresholds: 0.10000000149011612
thresholds: 0.10999999940395355
thresholds: 0.11999999731779099
thresholds: 0.12999999523162842
thresholds: 0.14000000059604645
thresholds: 0.15000000596046448
thresholds: 0.1599999964237213
thresholds: 0.17000000178813934
thresholds: 0.18000000715255737
thresholds: 0.1899999976158142
thresholds: 0.20000000298023224
thresholds: 0.20999999344348907
thresholds: 0.2199999988079071
thresholds: 0.23000000417232513
thresholds: 0.23999999463558197
thresholds: 0.25
thresholds: 0.25999999046325684
thresholds: 0.27000001072883606
thresholds: 0.2800000011920929
thresholds: 0.28999999165534973
thresholds: 0.30000001192092896
thresholds: 0.3100000023841858
thresholds: 0.3199999928474426
thresholds: 0.33000001311302185
thresholds: 0.3400000035762787
thresholds: 0.3499999940395355
thresholds: 0.36000001430511475
thresholds: 0.3700000047683716
thresholds: 0.3799999952316284
thresholds: 0.38999998569488525
thresholds: 0.4000000059604645
thresholds: 0.4099999964237213
thresholds: 0.41999998688697815
thresholds: 0.4300000071525574
thresholds: 0.4399999976158142
thresholds: 0.44999998807907104
thresholds: 0.46000000834465027
thresholds: 0.4699999988079071
thresholds: 0.47999998927116394
thresholds: 0.49000000953674316
thresholds: 0.5
thresholds: 0.5099999904632568
thresholds: 0.5199999809265137
thresholds: 0.5299999713897705
thresholds: 0.5400000214576721
thresholds: 0.550000011920929
thresholds: 0.5600000023841858
thresholds: 0.5699999928474426
thresholds: 0.5799999833106995
thresholds: 0.5899999737739563
thresholds: 0.6000000238418579
thresholds: 0.6100000143051147
thresholds: 0.6200000047683716
thresholds: 0.6299999952316284
thresholds: 0.6399999856948853
thresholds: 0.6499999761581421
thresholds: 0.6600000262260437
thresholds: 0.6700000166893005
thresholds: 0.6800000071525574
thresholds: 0.6899999976158142
thresholds: 0.699999988079071
thresholds: 0.7099999785423279
thresholds: 0.7200000286102295
thresholds: 0.7300000190734863
thresholds: 0.7400000095367432
thresholds: 0.75
thresholds: 0.7599999904632568
thresholds: 0.7699999809265137
thresholds: 0.7799999713897705
thresholds: 0.7900000214576721
thresholds: 0.800000011920929
thresholds: 0.8100000023841858
thresholds: 0.8199999928474426
thresholds: 0.8299999833106995
thresholds: 0.8399999737739563
thresholds: 0.8500000238418579
thresholds: 0.8600000143051147
thresholds: 0.8700000047683716
thresholds: 0.8799999952316284
thresholds: 0.8899999856948853
thresholds: 0.8999999761581421
thresholds: 0.9100000262260437
thresholds: 0.9200000166893005
thresholds: 0.9300000071525574
thresholds: 0.9399999976158142
thresholds: 0.949999988079071
thresholds: 0.9599999785423279
thresholds: 0.9700000286102295
thresholds: 0.9800000190734863
thresholds: 0.9900000095367432
thresholds: 1.0
}
}
test_set {
predicted_concepts {
id: "positive"
name: "positive"
value: 0.9999601244926453
app_id: "text-search-app"
}
predicted_concepts {
id: "negative"
name: "negative"
value: 3.9902104617794976e-05
app_id: "text-search-app"
}
ground_truth_concepts {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
input {
id: "UyZACEDqN6WhAQOO"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/e9dc85e0585a7a6d32da791e6cfa2c52"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/e9dc85e0585a7a6d32da791e6cfa2c52"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 507
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687435506
nanos: 969900000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "negative"
name: "negative"
value: 0.9999999403953552
app_id: "text-search-app"
}
predicted_concepts {
id: "positive"
name: "positive"
value: 6.925120743517255e-08
app_id: "text-search-app"
}
ground_truth_concepts {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
input {
id: "A2SsbMJrHqiAUmnr"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/77fe0c1ff92bcb6d876ec8e551e9268f"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/77fe0c1ff92bcb6d876ec8e551e9268f"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 440
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687436288
nanos: 370201000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "negative"
name: "negative"
value: 0.9999988675117493
app_id: "text-search-app"
}
predicted_concepts {
id: "positive"
name: "positive"
value: 1.1424209560573217e-06
app_id: "text-search-app"
}
ground_truth_concepts {
id: "negative"
name: "negative"
value: 1.0
app_id: "text-search-app"
}
input {
id: "NnIBViXRFrXAGKkd"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/65d33e371bbdd16cbcf7a2826ab0bcc1"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/65d33e371bbdd16cbcf7a2826ab0bcc1"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 359
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687435598
nanos: 758239000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "positive"
name: "positive"
value: 0.9999937415122986
app_id: "text-search-app"
}
predicted_concepts {
id: "negative"
name: "negative"
value: 6.254503659874899e-06
app_id: "text-search-app"
}
ground_truth_concepts {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
input {
id: "OK2cJALgpFQYafF2"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/90fd9b5c2f9af536520add1a2da13db8"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/90fd9b5c2f9af536520add1a2da13db8"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 399
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687435419
nanos: 661211000
}
modified_at {
seconds: 1690525794
nanos: 810009000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
test_set {
predicted_concepts {
id: "positive"
name: "positive"
value: 0.9985570907592773
app_id: "text-search-app"
}
predicted_concepts {
id: "negative"
name: "negative"
value: 0.001442914130166173
app_id: "text-search-app"
}
ground_truth_concepts {
id: "positive"
name: "positive"
value: 1.0
app_id: "text-search-app"
}
input {
id: "YViTPPBzt3pTlmgY"
data {
text {
url: "https://data.clarifai.com/orig/users/alfrick/apps/text-search-app/inputs/text/5bccdbc8c9f8fb9a673d61dc2e32e40b"
hosted {
prefix: "https://data.clarifai.com"
suffix: "users/alfrick/apps/text-search-app/inputs/text/5bccdbc8c9f8fb9a673d61dc2e32e40b"
sizes: "orig"
crossorigin: "use-credentials"
}
text_info {
char_count: 413
encoding: "UTF8"
}
}
}
created_at {
seconds: 1687436129
nanos: 772735000
}
modified_at {
seconds: 1690525794
nanos: 876378000
}
status {
code: INPUT_DOWNLOAD_SUCCESS
description: "Download complete"
}
}
}
id: "e223fa4ac14b4784b223cd31cc545f34"
eval_info {
params {
fields {
key: "dataset_id"
value {
string_value: ""
}
}
fields {
key: "dataset_version_id"
value {
string_value: ""
}
}
fields {
key: "use_kfold"
value {
bool_value: true
}
}
}
}
model {
id: "text-model-1"
app_id: "text-search-app"
model_version {
id: "3ad2c152232e46ebb16ed31f67dc54d8"
created_at {
seconds: 1693564041
nanos: 515456000
}
status {
code: MODEL_TRAINED
description: "Model is trained and ready"
}
active_concept_count: 2
metrics {
status {
code: MODEL_EVALUATED
description: "Model was successfully evaluated."
}
summary {
macro_avg_roc_auc: 1.0
macro_avg_f1_score: 0.8809523582458496
macro_std_f1_score: 0.13677529990673065
macro_avg_precision: 0.9375
macro_avg_recall: 0.875
}
}
total_input_count: 21
completed_at {
seconds: 1693564044
nanos: 915680000
}
visibility {
gettable: PRIVATE
}
app_id: "text-search-app"
user_id: "alfrick"
metadata {
}
output_info {
output_config {
}
message: "Show output_info with: GET /models/{model_id}/output_info"
params {
fields {
key: "max_concepts"
value {
number_value: 20.0
}
}
fields {
key: "min_value"
value {
number_value: 0.0
}
}
fields {
key: "select_concepts"
value {
list_value {
}
}
}
}
}
input_info {
base_embed_model {
id: "multilingual-text-embedding"
app_id: "main"
model_version {
id: "9b33adf15280465b857163ddaaacdcb1"
}
user_id: "clarifai"
model_type_id: "text-embedder"
}
}
train_info {
params {
fields {
key: "dataset_id"
value {
string_value: ""
}
}
fields {
key: "dataset_version_id"
value {
string_value: ""
}
}
fields {
key: "enrich_dataset"
value {
string_value: "Automatic"
}
}
}
}
import_info {
}
}
user_id: "ei2leoz3s3iy"
model_type_id: "embedding-classifier"
}
user_id: "ei2leoz3s3iy"
app_id: "text-search-app"
}
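
That's a lot of output, but in practice you'll usually read only a handful of fields from it. The short sketch below shows one way you might pull out the headline numbers programmatically instead of scanning the printout by hand. It assumes the evaluation response printed above is stored in a variable named eval_metrics (that variable name is our own choice, not something defined by the walkthrough), and it only touches fields that appear in the output: model.model_version.metrics.summary, the per-concept binary_metrics entries, and the cross-validated test_set predictions.

# Minimal sketch: reading the key numbers out of the evaluation response shown above.
# `eval_metrics` is assumed to hold that response object; adjust the name to match
# whatever variable your evaluation call returned it into.

# Macro-averaged summary across the "positive" and "negative" concepts
summary = eval_metrics.model.model_version.metrics.summary
print("Macro-average ROC AUC:", summary.macro_avg_roc_auc)
print("Macro-average F1 score:", summary.macro_avg_f1_score)

# Per-concept ROC AUC and F1
for bm in eval_metrics.binary_metrics:
    print(f"{bm.concept.name}: ROC AUC = {bm.roc_auc:.3f}, F1 = {bm.f1:.3f}")

# Cross-validated predictions on the held-out inputs
for example in eval_metrics.test_set:
    top_prediction = max(example.predicted_concepts, key=lambda c: c.value)
    ground_truth = example.ground_truth_concepts[0].name
    print(f"Input {example.input.id}: predicted '{top_prediction.name}' "
          f"({top_prediction.value:.4f}), actual '{ground_truth}'")

With the sample data in this walkthrough, the summary block above reports a macro-average ROC AUC of 1.0 and a macro-average F1 score of roughly 0.88, so even with this small dataset the model separates the two concepts well on the held-out folds.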